Design like a parrot
It can be easy after engaging with a chatbot to forget that it is, in fact, a stochastic parrot…
I’ve noticed two distinct groups forming among the architecture students I’ve been talking to concerning their use of AI chatbots like ChatGPT and text-to-image generators like DALL·E, Midjourney and Stable Diffusion. The latter can produce images that look like photographs, paintings or drawings created by human beings. There’s the group that’s using the software in a variety of ways – just another tool in their design toolbox – to come up with design ideas, sometimes to present AI-generated images of their designs. Then there’s the group that’s resisting, some actively so, not wanting to surrender their agency in the design process. I get the sense that some of them see the software as just a little bit evil.
Which is not entirely surprising. Many in the field of AI see today’s software as just the beginning and highly likely to develop into artificial general intelligence (AGI) – AI capable of thinking at a human level. Some worry AGI’s power could escalate exponentially, that the software might evolve and gain the ability to improve itself until computing technology reaches ‘the singularity’ – a point at which it escapes our control in an intelligence explosion far exceeding the intellectual potential of the human brain.
That’s the underlying concern of the Center for AI Safety, which, at the end of May, released this statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was signed by more than 350 signatories, including prominent researchers and leaders from the top AI firms. The dire warning came on the heels of another open letter in March calling for a six-month moratorium on the development of the largest AI systems, citing concerns that AI tools present “profound risks to society and humanity”. It claimed developers were locked in an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”. That letter was signed by some 1,000 technology leaders and researchers, including Elon Musk, who co-founded OpenAI, the company that recently released the widely used chatbot ChatGPT.
Yikes. Another existential threat – as if we didn’t already have enough on our plate thanks to climate change and what might be next on the pandemic horizon. But as commentators like John Naughton in The Observer noted recently, there’s something wrong with this picture: “Here we have senior representatives of a powerful and unconscionably rich industry – plus their supporters and colleagues in elite research labs across the world – who are on the one hand mesmerised by the technical challenges of building a technology that they believe might be an existential threat to humanity, while at the same time calling for governments to regulate it.” Naughton goes on to ask the question the tech guys seem incapable of asking themselves. “If it is so dangerous, why do you continue to build it? Why not stop and do something else? Or at the very least, stop releasing these products into the wild?”
Another group crying foul over the fearmongering and hype of the ‘AI pause’ letter comprised four women researchers, authors of the much-discussed 2021 paper ‘Stochastic Parrots’, which warned of the ethical dangers of AI arising from the parroting qualities of the large language models powering the latest AI chatbots. The group says focusing on catastrophic AI risks is part of a “dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today”. Those ongoing actual harms include: “worker exploitation and massive data theft to create products that profit a handful of entities; the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem; and the concentration of power in the hands of a few people which exacerbates social inequities”.
The group argues that the language of a fantasised AI-enabled utopia or apocalypse inflates the capabilities of automated systems and anthropomorphises them, deceiving people into thinking that there is a sentient being behind the synthetic media. “This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency.”
It can be easy after engaging with a chatbot to forget that it is, in fact, a stochastic parrot, which does little more than make statistical predictions of the most likely word to append to the sentence it is, at that moment, composing. The same can be said for the generative AI used in the world of architecture. Powered by ‘diffusion models’ trained on vast swathes of pictures with associated meanings harvested from the web, the software destroys and recreates images to find statistical patterns in them – an entirely mechanical process built on a foundation of probability calculations.
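The parroting mechanism can be sketched in a few lines of code. The toy model below is a hypothetical illustration, not how production chatbots are actually built – real systems use neural networks over billions of sub-word tokens – but the principle is the same: record which word tends to follow which in some text, then generate by repeatedly sampling a statistically likely next word, with no understanding of meaning at any point.

```python
import random
from collections import defaultdict

# A tiny training "corpus" standing in for the web-scale text real models ingest.
corpus = (
    "the parrot repeats the words it has heard "
    "the parrot predicts the next word from the words it has seen"
).split()

# Record every word observed to follow each word.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def parrot(start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no continuation was ever observed; stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))
```

Every word pair the sketch emits appeared somewhere in its training text – it can recombine what it has seen, but never say anything it has no statistics for.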
Some might call that design. But it’s not being a technophobe to say something fundamental is missing. Or, as one commenter said when studio principal Patrik Schumacher revealed Zaha Hadid Architects is using text-to-image generators to come up with design ideas for projects: “Your design style is so predictable, that a soulless, nonrational, unconscious AI algorithm can generate new and better designs than you.” Parroting.