Artificially clever

AI-generated urban farm created by Hanson Wang and Midjourney.

AI-generated view of an organic interior space created by Joshua Wishart and Midjourney.

An AI-generated internal atrium created by Mira Messerly and Midjourney.

Initial screenshot of massing model created by Cathy Zhou.

Anthony Brand encourages a group of third-year architectural students to seek out AI tools that might help them design more efficiently, effectively and innovatively. While the expedition uncovered the unusual, unconventional and often unconvincing, a truly intelligent AI remains (currently) elusive.

In April this year, architect, author and educator Neil Leach arrived in Auckland for the beginning of an ambitious world tour (33 cities in nine weeks), promoting his most recent book, Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects.

Leach presented to a packed house (and livestream) at Warren and Mahoney’s Auckland offices, where he shared his perspective on the ways in which the rapid advancement of AI could radically transform and possibly render obsolete the traditional role of the architect. The presentation was clear, coherent and convincing, and the opinions well-founded — albeit, at times, a little disconcerting. Among the many takeaways was the idea that, while AI may not be taking our jobs (yet), it will inevitably have an impact on the nature of the practice of architecture and the process of design. While the particularities of that shift may not yet be fully in focus, what is clear is that architects and designers who use AI will, all things being equal, supersede those who do not.1

AI-generated view of an organic interior space created by Joshua Wishart and Midjourney.

This piqued my curiosity sufficiently to explore this notion further in my teaching: specifically, the development of a third-year BAS design studio at the University of Auckland.

On the face of things, it would be a reasonably standard design brief: a mixed-use, medium-density building located in the Wynyard Quarter (at the current Hirepool location). The twist was that students were not only permitted to use AI but actively encouraged to do so. If they could find a tool that empowered them to produce a design proposal more efficiently, effectively and innovatively, a tool that would help expand their search for solutions into places they might not otherwise have considered, they should absolutely use it.

Initial screenshot of massing model created by Cathy Zhou.

This was met with equal parts excitement and trepidation. Six months ago, when I wrote the design paper, the internet was littered with sites marketing themselves as the AI tool for architects, professing to solve all manner of design aches and pains with their AI snake oil. This prompted the first of several realisations among the students: there was no ChatGPT design equivalent, no ArchitectureGPT that you could instruct to “design X” and, at the click of a button, have a passable proposal generated.

What followed was a wayfinding exercise through a seemingly endless sea of unusual, unconventional and, often, unconvincing tools and devices. Students would look at the challenge in front of them, look to this alien toolbox, pull out something that appeared able to do the job (or at least part of it), then teach themselves how to wield it effectively and apply it to the task at hand. An analogue analogy: imagine someone wanting to join two materials with a screw who, not knowing what a screwdriver looked like (or even that such a thing existed), picks up a wrench and experiments with curious prods and frustrated whacks to see what happens. As with the wrench, students often discovered that a tool was not fit for purpose but might yet prove useful later.

The following five images were created on Leonardo.ai (in 46.5 seconds).

Other ‘AI’ tools would transpire to be less artificially intelligent and, at best, artificially clever. This was true of many of the plan-generators, for instance. There are a few of these available and they all seem to work in more or less the same way: you upload a CAD plan, demarcate the external walls and work through a short form specifying the number of rooms of each type you need. Within a few minutes, the software has whizzed through a few hundred iterations to identify the top five or so that best meet the criteria. While this often worked well, or at least quickly, it failed to take into account anything that was not a predetermined parameter: the Māori concepts of tapu and noa, for instance, or the spatial relationship between bathrooms and kitchens. There were also a few oddities or tumorous growths, where a corridor might swell to the width of a bedroom in order to assimilate any remaining space. Both limitations presented valuable learning opportunities for the students, who really had to look at and critique what was in front of them rather than accepting it as is, scrutinising the plan for errors that may or may not be there, and developing a healthy degree of mistrust and scepticism along the way.

An AI-generated internal atrium created by Mira Messerly and Midjourney.

There were also numerous wins within this process. Students could upload lengthy planning and regulatory documents and then converse with them in colloquial language, finding answers and asking follow-up questions to improve their understanding as required. The same was true for drafting written documents such as a return brief or precedent study. The various text-to-image generators (Midjourney, Leonardo, Krea, etc.) each had distinct periods of usefulness within the design process. During the pre-design and concept phases, they churned out curious and intriguing render-quality images in seconds; most were wildly impractical, inappropriate or simply unbuildable, but they offered glimpses of potentially good ideas that the students could latch onto and explore further, ideas that might not otherwise have occurred to them. Towards the end of the design phase, the same generators proved incredibly effective at quickly re-imagining a simple greyscale screenshot, a basic physical 3D model or, indeed, any image that gave some indication of form, light and surface conditions, producing a visually compelling render while maintaining the depth, form and lighting qualities of the source image (see renders opposite).

But perhaps the most revealing moment in this studio came after the first few weeks, when a particularly diligent student had spent days learning to use a self-proclaimed “generative AI-powered building design platform to help design optimal residential developments in minutes, rather than months”. The workflow was similar to that of the plan-generators and, in return, it would generate an optimised response for that specific site. After an initially steep and mildly infuriating learning curve, it did indeed work as advertised, presenting a scheme that met all of the spatial requirements and optimised access to daylight and uninterrupted viewshafts, with a minimised building and carbon footprint.

Not only did it create a form that the student had not considered but it also provided a neat little CV of quantified qualifications: numerical proof as to why this proposal was a superior response when compared with any of the student’s initial massing proposals. Still, the student was hesitant to present the optimised scheme and was visibly perturbed by something. When asked whether or not they were happy with the proposal, the student conceded that it worked very well. “But…?” I probed. “But I don’t like it. [pause] I wouldn’t want to live in it,” replied the student. There was something amiss: a glitch in the matrix, a disturbance in the force.

“There was no ChatGPT design equivalent — an AI ArchitectureGPT.” The Tin-man, designed by Anthony Brand and Leonardo.ai.

This is something that Toby Walsh, Professor of AI at the University of New South Wales, describes as the artificial aspect of AI. In his book, Faking It: Artificial Intelligence in a Human World, Walsh explains that, for all its unfathomably high IQ (intellectual intelligence), AI lacks both EQ (emotional intelligence) and SQ (social intelligence). These are the quintessence of what makes us human: qualities developed through our embodied and lived experience of the world around us. This was viscerally perceptible to the student. It just didn’t feel right. It should come as no surprise to architects, for whom Venustas has always been such an intangible yet integral measure of architectural quality.

As Le Corbusier observed, exactly a century ago (1924): 

“My house is practical. I thank you, as I might thank railway engineers or the telephone service. You have not touched my heart. But suppose that walls rise towards the heaven in such a way that I am moved […] suddenly you touch my heart, you do me good, I am happy and I say: ‘This is beautiful.’ That is architecture.”

This is a fitting sentiment as we move once more Towards a(nother) New Architecture, and one that, perhaps, Leach also had in mind in the closing of his talk when he implored the audience not to panic, but to stay calm and carry on.

Dr Anthony Brand is a senior lecturer at the University of Auckland, specialising in architectural history, theory and criticism. His core research interests are phenomenology, embodiment and situated cognition. His first book, Touching Architecture: Affective Atmospheres and Embodied Encounters, explores how and why we feel the way we do and ways in which architecture can influence this.

References

1. NZIA YouTube channel.
