Global Innovation Design (MA/MSc)

Luisa Jane Charles

I am a design engineer with a background in interactive and experiential design, film, and installation artwork. I have a strong belief in the emancipatory potential of design and new technologies to help make the world a more equitable and sustainable place to live, be it on a micro or macro scale. 

I am fascinated by systems, and the convergence of and interactions between human, organic, synthetic, and technological systems. Much of my work exists at the intersections of art, design, science, and technology, and my work at the RCA explores technoethics, social innovation, artificial intelligence, robotics, science communication, and biodesign.


Exhibitions

Elemental, National Science Museum of Thailand, TBC 2025

Tidal Light, National Science Museum of Thailand, TBC 2023

Float, The 1851der tent, Great Exhibition Road Festival, London, 2022

Curative, Gaea: A Garden of the 2nd and 3rd Dimensions, MIKA, New York, 2021

Tamagukiyo, Together in Europe: Creative Communities for Change, London Design Festival 2021

Linger on Matter, MODA-FAD, Disseny Hub, Barcelona, 2021

The Data Box, National Science Museum of Thailand, 2021

Scicomm Cubes, National Science Museum of Thailand, 2021

Sense of Direction, Dark Matter, Science Gallery, London, 2019

Inorganisms, Everything Happens So Much, London Design Festival 2018

Taste Sensations, Lates: Food and Drink, Science Museum, London, 2018

Kardia, Blood: Life Uncut, Science Gallery, London, 2017


Work

Special Effects, Doctor Strange in the Multiverse of Madness, Marvel Studios, 2020-2021

Props, Jurassic World Dominion, Universal Pictures, 2020

Project Manager, London Design Festival, University of the Arts London, 2019

Project Co-ordinator, Where Walworth Eats, University of the Arts London, 2019

Action Vehicles, Star Wars Episode IX, Lucasfilm Ltd., 2018-2019


Education

MA + MSc Global Innovation Design, Royal College of Art + Imperial College London, 2022 - Merit

BA (Hons) Interaction Design Arts, University of the Arts London, 2018 - First

Foundation Diploma in Art and Design, University for the Creative Arts, 2015 - Distinction

Statement

With work that spans the intersections of art, design, science, and technology, I define myself as a media-agnostic Design Engineer, a thinker, maker, and provocateur. My work attempts to tackle big, complex, and controversial topics in a playful, interactive, and digestible way - seeking to spark conversation and find unconventional solutions to abstract problems.

I prioritise research processes, allowing my work to form itself organically through investigation and experimentation, and use various creative media as a driver for sociopolitical change. I came to Global Innovation Design to gain engineering skills that complement my existing design education, and to bring my work from awareness-raising to real-world change.





Project Themes

Exploring interactions between human, digital, and natural systems

  • Float - A surface water drone that measures water quality data in real time, co-designed with Colombo City's wetland communities
  • Tamagukiyo - An interactive sculpture that places the user as a 'god' of their own living ecosystem

Ethics and creativity in Artificial Intelligence

  • GPT-3 and Me - An interview with the world's most sophisticated AI language model, exploring the algorithmic experience
  • Gaea: A Garden of the 2nd and 3rd Dimensions - A Virtual Reality exhibition of AI-generated artwork
  • G:A GOT 2&3D - An exhibition exploring technoethics and AI
  • Curative - An Augmented Reality experience entering the atelier of an AI artist
  • Graphia - A tablet-based essay-writing tool that uses AI analysis to diagnose Dysgraphia, a neurological condition

Science Communication

  • The Data Box - An interactive, content-agnostic data visualisation system
  • The Future is Now - Co-designing with rubber tree farmers in Krabi
  • SciCom Square - A touring, modular exhibition of interactive science communication
  • Elemental and Tidal Light - Large scale sculptures for the National Science Museum of Thailand


Float

Float is a robotic device known as an Unmanned Surface Vehicle (USV) that measures water quality data in real time. It was developed for and alongside members of Colombo City's wetland communities, and is designed to be built by unskilled citizen scientists out of low-cost, easily accessible materials.

Wetlands are among the planet’s most threatened ecosystems, disappearing at a rate three times faster than rainforests. In Colombo City, Sri Lanka, 60% of households directly benefit from wetland livelihoods and products, such as fish and rice, but the water quality in around two-thirds of these wetlands is considered poor or very poor.

Water quality monitoring is essential to wetland management, but governing bodies are significantly under-resourced. Traditional water sampling is time-consuming and expensive, and poses a risk to human health, as samplers come into close contact with heavily polluted water.

Developed alongside the International Water Management Institute, Float provides a cheaper, safer, and more accessible alternative to traditional water sampling via a low-cost, DIY robotic platform and an associated capacity-building programme. It enables community participation in a wider system of wetland management - putting the power to improve conditions into the hands of those most affected by them.
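The write-up doesn't include Float's firmware or data pipeline, but the real-time logging side could look something like the sketch below - a minimal example assuming the on-board microcontroller streams comma-separated readings over a serial link (the port name, message format, and sensor fields are all illustrative assumptions, not details of the actual build):

```python
import csv
import time

import serial  # pyserial

# Assumed message format from the on-board microcontroller (hypothetical):
#   "<pH>,<turbidity_NTU>,<temperature_C>\n"
PORT = "/dev/ttyUSB0"  # assumed serial port
BAUD = 9600

def log_readings(path="float_log.csv"):
    """Append timestamped water quality readings to a CSV file."""
    with serial.Serial(PORT, BAUD, timeout=5) as link, \
            open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue  # read timed out; try again
            try:
                ph, turbidity, temp = (float(x) for x in line.split(","))
            except ValueError:
                continue  # skip malformed packets
            writer.writerow([time.time(), ph, turbidity, temp])
            f.flush()  # keep the log current for real-time use

if __name__ == "__main__":
    log_readings()
```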

Tamagukiyo

Tamagukiyo is a self-contained, aquatic biosphere with an Arduino-powered structure that allows users to manipulate its temperature and light conditions through embodied interactions. It was showcased at London Design Festival 2021 as part of UAL's show, Together in Europe: Creative Communities for Change.

It is an experiment looking at food security in areas with poor air quality and little access to soil and nutrients - an extension of closed-ecosystem research such as the Biosphere 2 project, investigating how food supplies might be sustained outside of suitable conditions. By growing plants and animals in a closed ecosystem, we can avoid the problems caused by fluctuations in climate and extreme weather events on traditional agricultural land. The self-contained system has the added benefit of not requiring cultivation: it keeps itself alive with no more involvement than small adjustments to temperature and light.

Tamagukiyo creates a system that allows humans to manipulate the light and temperature conditions of the self-contained ecosystem, controlling the growth inside it. It is small enough to exist within a family home, providing people with methods of growing and maintaining their own food supplies without agricultural knowledge.

Through embodied interactions, users are able to respond to the needs of the aquatic, self-contained biosphere - communicated through changes in colour, an OLED display, and a detailed report accessible through a thermal printer. This project, through a playful metaphor, gives users the chance to ‘play god’ and decide the fate of the self-sustaining ecosystem.
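Tamagukiyo itself runs on an Arduino; purely to illustrate the control logic described above, here is a minimal Python sketch of a bang-bang loop that nudges the tank toward user-set temperature and light targets (every sensor and actuator function is a hypothetical stand-in for the real hardware):

```python
import random
import time

# Hypothetical stand-ins for the Arduino's sensors and actuators.
def read_temperature() -> float:
    return 22.0 + random.uniform(-2.0, 2.0)  # simulated tank temperature, deg C

def set_heater(on: bool) -> None:
    print(f"heater {'on' if on else 'off'}")

def set_grow_light(level: float) -> None:
    print(f"grow light at {level:.0%}")

def control_loop(target_temp: float, light_level: float, hysteresis: float = 0.5):
    """Hold the biosphere near the user's chosen temperature and light setting."""
    while True:
        temp = read_temperature()
        if temp < target_temp - hysteresis:
            set_heater(True)        # too cold: heat
        elif temp > target_temp + hysteresis:
            set_heater(False)       # too warm: coast
        set_grow_light(light_level)
        time.sleep(5)               # sample every few seconds

if __name__ == "__main__":
    control_loop(target_temp=24.0, light_level=0.6)
```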

GPT-3 and Me

GPT-3 and Me is my Master's dissertation, written in collaboration with OpenAI's autoregressive language model - the third 'Generative Pre-trained Transformer', or GPT-3. This is an AI that can process and generate text that is indistinguishable from a human's. I prompted the AI to take on the character of GPT-3 itself, and wrote the paper back and forth with the program in real time, as if interviewing a human.

From medical care to university canteens, artificial intelligences are entering the world around us. What's more, they are becoming increasingly sophisticated and being tasked with making more important decisions - ones with the ability to greatly affect people's lives. The more complex AIs become, the harder it is to explain how they reach the decisions they do - what is known as the 'black box' of processing. As people with no technical knowledge start to interact with AIs on a daily basis, how are they supposed to evaluate the decisions they make when even computer scientists can't explain the processes?

Through primary empirical research and an analysis of secondary sources, I engaged in conversation with the world's most sophisticated autoregressive language model (so far!) - GPT-3. We discuss the inner workings of AI, and ultimately try to uncover what can be learnt through engagement and intuition with artificial intelligences. Can autoregressive language models play a part in facilitating understanding of machine learning processes? Can GPT-3 help us to understand new things about the algorithmic experience? And, finally, to what extent does GPT-3 enable plain, written English to become a common language between human and machine?
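The interview format - prompting the model to speak as 'GPT-3' and alternating turns in real time - could be reproduced with a loop along the lines of the sketch below, written against the 2021-era openai completions client; the engine name and prompt framing are assumptions, not the author's actual setup:

```python
import openai  # 2021-era client: pip install "openai<1"

openai.api_key = "YOUR_API_KEY"

# Frame the model as a character called GPT-3 being interviewed.
transcript = (
    "The following is an interview between a design researcher and an AI "
    "language model that speaks candidly as itself, 'GPT-3'.\n"
)

def ask(question: str) -> str:
    """Append a question to the transcript and return GPT-3's answer."""
    global transcript
    transcript += f"\nInterviewer: {question}\nGPT-3:"
    response = openai.Completion.create(
        engine="davinci",        # assumed engine choice
        prompt=transcript,
        max_tokens=200,
        temperature=0.7,
        stop=["Interviewer:"],   # hand the turn back to the human
    )
    answer = response["choices"][0]["text"].strip()
    transcript += f" {answer}"
    return answer

if __name__ == "__main__":
    while True:
        print("GPT-3:", ask(input("You: ")))
```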

Gaea: A Garden of the 2nd and 3rd Dimensions

Gaea: A Garden of the 2nd and 3rd Dimensions (G:A GOT 2&3D) is a body of work exploring technoethics and the liminal spaces between different types of reality.

It began as an experiment in computational creativity, trying to figure out whether an AI could be considered creative, or an 'artist', in and of itself.

The initial hypothesis: if I, as the designer, positioned myself as the curator and the muse, could the AI be considered the author of the artwork itself?

This began with running all my project names and descriptions through a GPT-2-based programme and getting it to generate more project names and descriptions that sound like things I have worked on, but don't actually exist.

This is where the name, Gaea: A Garden of the 2nd and 3rd Dimensions, was born.

The VR Exhibition

The first project within the G:A GOT 2&3D collection culminated in a virtual reality exhibition of AI-generated artwork. It began as an exploration into creating frameworks for abstract projects - without any concrete context, media, theme, or methodology.

Whilst delving into computational creativity, I set up an experiment with the following hypothesis:

By positioning the designer/artist as the curator and the muse, can one reasonably argue that an Artificial Intelligence has become an artist or creator in its own right?

The Framework

I began by creating an algorithmic representation of my own creative process. A human curator can follow this algorithm using various machine learning tools, and by following its steps will eventually generate imagery, moving image, and 3D models. The person following the algorithm has no ability to influence the visuals or quality of the final generated assets - though they are able to curate and choose the ones they see fit at the end of the process.

Within the field of computational creativity, there is the concept of accountability: how can one make an AI accountable for the creative decisions it makes, rather than relying on random number generators or human input?

To establish creative accountability for this piece, I positioned my own work as the inspiration for the AI. The first step of the algorithm was to feed a list of my previous project names and descriptions to a GPT-2-based programme and get it to generate more names and descriptions that sound like things I have worked on, but do not exist. This is how the name Gaea: A Garden of the 2nd and 3rd Dimensions was born.
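As an illustration of this first step, a GPT-2-based generator of plausible-but-nonexistent project names and descriptions can be a few lines with Hugging Face's off-the-shelf gpt2 model (the seed projects below are placeholders standing in for the author's real list):

```python
from transformers import pipeline, set_seed

# Seed the prompt with real project names/descriptions so GPT-2 continues
# in the same style. (Placeholder entries, not the author's actual list.)
seed_projects = """\
Kardia - An installation translating heartbeats into light.
Inorganisms - Speculative synthetic lifeforms for the home.
Sense of Direction - A wearable that remaps hearing to navigation.
"""

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    seed_projects,
    max_length=200,
    num_return_sequences=3,
    do_sample=True,
)
for out in outputs:
    # Everything after the seed text is a newly invented "project".
    print(out["generated_text"][len(seed_projects):], "\n---")
```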

Outcome

After repeatedly following the algorithm, I accumulated a set of AI-generated artwork, which I then curated into a virtual reality exhibition following the garden of the 2nd and 3rd dimensions theme. This project explores how context changes one's perception of what is considered art, and questions whether you can ever separate the designer from the artist - via an AI artist named AIsha Sculler.

It is up to the audience to decide whether the artist is myself, AIsha, a collaboration, or whether she and I are one and the same.

G:A GOT 2&3D

G:A GOT 2&3D was an immersive exhibition I designed and curated, showcasing 10 works from international artists and collectives, all working within the theme of technoethics and artificial intelligence. It was held in New York City in 2021 and invited audiences into a liminal garden space on the cusp between the digital and physical worlds.

We have arrived at a crossroads. For the last 10 years, tech companies have been running covert social experiments that negatively altered users' moods, stealing and selling user data in order to influence democratic processes, and creating facial recognition software that helps put immigrant children in cages. Technology has been developing at an unparalleled pace, and has led to genuine, real-world harm. This is because, put simply, no one has decided that exploiting people for profit is wrong and put rules and regulations in place to prevent it.

We are now at a point where even more dangerous technologies are under development. GPT-3 can write texts that are indistinguishable from an undergrad literature student's, and a model of robot dog by Ghost Robotics now features a sniper rifle. There are probably plenty of apocalypses to survive before the robot one gets us. With some potentially misguided faith in humanity's ability to adapt and move beyond challenges, there are only two possible eventualities: either we don't make it through the climate disaster, mass extinction, and nuclear war, or we adapt, we design, we survive, and we move forward. And then we'll have the robot apocalypse to worry about.

G:A GOT 2&3D situates itself in this present moment, where we can focus on working out the ethical frameworks to which these technologies should be held accountable before we experience the disasters and traumas caused by our current technologies and lifestyles - not afterwards.

Curative

Curative is a mixed reality experience exploring computational creativity, authorship, and encoded accountability. Who is the author of AI-generated artwork? And who owns the passive data people shed simply by engaging with the internet? Curative transports you to the atelier of an artificially intelligent artist, allowing you to co-create a piece of artwork that exists at the intersections of the physical and digital worlds.

Users begin by filling out an online habits and data privacy quiz before stepping into the experience. This primes the audience to feel as if their data privacy is being compromised, sets the tone for the experience, and sorts people into one of 16 different online personas, which affect the outcome of their mixed reality experience.

They then step into a bare gardenscape, populated by virtual reality foliage. Inspired by the user's online personality, the AI artist takes audience members through the process of how machine learning algorithms generate imagery, ultimately leaving visitors with three pieces of unique AI artwork created in response to their initial quiz answers.
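One simple way to arrive at exactly 16 personas from a quiz like the one described above is to score four binary traits and pack them into a 4-bit index; the sketch below is purely illustrative, with invented trait names rather than Curative's actual quiz logic:

```python
# Four hypothetical binary traits give 2**4 = 16 personas.
TRAITS = ["oversharer", "tracker_tolerant", "cloud_reliant", "always_online"]

def persona_index(answers: dict[str, bool]) -> int:
    """Pack four yes/no traits into a 4-bit persona index (0-15)."""
    index = 0
    for bit, trait in enumerate(TRAITS):
        if answers[trait]:
            index |= 1 << bit
    return index

def persona_name(index: int) -> str:
    flags = [t for bit, t in enumerate(TRAITS) if index & (1 << bit)]
    return "+".join(flags) or "ghost"  # persona 0: leaves almost no trace

if __name__ == "__main__":
    me = {"oversharer": True, "tracker_tolerant": False,
          "cloud_reliant": True, "always_online": True}
    idx = persona_index(me)
    print(idx, persona_name(idx))  # 13 oversharer+cloud_reliant+always_online
```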

Graphia

Graphia is an essay-writing tool for children that encourages visual thinking and improves visual and auditory processing, whilst simultaneously democratising the diagnosis process for Dysgraphia in middle-school-aged children.

In collaboration with Wacom, Graphia offers an AI-driven diagnostic tool, based on a graphics tablet, at a fraction of the price of traditional diagnostic methods.

Dysgraphia is a neurological disorder estimated to affect up to 20% of children, and is associated with Autism Spectrum Disorders, Dyspraxia, Attention Deficit Disorders, and other Specific Learning Disabilities. It manifests in illegible handwriting; poor visual, auditory, and phonological processing; pain throughout the nervous system; poor thought organisation; and an inability to use writing as a communication tool.

Children with Dysgraphia tend to perform poorly in school, and have a higher rate of depression and anxiety than children with no learning disabilities.

Currently, diagnosis must be carried out by a child behavioural psychologist using tests such as the TOWL (Test of Written Language), which is costly, time-consuming, and usually does not occur until children have already begun to fall behind in school.

Outcome

Graphia aims both to address the under-diagnosis of Dysgraphia and to provide coping strategies for those experiencing it.

The coping tools focus on the core symptoms of Dysgraphia - poor visual, auditory, and phonological processing. They include visual and voice-recorded mind mapping, spelling and grammar prompts, and progress reviews to help children understand their improvements.

Whilst children are writing on the tablet, an ML algorithm analyses their handwriting - pen pressure, angle and altitude, and spelling, grammar, and punctuation - giving it the ability to offer a pre-diagnosis of Dysgraphia behind the scenes with 80% accuracy.
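The project description doesn't name the model, but a pre-diagnosis classifier over tablet features like these could be prototyped along the following lines - a sketch on synthetic data, where the feature set and the choice of a random forest are assumptions rather than Graphia's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: [mean pen pressure, pen angle, pen altitude,
#            spelling errors / 100 words, punctuation errors / 100 words]
# Synthetic placeholder data; a real system would use labelled writing samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Probability that a new writing sample shows dysgraphia-like features.
print("held-out accuracy:", clf.score(X_test, y_test))
print("risk score:", clf.predict_proba(X_test[:1])[0, 1])
```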

The Data Box

Designed for the National Science Museum of Thailand, The Data Box is a content-agnostic data visualisation system that uses playful, embodied interactions to contextualise where audience members personally fit into a specific, given topic.

Data is becoming the primary output of modern scientific endeavour, and people are encountering data more and more in their everyday lives. Because of this, data literacy amongst all populations is becoming increasingly important. Studies have shown that people understand data better and engage with it for longer when embodied interactions are employed and a personal connection to the data is established (Alhakamy, 2021).

The Data Box gathers trends in the subject knowledge, opinions, and behaviours of visitors to an exhibition or pop-up space through gesture, proximity, and hovering-button controls. It enables users to compare these trends across all visitors, whilst simultaneously offering learning opportunities on the given subject.
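On the data side, the compare-yourself-to-other-visitors mechanic reduces to tallying responses per question and reporting where a new answer falls; a minimal sketch (the question and answers are invented examples, not the museum's content):

```python
from collections import Counter, defaultdict

# Running tallies of every visitor's answers, keyed by question.
responses: dict[str, Counter] = defaultdict(Counter)

def record(question: str, answer: str) -> None:
    responses[question][answer] += 1

def where_do_i_fit(question: str, my_answer: str) -> str:
    """Return the share of visitors who answered the same way."""
    tally = responses[question]
    total = sum(tally.values())
    share = tally[my_answer] / total
    return f"{share:.0%} of {total} visitors answered '{my_answer}'"

if __name__ == "__main__":
    for answer in ["daily", "weekly", "daily", "never", "daily"]:
        record("How often do you check air quality data?", answer)
    print(where_do_i_fit("How often do you check air quality data?", "daily"))
    # -> 60% of 5 visitors answered 'daily'
```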
