Global Innovation Design (MA/MSc)

Luqian Wang

I am Luqian, a futures-oriented design practitioner and researcher with a background in engineering.

I am devoted to being a catalyst for social change and a junction linking the decentralised people, functions and resources of modern society. I am passionate about digital interactions and installations relating to human rights, social psychology and ego consciousness.


MA/MSc  Global Innovation Design, Royal College of Art & Imperial College London, 2022

B.Eng.  Product Design and Manufacture, University of Nottingham, 2019

Academy of Arts and Design, Tsinghua University, 2020 (Exchange)

School of Art, Design and Media, Nanyang Technological University, 2021 (Exchange)


UX Designer Intern - Microsoft Research Asia (Beijing Office), Beijing, China (2021)

UI Designer Intern - Bytedance, Beijing, China (2021)

Brand Design and Operation Intern - Baidu (Intelligent Driving Group), Beijing, China (2020)

Show Location: Kensington campus: Darwin Building, Lower ground floor

Although the past year has witnessed disorder among global-scale systems, with their contradictions, conflicts, prejudices and disintegration, I still believe that the world is moving towards convergence, not separation. I therefore regard multidisciplinary and multicultural insight as indispensable for designers, a view echoed in the GID programme.

I have explored various forms of digital interaction throughout my GID journey, seeking an in-depth understanding of what role humans should play in the human-technology system. It is challenging but significant to empower each individual in the community to actively redefine and re-conceptualise our connections with ego, technology, social networks and the environment, at both micro and macro scales, especially in this unpredictable era.

Here I showcase a series of projects reflecting this shift in understanding human-technology relationships. "Motiv" is a representative project that offers a novel perspective on how human beings could deal with social media actively and consciously.

Are we using the technology, or are we being used? 

Social media is a significant source of connection, entertainment and information, but it also exposes individual vulnerabilities in a world shaped by science and technology. Driven by powerful algorithms, it fuels intense competition within the attention economy.

Dual-system theories reveal the inconsistency between implicit intentions and actual behaviours towards social media. Two conflicting cognitive systems operate simultaneously: one is automatic and impulsive, the other conscious and rational. The automatic system is effortless and typically stronger than the conscious one, and it primarily drives unplanned and excessive social media engagement.

This project explores a novel perspective on our relationship with social media, encouraging individuals to be active participants and to take ownership of their limited attention when engaging with persuasive technologies, rather than being easily manipulated.

Motiv is a supportive tool that helps people re-navigate their attention resources when they are about to be manipulated by social media.

It is based on a motivation-driven intervention that guides people to clarify and reinforce specific motives before social media engagement. It attempts to build transparency about how people’s minds work to prevent them from being easily triggered and influenced.
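The motivation-driven gate described above could be sketched in code. This is a hypothetical illustration only, not the project's implementation; the function name, prompts and session fields are all assumptions made for the example.

```python
# Hypothetical sketch of Motiv's intervention flow: before a social media
# app opens, the user is asked to state a specific motive and a planned
# duration. All names and fields here are illustrative assumptions.
import time


def motiv_gate(app_name: str, motive: str, planned_minutes: int) -> dict:
    """Record the user's stated motive and planned duration before opening."""
    session = {
        "app": app_name,
        "motive": motive.strip(),
        "planned_minutes": planned_minutes,
        "opened_at": time.time(),
    }
    # An empty motive keeps the gate closed, nudging conscious reflection
    # before the automatic, impulsive system takes over.
    session["allowed"] = bool(session["motive"])
    return session
```

In this sketch the friction is deliberately light: the gate does not block the app outright, it only asks the user to articulate why they are opening it, reinforcing the conscious system before engagement.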

The underlying attachment concerning impulsive social media engagement
The underlying mechanism concerning excessive social media engagement
Literature Review

Dual-system theories of behaviour change reveal the inconsistency between implicit intentions and actual behaviours towards social media (Robinson & Berridge, 2001). Two conflicting cognitive systems represent different processing modes: one is automatic and impulsive, the other conscious and rational. Evans (2008) showed that the automatic system is effortless and stronger than the conscious one, primarily driving impulsive and excessive behaviours. What distinguishes excessive social media use from other excessive behaviours is that users' inhibitory system remains functional, meaning excessive social media users retain the capacity to inhibit impulsive actions.

When children are to disclose their experiences, are we prepared for the conversation?

Most disclosures of sexual violence by children take place in a rather passive manner: children are pushed to answer consecutive questions to recall and clarify their experience, at the risk of further harm from insensitive communication.

Mocra creates a friendly and loving contextual environment for sensitive experience disclosure for children.

Video Reference:

Hooligan Sparrow, 2016 ‧ Documentary/Drama, Nanfu Wang

China school abuse: Search for justice after son raped, 2015 Apr 6th, BBC News

Among China's 7 million left-behind children, 10% think their parents have died, 2009 Feb 3rd, Yit

Mocra - User Flow
User Flow - To Customer
User Flow - To Business

In 2020, 332 cases of sexual abuse were reported in China. 74% of the cases were perpetrated by acquaintances, hinting at the possibility of many more unreported incidents. Despite increasing public attention, child molestation is still a "taboo" in daily conversation, especially for left-behind children who are geographically separated from their parents. Beyond their limited contact with their children, parents and caregivers don't always know how to communicate, or how to guide children to speak up, resulting in non-disclosure or delayed disclosure.

Mocra is an AI-powered tool that aims to help caregivers and professionals improve their communication skills when children come to disclose sexual abuse. Users can pre-install the plugin on their messaging application of choice, and activate Mocra when needed.

Mocra's algorithm evaluates texts sent by users based on emotional cues, tone, preconceptions, evaluative judgements and verbal language. It then supports text revision and gives feedback, helping users deliver both their message and their love.
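The kind of evaluation step described above could, at its very simplest, be sketched as a rule-based check on a draft message. This is a hypothetical illustration, not Mocra's actual algorithm; the category names and cue phrases are invented for the example.

```python
# Hypothetical sketch of a message-evaluation step like the one Mocra
# describes. Categories and cue lists below are illustrative assumptions,
# not the project's real model or phrase lists.
from dataclasses import dataclass, field

# Two illustrative signal categories with example cue phrases.
CUES = {
    "blaming tone": ["why didn't you", "you should have", "your fault"],
    "leading question": ["did he make you", "did she make you", "was it because you"],
}


@dataclass
class Feedback:
    flags: list = field(default_factory=list)
    suggestion: str = ""


def evaluate_message(text: str) -> Feedback:
    """Flag cue phrases in a draft message and suggest a gentler opener."""
    lowered = text.lower()
    fb = Feedback()
    for category, phrases in CUES.items():
        if any(phrase in lowered for phrase in phrases):
            fb.flags.append(category)
    if fb.flags:
        fb.suggestion = (
            "Try an open, non-judgemental prompt, e.g. "
            "'Can you tell me more about what happened?'"
        )
    return fb
```

A production system would of course rely on learned models rather than phrase lists, but the sketch shows the shape of the loop: evaluate the draft, flag risky patterns, and offer a revision before the message is sent.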

Beyond its current capability, we envision promising applications of Mocra to other sensitive problems such as depression, school bullying and post-traumatic stress disorder (PTSD), with the relevant treatment principles applied.

Scanning - What do children say to AI voice assistants
Predictions of the relationships between children and AI-powered toys
Scanning and Sensemaking

We are in a time when living with "smart" things is no longer new. While adults rely on their phones 24/7, children are spending more and more time with AI-powered social bots. This raises concerns about how kids perceive these technologies, the impact of developing relationships with them, and most importantly, if children do form emotional relationships with social bots (AI toy robots), how can those relationships be balanced?

To unpack this question, we started looking into the current formation of children-AI relationships through two rounds of scanning. Initially, we asked parents about the tone children use when they speak to voice assistants and observed cases of their children speaking to an AI. From the feedback, we found children repeating keywords from Siri's speech, and girls tending to alter or raise their voice when they speak, whereas boys tended to speak in their normal tone. These findings hint at a learning pattern in children's interaction with AI assistants and a deliberate manner of speech. Then, we collected, analysed and categorised videos from YouTube and RED featuring children using Siri, Alexa and Xiaodu. These videos show that children tend to believe these technologies are actually living, e.g. trying to get access to an iPad by telling a lie to Siri, or asking "Did you miss me when I was in kindergarten?"

Through our scan, we could also see a subtle increase in frequency of use. The future of the children-AI relationship remains uncertain, owing to differences in duration of use and in how power hierarchies form. We believe such a relationship will depend heavily on the technological ability and experience these assistants offer. We therefore predicted three models of potential relationships between children and AI — blending, disposal and reciprocal growth — reflecting children's differing reliance on and perception of the technology.