2024
Interactive Installation • Physical + Digital
AuraMotions
Role
Product Designer
Tools
Python, Whisper, MQTT,
TouchDesigner, Arduino, Figma
Team Size
4
Year
2024
AuraMotions is an interactive installation that translates spoken emotions into water ripples and colored light, using real-time sentiment analysis and generative visuals to explore the emotional impact of words. It invites visitors to reflect on how what they say affects the environment around them.


Project Overview
AuraMotions is an interactive installation that visualizes the emotional tone of spoken words using water, sound, and light. When a person speaks, the system listens and translates their emotion into dynamic ripple patterns and colored projections on water.
The goal of the project was to show how our words carry emotional weight — not just for people, but for the environment around us. Inspired by Dr. Masaru Emoto’s work on water and emotion, we wanted to bring this idea to life through design and technology.
This project was developed as part of my Master’s in Digital Design with a multidisciplinary team, blending physical prototyping, emotion-driven design, and real-time technology.
Design Challenge
In our increasingly digital world, human interactions often become emotionally diluted. Text-based communication, though efficient, lacks tone, nuance, and affective feedback. During the COVID-19 pandemic, this disconnect became even more evident. AuraMotions was conceived as a response to this gap — a way to explore whether design and technology could help people feel more, not less.
Our challenge was:
“How can we design an interface that understands emotions and responds in a way that feels human and natural?”
We wanted to create a meaningful experience where people could speak, reflect, and see the emotional impact of their voice through nature.
My Role
As a core team member, I worked across multiple areas of the project:
- Led UX research, user interviews, and emotional mapping exercises.
- Contributed to coding the sentiment analysis and prototyping the speaker-water system.
- Designed and built physical prototypes, working closely with mentors in the Maker's Lab.
- Developed branding and visual assets, including presentation decks and posters.
- Acted as a design facilitator during group sessions, helping drive collaboration and creativity.
Exploring
We started by gathering inspiration from nature and emotion-based installations — such as rain simulators, emotional mirrors, and sound art.
In a rapid ideation activity called "8 ideas in 8 minutes," each team member sketched concepts, and a common thread emerged—our fascination with natural elements.

To deepen our understanding, we conducted outdoor contextual research. During a walk through a nearby park, we encountered a still pond. The water’s fluidity, reflections, and sound instantly connected with our emotional goal. It was clear: water would be the medium of our design—a living, moving canvas capable of expressing calm, chaos, and everything in between.
Concept Development
We grounded our concept in both scientific inquiry and emotional storytelling. Dr. Emoto’s experiments—showing how water crystals change form based on positive or negative words—became our foundation. We wanted users to experience that transformation firsthand.

We held regular brainstorming sessions, consulted mentors, and explored other interactive installations for inspiration. Influences included:
- Refik Anadol’s data sculptures blending emotion and environment
- Google’s “Shaped by Water” exhibit using sound and light to express perception
- Cymatics, the visual patterns created by sound vibrations in water
Each of these validated our belief that combining emotion + nature + technology could lead to a compelling, meaningful experience.

How We Did It
To turn our concept into a tangible experience, we broke the challenge into two parts:
1. Emotional detection (understanding what the user is feeling).
2. Emotional expression (showing that emotion through water and colors).
Tools and Methods:
- Python (Whisper + NLP models) to convert speech into text and analyze emotional tone (see the speech-to-text sketch after this list)
- MQTT protocol to transmit data between devices
- TouchDesigner to create dynamic, real-time visuals
- Arduino to manipulate water movement via speakers and pumps
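To make the first step concrete, here is a minimal speech-to-text sketch. It assumes the open-source openai-whisper Python package and a pre-recorded clip named visitor_clip.wav (a placeholder file name); in the installation, the equivalent step ran on what the visitor spoke into the microphone.

```python
# Minimal sketch of the speech-to-text step, assuming `pip install openai-whisper`.
# "visitor_clip.wav" is a placeholder name for a short recording of the visitor speaking.
import whisper

model = whisper.load_model("base")             # small model for quick turnaround; larger models are more accurate
result = model.transcribe("visitor_clip.wav")  # returns a dict containing the full transcription
spoken_text = result["text"]
print(spoken_text)                             # this text feeds the emotion analysis step
```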
We also conducted user interviews within our diverse MDD cohort to map how different cultures perceive emotion through color.
Drawing inspiration from Apple’s Health app, I categorised emotions into Pleasant, Neutral, and Unpleasant, and surveyed twenty students on how they associate Red, Green, and Blue with emotions.
This ensured our design was emotionally inclusive and globally intuitive.
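In code, that categorisation can be captured with a small lookup table. The sketch below pairs the three categories with the colour families from the survey and the anger/joy/calm examples later in this case study; the exact RGB values are illustrative placeholders, not the palette we finally projected.

```python
# Illustrative emotion-to-colour lookup; RGB values are placeholders, not the final projection palette.
EMOTION_COLOURS = {
    "pleasant":   (0, 200, 80),    # green family (e.g. joy)
    "neutral":    (40, 90, 220),   # blue family (e.g. calm)
    "unpleasant": (220, 40, 40),   # red family (e.g. anger)
}

def colour_for(category: str) -> tuple[int, int, int]:
    """Return the projection colour for a detected emotion category, defaulting to neutral."""
    return EMOTION_COLOURS.get(category, EMOTION_COLOURS["neutral"])
```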

Ideation
With the emotional mapping defined, we moved into prototyping.
We first tested whether sound could physically move water by placing speakers beneath a shallow acrylic container. Playing different frequencies created distinct ripple patterns. This low-tech, high-insight prototype validated our idea—sound could indeed “talk” to water.
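That frequency test is easy to reproduce in software. The sketch below assumes the numpy and sounddevice packages with the speaker set as the default audio output; the frequencies and duration are example values, not the ones we finally settled on.

```python
# Frequency test sketch: play sine tones through the speaker mounted under the water tray.
# Assumes `pip install numpy sounddevice`; frequencies and duration are example values.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100  # audio samples per second

def play_tone(frequency_hz: float, duration_s: float = 3.0) -> None:
    """Generate a sine wave at the given frequency and play it on the default output device."""
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * frequency_hz * t)
    sd.play(tone, SAMPLE_RATE)
    sd.wait()  # block until the tone finishes

for frequency in (40, 80, 120):  # sweep a few low frequencies and observe the ripple patterns
    play_tone(frequency)
```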
We then iterated on:
- Emotion-to-color mapping (e.g., anger = red, joy = green, calm = blue)
- How to project visuals onto water in a dark environment
- The responsiveness of the system in real time
Our final concept proposed a seamless, real-time interaction loop:
1. User speaks: the system converts the spoken words into text.
2. Emotion is analysed: the text is classified to understand the feeling behind it.
3. Light and sound respond: the detected emotion is sent over MQTT into TouchDesigner, which translates it into waves on the water (a minimal publisher sketch follows this list).
4. Water moves and reflects emotional visuals: TouchDesigner renders each emotion as colour, and the coloured patterns are projected onto the water so visitors see it react.
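The "send emotions through MQTT" step of this loop amounts to publishing a short message on a topic that TouchDesigner listens to. The sketch below assumes the paho-mqtt package; the broker address, topic name, and hard-coded category are all placeholders.

```python
# Sketch of the MQTT step: publish the detected emotion category for TouchDesigner to pick up.
# Assumes `pip install paho-mqtt`; broker address and topic name are placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client()                 # paho-mqtt 1.x style; 2.x needs mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("192.168.0.10", 1883)   # local broker on the installation network (placeholder address)
client.loop_start()

emotion = "pleasant"                   # in the real loop this comes from the sentiment step
client.publish("auramotions/emotion", emotion)

client.loop_stop()
client.disconnect()
```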




Development
1. Digital Aspects
- We used OpenAI’s Whisper model for highly accurate speech-to-text conversion.
- A Python-based sentiment classifier analysed emotions as pleasant, neutral, or unpleasant (see the sketch after this list).
- MQTT acted as a bridge between the analysis model and TouchDesigner, where abstract, flowing visuals were generated in real time to represent each emotional category.
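Our exact classifier is not shown here; as a stand-in, the sketch below uses the Hugging Face transformers sentiment pipeline and collapses its output into the installation's three categories, with an assumed confidence threshold.

```python
# Illustrative sentiment step: collapse a general-purpose classifier into pleasant / neutral / unpleasant.
# Assumes `pip install transformers torch`; the model and threshold are stand-ins, not our exact setup.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English sentiment model on first run

def emotion_category(text: str, confidence_threshold: float = 0.75) -> str:
    """Map a raw POSITIVE/NEGATIVE prediction onto the installation's three categories."""
    prediction = classifier(text)[0]                 # e.g. {"label": "POSITIVE", "score": 0.98}
    if prediction["score"] < confidence_threshold:
        return "neutral"                             # low-confidence predictions are treated as neutral
    return "pleasant" if prediction["label"] == "POSITIVE" else "unpleasant"

print(emotion_category("Thank you, that was a kind thing to say"))
```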
We selected TouchDesigner for its flexibility in creating live, audio-reactive visuals. I actively revisited my training from the Dojo sessions and collaborated with our mentors to refine the look and logic.
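On the receiving side, TouchDesigner can handle those MQTT messages with a small Python callback. The sketch below assumes an MQTT Client DAT subscribed to the emotion topic and a Constant TOP named emotion_colour driving the projected colour (both operator names are hypothetical), and the callback signature may differ between TouchDesigner versions.

```python
# Sketch of a TouchDesigner-side handler (runs inside TouchDesigner, not as a standalone script).
# Assumes an MQTT Client DAT subscribed to "auramotions/emotion" and a Constant TOP named "emotion_colour";
# both operator names are hypothetical and the callback signature may vary by TouchDesigner version.
COLOURS = {
    "pleasant":   (0.0, 0.8, 0.3),
    "neutral":    (0.2, 0.4, 0.9),
    "unpleasant": (0.9, 0.2, 0.2),
}

def onMessage(dat, topic, payload, qos, retained, dup):
    """Tint the projected visuals according to the emotion category received over MQTT."""
    category = payload.decode("utf-8").strip()
    r, g, b = COLOURS.get(category, COLOURS["neutral"])
    colour_top = op("emotion_colour")   # Constant TOP whose colour feeds the projection network
    colour_top.par.colorr = r
    colour_top.par.colorg = g
    colour_top.par.colorb = b
```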




2. Physical Aspects
- Built a transparent acrylic plate to hold water
- Used a speaker mounted beneath the plate to vibrate the water
- Constructed a minimal black frame to isolate the experience in a dark room
- Integrated directional lighting and a ceiling-mounted projector to reflect the visuals through the water
Prototyping involved several setbacks—breaking acrylic, cutting mishaps, and even a machine catching fire. But each obstacle led to a better, more refined outcome.






Final Testing
In the final phase, we tested the full installation with mentors and classmates. The experience allowed users to speak naturally, observe water ripple in sync with their tone, and watch emotion-tinted visuals bloom across the surface.
The Result:
- Positive emotions led to soft, harmonic ripples with warm light
- Neutral emotions created slow, muted movement
- Negative emotions triggered sharper vibrations with intense, darker hues
The experience prompted awe, curiosity, and even self-reflection among users—exactly what we aimed for.




Final Prototyping
Our refined prototype was showcased in a dedicated dark room. The installation featured:
- A circular water plate with clean, edge-lit visibility.
- Real-time projection mapped through TouchDesigner.
- A seamless emotion-to-motion feedback loop.
- A clean and minimal aesthetic to keep attention focused on the interaction.






Reflections, Learnings, and Future Improvements
This project pushed me to blend design with emotional intelligence, coding, and tangible interaction. Some key takeaways:
- Emotion-driven design can create deeper engagement when it responds in real time.
- Natural elements like water can act as emotional amplifiers: powerful, relatable, and universal.
- I strengthened my technical fluency with tools like TouchDesigner, Arduino, and Python.
- I learned the value of cultural research when working with abstract concepts like emotions.
Opportunities for the Future:
- Broaden emotional recognition with more nuanced emotion categories and multimodal inputs like gestures or facial cues.
- Enhance visual clarity through better projection and lighting for richer emotional feedback.
- Scale the experience for public spaces, schools, and galleries to reach wider audiences.
- Improve accessibility by adding non-verbal input methods for inclusive interaction.
- Refine usability with microphone feedback and clear user guidance for smoother engagement.







