
Reflection

During the LLUM Barcelona festival, we had the opportunity to explore interactive installations and develop our own concepts. Working with light as a medium opened up new perspectives on how to create engaging public experiences. I was able to relate the project topic to my research and learned a lot about prototyping and development.

Project Description

Our installation focuses on creating an interactive light experience that responds to movement and sound. The concept revolves around making visible the invisible connections between people in public spaces.

Concept

The questions displayed on a large screen relate to faculty research. Participants' spoken answers are transcribed by a multilingual transcription model, enriched through further processing and generative AI steps, and feed an accessible database of collective data and image artifacts.

Alongside countless playful visualizations and visions of the future, a common tone emerged: hope for a future that opens up possibilities through technology-driven systems and sustainable ways of living in shared habitats with other species.

Running this installation outdoors with moderate power consumption, an internet connection, and a standard Windows computer was a challenge. While developing a robust interaction model across several programming languages, we gained insights into different AI models and no-code approaches that could achieve similar results in the future.

Technical Implementation


Installation

The installation ran on an Intel NUC. By outsourcing the generative tasks to APIs, it was possible to avoid the need for a powerful graphics processor.
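
As a rough illustration of that design choice, the sketch below hands image generation off to a hosted API instead of rendering it locally. The OpenAI client and model name are assumptions for illustration only; the post does not name the specific services that were used.

```python
# Minimal sketch: offload image generation to a hosted API so the
# Intel NUC never needs a local GPU. The OpenAI client and the model
# name are assumptions -- any hosted image API would work similarly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_image(prompt: str) -> str:
    """Return the URL of an image generated for the given prompt."""
    result = client.images.generate(
        model="dall-e-3",   # assumed model
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    return result.data[0].url
```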

The process: **1 question → 3 inputs → 1 output.**

To ensure a smooth experience in a public space, several steps were required (see the sketch after this list):
- Audio filtering detected volume and transcribed speech.
- Content moderation replaced inappropriate words with emojis before the input was displayed on the public screen.
- An LLM-generated description enriched the input, which was then visualized by an image generation model.
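
A minimal sketch tying these steps together might look like the following. The Whisper transcription call, the moderation check, the chat model, and the emoji placeholder are illustrative assumptions; the exact models and services are not documented here.

```python
# Minimal pipeline sketch: speech -> transcript -> moderated text ->
# enriched description -> image prompt. Model names and the emoji
# handling are assumptions, not the installation's exact stack.
from openai import OpenAI

client = OpenAI()


def transcribe(audio_path: str) -> str:
    """Multilingual speech-to-text for one recorded answer."""
    with open(audio_path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text


def moderate(text: str) -> str:
    """Swap flagged input for an emoji before it reaches the public screen.
    (Simplified: the installation replaced individual words, not whole answers.)"""
    flagged = client.moderations.create(input=text).results[0].flagged
    return "🙈" if flagged else text


def enrich(text: str) -> str:
    """Ask an LLM to expand the raw answer into a vivid image prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "Expand the visitor's answer into a vivid image prompt."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


def run_round(audio_paths: list[str]) -> str:
    """1 question -> 3 inputs -> 1 output: merge three answers into one prompt,
    which is then passed to the image-generation API."""
    answers = [moderate(transcribe(p)) for p in audio_paths]
    return enrich(" / ".join(answers))
```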

The system communicated with an ESP32 over local WiFi to control LED animations on the microphone, which indicated the process steps with colors and countdowns. A local website rendered the interface in sync with the main script.
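
The status channel between the main script and the microcontroller could look roughly like this. The ESP32's IP address, the UDP port, and the JSON message format are assumptions; the actual protocol (HTTP, MQTT, raw UDP, …) is not documented in this post.

```python
# Minimal sketch of the PC -> ESP32 status channel. The IP address,
# UDP port, and JSON message format are assumptions -- the actual
# protocol used by the installation is not documented here.
import json
import socket

ESP32_ADDR = ("192.168.4.50", 4210)  # assumed static IP and port on the local WiFi


def send_led_state(step: str, seconds_left: int) -> None:
    """Tell the microphone's LED ring which process step to animate."""
    message = json.dumps({"step": step, "countdown": seconds_left}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, ESP32_ADDR)


# Example: switch the LEDs to the "listening" color with a 10-second countdown.
send_led_state("listening", 10)
```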

Finally, the data was uploaded to Supabase to archive all inputs and make them accessible online.
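
Archiving one round could look roughly like the sketch below, using the supabase-py client. The table name, column names, and storage bucket are assumptions; the real schema is not described in this post.

```python
# Minimal sketch of the archiving step with the supabase-py client.
# The table name, column names, and storage bucket are assumptions.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])


def archive_round(question: str, answers: list[str], image_path: str) -> None:
    """Store one interaction round: the generated image plus its text inputs."""
    image_name = os.path.basename(image_path)
    # Upload the generated image to a storage bucket (bucket name assumed).
    with open(image_path, "rb") as f:
        supabase.storage.from_("artifacts").upload(image_name, f.read())
    # Insert the question and the answers into a table (schema assumed).
    supabase.table("inputs").insert({
        "question": question,
        "answers": answers,
        "image_file": image_name,
    }).execute()
```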

The code is open source, but it still needs to be refined and documented. If you have any questions, please feel free to get in touch!

Testing & Results

Future Development

  • Local processing
  • Real-time diffusion
  • Improved motion interaction



Last update: February 17, 2025