A collaboration between Madelaine Dowd and me, Unstable Ground is an interactive exhibition installation exploring the interconnected stories of the 2009 L’Aquila earthquake in Italy.
I have been experimenting with applications of web technologies in exhibition design and Madelaine has been researching post-disaster relief. As part of her research she talked to earthquake survivors about their experiences, and how they have been rebuilding their lives in the years since.
We wanted to create an interactive installation which would allow users to explore these interviews in a way which was engaging and encouraged focused listening. The installation also needed to be cheap and easy to distribute, set up and run, and powered by consumer electronics so that it could be taken back into the disaster zone as part of a temporary pop-up exhibition. Using the web as the platform made this possible.
The first task was to make sense of the hours of interviews recorded. The conversations were annotated, mapped, edited, and cut into shorter clips of around 30 seconds each. We then worked out how the clips followed on from one another, before finally identifying connections between clips from the different interviews.
The clips were then mapped in 3D space, with each clip placed further from the centre as more time passed since the earthquake, and we then drew the paths of each interview’s progression and the connections between different interviews. This was to form the basis of the interface by which the interviews could be explored.
Three.js was used to create the 3D interface. One of the big challenges when building the interface was spacing out the clips in a way which was highly structured and yet felt disorderly, without any overlapping or crowding. I first explored using a Fibonacci distribution on a sphere as my starting point, but this approach gave results which felt too structured and rigid for the naturally chaotic subject matter, and it was difficult to recreate the progression of clips moving outwards from the centre as time passed since the event.
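For reference, the Fibonacci sphere approach I tried first can be sketched in a few lines of plain maths (this is the standard golden-angle construction, not the production code):

```javascript
// Distribute n points evenly over a unit sphere using the golden angle.
// This is the approach that ultimately felt too rigid for the project.
function fibonacciSphere(n) {
  const points = [];
  const goldenAngle = Math.PI * (3 - Math.sqrt(5)); // ~2.39996 radians
  for (let i = 0; i < n; i++) {
    const y = 1 - (2 * i) / (n - 1);      // y runs from 1 down to -1
    const radius = Math.sqrt(1 - y * y);  // radius of the circle at height y
    const theta = goldenAngle * i;        // rotate by the golden angle each step
    points.push([radius * Math.cos(theta), y, radius * Math.sin(theta)]);
  }
  return points;
}
```

The even spacing is exactly what makes it feel mechanical: every point sits the same distance from its neighbours, with no sense of time radiating outwards.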
Eventually a conical spiral was used to guide the placement of clips: for each interview a spiral was created, growing from the centre outwards, and clips were distributed at points along this spiral, with random position offsets to create the illusion of disorder.
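A minimal sketch of that placement, assuming illustrative parameter names (the real values were tuned by eye):

```javascript
// Place clips along a conical spiral: radius and height grow with time,
// and a small random jitter breaks up the regularity.
function spiralClipPositions(clipCount, { turns = 3, maxRadius = 10, height = 6, jitter = 0.5 } = {}) {
  const positions = [];
  for (let i = 0; i < clipCount; i++) {
    const t = i / (clipCount - 1);          // 0 at the centre, 1 at the rim
    const angle = t * turns * 2 * Math.PI;  // wind around the axis
    const radius = t * maxRadius;           // further out as time passes
    positions.push({
      x: radius * Math.cos(angle) + (Math.random() - 0.5) * jitter,
      y: t * height + (Math.random() - 0.5) * jitter, // the cone rises with time
      z: radius * Math.sin(angle) + (Math.random() - 0.5) * jitter,
    });
  }
  return positions;
}
```

With one spiral per interview (rotated to its own starting angle), the structure stays legible while the jitter supplies the disorder.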
I then created the paths which connect the clips. Bézier curves were plotted between connected content, with control points semi-randomly positioned; these formed the basis for the geometry of the ribbons and the camera movement paths.
The vertices of the ribbons were offset using noise to give them an imperfect aesthetic, and the camera movement paths loosely followed these ribbons, giving the impression that the camera was naturally following the ribbons rather than being attached to them on rails.
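The idea can be shown with plain cubic Bézier maths (three.js provides `CubicBezierCurve3` for exactly this; the noise stand-in below is a layered sine, not the actual noise function used):

```javascript
// Evaluate a cubic Bezier at parameter t, for 3-component points.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return p0.map((_, k) =>
    u * u * u * p0[k] + 3 * u * u * t * p1[k] + 3 * u * t * t * p2[k] + t * t * t * p3[k]);
}

// Sample the curve into ribbon vertices, nudging each one by a smooth
// "wobble" so the ribbon looks hand-drawn rather than mathematically clean.
function ribbonPoints(p0, p1, p2, p3, samples = 32, wobble = 0.15) {
  const pts = [];
  for (let i = 0; i <= samples; i++) {
    const t = i / samples;
    const p = cubicBezier(p0, p1, p2, p3, t);
    pts.push(p.map((v, k) => v + wobble * Math.sin(13.7 * t + k * 2.1)));
  }
  return pts;
}
```

The camera path can then be a second, un-wobbled curve through the same control points, so it tracks the ribbon without inheriting its jitter.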
As the installation was to be used as part of an exhibition, rather than in a solitary setting at home, we felt it was important to connect the interface to the environment in which it was to be experienced.
Given the need to make use of consumer electronics rather than specialist equipment, a projector was used to light the room in colours matching the content on screen, and the audio clips were played through a set of speakers. The environment was powered by a separate web app running on a Mac Mini, and the two apps communicated with each other and the server via WebSockets, which provided the low-latency bidirectional communication necessary for the environment to react instantly to interactions on the tablet.
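A hedged sketch of how the two apps might exchange events; the message shape and the `clip:play` event name are illustrative, not the actual protocol:

```javascript
// Wrap an event in a small JSON envelope shared by both apps.
function makeMessage(type, payload) {
  return JSON.stringify({ type, payload, ts: Date.now() });
}

// In the browser, each app would open a socket to the shared server:
//   const ws = new WebSocket("ws://localhost:8080");
//   ws.onmessage = (e) => {
//     const msg = JSON.parse(e.data);
//     if (msg.type === "clip:play") environment.playClip(msg.payload.clipId);
//   };
//   // The interface app announces a triggered clip:
//   ws.send(makeMessage("clip:play", { clipId: 12 }));
```

Because WebSockets are push-based in both directions, the environment app reacts as soon as the server relays the event, with no polling delay.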
The audio spatialisation capabilities of the Web Audio API were exploited to create an ambient audio landscape made up of all the audio clips. Each clip was placed in a sound scene in the environment app matching the corresponding clip position in the 3D scene on the interface app, and the listener was updated each frame to match the position and orientation of the camera. When a clip was triggered on the interface, the ambient audio faded out, and the selected clip was played in the environment.
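The per-frame listener update can be sketched as follows; the helper name and the look-at convention are mine, not the production code:

```javascript
// Derive Web Audio listener parameters from the camera's position and
// the point it is looking at. Forward is the normalised look direction.
function listenerParams(camPos, lookAt) {
  const f = [lookAt[0] - camPos[0], lookAt[1] - camPos[1], lookAt[2] - camPos[2]];
  const len = Math.hypot(f[0], f[1], f[2]) || 1;
  return { position: camPos, forward: f.map((v) => v / len), up: [0, 1, 0] };
}

// In the environment app this would feed the AudioListener each frame:
//   const { position, forward, up } = listenerParams(camPos, target);
//   const l = audioCtx.listener;
//   [l.positionX.value, l.positionY.value, l.positionZ.value] = position;
//   [l.forwardX.value, l.forwardY.value, l.forwardZ.value] = forward;
//   [l.upX.value, l.upY.value, l.upZ.value] = up;
```

Each clip sits in its own `PannerNode` at its spiral position, so as the camera on the tablet moves, the ambient mix in the room shifts with it.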
The user interface layer was built using React, my go-to framework for UI development. A key element of the interface is the interactive SVG border around the frame of the screen which worked as an indicator for when a trigger point would be activated, and then showed the progress of the audio clips. There is also a viewfinder which acts as a focus aid, and interacts with the 3D scene to clearly indicate when a trigger point has focus, UI copy which shows the user information about the clip they are listening to, and some help tips which are shown if a user is having difficulty exploring.
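The progress-border idea rests on a standard SVG trick: set `stroke-dasharray` to the outline's perimeter and animate `stroke-dashoffset` with playback progress. A minimal sketch, with names of my own choosing:

```javascript
// Compute dash styles for a rectangular SVG border: at progress 0 the
// stroke is fully hidden, at progress 1 it is fully drawn.
function borderDash(progress, width, height) {
  const perimeter = 2 * (width + height);
  return {
    strokeDasharray: perimeter,
    strokeDashoffset: perimeter * (1 - progress), // 0 = fully drawn
  };
}

// In the React layer this could drive an inline style, e.g.:
//   <rect width={w} height={h} fill="none" stroke="white"
//         style={borderDash(clipProgress, w, h)} />
```

The same mechanism works for the trigger-point countdown: drive `progress` from the dwell timer instead of the audio clip's playback position.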
Extensive user testing was undertaken during development, and in situ the day before opening, which allowed us to tweak the UX and UI where necessary. I have also recently been researching and prototyping UI for VR, and was able to apply what I had learned there; in the end the interface was intuitively understood by the vast majority of users. Unfortunately, one unforeseen issue was that a few users did not realise they should pick up the tablet from the table after pressing the ‘Start’ button, and this is something we will address with clearer on-boarding instructions.
Due to the discrepancy between the very physical subject of the earthquake and the very digital 3D interface, the art direction was a tough challenge to get right. Much-needed inspiration came in the form of a lecture I attended by Cristiano Toraldo di Francia of the now-legendary Italian speculative architecture practice Superstudio, which he co-founded in 1966. This encouraged me to experiment with heavily treated textures, resulting in a look which nodded to the natural while remaining honestly digital at its core. I also used noise to animate objects in the scene, adding to the sense that “everything is moving” (as one of the interviewees put it), and offset the vectors of my geometries to add some natural-feeling imperfection and instability.
Various colourways were explored. The data-visualisation aspects of the interface complicated things further: colours needed to be strong enough to differentiate between the interviews, and it was important for connections between clips to be clearly indicated. We took colours from Madelaine’s photographs of L’Aquila as our starting point, and experimented until we found colours which had enough contrast and impact while retaining a natural feeling familiar to L’Aquila.
The installation was shown at Into The Dark, the symposium of the Research, Design, Publish Visual Communication elective at the Royal College of Art, which was held in a disused World War Two air-raid shelter hidden behind the McDonald’s in Dalston. The installation was shown in a space designed by Madelaine, alongside her research material and Oliver Joe McLaughlin’s accompanying photography, including animated portraits of the interviewees featured.
Encouraged by the feedback received, and by the lessons learned from watching people engage with the installation, we plan to continue exploring ways in which web technologies can be used in interactive exhibition design for temporary spaces. We also aim to take the exhibition back to the areas affected by the most recent Italian earthquakes in Abruzzo, to help those affected learn from the challenges and positive outcomes of the efforts to rebuild L’Aquila.