Tuesday, July 8, 2008

An elaboration on the concept

After feedback from our teachers, we were instructed to make our work more interactive. We brainstormed a lot of ideas, and a stronger concept has solidified.

A user enters a gate with a viewscreen where he or she can enter some demographic information (but nothing too personal). When this information is submitted, the gate opens to let the user enter a ‘personal experience room’ (one of five that surround the main exhibition room). Your information is the key to your experience, and it feeds into the global visualisation of a multibrain that interconnects all the different users.

In the personal room you’re seated in a chair where you’re put into a suggestive state by the use of white noise and darkness. At a certain point the chair, which is part of a galvanic skin response system that measures your stress levels, will tell the system that you are in the right ‘mood’. You will then be bombarded with stimuli in the form of audio and video clips for about 20 seconds, which the system records and will present back to you later. The stimuli are generated and compiled from a realtime database of current information taken from web sources (news sites, RSS feeds, popular blogs and podcasts), filtered by the demographic information you’ve entered into the system. This means that if you’re from Austria, the information comes from Austrian news sites and general Austrian web sources that your demographic would be interested in.
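As a rough sketch of how this step might work in software, the logic splits into two parts: waiting for the skin response readings to settle (the right ‘mood’), then picking clips filtered by the visitor’s demographics. Everything below is an illustrative assumption on my part, not part of the design: the threshold values, the function names, and the toy source list.

```python
import random

CALM_THRESHOLD = 2.0  # microsiemens; purely illustrative value
WINDOW = 5            # consecutive calm readings required before stimuli start

def is_calm(readings, threshold=CALM_THRESHOLD, window=WINDOW):
    """Return True once the last `window` GSR readings are all below threshold."""
    if len(readings) < window:
        return False
    return all(r < threshold for r in readings[-window:])

def select_stimuli(sources, demographics, count=5):
    """Filter a pool of (country, clip) items by the visitor's country."""
    pool = [clip for country, clip in sources if country == demographics["country"]]
    return random.sample(pool, min(count, len(pool)))

# Toy pool standing in for clips harvested from news sites and feeds
sources = [
    ("AT", "orf_news_clip"), ("AT", "derstandard_feed_item"),
    ("NL", "nos_news_clip"), ("AT", "austrian_blog_audio"),
]
print(is_calm([3.1, 2.5, 1.9, 1.8, 1.7, 1.6, 1.5]))   # last 5 readings calm: True
print(select_stimuli(sources, {"country": "AT"}, count=2))
```

In a real installation the readings would stream from the chair’s sensor and the pool would be rebuilt continuously from the live web sources; the shape of the decision stays the same.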

After the video there is a moment of silence, after which a graphical representation of your stress levels is projected from the middle of the screen. It flows out symmetrically onto the wall next to the video screen and runs along the wall of the circular room to the side opposite the video screen. This runs for about as long as the video clip lasted. While the stress graphic is running it slowly abstracts and morphs into a more sculptural visualisation, accompanied by an echo of the video’s sound (but not its visuals) as an enhancement. The visualisation flows into a one-way mirror on the opposite side of the video screen, where it becomes a static 3D spatial model. The user then has the possibility to manipulate the visualisation.

The one-way mirror opens onto the central exhibition room, where all the users’ visualisations are put together as a multibrain, a universe of experiences. The user can look out, but the people in the exhibition room cannot look in. The mirror is also a multitouch screen, and the user can manipulate the forms and shapes that came out of his or her stress levels by pulling and squeezing them. Shapes that the user deems unimportant can be made smaller. The user can hear and see the parts of the visualisation that relate to the experience he or she had while watching the video, and can make some reactions smaller or greater to suit his or her emotional values, so the representation isn’t just a readout of physiological reactions. The user isn’t able to remove items completely, only to transform their shape, because an experience cannot be undone, only changed in the way the user remembers it.
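The “transform but never remove” rule can be sketched as a clamp on each shape’s scale: pulling and squeezing change it, but it bottoms out above zero so no experience ever disappears. The names and limits below are my own illustrative assumptions.

```python
MIN_SCALE = 0.1  # a shape can shrink, but never vanish (illustrative floor)
MAX_SCALE = 3.0  # and it can only be pulled so large (illustrative ceiling)

def apply_touch(shape, factor):
    """Scale a shape by `factor`, clamped so it can never be removed entirely."""
    shape["scale"] = max(MIN_SCALE, min(MAX_SCALE, shape["scale"] * factor))
    return shape

shape = {"id": "clip_07", "scale": 1.0}
apply_touch(shape, 0.01)   # the user squeezes very hard...
print(shape["scale"])      # ...but the shape bottoms out at 0.1
```

The clamp is the whole point: the interface enforces the idea that a memory can be diminished or amplified but not erased.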

The manipulations of the user affect the multibrain in realtime. So if the user makes a certain shape smaller, the corresponding part of his or her shape in the multibrain will also become smaller. People visiting the exhibition room can see when users manipulate their shapes, because the shapes being touched in the personal rooms will glow for as long as the touch lasts.

So far so good... :)
