Tuesday, July 8, 2008

A further elaboration on the concept...

After feedback from our teachers, we were instructed to make our work more interactive. During a brainstorm a lot of ideas were put forward, and a stronger concept solidified.

A user enters a gate with a viewscreen where he or she can enter some demographic (but not too personal) information. When this information is added, the gate opens to let the user enter a ‘personal experience room’ (one of five that surround the main exhibition room). The user’s information is the key to his/her experience and feeds into the global visualisation: a sort of multibrain that interconnects all the different users.

In the personal room the user is seated in a chair, where he/she is put into a suggestive state by white noise and darkness. The chair is part of a galvanic skin response system that measures the user’s stress levels; at a certain point it will tell the system that the user is in the right ‘mood’. The user is then bombarded with stimuli in the form of audio and video clips for about 20 seconds, which the system records and will present back to him/her later. The stimuli are generated and compiled from a real-time database of current information taken from web sources (news sites, RSS feeds, popular blogs/podcasts, etc.), filtered by the demographic information the user has put into the system. This means that if you’re from Austria, the information comes from Austrian news sites and general Austrian web sources that your demographic would be interested in.
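A rough sketch of how this step could work in Python. Everything here is my own assumption, not part of the concept itself: the threshold, the window size, and the feed lists are placeholders, and the GSR sensor is faked with random numbers.

```python
import random

# Hypothetical mapping from a visitor's stated country to regional web
# sources; the real installation would aggregate news sites, RSS feeds,
# blogs and podcasts for that demographic.
FEEDS_BY_COUNTRY = {
    "Austria": ["derstandard.at/rss", "orf.at/rss"],
    "Netherlands": ["nos.nl/rss", "nu.nl/rss"],
}

CALM_THRESHOLD = 0.3   # assumed normalised GSR level for the right 'mood'
WINDOW = 5             # samples to average before deciding

def read_gsr_sample():
    """Stand-in for the chair's galvanic skin response sensor."""
    return random.uniform(0.0, 1.0)

def wait_until_calm(read_sample=read_gsr_sample, max_samples=1000):
    """Poll the GSR sensor until a rolling average drops below the threshold."""
    window = []
    for _ in range(max_samples):
        window.append(read_sample())
        window = window[-WINDOW:]
        if len(window) == WINDOW and sum(window) / WINDOW < CALM_THRESHOLD:
            return True
    return False

def compile_stimuli(country, seconds=20):
    """Pick regional sources for the ~20-second stimulus bombardment."""
    sources = FEEDS_BY_COUNTRY.get(country, [])
    return {"sources": sources, "duration": seconds}
```

The point of the rolling average is that a single calm sample shouldn't trigger the bombardment; the system waits for a sustained dip in stress.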

After the video there’s a moment of silence, after which a graphical representation of the user’s stress levels is projected from the middle of the screen. It flows symmetrically out of the screen onto the wall on both sides of the video screen and runs along the walls of the circular room to the side opposite the video screen. This runs for about as long as the video clip lasted. While this stress graphic is running, it slowly abstracts and morphs into a more shapeful visualisation, accompanied by the echoing sound of the video without any visuals, purely as an enhancement. The visualisation flows into a one-way mirror opposite the video screen and into the 3D spatial model, at which point the model changes into the shape of the corresponding colour (the coloured room corresponds to the colour of the shape). The user then has the possibility to manipulate the visualisation.
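One way to think about the graphic slowly abstracting as it runs along the wall: blend the raw stress trace toward a smoothed version of itself as playback progresses. This is only a sketch of that idea — the smoothing method and the blend parameter are my assumptions.

```python
def smooth(trace, passes=3):
    """Repeatedly average neighbours to abstract the raw stress trace."""
    for _ in range(passes):
        trace = [
            (trace[max(i - 1, 0)] + trace[i] + trace[min(i + 1, len(trace) - 1)]) / 3
            for i in range(len(trace))
        ]
    return trace

def morph(trace, t):
    """Blend the raw graph toward its abstracted form; t runs 0..1 over playback."""
    target = smooth(trace)
    return [(1 - t) * raw + t * goal for raw, goal in zip(trace, target)]
```

At t = 0 the wall shows the literal stress graph; at t = 1 only the softened, shapeful form remains, ready to flow into the spatial model.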

The one-way mirror opens up to the central exhibition room, where all the users’ visualisations are put together as a conglomerate of abstracted shapes: a sort of multibrain. The user can look out, but the people in the exhibition room cannot look in. The mirror is a multitouch screen, so the user can manipulate the forms and shapes in the central exhibition that came out of his/her stress levels by pulling and squeezing them. Shapes that the user deems unimportant can be made smaller. When the user touches a part of the shape, he/she can hear and see the parts of the visualisation that relate to the experience of watching the video; these are projected in a little square on the window. The user can then pull the shapes and make some of his/her reactions smaller or greater to suit his or her emotional values, so the representation isn’t just a product of psychological reflexes. However, the user isn’t able to remove items completely, only to transform the shape, because an experience cannot be undone, only changed in the way the user (wishes to) remember it.
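The rule that a reaction can be shrunk but never erased is easy to express in code. A minimal sketch, assuming a user's shape is just a list of segment magnitudes (one per moment in the recorded sequence) and assuming an arbitrary floor and ceiling:

```python
MIN_SCALE = 0.1  # an experience can be shrunk but never removed (assumed floor)
MAX_SCALE = 3.0  # assumed ceiling, so one reaction can't swallow the multibrain

def rescale_segment(shape, index, factor):
    """Scale one segment of a user's shape, clamped so it can never vanish.

    `shape` is a hypothetical list of segment magnitudes, one per clip
    in the user's recorded stimulus sequence.
    """
    new_value = shape[index] * factor
    shape[index] = max(MIN_SCALE, min(MAX_SCALE, new_value))
    return shape[index]
```

Even a pinch gesture that scales a segment to zero leaves a residue at the floor value, matching the idea that an experience can only be reshaped, not undone.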

The user’s manipulations affect the spatial model in real time: if the user makes a certain shape smaller, the corresponding part of his/her shape in the spatial model also becomes smaller. People visiting the exhibition room can see when users manipulate their shapes, because the parts of the shapes being touched in the personal rooms glow for as long as the interaction takes place.

Users can interact with each other and connect their lines to each other by jointly moving parts of their shapes toward one another. But user A can only see which part of his/her own shape corresponds to which frame in his/her own movie sequence. User A can’t see which part of user B’s shape corresponds to which clip in user B’s sequence, unless user A touches that part of user B’s shape to ask user B to show what that piece of shape means (which image in the sequence corresponds to that part of user B’s shape). User B can then acknowledge user A by touching that part of his/her own shape, giving user A a glimpse of that part of user B’s sequence.

If both users are happy with the clips shown, they can connect by each grabbing the part of their own shape they want to connect and bringing them together. When they meet, both users can see each other’s specific clips, corresponding to those parts of their shapes, in full. When their grabbing gestures meet fully, they link up. Of course, either one can always disconnect the link by grabbing it and pulling their own part of the shape out of the joined shape parts.
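The touch–acknowledge–grab sequence from the last two paragraphs is essentially a small handshake protocol. A sketch of it as a state machine — the state names and class shape are mine, not part of the concept:

```python
from enum import Enum, auto

class LinkState(Enum):
    IDLE = auto()
    REQUESTED = auto()   # A touched a part of B's shape to ask what it means
    PREVIEWED = auto()   # B touched back, giving A a glimpse of the clip
    LINKED = auto()      # both grabbing gestures met fully

class Link:
    """Assumed handshake between two users' shape segments."""

    def __init__(self):
        self.state = LinkState.IDLE

    def request(self):      # user A asks what a segment means
        if self.state is LinkState.IDLE:
            self.state = LinkState.REQUESTED

    def acknowledge(self):  # user B shows a glimpse
        if self.state is LinkState.REQUESTED:
            self.state = LinkState.PREVIEWED

    def connect(self):      # both users grab and bring their parts together
        if self.state is LinkState.PREVIEWED:
            self.state = LinkState.LINKED

    def disconnect(self):   # either user pulls their part out, at any time
        self.state = LinkState.IDLE
```

Each transition requires the previous step, so neither user can force a link; disconnecting is the only move that works from any state, because either user can always pull out.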

When two users link parts of their shapes, new information is created, and therefore new meaning. Child shapes (smaller shaped lines) spread out from the link point; these visualise the new information and meaning that has been created. The visitors in the main exhibition room won’t be able to see what information is contained within an individual main shape, but they can see the new information created by the linkage in the child lines and shapes that come out of it.

The connections persist only for as long as both users remain in their respective personal rooms. The shapes of the different active users revolve slightly for as long as they remain active. A shape becomes inactive when a user leaves his/her room: it becomes less luminous and stops revolving. When a shape stops revolving, its connections to other users’ shapes stretch out and eventually break, releasing the child shapes into a free state where they live on for a time but eventually dissipate.
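This lifecycle — dim, stretch, break, release, dissipate — can be sketched as a tick-based simulation. All the field names, rates and breaking points below are assumptions for illustration, not values from the concept:

```python
class Shape:
    """One user's shape in the multibrain, decaying after the user leaves."""

    def __init__(self):
        self.active = True
        self.luminance = 1.0
        self.revolving = True
        self.links = []          # connections to other users' shapes
        self.free_children = []  # child shapes released when links break

    def user_leaves(self):
        self.active = False
        self.revolving = False   # the shape stops revolving

    def tick(self):
        if not self.active:
            # dim, and stretch each connection until it breaks
            self.luminance = max(0.0, self.luminance - 0.1)
            for link in list(self.links):
                link["strain"] += 1
                if link["strain"] > 3:  # stretched past an assumed breaking point
                    self.links.remove(link)
                    self.free_children.extend(link["children"])
            # freed child shapes live on for a while, then dissipate
            self.free_children = [
                dict(c, ttl=c["ttl"] - 1) for c in self.free_children if c["ttl"] > 1
            ]
```

Nothing is deleted instantly when a user walks out: the connections stretch over several ticks before snapping, and the released children fade on their own clock.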

When a different user enters one of the personal experience rooms after someone else has just finished, the shape straightens out, losing all connections and freeing its children, and forms into a new shape according to the new user’s psychological reactions and further rational/emotional manipulations.

Questions:

- What is the relation to the global digital village?

- Why should the user be able to change the shapes of their visualisations? We could also present the user with a set of pictures and have them assign appropriate values to those pictures; the stimuli bombardment doesn’t seem too necessary in that case. (The user can't manipulate his stuff, only blow it up to take a better look; can't put any more thought into this because of time constraints)

- How can the audience in the exhibition room interact with the main exhibition to see the child shape information visualisations? (see it in the personal experience rooms so they know what's going on)

- How are the child visualisations of new information put together? One pasted over the other? Morphed according to a chaotic pattern? (fade in/out)
