
squidsoup's Online Journal

Documentation of squidsoup's project development.

Watershed Fri 11 June 2004 Workshop


with Gill and Laura, Dani, Alex and Paddy (Watershed), Constance (Bristol Uni) and us

Fun day - 33 ten- and eleven-year-old kids come over to learn about and experience 3D and stereo imaging. We literally invade Watershed's Waterside 1, 2 and 3 :-) The hordes are split into three groups, variously exploring Come Closer with the wearables or a joystick, and making stereograms of themselves using a pair of digital cameras and Photoshop. The day wraps up with us all being treated to a slideshow of everyone in red/blue 3D on the big screen, followed by Spy Kids 3-D.

Apart from a few headaches and people walking away cross-eyed, things went well. Interviews with some of the participants should be online soon...

Watershed Fri 28 - Mon 31 May (Bank Holiday Weekend)


4 day showing for Come Closer, along with Dane's LoveMatch. All goes well, lots of people and comments/feedback. Interesting chat with a behavioural psychologist - this is an area of collaborative research that I think would produce some interesting results.

Futuresonic Wed 28 April - Sat 8 May

Watershed very generously offers Dani's services for the duration of the show (nearly 2 weeks) which proves invaluable.

Several lessons learnt, mainly to do with self-sufficiency: DON'T rely on anyone else's network (especially if you are sharing it with another piece of net art called WiFi Hog), ignore network managers when they say the network is fine, don't count on the internet, and put WiFi base stations as near as possible to the transceivers in wifi-heavy environments. After much stress, with Dani, Cliff and three squidies all pulling much of our remaining hair out, we restructure Come Closer to run off a local server and then, unbelievably, everything slots into place. One evening we are saying 'it can't be done - let's run it in automatic demo mode'; the next day - and for the rest of the show - everything works well.

Even the old Jornadas we're using run 8-hour slots with no problem. The piece can be left to the Urbis invigilators for hours at a stretch.

Again, responses (at least written) seem positive. The 3D goggles lend the piece a fun/gimmicky edge which is no bad thing.

Futuresonic itself is interesting - several other pieces and discussions on 'locative media' (this is what we do, apparently). Check out "Sonic Interface" by Akitsugu Maebayashi (some details at http://www2.gol.com/users/m8/), and also work by Steve Symons. Also good to see Tom Melamed peddling his Schminky wares (and helping us out again :-).

Talk to Bronac and Rachel from ACE - beginning to look at where to take the piece next, in terms of future development.

Fri 19 March - Wed 28 April

The rethink on Dandelions results in a combination of the mirror version of interactivity with the dandelion-scape. Avatars move around as bulges under a virtual blanket, creating new topographies and wind eddies that blow the dandelions around. This seems to work much better. The aesthetic changes are questionable - the 'blanket' is a wireframe mesh, which adds to the sense of perspective but detracts from the visual purity. It contrasts quite effectively with the soft (relative) realism of the seeds, though.
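The bulge-and-eddy idea above can be sketched quite simply: each avatar pushes a smooth bump up under the blanket, and the wind blowing the seeds is the downhill slope away from those bumps. This is a minimal illustration, not squidsoup's actual implementation - the Gaussian shape and the bump parameters are assumptions:

```python
import math

def blanket_height(x, y, avatars, bump_height=1.0, radius=2.0):
    """Height of the virtual 'blanket' at (x, y): each avatar pushes
    up a smooth Gaussian bulge (hypothetical shape and parameters)."""
    h = 0.0
    for ax, ay in avatars:
        d2 = (x - ax) ** 2 + (y - ay) ** 2
        h += bump_height * math.exp(-d2 / (2 * radius ** 2))
    return h

def wind_at(x, y, avatars, eps=0.01):
    """Approximate the wind eddy as the negative gradient of the height
    field (central finite differences), so seeds drift away from avatars."""
    dx = (blanket_height(x + eps, y, avatars) - blanket_height(x - eps, y, avatars)) / (2 * eps)
    dy = (blanket_height(x, y + eps, avatars) - blanket_height(x, y - eps, avatars)) / (2 * eps)
    return (-dx, -dy)
```

Each seed would then just add the local wind vector to its velocity every frame.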

Public Trials (Thu 11 - Thu 18 March)


Reduced to a single day at the end of a very long week. But finally, the last bit of technology falls into place and suddenly we have movement on the projector that corresponds to movement in the space. Major result. The paintbrush finally works - now to paint the picture.

Initial trials with a sound-only piece prove less than convincing, so we opt for an audio-visual approach, using the 3D specs and sound in a single piece. This makes sense for several reasons, but mainly because the idea of using the projection as a mirror into an altered virtual space is definitely enhanced by stereo vision - the impression of distance and space is important, and it works well with the sonic Come Closer idea.

By the trial day we have two separate pieces to show that use the same setup in different ways. 'Dandelions' uses the average position of everyone in the space to steer a collective way through a cloud of dandelion seeds; 'Mirror Mirror' creates an abstracted columnar avatar for each person in the space, and uses their positions as the starting point for a drone that focusses on the distances between people - gentle deep drones when people are near each other, grating louder, higher pitches when they move away.
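The Mirror Mirror mapping described above - pairwise distance driving pitch and loudness - might look something like this. The linear mapping and the A1-to-A5 frequency range are assumptions for illustration, not squidsoup's actual tuning:

```python
import itertools
import math

def drone_params(p, q, base_hz=55.0, max_hz=880.0, full_range=10.0):
    """Map the distance between two avatars to a drone's pitch and
    loudness: deep and gentle when people are close, higher and
    louder as they separate. All constants are hypothetical."""
    d = math.dist(p, q)
    t = min(d / full_range, 1.0)             # 0 = together, 1 = far apart
    freq = base_hz + t * (max_hz - base_hz)  # low pitch when close
    amp = 0.2 + 0.8 * t                      # louder/harsher when apart
    return freq, amp

def drones_for(positions):
    """One drone per pair of people currently in the space."""
    return [drone_params(p, q) for p, q in itertools.combinations(positions, 2)]
```

So two people standing together would produce a quiet 55 Hz drone, drifting up towards 880 Hz as they move 10 m apart.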

Reactions to the trial seem positive, judging from the comments book. Some 50 people tried the piece out.

Generally the 'mirror' idea worked better, as the connection between what each person does and the visual representation is clearer. Dandelions needs a rethink.

We KNOW that using the average position of several people in an active space doesn't work, as this is very close to a setup we used at the ICA (with another project, 'altzero') three years ago. That setup once again proved that such an approach only works well with one person, or with two or more people physically tied together.
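The arithmetic behind this is unforgiving: with n people in the space, one person's movement only shifts the average by 1/n of their step, so individual agency is diluted as soon as the room fills up. A quick sketch (hypothetical positions, just to show the dilution):

```python
def centroid(positions):
    """Average (x, y) position of everyone in the space."""
    n = len(positions)
    return (sum(x for x, _ in positions) / n,
            sum(y for _, y in positions) / n)

# With one visitor the shared cursor tracks them exactly;
# with five, the same 2 m step only moves it 0.4 m.
alone = [(0.0, 0.0)]
crowd = [(0.0, 0.0)] * 5
moved_alone = [(2.0, 0.0)]
moved_crowd = [(2.0, 0.0)] + [(0.0, 0.0)] * 4

print(centroid(moved_alone)[0] - centroid(alone)[0])  # 2.0
print(centroid(moved_crowd)[0] - centroid(crowd)[0])  # 0.4
```

Worse, people moving in opposite directions cancel each other out entirely, so the collective cursor can sit still while the room is full of motion.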

(posted 13 Feb 04)

Come Closer: Another week of extremes. On Tuesday at about 3pm, Tom from Mobile Bristol turned our deep frustration to instant ecstasy as the last link fell into place and we got the handheld iPAQs talking, via an Elvin server, to Director. Geeky, possibly, but very exciting. For the first time, the project seemed possible.

A long list of technical snafus excepted, the week was probably more positive than it felt. Headway was made on the visuals side, the ideas and concepts are beginning to feel a bit more lived-in, and a bigger picture is also emerging about how this fits into what we're doing on a broader scale, and where we can go with it.

The BIG problem with this project has been the number of technologies and people involved: Mobile Bristol, HP, Watershed, three of us... WiFi, iPAQs, C, ultrasound rigs, Elvin, Director... Hopefully we are all getting our communications sorted out. There's a lot of work to do by 11th March, and we may not get the time to massage the thing as much as we'd like, but it still seems that it'll run on the day. Cliff, Dani, Gill and Tom - we love you!

Data (over)flow: Feel mildly battered after last week (9-13 Feb). Our first milestone was reached, however, on Tuesday, when we finally managed to get Elvin and Director formally introduced and communicating. Hooray. Big up Tom and Dani. That meant Come Closer was off and running. Wednesday then frustrated things when the ultrasound rig went down, which highlighted the need to stand back and try to preempt technological problems for DoF. Thoughts of technology leading the creative are dismissed out of hand - well, at least until the damn gadgetry works properly.
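For the curious: Elvin is a content-based publish/subscribe router, so the iPAQ-to-Director bridge amounts to the handhelds publishing notifications (position fields from the ultrasound rig) and Director subscribing to the ones it cares about. Here is a toy in-process version of that pattern - plainly not the real Elvin client API, and all field names are hypothetical:

```python
class TinyRouter:
    """Toy content-based router in the spirit of Elvin: subscribers
    register a predicate over notification fields, and a notification
    is delivered to every subscriber whose predicate matches it."""

    def __init__(self):
        self.subs = []

    def subscribe(self, predicate, callback):
        self.subs.append((predicate, callback))

    def notify(self, **fields):
        for predicate, callback in self.subs:
            if predicate(fields):
                callback(fields)

router = TinyRouter()
seen = []

# Director-side: listen for position updates from any tracked handheld.
router.subscribe(lambda n: n.get("type") == "position", seen.append)

# iPAQ-side: publish wherever the ultrasound rig says we are.
router.notify(type="position", device="ipaq-1", x=3.2, y=1.7)
router.notify(type="heartbeat")  # ignored: no matching subscription
```

The appeal of this model is that the iPAQs and Director never need to know about each other directly - each side only talks to the server.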

As we are now working on each of the projects consecutively, rather than in tandem, we can plan DoF in a lot more detail before we go anywhere near starting production on it. Over the next few weeks we will think about exactly what types of data stream we want to pull into the project and how we can go about capturing them. Up until now our thinking has been to use a set of various environmental and atmospheric sensors. What might work better is to capture all data in the same way (e.g. microphones and webcams) but treat each stream differently. A likely testing session with the sensor kit might resolve this, however.

(posted 23 Jan 04)


Data (over)flow
Problem: We tested the possibility of projecting from Watershed's Waterside 3 space onto the River Frome at night. Although we were only using a low-lumen projector, it quickly became apparent that there were some "issues" with our original idea. In other words, without a very powerful projector nothing would be visible when projected onto the quay wall on the opposite bank of the river. On top of that, we have the wrong kind of dirty water: the green gunk absorbs ALL light, reflecting nothing back for an immersive user experience.

Our fear is that we could spend a lot of time trying to get the thing working on a tech level, and that might impact on the quality of the actual content. We would rather spend longer on the fun bit that ultimately makes an idea work, or not.

Solution: Attach two video cameras to the exterior of Watershed and point them at the opposite bank, one with a blue filter, the other with red. They will be pretty close together (approximately the same distance apart as a person's eyes). These create a view of our original projection area, but this view will itself be projected inside the Watershed as our background. ON TOP of this we will then project our Data (over)flow (DOF) piece. This double projection (the video of outside plus DOF over the top of it) will, in turn, be filmed and then shown in realtime on other screens around Watershed (plasma, cinemas, bar). Overall, a viewer gets an abstracted experience of looking at the River Frome and what might be flowing through and around it at any time.
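Combining the two filtered camera views into a single stereo frame is the classic anaglyph trick: take the red channel from one eye's image and the blue (here, blue and green, the common red/cyan convention) from the other. A minimal sketch, assuming two same-sized RGB frames rather than squidsoup's actual capture pipeline:

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Combine two RGB frames (H x W x 3 uint8 arrays) into an
    anaglyph: red channel from the left camera, green and blue
    from the right. Which camera feeds which channel is assumed
    here, not taken from the actual rig."""
    out = np.zeros_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]   # red from the left eye's view
    out[..., 1] = right_rgb[..., 1]  # green from the right
    out[..., 2] = right_rgb[..., 2]  # blue from the right
    return out
```

Viewed through red/blue specs, each eye then sees only its own camera's view, which is what restores the impression of depth.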

Data input from sensors including:
