Stereoscopic Performance: SIGGRAPH

26 08 2008

SIGGRAPH (ACM Special Interest Group on Computer Graphics, http://www.siggraph.org/) is a long-running and very large conference aimed mostly at the technical aspects of computer graphics: the games industry, Hollywood and animation productions are all beneficiaries of the outcomes of its research topics. The University of Manchester runs a Professional Chapter of the organisation and was at the meeting (Los Angeles, August 2008) to promote its activities (http://manchester.siggraph.org/).

SIGGRAPH08

This year there was an arts-orientated keynote by Catherine Owens (artist and co-director of “U2 3D”): “Giving Technology Emotion: From the Artist’s Mind to ‘U2 3D’”. She discussed the challenges of creating a 3D movie from U2’s tour of South America. Catherine has been working with U2 for many years, and the move to 3D was adventurous, using multiple 3D camera set-ups from famous sources. Time and development were key: the post-production editing took 18 months, and new 3D blends were specially developed to allow different 3D scenes to blend together, alongside the use of multiple sets of expensive 3D cameras.

One quote was that 3D should be “easy on the mind”, so there were very few “obvious” 3D effects and more emphasis on storytelling, although there were quite a few 3D overlay and blending effects; compare these with e-Dance’s transparency and overlay requirements. Link to the official site: http://www.u23dmovie.com/

“Irish artist/director Catherine Owens creates installations that evolve through painting, sculpture, photography, sound, and video. She is well known for her collaboration with the Irish band U2 on their last four world tours. She co-directed “U2 3D,” a documentary of the band’s live performance in South America on their 2006 Vertigo tour. “U2 3D” is the first live-action feature-length 3D digital theatrical release…”

Throughout the meeting projected 3D presentation was a focus, with the screening of “U2 3D” a key point. This also demonstrated how arts and science are blending together. The work is an example of a much larger and (obviously) commercial exploitation of outcomes similar to those of e-Dance, and it is interesting to compare the two.



August Research Intensive

25 08 2008

       pict0008-smhb.jpg

The fifth research intensive took place between 4 and 8 August in Bedford.  This week-long laboratory included the following project participants: Helen Bailey, Michelle Bachler, Anja Le Blanc and Andrew Rowley.  We were joined by Video Artist Catherine Watling and four Dance Artists: Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison.  There were several aims for this lab:

1. To explore, in a distributed performance context, the software developments that have been made to Memetic and the AG environment for the manipulation of video/audio streams.
2. To integrate other technologies/applications into this environment.
3. To initiate developments for Compendium.
4. To explore the compositional opportunities that the new developments offer.

In terms of set-up this has been the most ambitious research intensive of the project.  We were working with four dancers distributed across two networked theatre spaces.  For ease we worked in the theatre auditorium and theatre studio at the University, both of which are housed within the same building.  This provided the opportunity for emergency dashes between the two locations if the technology failed.  We worked with three cameras in each space for live material, as well as pre-recorded material in both locations.  Video Artist Catherine Watling worked with us during the week to generate pre-recorded, edited video material that could then be integrated into the live context.

1. Software developments in a distributed environment

The significant developments from my perspective were, firstly, the ability of the software to remember the layout of projected windows from one ‘scene’ to another.  This allowed for a much smoother working process and for the development of more complex compositional relationships between images.  The second significant development was the transparency of overlaid windows, which allows for the creation of composite live imagery across multiple locations.
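As a rough illustration of what ‘remembering’ a layout involves, here is a minimal sketch in Java (the language the tool is written in), assuming windows are keyed by scene name and window identifier; the class and method names are invented here, not the actual e-Dance code:

    import java.awt.Rectangle;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: remembering projected-window layouts per scene.
    public class SceneLayouts {
        // scene name -> (window id -> position and size on the projection)
        private final Map<String, Map<String, Rectangle>> layouts =
                new HashMap<String, Map<String, Rectangle>>();

        /** Record where a window sits within the named scene. */
        public void remember(String scene, String windowId, Rectangle bounds) {
            Map<String, Rectangle> sceneMap = layouts.get(scene);
            if (sceneMap == null) {
                sceneMap = new HashMap<String, Rectangle>();
                layouts.put(scene, sceneMap);
            }
            sceneMap.put(windowId, new Rectangle(bounds));
        }

        /** Restore the stored bounds when the scene is recalled (null if unknown). */
        public Rectangle recall(String scene, String windowId) {
            Map<String, Rectangle> sceneMap = layouts.get(scene);
            return sceneMap == null ? null : sceneMap.get(windowId);
        }
    }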

What’s really interesting is that the software development is at the stage where we are just beginning to think about user interface design.  During the week we looked at various applications to assess the usefulness of their interfaces, or elements of their design, for e-Dance.  We focused on Adobe Premiere, Isadora, Arkaos, Wizywyg and PowerPoint.  These all provided useful insights into where we might take the design.

2. Integration of other technologies

We had limited opportunity to integrate other technology and applications during the week.  I think this is better left until the software under development is more robust and we have a clearer understanding of its functionality.  We did, however, integrate the Acecad DigiMemo pads into the live context as graphics tablets.  I was first introduced to these by those involved in the JISC-funded VRE2 VERA project, which is running concurrently with e-Dance.  This provided an interesting set of possibilities, both in terms of the operator interface and in terms of the inclusion of the technology within the performance space to be used directly by the performers.

3. Begin Compendium development

The OU software development contribution to the project began in earnest with this intensive.  Michelle was present throughout the week, which gave her the opportunity to really immerse herself in the environment and gain some first-hand experience of the choreographic process and the kinds of working practices that have been adopted by the project team so far.

Throughout the week Michelle created Compendium maps for each day’s activity.  It became clear that the interface, as it currently stands, would militate against the choreographic process we are involved in, so having someone dedicated to the documentation of the process was very useful.  It also gave Michelle first-hand experience of the problems.  The choreographic process is studio-based; it is dialogic, in terms of the construction of material between choreographer and dancers; it involves the articulation of ideas that are at the edges of verbal communication; and judgements are tacitly made and understood.  Michelle’s immediate response to this context was to begin to explore voice-to-text software as a means of mitigating some of these issues.

The maps that were generated during the week are really interesting in that they have already captured thinking, dialogue and decision-making within the creative process that would previously have been lost.  The maps also immediately raise questions of authorship and perspective.  The maps from the intensive had a single author; they were not collaboratively constructed, and they represent a single perspective on a collaborative process.  So over the next few months it will be interesting to explore the role of collaboration in mapping the process: whether we should aim for a poly-vocal single map that takes account of multiple perspectives, or an interconnected series of individually authored maps, will need to be considered.

4. Compositional developments

Probably influenced by the recent trip to ISEA (the week before!), the creative or thematic focus for the laboratory was again concerned with spatio-temporal structure, but specifically with location.  I began with a series of key terms that clustered around this central idea: dislocate, relocate, situate, resituate, trace, map.  A series of generative tasks was developed that would result in either live/projected material or pre-recorded video material.  This material then formed the basis of the following sections of work-in-progress material:

Situate/Resituate Duet (Created in collaboration with and performed by River Carmalt and Amalia Garcia) (approx. 5 minutes)

                                 katherinedrawing2-crophb.JPG                               

The duet was constructed around the idea of ‘situating’ either part of the body or the whole body, then resituating that material either spatially, onto another body part or onto the other dancer’s body.  We used an overhead camera to stream this material and then project it live, in real time and to scale, on the wall of the performance space.  This created an ambiguous mapping of the space.

                                  pict0015i-smhb.jpg                       

The Situate/Resituate Duet developed through the integration of the DigiMemo pads into the performance context.  James and Catherine were seated at a downstage table where they could be seen drawing.  The drawing was projected as a semi-transparent window over the video stream, allowing them to directly ‘annotate’, or graphically interact with, the virtual performers.

  katherinedrawing6-crophb.JPG  

Auto(bio/geo)graphy (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison.  Filmed and edited by Catherine Watling) (4 x approx. 2-minute video works)

In this task we integrated Google Maps into the performance context as both content and scenography.  We used a laptop connected to Google Maps on stage and simultaneously projected the website onto the back wall of the performance space.  The large projection of the satellite image of the earth resituated the performer into an extra-terrestrial position.

               dscf6424_sm.jpg

Each performer was asked to navigate Google Maps to track their own movements in the chronological order of their lives.  They could be as selective as they wished whilst maintaining chronological order.  This generated a narrativised mapping of space/time.  The task resulted in a series of edited two-minute films of each performer mapping their chronological movements around the globe.  Although we didn’t have time within the week to use these films vis-à-vis live performance, we have some very clear ideas for their subsequent integration at the next intensive.

                dscf6428.JPG

Dislocate/Relocate: Composite Bodies (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison) (approx. 4-minutes)

This distributed quartet was constructed across two locations.  It explored the ideas of dislocation and relocation by fragmenting the four performers’ bodies and then reconstituting two composite virtual bodies from fragments of the four live performers.

          wedspict0003_sm.JPG

We began with the idea of attempting to ‘reconstruct’ a coherent singular body from the dislocated/relocated bodily fragments.  Then we went on to explore the radical juxtaposition created by each fragmentary body part moving in isolation.

                 pict0006h-sm.jpg

                            pict0001h-smhb.jpg

“From Here I Can See…” (A Distributed Monologue) (Created in collaboration with and performed by Catherine Bennett, River Carmalt and James Hewison) (approx. 5 minutes)

This distributed trio was initially focused on the construction of a list-based monologue in which the sentence “From here I can see…” was completed, in list form, through a series of associative relationships.

pict0023-smhb.jpg

In one location Catherine delivered a verbal monologue while another dancer, in the same space, performed a micro-gestural solo with his feet.  A third dancer in the other space performed a floor-based solo.

Trace Duet (Created in collaboration with and performed by Catherine Bennett and James Hewison)

In this duet we focused on bringing together the new software capability to produce layered transparency of windows and the use of the Acecad DigiMemo pad as a graphics tablet.  We also worked with a combination of handheld and fixed camera positions.

pict0002j-smhb.jpg

pict0004j-smhb.jpg

pict0013j-smhb.jpg



ISEA 2008 in Singapore

24 08 2008

isea-banner.bmp 

Sita and I went to Singapore at the end of July to present the project paper titled “e-Dance: Relocating choreographic practice as a new modality for performance and documentation”.  The conference took place at venues throughout Singapore, including Singapore Management University, the National University of Singapore, Nanyang Technological University and Lasalle College of the Arts.

 iseauni.jpg  isealasalles.jpg

We both had a really interesting time.  There were some great presentations.  The keynotes by Ken Mogi on Creativity and the Brain and Lev Manovich on Visualising Culture were both particularly relevant to the project.  Ken Mogi constructed a view of creativity from the perspective of evolutionary psychology.  He drew parallels between memory and creativity in terms of brain function, and talked about the ‘feeling of knowing’ as referred to by psychologists and neuroscientists.  This seems particularly relevant in relation to Simon’s previous post.

There was a juried exhibition as part of the symposium and various shows that included a range of installation art work, the majority of which focused on interactivity. 

 iseaexodus.jpg iseainteractive.jpg

The most compelling performance experience was “True”, a Japanese dance-theatre work created collaboratively by an interdisciplinary team of artists and technologists (some of whom were members of Dumb Type).  The integration of technology into the live work was so subtly managed that it was a visual, visceral and auditory treat.

main_true.jpg

 



EVA London 2008

24 08 2008

morphtrio.jpg 

Martin and I, together with Dance Artist James Hewison, who has contributed to several of the research intensives, presented a paper at EVA London in July.  The presentation, titled “Choreographic Morphologies”, outlined the key findings from the Morphologies practice-led research.  The conference proceedings have been published by the British Computer Society.

morphama.jpg



Performance, programming and user interfaces: The Fifth eDance Intensive

11 08 2008

Last week, I attended the fifth eDance intensive in Bedford.  Upon arriving, we discovered that we were to put on a public performance on the Friday (unfortunately after I had to leave).  This increased the pressure somewhat, although our confidence in the recording software had recently improved after it was used to record the Research Methods Festival “What Is” sessions in Oxford over four days without incident.  To add to the task, we were also to set up two separate spaces for this performance, with three cameras in each and audio conferencing between the spaces, and we hadn’t brought any echo-cancelling microphones and had only one powerful laptop!  We immediately got on with the task of setting up the two spaces.

One of the aspects of eDance that differs from Access Grid is the requirement for high-quality video at high frame rates.  Access Grid has historically used H.261 video, which is transmitted at a resolution of 352 x 288 and tends to reach about 20 frames per second.  For eDance (and the related CREW project), we decided to use miniDV camcorders connected over the firewire interface.  This allows 25 fps video at 720 x 576, four times the resolution of regular Access Grid video.  Thankfully, AG is easily extensible, which allowed us to make this change without much effort.  The main issue, however, was the laptop processing load.  We initially thought that our tool was not very good in terms of CPU loading, and assumed that this was because we are using Java.  However, when we tried vic, the video tool that AG uses, we found that at the same resolution and frame rate it also tends to result in high CPU usage.  The main advantage that vic has over our tool is that it allows you to control the bandwidth and frame rate of the transmitted video.  Fortunately, the piece that Helen was composing did not require us to use all of our video streams simultaneously, so we managed to get away without too much overloading (although the fans on the laptops were running full steam ahead)!
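For a sense of the numbers, here is a back-of-the-envelope check of the figures above; a sketch assuming 4:2:0 chroma subsampling (12 bits per pixel), which is what DV-class video uses, measured before any codec compression:

    // Rough arithmetic behind the resolution and load figures quoted above.
    public class VideoBudget {
        public static void main(String[] args) {
            int h261Pixels = 352 * 288;   // CIF, as used by classic AG video
            int dvPixels = 720 * 576;     // PAL DV over firewire
            System.out.printf("Resolution ratio: %.1fx%n",
                    (double) dvPixels / h261Pixels);          // ~4.1x

            // Uncompressed 4:2:0 at 25 fps, before any codec:
            double rawMbps = dvPixels * 1.5 * 8 * 25 / 1e6;   // bytes/px * bits * fps
            System.out.printf("Raw 4:2:0 stream at 25 fps: %.0f Mbit/s%n", rawMbps);
        }
    }

Even compressed, DV runs at about 25 Mbit/s per camera, so six simultaneous streams give both the network and the decoding laptops plenty to do.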

The first of the successful uses of the videos between the two spaces was a piece where the video from the top half of one person was displayed over the video of the bottom half of another.  The two dancers then had to coordinate their movements so that they appeared to be one person.  This produced some wonderful images, as well as some good comedy moments!
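As a sketch of what this composite amounts to in code (illustrative only; it assumes both frames are the same size, and the class and method names are invented here):

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Draw the top half of one dancer's frame over the bottom half of
    // another's, so the two appear to form a single body.
    public class SplitBody {
        public static BufferedImage combine(BufferedImage top, BufferedImage bottom) {
            int w = top.getWidth(), h = top.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = out.createGraphics();
            // bottom half of the second dancer's frame
            g.drawImage(bottom.getSubimage(0, h / 2, w, h - h / 2), 0, h / 2, null);
            // top half of the first dancer's frame
            g.drawImage(top.getSubimage(0, 0, w, h / 2), 0, 0, null);
            g.dispose();
            return out;
        }
    }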

Our next issue came with a misunderstanding about what our software could do.  We had bragged about how we could now have transparent windows, and we had also tested whether the video underneath would show through the one on top (rather than a pink area being drawn, as had happened before).  This led Helen to start planning a piece that could use this functionality: it looked like it could be really cool, overlaying people onto a model of a village!  Unfortunately, the way the technology worked was to capture the scene behind the transparent video and then statically mix this in, which meant that the image behind the transparent window could not move.  I set to fixing this problem straight away (I had already thought about how this might work), but by the time I had made it work, Helen had moved on to something else.
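A minimal sketch of the shape of the fix, using standard Java2D compositing rather than our actual code (the names here are invented): the point is that the blend is recomputed for every frame, so the video underneath stays live instead of being a static snapshot.

    import java.awt.AlphaComposite;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Per-frame blending: called once per refresh, never cached statically.
    public class LiveBlend {
        public static BufferedImage blend(BufferedImage background,
                                          BufferedImage overlay, float opacity) {
            BufferedImage out = new BufferedImage(background.getWidth(),
                    background.getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D g = out.createGraphics();
            g.drawImage(background, 0, 0, null);          // live frame underneath
            g.setComposite(AlphaComposite.getInstance(
                    AlphaComposite.SRC_OVER, opacity));   // semi-transparent overlay
            g.drawImage(overlay, 0, 0, null);
            g.dispose();
            return out;
        }
    }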

During the course of the week there was a small saga relating to an order of drawing pads that was supposed to arrive by next-day delivery on the Monday.  When these did arrive, on the Thursday, Helen immediately devised a new idea involving overlaying the drawings made on the devices over the video.  As I now had the transparent video working, we got this going straight away by placing the transparent video window on top of the drawings.  Michelle also worked out how to give the drawing a black background with a light grey pen colour, which looked good behind the video windows.

The second successful use of the software was then drawing on the overlaid transparent video windows.  The dancers moved around the space and, when the drawing-pad operator said “Stop”, they froze while the operator drew around their shapes (and sometimes the shape of their shadows).  This resulted in some quite complex images!

We also realised on the Thursday that we still had not got the audio working between the spaces.  As we did not have echo-cancelling equipment, our efforts using the audio in the software tended to result in loud noises being played into the spaces!  We decided that we should therefore use some software with echo cancelling built in, such as Skype or MSN Messenger.  Skype turned out not to work too well, as it had not been tested on the Bedford campus.  Although this was resolved before the end of the week, we decided to stick with MSN Messenger as it seemed to work fine.  This meant that the audio was not recorded, but it also meant that it didn’t echo!
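For anyone curious what “echo cancelling built in” actually does: roughly, an adaptive filter learns how the loudspeaker signal reappears at the microphone and subtracts its estimate.  The sketch below is a textbook NLMS filter, purely illustrative; production cancellers such as Skype’s are far more sophisticated.

    // Illustrative NLMS echo canceller: estimates the echo of the far-end
    // (loudspeaker) signal in the microphone signal and removes it.
    public class EchoCanceller {
        private final double[] w;   // adaptive filter taps (echo path estimate)
        private final double[] x;   // recent far-end samples
        private final double mu;    // adaptation step size, e.g. 0.1

        public EchoCanceller(int taps, double mu) {
            this.w = new double[taps];
            this.x = new double[taps];
            this.mu = mu;
        }

        /** Process one sample pair; returns the mic sample minus estimated echo. */
        public double process(double farEnd, double mic) {
            System.arraycopy(x, 0, x, 1, x.length - 1);   // shift in newest sample
            x[0] = farEnd;
            double estimate = 0, energy = 1e-9;
            for (int i = 0; i < w.length; i++) {
                estimate += w[i] * x[i];
                energy += x[i] * x[i];
            }
            double error = mic - estimate;                // residual after cancelling
            for (int i = 0; i < w.length; i++) {
                w[i] += mu * error * x[i] / energy;       // NLMS weight update
            }
            return error;
        }
    }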

Another limitation of the software was revealed when we were asked to play back some recorded media.  During the course of the week we were joined by a video specialist who had recorded the performers using Google Maps (which looked very impressive) and also whilst out walking in a forest (which gave us some much-needed time to get the transparencies working and to look at other software; see later).  We then had to ask her to write this material back to DV tape to be imported into our software, as we couldn’t yet import AVI or MPEG files.  We also realised that we would have to launch a separate instance of the software to play back each of the clips quickly; otherwise the audience would have to wait while we loaded each clip up.  With the CPU load already high from receiving video, we had to add an option for these extra instances not to receive the transmitted videos, otherwise the computer would quickly become overloaded.  Anja added this and it worked well.
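The option might look something like the following sketch; the flag name and structure here are invented for illustration, not Anja’s actual change:

    // A playback-only instance transmits its clip but skips decoding the
    // incoming streams, keeping its CPU load down.
    public class PlaybackInstance {
        public static void main(String[] args) {
            boolean receiveStreams = true;
            for (String arg : args) {
                if (arg.equals("--no-receive")) {
                    receiveStreams = false;   // don't join the incoming video
                }
            }
            // ... load the recorded clip and start transmitting it ...
            if (receiveStreams) {
                // ... also decode and display the other video streams ...
            }
        }
    }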

Finally, one important outcome from the intensive was that we all worked out how the interface might look from the video perspective.  We realised that we had over-complicated the interface in terms of representing video timelines, and that it might be best if the interface looked a little like Microsoft PowerPoint (or other presentation-editing software).  There would obviously be major differences, such as being able to edit what is seen on the live screen without exiting what would be Presentation Mode in PowerPoint.  We were also shown the user interfaces of other tools used during and before performances, such as Arkaos for “VJing” (like DJing but with video).  I have since developed a mock-up (in PowerPoint, funnily enough) of this interface, which takes some features from each of these tools and adds some of our own.
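As a sketch of the slide-like model we converged on (all names here are illustrative, not the real design): a performance is an ordered list of ‘scenes’, each holding its windows, their placement and their transparency, with the operator stepping between scenes live, much as a presenter steps between slides.

    import java.awt.Rectangle;
    import java.util.ArrayList;
    import java.util.List;

    public class Performance {
        static class Window {
            String source;      // a camera, a recorded clip or a drawing pad
            Rectangle bounds;   // where it sits on the projection
            float opacity;      // 1.0 = opaque, lower = transparent overlay
        }

        static class Scene {
            String name;
            List<Window> windows = new ArrayList<Window>();
        }

        private final List<Scene> scenes = new ArrayList<Scene>();
        private int current = 0;

        /** Advance to the next scene, as an operator would during a show. */
        public Scene next() {
            if (current < scenes.size() - 1) current++;
            return scenes.isEmpty() ? null : scenes.get(current);
        }
    }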