e-Dance@BL’s Growing Knowledge Exhibition

12 10 2010

Today sees the launch of the British Library’s major new exhibition on how digital tools are already transforming how we do research: Growing Knowledge: The Evolution of Research (12 October 2010 – 16 July 2011). [Media coverage]

We’re delighted to say that the e-Dance Project was selected as one of the examples, showcasing how close collaboration between technology researchers (who originally developed Access Grid video-conferencing/collaboration tools and Compendium hypermedia mapping in an e-Science context) and arts and humanities researchers – in this case choreographic researchers/practitioners – enables new ground to be broken in playing with time and space in the discipline.

Flickr set: e-Dance@BL photos

The e-Dance exhibit presents video material introducing the project, with examples of the technologies in action. Some of this is on the BLGK demos website, as an extract from an extended podcast playlist.

Browse the blog to learn more, and to download the Access Grid Scene Editor and Compendium e-Dance Edition.

This article sets out the academic rationale for e-Dance:

Bailey, H., Bachler, M., Buckingham Shum, S., Le Blanc, A., Popat, S., Rowley, A. and Turner, M. (2009). Dancing on the Grid: Using e-Science Tools to Extend Choreographic Research. Philosophical Transactions of the Royal Society A, 13 July 2009, Vol. 367, No. 1898, pp. 2793-2806. [PDF]



e-Dance at an e-Infrastructure workshop

8 09 2010

During a pre-DRHA workshop, the barriers to e-Science take-up and related issues were discussed, drawing on the experiences of the e-Dance project with the national services, including the AGSC (Access Grid Support Centre).



Research Intensive 6

14 11 2008

Performers: Louise Douse and Sita Popat

The sixth research intensive of the project is taking place over several days throughout November, the first of which was held on 13 November 2008.  The team worked in two locations: the University of Manchester and the Open University in Milton Keynes.  The team comprised myself (Helen), Simon, Michelle and Louise at the OU, and Sita, Martin, Anja and Andrew at Manchester.  This research intensive has two key aims – (1) to explore the enhanced AG environment for distributed live performance from a purely screen-based perspective, and (2) to use Compendium to map this process in order to feed into this aspect of the software development.  It was really exciting to be working across such a geographically distributed context.  Although we have worked in this way in previous intensives, the multiple locations have been within the same campus (at either Manchester or Bedford) so that we could firefight more easily.  That we were able to work across two campuses with a fairly short set-up time is an indication of the robustness of the developments.



August Research Intensive

25 08 2008

pict0008-smhb.jpg

The fifth research intensive took place from 4 to 8 August in Bedford.  This week-long laboratory included the following project participants: Helen Bailey, Michelle Bachler, Anja Le Blanc and Andrew Rowley.  We were joined by video artist Catherine Watling and four dance artists: Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison.  There were several aims for this lab: 1. To explore, in a distributed performance context, the software developments that have been made to Memetic and the AG environment for the manipulation of video/audio streams. 2. To integrate other technologies/applications into this environment. 3. To initiate developments for Compendium. 4. To explore the compositional opportunities that the new developments offer.

In terms of set-up this has been the most ambitious research intensive of the project.  We were working with four dancers distributed across two networked theatre spaces.  For ease we were working in the theatre auditorium and theatre studio at the University, both of which are housed within the same building.  This provided the opportunity for emergency dashes between the two locations if the technology failed.  We worked with three cameras in each space for live material, and with pre-recorded material in both locations.  Video artist Catherine Watling worked with us during the week to generate pre-recorded, edited video material that could then be integrated into the live context.

1. Software developments in a distributed environment

The most significant developments from my perspective were, firstly, the ability of the software to remember the layout of projected windows from one ‘scene’ to another.  This allowed for a much smoother working process and for the development of more complex compositional relationships between images.  The second significant development was the transparency of overlaid windows, which allows for the creation of composite live imagery across multiple locations.
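As a rough sketch of what ‘remembering the layout’ involves (hypothetical names and structure – the actual e-Dance implementation isn’t shown in this post), the core of such a feature is a per-scene map from window identifiers to screen rectangles, restored whenever a scene is recalled:

```java
import java.awt.Rectangle;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not the e-Dance code itself: each scene records
// where its projected video windows were placed, so switching scenes can
// restore the previous layout instead of resetting it.
public class SceneLayouts {
    // scene name -> (window id -> position/size on the projection surface)
    private final Map<String, Map<String, Rectangle>> layouts = new HashMap<>();

    public void saveWindow(String scene, String windowId, Rectangle bounds) {
        layouts.computeIfAbsent(scene, s -> new HashMap<>())
               .put(windowId, new Rectangle(bounds));
    }

    // Returns the remembered layout for a scene (empty if never saved).
    public Map<String, Rectangle> recall(String scene) {
        return layouts.getOrDefault(scene, Map.of());
    }
}
```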

What’s really interesting is that the software development is at the stage where we are just beginning to think about user interface design.  During the week we looked at various applications to assess the usefulness of their interfaces, or elements of their design, for e-Dance.  We focused on Adobe Premiere, Isadora, Arkaos, Wizywyg and PowerPoint.  These all provided useful insights into where we might take the design.

2. Integration of other technologies

We had limited opportunity to integrate other technologies and applications during the week.  I think this is better left until the software under development is more robust and we have a clear understanding of its functionality.  We did, however, integrate the Acecad Digi Memo Pads into the live context as graphics tablets.  I was first introduced to these by those involved in the JISC-funded VRE2 VERA Project, which is running concurrently with e-Dance.  This provided an interesting set of possibilities, both in terms of the operator interface and in the inclusion of the technology within the performance space, to be used directly by the performers.

3. Begin Compendium development

The OU software development contribution to the project began in earnest with this intensive.  Michelle was present throughout the week, which gave her the opportunity to really immerse herself in the environment and gain some first-hand experience of the choreographic process and the kinds of working practices that have been adopted by the project team so far.

Throughout the week Michelle created Compendium maps for each day’s activity.  It became clear that the current interface would militate against the choreographic process we are involved in, so having someone dedicated to the documentation of the process was very useful.  It also gave Michelle first-hand experience of the problems.  The choreographic process is studio-based; it is dialogic in terms of the construction of material between choreographer and dancers; it involves the articulation of ideas that are at the edges of verbal communication; and judgements are tacitly made and understood.  Michelle’s immediate response to this context was to begin to explore voice-to-text software as a means of mitigating some of these issues.

The maps that were generated during the week are really interesting in that they have already captured thinking, dialogue and decision-making within the creative process that would previously have been lost.  The maps also immediately raise questions about authorship and perspective.  The maps from the intensive had a single author: they were not collaboratively constructed, so they represent a single perspective on a collaborative process.  Over the next few months it will be interesting to explore the role/function of collaboration in mapping the process – whether we should aim for a poly-vocal single map that takes account of multiple perspectives, or for an interconnected series of individually authored maps, will need to be considered.

4. Compositional developments

Probably influenced by the recent trip to ISEA (the week before!), the creative or thematic focus for the laboratory was again concerned with spatio-temporal structure, but specifically with location.  I began with a series of key terms that clustered around this central idea: dislocate, relocate, situate, resituate, trace, map.  A series of generative tasks were developed that would result in either live/projected material or pre-recorded video material.  This material then formed the basis of the following sections of work-in-progress material:

Situate/Resituate Duet (Created in collaboration with and performed by River Carmalt and Amalia Garcia) (approx. 5 minutes)

katherinedrawing2-crophb.JPG

The duet was constructed around the idea of ‘situating’ either part of the body or the whole body, then resituating that material either spatially, onto another body part, or onto the other dancer’s body.  We used an overhead camera to stream this material and then project it live, in real time and to scale, on the wall of the performance space.  This performed an ambiguous mapping of the space.

pict0015i-smhb.jpg

The Situate/Resituate Duet developed through the integration of the Digi Memo Pads into the performance context.  James and Catherine were seated at a down-stage table where they could be seen drawing.  The drawing was projected as a semi-transparent window over the video stream.  This allowed them to directly ‘annotate’ or graphically interact with the virtual performers. 

katherinedrawing6-crophb.JPG

Auto(bio/geo)graphy (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison.  Filmed and edited by Catherine Watling) (4 x approx. 2-minute video works)

In this task we integrated Google Maps into the performance context as both content and scenography.  We used a laptop connected to Google Maps on stage and simultaneously projected the website onto the back wall of the performance space.  The large projection of the satellite image of the earth resituated the performer into an extra-terrestrial position.

dscf6424_sm.jpg

Each performer was asked to navigate Google Maps to track their own movements in the chronological order of their lives.  They could be as selective as they wished whilst maintaining chronological order.  This generated a narrativised mapping of space/time.  The task resulted in a series of edited 2-minute films of each performer mapping their chronological movements around the globe.  Although we didn’t have the time within the week to use these films vis-à-vis live performance, we have some very clear ideas for their subsequent integration at the next intensive.

dscf6428.JPG

Dislocate/Relocate: Composite Bodies (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison) (approx. 4 minutes)

This distributed quartet was constructed across two locations.  It explored the ideas of dislocation and relocation through a fragmentation of the four performers’ bodies, reconstituting two composite virtual bodies from fragments of the four live performers.

wedspict0003_sm.JPG

We began with the idea of attempting to ‘reconstruct’ a coherent singular body from the dislocated/relocated bodily fragments.  Then we went on to explore the radical juxtaposition created by each fragmentary body part moving in isolation.

pict0006h-sm.jpg

pict0001h-smhb.jpg

“From Here I Can See…” (A Distributed Monologue) (Created in collaboration with and performed by Catherine Bennett, River Carmalt and James Hewison) (approx. 5 minutes)

This distributed trio was initially focused on the construction of a list-based monologue, in which the sentence “From here I can see…” was completed in list form through a series of associative relationships.

pict0023-smhb.jpg

In one location Catherine delivered a verbal monologue while another dancer, in the same location, performed a micro-gestural solo with his feet.  A third, non-co-located dancer in the other space performed a floor-based solo.

Trace Duet (Created in collaboration with and performed by Catherine Bennett and James Hewison)

In this duet we focused on bringing together the new software capability to produce layered transparency of windows and the use of the Acecad Digi Memo Pad as a graphics tablet.  We also worked with a combination of handheld and fixed camera positions.

pict0002j-smhb.jpg

pict0004j-smhb.jpg

pict0013j-smhb.jpg



Performance, programming and user interfaces: The Fifth eDance Intensive

11 08 2008

Last week, I attended the fifth eDance intensive in Bedford.  Upon arriving, we discovered that we were to put on a public performance on the Friday (unfortunately after I had to leave).  This increased the pressure somewhat, although our confidence in the recording software had recently improved after it had been used successfully to record the Research Methods Festival “What Is” sessions in Oxford over four days without incident.  To add further to this task, we were also to set up two separate spaces for this performance, with three cameras in each and audio conferencing between the spaces, and we hadn’t brought any echo-cancelling microphones and only one powerful laptop!  We immediately got on with the task of setting up the two spaces.

One of the aspects of eDance that is different from Access Grid is the requirement for high-quality video at high frame rates.  Access Grid has historically used H.261 video, which is transmitted at a resolution of 352 x 288 and tends to reach about 20 frames per second.  For eDance (and the related CREW project), we decided to use miniDV camcorders as the cameras, using the firewire interface.  This allows 25 fps video at 720 x 576, four times the resolution of regular Access Grid video.  Thankfully, AG is easily extensible, which allowed us to make this change without much effort.  The main issue, however, is the laptop processing load.  We initially thought that our tool was not very good in terms of CPU loading, and assumed that this was because we are using Java.  However, we tried vic, the video tool that AG uses, and found that with the same resolution and frame rate it also tends to result in high CPU usage.  The main advantage that vic has over our tool is that it allows you to control the bandwidth and frame rate of the transmitted video.  Fortunately, the piece that Helen was composing did not require us to use all of our video streams simultaneously, so we managed to get away without too much overloading (although the fans on the laptops were running full steam ahead)!
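To put rough numbers on the processing load (illustrative arithmetic only): the “four times” above refers to per-frame resolution, and once the higher frame rate is included the raw pixel throughput is roughly five times that of H.261, which is where the CPU cost comes from.

```java
// Back-of-the-envelope comparison of raw pixel throughput for the two
// video formats mentioned above (before any compression).
public class PixelRate {
    public static void main(String[] args) {
        long h261 = 352L * 288 * 20; // H.261 CIF at ~20 fps =  2,027,520 px/s
        long dv   = 720L * 576 * 25; // DV PAL at 25 fps     = 10,368,000 px/s
        System.out.printf("H.261: %,d pixels/s%n", h261);
        System.out.printf("DV:    %,d pixels/s%n", dv);
        System.out.printf("Ratio: %.1fx%n", (double) dv / h261); // ~5.1x
    }
}
```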

The first successful use of video between the two spaces was a piece in which the video of the top half of one person was displayed over the video of the bottom half of another.  The two dancers then had to coordinate their movements so that they appeared to be one person.  This produced some wonderful images, as well as some good comedy moments!

Our next issue came with a misunderstanding about what our software could do.  We had bragged about how we could now have transparent windows.  We had also tested to see if the video underneath would show through the one on top (rather than drawing a pink area, as had happened before).  This led Helen to start planning a piece that could use this functionality (it looked like it could be really cool – overlaying people onto a model of a village)!  Unfortunately, the way the technology worked was to capture the scene behind the transparent video and then statically mix this in.  This meant that the image behind the transparent window could not move.  I set to fixing this problem straight away (I had already thought about how this might work), but by the time I had made it work, Helen had moved on to something else.
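For illustration, the fix amounts to compositing per frame rather than once.  Here is a minimal sketch using Java2D’s AlphaComposite (an assumption on my part – the post doesn’t show the tool’s actual rendering path): the background frame is redrawn on every pass and the foreground frame is blended over it, so video behind the transparent window keeps moving.

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Illustrative sketch: blend the latest foreground video frame over the
// latest background frame at 50% opacity, once per rendered frame.
// Re-drawing the background each frame is what keeps it live, unlike a
// one-off static capture of the scene behind the window.
public class FrameCompositor {
    private static final float OPACITY = 0.5f;

    public static BufferedImage composite(BufferedImage background,
                                          BufferedImage foreground) {
        BufferedImage out = new BufferedImage(background.getWidth(),
                background.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(background, 0, 0, null);        // live background frame
        g.setComposite(AlphaComposite.getInstance(
                AlphaComposite.SRC_OVER, OPACITY)); // translucent overlay
        g.drawImage(foreground, 0, 0, null);        // live foreground frame
        g.dispose();
        return out;
    }
}
```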

During the course of the week, there was a small saga related to the ordering of some drawing pads, which were supposed to arrive by next-day delivery on the Monday.  When these did arrive, on the Thursday, Helen immediately devised a new idea involving overlaying the drawings done on the devices over the video.  As I now had the transparent video working, we quickly got this going by placing the transparent video window on top of the drawings.  Michelle also worked out how to give the drawings a black background with a light grey pen colour, so this looked good behind the video windows.

The second successful use of the software was then the drawing on the overlaid transparent video windows.  The dancers moved around the space, and when the drawing pad operator said “Stop”, they froze while the operator drew around their shapes (and sometimes the shape of their shadows).  This resulted in some quite complex images!

We also realised on the Thursday that we had still not got the audio working between the spaces.  As we did not have echo-cancelling equipment, our efforts using the audio in our software tended to result in loud noises being played into the spaces!  We decided that we should therefore use software with echo cancellation built in, such as Skype or MSN Messenger.  Skype turned out not to work too well, as it had not been tested on the Bedford campus.  Although this was resolved before the end of the week, we decided to stick with MSN Messenger as it seemed to work fine.  This meant that the audio was not recorded, but it didn’t echo!

Another limitation of the software was revealed when we were asked to play back some recorded media.  During the course of the week, we were joined by a video specialist who had recorded the performers using Google Maps (which looked very impressive) and also whilst out walking in a forest (which gave us some much-needed time to get the transparencies working, and to look at other software – see later).  We then had to ask her to write this material back to DV tape to be imported into our software, as we couldn’t yet import AVI or MPEG files.  We also realised that we would have to launch a separate instance of the software to play back each of the clips quickly; otherwise the audience would have to wait while we loaded each clip.  With the CPU load already high from receiving video, we had to add an option for these extra instances not to receive the transmitted videos, otherwise the computer would quickly become overloaded.  Anja added this option, and it worked well.

Finally, one important outcome from the intensive was us all finally working out how the interface might look from the video perspective.  We realised that we had over-complicated the interface in terms of representing video timelines, and that it might be best if the interface looked a little like Microsoft PowerPoint (or other presentation-editing software).  It would obviously have major differences, such as being able to edit what is seen on the live screen without exiting what would be Presentation Mode in PowerPoint.  We were also shown the user interfaces of other tools used during and before performances, such as Arkaos for “VJing” (like DJing but with video).  I have since developed a mockup (in PowerPoint, funnily enough) for this interface, which takes some features from each of these tools and adds some of our own.
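As a toy illustration of the ‘edit without leaving presentation mode’ idea (hypothetical class and window names, not the actual mockup), here is a small Swing sketch in which an editor window and a live output window share one model, so an edit takes effect on the live screen immediately:

```java
import javax.swing.*;
import java.awt.*;

// Illustrative sketch: the live output stays up while the operator edits.
// Selecting a scene in the editor updates the live view at once, with no
// separate "presentation mode" to enter or leave.
public class LiveSceneDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JLabel liveView = new JLabel("Scene 1", SwingConstants.CENTER);
            liveView.setFont(liveView.getFont().deriveFont(48f));

            JFrame live = new JFrame("Live output");
            live.add(liveView);
            live.setSize(640, 360);
            live.setVisible(true);

            JComboBox<String> scenes = new JComboBox<>(
                    new String[] {"Scene 1", "Scene 2", "Scene 3"});
            scenes.addActionListener(e ->
                    liveView.setText((String) scenes.getSelectedItem()));

            JFrame editor = new JFrame("Editor (edit while live)");
            editor.add(scenes, BorderLayout.NORTH);
            editor.setSize(300, 120);
            editor.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            editor.setVisible(true);
        });
    }
}
```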



Manchester practicum

22 02 2008

replay-miming.jpg

[Simon / Open U writes…] — We’ve just completed a 2-day practicum in Manchester, continuing our experiments to understand the potential of Memetic’s replayable Access Grid functionality for choreographic rehearsal and performance.

Two AG Nodes (i.e. rooms wired for sound and large-format video) were rearranged for dance, clearing the usual tables and chairs. With the Nodes connected, a dancer in each Node, and Helen as choreographic researcher co-located with each in turn, we could explore the impact of different window configurations on the dancers’ self-image and on their projected images in the other Node, as well as the mixing of recorded and live performance.

feb08-1.jpg

Helen will add some more on the choreographic research dimensions she was exploring. My interest was in trying to move towards articulating the “design space” we are constructing, so that we can have a clearer idea of how to position our work along different dimensions. (Reflecting on how our roles are playing out as we figure out how to work with each other is of course a central part of the project…)

feb08-2.jpg

To start with, we can identify a number of design dimensions:

  • synchronous — asynchronous
  • recorded — live (noting that ‘live’ is a problematic term now: Liveness: Performance in a Mediatized Culture by Philip Auslander)
  • virtual — physical
  • modality of annotation: spoken dialogue/written/mapped
  • AG as performance environment vs. as rehearsal documentation context
  • AG as performance environment enabling traditional co-present choreographic practices — or as a means of generating/enabling new choreographic practices
  • documenting process in the AG — ‘vs’ the non-AG communication ecology that emerges around the e-dance tools (what would a template for an edance-based project website look like, in order to support this more ‘invisible’ work?)
  • deictic annotation: gesture, sketching, highlighting windows
  • in-between-ness: emergent structures/patterns are what make a moment potentially interesting and worth annotating, e.g. the relationships between specific video windows
  • continuous — discontinuous space: moving beyond geometrical/Euclidean space
  • continuous — discontinuous time: moving beyond a single, linear time
  • framing: aesthetic decisions/generation of meaning – around the revealing of process. Framing as in ‘window’ versus visual arts sense

Bringing an HCI orientation to bear, an initial analysis of the user groups who could potentially use the e-Dance tools, and of the activities they might perform with them, yielded the following matrix:

edance-user-activity-matrix.jpg

We will not be working with all of these user communities of course, but the in-depth work we do can now be positioned in relation to the use cases we do not cover.

feb08-3.jpg