e-Dance@BL’s Growing Knowledge Exhibition

12 10 2010

Today sees the launch of the British Library’s major new exhibition on how digital tools are already transforming research: Growing Knowledge: The Evolution of Research (12 October 2010 – 16 July 2011). [Media coverage]

We’re delighted to say that the e-Dance Project was selected as one of the examples, showcasing how close collaboration between technology researchers (who originally developed the Access Grid video-conferencing/collaboration tools and Compendium hypermedia mapping in an e-Science context) enables arts and humanities researchers, in this case choreographic researchers/practitioners, to break new ground playing with time and space in their discipline.

Flickr set: e-Dance@BL photos

The e-Dance exhibit presents video material introducing the project, with examples of the technologies in action. Some of this is on the BLGK demos website, including an extract from an extended podcast playlist.

Browse the blog to learn more, and to download the Access Grid Scene Editor and Compendium e-Dance Edition.

This article sets out the academic rationale for e-Dance:

Bailey, H., Bachler, M., Buckingham Shum, S., Le Blanc, A., Popat, S., Rowley, A. and Turner, M. (2009). Dancing on the Grid: Using e-Science Tools to Extend Choreographic Research. Philosophical Transactions of the Royal Society A, 13 July 2009, Vol. 367, No. 1898, pp. 2793-2806. [PDF]



Compendium e-Dance edition released!

5 05 2010

Hurrah! All of the extensions that we added to Compendium during the e-Dance project have now been folded into the new Compendium 2.0 beta release, available free and open source via the Compendium Institute download page.

In addition to adding further controls over the aesthetic appearance of knowledge maps, and significant ‘under the bonnet’ work to make the software leaner and faster when working on collaborative projects over the internet, the key new feature is “Movie Maps”. From the release notes:

Movie Maps: You can bring videos directly into a new kind of Compendium view, called a “Movie Map.” With this you can add nodes and links on top of a movie, having these annotations appear and disappear wherever you want over the length of the video. You can even animate maps without a video, so that you can add movement to your maps.
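For the technically curious, the essence of a Movie Map annotation is a node that knows when it should appear and disappear against the movie’s timeline. The minimal Java sketch below is purely illustrative; the class and field names are mine, not Compendium’s:

```java
// Illustrative sketch only -- not the actual Compendium 2.0 source.
import java.util.ArrayList;
import java.util.List;

class TimedNode {
    final String label;
    final int x, y;            // position over the video frame
    final long showAtMillis;   // when the node appears on the timeline
    final long hideAtMillis;   // when it disappears

    TimedNode(String label, int x, int y, long showAt, long hideAt) {
        this.label = label;
        this.x = x;
        this.y = y;
        this.showAtMillis = showAt;
        this.hideAtMillis = hideAt;
    }

    boolean visibleAt(long playheadMillis) {
        return playheadMillis >= showAtMillis && playheadMillis < hideAtMillis;
    }
}

class MovieMap {
    private final List<TimedNode> nodes = new ArrayList<>();

    void add(TimedNode node) {
        nodes.add(node);
    }

    // Called on each playback tick to decide which annotations to paint.
    List<TimedNode> nodesVisibleAt(long playheadMillis) {
        List<TimedNode> visible = new ArrayList<>();
        for (TimedNode n : nodes) {
            if (n.visibleAt(playheadMillis)) {
                visible.add(n);
            }
        }
        return visible;
    }
}
```

The same structure supports the “animate maps without a video” case: the playhead simply advances against a clock rather than a movie.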

As part of creating a visual language tuned to choreographic research and practice, there is also an e-Dance stencil, i.e. a custom (and editable) palette of icons representing camera/stage setups and compositional structures. These can be dragged and dropped onto maps as part of the annotation process. Two examples are shown below:

The desire to create an aesthetically pleasing look and feel customised to the user community led us to break out some of the key graphical elements in Compendium into a set of Theme resources. These can now be downloaded and shared via the Theme web exchange. The Compendium-eDance theme is illustrated below:

It is very satisfying to see the work from e-Dance released to the world, and towards the end of the project we’ll be talking with the choreography researchers to see how they are starting to play with it.

This constitutes the project deliverable of an application, able to run on any laptop, that enables portable video annotation. Many thanks to Andrew Rowley in the Manchester team, whose world-class expertise in Java and video codecs made it possible for Michelle here at the Open U. to drop in code that handles many kinds of video format, and to choreography researchers Sita Popat (U. Leeds) and Helen Bailey (U. Bedfordshire), who gave us detailed design input and feedback as we worked through many iterations of the movie maps and e-Dance stencil. A great example of collaborative work.



Digital Interfaces in Dance Performance Environments

18 09 2008


We all managed to get together for one day during DRHA 2008, University of Cambridge, UK, 14-17 September (http://www.rsd.cam.ac.uk/drha08/). It was great to have the chance to catch up after a summer of really interesting activity on the project. Sita, Martin, Simon and I (together with Scott Palmer from the University of Leeds) were all participating in a panel that we had proposed for the conference. We presented three projects: Stereobodies/Choreographic Morphologies (Martin and I) http://kato.mvc.mcc.ac.uk/rss-wiki/SAGE/StereoBodies, Projecting Performance (Sita and Scott) http://www.leeds.ac.uk/paci/projectingperformance/home.html, and e-Dance (Simon and I). The focus of the panel that brought the three projects together was the role/function/construction of ‘interfaces’ in dance. The panel abstract summarised the discussion as follows:

“This panel will discuss visual communication interfaces in dance performance environments through the frames of three current/recent projects.  These projects use digital technologies to facilitate non-co-located performances, either between dancers at different sites or between on-stage dancers and off-stage operators.  The three projects offer intersecting yet distinct perspectives on this process.  The panel will question how such interfaces are experienced by the performers, how they can be approached as choreographic environments, and how they affect the process of viewing.  In an art form that prioritises physicality and embodied knowledge, how do dancers negotiate their performances with remote partners via digital interfaces?  How might such interfaces redefine choreographic understandings of embodied spatio-temporal relationships? How might spectators engage differently with performances that exist in their entirety only at the boundary of the digital interface?”

This set of presentations, together with All Hands last week, served as a really good opportunity for reflection at the halfway point in the project.  We also had the chance to have some good discussions about where we are heading.  The next ‘Project Sandpit’ will take place on 9-10 October in Manchester, which will give us the space to map out the next few months’ activity and chart the direction for developments with Compendium/Cohere.

 



August Research Intensive

25 08 2008

[Image: pict0008-smhb.jpg]

The fifth research intensive took place between 4 and 8 August in Bedford.  This week-long laboratory included the following project participants: Helen Bailey, Michelle Bachler, Anja Le Blanc and Andrew Rowley.  We were joined by video artist Catherine Watling and four dance artists: Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison.  There were several aims for this lab:

1. To explore, in a distributed performance context, the software developments that have been made to Memetic and the AG environment for the manipulation of video/audio streams.
2. To integrate other technologies/applications into this environment.
3. To initiate developments for Compendium.
4. To explore the compositional opportunities that the new developments offer.

In terms of set-up this has been the most ambitious research intensive of the project.  We were working with four dancers distributed across two networked theatre spaces.  For ease we worked in the theatre auditorium and theatre studio at the University, both of which are housed within the same building.  This provided the opportunity for emergency dashes between the two locations if the technology failed.  We worked with three cameras in each space for live material, plus pre-recorded material in both locations.  Video artist Catherine Watling worked with us during the week to generate pre-recorded, edited video material that could then be integrated into the live context.

1. Software developments in a distributed environment

The significant developments from my perspective were, firstly, the ability of the software to remember the layout of projected windows from one ‘scene’ to another.  This allowed for a much smoother working process and for the development of more complex compositional relationships between images.  The second significant development was the transparency of overlaid windows, which allows for the creation of composite live imagery across multiple locations.
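Conceptually, ‘remembering a layout’ amounts to keeping a per-scene record of each projected window’s geometry and opacity, which can be re-applied when the scene is recalled. Here is a minimal Java sketch of that idea; the names are hypothetical, not the actual Scene Editor classes:

```java
// Hypothetical sketch of per-scene window layout recall; the names
// are illustrative, not the Scene Editor's real API.
import java.awt.Rectangle;
import java.util.HashMap;
import java.util.Map;

class SceneLayout {
    // Geometry and opacity for one projected video window.
    static class WindowState {
        final Rectangle bounds;
        final float opacity; // 0.0 = fully transparent, 1.0 = opaque
        WindowState(Rectangle bounds, float opacity) {
            this.bounds = bounds;
            this.opacity = opacity;
        }
    }

    // Keyed by a video-stream identifier.
    private final Map<String, WindowState> windows = new HashMap<>();

    void remember(String streamId, Rectangle bounds, float opacity) {
        windows.put(streamId, new WindowState(bounds, opacity));
    }

    // Returns null if the scene has no stored state for this stream.
    WindowState recall(String streamId) {
        return windows.get(streamId);
    }
}
```

Moving from one scene to the next is then just a matter of looking up each stream’s stored state and re-applying it to the projected windows.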

What’s really interesting is that the software development is at the stage where we are just beginning to think about user interface design.  During the week we looked at various applications to assess the usefulness of their interfaces, or elements of their design, for e-Dance.  We focused on Adobe Premiere, Isadora, Arkaos, Wizywyg and PowerPoint.  These all provided useful insights into where we might take the design.

2. Integration of other technologies

We had limited opportunity to integrate other technology and applications during the week.  I think this is better left until the software under development is more robust and we have a clear understanding of its functionality.  We did, however, integrate the Acecad Digi Memo Pads into the live context as graphics tablets.  I was first introduced to these by those involved in the JISC-funded VRE2 VERA Project running concurrently with e-Dance.  This provided an interesting set of possibilities, both in terms of the operator interface and in the inclusion of the technology within the performance space, to be used directly by the performers.

3. Begin Compendium development

The OU software development contribution to the project began in earnest with this intensive.  Michelle was present throughout the week, which gave her the opportunity to really immerse herself in the environment and gain some first-hand experience of the choreographic process and the kinds of working practices that have been adopted by the project team so far.

Throughout the week Michelle created Compendium maps for each day’s activity.  It became clear that the current interface would militate against the choreographic process we are involved in, so having someone dedicated to the documentation of the process was very useful.  It also gave Michelle first-hand experience of the problems.  The choreographic process is studio-based; it is dialogic in terms of the construction of material between choreographer and dancers; it involves the articulation of ideas that are at the edges of verbal communication; and judgements are tacitly made and understood.  Michelle’s immediate response to this context was to begin to explore voice-to-text software as a means of mitigating some of these issues.

The maps that were generated during the week are really interesting in that they have already captured thinking, dialogue and decision-making within the creative process that would previously have been lost.  The maps also immediately raise questions about authorship and perspective.  The maps from the intensive had a single author; they were not collaboratively constructed, and they represent a single perspective on a collaborative process.  Over the next few months it will be interesting to explore the role/function of collaboration in mapping the process: we will need to consider whether we should aim for a poly-vocal single map that takes account of multiple perspectives, or an interconnected series of individually authored maps.

4. Compositional developments

Probably influenced by the recent trip to ISEA (the week before!), the creative or thematic focus for the laboratory was again concerned with spatio-temporal structure, but specifically with location.  I began with a series of key terms that clustered around this central idea: dislocate, relocate, situate, resituate, trace, map.  A series of generative tasks was developed that would result in either live/projected material or pre-recorded video material.  This material then formed the basis of the following sections of work-in-progress material:

Situate/Resituate Duet (Created in collaboration with and performed by River Carmalt and Amalia Garcia) (approx. 5 minutes)

[Image: katherinedrawing2-crophb.JPG]

The duet was constructed around the idea of ‘situating’ either part of the body or the whole body, then resituating that material either spatially, onto another body part, or onto the other dancer’s body.  We used an overhead camera to stream this material and then project it live, in real time and to scale, on the wall of the performance space.  This performed an ambiguous mapping of the space.

[Image: pict0015i-smhb.jpg]

The Situate/Resituate Duet developed through the integration of the Digi Memo Pads into the performance context.  James and Catherine were seated at a down-stage table where they could be seen drawing.  The drawing was projected as a semi-transparent window over the video stream.  This allowed them to directly ‘annotate’, or graphically interact with, the virtual performers.

[Image: katherinedrawing6-crophb.JPG]

Auto(bio/geo) graphy (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison.  Filmed and edited by Catherine Watling) (4 x approx. 2-minute video works)

In this task we integrated Google Maps into the performance context as both content and scenography.  We used a laptop connected to Google Maps on stage and simultaneously projected the website onto the back wall of the performance space.  The large projection of the satellite image of the earth resituated the performer into an extra-terrestrial position.

[Image: dscf6424_sm.jpg]

Each performer was asked to navigate Google Maps to track their own movements in the chronological order of their lives.  They could be as selective as they wished whilst maintaining chronological order.  This generated a narrativised mapping of space/time.  The task resulted in a series of edited two-minute films of each performer mapping their chronological movements around the globe.  Although we didn’t have the time within the week to use these films vis-a-vis live performance, we have some very clear ideas for their subsequent integration at the next intensive.

[Image: dscf6428.JPG]

Dislocate/Relocate: Composite Bodies (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison) (approx. 4-minutes)

This distributed quartet was constructed across two locations.  It explored the ideas of dislocation and relocation by fragmenting the four performers’ bodies and then reconstituting two composite virtual bodies from fragments of the four live performers.

[Image: wedspict0003_sm.JPG]

We began with the idea of attempting to ‘reconstruct’ a coherent singular body from the dislocated/relocated bodily fragments.  We then went on to explore the radical juxtaposition created by each fragmentary body part moving in isolation.

[Images: pict0006h-sm.jpg, pict0001h-smhb.jpg]

“From Here I Can See…” (A Distributed Monologue) (Created in collaboration with and performed by Catherine Bennett, River Carmalt, James Hewison) (approx. 5 minutes)

This distributed trio was initially focused on the construction of a list-based monologue in which the sentence “From here I can see…” was completed in list form, functioning through a series of associative relationships.

[Image: pict0023-smhb.jpg]

In one location Catherine delivered a verbal monologue while another dancer performed a micro-gestural solo with his feet in the same location.  A third, non-co-located dancer in the other space performed a floor-based solo.

Trace Duet (Created in collaboration with and performed by Catherine Bennett and James Hewison)

In this duet we focused on bringing together the new software capability to produce layered transparency of windows and the use of the Acecad Digi Memo Pad as a graphics tablet.  We also worked with a combination of handheld and fixed camera positions.

[Images: pict0002j-smhb.jpg, pict0004j-smhb.jpg, pict0013j-smhb.jpg]



Performance, programming and user interfaces: The Fifth eDance Intensive

11 08 2008

Last week, I attended the fifth eDance intensive in Bedford.  Upon arriving, we discovered that we were to put on a public performance on the Friday (unfortunately after I had to leave).  This increased the pressure somewhat, although our confidence in the recording software had recently improved after it had been used successfully to record the Research Methods Festival “What Is” sessions in Oxford over four days without incident.  To add further to this task, we were also to set up two separate spaces for this performance, with three cameras in each and audio conferencing between the spaces, and we hadn’t brought any echo-cancelling microphones and had only one powerful laptop!  We immediately got on with the task of setting up the two spaces.

One of the aspects of eDance that differs from Access Grid is the requirement for high-quality video at high frame rates.  Access Grid has historically used H.261 video, which is transmitted at a resolution of 352 x 288 and tends to reach about 20 frames per second.  For eDance (and the related CREW project), we decided to use miniDV camcorders connected via the FireWire interface.  This allows 25 fps video at 720 x 576, four times the resolution of regular Access Grid video.  Thankfully, AG is easily extensible, which allowed us to make this change without much effort.  The main issue, however, is the laptop processing load.  We initially thought that our tool was not very good in terms of CPU loading, and assumed that this was because we are using Java.  However, we tried Vic, the video tool that AG uses, and found that at the same resolution and frame rate it also tends to result in high CPU usage.  The main advantage that Vic has over our tool is that it allows you to control the bandwidth and frame rate of the transmitted video.  Fortunately, the piece that Helen was composing did not require us to use all of our video streams simultaneously, so we managed to get away without too much overloading (although the fans on the laptops were running full steam ahead)!
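A back-of-envelope calculation (my own arithmetic, not a project measurement) shows why the load jumps: counting raw pixel throughput, DV pushes roughly five times as many pixels per second as H.261.

```java
// Rough pixel-throughput comparison of H.261 vs DV video.
// Illustrative arithmetic only.
public class PixelRate {
    public static void main(String[] args) {
        long h261 = 352L * 288 * 20; //  ~2.0 million pixels/second
        long dv   = 720L * 576 * 25; // ~10.4 million pixels/second
        System.out.printf("H.261: %,d px/s%n", h261);
        System.out.printf("DV:    %,d px/s%n", dv);
        System.out.printf("Ratio: %.1fx%n", (double) dv / h261);
    }
}
```

And that is before codec work: every one of those pixels still has to be captured, encoded, transmitted, decoded and rendered.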

The first of the successful uses of the videos between the two spaces was a piece where the video of the top half of one person was displayed over the video of the bottom half of another.  The two dancers then had to coordinate their movements so that they appeared to be one person.  This produced some wonderful images, as well as some good comedy moments!

Our next issue came from a misunderstanding about what our software could do.  We had bragged about how we could now have transparent windows.  We had also tested whether the video underneath would show through the one on top (rather than drawing a pink area, as had happened before).  This led Helen to start planning a piece that could use this functionality (it looked like it could be really cool: overlaying people onto a model of a village)!  Unfortunately, the way the technology was working was to capture the scene behind the transparent video and then statically mix this in.  This meant that the image behind the transparent window could not move.  I set to fixing this problem straight away (I had already thought about how this might work), but by the time I had made it work, Helen had moved on to something else.
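For anyone curious about the fix, the essential change is to composite the semi-transparent video over whatever is underneath on every frame, rather than grabbing the background once. In Java2D terms it might look like the minimal sketch below (my own illustration, not the project’s actual rendering code):

```java
// Minimal Java2D sketch of live alpha compositing: the background is
// redrawn every frame, so video underneath the transparent window
// keeps moving. Illustrative only.
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class Compositor {
    // Call once per frame with the *current* background frame,
    // not a one-off static capture of the scene behind the window.
    static void composite(Graphics2D target,
                          BufferedImage backgroundFrame,
                          BufferedImage overlayFrame,
                          float alpha) {
        target.drawImage(backgroundFrame, 0, 0, null);
        target.setComposite(
                AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alpha));
        target.drawImage(overlayFrame, 0, 0, null);
        target.setComposite(AlphaComposite.SrcOver); // reset to the default
    }
}
```

The cost is an extra full-frame draw per video per frame, which is part of why the CPU load discussed above mattered so much.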

During the course of the week there was a small saga related to the ordering of some drawing pads, which were supposed to arrive by next-day delivery on the Monday.  When they did arrive, on the Thursday, Helen immediately devised a new idea involving overlaying the drawings done on the devices over the video.  As I now had the transparent video working, we immediately got this going by having the transparent video window on top of the drawings.  Michelle also worked out how to give the drawings a nice black background with a light grey pen colour, which looked good behind the video windows.

The second successful use of the software was drawing on the overlaid transparent video windows.  The dancers moved around the space, and when the drawing pad operator said “Stop”, they froze while the operator drew around their shapes (and sometimes the shape of their shadows).  This resulted in some quite complex images!

We also realised on the Thursday that we still had not got the audio working between the spaces.  As we did not have echo-cancelling equipment, our efforts using the audio in the software tended to result in loud noises being played into the spaces!  We decided that we should therefore use software with echo cancellation built in, such as Skype or MSN Messenger.  Skype turned out not to work too well, as it had not been tested on the Bedford campus.  Although this was resolved before the end of the week, we decided to stick with MSN Messenger as it seemed to work fine.  This meant that the audio was not recorded, but it also didn’t echo!

Another limitation of the software was revealed when we were asked to play back some recorded media.  During the course of the week we were joined by a video specialist who had recorded the performers using Google Maps (which looked very impressive) and also whilst out walking in a forest (which gave us some much-needed time to get the transparencies working and to look at other software; see below).  We then had to ask her to write this back to DV tape to be imported into our software, as we couldn’t yet import AVI or MPEG files.  We also realised that we would have to launch a separate instance of the software to play back each of the clips quickly, otherwise the audience would have to wait while we loaded each clip up.  With the CPU load already high from receiving video, we had to add an option for these extra instances not to receive the transmitted videos, otherwise the computer would quickly become overloaded.  Anja added this, and it worked well.

Finally, one important outcome of the intensive was all of us finally working out how the interface might look from the video perspective.  We realised that we had over-complicated the interface in terms of representing video timelines, and that it might be best if the interface looked a little like Microsoft PowerPoint (or other presentation-editing software).  There would obviously be major differences, such as being able to edit what is seen on the live screen without exiting what would be Presentation Mode in PowerPoint.  We were also shown the user interfaces of other tools used during and before performances, such as Arkaos for “VJing” (like DJing but with video).  I have since developed a mockup (in PowerPoint, funnily enough) for this interface, which takes some features from each of these tools and adds some of our own.



eDance Discussions in Manchester

1 02 2008

Today and yesterday Anja, Helen, Sita and I have been getting into the nitty-gritty of the eDance project software requirements in Manchester.  Helen and Sita arrived (after what sounded like a monumental train journey for Helen!) and we got straight into discussing their experience of using the mish-mash of software we have given them so far.  Of course, this software hadn’t been working 100% smoothly (as it was being used in a context it had not been conceived for, namely all running on one machine without a network).  However, they had managed to get some useful recordings, which they had sent to us, and we had already imported them onto our local recording server before they arrived.

We started by discussing what Helen and Sita found was missing from the existing recordings.  This included things like the fact that the windows all looked like windows (i.e. had hard frames), which made it hard to forget that there was a computer doing the work.  This expanded into further windowing requirements, like window transparency and windows with different shapes, which would allow freer layouts of the videos.  We quickly realised that this could also help in a meeting context, as it would help AG users forget that they are using computers and just get on with the communication.

We also discussed having a single window capable of displaying different videos; this could look better in a performance context, where you wouldn’t want to see the movement of the windows but would want to change between videos.  It was also desirable to split up the recorded video into separate streams that could be controlled independently.  This would allow different parts of different recordings to be integrated, and would also require the ability to jump between recordings, something that the current software does not allow.

We moved on to talk about drawing on videos.  This would allow a level of visual communication between the dancers and choreographers, which can be essential to the process; it was mentioned earlier that much of the communication is visual (e.g. “do this” rather than “move from position A to position B”).  Drawings on the videos would enable this type of communication, although for it to be effective the lines would need to be reproduced as they are drawn, rather than appearing all at once (i.e. the movement of the drawing, not just the result).  We realised that there was a need for tools for the lines, as you may want lines that stay for the duration of the video and lines that disappear after some predetermined interval (and of course a clear function to remove lines).
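To make that concrete, one way to represent such a line (a hypothetical sketch, not anything we have implemented) is as a time-stamped stroke: each pen sample records when it was drawn, so the movement of the drawing can be replayed, and an optional lifetime lets lines disappear after a predetermined interval.

```java
// Hypothetical sketch of a replayable, expiring drawn line; the names
// are illustrative, not real e-Dance classes.
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

class TimedStroke {
    // One pen sample: where the pen was, and when.
    static class Sample {
        final Point position;
        final long timeMillis;
        Sample(Point position, long timeMillis) {
            this.position = position;
            this.timeMillis = timeMillis;
        }
    }

    private final List<Sample> samples = new ArrayList<>();
    private final long lifetimeMillis; // <= 0 means the line never expires

    TimedStroke(long lifetimeMillis) {
        this.lifetimeMillis = lifetimeMillis;
    }

    void addSample(Point p, long nowMillis) {
        samples.add(new Sample(p, nowMillis));
    }

    // The portion of the stroke to paint at a given (replay) time:
    // only samples already drawn by then, and not yet expired.
    List<Sample> visibleAt(long nowMillis) {
        List<Sample> visible = new ArrayList<>();
        for (Sample s : samples) {
            boolean drawnYet = s.timeMillis <= nowMillis;
            boolean expired = lifetimeMillis > 0
                    && nowMillis - s.timeMillis > lifetimeMillis;
            if (drawnYet && !expired) {
                visible.add(s);
            }
        }
        return visible;
    }
}
```

A “clear” function is then just a matter of discarding strokes (or recording a clear event with its own timestamp, so that clears replay correctly too).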

We finally discussed how all of this would be recorded, so that it could be replayed either during a live event or during another recording, including the movement of windows and drawings on the screen.  We realised that this would need a user interface, and this is where we hit problems, as we found that it would be complicated to represent the flow through the videos.  We realised that this may be related to the work on Compendium, and this is where we left it, as Simon was not present to help out!



The sand starts to fly

28 09 2007

Day 2 of the Sandpit – Simon’s view:

The morning was spent in a dance studio with Helen and Sita (choreographers) and Amalia, one of Helen’s team, introducing the technologists (Mike, Andrew, Anja and me) to their world. We reconnected with our bodies as they walked us through movement exercises that helped us glimpse what it is to be a dancer, and how a visual language can be built up. We each worked on a short ‘performance’ (all 15 seconds of it) composed from movements based on our names, then combined them in pairs, culminating in an improvisational piece with all of us performing and trying to respond creatively to what was going on around us. Great fun! But more importantly, hugely complex, helping us appreciate how much is going on at any moment for a dancer.

[Image: edancing1.jpg]

We rounded off the morning by playing with a dance DVD which featured an interactive, annotated timeline for each dance clip. This helped the viewer navigate flexibly to the point of interest, just like the timelines we generate in the Memetic replay interface! The positive response that this DVD always gets from dancers is hopefully an encouraging sign of how Memetic will be received…

 

[Image: dance-dvd.jpg]

In the afternoon we moved into the new theatre, where we ‘learnt to be choreographers’, and Helen and Sita got their hands dirty using the software. Helen directed Amalia and Lisa, another dancer, through several sequences of a multimedia piece Helen has been choreographing. Our job was to get some insight into the process.

[Image: helen-coaching.jpg]

It was great to see some real dancing after our morning’s efforts!…

[Image: amalia-lisa.jpg]

Then Mike and Anja took over, inventing some new variations and experiencing what it is like to work with dancers to refine an idea.

Meanwhile, Andrew got Memetic running locally on Helen’s laptop, so she now has a portable ‘Access Grid recording studio’ she can start playing with, while I created a dance annotation stencil in Compendium. This enabled the real-time annotation of points to which the choreographer wants to return for reflection, e.g. when a particular dancer does something, a moment that works/doesn’t work, an intervention from the choreographer, or a compositional strategy.

Sita and Helen then experimented with this to annotate key moments in the rehearsal, which opened up some great discussions about future requirements to support flexible structures.

[Image: compendium-sandpit1.jpg]

If I’ve learnt anything today, it’s that emergent structures, themes and patterns are central to how choreographers work. Our e-Science knowledge cartography and video replay tools must support this.