e-Dance@BL’s Growing Knowledge Exhibition

12 10 2010

Today sees the launch of the British Library’s major new exhibition on how digital tools are already transforming the way we do research: Growing Knowledge: The Evolution of Research (12 October 2010 – 16 July 2011). [Media coverage]

We’re delighted to say that the e-Dance Project was selected as one of the examples, showcasing how close collaboration between technology researchers (who originally developed the Access Grid video-conferencing/collaboration tools and Compendium hypermedia mapping in an e-Science context) enables arts and humanities researchers, in this case choreographic researchers and practitioners, to break new ground playing with time and space in their discipline.

Flickr set: e-Dance@BL photos

The e-Dance exhibit presents video material introducing the project, with examples of the technologies in action. Some of this is on the BLGK demos website, as an extract from an extended podcast playlist.

Browse the blog to learn more, and to download the Access Grid Scene Editor and Compendium e-Dance Edition.

This article sets out the academic rationale for e-Dance:

Bailey, H., Bachler, M., Buckingham Shum, S., Le Blanc, A., Popat, S., Rowley, A. and Turner, M. (2009). Dancing on the Grid: Using e-Science Tools to Extend Choreographic Research. Philosophical Transactions of the Royal Society A, 13 July 2009, Vol. 367, No. 1898, pp. 2793-2806. [PDF]



Compendium e-Dance edition released!

5 05 2010

Hurrah! All of the extensions that we added to Compendium during the e-Dance project have now been folded into the new Compendium 2.0 beta release, available free and open source via the Compendium Institute download page.

In addition to adding further controls over the aesthetic appearance of knowledge maps, and significant ‘under the bonnet’ work to make the software leaner and faster when working on collaborative projects over the internet, the key new feature is “Movie Maps”. From the release notes:

Movie Maps: You can bring videos directly into a new kind of Compendium view, called a “Movie Map.” With this you can add nodes and links on top of a movie, having these annotations appear and disappear wherever you want over the length of the video. You can even animate maps without a video, so that you can add movement to your maps.
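To make the idea concrete, here is a minimal sketch, in Java, of the kind of data a Movie Map annotation involves: a canvas position plus a time window. The names and structure are hypothetical, for illustration only, not Compendium’s actual code:

    // Illustrative sketch only: a Movie Map node carries a canvas position
    // and a time window, and is drawn only while playback is inside it.
    class MovieMapNode {
        final String label;
        final int x, y;            // where the node sits over the video
        final long showMs, hideMs; // when it appears and disappears

        MovieMapNode(String label, int x, int y, long showMs, long hideMs) {
            this.label = label;
            this.x = x;
            this.y = y;
            this.showMs = showMs;
            this.hideMs = hideMs;
        }

        // Checked by the player on each repaint with the current time.
        boolean visibleAt(long playbackMs) {
            return playbackMs >= showMs && playbackMs < hideMs;
        }
    }

On this reading, an animated map without a video is simply the same clock driving the annotations with no movie underneath.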

As part of creating a visual language tuned to choreographic research and practice, there is also an e-Dance stencil, i.e. a custom (and editable) palette of icons representing camera/stage setups and compositional structures. These can be dragged and dropped onto maps as part of the annotation process. Two examples are shown below:

The desire to create an aesthetically pleasing look and feel customised to the user community led us to break out some of the key graphical elements in Compendium into a set of Theme resources. These can now be downloaded and shared via the Theme web exchange. The Compendium e-Dance theme is illustrated below:

It is very satisfying to see the work from e-Dance released to the world, and towards the end of the project we’ll be talking with the choreography researchers to see how they are starting to play with it.

This constitutes the project deliverable of an application, able to run on any laptop, to enable portable video annotation. Many thanks to Andrew Rowley in the Manchester team, whose world-class expertise in Java and video codecs made it possible for Michelle here at the Open U. to drop in code that handles many kinds of video format, and to choreography researchers Sita Popat (U. Leeds) and Helen Bailey (U. Bedfordshire), who gave us detailed design input and feedback as we worked through many iterations of the movie maps and e-Dance stencil. A great example of collaborative work.



Scene Editor on Google Code

16 10 2009

We finally made it ‘Open Source’!

The part of e-Dance which was developed to help create performances is now on Google Code:

http://code.google.com/p/scene-editor/

There is a binary package for people to try out, and developers are welcome to help complete the features and fix the bugs. User documentation is on the wiki. Comments welcome!



Choreographic video annotation

14 09 2009


This series of movies brings together choreography researcher Sita Popat and e-Science researcher Simon Buckingham Shum, who demonstrate and discuss the adaptation of one of the project’s e-Science tools for choreography: the Open University’s Compendium tool for mapping ideas and annotating media. Acknowledgements to Michelle Bachler (Open U.) and Andrew Rowley (U. Manchester) for expert software development, and webcast wizard Ben Hawkridge (Open U.) for helping us migrate the footage to the Web. High-resolution versions of the screen recordings are linked to the relevant tracks.

The video-enabled version of Compendium will be going into alpha release this month with invited testers, for full release within a couple of months.

The academic context for this work is set out in a recent article:

Bailey, H., Bachler, M., Buckingham Shum, S., Le Blanc, A., Popat, S., Rowley, A. and Turner, M. (2009). Dancing on the Grid: Using e-Science Tools to Extend Choreographic Research. Philosophical Transactions of the Royal Society A, 13 July 2009, Vol. 367, No. 1898, pp. 2793-2806. [PDF]


The movie summaries are listed below; high-resolution versions of the screen recordings are linked to the web versions for detailed viewing.

  1. Demo: Sita takes us through a demonstration which illustrates how Compendium can be used to annotate video footage as part of choreographic scholarship.
  2. Background: The e-Dance project conducted rapid application development through asynchronous collaboration between the partners, punctuated with intense day-long workshops. At these, the choreographer would demonstrate how she used (or wished she could use) the Compendium e-Science tool. The software developer would then code changes for feedback.
  3. Demo: Sita explains how and why she requested a feature to create Transition Points in video footage.
  4. Demo: Sita explains the value of placing images into key moments in a video, as supplementary material that she can use to support discourse in scholarship or teaching.
  5. Demo: Sita explains the value of being able to lay out an arbitrary number of videos on the canvas.
  6. Demo: Following the last clip, Sita discusses opportunistic and planned juxtaposition of video.
  7. Interview: Simon asks Sita to consider how hypermedia annotation tools such as this could scaffold students’ project work and reflection as they track and communicate their work.
  8. Demo: Sita works through an example of linking three interconnected video clips.
  9. Demo: Following the last example, Sita shows how annotations in one context can co-exist in multiple other projects.
  10. Interview: Sita and Simon discuss to what extent a hypermedia tool such as this might shape practice, and reflect on other aspects of the project.

View the movies




MIMA Workshop in New York

10 11 2008


I spent a week in New York (26–31 October) with the New York/Berlin-based dance technology company Troika Ranch (http://www.troikaranch.org/). I was participating in the MIMA: Moving Image Media Artists workshop, along with ten other international artists/academics. The workshop was funded by the New York State Council on the Arts and provided an intensive engagement with the Isadora software developed by company co-director Mark Coniglio. The workshop was held at the New York City arts and technology space 3LD Art & Technology Center (http://3leggeddog.org/mt/).


It was a really interesting venue, not least because of the sci-fi 2001/Star Wars interior! The workshop was excellent and pretty intensive in terms of the breadth of material covered.  The whole week was essentially spent getting to grips with the software. 


It was led by Mark Coniglio and Dawn Stoppiello, the co-directors of the company. It was great to have direct access to the developer of the software in order to understand something of the design imperatives that drove the development. I think that Isadora will offer a huge amount to our development in terms of how we manage and manipulate live and recorded streamed video. The issues seem to be the number of cameras that can be utilised at any given time (although Mark is apparently working on this for the piece they are currently developing) and networking capacity. Probably the best way to negotiate this would be to invite Mark to one of our intensives and focus on the possibilities for integrating the software.




Spatio-temporal video annotation in Compendium

7 11 2008

We (Simon & Michelle, Open U.) are now working on embedding video into Compendium, to enable not only the usual temporal annotation of video but also a spatial dimension, so that choreography researchers, practitioners and students can anchor annotations at specific locations, for specific time-windows.

Moreover, annotations in Compendium are not simply free-text ‘stickies’, but hypertext nodes, embedded in an arbitrary number of other views and conversations, possibly linked to whole new networks, possibly with their own annotated movies.
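To make the hypertext point concrete: a node is a first-class object that many views reference, rather than a mark owned by one canvas. A minimal sketch in Java (assumed structure for illustration, not Compendium’s actual schema):

    import java.awt.Point;
    import java.util.*;

    // Illustrative sketch only: one node object, many views referencing it.
    class Node {
        final String id;
        String text;   // editing this updates the node in every view

        Node(String id, String text) {
            this.id = id;
            this.text = text;
        }
    }

    class View {
        final String name;
        // Each view places the same shared node at its own coordinates.
        final Map<Node, Point> placements = new HashMap<>();

        View(String name) { this.name = name; }

        void add(Node node, int x, int y) {
            placements.put(node, new Point(x, y));
        }
    }

So an annotation made over a movie can also sit in a discussion map, and a change to its text shows up everywhere it appears.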

Here’s a first screenshot showing the node layer annotated over the video layer (from William Forsythe’s Improvisational Technologies DVD)…

This second example shows two videos running, connected by nodes…

This will be evolved into a more fully developed environment with appropriate support tools in the coming months, working closely with the choreographers as we release early and often for feedback.



Knowledge Cartography for Choreography?

15 09 2008

It’s always a nice feeling when a book you’ve been working on for months finally lands on your doormat, and so what better way to kick off the week than to proudly hold up Knowledge Cartography: Software Tools and Mapping Techniques!

Co-edited with KMi Research Fellow Ale Okada, and NESTA Fellow Tony Sherborne, Creative Director at the Centre for Science Education at Sheffield Hallam U.

We summarise the book’s orientation in the preface:

While “sense” can be expressed in many ways (non-verbally in gesture, facial expression and dance, and in prose, speech, statistics, film…), knowledge cartography as construed here places particular emphasis on digital representations of connected ideas, specifically designed to:

I. Clarify the intellectual moves and commitments at different levels.
(e.g. Which concepts are seen as more abstract? What relationships are legitimate? What are the key issues? What evidence is being appealed to?)

II. Incorporate further contributions from others, whether in agreement or not.
The map is not closed, but rather, has affordances designed to make it easy for others to extend and restructure it.

III. Provoke, mediate, capture and improve constructive discourse.

e-Dance’s mission is to explore how an approach such as this — which emphasises clarity and rigour of thought, making relationships explicit between ideas — meshes with the modes of cognition and creativity that we find in choreography. Tomorrow at DRHA’08 in Cambridge, we’re hosting a panel session reflecting on the changing possibilities of the digital interface that performers, choreographers and audience may now encounter with dance. Thinking specifically about mapping memetic, intellectual worlds, we might muse the following:

  • The concept of “memetic media” is quite an interesting one, pointing to the fact that e-Dance media fragments can have ideas behind them that are being crafted by the choreographer within a knowledge mapping tool.
  • In terms of the knowledge mapping interface, it’s possible that performers might not see this at all, if it is primarily a tool for the choreographer to reflect on discussions in rehearsal, to maintain her multimedia research archive, and to craft performance and/or research narratives (e.g. paths of ideas).
  • Another scenario is that the knowledge mapping interface is exposed to performers, as a useful visualization for discussion, e.g. of how ideas develop in the unfolding of a piece, or to show spatial configurations.
  • Yet another scenario is that the tools generate visualizations of memetic media fragments that are good enough to use as part of the performance, e.g. dancers trigger, and/or respond to, visualizations.
  • It is possible that visualizations of the connections between ideas, themes, moods and media fragments might literally provide an orienting map for audiences to follow the connections between activity in different locations and times.


August Research Intensive

25 08 2008


The fifth research intensive took place between 4 and 8 August in Bedford. This week-long laboratory included the following project participants: Helen Bailey, Michelle Bachler, Anja Le Blanc and Andrew Rowley. We were joined by video artist Catherine Watling and four dance artists: Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison. There were several aims for this lab:

  1. To explore, in a distributed performance context, the software developments that have been made to Memetic and the AG environment for the manipulation of video/audio streams.
  2. To integrate other technologies/applications into this environment.
  3. To initiate developments for Compendium.
  4. To explore the compositional opportunities that the new developments offer.

In terms of set-up this has been the most ambitious research intensive of the project. We were working with four dancers distributed across two networked theatre spaces. For ease we were working in the theatre auditorium and theatre studio at the University, both of which are housed within the same building. This provided the opportunity for emergency dashes between the two locations if the technology failed. We worked with three cameras in each space for live material, and with pre-recorded material in both locations. Video artist Catherine Watling worked with us during the week to generate pre-recorded, edited video material that could then be integrated into the live context.

1. Software developments in a distributed environment

The significant developments from my perspective here were, firstly, the ability of the software to remember the layout of projected windows from one ‘scene’ to another. This allowed for a much smoother working process and for the development of more complex compositional relationships between images. The second significant development was the transparency of overlaid windows, which allows for the creation of composite live imagery across multiple locations.
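Roughly speaking, a ‘scene’ here is a saved layout. A hedged sketch of the idea in Java (invented names, not the actual Scene Editor code):

    import java.util.*;

    // Illustrative sketch only: a scene remembers the geometry (and now
    // the transparency) of every projected window, so switching scenes
    // restores a whole layout at once.
    class WindowState {
        final String streamId;    // which video stream fills the window
        int x, y, width, height;  // remembered position and size
        float opacity;            // 1.0 = opaque; < 1.0 = composited

        WindowState(String streamId, int x, int y, int width, int height,
                    float opacity) {
            this.streamId = streamId;
            this.x = x;
            this.y = y;
            this.width = width;
            this.height = height;
            this.opacity = opacity;
        }
    }

    class Scene {
        final String name;
        final List<WindowState> windows = new ArrayList<>();

        Scene(String name) { this.name = name; }
    }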

What’s really interesting is that the software development is at the stage where we are just beginning to think about user interface design. During the week we looked at various applications to assess the usefulness of the interface, or elements of the design, for e-Dance. We focused on Adobe Premiere, Isadora, Arkaos, WYSIWYG and PowerPoint. These all provided useful insights into where we might take the design.

2. Integration of other technologies

We had limited opportunity to integrate other technology and applications during the week. I think this is better left until the software under development is more robust and we have a clear understanding of its functionality. We did, however, integrate the Acecad DigiMemo pads into the live context as graphics tablets. I was first introduced to these by those involved in the JISC-funded VRE2 VERA Project running concurrently with e-Dance. This provided an interesting set of possibilities, both in terms of operator interface and also the inclusion of the technology within the performance space to be used directly by the performers.

3. Begin Compendium development

The OU software development contribution to the project began in earnest with this intensive.  Michelle was present throughout the week, which gave her the opportunity to really immerse herself in the environment and gain some first-hand experience of the choreographic process and the kinds of working practices that have been adopted by the project team so far.

Throughout the week Michelle created Compendium maps for each day’s activity. It became clear that the interface would currently militate against the choreographic process we are involved in, so having someone dedicated to the documentation of the process was very useful. It also gave Michelle first-hand experience of the problems. The choreographic process is studio-based; it is dialogic in terms of the construction of material between choreographer and dancers; it involves the articulation of ideas that are at the edges of verbal communication; and judgements are tacitly made and understood. Michelle’s immediate response to this context was to begin to explore voice-to-text software as a means of mitigating some of these issues.

The maps that were generated during the week are really interesting in that they have already captured thinking, dialogue and decision-making within the creative process that would previously have been lost. The maps also immediately raise questions of authorship and perspective. The maps from the intensive had a single author; they were not collaboratively constructed; they represent a single perspective on a collaborative process. So over the next few months it will be interesting to explore the role/function of collaboration in terms of mapping the process: whether we should aim for a poly-vocal single map that takes account of multiple perspectives, or an interconnected series of individually authored maps, will need to be considered.

4. Compositional developments

Probably influenced by the recent trip to ISEA (the week before!), the creative or thematic focus for the laboratory was again concerned with spatio-temporal structure, but specifically location. I began with a series of key terms that clustered around this central idea: dislocate, relocate, situate, resituate, trace, map. A series of generative tasks were developed that would result in either live/projected material or pre-recorded video material. This material was then organised into, or rather formed the basis of, the following sections of work-in-progress material:

Situate/Resituate Duet (Created in collaboration with and performed by River Carmalt and Amalia Garcia) (approx. 5 minutes)


The duet was constructed around the idea of ‘situating’ either part of the body or the whole body, then resituating that material either spatially, or onto another body part, or onto the other dancer’s body. We used an overhead camera to stream this material and then project it live, in real time and to scale, on the wall of the performance space. This performed an ambiguous mapping of the space.


The Situate/Resituate Duet developed through the integration of the DigiMemo pads into the performance context. James and Catherine were seated at a downstage table where they could be seen drawing. The drawing was projected as a semi-transparent window over the video stream. This allowed them to directly ‘annotate’ or graphically interact with the virtual performers.


Auto(bio/geo) graphy (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison.  Filmed and edited by Catherine Watling) (4 x approx. 2-minute video works)

In this task we integrated Google Maps into the performance context as both content and scenography. We used a laptop running Google Maps on stage and simultaneously projected the website onto the back wall of the performance space. The large projection of the satellite image of the earth resituated the performer into an extra-terrestrial position.


Each performer was asked to navigate Google Maps to track their own movements in the chronological order of their lives. They could be as selective as they wished whilst maintaining chronological order. This generated a narrativised mapping of space/time. The task resulted in a series of edited 2-minute films of each performer mapping their chronological movements around the globe. Although we didn’t have the time within the week to utilise these films vis-à-vis live performance, we have some very clear ideas for their subsequent integration at the next intensive.


Dislocate/Relocate: Composite Bodies (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison) (approx. 4-minutes)

This distributed quartet was constructed across two locations. It explored the ideas of dislocation and relocation through a fragmentation of the four performers’ bodies, reconstituting two composite virtual bodies from fragments of the four live performers.


We began with the idea of attempting to ‘reconstruct’ a coherent singular body from the dislocated/relocated bodily fragments. Then we went on to explore the radical juxtaposition created by each fragmentary body part moving in isolation.


“From Here I Can See…” (A Distributed Monologue) (Created in collaboration with and performed by Catherine Bennett, River Carmalt, James Hewison) (approx. 5 minutes)

This distributed trio was initially focused on the construction of a list-based monologue, in which the sentence “From here I can see…” was completed in a list form that functioned through a series of associative relationships.


In one location, Catherine delivered a verbal monologue while another dancer performed a micro-gestural solo with his feet in the same location; a third, non-co-located dancer performed a floor-based solo in the other space.

Trace Duet (Created in collaboration with and performed by Catherine Bennett and James Hewison)

In this duet we focused on bringing together the new software capability to produce layered transparency of windows and the use of the Acecad DigiMemo pad as a graphics tablet. We also worked with a combination of handheld and fixed camera positions.




Performance, programming and user interfaces: The Fifth eDance Intensive

11 08 2008

Last week, I attended the fifth eDance intensive in Bedford. Upon arriving, we discovered that we were to put on a public performance on the Friday (unfortunately after I had to leave). This increased the pressure somewhat, although our confidence in the recording software had recently improved after it had been used to record the Research Methods Festival “What Is” sessions in Oxford over four days without incident. To add further to this task, we were also to set up two separate spaces for this performance, with three cameras in each and audio conferencing between the spaces, and we hadn’t brought any echo-cancelling microphones and only one powerful laptop! We immediately got on with the task of setting up the two spaces.

One of the aspects of eDance that differs from Access Grid is the requirement for high-quality video at high frame rates. Access Grid has historically used H.261 video, which is transmitted at a resolution of 352 x 288 and tends to reach about 20 frames per second. For eDance (and the related CREW project), we decided to use miniDV camcorders as the cameras, via the FireWire interface. This allows 25 fps video at 720 x 576, four times the resolution of regular Access Grid video. Thankfully, AG is easily extensible, which allowed us to make this change without much effort. The main issue, however, is the laptop processing load. We initially thought that our tool was not very good in terms of CPU loading, and assumed that this was because we are using Java. However, we tried Vic, the video tool that AG uses, and found that with the same resolution and frame rate it also tends to result in high CPU usage. The main advantage that Vic has over our tool is that it allows you to control the bandwidth and frame rate of the transmitted video. Fortunately, the piece that Helen was composing did not require us to use all of our video streams simultaneously, so we managed to get away without too much overloading (although the fans on the laptops were running full steam ahead)!
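For the curious, the ‘four times’ figure is simple pixel arithmetic, and the frame rates stretch the gap a little further:

    H.261 (CIF): 352 x 288 = 101,376 pixels per frame
    DV (PAL):    720 x 576 = 414,720 pixels per frame

    414,720 / 101,376 ≈ 4.09 times the pixels per frame
    (and at 25 fps vs ~20 fps, roughly 5 times the pixels per second)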

The first successful use of video between the two spaces was a piece in which the video of the top half of one person was displayed over the video of the bottom half of another. The two dancers then had to coordinate their movements so that they appeared to be one person. This produced some wonderful images, as well as some good comedy moments!

Our next issue came with a misunderstanding about what our software could do. We had bragged about how we could now have transparent windows. We had also tested to see if the video underneath would show through the one on top (rather than drawing a pink area as had happened before). This led Helen to start planning a piece that could use this functionality (it looked like it could be really cool – overlaying people onto a model of a village)! Unfortunately, the way the technology was working was to capture the scene behind the transparent video once and then statically mix this in. This meant that the image behind the transparent window could not move. I set to fixing this problem straight away (I had already thought about how this might work), but by the time I had made it work, Helen had moved on to something else.
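For the record, the fix amounts to re-compositing against the current background frame on every repaint, rather than against a one-off snapshot. A minimal Java2D-style sketch of the live version (illustrative only, not the actual eDance code):

    import java.awt.AlphaComposite;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Illustrative sketch only: blend the latest overlay frame over the
    // latest background frame, so the video underneath keeps moving.
    class TransparentOverlay {
        static BufferedImage compose(BufferedImage background,
                                     BufferedImage overlay, float alpha) {
            BufferedImage out = new BufferedImage(background.getWidth(),
                    background.getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D g = out.createGraphics();
            g.drawImage(background, 0, 0, null);       // moving video beneath
            g.setComposite(AlphaComposite.getInstance(
                    AlphaComposite.SRC_OVER, alpha));  // blend, don't replace
            g.drawImage(overlay, 0, 0, null);          // semi-transparent layer
            g.dispose();
            return out;
        }
    }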

During the course of the week, there was a small saga related to the ordering of some drawing pads, which were to arrive by next-day delivery on the Monday. When these did arrive, on the Thursday, Helen immediately devised a new idea involving overlaying the drawings done on the devices over the video. As I now had the transparent video working, we got this going straight away by having the transparent video window on top of the drawings. Michelle also worked out how to give the drawing surface a nice black background with a light grey pen colour, so this looked good behind the video windows.

The second successful use of the software was then the drawing on the overlaid transparent video windows. The dancers moved around the space, and when the drawing pad operator said “Stop”, they froze while the operator drew around their shapes (and sometimes the shape of their shadows). This resulted in some quite complex images!

We also realised on the Thursday that we still had not got the audio working between the spaces. As we did not have echo-cancelling equipment, our efforts using the audio in the software tended to result in loud noises being played into the spaces! We decided that we should therefore use some software with echo cancellation built in, such as Skype or MSN Messenger. Skype turned out not to work too well, as it had not been tested on the Bedford campus. Although this was resolved before the end of the week, we decided to stick with MSN Messenger as it seemed to work fine. This meant that the audio was not recorded, but it also meant that it didn’t echo!

Another limitation of the software was revealed when we were asked to play back some recorded media. During the course of the week, we were joined by a video specialist who had recorded the performers using Google Maps (which looked very impressive) and also whilst out walking in a forest (which gave us some much-needed time to get the transparencies working, and to look at other software – see later). We then had to ask her to write this back to DV tape to be imported into our software, as we couldn’t yet import AVI or MPEG files. We also realised that we would have to launch a separate instance of the software for each clip so that playback could start quickly; otherwise the audience would have to wait while we loaded each clip up. With the CPU load already high from receiving video, we had to add an option for these extra instances not to receive the transmitted videos, otherwise the computer would quickly become overloaded. Anja added this and it worked well.

Finally, one important outcome from the intensive was that we all worked out how the interface might look from the video perspective. We realised that we had over-complicated the interface in terms of representing video timelines, and that it might be best if the interface looked a little like Microsoft PowerPoint (or other presentation editing software). This would obviously have major differences, such as being able to edit what was seen on the live screen without exiting what would be Presentation Mode in PowerPoint. We were also shown the user interfaces of other tools used during and before performances, such as Arkaos for “VJing” (like DJing but with video). I have since developed a mockup (in PowerPoint, funnily enough) of this interface, which takes some features from each of these tools and adds some of our own.



eDance Discussions in Manchester

1 02 2008

Today and yesterday Anja, Helen, Sita and I have been getting into the nitty-gritty of the eDance project software requirements in Manchester. Helen and Sita arrived (after what sounded like a monumental train journey for Helen!) and we got straight into discussing their experience of using the mish-mash of software we have given them so far. Of course, this software hadn’t been working 100% smoothly (as it was being used in a context it had not been conceived for – namely, all running on one machine without a network). However, they had managed to get some useful recordings, which they had sent to us, and we had already imported these onto our local recording server before they arrived.

We started by discussing what Helen and Sita found was missing from the existing recordings. This included things like the fact that the windows all looked like windows (i.e. had hard frames), which made it hard to forget that there was a computer doing the work. This was expanded with further windowing requirements, like window transparency and windows with different shapes, which would allow freer layouts of the videos. We quickly realised that this could also help in a meeting context, as it would help AG users forget that they are using computers and just get on with the communication.

We also discussed having a single window capable of displaying different videos; this could look better in a performance context, where you wouldn’t want to see the movement of the windows but would want to change between videos. It was also desirable to split up the recorded video into separate streams that could be controlled independently. This would allow different parts of different recordings to be integrated. It would also require the ability to jump between recordings, something that the current software does not allow.
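As a rough sketch of what independently controllable streams imply (hypothetical structure, not the actual recording-server design), each stream carries its own playback cursor, so parts of different recordings can run side by side:

    import java.util.*;

    // Illustrative sketch only: per-stream playback state, decoupled from
    // the recording it came from, so streams can be mixed and matched.
    class StreamPlayer {
        final String recordingId;  // which recording the stream belongs to
        final String streamId;     // e.g. "camera-2"
        long positionMs;           // this stream's own playback position
        boolean playing;

        StreamPlayer(String recordingId, String streamId) {
            this.recordingId = recordingId;
            this.streamId = streamId;
        }

        void seek(long ms) { positionMs = ms; }  // jump within a recording
        void play()        { playing = true; }
        void pause()       { playing = false; }
    }

    // A composition mixes streams drawn from several recordings.
    class Composition {
        final List<StreamPlayer> streams = new ArrayList<>();
    }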

We moved on to talk about drawing on videos. This would allow a level of visual communication between the dancers and choreographers, which can be essential to the process; it was mentioned earlier that much of the communication is visual (e.g. “do this” rather than “move from position A to position B”). Drawings on the videos would enable this type of communication – although for effective communication, the lines would need to be reproduced as they are drawn, rather than just shown as finished lines (i.e. the movement of the drawing, not just the result). We realised that there was a need for tools for the lines, as you may want lines that stay for the duration of the video and lines that disappear after some predetermined interval (and of course a clear function to remove lines).
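A hedged sketch of what such strokes might look like as data (invented names for illustration): each stroke is a list of timestamped points, so replay can reproduce the movement of the drawing, with an optional lifetime after which it disappears:

    import java.util.*;

    // Illustrative sketch only: a timestamped stroke that can be replayed
    // as it was drawn, and optionally removed after a set interval.
    class Stroke {
        static class TimedPoint {
            final int x, y;
            final long offsetMs;  // ms since the stroke began

            TimedPoint(int x, int y, long offsetMs) {
                this.x = x;
                this.y = y;
                this.offsetMs = offsetMs;
            }
        }

        final List<TimedPoint> points = new ArrayList<>();
        final long lifetimeMs;  // -1 = stay for the duration of the video

        Stroke(long lifetimeMs) { this.lifetimeMs = lifetimeMs; }

        // Points visible at time t (ms since the stroke began): the drawing
        // movement replays point by point, then the whole line can expire.
        List<TimedPoint> visibleAt(long t) {
            if (lifetimeMs >= 0 && t > lifetimeMs) return new ArrayList<>();
            List<TimedPoint> out = new ArrayList<>();
            for (TimedPoint p : points)
                if (p.offsetMs <= t) out.add(p);
            return out;
        }
    }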

We finally discussed how all this would be recorded, so that it could be replayed either during a live event or during another recording, including the movement of windows and drawings on the screen. We realised that this would need a user interface. This is where we hit problems, as we found it would be complicated to represent the flow through the videos. We realised that this may be related to the work on Compendium, so we left this part there, as Simon was not present to help out!