August Research Intensive

25 08 2008

       pict0008-smhb.jpg

The fifth research intensive took place between 4 and 8 August in Bedford.  This week-long laboratory included the following project participants: Helen Bailey, Michelle Bachler, Anja Le Blanc and Andrew Rowley.  We were joined by video artist Catherine Watling and four dance artists: Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison.  There were several aims for this lab: 1. To explore, in a distributed performance context, the software developments that have been made to Memetic and the AG environment for the manipulation of video/audio streams. 2. To integrate other technologies/applications into this environment. 3. To initiate developments for Compendium. 4. To explore the compositional opportunities that the new developments offer.

In terms of set-up this has been the most ambitious research intensive of the project.  We were working with four dancers distributed across two networked theatre spaces.  For ease we worked in the theatre auditorium and theatre studio at the University, both of which are housed within the same building.  This provided the opportunity for emergency dashes between the two locations if the technology failed.  We worked with three cameras in each space for live material, and with pre-recorded material in both locations.  Video artist Catherine Watling worked with us during the week to generate pre-recorded, edited video material that could then be integrated into the live context.

1. Software developments in a distributed environment

The significant developments from my perspective were, firstly, the ability of the software to remember the layout of projected windows from one ‘scene’ to another.  This allowed for a much smoother working process and for the development of more complex compositional relationships between images.  The second significant development was the transparency of overlaid windows, which allows for the creation of composite live imagery across multiple locations.

What’s really interesting is that the software development is at the stage where we are just beginning to think about user interface design.  During the week we looked at various applications to assess the usefulness of the interface, or elements of the design, for e-Dance.  We focused on Adobe Premiere, Isadora, Arkaos, WYSIWYG and PowerPoint.  These all provided useful insights into where we might take the design.

2. Integration of other technologies

We had limited opportunity to integrate other technology and applications during the week.  I think this is better left until the software under development is more robust and we have a clear understanding of its functionality.  We did, however, integrate the Acecad Digi Memo Pads into the live context as graphics tablets.  I was first introduced to these by those involved in the JISC-funded VRE2 VERA Project running concurrently with e-Dance.  This provided an interesting set of possibilities, both in terms of operator interface and the inclusion of the technology within the performance space to be used directly by the performers.

3. Begin Compendium development

The OU software development contribution to the project began in earnest with this intensive.  Michelle was present throughout the week, which gave her the opportunity to really immerse herself in the environment and gain some first-hand experience of the choreographic process and the kinds of working practices that have been adopted by the project team so far.

Throughout the week Michelle created Compendium maps for each day’s activity.  It became clear that the interface would currently militate against the choreographic process we are involved in, so having someone dedicated to the documentation of the process was very useful.  It also gave Michelle first-hand experience of the problems.  The choreographic process is studio-based; it is dialogic in terms of the construction of material between choreographer and dancers; it involves the articulation of ideas that are at the edges of verbal communication; and judgements are made and understood tacitly.  Michelle’s immediate response to this context was to begin to explore voice-to-text software as a means of mitigating some of these issues.

The maps that were generated during the week are really interesting in that they have already captured thinking, dialogue and decision-making within the creative process that would previously have been lost.   The maps also immediately raise questions about authorship and perspective.  The maps from the intensive had a single author; they were not collaboratively constructed, so they represent a single perspective on a collaborative process.  Over the next few months it will be interesting to explore the role/function of collaboration in terms of mapping the process – whether we should aim for a poly-vocal single map that takes account of multiple perspectives, or an interconnected series of individually authored maps, will need to be considered.

4. Compositional developments

Probably influenced by the recent trip to ISEA (the week before!), the creative or thematic focus for the laboratory was again concerned with spatio-temporal structure, but specifically with location.  I began with a series of key terms that clustered around this central idea: dislocate, relocate, situate, resituate, trace, map.  A series of generative tasks were developed that would result in either live/projected material or pre-recorded video material.  This material then formed the basis of the following sections of work-in-progress material:

Situate/Resituate Duet (Created in collaboration with and performed by River Carmalt and Amalia Garcia) (approx. 5 minutes)

                                 katherinedrawing2-crophb.JPG                               

The duet was constructed around the idea of ‘situating’ either part of the body or the whole body, then resituating that material either spatially, onto another body part, or onto the other dancer’s body.  We used an overhead camera to stream this material and then project it live, in real time and to scale, on the wall of the performance space.  This performed an ambiguous mapping of the space.

                                  pict0015i-smhb.jpg                       

The Situate/Resituate Duet developed through the integration of the Digi Memo Pads into the performance context.  James and Catherine were seated at a down-stage table where they could be seen drawing.  The drawing was projected as a semi-transparent window over the video stream.  This allowed them to directly ‘annotate’ or graphically interact with the virtual performers. 

  katherinedrawing6-crophb.JPG  

Auto(bio/geo) graphy (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison.  Filmed and edited by Catherine Watling) (4 x approx. 2-minute video works)

In this task we integrated Google Maps into the performance context as both content and scenography.  We used a laptop connected to Google Maps on stage and simultaneously projected the website onto the back wall of the performance space.   The large projection of the satellite image of the earth resituated the performer into an extra-terrestrial position.

               dscf6424_sm.jpg

Each performer was asked to navigate Google Maps to track their own movements in the chronological order of their lives.  They could be as selective as they wished whilst maintaining chronological order.  This generated a narrativised mapping of space/time.  The task resulted in a series of edited two-minute films of each performer mapping their chronological movements around the globe.  Although we didn’t have the time within the week to use these films vis-à-vis live performance, we have some very clear ideas for their subsequent integration at the next intensive.

                dscf6428.JPG

Dislocate/Relocate: Composite Bodies (Created in collaboration with and performed by Catherine Bennett, River Carmalt, Amalia Garcia, James Hewison) (approx. 4 minutes)

This distributed quartet was constructed across two locations.  It explored the ideas of dislocation and relocation through a fragmentation of the four performers’ bodies, reconstituting two composite virtual bodies from fragments of the four live performers.

          wedspict0003_sm.JPG

We began with the idea of attempting to ‘reconstruct’ a coherent singular body from the dislocated/relocated bodily fragments.  Then we went on to explore the radical juxtaposition created by each fragmentary body part moving in isolation.

                 pict0006h-sm.jpg

                            pict0001h-smhb.jpg

“From Here I Can See…” (A Distributed Monologue) (Created in collaboration with and performed by Catherine Bennett, River Carmalt, James Hewison) (approx. 5 minutes)

This distributed trio was initially focused on the construction of a list-based monologue in which the sentence “From here I can see…” was completed in list form, functioning through a series of associative relationships.

pict0023-smhb.jpg

In one location Catherine delivered a verbal monologue, whilst another dancer performed a micro-gestural solo with his feet in the same location.  A third, non-co-located dancer in the other space performed a floor-based solo.

Trace Duet (Created in collaboration with and performed by Catherine Bennett and James Hewison)

In this duet we focused on bringing together the new software capability to produce layered transparency of windows and the use of the Acecad Digi Memo Pad as a graphics tablet.  We also worked with a combination of handheld and fixed camera positions.

pict0002j-smhb.jpg

pict0004j-smhb.jpg

pict0013j-smhb.jpg



Performance, programming and user interfaces: The Fifth eDance Intensive

11 08 2008

Last week, I attended the fifth eDance intensive in Bedford.  Upon arriving, we discovered that we were to put on a public performance on Friday (unfortunately after I had to leave).  This increased the pressure somewhat, although our confidence in the recording software had recently improved after it had been successfully used to record the Research Methods Festival “What Is” sessions in Oxford over four days without incident.  To add further to this task, we were also to set up two separate spaces for this performance, with three cameras in each and audio conferencing between the spaces, and we hadn’t brought any echo-cancelling microphones and only one powerful laptop!  We immediately got on with the task of setting up the two spaces.

One of the aspects of eDance that is different from Access Grid is the requirement for high-quality video at high frame rates.  Access Grid has historically used H.261 video, which is transmitted at a resolution of 352 x 288 and tends to reach about 20 frames per second.  For eDance (and the related CREW project), we decided to use miniDV camcorders connected via the firewire interface.  This allows 25 fps video at 720 x 576, four times the resolution of regular Access Grid video.  Thankfully, AG is easily extensible, which allowed us to make this change without much effort.  The main issue, however, is the laptop processing load.  We initially thought that our tool was not very good in terms of CPU loading, and assumed that this was because we are using Java.  However, we tried Vic, the video tool that AG uses, and found that with the same resolution and frame rate it also tends to result in high CPU usage.  The main advantage that Vic has over our tool is that it allows you to control the bandwidth and frame rate of the transmitted video.  Fortunately, the piece that Helen was composing did not require us to use all of our video streams simultaneously, so we managed to get away without too much overloading (although the fans on the laptops were running full steam ahead)!
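
(As a back-of-envelope sketch – not project code – the pixel counts alone account for most of the extra load: the DV frames carry roughly four times the pixels per frame, and just over five times the pixels per second once the higher frame rate is included.)

// Back-of-envelope comparison of the two formats, using the figures quoted above.
public class PixelRate {
    public static void main(String[] args) {
        long h261PerFrame = 352L * 288;          // H.261 resolution
        long dvPerFrame   = 720L * 576;          // DV resolution
        long h261PerSec   = h261PerFrame * 20;   // ~20 fps in practice
        long dvPerSec     = dvPerFrame * 25;     // 25 fps over firewire

        System.out.printf("Pixels per frame:  %.1fx%n", (double) dvPerFrame / h261PerFrame); // ~4.1x
        System.out.printf("Pixels per second: %.1fx%n", (double) dvPerSec / h261PerSec);     // ~5.1x
    }
}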

The first of the successful uses of the videos between the two spaces was a piece where the video from the top half of one person was displayed over the video of the bottom half of another.  The two dancers then had to coordinate their movements so that they appeared to be one person.  This produced some wonderful images, as well as some good comedy moments!

Our next issue came with a misunderstanding about what our software could do.  We had bragged about how we could now have transparent windows.  We had also tested to see if the video underneath would show through the one on top (rather than a pink area being drawn, as had happened before).  This led Helen to start planning a piece that could use this functionality (it looked like it could be really cool – overlaying people onto a model of a village)!  Unfortunately, the way the technology worked was to capture the scene behind the transparent video and then statically mix this in.  This meant that the image behind the transparent window could not move.  I set to fixing this problem straight away (I had already thought about how this might work), but by the time I had made it work, Helen had moved on to something else.
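
For anyone unfamiliar with the distinction, here is a minimal illustrative sketch in Java (not the actual eDance code) of the standard per-pixel alpha blend.  The ‘static’ behaviour described above amounts to blending every incoming frame against a single captured snapshot of the background; the fix is to blend against the current frame of the underlying stream on every repaint.

import java.awt.image.BufferedImage;

// Minimal sketch of per-pixel alpha blending: out = alpha * top + (1 - alpha) * bottom.
public class AlphaBlend {
    static BufferedImage blend(BufferedImage top, BufferedImage bottom, double alpha) {
        int w = Math.min(top.getWidth(), bottom.getWidth());
        int h = Math.min(top.getHeight(), bottom.getHeight());
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int t = top.getRGB(x, y);
                int b = bottom.getRGB(x, y);
                int r  = (int) (alpha * ((t >> 16) & 0xFF) + (1 - alpha) * ((b >> 16) & 0xFF));
                int g  = (int) (alpha * ((t >> 8) & 0xFF)  + (1 - alpha) * ((b >> 8) & 0xFF));
                int bl = (int) (alpha * (t & 0xFF)         + (1 - alpha) * (b & 0xFF));
                out.setRGB(x, y, (r << 16) | (g << 8) | bl);
            }
        }
        return out;
    }
    // Static mixing: call blend() once against a captured snapshot of the background.
    // Dynamic mixing: call blend() with the latest frame of the underlying stream on every repaint.
}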

During the course of the week, there was a small saga related to the ordering of some drawing pads, which were due to arrive by next-day delivery on the Monday.  When these did arrive on the Thursday, Helen immediately devised a new idea involving overlaying the drawings done on the devices over the video.  As I now had the transparent video working, we immediately got this going by having the transparent video window on top of the drawings.  Michelle also worked out how to give the drawing a nice black background with a light grey pen colour, so this looked good behind the video windows.

The second successful use of the software was the drawing on the overlaid transparent video windows.  The dancers moved around the space, and when the drawing pad operator said “Stop”, they froze while the operator drew around their shapes (and sometimes the shape of their shadows).  This resulted in some quite complex images!

We also realised on the Thursday that we had still not got the audio working between the spaces.  As we did not have echo-cancelling equipment, our efforts using the audio in the software tended to result in loud noises being played into the spaces!  We decided that we should therefore use some software with echo cancelling built in, such as Skype or MSN Messenger.  Skype turned out not to work too well, as it had not been tested on the Bedford campus.  Although this was resolved before the end of the week, we decided to stick with MSN Messenger as it seemed to work fine.  This meant that the audio was not recorded, but it also meant that it didn’t echo!

Another limitation of the software was revealed when we were asked to play back some recorded media.  During the course of the week, we were joined by a video specialist who had recorded the performers using Google Maps (which looked very impressive) and also whilst out walking in a forest (which gave us some much-needed time to get the transparencies working and to look at other software – see later).  We then had to ask her to write this back to DV tape to be imported into our software, as we couldn’t yet import AVI or MPEG files.  We also realised that we would have to launch a separate instance of the software to play back each of the clips quickly, otherwise the audience would have to wait while we loaded each clip up.  With the CPU load already high from receiving video, we had to add an option for these extra instances not to receive the transmitted videos, otherwise the computer would quickly become overloaded.  Anja added this and it worked well.

Finally, one important outcome from the intensive was us all finally working out how the interface might look from the video perspective.  We worked out that we had over-complicated the interface in terms of representing video timelines, and that it might be best if the interface looked a little like Microsoft PowerPoint (or other presentation-editing software).  This would obviously have major differences, such as being able to edit what was seen on the live screen without exiting what would be Presentation Mode in PowerPoint.  We were also shown the user interfaces of other tools used during and before performances, such as Arkaos for “VJing” (like DJing but with video).  I have since developed a mockup (in PowerPoint, funnily enough) of this interface, which takes some features from each of these tools and adds some of our own.



Presentation at the National e-Science Centre

9 05 2008

Simon and I were in Edinburgh on 6-7 May at the Arts & Humanities e-Science Projects meeting.  The event was co-hosted by AHeSSC (www.ahessc.ac.uk), the three Research Councils and NeSC (www.nesc.ac.uk).  The two-day event brought together representatives from the seven funded projects.  We gave a presentation on the first day; you can see the PowerPoint attached below.  We also did a poster session in the evening and had a chance to catch up in a less formal context with some of the other project teams.  The second day was led by Tobias, Stuart and Torsten from AHeSSC and provided a really useful context for discussion.  We also had a quite detailed tour of the www.arts-humanities.net site, which is really interesting.  Torsten is going to link the e-Dance website into it, which will hopefully generate more interest in and dialogue around the project.  Simon and I had a really good chat about the interdisciplinary concepts driving the project.  In the PowerPoint you’ll see some slides towards the end that begin to articulate some of the thinking.  Simon might add some further thoughts on that.

edinburgh.pdf



Summary of Research Intensive 7 – 11 April

15 04 2008

Participants: Helen Bailey, Catherine Bennett, James Hewison, Anja le Blanc, Mary McDerby, Sita Popat, Andrew Rowley, Martin Turner

 

The aim of this week-long intensive was to explore the use of AG as a performance environment and in particular in terms of the range of levels of engagement with space both compositionally and performatively.  This built on and consolidated the previous shorter workshops and provided the context for testing the software developments that have already been made.

 

I thought, for the purposes of the ISEA paper and as a means of framing the research, I’d think about the activities and outcomes in terms of different categories of, and engagements with, the concept of space –

Space 1: Compositional Space

In order to push further in terms of consolidating previous work, it seemed necessary to create a more considered choreographic fragment of material to form the basis of the experimentation. 

1. Tasks

So I began with a series of choreographic/improvisatory tasks that took various notions of space as a starting point –

  • Generative compositional tasks focused on proxemic relations i.e., the nexus of proximity and orientation within the duet form.
  • Fragmented approach to the body through an exploration of ‘fixing’ parts of the body in space as an anchor point, with the rest of the body moving relative to this fixed point.
  • Scale – the shift from whole body vocabulary to gestural material.  The use of focus as a means of framing these shifts.
  • Dialogic structure (movement between/across/and in relation to two embodied positions) emphasizing the communicative aspect of the material and the context.
  • Textual improvisations exploring different experiential perspectives of ‘space’- (1) journey to work from a first person perspective, (2) experiential commentary on the movement from first person perspective, (3) commentary on proxemic relationship to the other performer whilst performing the material, (4) verbal account of first person perspective of moving through the choreographed duet material whilst not performing.

rehearsal1.jpg

2. Choreographic Structure

 

From the tasks above, material was generated and then structured into four phases –

  • Phase 1: Duet material distributed spatially within the performance space in such a way as to provide the sense, from both the performer and spectator positions, of non-co-located but synchronous solos.
  • Phase 2: Solo (one half of the duet material) performed whilst non-co-located; synchronously, the other dancer performed text based on Textual Improvisation (4).
  • Phase 3: Duet material co-located, including synchronous verbal monologues that intercut descriptions of the journey to work with verbal commentary on the movement.
  • Phase 4: Duet material co-located.

 

 rehearsal3.jpg

Space 2: Physical Space

We used a series of white flats to construct a wall that created a ‘z’-shaped space as a performance environment.  This allowed for multiple projection surfaces and for the live performers to have a sense of being in either co-located or non-co-located relationships with one another – both in and out of visual contact with one another. The data projectors were both set up downstage and parallel with one another.  One was at middle level and the other was rigged at high level.  The parameters of the performance space were delineated by camera orientation and proximity.

Space 3: Camera Space

A clock-face system for documenting the camera positions was developed, similar to that used in NVC theory (developed by E.T. Hall to document the proxemic aspects of social interaction). (It was interesting because this was identified on the fly in a discussion I was having with Mary about metadata. I will provide a diagram of what I’m imagining later.) We used four cameras in live relay mode; however, the streams were later multiplied as they were being run through two networked computers, thereby providing eight separate windows, projected through two projectors for distribution in the physical space.

 

Generally, the cameras were positioned radially in terms of orientation in the horizontal plane, in order for the fields of vision to overlap, thus giving the opportunity for multiple synchronous images from different orientations.  Proximity varied radically, ranging from extreme close-up (XCU) to wide shot.  One camera was placed overhead, oriented on the vertical plane and in wide shot. In terms of level, two cameras were at middle level, one was low at floor level and one was high overhead.

 

perf11.jpg

Space 4: Video Space

This category is concerned with the spatial aspects and properties of the projected material only.  In terms of software developments this is where really significant progress has been made from a choreographic perspective.  We are now able to change the borders on the individual windows that present the streamed material.  The traditional Microsoft ‘Windows’ borders can now be removed, which has a radical aesthetic and semiotic impact on the material.  There is also now the capacity to assign different degrees of transparency to individual windows.  We didn’t really explore this new development – the really significant aspect of this will be the ability to layer multiple windows and play different streams simultaneously in order to create a sense of a ‘shared’ virtual space.  This will be ready to be explored at the next intensive.

 

We documented the position, size and foreground/background relationship of the windows and created a repeatable set of spatial structures in relationship to the live compositional structure of the four phases outlined above.  Again, a diagram of the spatial relationship of the windows in each phase will follow – I’m reliant on Mary’s metadata documentation for this.

In Phase 4 of the material we configured the windows to use ‘to scale’ projections of the dancers. In particular we used this with the overhead shot of the space.  This created an enhanced sense of the ‘liveness’ of the mediated material.  The whole concept of scale became significant in terms of its relationship to notions of distance and intimacy in the framing of the body.

 

The multi-perspectival projected images produced a fragmentation of the body within the individual frames, owing to the partial views generated from the selected camera positions.

 

Composite and multiple representations of the dancing bodies were generated through the interrelationship of the different configurations of windows being projected.  This articulated an ever-shifting proximal relationship, playing with distance and intimacy in a way that highlighted the live (virtual) presence within a mediated rather than live (actual) context.

 

The ‘disappearance’ of the virtual performers from the projected material was a really interesting moment – at one point in the performance the live performers accidentally ‘found’ a non-mediatised space within the performance space, so the live/actual material was the only performance visible and the windows remained empty.  This was interesting in that it marked an absence, which threw into question the ‘legitimacy’, in performative terms, of what they were doing.  Both literally and philosophically there was a spatial ‘reduction’ in the material.

 

perf5.jpg

Space 5: Performative Space

By this I’m referring to the generative capacity of the performers to construct space(s).  So these ideas/thoughts are not limited to a particular category of space, but are perhaps more concerned with defining the principles or concepts that are articulated through the conduit of  ‘the doing’ of performance and in the interrelatedness or in-between-ness of these spatial contexts –

  • Embodying space – being in the present
  • Marking absent space, the trace of disappeared presence, both in motional pathways  and in the verbal commentaries
  • Emphasis on liveness and ephemerality
  • Dialogic relationship – oscillation between the live and the mediated, the actual and the virtual

perf4.jpg



Manchester practicum

22 02 2008

replay-miming.jpg

[Simon / Open U writes…] — We’ve just completed a 2-day practicum in Manchester, continuing our experiments to understand the potential of Memetic’s replayable Access Grid functionality for choreographic rehearsal and performance.

Two AG Nodes (i.e. rooms wired for sound and large format video) were rearranged for dance, clearing the usual tables and chairs. With the Nodes connected, a dancer in each Node, and Helen as choreographic researcher co-located with each in turn, we could then explore the impact of different window configurations on the dancers’ self image, as well as their projected images to the other Node, and mixing recorded and live performance.

feb08-1.jpg

Helen will add some more on the choreographic research dimensions she was exploring. My interest was in trying to move towards articulating the “design space” we are constructing, so that we can have a clearer idea of how to position our work along different dimensions. (Reflecting on how our roles are playing out as we figure out how to work with each other is of course a central part of the project…)

feb08-2.jpg

To start with, we can identify a number of design dimensions:

  • synchronous — asynchronous
  • recorded — live (noting that ‘live’ is a problematic term now: Liveness: Performance in a Mediatized Culture by Philip Auslander)
  • virtual — physical
  • modality of annotation: spoken dialogue/written/mapped
  • AG as performance environment vs. as rehearsal documentation context
  • AG as performance environment enabling traditional co-present choreographic practices — or as a means of generating/enabling new choreographic practices
  • documenting process in the AG — ‘vs’ the non-AG communication ecology that emerges around the e-Dance tools (what would a template for an e-Dance-based project website look like, in order to support this more ‘invisible’ work?)
  • deictic annotation: gesture, sketching, highlighting windows
  • in-between-ness: emergent structures/patterns are what make a moment potentially interesting and worth annotating, e.g. the relationships between specific video windows
  • continuous — discontinuous space: moving beyond geometrical/Euclidean space
  • continuous — discontinuous time: moving beyond a single, linear time
  • framing: aesthetic decisions/generation of meaning – around the revealing of process. Framing as in ‘window’ versus visual arts sense

Bringing to bear an HCI orientation, an initial analysis of the user groups who could potentially use the e-Dance tools, and the activities they might perform with them, yielded the following matrix:

edance-user-activity-matrix.jpg

We will not be working with all of these user communities of course, but the in-depth work we do can now be positioned in relation to the use cases we do not cover.

feb08-3.jpg



eDance Discussions in Manchester

1 02 2008

Today and yesterday Anja, Helen, Sita and I have been getting into the nitty-gritty of the eDance project software requirements in Manchester.  Helen and Sita arrived (after what sounded like a monumental train journey for Helen!) and we got straight into discussing their experience of using the mish-mash of software we have given them so far!  Of course, this software hadn’t been working 100% smoothly (as it was being used in a context it had not been conceived for – namely all running on one machine without a network).  However, they had managed to get some useful recordings which they had sent to us, and we had already imported them onto our local recording server before they arrived.

We started by discussing what Helen and Sita found was missing from the existing recordings.  This included things like the fact that the windows all looked like windows (i.e. had hard frames) which made it hard to forget that there was a computer doing the work.  This was expanded with further windowing requirements, like window transparency and windows with different shapes, which would help allow more free layouts of the videos.  We quickly realised that this could also help in a meeting context, as it would help AG users forget that they are using computers and just get on with the communication. 

We also discussed having a single window capable of displaying different videos; this could make it look better in a performance context, where you wouldn’t want to see the movement of the windows, but would want to change between videos.  It was also desirable to split up the recorded video into separate streams that could be controlled independently.  This would allow different parts of different recordings to be integrated.  This would also require the ability to jump between recordings, something that the current software does not allow.

We moved on to talk about drawing on videos.  This would allow a level of visual communication between the dancers and choreographers, which can be essential to the process; it was mentioned earlier that much of the communication is visual (e.g. “do this” rather than “move from position A to position B”).  Drawings on the videos would enable this type of communication – although for effective communication, the lines would need to be reproduced as they are drawn, rather than just as finished lines (i.e. the movement of the drawing, not just the result).  We realised that there was a need for tools to manage the lines, as you may want lines that stay for the duration of the video and lines that disappear after some predetermined interval (and of course a clear function to remove lines).
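
As a purely hypothetical sketch of the sort of data structure this implies (the class and field names below are mine, not the project’s actual API), each stroke could record its points in drawing order – so the movement can be replayed, not just the result – together with a creation time and an optional lifetime:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of an annotation layer with persistent and fading strokes.
public class AnnotationLayer {

    static class Stroke {
        final List<int[]> points = new ArrayList<>(); // (x, y) pairs in drawing order, so the movement can be replayed
        final long createdAt;                         // when the stroke was started
        final long lifetimeMillis;                    // <= 0 means "stay for the duration of the video"

        Stroke(long createdAt, long lifetimeMillis) {
            this.createdAt = createdAt;
            this.lifetimeMillis = lifetimeMillis;
        }

        boolean expired(long now) {
            return lifetimeMillis > 0 && now - createdAt > lifetimeMillis;
        }
    }

    private final List<Stroke> strokes = new ArrayList<>();

    void add(Stroke stroke) { strokes.add(stroke); }

    // Drop strokes whose predetermined interval has elapsed.
    void prune(long now) {
        for (Iterator<Stroke> it = strokes.iterator(); it.hasNext();) {
            if (it.next().expired(now)) it.remove();
        }
    }

    // The "clear" function mentioned above.
    void clear() { strokes.clear(); }
}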

We finally discussed how all this would be recorded, so that it could be replayed either during a live event or during another recording, including the movement of windows and drawings on the screen.  We realised that this would need a user interface.  This is where we hit problems, as we found that it would be complicated to represent the flow through the videos.  We realised that this may be related to the work on Compendium – and this is where we left this part, as Simon was not present to help out!



Choreographic Workshop 2

1 02 2008

Choreographic Workshop 2 image 1
Sita and I have spent the last two days with Andrew and Anja in Manchester.  The aim of the workshop was to revisit the video stream material from the first workshop in January and begin to look at the possibilities that this material might provide compositionally.

In the first workshop, we were testing the system and generating material for the windows. We were exploring what the system offered creatively that standard video cameras don’t.
Choreographic workshop 2 image 4
 Yesterday we looked at Memetic as a playback system. We asked ourselves what Memetic and AG offer as a performance environment. How do we play with recorded material in this environment? How do we navigate and compose video stream material? What does the system permit/enable in terms of the relationship between live and streamed material? 
Choreographic workshop 2 image 3
Most of the first day was discursive and playing with the video streams.  Initially we watched the video, but gradually they became background information from which we extrapolated the kinds of developments that we need to move forward. We started to construct a ‘to do’ list for Anja and Andrew.
Choreographic workshop 2 image 2
We began with the question: what can we do with this environment now?  Picking up on the idea of the window/frame from the first workshop, we translated that into some practical development needs with Anja and Andrew.  For instance, we were particularly concerned with the properties of the windows, their animation, and their presentation in a performative context.

On the second day, we experimented with the potential for layering of material via multiple recordings.  We took some of the video streams that we recorded at the first choreographic workshop and played them back through Memetic, whilst simultaneously recording live new material improvised in response to it by Helen. This gave us two layers of material. It also provided some interesting ideas about the mediatisation of space, not only between the windows but also experientially by the live performer who is performing improvisationally to their body presented in a fragmentary manner through the live windows.  We then went through the cycle a third time, using the first two sets of recorded material and improvising a third gestural response in a new set of windows. Operator control of the streamed videos allowed a playful approach to highlight the differences between the pre-recorded and the live by jumping around through the video streams in performance.



e-Dance ‘self-finding’ process

1 02 2008

The great goal of the last two days was to finally find out what we are supposed to develop in the next year. Well – we did it. There is a list – even prioritised – so we could say we had a successful time, but besides that it was a lot of fun.

Helen and Sita came up yesterday and we reviewed some of the recordings of the last workshop in Bedford. There is lots of material. As someone who was not in Bedford at the time of the recording, I felt that I was not getting the full picture, but of course I was not supposed to – that is the artistic side of it.

Interestingly, we took the recorded material and Helen did some improvisations to it – and of course we recorded both the original material and the new video streams. To complicate things even more, the next stage is to replay the newly recorded material and add some more video feeds to it – Helen moves through time.

Placing video windows on the screen is an art form in its own right (something we will support during the project). Sita took up this important task and moved and re-sized them very professionally. At this stage of the project we just have some ‘old-fashioned’ photographs to show for it. I wonder how somebody else, taking our video streams, would reshape the space. The impression would be completely different.



Projecting Performance: Interrelationships between performance and technology, dancer and operator

19 01 2008

Views from an outsider:

“Date: Friday 18th January 2008, 3pm- 5pm Venue: stage@leeds, School of Performance & Cultural Industries, University of Leeds, LS2 9JT.

This event marks the culmination of 18 months of collaborative practice-led research between performance academics in the School of Performance & Cultural Industries, University of Leeds, and digital technologists from KMA Creative Technology Ltd. The afternoon will consist of performance-demonstrations, presentations and discussion of the work undertaken in this project. We would welcome your contribution to the evaluation of our research. Refreshments will be provided afterwards. “

LP1

Sita Popat and Scott Palmer, from Leeds, presented an engaging final part of the AHRC project, involving an open session with questions and comments.  Demos showed examples of the live and mixed projection (‘sprites’ in various forms) within the performance space – creating new positions for the operator to be within, and possibly on, the stage.  This blurred the boundary between operator and dancer, and the spaces they occupied.

From the demos, a couple of personal thoughts:

  • Sprites were all projected light sources and thus acted as controllable movable illumination points – used as a way to focus audience attention on the dancers.
  • Illusion of depth caused by double illusion, although an artifact sometimes helped – and colours on top of other colours also acted as composition elements.

and a few open questions:

  • Should the performer-operator be on the stage, or seen (illuminated) by the performer-dancer, or seen by the audience?
  • Where now is the choreographer or director? In fact also where now should the audience be – in which space?
  • What is foreground and what is background? – and can this change position?

There was a healthy discussion and ideas on the role of the performer-operator as opposed to the performer-dancer.

Further details at their official site: http://www.leeds.ac.uk/paci/projectingperformance/home.html

LP2



Choreographic Workshop 1 – Bedford

9 01 2008

eDance Jan08 Helen and Sita 5
eDance Jan08 Helen and Sita 4
eDance Jan08 Helen and Sita 3
eDance Jan08 Helen and Sita 2
eDance Jan08 Helen and Sita 1
eDance Jan08 Helen and Sita 6

Between 7th and 9th January 2008, Helen and Sita are working intensively in the dance studio in Bedford to explore the performative possibilities of Access Grid and to develop some initial material for review. On 7th Jan, Anja Le Blanc came down from Manchester to help us set up, bringing a self-contained kit developed by Andrew Rowley that simulates an Access Grid node. Multiple video streams will then be uploaded in Manchester for future reference. After half a day of trial and error, we finally got it all working!  Yesterday was very productive. We worked with three dancers: Nicola Drew, Amalia Garcia, and James Hewison.

We thought initially that we would primarily be concerned with creating mediated material for future use, but actually we found that because of the system’s interactive capacity we had to create material between the live and the mediated in order for the improvisations to have a logic that supported that capacity. All of our dancers were co-located at this point, but we were considering the ramifications of non-co-locatedness for future workshops.

To begin with, we were interested in the framing potential of the multiple cameras, and we played with perspective and scale through a series of improvisations using pre-existing movement material. Key ideas for this workshop are:

– Formal composition of the frame

– The interrelationships between multiple frames

– Social formulations of space

– Notions of presence and absence in performance.

In relation to formal composition, we are concerned with the individual ‘frame’ or ‘window’ as a self-contained video entity. Multiple frames or windows arranged together enable porous spaces that the dancers can move between literally (see images on this blog). Multiple frames provide further compositional complexity as you are composing between them (dancers can literally move across the windows), but you are also composing the arrangement of the windows and their motion as further choreographable components. Currently all of our dancers are co-located, allowing movement across all of the windows. This allows for located spatial continuity in the choreography. However, we will be exploring the implications of fracturing that continuity through non-co-locatedness whilst maintaining the ‘liveness’ of performance.

The concept of social space and social formulations of space was highlighted through an early improvisation. Two dancers worked solely through mediated images in a telematic sense, negotiating their activity visually, motionally, and verbally – verbal language added a further level of complexity, highlighting performer subjectivity. The ‘window’ took precedence over the frame, engaging with documentary/filmic aesthetic constructs and visual grammar (e.g. eye contact, head shots, talking to camera). This highlighted socio-political concerns around power, and the ideas of space and the visual working together in terms of the negotiation and distribution of power. Interestingly, Nicola’s role within the improvisation was to respond directly to a camera without having a window relay of either herself or Amalia. Her only feedback was auditory rather than visual. This lack of the visual sense had a direct impact on her power relation, leading to questioning of the hierarchical relationship between the visual and the embodied for the dancers. The visual overrode linguistic and motional communication in terms of the distribution of power. We discovered that whoever had visual feedback became dominant within this mediated environment. Very quickly this improvisation began to have its locus in ideas of presence and absence, in terms of the subject, meaning, and modes of communication. As the task developed, a further level of complexity was overlaid onto the improvisation: the premise of failure to communicate through any one of those channels, thus highlighting liveness.

We discovered that in the improvisations focusing on social space, the ‘windows’ have a Brechtian function that draws attention to liveness, as they reference the computer context and frame the space and its social interrelationships. This connects to concepts surrounding social networking and similar technologies that are becoming increasingly embedded in mainstream communications. It draws upon notions of social etiquette emerging in these online contexts and provides an interesting route for this research.

In the improvisations that were more concerned with formal composition of choreography and film, the windows became a real hindrance because we were working with a sense of continuity and we didn’t necessarily want to highlight the constructedness of the ‘frame’ in the philosophical/literal sense.  This highlighted for us the fundamental differences between the frame and the window – in the formal context we want to work with the ‘frame’, and in the social context we are happy to work with the ‘window’. (See project references for Bolter & Gromala’s ‘Windows and Mirrors’ and Friedberg’s ‘The Virtual Window’.)

Helen and Sita will be revisiting the documentation and video streams from this workshop in Manchester at the end of January 2008. We hope that this will provide a useful starting point for the software developers to begin to think about how the Access Grid can be designed to be more attuned to the requirements of performance and creative practice. This will have clear implications/applications beyond the arts for social practices in networked communications.