Abstract
Body Editing is a dual gesture- and EEG-based platform that transforms movement, gesture, and brain wave data into visual and audio feedback with which dancers engage improvisationally. Uniquely, the platform offers a creature that responds in emergent fashion to the dancer’s movement, allowing for improvisation. The emergent algorithm directing the creature’s response is informed by Karen Barad’s understanding that intra-action in emergent systems is a form of performativity. The wireless EEG monitor provides intuitive musical sounds corresponding to brain wave data that signal to the dancer moments when she is dancing in an unthought or apperceptive manner, in contrast to moments when she is thinking the interface and thus learning, but not improvising. Dancers describe this experience as performing duets with the emergent creature.
Keywords
- Gesture-based experience
- Dance improvisation
- EEG monitor
- Interactive media
- Human-computer interaction
- Emergent behaviour
- Biometric feedback
1 Introduction
Body Editing is a gesture and biofeedback installation that asks critical human-computer interaction questions regarding how we understand our movement and biodata when they are presented in various forms—both as 2D graphic output and as aesthetic (sound, music, visual, or other) feedback in a digital installation format. In this research, conducted at the Mobile Experience Lab at OCAD University (Toronto), we investigate user experiences with movement and biometric data feedback, asking at what stage and with what technological assistance users might engage in embodied interaction or unthought dance.
1.1 Body Editing Platform and Experiments
The platform uses a depth sensing 3D camera for gesture and movement capture, and a wireless EEG sensing headset to capture brain wave and heart rate data. Our agile software interface enables us to code aesthetic feedback including real-time generated graphics and music or sound effects that respond to biofeedback, gesture and movement in the installation space.
Our research experiments with the platform inquire into whether participants must “learn” the machine-feedback programming (how the algorithm translates their data into feedback) in order to engage in apperceptive or “unthought” movement with their own data feedback. This paper will examine our research experiments with improvisational dancers to: query the role of the graphic markings of bodies in space; make the complex algorithm transparent, to determine whether it helps dancers to learn the system or perhaps frustrates them; and, in either case, determine whether these learnings release dancers into apperceptive performance.
We experiment with relaying visual and audio feedback in frequencies that are similar symbolically, conceptually and metaphorically to users’ biodata. The study addresses whether certain biodata is assumed to be more readily identifiable in particular graphic or audio feedback forms. We query whether mimicking exact bio-frequencies in the audio or visual feedback helps dancers to move apperceptively or in “unthought” manners.
Finally, we will discuss our experiments delivering visual elements using procedural generation that produces animated mandalas that respond to dancers’ movement and biodata. We query whether dancers’ ongoing random interaction with the mandala creates an experience that is phenomenologically different from one where the programmer has written and can teach the algorithm to the dancers. In other words, if dancers are continually interacting with newly generated mandala patterns, does it qualitatively change the dance experience? In this case, does the dance become a duet rather than a dancerly response to data?
2 Background
2.1 Theory Meets Practice
Art practice provides the distinct opportunity to probe these interfaces through experiments that ask how participants perceive and mediate space, machinic logic, and aesthetic feedback in manners that are embodied, or perhaps intermittently embodied. Where theory can trace what appears to be experiences of presence, apperception, embodiment, or interaction, art practice invites us, as participants, to reflect by engaging in material experience and, importantly, to do so via creative and aesthetic experience. In our art based research creation practices, then, we specifically inquire into how art practice interacts with perception and apperception. Moreover, we ask how dancers’ interactions with gesture and movement data might differ from or intersect with interactions with their brain wave data.
2.2 HCI Concerns Regarding Embodiment, Perception and Apperception
Researchers are concerned that in our contemporary practices with biometric machines, we are losing an understanding of the complex network of mind-body relations. Researchers show that scientific research in neural networks, the human genome, and genetic sequencing digests the human mind and body into computational, biological entities (Galloway 2004; Kember and Zylinska 2012). This so-called reductionist view has far-reaching effects; Nikolas Rose laments that the “recoding of everyday affects and conducts in terms of their neurochemistry is only one element of a more widespread mutation in which we in the West… have come to understand our minds and selves in terms of our brains and bodies” (2003, 46). Specifically, researchers have argued that data visualisations often succumb to the Cartesian grid, missing other spatial actions beyond 3D environments. The invisibility of the algorithm results in common cultural practices of data fetishization, where we unduly trust that data as objective and coherently reference it as a discernible thing. For Chun (2009), the separation of algorithm from interface or software from hardware makes it a powerful metaphor for everything we believe is invisible and yet generates visible effects. This separation creates an understanding of code as a thing rather than a process, fetishizing and reducing the complexity of code processing, and generating the sense in participants that seeing (the output) is knowing (the complexity of the instructions).
Researchers have called for HCI research and art practices to open up algorithms and interfaces to transparency, encouraging users to understand the complex cognitive, perceptive and aesthetic interactions between humans and computers (Hayles 1999; Braidotti 2013; Suchman 1987). Suchman (1987) charges that interaction is not information exchange or operationalized intention; rather, it is the ongoing, contingent coproduction of diverse matter, things, output, and selves. Marc Andrejevic (2013) charges that we view physical data, such as that produced by sensors, as outside of symbolism or cognition – as ‘automatic, immediate, and unreflexive.’ To counter our disembodied and unproblematic readings of data, Chun (2009) suggests we should engage in play with code, thus cracking open subject-object relations. Recognizing that the digital environment frames the user’s affective experience, Munster (2006) calls for exploiting the exaggerated aesthetics of digital/material interfaces in sensual engagements; in so doing, we can, she argues, embody users in spaces of difference, enabling an acute querying of subjectivity and the body. Similarly, Suchman (1987) calls on us to engage the creative and aesthetically performative human.
Our premise, then, is that as makers, we need to craft human machine interactions, meaning both machining and aesthetic practices, allowing for transparent, process-based, and multisensorial experiences, in order to more fully engage participants in interactivity, immersion and embodiment. Our team’s research interventions assume that how users perceive and interpret algorithms and biodata visuals is linked to their ability to maintain agency and critical engagement with machines. That is, participants trust the data visualization/output produced by the algorithm only in so far as it resonates with their embodied experience. As such, this project undertakes experiments that engage the problem of interactivity and query the nature of how participants sense very different types of data – gesture and movement data, as well as brain wave data.
3 Background: Art Experiments
3.1 Biofeedback Interactions
Artists have engaged in movement and meditation experiments to query how participants engage via aesthetics to understand spatialisation, and how mindfulness, or control of brain wave function, might create engaging aesthetic experiences.
Australian artist George Khut has experimented in biofeedback and embodiment, for example, in Res’onance-Body [box] (2003). In these experiences, participants sat in a dark space, in reclined chairs facing a projection screen with circular pulsating visualizations of their breath and heart rhythm biofeedback. Khut’s research aims to understand how arts practice can represent subjectivity as a physiologically embodied phenomenon (2006). Notably, Khut’s team concludes that elements of the research apparatus (e.g. lighting) caused unstable periods of “mindfulness” that signaled failed embodiment.
David Rokeby’s Very Nervous System (1986–1990) is an interactive sound installation that responds to participants’ movements with responsive music. The computer observes, through a video camera, the physical gestures of bodies, and responds with improvised music to the action. Rokeby here is trying to counter the computer’s “tiny playing field of integrated circuits” that disembodies, by making the human experience take place in human-scaled physical space. Rokeby notes that the interface is invisible and diffuse; the interaction is unclear at first but becomes clearer as the user engages. He describes the interaction as a feedback loop where human and computer elements change in response to each other—the two “interpenetrate until the notion of control is lost and the relationship becomes encounter and involvement” (Rokeby 2016, p. 1). After an hour in the responsive system, Rokeby described feeling strongly connected to his surrounding environment; he notes that continued use of the interface can become a type of ‘belief system’—that the interfaces we use leave imprints on us, and the longer we use them the stronger the imprint (2016).
Char Davies’s performance work Osmose (1995) engages with users’ biometric (heart rhythm) data to explore their spatial natures, with careful attention to the role of aesthetic experience. Uniquely, Davies is an artist, programmer, and a reader of quantum theory. Davies’s work is thus deeply informed by the theoretical problematics of rendering virtual and digital space into a Cartesian grid that is often anything but immersive. Davies finds that the realist, visual aesthetic common to Virtual Reality and computer graphics recreates a false (Cartesian) dichotomy of subject/object. Her success in thwarting that separation comes from her unique, informed, critical attention to the sensual and aesthetic experience of the time/space of the digital. Davies’s work queries the human-object and human-human interface with technology by probing our relationships to our own data – of motion, and biodata. She presses questions regarding the phenomenological experience of this interface, asking how much the interface can meld with experience so that the technology becomes one with and alongside our own well known and deeply felt experiences of breath and of movement.
The work of Khut addresses users’ sense of spatialization made possible through mindfulness or biometric experiences that are aestheticized with attention to the experience, but not necessarily to the role of the aesthetic. As well, we understand mindfulness differently, as an incoherent, unstable experience wherein one migrates in and out of “mindfulness”; this is evidenced in our own research experiences capturing “mindfulness” periods using brain wave monitors (Gardner and Wray 2013). Rokeby’s feedback loop experience notes a decidedly “apperceptive” type of experience; we are interested in the types of engagement that become possible both in “responsive” environments and in other types of environments that are less responsive and more random. Our experiments, as we shall present, query whether, instead of Rokeby’s “belief system” formed in response to anticipating the feedback, one can engage in apperception in response to an inability to anticipate the rules structure. Davies’ work inspires our project in its probing of both the scales of human to human and human to computer interactions, querying the role of the interfaces in the experience—in our experiment, that of apperception.
3.2 Apperception, Embodied Theory and Improvisational Dance
Apperception, according to Immanuel Kant, is where the world and experience come together—it is internal experience uniting with conscious experience (1781/1996). Gilles Deleuze (1968) provided a more digestible understanding of apperception as the “unthought,” or what is yet to be thought. For Deleuze, the unthought is automatic and autonomous – it is not a response to a thing, as in representation, but rather it is a process – a becoming. It is not a performance; rather, “something in the world forces us to think” (Deleuze 1968, p. 139). Digital theorists have, as noted above, suggested that embodiment is an undertheorized experience of the digital interface. We suggest that the embodied and apperceptive nature of dance improvisation presents an opportunity to query potentials for apperceptive interaction in human-machine interfaces. Notably, dancers employ techniques that heighten or challenge possibilities to engage in improvisational movement.
One standard technique of improvisation is “Yes, and…,” where dancers or performers accept and work with whatever aesthetic, movement or intervention is offered to them by another performer. A second technique can be described as “mini challenges,” where, while the improvisational dancer works in an interface, she attempts to engage a certain action in response to a certain probe (e.g. a particular dancerly response, such as “juicy,” when a particular sound or musical event occurs). In this way, the dancer adds additional aesthetic challenges to the already unpredictable nature of improvisational dance, heightening the tension between incoming stimuli and dancerly response, or instigating the mind/body to respond in an “unthought” dancerly manner. Our challenge was to create a human-computer interface that manifests these potentials for improvisational or unthought movement between the dancer and her data feedback.
4 Prototypes Toward Apperceptive and Embodied Movement
4.1 Introduction to Prototypes
To meet this challenge, we sought to engage with both the concept of interface transparency and its lack, as well as multisensorial feedback opportunities. We developed multiple platform prototypes that sought to, iteratively: (a) engage users in apperceptive response to gesture- and movement-based data in visual form; (b) engage users in apperceptive response to brain wave-based data in audio form; and (c) analyse the relationship between these two coinciding experiences of responding to gesture and brain wave data.
4.2 Experiments with Responsive Visuals
The Body Editing platform seeks to create a responsive audiovisual environment, which reacts to the motions and mental activity of the dancer. The responsive visuals take the form of a procedurally generated ‘mandala’ created out of many overlapping components with different kinds of symmetry.
The mandala mimics a life form in some ways. Each mandala has a ‘genotype’ of random numbers, which is then ‘transcribed’ into the visual representation (the ‘phenotype’). In this way, it contains both elements of randomness and curation. The mandala is a fractal image; first, a set of small symbols is randomly generated, along with a palette of colors. Then those symbols are combined into radially symmetric rings and layered. Slight random animations are also created to give the mandala some life and organic motion. (See Fig. 1) Each time a user is detected by the Kinect, a new set of linked animations is created for the mandala. Sixteen to thirty-two links are created, with each link taking one property from the Kinect skeleton tracking, for example the position of the left hand, and attaching that to a property of the mandala.
Parameters tracked include position, distance, depth, normalized distance, and angle of hands, head, torso, and knees. These can be linked to the offset, rotation, position, transparency, speed of animation, or scale of elements in the mandala. This creates a wide range of possible ways for the dancer to affect the movement of the mandala. Some random aspects of animation will remain, even as linked animations are added, giving the dancer something to respond to in the mandala even when they are totally still.
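As an illustrative sketch of this genotype-and-linking scheme (written in Python rather than the platform’s actual Processing code, with all names and ranges our own assumptions), the mandala’s ‘genotype’ can be modeled as a list of random numbers, and each session can create sixteen to thirty-two random links, each tying one tracked skeleton property to one mandala property:

```python
import random

# Hypothetical property names; the actual tracked parameters and mandala
# elements are those described in the text above.
SKELETON_PROPS = [
    "left_hand.position", "right_hand.angle", "head.depth",
    "torso.normalized_distance", "left_knee.distance", "right_hand.position",
]
MANDALA_PROPS = [
    "ring[0].rotation", "ring[1].offset", "ring[2].scale",
    "ring[0].transparency", "ring[1].anim_speed", "ring[2].position",
]

def make_genotype(length=64):
    """The 'genotype' is simply a list of random numbers, later
    'transcribed' by the renderer into symbols, palettes, and radially
    symmetric rings (the 'phenotype')."""
    return [random.random() for _ in range(length)]

def make_links():
    """Create 16-32 links, each taking one property from skeleton
    tracking and attaching it to one property of the mandala."""
    n = random.randint(16, 32)
    return [(random.choice(SKELETON_PROPS), random.choice(MANDALA_PROPS))
            for _ in range(n)]

genotype = make_genotype()
for skeleton_prop, mandala_prop in make_links()[:5]:
    print(f"{skeleton_prop} -> {mandala_prop}")
```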
During the Body Editing research process, the team has moved from the Kinect V1 to the Kinect V2. In our research we confronted various limitations with the Kinect V1; for example, dancers disappeared entirely from the system if limbs went outside the sensor area, the sensor did not recognize participants sitting in wheelchairs, and it lacked accurate detection of the dancer’s position in the performance space. The Kinect V2, which we programmed using Processing, has solved these problems. If a participant’s body is slightly out of frame, the Kinect V2 still recognizes the parts that are within and compensates for those that are outside. Most importantly, the Kinect V2 can pick up on subtle movements such as hand gestures, and tracks more joints and body points for increased accuracy.
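A minimal sketch of the out-of-frame handling described above, assuming a generic per-joint tracking flag (the frame format and names are ours, not the Kinect SDK’s): when a joint leaves the sensor area, the system can fall back to the joint’s last confidently tracked position rather than dropping the whole skeleton, as the Kinect V1 did:

```python
# last confidently tracked position per joint name -> (x, y, z)
last_good = {}

def fix_frame(frame):
    """frame: dict of joint name -> (x, y, z, tracked: bool)."""
    fixed = {}
    for joint, (x, y, z, tracked) in frame.items():
        if tracked:
            last_good[joint] = (x, y, z)
            fixed[joint] = (x, y, z)
        else:
            # Joint outside the sensor area: reuse its last known
            # position instead of losing the dancer entirely.
            fixed[joint] = last_good.get(joint, (0.0, 0.0, 0.0))
    return fixed

frame = {"left_hand": (0.3, 1.2, 2.0, True),
         "right_hand": (0.0, 0.0, 0.0, False)}  # right hand out of frame
print(fix_frame(frame))
```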
The Body Editing platform uses a web server running the Meteor framework to synchronize all of the different components. Messages are sent through OSC (open sound control, a network protocol) to and from the web server, allowing the system to synchronize events between audio and visuals and to adapt to input from the Kinect and Muse headbands. One computer is used to process the data from the Kinect, a second is used to generate the audio component and read the EEG data, and a third is dedicated solely to running the visual elements. This modular architecture allows for a wide range of possible setups and can accommodate many different kinds of input or applications. (See Fig. 2).
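For illustration only, the OSC traffic between machines might look like the following sketch, written in Python with the python-osc package (the actual components are written in Processing, Max/MSP, and Meteor; the addresses, ports, and message paths here are assumptions):

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical address of the Meteor synchronization server.
server = SimpleUDPClient("192.168.0.10", 9000)

def on_kinect_frame(left_hand_y, torso_depth):
    # The Kinect machine forwards tracked skeleton properties...
    server.send_message("/kinect/left_hand/y", left_hand_y)
    server.send_message("/kinect/torso/depth", torso_depth)

def on_muse_sample(alpha_amp, beta_amp):
    # ...and the audio machine forwards EEG band amplitudes.
    server.send_message("/muse/alpha", alpha_amp)
    server.send_message("/muse/beta", beta_amp)

on_kinect_frame(0.42, 1.8)
on_muse_sample(0.63, 0.21)
```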
As our series of prototypes developed, we moved away from mapping the input from the Kinect in one-to-one or easily predictable ways. We found that even when the associations between the dancers’ actions and the system’s response were complicated and not easily expressible, dancers still felt that they were generating the interaction, and they remained unable to intuitively sense an unthought response to the feedback. Ultimately, our final prototype produced for the dancer the sense of a ‘duet’ in which the system both produces content for the dancer to react to and responds gracefully to their actions; this is examined below.
In addition to responding within the structure of the dance itself, the genomic structure of the algorithm permits an element of intuitive composition or choreography as well. Before beginning a session, the dancer can choose between many randomly constructed mandalas and animations, even mutating and evolving patterns that they like, to collaboratively guide the system toward an audio-visual response that the dancer feels is appropriate to their movement. In future prototypes we hope to further explore the possibilities of building an entire dynamic backdrop for a longer-form production in which the visuals can change and shift as the show moves between sections, blending pre-planned choreography and responsive interaction.
4.3 Experiments in Sound Responses to Data
A second piece of the experiment is to determine how audio feedback in response to brainwave monitors might bring users into apperceptive experiences of dance. Might users, when they are able to sense their brain function (e.g. thinking hard about how the interaction is programmed, thinking in a relaxed and meditative fashion, or other types of thought), be brought more readily into an apperceptive state? How does this experience intersect with the experience of visual feedback in response to participants’ gestures and movements in the space? How does audio feedback relate to spatial and movement feedback?
Our information aesthetics experiments query whether dancers respond to particular tones or tensions in music to help them understand different frequencies of brain waves, with attention to Alpha and Beta waves. We tested charts of colours and sounds that aligned conceptually with the Hz frequencies of these brain waves, to ask whether conceptual affinities helped dancers to either comprehend or intuit (and apperceptively move to) this data feedback.
The data sonification process uses several stages of hardware and software to translate the users’ movement and brainwaves into sound and music. The audio is generated using the visual coding environment Max/MSP 7, which translates numerical data from sensors into musical notation. This musical notation, communicated digitally as MIDI, is turned into music and/or abstract sound by the DAW (Digital Audio Workstation) Ableton Live 8. Within Ableton, notes generated by the users’ data are processed into several categories based on the type of brain wave and movement data received from the Muse EEG headset worn by the user. Max and Ableton also process the notes using a system of filters that nudge the data into musical modes or scales, e.g. C Major or D Minor. By doing this, the data can be made more musical in a traditional sense, allowing for varying emotional resonances (and brain patterns) depending on the scale chosen.
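The scale “filter” idea can be sketched as follows; this is not the Max/MSP patch itself, but a hedged Python illustration of mapping a sensor value to a MIDI note and nudging it to the nearest pitch of a chosen mode (here C Major), with ranges and names our own assumptions:

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def to_midi(value, lo=0.0, hi=1.0, note_min=48, note_max=84):
    """Linearly map a sensor value into a MIDI note range."""
    t = (value - lo) / (hi - lo)
    return round(note_min + t * (note_max - note_min))

def snap_to_scale(note, scale=C_MAJOR):
    """Nudge a note to the nearest pitch class in the scale."""
    for delta in (0, -1, 1, -2, 2):
        if (note + delta) % 12 in scale:
            return note + delta
    return note

# e.g. an alpha-band amplitude of 0.63 becomes a note snapped into C major
print(snap_to_scale(to_midi(0.63)))
```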
4.4 Brain Wave Feedback
We conceptualized the brain waves detected by the Muse headset with the following associations, and programmed the system to provide the following corresponding feedback (a sketch of this band-to-instrument routing follows the list):
Delta waves – 0.5–3 Hz: Low frequency and deeply penetrating, like a drumbeat; associated with deep, dreamless sleep (NREM). The amplitude of these waves was used to trigger percussive instruments like real and synthesized drums.
Theta waves – 3–8 Hz: Frequency range associated with dreaming and the intention for movement. The amplitude of this range was tuned to create notes on a droning abstract synthesizer that has a meditative long decay.
Alpha waves – 8–12 Hz: Frequency range associated with being calm and alert. The amplitude of this range was scaled to create chords on a digital grand piano. The intensity of movement along with the amplitude of this range could generate calm relaxing chords or quick and percussive phrases of a piano.
Beta waves – 12–38 Hz: Frequency range associated with being alert and excited at a continually high frequency. This range controlled a bright and fluttering synthesized arpeggio sound, reminiscent of vintage synthesizers.
Gamma waves – 38–42 Hz: Frequency range associated with conscious integration of sensory and mental data and working memory. This range controlled an abstract synth based on pitched noise. The intensity of movement could render this sound as a soothing wash, similar to the calm crashing of waves, or swell it into a full and nearly overwhelming wall of sound.
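The band-to-instrument routing can be sketched as below; thresholds, instrument names, and the routing function are our assumptions, standing in for logic that actually lives in the Max/MSP and Ableton Live configuration:

```python
# Illustrative band -> instrument table matching the list above.
BAND_INSTRUMENTS = {
    "delta": "drums",           # percussive triggers
    "theta": "drone_synth",     # long-decay meditative drone
    "alpha": "grand_piano",     # calm chords
    "beta":  "arpeggio_synth",  # bright, fluttering arpeggios
    "gamma": "noise_wash",      # pitched-noise wash
}

def route(band_amplitudes, threshold=0.4):
    """Map each band's amplitude to (instrument, velocity) events."""
    events = []
    for band, amp in band_amplitudes.items():
        if amp >= threshold:
            velocity = min(127, int(amp * 127))
            events.append((BAND_INSTRUMENTS[band], velocity))
    return events

print(route({"delta": 0.1, "theta": 0.45, "alpha": 0.8,
             "beta": 0.2, "gamma": 0.5}))
```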
4.5 Accelerometer Feedback
The accelerometer (3-axis movement sensor) data collected by the Muse headset was programmed to provide the following responses in the system (a sketch of these axis mappings follows the list):
X-axis – The left and right lateral movement of the user was used to adjust the length of the user’s generated notes. A more abrupt movement to the user’s left would create a note held for over 1 s, while a slow movement creates a shorter note.
Y-axis – The up and down movement of the user controlled the rhythm of the generated notes. Fast vertical movements like a high jump or quick crouch would generate fast (1/32 or 1/16) notes, while slow movements may produce ¼ or whole notes.
Z-axis – The forward and back motion of the user set the volume, or velocity, of the notes generated. A fast forward motion would create a loud sound with a MIDI velocity value of 127, while a slow movement may cause a value of 60. For certain synthesizers, this value greatly changes the sound’s characteristics, including the level of filter cutoff frequency or distortion.
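A compact sketch of these three axis mappings follows; the scaling constants are assumptions chosen to match the examples above, not values taken from the actual patch:

```python
def axis_mappings(dx, dy, dz):
    """dx, dy, dz: per-axis acceleration magnitudes, normalized to [0, 1]."""
    # X: abrupt lateral movement -> notes held longer than 1 s
    note_length_s = 0.2 + dx * 1.5

    # Y: fast vertical movement -> finer subdivisions (1/32); slow -> whole
    subdivisions = [1, 2, 4, 8, 16, 32]  # whole ... 1/32 notes
    subdivision = subdivisions[min(int(dy * 6), 5)]

    # Z: fast forward motion -> velocity near 127; slow -> around 60
    velocity = int(60 + dz * 67)

    return note_length_s, subdivision, velocity

print(axis_mappings(0.9, 0.3, 0.8))
```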
This system was designed to allow the user to decipher trends in their brainwave patterns, to facilitate more apperceptive experiences, by creating a correspondence between the conceptual aesthetic readings of the output and their embodied dancing—their dual conceptualization of the system and their dance with it. For example, dancers’ fast and large movements/gestures create loud and fast sounds, including movement along the Z-axis, which, if initiated toward an audience, would make the sounds louder. The dancer is also able to mute and unmute individual tracks/brain waves in order to isolate the effects of movement or thought on their mental state, i.e. soloing Theta and Gamma waves to increase the ability to meditate, creating a symbiotic loop.
Using these methods, dancers appeared to struggle with the lack of direct control of audio response that one might be used to with traditional instruments. However, once a certain amount of control was released by the dancers, they were able to work with the interface and enter a less conscious process of music generation, and, by their reflections, a less controlled and more apperceptive experience of movement. Occasionally the interface would create a particularly unexpected sound, based on a spike in certain brain activity along with a certain movement; this could cause the user’s brain pattern to shift drastically, which affected the music and, in turn, broke the dancer out of the apperceptive or embodied dance.
4.6 Discussion: Body Editing Research Experiments
Our series of research experiments with dancers progressively tested their experiences dancing without knowledge of the programmed platform; with knowledge of the algorithm and with informational graphics that helped them to locate their movement and data in the space; in a space where the feedback was programmed by the dancers; and finally, with the generative genomic algorithm producing the changing mandala figure.
In the early iterations, when no instruction is given to the dancer, one can see that she is “sleuthing” the algorithm—she is thinking hard while she moves, trying to remember how to reach for a certain sound and to figure out how to effect a particular sound or image. The feedback seems neither metaphorical nor intuitive to her; instead of dancing improvisationally, she is trying to remember. She looks like she is trying to learn, not like she is dancing or “in the zone.”
We found that over time, dancers became more comfortable with “intuitive” algorithmic feedback. One might, for example, reach high for high notes, move forward to increase musical feedback pace, or move left to right to play a piano keyboard. In such instances, dancers begin to dance more fluidly with the data, but they weren’t dancing in an unthought manner, because they were trying to “work” the interface. The dancing folded toward learning rather than unthinking. In these cases, the machine still led the experience, and the dancers didn’t engage in apperception or embodied interaction.
Our mistake was to assume that apperceptive unthinking or improvisation could be created by relieving the memory of its work. We guessed that training—with visuals in the space that showed, for example, a gesture into the Y-axis producing a certain sound—could eventually become embodied knowledge. This never happened. So we next recreated the interface to include a brain wave monitor, to try to understand how adding brain wave data to the dancer’s movement and motion experience might induce ‘unthought’ dance. Here, the mandala was programmed to intuitively respond to the dancer. The dancers were more engaged in the second iteration, but were entranced by the possibility of anticipating the algorithm, which was clearly based on a rules structure.
The third iteration created a dance most closely resembling apperception. In this version, recall that musical sounds responded to brainwave data, and the animated mandala responded to movement and acceleration in the space. It was here that the interaction became one that the dancer described as a “duet.” There were two key reasons for this. Foremost, the animated mandala responses, because they were based on emergent programming, could not be anticipated. The emergent figure was so lively that we began to refer to it as the organism. Secondly, the brain wave data suggested to the dancer moments when she was spiking cognition (or was in the Beta zone, “thinking the interface”) or when, differently, she was in an Alpha zone, dancing with the organism. The gentle reminder of brainwave activity worked as a form of biofeedback that prodded the dancer to let go and dance, while the organism, with its constantly shifting response to the dancer’s movement, impelled the dancer to improvisationally respond in duet.
5 Summary
Our experiments in conjoining biofeedback and emergent algorithmic structures to induce unthought movement taught us some key things. First, the dancer’s attempt to learn the algorithm in fact disrupts improvisational, unthought dance and instead creates in the dancer the desire to constantly think the dance interface. Second, the brain wave monitors worked in a manner we didn’t anticipate, serving as a biofeedback that helped dancers remember to dance improvisationally and to be attuned to the improvisational opportunities offered by the emergent behavior of the visual organism. Notably, the digital interface affords the possibility of employing emergent (rather than responsive) algorithms that invite emergent behaviors or “unthought” dance. It is crucial to note that the aesthetic nature of the organism and the aesthetic qualities of the music feedback afforded this interaction as an art experience for dancers—one that is replete with textural and multi-sensorial qualities that evoke experimental and creative response. The interface allowed these many features to coincide, resulting in moments of poetic duets between the dancer and the organism.
References
Andrejevic, M.: Infoglut: How Too Much Information Is Changing the Way We Think and Know. Routledge, New York (2013)
Braidotti, R.: The Posthuman. Polity, Cambridge (2013)
Buzsáki, G.: Rhythms of the Brain. Oxford University Press, Oxford (2006)
Chun, W.: Programmed Visions: Software and Memory. MIT Press, Cambridge (2011)
Cohen, A.J.: Film music and unfolding narrative. In: Arbib, M.A. (ed.) Language, Music and the Brain. Strüngmann Forum Reports, vol. 10, pp. 173–201. MIT Press, Cambridge, MA (2013)
Cohen, A.J.: How music influences the interpretation of film and video: Approaches from experimental psychology. In: Kendall, R.A., Savage, R.W. (eds.) Selected Reports in Ethnomusicology: Special Issue in Systematic Musicology, vol. 12, pp. 15–36 (2005)
Davies, C.: Osmose. [Art Installation and Video] (1995). http://www.immersence.com/osmose/
Deleuze, G.: Difference and Repetition (1968). Patton, P. (trans.). Columbia University Press, New York (1994)
Galloway, A.R.: Protocol: How Control Exists After Decentralization. MIT Press, Cambridge (2004)
Gardner, P., Wray, B.: From Lab to Living Room: Transhumanist Imaginaries of Consumer Brain Wave Monitors. Ada: A Journal of Gender, New Media, and Technology, No. 3 (2013). doi:10.7264/N3GQ6VP4
Hagendoorn, I.G.: Cognitive dance improvisation. How study of the motor system can inspire dance (and vice versa). Leonardo 36(3), 221–227 (2003)
Hagendoorn, I.: Emergent patterns in dance improvisation and choreography. In: Minai, A.A., Bar-Yam, Y. (eds.) Unifying Themes in Complex Systems IV, pp. 183–195. Springer, Heidelberg (2008)
Hayles, N.K.: How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Information. University of Chicago Press, Chicago (1999)
Kant, I.: Critique of Pure Reason. Pluhar, W. (trans.). Hackett, Indianapolis (1781/1996)
Kember, S., Zylinska, J.: Life After New Media: Mediation as a Vital Process. MIT Press, Cambridge (2012)
Khut, G.: Development and Evaluation of Participant-Centred Biofeedback Artworks [Doctoral Exegesis]. University of Western Sydney, School of Communication Arts, Sydney, Australia (2006). http://georgekhut.com/research/exegesis/
Jung, D., Jensen, M.H., Laing, S., Mayall, J.: Cyclic: an interactive performance combining dance, graphics, music and kinect-technology. In: Proceedings of the 13th International Conference of the NZ Chapter of the ACM’s Special Interest Group on Human-Computer Interaction, pp. 36–43. ACM (2012)
McRobert, L.: Char Davies’ Immersive Virtual Art and The Essence of Spatiality. University of Toronto Press, Toronto (2007)
Popper, F.: From Technological to Virtual Art. MIT Press, Cambridge (2007)
Rokeby, D.: Transforming mirrors. Leonardo Electron. Almanac 3(4), 12 (1995)
Rokeby, D.: The construction of experience: Interface as content. In: Digital Illusion: Entertaining the future with high technology, pp. 27–48 (1998)
Rokeby, D.: Home Webpage (2016). http://www.davidrokeby.com/vns.html
Rodrigues, D.G., Grenader, E., Nos, F.D.S., Dall’Agnol, M.D.S., Hansen, T.E., Weibel, N.: MotionDraw: a tool for enhancing art and performance using kinect. In: CHI 2013 Extended Abstracts on Human Factors in Computing Systems, pp. 1197–1202. ACM (2013)
Rose, N.: Neurochemical Selves. Society 41(1), 46–49 (2003)
Spadoni, R.: Uncanny Bodies: The Coming of Sound Film and the Origins of the Horror Genre. University of California Press, Berkeley (2007)
Suchman, L.: Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, Cambridge (1987)