REAL TIME              

In 2017, the French CNC (Centre National du Cinema) granted our project pre-production funding. At the outset, we anticipated a digital creation process in line with the classic production techniques of 3D animation film. Throughout the process, we realised that using pre-calculated rendering solutions actually amounts to freezing the stereoscopic images produced into a film that can be viewed in a virtual reality headset. It was initially attractive to consider the higher quality of pictorial detail this allows. However, the material requirements and the lack of interactivity that result from such a classic production technique led us to re-evaluate this approach. Moreover, the constant progress of accessible VR technology over the past two years has resolved some of the original technical obstacles. Finally, it seemed more relevant to us to insert the project into the contemporaneity of new digital, interactive creation, rather than employing experimental content within an inherited production medium that has reached maturity. Indeed, when we started the VR set design for Yawar Fiesta, broadcasting VR content generated in real time for multiple viewers seemed difficult, if not impossible, not only because our knowledge of the subject was limited, but also because the solutions proposed by manufacturers did not permit it.

The progress of our artistic research has shown that visual complexity is not only unnecessary but actually contrary to our intentions. The restrictions linked to the use of real-time techniques therefore no longer pose a problem but offer a chance to experiment with digital materiality within current constraints. This choice has led us to approach new tools initially related to the world of video games or digital arts, such as programming in C#, openFrameworks and OpenGL, or game rendering engines like Unity.



The strand of our research specifically related to artistic design in virtual reality has led us to the work of pioneers of the medium who developed pieces in the 1990s (Jaron Lanier, Char Davies), as well as to research papers on cognitive perception in VR (Mel Slater). It revealed an essential characteristic of the medium that we had not originally anticipated. The concepts of presence and plausibility introduced by Professor Mel Slater raised questions about spectators’ potential interactions with the virtual environment in which they are immersed. The interactions we select also determine our position as artists working on the development of a speculative form of VR opera staging.

It is essential to recall that Yawar Fiesta is experimental in its form, since it was conceived within an acousmatic tradition. Unlike traditional opera, musique concrète is enjoyed live without the support of any visual representation other than a multiple-speaker system called an Acousmonium. It concerns space and sound: it is common for listeners to close their eyes to get the most out of the sensory experience it provides. Thus we are faced with a wide spectrum of visual possibilities, structurally receptive to our experimental interpretation. Technique, however, greatly impacts these potentialities. Of the senses available to us, we will consider only four as implementable in this experience: hearing and sight, as well as haptic and proprioceptive sensations.

The sound will be implemented using a binaural ambisonic file, a format adapted to immersive applications that is able to render spatial experience. Head movements will give spectators the impression that the sound is anchored to the virtual space. The visual component will be stereoscopic and responsive not only to the rotation of the spectator’s head but also to its position in space. The parallax effect allowed by the resulting 6DoF HMD (six-degrees-of-freedom head-mounted display) will greatly increase the feeling of space and immersion.
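The principle behind this anchoring can be sketched in a few lines: as the head turns, the apparent direction of each virtual source (or of the whole ambisonic field) must be counter-rotated so that the sound stays fixed in the virtual space rather than in the listener’s ears. The following is a minimal, framework-neutral JavaScript sketch; the function names and angle conventions are our own illustrative assumptions, not those of any particular audio engine.

```javascript
// All angles in radians. Azimuth of a source relative to the listener's
// head: subtract the head yaw from the source's world azimuth, then
// wrap the result into (-PI, PI].
function relativeAzimuth(sourceWorldAzimuth, headYaw) {
  let a = sourceWorldAzimuth - headYaw;
  while (a <= -Math.PI) a += 2 * Math.PI;
  while (a > Math.PI) a -= 2 * Math.PI;
  return a;
}

// For first-order ambisonics (B-format W, X, Y, Z), compensating head
// yaw is a rotation of the horizontal components only; the omni channel
// W and the height channel Z are unchanged.
function rotateBFormatYaw([w, x, y, z], headYaw) {
  const c = Math.cos(-headYaw);
  const s = Math.sin(-headYaw);
  return [w, c * x - s * y, s * x + c * y, z];
}
```

A source due "north" (azimuth 0) heard while the head is turned 90 degrees to the left thus appears 90 degrees to the right, which is exactly what makes the sound feel attached to the scene rather than to the headset.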

The body’s own sensations, haptic and proprioceptive, are more difficult to address. We assume that the spectator remains seated for the duration of the show. With this setting in mind, we asked ourselves what bodily presence is possible in this virtual audio-visual world. We have chosen to implement hands and arms using Leap Motion technology and position-tracking bracelets. The mere sight of avatar arms and hands immediately provokes a feeling of presence in the spectator. They will also allow us to set up slight interactions with the scenery, for instance materializing movements of air or dust in suspension, pushing back floating elements, or suddenly covering the field of vision. This solution will reinforce the illusion of plausibility.
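One such slight interaction, pushing back floating dust, reduces to a simple per-frame rule: particles within reach of a tracked hand are nudged away from it, more strongly the closer they are. A pure-math sketch, assuming the hand position is read each frame from the tracking API; all names and values here are illustrative.

```javascript
// Nudge floating particles away from a tracked hand. Particles outside
// the interaction radius are returned untouched; inside it, the push
// falls off linearly with distance.
function pushParticles(particles, hand, radius, strength) {
  return particles.map(p => {
    const dx = p.x - hand.x, dy = p.y - hand.y, dz = p.z - hand.z;
    const d = Math.hypot(dx, dy, dz);
    if (d === 0 || d >= radius) return p; // out of reach
    const f = strength * (1 - d / radius) / d; // stronger when closer
    return { x: p.x + dx * f, y: p.y + dy * f, z: p.z + dz * f };
  });
}
```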

Inspired by Char Davies’ piece Osmose, we will also add a breath sensor. The visuals will be subtly distorted according to spectators’ inhalations and exhalations, creating a sensory link of cause and effect between them and the digital environment. Identifying this link during the experience will help spectators internalize it. Finally, as opera is a libretto-based genre, subtitles will be displayed at the level of the audience’s knees, synchronized with the music.
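The mapping from raw breath signal to visual distortion can be kept deliberately gentle: smooth the sensor samples with an exponential moving average, then scale the result to a small, capped distortion amplitude. The sketch below assumes a sensor normalised to [0..1]; the smoothing factor and gain are illustrative values we would tune by ear and eye.

```javascript
// Exponential moving average over raw breath-sensor samples; alpha
// controls how quickly the smoothed level follows the breath.
function makeBreathFilter(alpha = 0.1) {
  let level = 0;
  return sample => (level = alpha * sample + (1 - alpha) * level);
}

// Map the smoothed breath level [0..1] to a small distortion amplitude,
// clamped and capped so the effect stays subtle rather than disorienting.
function distortionAmount(level, maxAmplitude = 0.05) {
  return Math.min(Math.max(level, 0), 1) * maxAmplitude;
}
```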

Our commitment to the project is reflected in Jaron Lanier’s statement in Dawn of the New Everything: A Journey Through Virtual Reality: ‘VR is the technology that highlights the existence of your subjective experience. It proves you are real.’ We aim to restore, through this artificial staging, a sense of real presence in the world.


STAGE PRODUCTION                

One of our goals in this speculative opera project is to propose, in accordance with the theoretical axes of concrete music, a universal, transportable and affordable approach. This opera is based on a model of Greek tragedy laid out by Aeschylus, which developed in the face of a tradition of religious celebration. In order to safeguard the ritual sense that brings spectators together in a specific place to share a collective moment transcending the space-time in which the show takes place, we want to convey a certain impression of the sacred. To achieve this, it seems important to us to keep, in each of the three versions we propose below, a liminality of space, suggesting a passage between reality and the virtual paintings visible in the headset.

The first will be a live performative version for around twelve people (more if the technology allows it). The estimated scale for this show is a small concert hall. The music is played on an Acousmonium, in the middle of which the spectators are placed. The composer or a musician will be present for the live spatialization of the piece. In addition to the images in the VR headsets, there will be projections on large screens, so the immersion can also be experienced without a headset. The circle or dome of loudspeakers, as well as the VR devices, will induce a ritual in itself. Spectators will be materialized by a graphic presence in the virtual world, so they will be able to connect with one another. This version is the closest to the parameters of a regular opera.

The second version will be proposed at the scale of an art gallery or cultural institution and could be experienced by up to four people at a time. The sound, in binaural ambisonics, will be played over headphones, while LED TV screens or projections will transmit the images visible in the headset. This version also materialises the presence of each participant in the digital world. Here, a scenography/light installation will be set up in order to engage spectators with the staging as soon as they enter the physical space. We particularly want to stage the space around the VR headsets to recreate the feeling of passage inherent in the notion of ritual.

Finally, we will develop a third version for WebVR, built on the A-Frame framework, allowing viewing on a smartphone with a portable headset and binaural sound from anywhere with access to the Yawar Fiesta website. We propose to broadcast the show at predefined times in order to create links between spectators in cyberspace. The location of the connections and the number of participants will be indicated, and an avatar of presence in the virtual setting will be generated.