e.motion, the cinematic construct of space

"...our ability to transfer cognitive artefacts into an experiential mode is a powerful tool for thought"
Peter Anders, Envisioning Cyberspace, p. 23

Considering space perceived through the electronic eye of the camera on a flat screen leads us directly to the cinematic construct of space. Here motion is the key parameter producing the feeling of 'insideness', as a time-based relation between subjects and objects.

As foreshadowed by one of the very first screenings of the frères Lumière, 'L'arrivée d'un train en gare de La Ciotat' in 1895, the viewer can be drawn into the light play to such an extent that he starts screaming while viewing the projected scenery in the black box. A step further, 'Cinerama' stands as the most advanced example of the spatial immersion of the viewer: 'internal' through the use of the subjective camera and 'external' through its stereoscopic, spherical 180-degree projection.

Indeed, the invention of cinema has led to a broad development of techniques setting the viewer into motion, fusing modalities of perception such as camera movements with the sequential technique of montage. It is this sequential structure of film which established the cinematic space as a segmented, non-linear (space) or non-continuous (time) vision in/of space. Once the independent fragments are placed back into the continuous run of the celluloid, motion is inscribed through the rapid succession of frames, merging different camera movements - shots, travellings, pans, orbits ... - and different montage principles - frame-by-frame, overlays, fade-in/out ... - into one single flow. In this way cinema can be defined as the first medium producing a non-linear experience (perception) of space and time.

These new possibilities of imagineering space could already be seen clearly in the experimental cinema of the twenties, such as Walter Ruttmann's 'Berlin: Die Sinfonie der Großstadt' or László Moholy-Nagy's constructivist film 'Lightplay: Black-White-Grey'. While Ruttmann uses Dziga Vertov's 'Kino-Eye' technique to change images in time according to music, describing the rhythm of the roaring Berlin of the twenties, Moholy-Nagy uses photographic techniques of positive and negative overlaying of images 'to paint light in movement', exploring his kinetic sculpture 'the Light-Space Modulator'. Both use the technical parameters of the medium 'cinema' itself to create a new visual and sonic experience shaping time, light and movement into space, thus extending the traditional static and central perspective of space with temporal parameters such as rhythm, frequency, flows, tension ...

see image Stills from Lightplay Black White Grey, László Moholy-Nagy 1930

This vocabulary has become the basis of the cinematic language, where continuous motion is experienced and perceived by the senses, while categoric motion, moving abruptly from scene to scene, is cognitive and imagined. The possibility to integrate cognitive principles into the spatial experience through cutting and montage techniques further introduces space as a mental construct, reversing the physical relation between body and space into a disembodied experience of motion. In this manner the filmic techniques have produced media-specific forms of expression and meaning, a cinematic language. The electronic eye of the camera has thus turned into a sensitive apparatus, a sense establishing - through motion - specific space construct(ion)s, from physiological to psychic ones _ the space of e.motion.

Besides film architecture itself, cinematic analogies are relevant to the discourse of 20th-century architecture since film was the first medium to introduce temporal and spatial discontinuity, thus allowing it to express modern culture, from rising industrialisation to the discovery of radioactivity to the futuristic feeling of speed ... For example, the frame-by-frame montage of static and categoric motion was used by the constructivist architect Yakov Chernikhov to inscribe dynamics and tension into his spatial constructs. In 'Tales of Industry, complex forms in a strict rhythm perspective' he defined 'Konstruktsiia' as more than a principle of composition: the construction of arguments through assembling sequences of ideas. 'Konstruktsiia' thus denotes a mode of thinking, a certain ordering of the process of thought.

"Deconstructionist art stimulates the viewer to take part in the analysis of the 'between' and explores the possibilities of the frame." (A. Papadakis)

In direct line with the constructivist idea, contemporary architects like Bernard Tschumi have transposed it to their de-constructivist architecture. In his design for 'le parc de la Villette', the overlaying of different spatial and functional structures is not only meant to produce an open programmatic system of 'in-betweenness' but also to inscribe its temporal experience and multiplicity of viewpoints (frames) directly into the design. In the light of this example one can understand the 'between' in the quotation of A. Papadakis as a multi-semantic system produced out of the overlaying of frames = contexts. In his book 'The Manhattan Transcripts' and his works on de-construction, Bernard Tschumi often refers to Sergei Eisenstein's technique of 'dialectic montage' as a subjective cutting of multiple 'narrative' elements interconnecting and representing different layers of stories, all parallel in time and space. This post-structural, linguistic analysis of Eisenstein's montage technique goes beyond the investigation of cinematic fiction and expresses the contemporary vision of disembodied space experiences.

see image Stills from a u march 1994 special issue, Bernard Tschumi, Le parc de la Villette

Following the examples above, 'e.MOTION space' stands for a specific setting of meaning and expression which has become the predominant cultural paradigm for diffusing and representing cultural data, where the cinematic vision has triumphed over the printed tradition. The Gutenberg galaxy turned out to be just a subset of the Lumière universe.


windows of perception

"As the role of a computer is shifting from being a tool to a universal media machine, we are increasingly 'interfacing' to predominantly cultural data: texts, photographs, films, music, virtual environments. In short, we are no longer interfacing to a computer but to culture encoded in digital form." Lev Manovich

As described in recent articles, the digital media are based on inFORMation processes, computing and transferring binary signals into visual, sonic or spatial representations. The conception and realisation of these processes are all based on sets of characters and numbers. These assignments of parameters allow a huge, as yet largely unexplored, variety of semantic systems, from the methods of structuring information (code / language) to the ways it is presented to a user (visualisations). This 'informing' of data includes perceptive parameters, the display of data, and cognitive ones, the interfacing of data, conditioning the way we understand and interact; they condition the communicative / semantic value of the digital media. Yet only a few of these possibilities of structuring and displaying data have actually proven relevant.

Just as the cinematic language was prefigured by the 19th-century mass culture of panoramas, optical toys, peep shows ..., any medium is inscribed in a spatial and temporal context conditioning its cultural relevance as much as the receptor's vision. Of course each medium has developed its own ways of organising information: how it is presented to the user, how space and time are correlated with each other, how human experience is structured in the process of accessing information ... In short, each medium develops its own field of communication based on its technicality as well as on its perceptive and cognitive modalities, and it is influenced by pre-existing media as much as it transforms them.

The cultural languages of the electronic media can easily be related to the textual culture, with its semantic structures of keyword indexing and hypertext, and to the graphical language of icons and cartographies ..., but more and more also to the cinematic vision.

"Cinematic means of perception, of connecting space and time, of representing human memory, thinking, and emotions become a way of work and a way of life for millions in the computer age. Cinema's aesthetic strategies have become basic organizational principles of computer software. The window in a fictional world of a cinematic narrative has become a window in a datascape. In short, what was cinema has become human-computer interface." Lev Manovich, in 'Cinema as a Cultural Interface'

Lev Manovich's quote underlines the influence of cinematic aesthetics and the camera's particular grammar in the development of the computer interface and the way data is accessed.

"... Zoom, tilt, pan and track: we now use these operations to interact with data spaces, models, objects and bodies. 'Cinema' thus includes mobile camera, representation of space, editing techniques, narrative conventions, activity of a spectator -- in short, different elements of cinematic perception, language and reception." Lev Manovich

Furthermore, the cinematic construct of space has led to the structuring of 3d real-time technologies, electronic space, as much on the level of interaction as on the level of conception, a relation one can perfectly understand in the Aspen Movie Map project by Andrew Lippman and his MIT Architecture Machine Group (see The Aspen Movie Map), probably the first hypermedia system offering non-linear access to photographic images in virtual space. The system was a surrogate travel application that allowed the user to explore the city of Aspen. The recording of the city was done by means of four cameras taking pictures in 4 directions every 3 metres. In this manner the specific camera shots and the indexing of the images in time and space allowed a continuous ride through the city in the form of picture sequences, and thus allowed the user to navigate interactively the cityscape of Aspen at different times of the day and different seasons of the year. The system was combined with a map navigator using each decision point (street intersection) either to continue the navigation or to use the switch nodes of the map to make a point-to-point jump in the city.

see image The Aspen Movie Map

A closer view of the technological system of the Aspen Movie Map allows one to understand the extent to which cinematic techniques are related to the structures of the digital media, mainly 3d technologies. In 1978 no computer was able to render in real time the amount of images necessary to produce a continuous driving perception, that is, a frame rate of 25 images per second. To resolve this lack of computation power the project was based on external analog storage, from which the image sequences were called and displayed according to the user's navigation. In this manner the project was able to produce a driving perception of up to 300 km/h through the city of Aspen. This way of structuring visual data reveals the principles of image indexing and linking, hypertext, as time-based parameters expressed in the form of frame rates per second.
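The relation between spatial sampling and perceived speed can be checked with a back-of-the-envelope calculation: images captured every 3 metres, replayed at a cinematic frame rate, yield a definite apparent velocity. The function below is a hypothetical sketch of that arithmetic, not part of the original system.

```python
# Hypothetical check of the playback speed implied by the Aspen Movie Map's
# capture scheme: one image every 3 metres, replayed at a given frame rate.

def apparent_speed_kmh(metres_per_frame: float, frames_per_second: float) -> float:
    """Speed simulated when spatially sampled frames are replayed at a fixed rate."""
    metres_per_second = metres_per_frame * frames_per_second
    return metres_per_second * 3.6  # convert m/s to km/h

# 3 m spacing at 25 fps gives 75 m/s, i.e. 270 km/h -- the order of magnitude
# of the "up to 300 km/h" ride described in the text.
print(apparent_speed_kmh(3, 25))
```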

This conception allowed not only displaying the navigation in a continuous and seamless manner but also jumping instantaneously from one point of the city to another, from one video sequence to another. Working with such movement patterns in electronic space - jump nodes (abrupt point-to-point navigation) and switch nodes (abrupt file-to-file = space-to-space navigation) - has become the basis of any 3d environment. In fact these principles produce a 'discontinuous' experience inside electronic space navigation close to the filmic technique of montage. Thus the use of a camera, like the usage of panoramic vision or subjective cameras, and of editing techniques as time-based structures, has been integrated in 3d technologies from the very beginning and has turned into one of the most established concepts applied to conceive electronic space.
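The two movement patterns described above can be sketched as a small data structure: continuous frame sequences running between decision points, plus abrupt jumps that cut straight to another location, montage-like. All names here are hypothetical, a minimal illustration rather than the Movie Map's actual implementation.

```python
# Minimal sketch of the two navigation modes: continuous "travel" along stored
# frame sequences, and discontinuous "jump" cuts between points of the space.

class DecisionPoint:
    """A street intersection holding outgoing frame sequences and jump targets."""
    def __init__(self, name):
        self.name = name
        self.sequences = {}  # direction -> list of frame ids (seamless ride)
        self.jumps = {}      # label -> target DecisionPoint (abrupt cut)

a = DecisionPoint("Main & 1st")
b = DecisionPoint("Main & 2nd")
plaza = DecisionPoint("Plaza")

a.sequences["north"] = ["f001", "f002", "f003"]  # continuous travel towards b
a.jumps["map"] = plaza                           # point-to-point jump via the map

def travel(point, direction):
    """Continuous navigation: every frame between two decision points is shown."""
    return point.sequences[direction]

def jump(point, label):
    """Discontinuous navigation: cut to another point, no frames in between."""
    return point.jumps[label]
```

The 'discontinuity' of the experience lies precisely in `jump` returning a destination with no intermediate frames, where `travel` returns them all.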

Of course the historic example of the Aspen Movie Map reveals only some of the possible relationships one can establish between the two media, since many cinematic narrative and cognitive structures nowadays find their application, mostly in video games. It nevertheless allows one to ponder the potential of visual and spatial transcription of information processes, computation and communication, in the form of electronic space. Similarly to the cinematic 'realm', where specific techniques have turned into the narrative structures of a medium, the transcription of information processes now becomes the basis of electronic space construct(ion)s.

see image space 360 degree _ the 10th sphere, LAb[au] 2003

As mentioned, the experience of cyberspace and the notion of 'being there' imply the setting up of specific yet variable parameters concerning camera movements and physical behaviours, as well as of perceptive ones like electronic ears - listening, body fragments - touch ... The variable setting of these elements not only allows conceiving new space constructs such as behavioural, generative and zero-gravity spaces, but already constitutes a huge palette of elements throughout which electronic space constructs will emerge as a language linking new space-time parameters, communication and computation, to specific modalities of perception and cognition. But in the same way different techniques have set up various visual languages in the cinematic culture, specific codes involving computation and communication processes need to be established in order to build an understanding of electronic space. Thus the comparison of filmic space to electronic space should not be seen as a metaphoric transcription but as a general reflection on the cinematic and disembodied vision of space and time, and on how these topics will become more and more relevant in the cultural paradigm with the ongoing conquest of the electronic media.


Online examples

Below are some selected examples to illustrate and extend the different propositions.

Eddie Elliott, The Video Streamer

see image The video streamer printscreen

Eddie Elliott developed the 'Video Streamer' software back in 1994 in the Interactive Cinema group of the MIT Media Lab. The Video Streamer presents motion-picture time as a three-dimensional block (z-axis) of images flowing away from us in distance and in time. Each frame overlays the previous one with a slight offset, thus forming a block whose front side displays the 'common' view of a video, while the sides display the edges of the frames. Cuts, movement, pans and zooms become immediately apparent as patterns on the sides of the block. These principles of displaying time structures in the form of space inspired several follow-ups, among them the 'diva_ exploratory data analysis' software by Wendy E. Mackay & Michel Beaudouin-Lafon.
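The principle can be illustrated with a toy reconstruction (not Elliott's code): stacking frames along a time axis produces a block whose side faces expose each frame's edge pixels, so a hard cut between two shots appears as an abrupt pattern change along that side.

```python
# Toy sketch of the Video Streamer idea: frames stacked into a block whose
# front face is the current image and whose side exposes edge pixels over time.

WIDTH, HEIGHT = 4, 4

def make_frame(value):
    """A uniform WIDTH x HEIGHT frame of one pixel value."""
    return [[value] * WIDTH for _ in range(HEIGHT)]

# Two "shots" separated by a hard cut: three dark frames, then three bright ones.
frames = [make_frame(1)] * 3 + [make_frame(9)] * 3

front_face = frames[-1]                                 # normal playback view
right_side = [[row[-1] for row in f] for f in frames]   # edge column of every frame

# Scanning the side of the block over time makes the cut visible as a jump.
edge_over_time = [cols[0] for cols in right_side]
print(edge_over_time)  # [1, 1, 1, 9, 9, 9]
```

The cut, invisible on any single front face, reads off the side of the block as the step from 1 to 9.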

ART+COM, The Invisible Shape of Things Past

see image invisible shape of things past printscreen

The Berlin-based ART+COM group created with 'The Invisible Shape of Things Past' an innovative 3D interface for accessing and visualising data about Berlin through historic film sequences. The interface renders the camera movements, perspectives and focal lengths by placing the records of cinematic vision back into their historical and material context. The trajectory through time and space taken by a camera thus becomes a spatial object, a so-called 'filmobject', corresponding to documentary footage recorded at the corresponding points in the city. As users navigate through the 3D model of Berlin, they come across elongated shapes lying in the city and thus experience the urban space through cinematic motion. Extended to an online interface, users can now transform film sequences into electronic 3D shapes in order to investigate space-time-based information access through virtual space and navigation.

The ambient machine

see image The ambient machine website printscreen

Marc Lafia's project, the ambient machine, explores the setting of a visual language on the surface of the screen according to cinematic techniques. Through a composition, recording and playback device the user gets involved with the 'generative grammar' of cinematic gestures such as montage in order to experience the computer screen as a multi-layered two-dimensional space.


Machinima

see image Machinima website printscreen

Machinima is a way of conceiving, producing and distributing films that comes directly from the realm of video games, mods and other 3D real-time technologies. As stated about its protagonists, "they take a bunch of technologies that create a sort of Virtual Reality (generally a videogame), then shoot their film in there, just like you'd shoot a 'real' film or video". Machinima differs from 'real' film or video first of all by its use of computer and network technology; furthermore it differs from CG (computer graphics) animated movies (like Toy Story, for example) by its use of real time and of shared multi-user environments; and lastly by its distribution on the internet, either as a movie or as a mod file destined to be rendered on the viewer's computer. A 'machinima' producer is a total director: he is the writer, he casts his actors, who will act through the network, he does the set, prop and character design, he programs camera positions and movements, and he edits and distributes the movie himself. In short, 'machinima' is a contemporary way of making film art inside the electronic space.

Spa[z]e 360 degree sphere muziq

see image space Navigable Music website printscreen

With the growing need to include information displays in buildings _ to conceive buildings as information displays _ more and more architects and media artists explore new possibilities to leave the flat projection screen behind and to create architecture out of information display. Let us use one of LAb[au]'s recent project developments, 'Spa[z]e 360 degree sphere', to exemplify how the 'external' parameters of electronic motion space can be related to the 'internal' ones.

The 'space, navigable music' project investigates the impact of IC technologies, particularly 3D modelling languages and real-time rendering, on the notions of space-time constructs. The project offers the user a specific experience of cyberspace: by mixing music through navigation in electronic space, by modifying camera settings in real time and editing its movements, the user produces a continuous travelling, an acoustic architecture; a navigable music clip. Here the assignment of camera values (field of view) to sound frequency _ the pitching of sound _ further relates the visual and spatial parameters to the sonic ones. Building on this structural interrelation between the camera and the processing of sound, the networked real-time rendering allows a complete 360-degree spherical projection in coherence with the spatial structure of the electronic space. A space where the sound _ through surround technology _ and the projected images _ through the 360-degree projection system _ become a space itself, immersing the audience in a visual soundscape edited in real time. Working on/in electronic space - architecture - thus extends the experience and understanding of electronic space as a hybrid interplay of architecture, music and cinema.
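The coupling of camera field of view to sound pitch can be sketched as a simple parameter mapping. The function below is hypothetical (the actual mapping and ranges used in the project are not documented here): it maps FOV linearly onto a playback-rate ratio, so that zooming in raises the pitch.

```python
# Hedged sketch of assigning camera field of view to sound pitch: a narrow FOV
# (zoomed in) raises the pitch, a wide FOV returns the sound to its original
# pitch. Mapping, range and ratio values are illustrative assumptions.

def fov_to_pitch_ratio(fov_degrees, fov_min=20.0, fov_max=120.0):
    """Map field of view linearly onto a playback-rate (pitch) ratio.

    Returns 2.0 (one octave up) at the narrowest FOV, 1.0 (unchanged)
    at the widest."""
    t = (fov_degrees - fov_min) / (fov_max - fov_min)  # 0.0 narrow .. 1.0 wide
    return 2.0 - t

print(fov_to_pitch_ratio(20))   # fully zoomed in  -> 2.0 (octave up)
print(fov_to_pitch_ratio(120))  # fully wide angle -> 1.0 (original pitch)
```

Feeding such a ratio to an audio engine's playback rate would tie every camera zoom to an audible glissando, which is the kind of structural interrelation the paragraph describes.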


