"Espacios" [Spaces] is an offline interactive work in which various atmospheres can be experienced in a 3D environment. Spaces looks at the field of abstract creation, video art and 2D/3D animation. I consider Spaces to be a test laboratory in the field of interactive audiovisuals. The composition of Spaces includes hypertextual content (websites, links), still images and video, compositions in three dimensions, edited sound, spatial anchoring points, interconnections between 2D and 3D structures, the digital treatment of images, etc. Spaces starts as a small project and grows on a nodal basis until it is completely interconnected at all game levels and sublevels.
Spaces is a turning point in interactive communication and the production of 3D interactive content. Furthermore, it brings the concept of interactive content closer to other genres to which it is already closely related: offline video game, virtual tour, interactive exploration, interactive narration, etc.
Spaces was a joint production with Rosamérica Urtasun, a musician, composer and fellow student on the Master's Degree in Digital Arts. Without her and her magnificent sonic creations the work would not be as complete or as atmospheric.
The end result, in visual terms, was quite satisfactory. I was seeking three conditions for the work to be technically correct at a visual level: that the interactive content contained no errors and the application did not crash; that the pace and transitions when executing the program were correct (playback of the various films, actors and scenarios within Director); and that the product offered simple and pleasant browsing for the user.
The interactive content contains no errors and does not crash, and the pace and changes of scenario are normal, although some problems do appear when the program becomes very heavy with so much browsing: on changing scenario it sometimes fails to recognise the still it should jump to and goes to another one instead.
The issue of interaction and ease of browsing still needs some work, as there is always room for improvement. Because of the succession of spaces and scenarios, users must be given continuous information at all times, but without saturating or exhausting them. This is a basic requirement so that people who are exploring move forward in their search and pass easily from one space to another. The interactive content is interlinked and logically connected across its various parts and scenarios, but the browsing controls should still be more clearly defined. It is currently possible to browse using the mouse, the arrow keys and some specific keys, but the idea is that in the near future it will also be possible to play using a joystick or a graphics tablet, for example.
Another objective that is still outstanding is to execute some specific scripts in order to optimise and expand the interaction: Director and ADN complement each other and allow almost total control. In other words, from Director, using a series of parameters in its plug-in for ADN, it is possible to create hybrid and even more complex types of interaction. An example would be implementing ways to manipulate objects or video textures, or to change the position of the shots taken by the 3D camera (our eyes) in the various scenarios, all triggered when specific keys on the keyboard or the joystick are pressed.
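The key-triggered camera idea can be sketched in a few lines. The project itself would do this in Director with the ADN plug-in; the Python below is only an illustrative stand-in, and all of its names (shots, key bindings, coordinates) are invented for the example:

```python
# Illustrative sketch (not Director/ADN code): mapping key presses to
# camera shots within a scenario. Every name here is a hypothetical
# placeholder, not part of the project's actual scripts.

CAMERA_SHOTS = {
    "scenario_1": {"default": (0, 0, 10), "overhead": (0, 20, 0)},
    "scenario_2": {"default": (5, 1, 8), "close_up": (1, 1, 2)},
}

def handle_key(scenario: str, key: str, current_shot: str) -> str:
    """Return the camera shot to switch to when a key is pressed."""
    key_bindings = {"o": "overhead", "c": "close_up", "d": "default"}
    wanted = key_bindings.get(key)
    if wanted and wanted in CAMERA_SHOTS[scenario]:
        return wanted
    return current_shot  # unbound key or missing shot: keep the current one
```

Pressing "o" in scenario_1 would switch to the overhead shot, while an unbound key leaves the camera where it is; a joystick handler would feed the same function.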
We considered the possibility of using real time with other software and image texturing, but that objective was complex and unstable on computers without a high-performance graphics card. We chose the prerendering system for two reasons: because the result is guaranteed, as everything is calculated in advance, and because the image and sound do not have to be generated for each still at run time. As real time is so fashionable, I was initially concerned that we would be losing much of the "performance" of transforming parameters in real time, something prerendering cannot do, but we have come to like it more and more: the interactive content has 200 browsable nodes (spatial points) and the initial camera position at each node is different, meaning that we are never really in the same place even without being in real time.
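The node structure described above, where each of the 200 browsable spatial points carries its own prerendered initial camera position and links to its neighbours, can be sketched as a small graph. This is an assumption-laden illustration in Python, not the project's Director data; the node names and coordinates are invented:

```python
# Sketch of the prerendered-node idea: each browsable node stores its own
# initial camera position (fixed at render time) plus its connected
# neighbours. All names and values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    camera_position: tuple          # initial camera position for this node
    neighbours: list = field(default_factory=list)

def move(current: Node, target_name: str) -> Node:
    """Move to a connected neighbour; stay put if it is not connected."""
    for node in current.neighbours:
        if node.name == target_name:
            return node
    return current

# Two interconnected nodes, each seen from a different starting camera.
hall = Node("hall", (0.0, 1.6, 0.0))
corridor = Node("corridor", (4.0, 1.6, -2.0))
hall.neighbours.append(corridor)
corridor.neighbours.append(hall)
```

Because each node's camera position differs, hopping between nodes gives a different view every time, which is the effect the text describes as "never really being in the same place" without real time.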
Furthermore, real time would never have allowed us to show textures as complex as those the prerendered system could calculate on large objects. Another point to consider is finding out which code makes Director (by creating specific functions) build groups or arrays of videos and sounds and launch them at various random interactive points.
Finally, having worked on and communicated through so many media meant that we in turn needed to study them to find out how far we could go. As a result, in visual terms we have researched the connection between a 2D content idea, three-dimensional browsing with video and sound interaction, and a program for connecting and postproducing multimedia. Research was therefore done on the relationship between vector creation in Flash and fractal-generating programs (Gaia, Ultrafractal); on the synthetic model and its programming to add internal interaction to it (3d Max and ADN); on the creation, montage and post-production of various audiovisual compositions (Combustion, AfterEffects, Cleaner); and on the final programming and general assembly at the visual, sound and interaction levels (Director and the ADN plug-in). Researching this last program and the connection of some of its parameters raised the questions that were hardest to resolve, as the problems had not previously been considered: for example, Director cannot contain five 3D browsings programmed with ADN in a single project, because it becomes confused when their parameters are activated. This will be solved by calling the films (each containing a separate 3D browsing) as separate projectors, outside Director.
Finally, I would like to dedicate this immense work to three people without whom it would never have been possible: Ricard Serra (project lecturer and tutor); Rosamérica Urtasun (music, composer and creator of all the sound atmospheres) and Catalina Acelas (graphic designer and multimedia). Thank you for your invaluable help.