Time Space Texture: An Approach to Audio-Visual Composition

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

by Andrew D. Lyons, Bachelor of Arts, University of New England, 1995

Composition Unit, The Sydney Conservatorium of Music, The University of Sydney, Sydney, Australia

March 8 2003

Copyright 2003 Andrew D Lyons

“Music is the Eye of the Ear” – Thomas Draxe, Bibliotheca, 1616.

Acknowledgements

First of all I would like to thank my supervisor Dr Greg Schiemer from the Sydney Conservatorium, for encouraging me to begin this, for supporting me along the way, and for seeing it to its conclusion. Secondly, I would like to thank my wife, Melissah Kochel, for being the most potent muse an artist could wish for. Thirdly, I would like to thank Dr. Michael Hannan for his invaluable support at Southern Cross University over the last two years of this research.

I would like to thank Ben Simons and Chris Willing for helping me, and bearing with me while I dominated the facilities at Sydney Vislab. Thanks also to Youzhen Cheng and Dr. Russell Standish for allotting me time on the PVL and AC3 supercomputers. Thanks to Leonard Layton and Roger Butler from Lake Technologies for their support with the Huron. An extra big thank you to Greg Hermanovic and Janet Fraser from Side Effects Software for making Houdini the software that it is. This project would not have been possible without Houdini. Thanks also to Bozidar Kos, Peter McCallum, Richard Toop, Martin Wesley Smith and Ian Fredericks (RIP) from the Sydney Conservatorium and Troy Schmidt from Southern Cross University for support and help in various guises over the years.

I would like to thank academics who have sent me copies of their writing: Dr. Carlos Palombini, Dr. Marc Leman, Dr Rolf Inge Godoy, Dr. Maria Anna Harley, Dr. Alexandra Quittner, Candace Brower, and those academics who have provided feedback or made reading suggestions along the way: Dr. Denise Erdonmez Grocke, Dr. Richard Cytowic, Dr. Lawrence Marks, Dr. Lawrence Barsalou, Dr. Wayne Slawson, Dr. Mark Rollins, Alan Lem, Dr. Nigel J.T. Thomas.

Thanks go to the guys from Loucid: Andrew McLennan, Paul Boswell, Glen Black and Stuart Soler.

Dedicated to the memory of Ian Fredericks, and the arrival of my son Ayden.


Index

INDEX  1
ABSTRACT  9
STATEMENT OF ORIGINALITY  11
PROLOGUE  12
1 INTRODUCTION  15
2 THE HUMAN BRAIN  21
2.1 THE IDENTITY THEORY OF MIND  21
2.2 THE ARCHITECTURE OF THE BRAIN  22
2.3 MUSIC AND THE BRAIN  23
2.3.1 IMAGING TECHNIQUES  23
2.3.2 MUSIC AND IMAGING  24
2.3.3 RHYTHM  24
2.3.4 SEMANTIC ANCHORAGE  24
2.3.5 PITCH AND VISUAL SPACE  25
2.3.6 TIMBRE, MELODIC SHAPE, AND SPACE  26
2.4 THE BRAIN, MUSIC AND VISUAL IMAGERY  27
2.4.1 IMAGERY AND THE BRAIN  27
2.4.2 CROSS MODALITY  28
2.5 MUSIC AND EMOTION  29
2.5.1 CONSCIOUSNESS  29
2.5.2 EMOTION  30
2.5.3 THE ACCRETION HYPOTHESIS  31
2.5.4 THE LIMBIC SYSTEM  32
2.5.5 IMAGERY AND EMOTION  34
2.5.6 ECSTASY  35
2.6 NEUROPHYSICS SUMMARY  36
3 COGNITION  38
3.1 MENTAL IMAGERY  38
3.2 PERCEPTUAL BASES OF COGNITION  39
3.3 COGNITIVE STYLES  39
3.4 DUAL CODING THEORY  41
3.5 REASONING USING IMAGERY  43
3.6 SIMILARITY AND METAPHOR  44
3.7 CREATIVITY  46
3.8 IMAGERY, CREATIVITY AND IMPROVISATION  47
3.9 COGNITION SUMMARY  49
4 PHENOMENOLOGY  50
4.1 INTRODUCTION  50
4.2 A BRIEF HISTORY OF PHENOMENOLOGY  51
4.3 CROSS MODAL METAPHOR  52
4.4 PHENOMENOLOGY AND ANGLOPHONES  53
4.5 MUSIC AND PHENOMENOLOGY  54
4.6 GESTALT PSYCHOLOGY  55
4.7 CONCRETE ARTS AND MUSIC  55
4.8 SUMMARY  56
5 GUIDED IMAGERY IN MUSIC THERAPY  58
5.1 GIM HISTORY  58
5.2 GIM SESSIONS  59
5.3 IMAGERY TYPES  60
5.4 MUSIC PROPERTIES FOR IMAGERY  61
5.5 SUMMARY OF GIM RESEARCH OUTCOMES  66
6 MUSIC THEORY AND AESTHETICS  68
6.1 INTRODUCTION  68
6.2 MUSIC AND AMBIGUITY  68
6.3 PHILOSOPHY OF ART  69
6.4 COMMON MUSIC NOTATION  75
6.5 TRADITIONAL MUSIC THEORY  80
6.6 SUMMARY  85
7 NEW MUSIC THEORY  86
7.1 INTRODUCTION  86
7.2 LEONARD B MEYER  86
7.3 EDGARD VARESE  87
7.4 PIERRE SCHAEFFER  88
7.5 DENIS SMALLEY  89
7.6 ROLF INGE GODOY  91
7.7 GESTURE  92
7.8 EMOTION  93
7.9 SUMMARY  96
8 SCHWARZCHILD  97
8.1 INTRODUCTION  97
8.2 MAKING SCHWARZCHILD  98
8.2.1 INTRODUCTION  98
8.2.2 3D SOUND COMPOSITION  99
8.2.3 3D AUDIO-VISUAL COMPOSITION  100
8.2.4 SCHWARZCHILD SCENE BREAK DOWN  102
8.3 MAKING SCHWARZCHILD’S MUSIC  103
8.3.1 THE WX5  103
8.3.2 MIGHTY WATER AND THE HEADS  104
8.3.3 PULSE ROOM  105
8.3.4 DIDGE-LAND  106
8.3.5 THE TITLES AND CREDITS  107
8.4 MAKING SCHWARZCHILD’S ANIMATION  108
8.4.1 PRE-VISUALISATION  108
8.4.2 VISUALISATION  109
8.4.2.1 Head 1, 2 and 3  110
8.4.2.2 Pulse Wall  110
8.4.2.3 Mighty Water  111
8.4.2.4 Didgeland  111
8.4.2.5 Credits  112
8.4.2.6 Stereoscopy  112
8.4.3 SPATIALISATION  112
8.4.3.1 Introduction  112
8.4.3.2 Moving Objects  115
8.4.3.3 Compositing  116
8.5 A SCHWARZCHILD RETROSPECTIVE  117
9 LOUCID  120
9.1 INTRODUCTION  120
9.1.1 THE VISUAL STUDY. THE SONIC STUDY  120
9.1.2 LOUCID - THE BAND  122
9.1.3 THE RECORDING  123
9.1.4 THE AC3 OPPORTUNITY  124
9.2 MAKING LOUCID  125
9.2.1 THE IMPROVISING SAXOPHONIST  125
9.2.2 THE SYNTHESIS  128
9.2.3 PREPARING THE AUDIO FOR ANALYSIS  128
9.2.4 ANIMATING LOUCID  130
9.2.4.1 Pre-visualisation  130
9.2.4.2 Buckling Walls  131
9.2.4.3 The Bass Cavern  131
9.2.4.4 The Saxophone Object  131
9.2.4.5 The Snowflake Phalanx  132
9.2.4.6 The Guitar Streaks  133
9.2.4.7 The Drum Lights  133
9.2.4.8 The Loucid Camera  133
9.3 THE DIGITAL SYNTHESIS STUDIES CD  134
9.4 LOUCID RETROSPECTIVE  135
10 HEISENBERG  137
10.1 3D SOUND IN HOUDINI AND CSOUND  137
10.1.1 THE SEED  137
10.1.2 DSP SKILLS  137
10.1.3 PAN  138
10.2 MAKING HEISENBERG  141
10.2.1 INTRODUCTION  141
10.2.2 POLE HUTS  142
10.2.2.1 Design  142
10.2.2.2 Huts  143
10.2.2.3 Vines  143
10.2.2.4 The Fly  144
10.2.2.5 The Orange Vortex  145
10.2.3 THE BLUE ROOM  145
10.2.3.1 Introduction  145
10.2.3.2 The Spinning Ball  145
10.2.3.3 The Breathing Room  146
10.2.3.4 The Fiery Dissolve  146
10.2.3.5 The Splatter  146
10.2.3.6 The Screech  147
10.2.4 THE GREEN ROOM  147
10.2.4.1 Introduction  147
10.2.4.2 The Flying Glitches  147
10.2.5 THE RED ROOM  148
10.2.6 FLESHSCAPE  148
10.2.6.1 Introduction  148
10.2.6.2 The Voices  149
10.2.6.3 The Tablas  150
10.2.7 CLOUDSCAPE  151
10.2.7.1 Introduction  151
10.2.7.2 The Clouds  151
10.2.7.3 The Blown Glass Bell  151
10.3 HEISENBERG RETROSPECTIVE  152
11 EPILOGUE  154
11.1 SUMMARY  154
11.2 FUTURE WORK  157
12 APPENDICES  159
APPENDIX A – VISUAL MUSIC COMPOSITION  160
Thoughts on a Non-Cartesian Approach to Visual-Music Composition  160
APPENDIX B – VISUAL MUSIC HISTORY  166
A Brief Historical Survey of Audio-Visual Intermedia Composition  166
APPENDIX C – SYNAESTHESIA  175
Synaesthesia – A Cognitive Model of Cross Modal Association  175
APPENDIX D – MYSTICISM  188
MYSTICISM IN ABSTRACT ARTS  188
APPENDIX E – SCHWARZCHILD – ABSTRACT ELECTRONIC MUSIC THEATRE  193
APPENDIX F – GESTALT GESAMTKUNSTWERK  201
Gestalt Approaches to the Virtual Gesamtkunstwerk  201
APPENDIX G – HEISENBERG – 3D-AV  214
APPENDIX H – PAN HSCRIPT  239
APPENDIX I – CSOUND 5.1 PANNING ORC + SCO  254
APPENDIX J – VARIOUS SCRIPTS  273
APPENDIX K – INSTRUCTIONS FOR DVD USE  274
13 BIBLIOGRAPHY  275


Abstract

This thesis outlines various issues related to the composition of the works included on the “Music VR” DVD. These works explore the relationship between spatialised music and visual mental imagery. Understanding these works draws on a number of research disciplines.

Human thought seems to move along a continuum, with mental imagery modes of thought at one end, and linguistic and conceptual modes at the other. Traditional western music theory places its emphasis on rational and linguistic approaches to music. Music may, however, be thought about in visuo-spatial terms. Such visuo-spatial analogies are particularly useful for highly timbral “soundscapes” and other modern computer-generated music. Visuo-spatial analogies of such music may be employed in the design of time-based visual arts involving 3D animations.

Visuo-spatial analogies of music are generated by a number of factors. Music which is expansively articulated and highly timbral tends to be interpreted by more spatio-visual areas of the brain. People also exhibit a distinct cognitive style in their approach to thought – some people are visualisers and some are verbalisers. Finally, people can adopt specific intentional positions through acculturation. An analytic approach will precipitate more linguistic thought about music. A passive, de-differentiated approach will precipitate more visuo-spatial thought about music. The latter approach is related to creativity and intuition. Emotion is understood to play a large part in suggesting visuals associated with particular passages of music.


Observing music and mental imagery can draw on phenomenological methods. Phenomenological methods are useful in approaching music without preconceptions – linguistic, physical or otherwise. The phenomenal relationship between music and mental imagery has been explored by music therapists working with the Guided Imagery and Music (GIM) technique. This research suggests that certain types of music give rise to mental imagery more consistently.

This visuo-spatial approach to music and the relationship between music and imagery is compared to some traditional music theories. Some conventional biases and epistemological problems with traditional theories are isolated, and a foundation for new theories is suggested. Features of recent theories that reflect developments in cognitive science are then described. In the final chapters, an outline of the way these new theories, techniques and ideas are involved in the development of the works on the Music VR DVD is set out. In Appendices A to G, numerous papers expand on specific areas concerning these works. Numerous examples of the computer scripts and code used to create the animations are also included in Appendices H to J. Instructions on how to use the DVD are found in Appendix K.


Statement of Originality

To the best of the author’s knowledge there are numerous original features in this research. Many of these result from a synthesis of existing creative practices and theory. The works on the DVD are the first abstract animated works to explore the representation of visual mental imagery associated with spatialised music using 3D animation and 3D sound spatialisation systems. (This applies in full to Schwarzchild and Heisenberg, but only in part to Loucid.) To date, this thesis represents the most comprehensive synthesis of research from various disciplines concerned with exploring the cognitive relationships that exist between visual mental imagery and music in relation to creative art works. Numerous unique creative challenges are identified and described for the first time in this thesis. These challenges have arisen as a result of a specific concern for spatially coincident and abstractly related audio-visual objects. Most important amongst these is the bi-directional, circular relationship that exists between the three areas of visual imagery, spatialisation and musical qualities. In creative works of the sort on the Music VR DVD, each area must be arrived at in a way that is coherent with the other areas. This demands a particular approach to composition and design which involves the prior imagination of coherent, inter-related, spatial audio-visual scenarios. To the best of the author’s knowledge the model of improvisation described in section 3.7 is perhaps the first discussion of improvisation in cognitive terms. The hypothesis describing “intuition” using the cognitive ideas of “mental imagery” and “primary process thought” is also original. Besides its reverberation algorithm, the “Pan” software system described in section 10.1.3 is also an original system. Pan is the first 3D sound spatialisation renderer created for a major commercial 3D animation package. It should be noted that the DVD included with this thesis needs to be experienced using a spatial sound reproduction system – either a 5.1 speaker array or stereo headphones.


Prologue

This thesis outlines an approach to the creation of spatial audio-visual composition. Works of the kind discussed here are to be found on the Music VR DVD that is included with this thesis. These audio-visual works explore relationships between spatialised music and visual mental imagery. These relationships are implemented in compositions designed for performance involving multiple speaker arrays and video projection systems. A dichotomy between linguistic and image-based mental modes, and a dichotomy between analytic and de-differentiated execution of thought, are discussed.

In Chapter 2, the reasons why people might associate music and visual mental imagery are explored. Many of these reasons are suggested by the various means of information transfer in the human brain. The human brain does not have a single area which is concerned with music cognition. Instead numerous areas which evolved to deal with other aspects of cognition become involved in music cognition. These include emotional, auditory, linguistic, visual, and spatial areas.

In Chapter 3, the research of various cognitive psychologists is discussed. Cognitive psychologists describe two main styles of thought – linguistic and image-based. The existence of these two styles is consistent with the combination of current neurophysical models of the brain and different intentional positions. While there can be complex relationships between language and imagery, each may function independently of the other. Associative thought using imagery instead of concepts as a basic unit of thought is associated with creativity. This provides evidence for a psychological basis for discussing the author’s works using the dichotomy of linguistic and image-based modes of thought.

In Chapter 4, an outline of Phenomenology is provided. This outline describes the methods, history, and application of phenomenology to works involving music and visual mental imagery. Phenomenological method involves a bracketing out of all preconceived ideas about phenomena, such as concerns about their causes. Phenomenology is an important way to observe music, and the sole means of observing the features and qualities of visual mental imagery resulting from music.

In Chapter 5, research involving music and mental imagery is discussed. This research has been undertaken in relation to a music therapy technique called Guided Imagery and Music (GIM). The observations of this research include the idea that emotion is highly connected to the stimulation of visual mental imagery. There is also some evidence that tension and release has a part to play in the generation of visual mental imagery. Containment is another interesting idea which can be applied by a range of composers and artists. This research is somewhat compromised by the limited range of music with which it concerns itself.

In Chapter 6, traditional approaches to western music theory and philosophy of art are discussed. Phenomenological approaches to music require the bracketing out of preconceived ideas about music. To achieve this it has been necessary to loosen any commitment that readers might have to preconceptions about the essential nature of music. Much western music theory has no firm epistemological foundation. Western music and music theory may be seen to have evolved under the influence of social and technological forces. Much of this theory is not applicable to new music or interdisciplinary art made with new technology.


In Chapter 7, modern music theories which can be applied to music made using new technology are discussed. New music styles in which the modification of timbre is a central concern have demanded new models of music. These new models frequently employ spatio-visual analogies. Theorists involved in highly timbral music created using new technology frequently refer to music in terms related to vision and gesture. In addition, new music theory being developed by cognitive musicologists builds on current understandings of mental imagery and the ways that music relates to spatial perception, vision and the body via emotion.

In Chapters 8, 9 and 10, the works from the DVD – Schwarzchild, Loucid and Heisenberg – are discussed. The ideas, challenges and solutions arrived at in the creation of each work are dealt with. One of these solutions – the “Pan” spatialisation system – is described. A full listing of the Pan code, as well as various other supplemental papers concerned with the works on the DVD, is presented in the Appendices. The Appendices follow a brief summary of findings and ideas for further work.


1 Introduction

This thesis concerns itself with an imagery-based digital music composition method, which has given rise to extensions in the visual realm. These visual extensions have occurred in an organic way, and there is continuity in the ideological paradigm that permeates the composition of both music and imagery. At the core of this ideological paradigm is a specific approach to music. For the author, music is any ordered arrangement of sounds and silences as they are heard. This definition “implies nothing about the intentions of the composer, or indeed, about whether there is a composer. It says nothing about the status of the score or about the nature of the instruments. Both the score and the instruments are as dispensable as the composer. To be more precise, then, I should say that music is the actualisation of the possibility of any sound whatever to present to some human being a meaning which he experiences with his body, that is to say, with his mind, his feelings, his senses, his will, and his metabolism.” (Clifton, 1983 p.1). This definition of “music” – one which describes almost any combination of sounds as they appear mentally – makes it similar to what Pierre Schaeffer would refer to as a “Sonic Object” (Palombini 1993). Michel Chion summarises Schaeffer’s sonic object thus:

a) “The sonic object is not the sound body”
b) “The sonic object is not the physical signal”
c) “The sonic object is not a recorded fragment”
d) “The sonic object is not a notated symbol on a score”
e) “The sonic object is not a state of mind (it remains the same across different listening modes).” (Chion 1983).


In this thesis music is always a sonic object – a sound that is heard to be organised. This is an important distinction to make, because to understand music in terms of visual imagery, it is necessary at first to approach music phenomena in the rawest form in which they present themselves to the mind. All preconceived ideas, prejudices, and external concerns must be bracketed out. Nothing must be taken for granted or assumed. The raw phenomena must be all that remains in the mind. One then needs to apply a particular cognitive strategy to the distillation of the music phenomena to visual essences. This approach can involve either a primary process or an analytic style of thought in what might be termed a visual “intentionality”.

“The term intentionality was coined by the Scholastics in the middle ages, and derives from the Latin verb intendo, meaning to point (at) or aim (at) or extend (toward)… The term was revived in the late 19th century by the philosopher and psychologist Franz Brentano, one of the most important predecessors of the school of Phenomenology.” (Gregory and Haugeland, 1987). To Brentano, intentionality is an irreducible feature possessed by many (perhaps all) mental states, of being of, about, or directed at something. Intentionality manifests itself strongly in what we call “expectations”, “desires” and ”imaginations”, although it is a feature of many, if not all modes of thought. It is suggested that it is difficult or impossible to have a thought that is not “about” something. Intentionality might be understood as the implicit directivity one takes to thought about phenomena.

Intentional positions in listening to music can become reinforced by consistent repetition. A reflex towards a particular intentional stance in listening to music can manifest itself as a tendency to interpret music in certain ways. As an example of this, a musician who has been transcribing music from recordings for days on end will find it difficult to listen to music without hearing it in terms of crotchets, quavers and the other devices of music notation. A linguist, on the other hand, may tend to hear music in semiotic or structural terms. A painter may interpret music in terms of line, shape, light, dark, colour and texture. A visual intentionality may therefore be described in general terms, as an approach to thought about music as if music were visual phenomena. It might be suggested that there are two general types of visual intentionality, one being analytic and the other being intuitive. These have been outlined in a previous paper by the author (Lyons 2002) (see Appendix G – Sections 2.3.7 and 2.3.8).

If a visual intentionality is successful, the listener may be able to summon a mental image in response to the music. Cognitive psychologists have recently studied such mental imagery in an effort to understand what people commonly refer to as “imagination”. (Richardson 1999). Mental imagery need not be purely visual in nature, however, and in cognitive psychology a distinction is made between auditory, visual, kinaesthetic, tactile, and olfactory imagery. All are referred to as “imagery” of one sort or another, but each one refers to imagery that relates to one of the human senses. As an example, visual mental imagery is often informally described as "seeing in the mind's eye" or "visualization". No type of imagery depends on external sensory stimuli for its existence; imagery arises instead from some “internal” source. Many believe that imagery has an integral semantic or emotional potential, which is referred to by some as “qualia”. Chalmers (1995) describes a quale as “the what it is like” element of experience. This is, however, a highly debated area within the philosophy of mind. (Dennett, 1988).

It is important not to confuse visual imagery with the other sorts of imagery. For this reason, the auditory kind of mental imagery is generally referred to specifically as auditory imagery. In addition, musical imagery refers to a specific type of auditory imagery associated with musical experience. An example of musical imagery is that section of music that one hears in the morning which then repeats itself over and over inside one’s mind throughout the rest of the day. Musical imagery is technically the sound of music in one’s mind resulting from some source independent of the external perception of a sound. It should also be noted that the cognitive term “musical imagery” should not be confused with visual mental imagery precipitated by musical stimuli. In the discussion to come, imagery will always be used in a generic way to denote the imagination of sensory stimuli related to one sensory modality or another. It does not specifically refer to visual imagery. This distinction is important in a discussion that concerns the perceived “cross-modal” exchange of attributes between one sensory modality and another.

Imagery may be seen to form a mental dichotomy with what is understood by the term “concept”. A concept is usually described as an idea framed entirely within the constraints of verbal language. While “language” is a term which is often used generically to describe any means of communication, in this thesis language will be limited to the description of modes of communication and thought which are derivatives of verbal languages such as English, French, German etc. This delimitation of that which is linguistic may be seen to draw its definition from the verbal / visuo-spatial dichotomy discussed in many areas of cognitive science. (Richardson 1999). On a pragmatic level, an essential feature of “language” is the way in which verbal words follow one after another in a serial sequence to form sentences and propositions that are attributed meaning. That which is “linguistic” therefore includes spoken languages as they are spoken and written, as well as thought that proceeds in a linguistic manner. Both logic and analytic thought are linguistic by this definition.

The study of imagery was an original concern of psychology when it began in the late nineteenth century. However, during the early and middle decades of the twentieth century, the positivist philosophy of behavioural psychology denied or ignored the existence of “internal” mental activity such as imagery. Following the “cognitive revolution” that took place during the last few decades of the twentieth century, imagery is again widely accepted as a valid and important concern of scientific enquiry. The “cognitive revolution” describes a new approach to knowledge of the human mind that resulted from the interdisciplinary cooperation of a number of existing research disciplines. These disciplines include cognitive psychology, neurophysics, artificial intelligence, linguistics and philosophy. Cognitive science may be thought of as the scientific study of thought. It aims to overcome the limitations of single disciplines of research by combining the results of many disciplines. Recent cognitive psychology has challenged the serial and amodal systems of Fodor’s (1975) original linguistic model of cognition in favour of an understanding in terms of mental imagery. (Kosslyn 1994). While the existence of imagery is now widely accepted, its exact nature is still debated. (Richardson 1999)

It is agreed by many scholars that Cognitive Science constitutes a positive scientific means with which to study the arts. Aspects of consciousness, perception, emotion, and imagination all lend themselves well to study using various cognitive science techniques. Many theorists see the cognitive sciences as a release from the linguistic nihilism and armchair theorising of post modernism (Anderson, 1996, pp.7-8). It is important for the arts that new art theory need no longer rely solely on philosophy for a basis.

In the context of music research, the cognitive revolution has spawned fields such as cognitive musicology and cognitive neuromusicology. The perspective of these fields in relation to music provides an exceptional framework with which to rebuild music theory in such a way that it has some credible epistemological foundation. Cognitive psychology also provides an excellent account of all forms of mental imagery. This in turn provides an excellent platform from which to study cross modal exchanges and to develop interdisciplinary arts.


2 The Human Brain

2.1 The Identity Theory of Mind

Rather than launch headlong into a speculative discussion of music and imagery, an understanding of the human brain is going to be used as a foundation on which to build further discussion. In discussing the brain, a particular view of the relationship between thought and the brain is going to be taken. This is a common view, which holds that cognitive states and processes are related to physical states of the brain. In this view, most cognitive activity is associated in a fairly consistent way with certain parts of the brain. This theory was originally developed following observation of the consistent behavioural peculiarities exhibited by patients with physical damage to specific parts of their brains.

There is no single dedicated area of the brain responsible for music cognition. It is believed that the “experience” of music draws on various types of perception and cognition developed for other survival related functions. Music can therefore be studied to some extent according to the various parts of the brain activated by different aspects of musical experience. This can be understood by considering the analogy that many distinct areas of the brain have a generic cognitive “perspective” which can be applied to many forms of mental activity.

In this way – and probably for this reason – similarities may be perceived to exist between aspects of diverse sensory experience which happen to rely consistently on shared areas of the brain during cognition (e.g. describing a melody in spatial terms associates visual and auditory perception). In many instances, the way that people think about things is suggested – and even directed – by the physical structure of the human brain. It is therefore important to build an understanding of the brain into the foundations of any theory of music and cross modal exchange.

2.2 The Architecture of the Brain

As a general introduction to the architecture of the human brain, it should be understood that the “human brain contains at least five anatomically distinct networks: spatial awareness, language, object recognition, explicit memory/emotion, and working memory/executive function. In keeping with the principles of selectively distributed processing, each epicenter of a large-scale network displays a relative specialization for a specific behavioral component of its principal neuropsychological domain.” (Cytowic 2002, p.237). It is worth noting two general things in the context of the discussion to come. The first is that memory is associated with emotion, and the second is that areas of the brain associated with spatial awareness and language are largely anatomically separate.

There are a number of systems for naming different areas of the brain. On the largest scale the brain is referred to as having three general areas that evolved one after another. These are the Brainstem, the Limbic System, and the Cerebral Cortex. Each of these main areas has sub areas. The Cerebral Cortex is divided into left and right hemispheres, and has four lobes. Figure 2.1 below shows these various areas.


Figure 2.1 – The Human Brain

2.3 Music and the Brain

2.3.1 Imaging Techniques

Positron Emission Tomography (PET) is a procedure that allows a physician to examine the chemical functioning of the human brain and other organs. Data from PET imaging is usually calibrated, coloured and displayed in the form of images. In the context of the brain, PET imaging permits doctors to see which internal parts of the brain are active at a particular time. An electroencephalogram (EEG), on the other hand, is a measure of electrical scalp potentials, and requires that electrodes be placed in various positions on the scalp. While a PET image provides a highly detailed snapshot of the brain at any instant, EEG data provides a representation of brain activity which tracks activity over time.

2.3.2 Music and Imaging

In relation to music specifically, recent research involving PET imaging of neural activity in the human brain has shown that different areas of the brain are responsible for the cognition of different musical attributes. It has been shown that pitch recognition, rhythm, familiarity, melodic structure and timbre all depend on different areas of the brain.

2.3.3 Rhythm

Platel and his colleagues found that rhythmic, temporal and sequential components of music tasks involve “Broca’s area”. Broca’s area is found in a frontal, left hemispheric, lexico-semantic (linguistic) area of the brain closely connected to hearing mechanisms. (Platel et al. 1997). Similarly, Zatorre (2001) found that “cerebral blood flow in a region of the left auditory cortex showed a greater response to increasing temporal than spectral variation, whereas a symmetrical area on the right showed the reverse pattern.”

2.3.4 Semantic Anchorage

As with the rhythm task, the recognition of familiar melodies also involved Broca’s area. (Platel et al. 1997). Platel suggests that “there can be no doubt that this task, more than any of the other three, may bring to mind verbal material and induce subjects into lexico semantic search.” (Platel et al. 1997, p.238).

It should be noted that the subjects of Platel’s study were all right-handed French males. With males, the cognition of lexico-semantic properties is largely confined to the left inferior frontal gyrus (Broca’s region). Women, however, tend to utilise similar areas in their right hemispheres in linguistic tasks as well. (Shaywitz 1995). These differing cognitive strategies suggest differing styles of thought in response to language.

2.3.5 Pitch and Visual Space

The findings of Platel and his colleagues in relation to pitch recognition tasks confirmed what numerous other studies had found before. They found that there are two main cognitive strategies involved in pitch tasks, each of which involves a different intentional position. The analytic discernment of pitch intervals involves front left areas of the brain, while a more passive approach to distinguishing global melodic outlines has more to do with the right hemisphere. They suggest that this “'hemispheric' effect, linked to a cognitive strategy, has already been suggested with PET by the work of Mazziotta et al. (1982) and Phelps and Mazziotta (1985). These authors showed that activations during the comparison of note sequences (Seashore test) were localized on the left (superior temporal areas) when the subject's strategy was 'analytic', and on the right (inferior parietal and temporo-occipital areas) when the subject's strategy was 'passive'.” (Platel et al. 1997, p.239).

Platel and his colleagues also found that areas of the cuneus/precuneus were stimulated in the pitch interval task. “Being close to the primary visual areas, the activation of this region is often interpreted as reflecting visual 'mental imagery' (Demonet et al., 1992; Grasby et al., 1993; Fletcher et al., 1995). The contribution of the cuneus/precuneus to tasks involving visual material has been demonstrated (Corbetta et al., 1993). In their study, Sergent et al. (1992a) found activation foci, close to ours, in the left cuneus/precuneus areas for the reading of a musical score by musicians ... What is the evidence that allows us to think that a visual mental strategy could be induced by this task? Probably more than any other task in our protocol, the pitch task tapped into visual imagery in terms of 'high' and 'low' in relation to a notional base line (a 'mental stave'), and this idea was suggested by the subjects themselves during debriefing.” (Platel et al. 1997, p.239).

This would seem to verify experimental evidence provided by Leonard Meyer which suggests that spatially oriented terms like high, low, rounded, pointed, bright and dark have more than metaphorical meaning. For Meyer, "the fact that cultures all over the world tend to characterize pitches in spatial terms ... supports the view that this way of perceiving pitch is at least partially innate." (Meyer 1967 pp. 250-251). There is still some debate about whether thought about pitch in terms of “high” and “low” is a universal cognitive strategy in all humans, or merely a cognitive strategy employed by those exposed to western music education. (Walker, 1987).

2.3.6 Timbre, Melodic Shape, and Space

Also of interest in the Platel (1997) neuro-imaging study was the fact that the recognition of both timbre and melodic contour tasks involved right-hand areas of the brain. “Four general abilities relatively dependent on the right hemisphere are (1) nonlinguistic perception, especially that involving spatial configurations, (2) spatial distribution of attention, (3) expression of emotion, and (4) the nonlinguistic aspects of communication (e.g., the melodiousness or prosody of speech, facial expression, gesture, emphasis, intonational pitch, attitude, comprehension of situational context, and cues regarding the interpersonal dynamics of a conversation). The right hemisphere's talent for visuospatial perception and manipulation have received the most attention, even though its skill at complex perceptual tasks extends to hearing and touch. The right hemisphere is also superior at depth perception, locating objects in space, identifying geometric shapes, and assessing spatial orientation by touch. The right hemisphere appears to be dominant in all aspects of emotional expression and experience.” (Cytowic 2002, p.248). That larger melodic forms should require the mediation of areas of the brain required for spatio-visual cognition is consistent with the common application of spatial metaphors such as “form” and “contour” to describe melody.

Besides activity in the right hemisphere, Platel and his colleagues also found that the timbre task precipitated activity in two small left-sided posterior loci adjacent to the visual cortex. This they suggested implied “some recourse to visual mental imagery.” (Platel et al. 1997, p.239).

The findings of the Platel et al. study in regard to music and visual mental imagery have been confirmed by another EEG and PET imaging study. Nakamura et al. (1999) found that brain activity resulting from listening to Indonesian Gamelan music “may reflect the interaction of the music with the cognitive processes, such as music evoked memory recall or visual imagery.” (Nakamura et al., 1999, p.222).

2.4 The Brain, Music and Visual Imagery

2.4.1 Imagery and the Brain

Stephen M. Kosslyn suggests that the current resurgence of interest in mental imagery is not being driven by philosophical concerns or claims based on introspection as in the past. Instead, many of the old arguments about mental imagery are being resolved using new brain imaging methods and new ways to conceptualise imagery suggested by artificial intelligence. (Kosslyn et al., 1995). He suggests that, “because imagery has been shown to share mechanisms with like modality perception, theories of imagery have been able to stand on the shoulders of theories of perception. This state of affairs has resulted in the somewhat ironic fact that an area that was considered beyond the pale 20 years ago is now one of the first cognitive domains to be firmly rooted in the brain.” (Kosslyn et al., 1995, p.1).

The neuroanatomical areas of the brain that underpin visual mental imagery are understood not to involve the visual cortex area itself. For a long time it was thought that an area of the right hemisphere must be responsible for mental imagery. Richardson (1999) suggests that the modules responsible for visual imagery may in fact be distributed across the brain. All that can be said is that structures in the occipital, or posterior portion of the left hemisphere seem to be crucial to the generation and experience of mental imagery. Structures in the right hemisphere seem to be concerned with the manipulation and transformation of mental imagery once it has been created. (Richardson, 1999).

The neuroanatomical areas of the brain that underpin auditory imagery are thought to be the same as those that underpin hearing itself. Petr Janata suggests that the auditory cortex and other nearby areas of the superior temporal gyrus are activated during auditory imagery tasks. (Janata, 2001).

2.4.2 Cross Modality

A more general investigation of the neural basis of cross modal exchange was undertaken by Stein and Meredith in their 1993 book, The Merging of the Senses. This book points out that many structures in the human brain have a high level of multimodal functionality. These include elementary structures in the brain such as multi-modal neural pathways as well as numerous secondary association cortices. The superior colliculus is isolated as one area of the midbrain which integrates information from visual, auditory and somatosensory (touch) sensory inputs. The top layers of the superior colliculus are primarily receivers of visual sensory information; however, the intermediate and deep layers receive projections from many functionally different areas of the brain. These include visual, auditory and somatosensory inputs, as the superior colliculus plays a role in helping orient the head and eyes to all types of sensory stimuli. It is suggested that the cognition of higher levels of abstract interaction between the perception of sound, vision and the body takes place here. It is notable that the perception of smell and taste are not present in the superior colliculus.

Another neuroscientist concerned with cross modal relationships is Richard Cytowic. The second edition of his Synesthesia – A Union of the Senses (2002) includes a thorough summary of up to date knowledge concerning the human brain and the perception of cross modal relationships. Like Stein and Meredith, Cytowic points out that elementary structures in the brain such as neural pathways can be multi-modal in nature. He also points to the numerous secondary association cortices. In his search for the neural source of Synesthesia however, Cytowic, is also concerned with sub-cortical areas of the brain known as the limbic system. The limbic system is regarded as the clearing house for all raw sensory information, and as the source of emotional states and memory. Cytowic and other neuroscientists increasingly regard emotional states as being the fundamental mode of consciousness. (For a more detailed discussion of music and Synesthesia please see Lyons 2001 in Appendix C.)

2.5 Music and Emotion

2.5.1 Consciousness

It should be noted that while the basic relationship between the brain and low-level cognition is widely accepted, the relationship between the brain and what some philosophers call “qualia” and consciousness is more widely debated. (Dennett 1988) (Chalmers 1995). Because theories of consciousness are at present a necessary construct with which to explain the experience of “what it is like” to experience music, art and the world, a position will be taken here in which consciousness is a given. Although it is beyond the scope of this thesis to tackle the “hard problem” of philosophy of mind, a brief definition of consciousness and some issues which surround it will be provided courtesy of David Chalmers:

“The word 'consciousness' is used in many different ways. It is sometimes used for the ability to discriminate stimuli, or to report information, or to monitor internal states, or to control behaviour. We can think of these phenomena as posing the "easy problems" of consciousness. These are important phenomena, and there is much that is not understood about them, but the problems of explaining them have the character of puzzles rather than mysteries. There seems to be no deep problem in principle with the idea that a physical system could be "conscious" in these senses, and there is no obvious obstacle to an eventual explanation of these phenomena in neurobiological or computational terms.” … “The hard problem of consciousness is the problem of experience. Human beings have subjective experience: there is something it is like to be them. We can say that a being is conscious in this sense — or is phenomenally conscious, as it is sometimes put — when there is something it is like to be that being. A mental state is conscious when there is something it is like to be in that state.” (Chalmers 1995).

2.5.2 Emotion

At this point, a definition of emotion will be provided. “Emotion is a complex set of interactions among subjective and objective factors, mediated by neural/hormonal systems, which can (a) give rise to affective experiences such as feelings of arousal, pleasure/displeasure; (b) generate cognitive processes such as emotionally relevant perceptual effects, appraisals, labeling processes; (c) activate widespread physiological adjustments to the arousing conditions; and (d) lead to behavior that is often, but not always, expressive, goal-directed, and adaptive.” (Kleinginna and Kleinginna, 1981, p.355).

2.5.3 The Accretion Hypothesis

Recent neuroscientific research has concerned itself with areas of the brain associated with emotional experience. (Cytowic 2002). A look at the evolved architecture of the human brain serves to shed light on the idea that our most profound levels of mental activity are “emotional” in nature. It is understood that to some large degree these areas mediate all human activity, mental or otherwise – whether we are aware of it or not. (Cytowic 2002).

Figure 2.2 – The Triune Brain Model

“In 1949, the American neurologist Paul MacLean originated the triune brain model, an embodiment of what was known as the accretion hypothesis at its height. This view, long popular but now much modified, held that newer structures were added onto those of the reptilian brain and were accompanied by correspondingly new mental skills and behaviours. MacLean's conceptual refinement of three-brains-in-one proposed that human brains contain three neural systems, each of different evolutionary age and each governing a separate category of behaviour. The oldest "reptilian" brain, represented by the brainstem and basal ganglia, deals with self-preservation. The middle "paleomammalian" brain is our inheritance from the mammal-like reptiles and is concerned with preservation of the species (e.g., sex, procreation, and socialisation), plus supposedly unique mammalian behaviours such as nursing, maternal and paternal care, audiovocal communication, and play. The components of the paleomammalian brain are collectively called the limbic system, which in humans deals mostly with emotion and memory. The evolutionarily youngest "neomammalian" brain is embodied in the great expanse of cortex that is seen as a chief executor.” (Cytowic 2002 pp. 209-210). There is much evidence to suggest that the areas of the brain associated with verbal language are located in the front left cortex, and that verbal language is phylogenetically a recent development in the human species. It is also hypothesised that language is a more evolved form of an earlier, purely gestural mode of communication.

2.5.4 The Limbic System

While the Triune model is now regarded more as a useful metaphor for brain organisation than a faithful model, it serves to suggest why certain modes of mental activity may seem more deeply embedded than others. The Limbic System is frequently referred to by Cytowic and others as “the emotional brain”. It mediates not only all sensory input in the form of neurons and axons, but also controls the emission of a range of hormones and peptides that form the extracellular fluid which surrounds these systems. An example of a hormone emanating from the limbic system is oxytocin, which the hypothalamus secretes during orgasm, lactation and birthing. Extensions of the limbic system permeate every major division of the nervous system, thereby suggesting that the brain’s core is an emotional one. The limbic system is suggested to be responsible for regulating and organising all possible means of information transfer in the human brain. (Armstrong 1990, 1991). Figure 2.3 below shows the human brain and the Limbic System in particular.

Figure 2.3 – The human brain with detail of the Limbic System.

Richard Cytowic illustrates this idea with an analogy: "… neuroscientists have just lately come to acknowledge how important emotion is in our mental life. In believing reason to be the superior and dominant force that guides our thinking and behaviour, we simultaneously hold the dichotomous view that emotion must be primitive and disruptive, an interference to clear logical thinking. People who think of their brains at all usually imagine a computer in their heads, a reasoning machine that runs things. This is consistent with the (antiquated) hierarchical model of brain organisation. However, like the carnival barker who pretended to be the Wizard of Oz, hiding behind the curtain while shouting, "Pay no attention to the man behind the curtain," placing reason and the neocortex foremost overstates the case because emotion and the mentation not normally accessible to self awareness are often what's behind the curtain pulling the levers." (Cytowic, 2002 p.212).

Figure 2.4 – Marvin Minsky’s A-B Brain from “The Emotion Machine”.

In a similar vein, but from the point of view of someone attempting to model artificial intelligence, Marvin Minsky describes the brain as having multiple strata which filter information at different levels of complexity. He puts it this way: “Your B-Brain does not connect to those sensors, but only gets signals that come from A. So B cannot 'see' any actual things; it can only detect A's descriptions of them. Therefore, to the B-Brain, A is its 'outer world'—a world that contains no physical ‘things’, but only their second-hand descriptions. Nor can B directly move things in the world; it can only send signals that change what A does. Nevertheless, B thus can control how A will react to its situation, and how A will deal with the problems it faces.” (Minsky 2004).
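As a purely illustrative aside (not drawn from Minsky's text or from the works on the DVD), the toy sketch below models the layering Minsky describes: a B layer that never touches sensors or actuators, receives only A's summarised reports, and acts only by adjusting how A will respond next time. Every name and number in it is invented for the illustration.

# Toy illustration (assumed structure, not Minsky's or the author's code) of a
# two-layer "A-Brain / B-Brain" arrangement: B sees only A's descriptions of
# the world, never the raw sensor data, and acts only by adjusting A.

class ABrain:
    """Connected to the (simulated) sensors and actuators."""
    def __init__(self):
        self.caution = 0.5  # behavioural bias that B may adjust

    def sense_and_act(self, raw_reading):
        action = "retreat" if raw_reading > self.caution else "advance"
        description = f"obstacle_level={raw_reading:.1f}, action={action}"
        return action, description  # only the description is passed upward

class BBrain:
    """Sees only A's descriptions; influences A by sending control signals."""
    def watch(self, description):
        # B reasons about A's report, not about the world itself.
        if "retreat" in description:
            return {"caution": 0.7}  # ask A to be more cautious next time
        return {}

a, b = ABrain(), BBrain()
for reading in (0.2, 0.6, 0.3):
    action, report = a.sense_and_act(reading)
    for key, value in b.watch(report).items():
        setattr(a, key, value)  # B can only change how A will react
    print(report)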

2.5.5 Imagery and Emotion

Studies of the brain during a music therapy technique known as Guided Imagery and Music (GIM) provide further insight into the relationship between music, emotion and imagery. "Goldberg (1992) draws on a neuroanatomical framework to understand the relationship, interaction and action potential between images, affect and the music. She argued that music triggers the memories and images which in turn activate emotions. Music is processed through the limbic system of the autonomic nervous system (ANS), which in turn processes emotions via the connections with the hypothalamus and amygdala. The amygdala also houses long-term memories and the emotions associated with those memories (Erdonmez, 1993), so that in GIM sessions it is common for memories from childhood to surface with associated strong emotion." (Erdonmez Grocke 1999, p.53).

"Achterberg (1985) also draws on the neuroanatomical model to explain that images too, are processed through the ANS. The question arises whether one modality (music, image or affect) occurs sequentially before another. Does the music suggest the image which activates emotion? Does the imagery sequence unfold independent of the music stimulus? Is emotion aroused directly by the music without an image? Goldberg proposed a Field Theory of GIM in which the music is the central field" … "The direction of the arrows indicates that the music evokes emotion which stimulates imagery." (Erdonmez Grocke 1999, pp.53-54). In this regard Achterberg does not agree with Goldberg completely. It should be noted however that connections in the brain are not exclusively linear in nature, but involve, “parallel, recursive, feedforward and feedback connections. There also exists a wide assortment of molecules, such as hormones and peptides that likewise act as information messengers.” (Cytowic 2002, p.213). Given this, it is possible that there is a closer binding between imagery and emotion than either Achterberg or Goldberg suggest.

2.5.6 Ecstasy

One final neuroscientific study to which I wish to refer discusses the neurological basis of the tingling sensation that can run through the body in response to music. “We have shown here that music recruits neural systems of reward and emotion similar to those known to respond specifically to biologically relevant stimuli, such as food and sex, and those that are artificially activated by drugs of abuse. This is quite remarkable, because music is neither strictly necessary for biological survival or reproduction, nor is it a pharmacological substance. Activation of these brain systems in response to a stimulus as abstract as music may represent an emergent property of the complexity of human cognition. Perhaps as formation of anatomical and functional links between phylogenically older, survival-related brain systems and newer, more cognitive systems increased our general capacity to assign meaning to abstract stimuli, our capacity to derive pleasure from these stimuli also increased. The ability of music to induce such intense pleasure and its putative stimulation of endogenous reward systems suggest that, although music may not be imperative for survival of the human species, it may indeed be of significant benefit to our mental and physical well-being.” (Blood & Zatorre, 2001, p. 11823). There is also evidence that mental imagery can result in an improvement in the human immune system. (Rider and Achterberg, 1989).

2.6 Neurophysics Summary

The human brain may be thought of as an extremely complicated system. Numerous opportunities exist for cognitive areas related to one sensory mode to interface with those related to other sensory modes. In the context of this thesis, neurophysical research provides major insights into the nature of visual mental imagery experienced in response to music. One area of insight neuroscience offers is the fundamental cognitive directivity provided by emotion. It would seem that after music has been transformed by the perceptual apparatus, emotion has the potential to determine the global nature of our response to music. This is especially the case with autonomous visual and kinaesthetic imagery. The other findings can be summarised thus:



- Many areas of our brains merge sensory information – especially between auditory, visual and bodily senses.
- Familiarity, rhythmic and pitch interval tasks appeal to linguistic areas of the brain.
- Pitch and timbre tasks activate areas of the brain found near the visual cortex which are associated with visual mental imagery.
- Timbre and melodic outlines are associated with the right hemisphere, suggesting spatial manipulation of mental images.
- Different intentional positions have an effect on which hemisphere of the brain is associated with music cognition. Analytical intentional positions are associated with linguistic areas of the left hemisphere. Passive intentional positions are associated with stimulation of right hemisphere areas.
- Emotion may be understood to play an important role in relationships between music and other mental imagery forms.

Although some electro-acoustic music is intentionally articulated in a linguistic manner, these findings might explain why more expansively articulated and timbrally transformational electro-acoustic music styles can seem so non-linguistic in any traditional sense. Thus “soundscapes” and other slowly evolving and highly timbral music forms seem to many composers and scholars to be highly spatio-visual. As it turns out, sonic forms of this sort are in fact much more associated with non-linguistic areas of the brain. It might seem obvious, therefore, that these music forms should profit from models of music that are more essentially spatio-visual and kinaesthetic than linguistic. This insight has been applied in the creation of the works on the Music VR DVD.


3 Cognition

3.1 Mental Imagery

Mental imagery has already been discussed briefly in the introduction of this thesis; however, a brief overview will be provided here. "Let us provisionally define mental imagery as quasi-perceptual experience, experience that significantly resembles perceptual experience (in any sense mode), but which occurs in the absence of appropriate external stimuli for the relevant perception … There are normally important experiential differences between imagery and perception – without them imagery may slide into hallucination – but these need not concern us here; it is the similarities which are definitive." (Thomas, 1999).

Cognitive Psychologists aim to study human behaviour in a way that is coherent with the understanding provided by Neurophysicists. Much of what Neurophysicists tell us about the separation of linguistic and spatio-visual areas of the brain along lobal and hemispheric lines is reflected in what Cognitive Psychologists tell us about human information processing abilities. Language and imagery systems are understood to constitute quite different cognitive styles. (Richardson 1999). It is suggested that with certain intentional positions, thought using imagery can take place without the involvement of linguistic cognitive modes. (Richardson 1999). The transformational processes of thought which use imagery - instead of concepts - as a basic unit can be very different to those of analytic thought. Types of thought involving imagery may be seen to be very useful in establishing cognitive relationships between music and visual imagery. They are also useful in creative approaches of various sorts. (Dailey 1994).


3.2 Perceptual Bases of Cognition

Recent cognitive science suggests quite strongly that all human cognition is modal in some sense – meaning that it pertains to one or another sensory mode. (Barsalou, 1999). The perceptual bases of cognition theory of Lawrence W. Barsalou suggests that, "cognition is grounded in the sensory-motor mechanisms of the brain. According to this view, cognition does not utilise knowledge that is amodal or non-perceptual. Instead, all knowledge is modal in one way or another. For example, knowledge about colours is represented in the perceptual mechanisms that perceive colours. Similarly, knowledge about sounds is represented in the perceptual mechanisms that process sounds; knowledge about actions is represented in motor mechanisms; knowledge about emotions is represented in the mechanisms that underlie emotional experience; and so forth. In general, when we conceptualise a kind of entity or event in its absence, we partially run sensory-motor mechanisms as if it were present. In other words, we simulate the entity or event perceptually, partially running modality-specific brain mechanisms in the manner that they would run during an actual experience." (Barsalou, 1999). Barsalou's theory represents a shift away from older amodal, propositional theories of cognition in which it was believed that language formed the best analogy for mental activity. (Fodor, 1975) (Pylyshyn, 1978).

3.3 Cognitive Styles

Cognitive Psychologists believe that there are two major categories of perceptual styles in human beings. (Richardson 1999 p.104). In their discussion of mental processes cognitive psychologists regularly discuss "visualisers" and "verbalisers." (Richardson 1999 p.104). The cognitive differentiation between visualisers and verbalisers was made on the basis that people tend to favour specific strategies repeatedly in cognitive tasks. "One of the first psychologists to relate memory performance to people's learning strategies was Bartlett (1932, pp. 59-61, 109-112). He found that he could classify his subjects on the basis of their informal comments either as "visualisers", who claimed to rely mainly upon visual imagery in remembering, or as "vocalisers", who claimed to rely mainly upon language cues rather than mental images. Although the vocalisers tended to be less confident in their recall, the two groups produced comparable levels of memory performance. Bartlett suggested that individual people tended to adopt the same approach to remembering across different experiments, and therefore the distinction between "verbalisers" and "visualisers" came to be regarded as reflecting a relatively stable characteristic of individuals or, in other words, a dimension of cognitive style." (Richardson 1999 p.104).

This theory, developed further by Paivio “assumes that cognitive behavior is mediated by two independent but richly interconnected symbolic systems that are specialized for encoding, organizing, transforming, storing, and retrieving information. One (the image system) is specialized for dealing with perceptual information concerning nonverbal objects and events. The other (the verbal system) is specialized for dealing with linguistic information. The systems differ in the nature of the representational units, the way the units are organized into higher-order structures, and the way the structures can be reorganised or transformed.” (Richardson 1999 p.81).

The PET brain imaging studies presented in Chapter 2 suggested that music cognition draws on various areas of the brain. It was suggested that the ability of music to evoke emotional responses had a lot to do with the evocation of visual mental imagery. It was also suggested that different psycho-acoustic dimensions of music were associated with different parts of the brain. Finally, it was also suggested that different intentional positions had an effect on which hemisphere of the brain was associated with music cognition. A passive listening approach applied to an expansively articulated timbral music might involve a limited recourse to linguistic strategies. Instead, areas of the brain associated with visual mental imagery and spatial manipulation may be more involved.

In the context of this thesis, these findings, together with the independence of verbal and image based cognitive styles described by Paivio, suggest something important about relationships between music and visual imagery. While there is a continuum between linguistic and image based modes of thought, it is possible to interpret music visually without any mediation involving the reduction of phenomena to the static concepts and linguistic symbols required in dialectic processes. This is especially the case with expansively articulated, highly timbral music forms.

3.4 Dual Coding Theory

With the difference between visual and verbal modes of cognition clear, the way in which these modes interact can be modelled. Paivio "identified three levels at which information might be processed. The first was the representational level, where the sensory trace that was produced by an item when it was perceived arouses the appropriate symbolic representation in long-term memory. Thus, words activated verbal representations (which Paivio called "logogens"), whereas perceptual experiences activated imaginal representations (which he called "imagens"). The second was the referential level, where symbolic representations in one system aroused corresponding representations in the other system; these interconnections were assumed to be involved in naming or describing objects, on the one hand, and in creating the image of an object when given its name, on the other hand. Finally, the associative level involved associative connections among images, among verbal representations, or among both." (Richardson 1999 p.83). These levels are illustrated in Figure 3.2 below.


Figure 3.2. Paivio’s Dual Coding Theory

A difficulty in accepting visual approaches to musical thought may result from a tendency to employ more verbal than visual cognitive strategies to music. People can be predisposed to interpret music in primarily linguistic terms, either through some acculturated analytical intentional position or through an innate verbal cognitive predisposition. It has been suggested that some types of music make it easy to relax any linguistic intentionality in the listening process and so perceive music using image based cognitive processes almost exclusively. In this way it is possible to encode music using Paivio's "imagens" and create relationships between musical materials using what he calls "associative relationships". No recourse to referential relations with logogens, or referential feedback between logogens and imagens, need occur in the musical act or in the interpretation of such music.

3.5 Reasoning Using Imagery

The cognition of music in the form of imagens suggests the potential for a unique non-linguistic form of musical thought. Stephen M. Kosslyn suggests that, "One reason why image transformations are important is that images are more than a means by which information is learned, stored and retrieved. They also play a key role in several types of reasoning." (Kosslyn et al., 1995, p.5).

Kosslyn goes on to suggest that, "Imagery can play a role in two distinct types of reasoning. First, one can reason about perceptual properties themselves. For example, consider how you decide what is the best route to take to get to the airport at a specific time of day from where you work, or how you would decide whether a sofa seen in the store would 'fit' in your living room. In both cases, imagery is used to carry out a kind of 'mental simulation'. Second, imagery can be used in abstract reasoning. Indeed, imagery apparently played a role in some of the key discoveries in science. For example, an image of snakes biting their own tails apparently led Kekule to infer the ring-structure of benzene. In this sort of reasoning images are used as symbols. Such reasoning occurs when one visualizes Venn Diagrams or lines with dots on them as aids to solving logic problems." (Kosslyn et al., 1995, p.5).


Kosslyn also describes a further kind of reasoning that involves "the inferences one makes when comprehending language. Imagery is often contrasted with verbal abilities, but it is clear that the two faculties work together in many ways. We earlier noted that imagery can help one to learn new information, including verbal information. In addition, imagery can help one to comprehend verbal descriptions." (Kosslyn et al., 1995, p.5).

3.6 Similarity and Metaphor

Cognitive research into similarity and metaphor addresses another type of reasoning using imagery, one which has the potential to shed much light on the relationship between music and visual imagery. The study of similarity is concerned in part with how people categorise things. Understanding how and why people might categorise certain types of music with certain types of visual imagery would assist a great deal in understanding the development of works such as those on the Music VR DVD.

A recent compendium of papers on similarity provides an overview of the tensions that exist between various views. (Sloman and Rips, 2001). The tension between rule based conceptual approaches and more perceptual, association orientated approaches is pertinent to the discussion undertaken in this thesis. It is suggested that deductive inference may be explained well by rule based systems, whereas similarity is particularly applicable to inductive inferences – this is particularly the case in relation to perceptual properties. "Rips (1989) has shown that certain properties of natural kinds can differentially affect classification and similarity judgments. In particular, similarity judgments are more sensitive to perceptual properties than to other more central properties." (Sloman and Rips, 1998, p. 96).


The proponents of similarity hold the "view that similarity is primitive in perception and cognition. Although our judgments of similarity might be subject to biases, there is a raw feeling of similarity that we have when we confront two objects that is fixed once and for all by our cognitive system. We'd expect similarity of this type to be relatively automatic, fast, perceptual, and impenetrable (i.e. unaffected by a person's other beliefs). On this view, similarity is loaded with explanatory power, for similarity relations are fundamental." (Sloman and Rips, 1998, p. 90). Goldstone and Barsalou suggest that, "In general, we suspect that associative mechanisms are, in general, a large class of relatively automatic processes, and that rule mechanisms are, in general, a large class of relatively controlled processes." (Goldstone & Barsalou, 1998).

In their 1998 paper, Goldstone and Barsalou "discuss the advantages, power, and influences of perceptually-based representations. First, many of the properties associated with amodal symbol systems (e.g. productivity and generativity) can be achieved with perceptually-based systems as well. Second, relatively raw perceptual representations are powerful because they can implicitly represent properties in an analog fashion. Third, perception naturally provides impressions of overall similarity, exactly the type of similarity useful for establishing many common categories. Fourth, perceptual similarity is not static but becomes tuned over time to conceptual demands. Fifth, the original motivation or basis for sophisticated cognition is often less sophisticated perceptual similarity. Sixth, perceptual simulation occurs even in conceptual tasks that have no explicit perceptual demands. Parallels between perceptual and conceptual processes suggest that many mechanisms typically associated with abstract thought are also present in perception, and that perceptual processes provide useful mechanisms that may be co-opted by abstract thought." (Goldstone & Barsalou, 1998). Similarity research thus has the potential to shed much light on the cognitive relationships between music and imagery.


3.7 Creativity

There are presumably other types of image based thought which are not directed by executive areas of the brain. Such types of thought are often termed "dedifferentiated" styles of thought. (Dailey 1994). Dedifferentiated thought is often associated with creativity. In her 1994 PhD thesis, Audrey Dailey describes the relationships that exist between creativity, primary process thinking, synesthesia, and physiognomic perception. For Dailey, "Creativity does appear to be a function of an indeterminate interplay between personality and cognitive style. Actually, it is only within a certain configuration of personality traits that creative cognition tends to occur (Martindale, 1989). There is considerable evidence that creative individuals have easier access to more primitive modes of thought, frequently referred to as primary process thinking. One may think of the levels of consciousness as a continuum with a dedifferentiated (primary process) mode of cognition positioned at one end and a differentiated (secondary process) mode of cognition at the other. Dedifferentiated and differentiated are terms proposed by Werner (1948) to account for thinking styles analogous to what is today more commonly referred to as primary process and secondary process thinking. Primary process thinking is free-associative, autistic, analogical, and tends to operate on concrete images like those experienced in dreams, reveries, and hypnogogic states. (Martindale, 1981, 1989). Autistic thinking, as used here, refers to an associative, non-conceptual thinking involving concrete images rather than causally logical cognition (Whitmont, 1991)." (Dailey 1994, pp.2-3). Dailey suggests that, "It appears that excessive cortical control narrows one's range of experience by limiting awareness or openness to alternate modes of thought." (Dailey 1994, p. 179).

Because creativity is often considered an important aspect of artistic endeavour, primary process, or dedifferentiated, styles of thought may be of much interest to artists. Thinking using imagery has been attributed to many artists and scientists in Dailey's thesis. Dailey believes that, "primary process thinking involves an inhibition of neocortical activity with a relative enhancement of limbic brain activity. Creative individuals show a decrease in cortical activity when told to be creative (Martindale, 1991; Martindale & Hines, 1975)." (Dailey 1994, p. 174).

These findings create an interesting synthesis of the understanding of the brain outlined in Chapter 2 and the styles of cognition outlined by Paivio and Richardson earlier in this chapter. It would seem that the interplay between imagery, memory and emotion bears strongly on creativity. Analytical and conceptual approaches to creativity may be understood to create different outcomes. The research of prominent cognitive scientists suggests that emotion is an important part of successful design. (Norman, 2004). Conceptual approaches are certainly important to engineering simulations of creative processes and the modelling of artificial forms of intelligence.

3.8 Imagery, Creativity and Improvisation

Creative thought involving musical and kinaesthetic imagens may be seen as important to musical improvisation. In improvisation, time does not always permit the extended process of analytic decision making, exercise of volition and action needed to articulate musical decisions based on serial linguistic and analytical thought. Instead, a continuum of semantic cues can access memories containing integrated kinaesthetic, musical, and visual imagery to produce a coherent and uninterrupted musical flow. The improvisation of inexperienced or highly educated western jazz musicians is often characterised by analytic thought and conceptual artefacts. Such thought can obstruct the uninterrupted flowing performance of semantically coherent music that results from the kinaesthetic realisation of transforming musical imagery. When attempting to improvise, inexperienced improvisers and overly rational musicians tend to play a disjointed sequence of pre-contrived musical snippets or "licks" rather than allow executive areas of the brain to be overridden so that the music can just "flow" through them. In order to achieve this kind of flow, it is necessary to establish a close, semantically laden bond between kinaesthetic and musical imagery in long term memory prior to improvisation. As has been noted in Chapter 2, long term memory is associated with the limbic system – which is also the mediator of emotion. When a dedifferentiated approach to thought is adopted, emotional cues can substitute for the executive areas of the brain during improvisation. Improvised performance itself can then take place seemingly without conscious volition. Automatic performance of this type may be seen to occur as a result of the direct transformation of emerging musical imagery into a kinaesthetic and motor realisation. Dedifferentiated thought is a fragile state, which demands a conceptual repression and an emotional sensitivity that is easily overridden by any focused conceptual awareness. This approach to thought is consistent with what Dailey (1994) and Martindale (1991) describe in their studies of creativity.

A dedifferentiated approach to thought is central to the author's approach to musical analysis, creation and the imagination of corresponding mental imagery. This is especially the case with music involving saxophones and saxophone-based MIDI controllers. Often the author's improvisation involves evolving spatiovisual mental imagery as a semantic departure point. The hypothesis that improvisation involves dedifferentiated thought and a direct emotional manipulation of mental imagery is perhaps the closest the author can come to an explanation of music improvisation in scientific terms. This approach might be defined as "intuitive" in nature. More will be said of intuition later.


3.9 Cognition Summary

Cognitive psychologists describe two main styles of thought – linguistic and image based. The existence of these two styles is consistent with current neurophysical models of the brain and with the effects of different intentional positions. While there can be complex relationships between language and imagery, each may function independently of the other. Associative thought using imagery instead of concepts as a basic unit is often associated with creativity.

Image based thought is the primary mode applied to the creation of imagery in the works on the DVD. Analytical approaches are applied where problem solving and organisation are required.


4 Phenomenology

4.1 Introduction

Two of the most prominent forms of philosophy at present are Phenomenology and Conceptual Analysis. Phenomenology is often described as an attempt to uncover the fundamental structures of lived experience. While science purports to investigate the causes of phenomena, Phenomenology is the direct investigation and description of phenomena as consciously experienced. Phenomenology does not concern itself with theories about the causal explanation of phenomena, or with any unexamined preconceptions and presuppositions of science. The phenomenological method involves a "bracketing out" of all preconceived ideas related to the phenomena being observed. This process is often described as the "phenomenological reduction" (or epoché). The only "reality" that concerns phenomenology is that intuited by consciousness. Phenomenology is in essence a form of philosophical idealism – meaning that what is real is in some way confined to, or at least related to, the contents of our own minds. (See Appendix E – Section 2 for more about idealism.)

It is important to understand that the process of bracketing out requires that many different intentional stances be taken. Don Ihde suggests that, "Every experiencing has its reference or direction toward what is experienced and, contrarily, every experienced phenomenon refers to or reflects a mode of experiencing to which it is present." (Ihde, 1986, pp. 42-43). By taking a generous approach to the existence of many possible intentional positions, a more accurate understanding of phenomena can be arrived at. Like subtractive synthesis, taking many intentional positions helps remove "conceptual white noise" so as to intuit the invariable and irreducible "essence" or "Eidos" of a phenomenal experience. This is often called the "Eidetic reduction".

The arts are often concerned with the perception of phenomena, phenomenal essences, and the fundamental structures of lived experience. The kinds of relationships between music and imagery with which this thesis is concerned may be seen as entirely a product of human consciousness. A theory that concerns itself entirely with phenomena, as they appear to human consciousness, may therefore be seen to be a suitable vantage point from which to examine perceived relationships between music and imagery. Phenomenology – and the existential phenomenology of Maurice Merleau-Ponty in particular – is especially applicable to an "internal" subjective study of cross-modal relationships.

With phenomenology as its methodological foundation, Gestalt Psychology provides an approach to modelling consistent features of perception. The knowledge acquired by phenomenological and Gestalt methodologies lends itself well to artists developing cross-modal works. During the middle decades of the 20th century, mainstream positivist behavioural psychology would not concern itself with the nature of "internal" mental activity. Phenomenological method, however, offers many excellent insights into "internal" mental phenomena, and many Gestalt and phenomenological terms and methods have filtered into current cognitive science. I have used the term "intentionality" a number of times already, and this is just one of many useful ideas strongly associated with phenomenology.

4.2 A Brief History of Phenomenology

Without delving too deeply into the fascinating web of late 19th century philosophers whose ideas gave rise to phenomenology, it will suffice to say that the philosophy of Edmund Husserl (1859-1938) forms the basis, even today, of all serious phenomenological studies. Husserl's phenomenology was in turn embellished and merged with existential ideas of being-in-the-world by later philosophers such as Martin Heidegger and Maurice Merleau-Ponty. Merleau-Ponty in particular concerned himself with the phenomenology of perception and its relationship to the human body. In doing so he provides a basis for the study of music in terms of visual, kinaesthetic and other imagery. In his monograph Music As Heard, Thomas Clifton bases his work largely on Husserl and Merleau-Ponty's philosophy. His section on synaesthesia is especially insightful, although his definition of synaesthesia differs from the modern clinical definition. (See Appendix C).

4.3 Cross Modal Metaphor

The term synaesthesia as used by Merleau-Ponty and Clifton refers to something more like a profound physical relationship between different modes of mental imagery. Here is a selection of what Clifton has to say, which includes a famous quote from Maurice Merleau-Ponty: "The suggestion that synesthetic perception is an essential background is made because of the above evidence which suggests that sound images are not the result of a second-order act of reflection. Were this to be the case, it would seem that synaesthetic perception would be more exceptional than it is. But Merleau-Ponty suggests that 'Synaesthetic perception is the rule, and we are unaware of it only because scientific knowledge shifts the centre of gravity of experience, so that we have unlearned how to see, hear, and generally speaking, feel, in order to deduce, from our bodily organisation and the world as the physicist conceives it, what we are to see, hear, and feel.' (Merleau-Ponty, 1962, p. 229). This should indicate the nonmetaphorical, nonanalogical status of synaesthetic perception: it is a movement of the body, not a product of deductive thinking." (Clifton 1983 p.66). In this extract, Clifton astutely recognises the relationship between kinaesthetic imagery, music and visual imagery previously described from various cognitive science standpoints.

4.4 Phenomenology and Anglophones

While Phenomenology is a mainstay of continental thought, it has not penetrated thought in the English-speaking world to the same degree. Some well known texts on music that take a phenomenological approach are those of Don Ihde (1976), Thomas Clifton (1983), Victor Zuckerkandl (1957), and Roman Ingarden (1986). Pierre Schaeffer's highly influential phenomenological writing, Traité des Objets Musicaux (Schaeffer 1966), is an example of a seminal phenomenological text on modern music that has no complete English translation. Western music theory has traditionally placed a much greater emphasis on musical elements deduced through analytic reductions than on those induced as phenomenological gestalts. This is consistent with many inherited presumptions about the nature of music.

These presumptions perhaps have something to do with the prominence of analytic philosophy in the English-speaking world during the 20th century. Since its development, phenomenology has had detractors amongst analytic philosophers working within positivist frameworks. Don Ihde suggests that "The popular belief, if anything exaggerated by analytic philosophers, held that (a) phenomenology was 'subjectivist' in contrast to 'objectivity'; (b) 'introspective' in contrast to analytical; (c) and, with respect to evidence, took the 'immediately or intuitively given' as its base. From my perspective, all three of these widely held notions about phenomenology were false."

Ihde responds to these critiques of phenomenology by stating that, "(a) phenomenology, in my understanding, is neither subjectivist nor objectivist, but relational. Its core ontology is an analysis of interrelations between humans and environments [intentionality]. (b) It is not introspective, but reflexive in that whatever one 'experiences' is derived from, not introspection, but the 'what' and 'how' of the 'external' or environmental context in relation to embodied experience. And (c) all 'givens' are merely indices for the genuine work of showing how any particular 'given' can become intuited or experienced. Phenomenology investigates the conditions of what makes things appear as such." (Ihde, 2002).

4.5 Music and Phenomenology

As it applies pragmatically to music, Thomas Clifton has suggested that, "a phenomenological attitude can describe the newer music more faithfully than methods which rely on the existence of a score printed in traditional notation and which, for that reason, arouses the suspicion that it is the notation more than the music which is being analyzed. In addition, contemporary composers write "phenomenological" music in their efforts to present musical essences, movement, shape, duration, succession, color, play, and feeling, without cluttering their pieces with such literary imports as plot (theme), character development (thematic manipulation), and structure (beginning, middle, and end). Contemporary composers also realize that the old ideals of unity, organization, and cohesiveness, to the extent that they are still ideals, are as much a product of subjective constitution as compositional givenness." (Clifton 1983, p. x).

Composers working with the broad sound palette made available by new technology are often in search of a methodology with which to intuit the essences of diverse sounds. This is often necessary in order that such sounds can be organised in ways which fulfil the purpose of the composer – whatever that may be. In such situations it can be important to have access to a methodology with which to determine the essential features of musical sounds. Theories that employ phenomenology can interpret a range of musical material in a way that is consistent with what cognitive science understands about human perception and cognition. Phenomenology has formed the basis of Thomas Clifton's (1983) theory of music, and also that of Pierre Schaeffer. (Palombini 1993). Once sounds have been eidetically reduced to their essential features, it becomes possible to explore relationships between quite dissimilar sounds in a way that is consistent with the unique cognitive styles of human consciousness.

4.6 Gestalt Psychology

Phenomenology gave rise to a psychology of perception called Gestalt Psychology in the first decades of the twentieth century. The birth of the Gestalt school took place in 1912, when Max Wertheimer, Wolfgang Köhler and Kurt Koffka began publishing studies of perception. The German word Gestalt is generally interpreted as meaning "form", although it refers more specifically to the way that things are put together. The primary position of the Gestalt school is that the whole is more than the sum of its parts. A large amount of Gestalt research has concerned the study of "grouping". (See Appendix F). The influence of the Gestalt psychologists is apparent in many theories on the philosophy of art and visual aesthetics. The works of Rudolf Arnheim (1986) are one example.

4.7 Concrete Arts and Music

The relationships between music composition and visual arts are often discussed in terms that draw on phenomenological thought. (Arnheim 1986). The grouping techniques outlined in Lyons (1999) (See Appendix F – Section 3.2) may be seen to be equally applicable to concrete and abstract arts. For these reasons the similarity of form – or "isomorphism" – that can exist between visual and sonic arts often permits the comparison of visual and sonic techniques for composition.


Maitland Graves (1951) suggests that the following design elements and principles are common to both music and visual arts.

Elements of Design: Line, Direction, Shape, Size, Texture, Proportion, Value, Colour

Principles of Design: Repetition, Alternation, Harmony, Gradation, Contrast (Opposition or Conflict), Dominance, Unity, Balance

Table 4.1 Design Elements and Principles (Graves 1951)

4.8 Summary

Phenomenology is entirely concerned with reality as it presents itself to human consciousness. A theory that concerns itself entirely with phenomena as they appear to human consciousness therefore offers a suitable vantage point from which to examine perceived relationships between music and imagery. Phenomenology sets aside questions about the cause of a phenomenon so that the features of the phenomenon itself can be focused upon. It aims to "bracket out" all manner of prejudices and preconceptions about phenomena. An open and generous position must be taken toward the many possible intentional approaches to the experience of phenomena. The more of these that are taken, and the more successful the process of bracketing, the more successful the phenomenological reduction.

The Eidetic reduction aims to distil the essences of phenomena. It seeks out the invariant features of the experience of phenomena. Such essences may be very useful to composers working with multifarious sound sources, and to interdisciplinary artists. Intentional positions that seek relationships between visual and sonic arts may reveal useful correspondences between the two. Gestalt Psychology aims to determine the essential features of human cognition through research involving phenomenological observations.

Gestalt Psychology is a form of perceptual psychology which is based on phenomenological techniques. Many of the findings of Gestalt psychology can be applied usefully to relationships between visual and auditory arts. An example is termed "isomorphism". Isomorphism may be seen to be related to the spatio-visual functions of the right hemisphere described in Chapter 2.


5 Guided Imagery in Music Therapy

5.1 GIM History

At this point, the nature of visual mental imagery that occurs in response to music needs to be examined. In particular, a broad description of the attributes and qualities of music which give rise to visual mental imagery may be suggested. That the experience of music has the capacity to elicit visual mental imagery is well acknowledged by psychologists. (Summer 1985). The manipulation of imagery generated by music during psychotherapy sessions constitutes a major field of music therapy. "The Bonny method of Guided Imagery and Music (GIM) is a specialised area of therapy in which clients listen to pre-recorded classical music in a deeply relaxed state and in which visual imagery, changes in mood and physiological effects in the body are experienced. Guided Imagery and Music was developed by Dr Helen Bonny, a music therapist at the Baltimore Psychiatric Institute, USA, in 1970. The method evolved from a time when LSD was used in psychiatry to evoke altered states of consciousness. Initially, Bonny was required to provide music to complement the various stages of LSD therapy: the onset period, building to the peak experience, the peak experience, stabilising after the peak experience and the return to normal consciousness." (Erdonmez Grocke 1999b, p.197).

After LSD was banned in medical practice in the USA, Bonny continued the development of the guided imagery technique by replacing the LSD medication with a relaxation exercise. Psychologists frequently refer to relaxed, dream-like states of mind as "hypnogogic" states. (Dailey 1994). (See section 3.7). In a GIM session, the relaxation induction is preceded by a discussion between therapist and patient, and followed by the commencement of the music program.

5.2 GIM Sessions

Bonny devised a number of different music sessions specific to the emotional condition and psychotherapeutic needs of different patients. An example of a session and its stages in relation to the course of an LSD trip can be seen in Table 5.1 and Figure 5.1 below.

Selections – Stages of LSD session

Elgar: Variations #8 and #9 (from the Enigma Variations) – Pre-onset
Mozart: Laudate Dominum (from the Vesperae Solennes de Confessore) – Onset
Barber: Adagio for Strings – Build to peak
Gounod: Offertoire (from the St Cecilia Mass) – Plateau
Gounod: Sanctus (ibid.) – Build to peak
Strauss: Excerpt from Death and Transfiguration – Stabilisation and return

Table 5.1. Stages of the "Positive Affect Session". (Bonny, 1978b, pp. 39-42). In (Erdonmez Grocke 1999a, p.41)

Figure 5.1 Affective Contour of the Positive Affect Program. (Bonny, 1978b, p. 42) In (Erdonmez Grocke 1999a, p.151).


The rise and fall of each GIM session differs according to the goals of the therapy. In the positive affect program described above, the goal is to induce a number of peak emotional experiences so as to precipitate cathartic episodes concerned with issues isolated for attention in the patient-therapist discussion prior to the relaxation induction.

5.3 Imagery Types

Music is well known for being emotionally indicative whilst at the same time maintaining ambiguity in relation to any signified object or event. (Nattiez 1990). In this way patients are free to interpret GIM music sessions individually and make associations of a personal nature. The music merely constitutes a generic emotional catalyst. The purpose of GIM is to take patients on a visual mental imagery journey which is stimulated emotionally by the GIM music session. For these reasons, the imagery experienced by patients varies amongst individuals, and in relation to focus issues. Bonny devised a catalogue of the main categories of emotional and visual responses, which was updated by Denise Erdonmez Grocke in her 1999 PhD thesis:

1. Visual experiences, which may include: colours, shapes, fragments of scenes, complete scenes, figures, people, animals, birds, water (lakes, streams, oceans, pools).
2. Memories: childhood memories, including memories of significant events, significant people and feelings in the client's life are explored through reminiscences.
3. Emotions and feelings: sadness, happiness, joy, sorrow, fear, anger, surprise etc.
4. Body sensations: parts of the body may feel lighter, or heavier; parts of the body may become numb, and feel split off from the body; there may be feelings of floating or falling; sensations of spinning, or feelings that the body is changing in some way.
5. Body movements. The client may make expressive movements of the body in relation to the imagery being experienced - eg. hands create a shape, arms reach up in response to an image, fists or legs pound on the mat in reaction to feelings of anger.
6. Somatic imagery. Changes within the internal organs of the body may be experienced - eg. pain felt in the chest or heart, exploring an internal organ for its shape and colour, a surge of energy felt through the entire body.
7. Altered auditory experiences. There may be an altered auditory perception of the music: the music comes from far away; the music is very close; one particular instrument stands out (which can also be transference to music).
8. Associations with the music and transference to the music: memories of when the music was heard last, memories of playing the music; the music is being played especially for the person; the person is actually playing the music being heard.
9. Abstract imagery: mists, fog, geometrical shapes, clouds etc.
10. Spiritual experiences: being drawn toward a light; a spiritual person: a monk, priest, woman in flowing robes; being in a cathedral; feeling a presence very close.
11. Transpersonal experiences: the body becoming smaller, or larger, change felt deep in the body (cells changing, parts of body changing shape).
12. Archetypal figures, sometimes from legendary stories, may appear: King Arthur, Robin Hood, the Vikings, Aboriginal man/woman, the witch, Merlin etc.
13. Dialogue. Significant figures from the client's life may appear in the imagery and often have a message, so that dialogue may occur eg with parental figures. Aspects of self may be symbolised in human form (a baby or adult figure), or significant companions (eg an albatross bird, or an eagle) and dialogue may occur with these aspects.
14. Aspects of the Shadow or Anima or Animus: Aspects of the shadow frequently appear in the image of a person of the same gender, aspects of the anima/animus in images of a person of the opposite gender.
15. Symbolic shapes and images - eg. a long tunnel, a black hole, seeds opening. These shapes or images can be symbolic of moments of change or transition. Symbolic images such as an ancient book or the trident shape often have specific meaning to the client.

Table 5.2 Categories of Experience in GIM. (Erdonmez Grocke, 1999a p.27)

As can be seen in this table, a range of different responses is possible in GIM sessions, either in combination or in isolation. Much of the imagery takes on Jungian archetypal forms or involves a resynthesis of various imagery from an individual's long term memory.

5.4 Music Properties for Imagery

The question which continues to concern many music therapists, and which concerns us most here, is: "What aspects of music give rise most successfully to mental imagery?" Unfortunately, there are no sound criteria for the selection of music in GIM sessions, and the process to date has been open to considerable bias. In the majority of existing GIM sessions the methodology for selecting music is based on the intuition of Helen Bonny. Whilst Bonny is to be admired for her excellent insights into psychotherapy, her approach to research avoids verifiable methodology in favour of a first-person narrative philosophical style that borders on the esoteric at times. (Bonny, 1987, 1989, 1993, 1994). Only recently have music therapists like Alan Lem (1998) attempted to isolate psycho-acoustic attributes of music using phenomenological methods combined with scientific methodology. With this in mind, there is still something to be learnt from Bonny's lengthy criteria for the selection of works for use in the original eighteen GIM sessions:

“1) Bonny comments that the music is a catalytic agent in that it creates tension and release. The music needs to create excitement and integration, inhibition and resolution. She draws on the theories of Leonard Meyer to substantiate this belief. Meyer’s theory rests on the dictum that “emotion is aroused when a tendency to respond is inhibited.” (Meyer, 1956, p14). The GIM music has the element of expectation, in that suspense evokes the imagery responses. When the release or climax is heard in the music, there is a concomitant resolution in the imagery sequence. The degree of resolution Bonny believes is based on the degree of uncertainty, as Meyer explains “the greater the build up of suspense, of tension, the greater the emotional release upon resolution.” (Meyer, 1956, p 28).

2) Bonny believes that the music in GIM acts as a 'container' for the experience. This concept of 'music as container' seems to have emerged from the writings of Winnicott, who developed a theory of containment in relation to the developing child. Winnicott argued that the parents provide a containment for the emotional experiences of the young child. For example, when a young child feels enraged and is in the midst of a tantrum, the parent 'contains' the experience by not losing control. The child then learns to internalise the containment of feelings. Should the child in the midst of a tantrum experience a parental figure losing control, then there is no safe containment for the child's feelings and s/he doesn't learn to contain his/her own feelings. The theory of containment has been applied to improvisational music therapy by de Backer (1993). De Backer cites Cluckers (1989) definition of containment within a therapeutic relationship, in which the therapist creates a space whereby the client can project intolerable feelings. These feelings can be received by the therapist and held in the safe 'contained' space of the therapeutic relationship. De Backer uses an analogy of an 'acoustic skin' to explain how the therapist 'binds and shapes the expression of chaos' (1993, p 36). The therapist also provides 'empathic accompaniment' whereby the patient feels that his chaos and unresolved feelings are understood and accepted by the therapist.

The theory of containment has been applied to the practice of GIM (Bonny, 1989; Goldberg, 1992; Summer, 1992, 1995, 1998); however the boundaries of the music container are fluid, in that the music is ever changing and unfolding in time. The music which underpins a transpersonal experience for example, must allow a wide space for exploring the emotion of the transpersonal experience. In order for a client to express angry feelings within a GIM experience, the music must provide a container with strong boundaries that allows for the expression of strong feelings.

3) The third element which Bonny lists as a characteristic of effective music for GIM is that it stimulates the flow and movement of the imagery experience. Movement she says is related to tempo, and also to ornamentation within the orchestration of a work, for example the use of pizzicato in the lower strings creates movement in the music itself, and may influence movement in the client's experience of imagery.

4) Variability is another feature of the GIM music. Bonny asserts that minimalist music and so-called 'New Age' music is not used in GIM programs because there is not sufficient variability to stimulate the client's imagery. The variability may be provided by changes in timbre, in melody, harmony and dynamics. Too much variability however may be perceived by the client as disorganised. A certain amount of redundancy is needed to provide a sense of musical stability, as evident in the above discussion on containment.

5) The mood conveyed by the music selection is a crucial characteristic in choosing a work for a program, and also in deciding its sequential place in the program. The mood may be determined by many factors: the melodic line, the harmonic progressions, modulation points and the timbral effects of certain instruments. Associations with particular instruments also influence the emotional substance of the music. Bonny believes the harp is usually associated with the higher aspects of self, the woodwinds with ‘the medium, the every-day experiences’ and the bass notes (instruments) are for aspects of sustaining and rhythmic security.” (Erdonmez Grocke, 1999a, pp.154-155).

Erdonmez Grocke continues, "The music which Bonny selected for the GIM programs comes exclusively from the Western classical tradition. Bonny's extensive experience as an orchestral musician gave her a wide knowledge of classical music of all genres, and this is clearly evident in the choice of music for the GIM programs." … "The argument for the sole use of classical music on the GIM music programs bears further exploration. Bonny's assertions, while based on her extensive knowledge of classical music, may reflect a bias. Only by further studies comparing classical music with other traditions could there be consensus that classical music is more effectual than other music traditions" … "While it is evident that those selections Bonny has chosen are effective in evoking imagery experiences, further studies are needed to provide a more specific rationale for the exclusivity of classical music in GIM therapy." (Erdonmez Grocke, 1999a, pp.155-158).

Erdonmez Grocke goes on to suggest that the music responsible for cathartic or "pivotal" moments of imagery has consistent features. "… the music program chosen for the session which contained the pivotal experience for the client, comprised music that started with selections that were strong in character, yet the pivotal moment occurred during music that was slow and spacious, so that an important finding from this research is that 'containment' theory as it applies to the music in GIM is valid." (Erdonmez Grocke 1999, p.231). In summary, Erdonmez Grocke suggests that the following musical attributes give rise to pivotal moments:



• "there was a formal structure in which there was repetition of themes
• they were of predominantly slow speed, and the tempos were consistent
• there was predictability in melodic, harmonic and rhythmic elements
• there was dialogue between instruments (including vocal parts)."

(Erdonmez Grocke 1999, p.231).

The study undertaken by Alan Lem (1998) verifies some of these observations. Lem "examined brain wave activity of 27 subjects during a 16 minute musical work (Pierné's Concertstück for Harp and Orchestra), and matched the EEG tracings against a spectrograph of the intensity of the music. The structural variability in the piece was measured on i) the intensity of sound, ii) the underlying pulse and iii) the affective contour. The results suggested potential associations between 1) patterns of tension-release in the music and the occurrence of synaesthetic imagery, 2) an association between brain wave response, high music intensity stimulation and the affective experience and 3) an association between brain wave activity and sudden-unexpected changes in the music (in particular, that during sudden and very soft passages in the music, brain wave activity increased)." (Erdonmez Grocke 1999, p.51). Lem confirms that a rapid release of psycho-acoustic tension is responsible for a visual imagery experience. He also comments on a lack of scientifically verifiable literature on connections between larger musical structure and psycho-physiological impact. (Lem 1998).

5.5 Summary of GIM Research Outcomes

Before presenting a summary of what GIM related research suggests to composers and interdisciplinary artists, the biases of such research must be examined. Firstly, it should be noted that Helen Bonny believes that a requirement for emotional progression is integral to the functional purpose of music therapy. For this reason there may exist an implicit bias in visual imagery research undertaken specifically for GIM outcomes, one which favours the more teleological forms that characterise concert music of the Romantic period. Bonny herself suggests that, "we don't use minimalist music, or music that is new age, because there isn't enough variability. The function of new age music is to keep you in the one space. In GIM the music needs to move, e.g. through variability in timbre." (Erdonmez Grocke 1997, p. 430).

One might imagine that those not feeling the need to purge emotional baggage are free to find music which precipitates imagery fulfilling some personal aesthetic criteria. Such spaces may be enjoyed without having to endure a florid romantic catharsis. Certainly the more methodical research of Erdonmez Grocke suggests that pivotal moments are not associated with variability – indeed she finds that the opposite is true. It is unfortunate that the specific outcomes and musical bias displayed by past GIM research tend to limit the usefulness of GIM related research for a broad range of composers and interdisciplinary artists. Nonetheless, research related to GIM does make some useful generic observations about the relationship between music and visual mental imagery. These include the idea that emotion is highly connected to the stimulation of visual mental imagery. There is also some evidence that tension and release have a part to play in the generation of visual mental imagery. Containment is another interesting idea which can be applied by a range of composers and artists.

It should be noted that, based on all the core criteria provided by GIM researchers thus far, there is little evidence that should a priori exclude music outside Western orchestral traditions from imagery sessions or research. Certainly the suggestion that variability in timbre is a criterion points to the power of electro-acoustic and other modern music forms. The idea of containment, and its relationship with repetition, likewise suggests that certain popular contemporary electronic genres should be excellent catalysts of visual imagery – especially given their current pervasiveness in Western cultures. Finally, emotional evocation and tension and release are not the exclusive domain of Western concert music.

As modern GIM research embraces the methods and findings of contemporary psychology, it will be forced to confront the realisation that "music is first and foremost a cultural artefact." (Juslin and Sloboda, 2001, p. 455). While it is possible that certain emotional responses to music are biological and timeless, many emotional responses to music are largely conditioned by environment. Over the period of a hundred years, cultural forces can effect great change on human responses to music. Given that much of the music used in GIM sessions is that old or older, the continued effectiveness of GIM therapy may require therapy programs compiled from a broader range of music.


6 Music Theory and Aesthetics

6.1 Introduction

In this chapter, an attempt will be made to eliminate presumptions about what is, and is not, a valid approach to thinking about music. As has been explained numerous times above, to approach music visually, all preconceived ideas about music must be abandoned. Many approaches to describing music exist, and while some are more widely taught than others, each approach has different strengths and limitations. Many have evolved with specific applications in mind, and can obstruct our current concern with visual imagery. It needs to be made clear here that there is no "correct" way to think about, or describe, all music.

Aesthetics is the philosophical study of art and of critical judgements about art. At its core is a problem that relates to the difficulty of describing the arts exhaustively in words. A number of different analytical positions exist in relation to understanding the experience of art. Some argue that analysis is a flawed approach to aesthetics. Many of the arguments of aestheticians concern the nature of music, and directly relate to its visual representation.

6.2 Music and Ambiguity

There are profound problems inherent in describing music by any means. The difficulties involved in describing music relate to what is called the "ineffability" of musical experience. The concept of musical ineffability may be described as the problem of exhaustively explaining to another, in words, the nature of one's experience of music. If the experience of music is transduced from our perception of subtle changes in air pressure, how are we to say that the cognition of music is "like" anything else, so as to communicate that experience? For example, is the experience of music semantically more like linguistic communication than the spatio-visual experience of passing through a changing landscape?

Susanne Langer (1895-1985) says of ineffability that: [I]t seems peculiarly difficult for our literal minds to grasp the idea that anything can be known which cannot be named.... But this ... is really the strength of musical expressiveness: that music articulates the forms that language cannot set forth.... The imagination that responds to music is personal and associative and logical, tinged with affect, tinged with bodily rhythm, tinged with dream, but concerned with a wealth of formulations for its wealth of wordless knowledge. (Langer, 1942, pp.198, 207).

The concept of musical ineffability suggests the common analogical inadequacy of all means of describing and interpreting music. In particular, it strongly suggests the limitations of linguistic and conceptual approaches to musical understanding. This does not discourage people from talking and writing about music, and the nature of ineffability is a central concern in the field of aesthetics.

6.3 Philosophy of Art

There are many extant aesthetic arguments about musical ineffability. Diana Raffman provides a short outline of some common aesthetic arguments related to musical ineffability: "Convictions widely shared usually admit a good deal of variation, and this one is no exception. Is it that the content of our ineffable knowledge defies verbalisation altogether, or that it cannot be exhausted by verbal report? Is it that there could be no terms for it, or that even if there were such terms, we would not be able to apply them? Musical ineffability, the subject of discussion here, is often associated with the character of our affective experience. We are told, for example, that music symbolises human feelings; and feelings can be known only by feeling. Here again we find diversity of opinion in matters of detail: does a piece of music express or evoke these feelings? Or does it somehow embody them? Are the relevant feelings stereotypical emotions, like anger and joy, or are they peculiarly musical feelings? And so on." (Raffman, 1993, p.2).

Aesthetics is an old field, as are some of its most important arguments in relation to understanding art. Hegel argued that it is both necessary to distinguish form from content and also impossible to do so. (Britannica 2003). This somewhat paradoxical problem infuses many debates about aesthetics. Some camps argue the primacy of form, others of interpretation, in their attempts to understand music, yet seldom do the two meet with the tools of analytical philosophy alone. Leonard Meyer (1956) describes two opposing sets of aesthetic positions, which are summarised briefly below:

ABSOLUTIST: "musical meaning lies exclusively within the context of the work itself."

REFERENTIALIST: "musical meanings refer to the extramusical world of concepts, actions, emotional states, and character."

In his book, Meyer then suggests a compromise position that acknowledges the existence of both types of musical meanings:

FORMALIST: "the meaning of music lies in the perception and understanding of the musical relationships set forth in the work of art and that meaning in music is primarily intellectual"

EXPRESSIONIST: "the expressionist would argue that these same relationships are in some sense capable of exciting feelings and emotions in the listener"


A conceptual dichotomy similar to this formalist/expressionist dichotomy is that of Benedetto Croce (1866-1952). Croce made an aesthetic distinction between representation and expression. To Croce, representation is descriptive, conceptual, and concerned with classifying objects according to their common properties. Expression on the other hand is intuitive and concerned with intuitively created and interpreted subject matter. Croce argued that the essence of aesthetic awareness lay in the intuited understanding of expression. (Croce 1922). In reference to this idea, John W. Osborne suggests that, "conceptually speaking, it cannot be shown clearly that music is a language of thought (representation) or emotion (expression)." (Osborne 1989, p.154).

It is in fact possible that each of these positions is valid given the different cognitive strategies that can be taken by listeners. Croce in effect argues for an aesthetic monism, when cognitive science suggests that a duality is the more likely form of aesthetics. As will be argued, music cognition permits many modes of intentionality, which can in turn precipitate the experience of many different aesthetic interpretations. Unless one can demonstrate the primacy of one intentional position or cognitive strategy over another, it is perhaps pointless to argue exclusively for one position against another. Many aestheticians do not accept this, and the goal of much recent aesthetic theory has been to discover relationships between form, content and understanding.

Of these recent theories, many have concerned themselves with linguistic analogies. Linguistics, semiotics and semiology in various guises have been applied to music in an attempt to explain the aesthetic experience of music. One such theory is that of Lerdahl and Jackendoff (1996). Their generative theory of tonal music (GTTM) may be seen as one approach to uniting formalist concerns with expressionist concerns. Nicholas Cook (2002) describes the synthesis of heterogeneous ideas that constitute GTTM: "First there is Schenkerian theory, which in its original form was located at the intersection of psychology, phenomenology, and metaphysics, but after crossing the Atlantic became assimilated within the post-war formalist tradition … Then there is the approach to rhythmic analysis developed during the 1950s by Meyer and Cooper, heavily influenced by Gestalt psychology - though without the empirical control that one would expect of an explicitly psychological theory. The third element is structural linguistics, which provided not only certain key features of the theoretical model (in particular its formulation in terms of rules) but also its epistemological orientation: GTTM was to explicate the intuitions of musically "experienced" listeners through constructing 'an explicit formal musical grammar that models the listener's connection between the presented musical surface of a piece and the structure he attributes to the piece'". (Cook 2002, p.4).

The problem of ineffability however remains an obstacle to many linguistic attempts at understanding music, and GTTM has been criticised for various reasons. While aesthetic approaches which lend themselves to the procedures of modern philosophy have naturally commanded considerable attention by scholars, it is often argued that they are still aesthetically insignificant.

Nicholas Cook suggests that GTTM is a "formalist theory disguised as a psychological one." (Cook 2002, p.7). Osborne reflects on the reasons why heavily conceptual approaches to music understanding may fall short of their mark:

"Margolis points out the conceptual difficulties of a variety of aesthetic theories (e.g., Danto, Dickie, Goodman, Hirsch, Sibley, and Strawson). The basic problem is one of describing linguistically an experience which is beyond words. He points to the ways in which aesthetic theories usually fail to maintain the distinctions which they seek to delineate. Scruton (1983), after conceptually analysing the representational and expressive properties of music, reaches the same conclusion: musical understanding is basically intuitive and metaphorical rather than representational and propositional. Margolis promotes the value of discussing aesthetic considerations in wider contexts (e.g., non-dialectically based approaches which are not as reliant upon categorical distinctions). The imposition of "conceptual uniformities" is what seems to get aesthetic theories into internal contradictions. Margolis seems to agree with Strawson (1959) that a work of art is more than the sum of its perceptual parts. The problem is to describe or define that "something special" that distinguishes a work of art from a mere physical object. A more ecologically oriented approach to aesthetics may facilitate our chance of enriching our understanding of a work of art." (Osborne 1989, p.152).

Croce sought to explain the ineffability of musical meaning by describing it as that which cannot be conceptualised. Croce believed that music is bound up with forms of non-conceptual awareness. In the quote below, Croce discusses the nature of ineffability in relation to poetry and then all arts.

"If we examine a poem in order to determine what it is that makes us feel it to be a poem, we at once find two constant and necessary elements: a complex of images, and a feeling that animates them ... Moreover, these two elements may appear as two in a first abstract analysis, but they cannot be regarded as two distinct threads, however intertwined; for, in effect, the feeling is altogether converted into images, into this complex of images, and is thus a feeling that is contemplated and therefore resolved and transcended. Hence poetry must be called neither feeling, nor image, nor yet the sum of the two, but "contemplation of feeling" or "lyrical intuition" [which is the same thing] "pure intuition" - pure, that is, of all historical and critical reference to the reality or unreality of the images of which it is woven, and apprehending the pure throb of life in its ideality. Doubtless, other things may be found in poetry besides these two elements or moments and the synthesis of the two; but these other things are either present as extraneous elements in a compound [reflections, exhortations, polemics, 73

Time Space Texture: An Approach to Audio-Visual Composition

allegories, etc.], or else they are just these image-feelings themselves taken in abstraction from their context as so much material, restored to the condition in which it was before the act of poetic creation ... What has been said of "poetry" applies to all the other "arts" commonly enumerated; painting, sculpture, architecture, music ... By defining art as lyrical or pure intuition we have implicitly distinguished it from all other forms of mental production. If such distinctions are made explicit, we obtain the following negations:

1. Art is not philosophy, because philosophy is the logical thinking of the universal categories of being, and art is the unreflective intuition of being. Hence, while philosophy transcends the image and uses it for its own purposes, art lives in it as in a kingdom. It is said that art cannot behave in an irrational manner and cannot ignore logic; and certainly it is neither irrational nor illogical; but its own rationality, its own logic, is quite a different thing from the dialectical logic of the concept, and it was in order to indicate this peculiar and unique character that the name "logic of sense" or "aesthetic" was invented. The not uncommon assertion that art has a logical character, involves either an equivocation between conceptual logic and aesthetic logic, or a symbolic expression of the latter in terms of the former." (Hofstadter and Kuhns, 1964, pp. 558-559).

Croce suggests that analytic philosophy - or "common language philosophy" as some refer to it - may be incapable of explaining aesthetics. Cognitive science suggests quite clearly that imagery and emotion are understood to be independent from language in multiple neural and cognitive dimensions. One might speculate that these profound neural and cognitive barriers also constitute profound ontological barriers for language. There is after all no reason to think that language should be able to provide an exhaustive account of all modes of thought, when it is merely one of these modes. Any such reason might be characterised as a kind of linguistic solipsism: "I cannot logically explain the existence of any other mode of thought other than logic – therefore only logic exists". While analytic method offers valuable insights into purely conceptual entities, its particular limitations should not be used to argue against the existence of certain mental activity ad ignorantiam. The existence of methodological limitations is the precise reason why the umbrella discipline known as cognitive science was devised. By synthesising knowledge acquired through various methods of investigation, the methodological limitations of each discipline can be overcome.

In the guise in which Croce presents intuition, a philosopher might suggest that intuition is a kind of mystical construct. (See Appendix D). In philosophy, this would be the case so long as intuition is denied any logical definition. Intuition however may well be amenable to a definition within a cognitive science framework. In Section 3.7, creativity was described in psychological terms as an outcome of dedifferentiated modes of thought. It will be hypothesised here that while “logic” is a functional manipulation of concepts using an analytic thought type, “intuition” is a functional manipulation of mental imagery using a dedifferentiated thought type. As such, intuition is closely related to creativity.

6.4 Common Music Notation

Music is a cultural construct. It exists because humans brought it into being. It did not exist in any a priori form before someone brought it into being. The fact that music is valued and appreciated by so many people may be considered to be "an emergent property of the complexity of human cognition." (Blood & Zatorre, 2001, p.11823). As such, music has been forever subject to modification according to the evolving values of specific cultural paradigms. By definition, music cannot exist independently of some cultural ecology. To call something "music" or "musical" immediately draws on established cultural notions of some sort. Perhaps like all art, music is defined in part by the politics of musical ideas and concepts present in society.

The existence of ancient bone flutes indicates that instrumental music has existed for at least 45,000 years, and possibly for even longer. Without any historical evidence, it is difficult to say what forms of music came first, and what functions music played in human cultures before written history. Some suggest that music has always been associated with some text, but this idea has a number of problems. In Judeo-Christian cultures, biblical text has always been regarded as a sacred religious object, and text permeates musical practice in cultures dominated by these religions. Early Christian music is the first western music form to be notated, and it is from this music that historians trace the direct descent of the western concert music tradition. (Grout 1988). From the point of view of western music history, text has always been tightly bound to music. History is usually biased and incomplete, and may not represent the whole story. (Foucault 1970). It may be hypothesised that the significance attributed to text in Judeo-Christian cultures has something to do with the emphasis placed on linguistic interpretations of music in western cultures.

Max Weber pointed out quite early in the twentieth century that what many people take as "natural" givens in western music are in fact conventions resulting from a long process of rationalisation. (Weber, 1958). Things as fundamental as our tuning system are conventions created as part of a subjective process of rational systemisation that characterises many aspects of western culture since the Renaissance. Theodor Adorno has made some particularly acute observations regarding the evolved relationship between music and language. "Now, the specifically linguistic character of music consists in the unity of its objectification, or, if you prefer, reification, with its subjectification; just as, everywhere, reification and subjectification are not mutually exclusive, but rather mutually determinant polar opposites. Since music, as Max Weber demonstrated in his posthumous sociology of music, became integrated into the rationalization process of Western society, its linguistic character has become more pronounced." (Adorno 2002, pp.145-146). Here Adorno refers to the "objectification" of musical attributes in a process of reification, a process that Max Weber points out had the downside of stripping music of many of its original elements and values. (Weber 1958, p.40).

In western culture, music has long been reified in a way demanded by pragmatic concerns. One significant initial stage in the process of music's objectification took place as a result of a pragmatic search for a way to record music. In the days when an acoustic instrument or a human voice was the sole source of music production, a writing implement and some form of paper were amongst the favoured tools for recording. It is now difficult to conceive of western concert music without Common Music Notation (CMN). When CMN was developed, it was impossible with the available technology to reify, and thereby preserve, music as anything other than pitch, rhythm and duration. Furthermore, it was not necessary: to record timbre it was necessary merely to write the name of an instrument or vocal range next to the stave. There were no electric instruments or synthesis systems in the Middle Ages with which to create sounds capable of dramatic timbral variation, and if there had been, there would have been no means to adequately record them.

In essence, CMN developed as a recording system devised to meet the limited needs of an antique music. CMN was also required to work within the limitations of an antique recording technology. For the purposes of recording music intended for varied interpretation and performance with acoustic sound generating devices, CMN was, and continues to be, generally sufficient. It was never intended to be a perfect recorded representation of an ideal musical performance. It is merely a system descended from the best possible mnemonic system available for encoding the music that was possible over a thousand years ago.

Yet as a mnemonic and communicative tool, CMN came to constitute a low-level musical meta-language, and one which Christianity eventually spread throughout the Occident. As CMN evolved and developed it became such an important part of western music that it even became synonymous with the term reserved for the audible phenomena of music. Composing music is still often considered to be writing using CMN. Many writers still argue that if music is not notated then it is not a true "musical work". Musicology similarly has until relatively recently been interchangeable with the study of CMN. As the dominant low-level meta-language of western music, CMN has been the primary conceptual space from which much music and almost all music theory has sprung.

There is even evidence to suggest that the nature of western music has been to some degree determined by the nature of CMN. This can be explained by considering the Sapir-Whorf Hypothesis of linguistic determinism. This hypothesis states that, “the language we use to some extent determines the way in which we view and think about the world around us.” (Campbell 2002). Extreme versions of this hypothesis decree that thought is impossible without language, and that the constraints of language completely constrain thought. Widely accepted and more moderate versions of this theory known as “weak linguistic determinism” hold that “thought is merely affected by, or influenced by our language, whatever that language may be.” (ibid). (This may be understood as involving a kind of reinforced linguistic intentionality).

In geological disciplines the combinatorial power of a persistent weak force and time is well acknowledged. Rocks split, valleys are carved and mountains are hollowed by weak forces such as moving wind and water. Similarly, over the centuries it is possible that the subtle ongoing influence of a weak linguistic determinism would influence the developmental course of western music.

The problem of determinism and CMN has occurred to many music theorists. As early as 1917 the proto-music-phenomenologist Ernst Kurth suggested, “In the initial approach to compositional technique we tend to remain heavily dependent on notation. The majority of current technical terms for musical structure adhere to the notated image, so that the structure is characterized as vertical and horizontal, according to whether the [analytic] summary takes the form of chords or linear progressions, i.e., simply according to the way the page is read.” (Rothfarb 1991 p.38). More recently bio-musicologists have even suggested that this emphasis in western music on reading notated music from left to right as one would read a spoken language, has had a major influence on the cognition of music and the evolution of the art and cognition of music as a whole. (Wallin 1991).

The most potent evidence for linguistic determination is found in certain incarnations of integral musical serialism. Any music notated using CMN which is entirely void of concern for sonic phenomena strongly suggests itself as a product of absolute linguistic determination. Some have argued that music of this sort signalled the logical, rational conclusion of western concert music. (Adorno 1949). It is conceivable that if western concert music were not so heavily based on a conventionalised body of theory rationalised around CMN, then this may not have come to be the case. As Trevor Wishart suggests, "It is notatability which determines the importance of pitch, rhythm and duration and not vice versa." (Wishart 1983, pp.6-7).

As a mnemonic aid, CMN has undoubtedly helped preserve much music, and bring about many new musical thoughts that may not otherwise have come to be. It has however become increasingly redundant for most musicians as a mnemonic device during the 20th century. Furthermore, the arrival of electronic sound recording and reproduction technology at the turn of the twentieth century has had a massive impact on the way that music is created, heard and conceptualised. With the availability of powerful sound recording and synthesis systems, a notation system that does not record aspects of acoustic ecology, and which is only useful for instruments with a limited timbral scope, has a greatly reduced value. As well as being redundant to an immense number of musicians as a mnemonic device, CMN is also no longer necessary as a basic element in music composition.

Various computer systems now provide numerous visual abstractions of musical sound with which composers can edit, design and audition musical works. As an example, musicians can now permanently record and edit improvised performances using all manner of "instruments", either real or virtual. Musicians can construct music directly out of such recordings using performances by various persons in different spaces and at different times. This would seem to have realised Theodor Adorno's hope that "the direction (Richtungstendenz) of innovation must be changed, transferred to a new plane" (Adorno 1949, p.41). Were he not so incapable of extracting himself from the limited conceptual outlook of his childhood cultural milieu, Adorno may have seen this new timbral plane coming, and approved. Certainly his hero Arnold Schoenberg saw it coming when calling for a theory of "Sound Colour". (Slawson 1985).

6.5 Traditional Music Theory

As Weber and Adorno point out in the previous section, the rationalisation of music and its reification as "language" has been going on for many centuries. The enshrinement of pitch, rhythm and duration in CMN embodies one major foundation in this process. Some would argue that a more significant foundation in this process of rationalisation is the adoption en masse of the equal tempered tuning system. The western tempered tuning system is an artificial tuning convention adopted at the start of the baroque era to meet the pragmatic demands of keyboard makers and musicians. While this is significant in its own right, it is as a result of this widely adopted tuning convention that the complex circular world of western harmony became a viable development.

Western harmony arose during the Baroque era, when a fashion for linguistic music forms such as oratorio was prevalent. Nicholas Cook writes that "Analogies have frequently been drawn between the structural organisation of music and that of language. Indeed, it was at one time assumed that the two were more or less coextensive; baroque music theory was to a large extent an adaptation of the theory of rhetoric and was centred around the expression of textual meaning, so that, as Dahlhaus puts it: 'Instrumental music, unless provided by a program-note with some intelligible meaning, was regarded not as eloquent but simply as having nothing to say.'" (Cook 1990, p.71). The process of making music like language gained considerable momentum in the Baroque era.

Much of the music theory that developed to describe music during the late Baroque period has for better or worse formed the foundations of all music theory that has come since. "More than any other theorist, it is Rameau who established the discursive space within which music theory has operated ever since." (Cook 2002, p.10). Unfortunately, in Rameau's theory "…a privileged domain of knowledge is constructed; subjective experience is explained through being derived from a reality that is cognitively inaccessible to the individual." "Carl Dahlhaus saw the issue of self-evidence as a crucial one for the historiography of music theory, stressing the extent to which "music theory in the 18th and 19th centuries was burdened ... with problems that lay concealed in apparent self-evidence"." (Cook, 2002, p.1). Following in the footsteps of Rameau, Schenker "…saw the theoretical principles that he developed for the common-practice style as natural laws, or at least as firmly embedded in natural laws, that Schenker dismissed the music of other times and places as more or less valueless." (Cook 2002, p.12).

The problem of questionable epistemological foundations permeated mainstream music theory for centuries. When discussing the highly conventional nature of western music, Nicholas Cook suggests that western music theory is profoundly lacking in any coherent epistemological basis. Cook opens his most recent paper with a quote from Leslie Blasius: "The epistemological underpinnings of Schenker's theory are far from obvious", Blasius says. In saying this Blasius undermines the validity of "what sort of knowledge of music it (Schenkerian theory) gives us," and "what sort of truth it aspires to." (Cook, 2002, p.1). For Cook, "it is through its performative effect rather than its epistemological underpinnings that any music theory achieves its cash value." (Cook 2002, p.16). Cook implies that the entire body of music theory is little more than many versions of many possible musical imaginings, and that all such imaginings are profoundly analogical - to the point of being fictional.

Perhaps to conceal a lack of foundation, "… Schenker characteristically presents as universal statements of truth and inevitability (it had to be precisely as it is) what are better thought of as performative injunctions (hear it this way!). Similarly, Robert Snarrenberg has drawn attention to the way in which Schenker constantly invites his reader's participation in the aesthetic act, thereby "poetically co-creating" the musical effect …" (Cook, 2002, p.1). In this way Schenkerian theory presents itself as one possible type of intentionality available to musical thought and the listening process. Cook suggests that "A Schenkerian analysis is not a scientific explanation, but a metaphorical one; it is not an account of how people actually hear pieces of music, but a way of imagining them." (Cook 1990, p.4).

Cook points out that western concert music and related music theory often exist in a circular relationship. By this he means that a lot of music theory is as much prescriptive as it is descriptive of music. Leonard Meyer (1956) suggests that, "on the whole, music theorists have concerned themselves with the grammar and syntax of music rather than with its meaning or the affective experiences to which it gives rise." (Meyer 1956, p.6). Composers heavily indoctrinated with such music theory are somewhat predetermined to fulfil the intentional prescription of this theory in their music. This music is then used as further evidence of the validity of the original theory, and so on, ad infinitum. These compounded circular exchanges between theory and practice have reinforced many rational and linguistic features deep within western music culture. Cook suggests that the "linguistic analogy has received considerable impetus in recent decades from the dissemination of Schenker's approach to music in terms of structural levels on the one hand, and the development of structural linguistics on the other." (Cook 1990, p.71).

The most recent of these linguistic theories is the Generative Theory of Tonal Music (GTTM) of Lerdahl and Jackendoff (1996). GTTM is based in part on Chomskian structural linguistics, Gestalt grouping ideas, and Schenker's theories. Cook suggests that GTTM is also quite similar to Schenker's theory in that it "draws for its performative effect… upon what might be termed multiple epistemological registers: it says how things are, it suggests how you might hear things, it recaptures historical conceptions, and each register merges imperceptibly into the next." (Cook, 2002, p.18).

Like much traditional music theory, GTTM is an "attribute" based system. As Marc Leman puts it, "The attribute theory assumes that aspects of musical representation can be cast in terms of predicate assignments. For example, the properties of duration, height, and articulation can be attributed to a note; other predicates, such as crescendo or articulation-structure can be attributed to a configuration of notes, etc. However, a paradigm shift in cognitive science which took place in the mid-1980s has profoundly changed the way in which we now think about musical representation and memory control. In particular, attributes are no longer considered to be valid entities for representations of musical perception. Instead, musical representations are conceived of in terms of images." (Leman, 1999, p.93).

Like much traditional attribute based music theory, GTTM concerns itself only with idealised structures built on the attributes of pitch and rhythm. It is clearly stated by Lerdahl and Jackendoff that, “Other dimensions of musical structure – notably timbre, dynamics and motivic-thematic processes – are not hierarchical in nature, and are not treated directly in the theory (GTTM) as it now stands.” (Lerdahl and Jackendoff 1996 p.9). To many modern composers using current sound manipulation technologies and the broad palette of sound types provided by such tools, concerns for timbre and acoustic ecology are prior to both pitch and rhythm. To some, pitch and rhythm are merely spatial and temporal subcategories of timbre. Any theory which is divorced from an ecological account of modern sound culture to the degree that it does not even account for timbre can have only limited claim as a means to understand music, let alone constitute an aesthetic benchmark.

Arnold Schoenberg is one figure discussed in Cook's latest paper who exhibited some vision regarding music theory and its shortcomings. "…near the beginning of his Harmonielehre he calls on us to get away from established theory and "again and again to begin at the beginning; again and again to examine anew for ourselves and attempt to organize anew for ourselves. Regarding nothing as given but the phenomena." (Cook, 2002). Instead we have a situation where "…theory, increasingly self-sustaining, becomes a filter through which observation has to pass in order to be accepted. Under such circumstances … "The self-stabilizing, corroborating effect of interdependent premises precludes fundamental revisions, major discoveries, or even accidental breakthroughs." (Cook, 2002).

6.6 Summary

It may be possible to summarise as follows:

• Music has qualities of ineffability that limit the ability to reduce it to linguistic or visual models.

• Aesthetics can involve intuition and analysis.

• Philosophy can be seen to have limitations in developing theories of art.

• Music has many aspects which are culturally determined.

• Music notation may be seen to have a deterministic effect on music practice.

• Attribute theories of music based on music notation are no longer seen as valid approaches to music cognition.

• Much traditional western music theory has performative and circular tendencies.

At this point readers should be open to the idea that there are many types of music, and of music interpretation. Not all music is linguistic in derivation, although all music is open to multiple, if not infinite, interpretations. Phenomenological method requires the bracketing out of preconceived ideas about music. To achieve this it has been necessary to loosen any commitment that readers might have to preconceptions about the essential nature of music. Having attempted this, it now becomes possible to commence an attempt to reconstruct musical thought based on the scientific and ecological realities of musical life in the 21st century.


7 New Music Theory

7.1 Introduction

Numerous radically new forms of music were developed during the twentieth century. In addition, numerous new theories of music were developed. Many of these theories have been psychological or phenomenological in nature, and have been based on imagery rather than attribute systems. Some of these have concerned themselves with new music and the relationship between music and visual imagery. It would seem there is an emerging tradition of music theory which takes into account many of the cross modal features of human cognition described in the early chapters of this thesis.

7.2 Leonard B Meyer

Relationships between music and other sensory modalities were described with notable accuracy by Leonard B Meyer in 1956 (Meyer 1956). Meyer writes that: "The unity of perceptual experience, regardless of the particular sense employed, is also demonstrated by the fact that in experience even single musical tones tend to become associated with qualities generally attributed to non-aural modes of sense perception. This tendency is apparent not only in Western culture, but in the cultures of the Orient and in many primitive cultures. In Western culture, for example, tones are characterized with respect to size (large or small), color value (light or dark), position (high or low), and tactile quality (rough or smooth, piercing or round). Furthermore, it should be noted that these qualities are interassociated among themselves; that is, volume is associated with position (e.g., a large object is generally associated with a low position), and both of these are associated with color" … "Through such visual and tactile qualities, which are themselves a part of almost all referential experience, tones become associated with our experience of the world. Thus the associations, if any, evoked by a low tone will be limited, though not defined, by the fact that in Western culture such tones are generally associated with dark colours, low position, large size, and slower motion." (Meyer 1956, p.261).

7.3 Edgard Varese

Ferruccio Busoni (1866-1924) is often referred to as the "prophet of electronic music". Busoni believed that the "full flowering of music is frustrated by our instruments ... In their range, their tone, what they can render, our instruments are chained fast and their hundred chains must also bind the composer." (Russcol, 1972, p.32). A student of Busoni by the name of Edgard Varese (1883-1965) developed some interesting ideas about music in his pursuit of non-acoustic music making devices. Varese conceptualised works in terms of massive geometrical bodies of sound moving in space. For his 1923 piece Hyperprism, Varese attempted to create "a sense of sound projection in space by the emission of sound in any part or in many parts of the hall as may be required by the score". (Russcol, 1972, p.53). Varese says of his work Integrales, "Integrales was conceived for spatial projection, necessitating acoustic means that did not yet exist but which I foresaw… By permitting the figure and the plane to have unpredictable motion creating complex and unpredictable images, one can anticipate results of complex images unforeseen and unimagined. Furthermore, these qualities could be enhanced by allowing the geometric figure to vary its form as well as its speed." (Mattis 1992, p.162). Varese is the first composer to explicitly describe an approach to compositional thought similar to that central to the spatial audio-visual works on the Music-VR DVD.


7.4 Pierre Schaeffer

Figure 4.1 Recapitulatory table of Pierre Schaeffer's solfège. Reproduced from Carlos Palombini's 1993 thesis.


By conceptualising his music in terms of geometrical objects containing mass located in space, Varese influenced many of the more cross-modal features of Pierre Schaeffer's musique concrete solfege. Schaeffer once said of Varese, "In the paths we were following, Varese, the American, was our single great man, and the sole precursor anyway". (Palombini 1993, p.25).

Besides demanding particular intentional approaches to "reduced listening", Schaeffer applied the principles of phenomenology to devise a system of Typology and Morphology which incorporates many references to sensory attributes from sensory modalities other than hearing. A primary motivation for Schaeffer's theory was the belief that "sound can no longer be characterized by its causal element, it has to be characterized by the effect only.... the notion of the musical note, so intimately linked to the aural character of the instrument, no longer suffices to account for the sonic object..." (Palombini 1993, p.26). In Schaeffer's system those musical elements suggesting attributes of touch and vision include grain, mass, texture, colour and thickness. These can be seen in the recapitulatory table of Schaeffer's solfège in Figure 4.1.

7.5 Denis Smalley

Denis Smalley has devised a system called Spectro-morphology, which is based on Schaeffer's Typo-Morphology work, but which exempts itself somewhat from the reduced listening elements essential to Schaeffer's theories. As Smalley puts it, "Spectromorphology is an approach to sound materials and musical structures which concentrates on the spectrum of available pitches and their shaping in time. In embracing the total framework of pitch and time it implies that the vernacular language is confined to a small area of the musical universe. Developments such as atonality, total serialism, the expansion of percussion instruments, and the advent of electroacoustic media, all contribute to the recognition of the inherent musicality in all sounds. But it is sound recording, electronic technology, and most recently the computer, which have opened up a musical exploration not previously possible. Spectro-morphology is a way of perceiving and conceiving these new values resulting from a chain of influences which has accelerated since the turn of the century. As such it is an heir to Western musical tradition which at the same time changes musical criteria and demands new perceptions." (Smalley, 1986, p.61).

In reference to an attribute of his own approach to acousmatic music analysis, Smalley states that, "It is true to say that vision is at the very basis of the gesture-field, and that the energy motion trajectory is unimaginable without its visual correlations. This would imply that music, and electroacoustic music in particular, is not a purely auditory art but a more integrated, audiovisual art, albeit that the visual aspect is frequently invisible. That in turn suggests some kind of synaesthesia. This is true, but it would be wrong to regard the vision-field as hallucinatory or as a strong, involuntary type of synaesthesia like that of 'colour-hearing'. Rather, it is a weaker, voluntary, associative synaesthesia which will vary in consciousness and activity among listeners. The vision-field embraces both kinetic and static phenomena. For example, the textural design of textiles or rock formations could easily form part of a listener's indicative reference-bank. Musicians, because of their interactive relationship with sounds and sound-structures, tend to regard the vision field as extrinsic to music, but our discussion of the indicative networks reveals pervasive intrinsic qualities. Thus vision must be accorded the status of a network." (Smalley 1996, p.90). (Although it is pertinent, Smalley uses the term "Synesthesia" in a rather colloquial way. See appendix C for a more thorough description of the relationship between music and Synesthesia.)


7.6 Rolf Inge Godoy

Rolf Inge Godoy states that "… Progress in music theory necessitates more holistic approaches to musical sound as well as to musical cognition than what has been the case in much music theory of our culture. In particular, this concerns the inability of traditional music theory (with its fixations on discrete and abstract symbols) to handle global emergent qualities of musical objects such as timbre, texture and contour, as these qualities can only be represented as trajectories in time, and hence as shapes. It is now common knowledge that the shapes of overall dynamic envelopes, shapes of spectra (both stationary and evolving), shapes of transients and other fluctuations in the course of a sound, can be correlated to the perceived qualities of musical sound." … "Other and more complex features of musical sound, such as timbre and texture, can likewise be thought of as shapes and ordered in multidimensional axial systems. The twin concepts of shapes and spaces can thus be seen as a universal strategy for explorations in music theory, providing powerful and flexible tools for thematizing otherwise inaccessible qualities in musical sound." (Godoy 1999, p.8). The work of Godoy and his continental colleagues is amongst the most cognitively sound, and eminently applicable, music research being undertaken in the world today.
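A minimal sketch of this "trajectories in time, and hence shapes" idea is given below. It traces an arbitrary mono signal as a two-dimensional curve of loudness against spectral centroid, using NumPy and hypothetical frame sizes. It is an illustration of the general concept only, and is not drawn from Godoy's own work or from the tools used for the works on the DVD.

```python
import numpy as np

def timbral_trajectory(signal, sr, frame=2048, hop=512):
    """Trace a sound as a shape: one (rms, spectral centroid) point per frame.

    An illustrative sketch of the 'trajectory in a timbre space' idea
    discussed above, not an implementation from the thesis."""
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    points = []
    for start in range(0, len(signal) - frame, hop):
        windowed = signal[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(windowed))
        rms = np.sqrt(np.mean(windowed ** 2))                 # loudness envelope
        centroid = (np.sum(freqs * spectrum) / np.sum(spectrum)
                    if spectrum.sum() > 0 else 0.0)           # spectral "brightness"
        points.append((rms, centroid))
    return np.array(points)   # shape (n_frames, 2): a 2D curve through time

# Example: a one-second tone whose brightness rises as its loudness decays.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.exp(-3 * t) * np.sin(2 * np.pi * (220 + 660 * t) * t)
print(timbral_trajectory(tone, sr)[:5])
```

Plotted, such a curve is literally a shape: two sounds with identical pitch content but different spectral evolution trace visibly different paths.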

Denis Smalley is commendably astute in his ground-breaking intuitive recognition of the multi-modal interpretative potential of musical imagery; however he does not attempt to couch elements of his theory in any firm cognitive science framework. Rolf Inge Godoy describes the basis of a music theory that incorporates cross modal relationships in a more concrete cognitive science framework. He states that "Including schemata from other modalities, particularly those of vision and action, in music theory (and in other domains of musical thought as well, such as for instance in musical semiotics) could give us more comprehensive images of musical objects and especially provide closer links between musical sound and the "ecological" or body-schematic aspects of music. There is now considerable interest in bodily based schematic orderings in the field of cognitive linguistics (Johnson 1987, Lakoff 1987), which in turn has been influenced by work on categorization (Rosch et al 1976, Harnad 1987) where motor schemata (amongst other things) have been assigned an important role in the formation of prototypes. In our case, sound-producing actions and their associated kinematic images seem to be important for the perceived shape of the musical objects. There is some material on this (Sudnow 1978, Mikumo 1994, Todd 1995, Friberg and Sundberg 1997), and I have previously proposed a working hypothesis of a triangular model of image, action and sound (Godoy 1997b), which can be seen as based on a cross-modal understanding of musical imagery." (Godøy, 1999, p.89). An examination of Smalley's Spectro-Morphology work shows that it could easily be re-framed within a cross modal framework established by Cognitive Science.

7.7 Gesture

As Godoy and Smalley suggest above, the body can become closely tied in with visual and sonic imagery during certain approaches to musical creation. Studies of gesture amongst those who support the perceptual bases of cognition theory (Barsalou, 1999) suggest that gesture mediates all modes of communication – especially those with visuospatial content. (McNeill 1992). In this way the mnemonic image of a musical passage can be closely related to the kinaesthetic image required for its reproduction. In a listening context, it is for this reason that the experience of a particular musical image can trigger association with the kind of kinaesthetic image involved in its generation. People can often engage quite physically with a performance for this reason. The relationship between musical imagery and the kinaesthetic imagery of the body is quite close. The relationship between kinaesthetic imagery and visual imagery is quite close as well. As Godoy and Smalley suggest, it is indeed feasible that kinaesthetic imagery plays a large mediatory role in relationships between musical and visual imagery. It is also distinctly possible that all three types of imagery form a triad in which each type of imagery is equally well connected to every other. Recent theories of music and gesture describe the possible nature of this relationship. (Battey 1998).

7.8 Emotion

The mediating force between kinaesthetic, musical and visual imagery has been suggested in this thesis to be a product of affective and emotional forces. This is an area, however, which remains largely unexplored by scientific researchers. As Sloboda and Juslin put it, "Emotion is one of the most pervasive aspects of human experience, related to practically every aspect of human behaviour – action, perception, memory, learning, and decision making. It is thus all the more remarkable that emotion has been neglected throughout much of Psychology's brief history." (Juslin and Sloboda, 2001, p.73).

While emotion has only relatively recently come to be a research focus of Psychology and the Cognitive Sciences in general, the study of music and emotion is in an even earlier state of infancy. Few since Leonard Meyer (1956) have dared tackle the topic, perhaps due to the lingering spectre of behaviourist tendencies in the academies, and perhaps due to a lack of any sound methodology. Marvin Minsky at MIT suggests that, "I think most people assume that emotions are very deep and complicated, because they seem so powerful and hard to understand. But my view is just the opposite: it may be largely because they (emotions) are basically simple - but wired up to be powerful - that they are hard to understand. That is, they seem mysterious simply because they're separate and opaque to your other cognitive processes."


Specifically in reference to the scarcity of research into emotion in the academies, Minsky says, "I think that's because the AI people have suffered from the same misconception that most cognitive psychologists have suffered from, viz., the idea that "well, we'll do the easy things first, like understanding memory and simple reasoning and so forth; but emotions are surely too difficult, so let's put off researching them for now." I once came across a statement by Freud in which he complains along lines like this: "people think I work on emotions because those are the profound, important things. Not so, what I'd really like to understand is common sense thinking. And it is only because that's so difficult, so incredibly complicated, that I work on emotions instead -- because emotions are so much more simple." So I would like to see some music-theorists start with models based on simulating a few postulated emotions, each linked to a few procedural rules about how various sorts of rhythmic, harmonic, and melodic elements might work to arouse, suppress and otherwise engage a listener's various feelings and attitudes." (Minsky and Laske, 1991).
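A toy reading of Minsky's suggestion might look like the sketch below: a handful of postulated emotions, each linked to simple procedural rules that raise or lower its activation according to coarse musical features. The emotions, feature names, thresholds and weights are invented for illustration only and carry no empirical weight.

```python
# A toy "production rule" model in the spirit of Minsky's suggestion above.
# The emotions, features, and weights are illustrative inventions only.
from dataclasses import dataclass, field

@dataclass
class EmotionModel:
    activations: dict = field(default_factory=lambda: {"tension": 0.0,
                                                       "calm": 0.0,
                                                       "excitement": 0.0})

    def hear(self, tempo_bpm, dissonance, loudness):
        """Apply a few hand-written rules linking musical features to feelings."""
        if tempo_bpm > 140 and loudness > 0.7:
            self.activations["excitement"] += 0.3
        if dissonance > 0.5:
            self.activations["tension"] += 0.4 * dissonance
        if tempo_bpm < 80 and dissonance < 0.2:
            self.activations["calm"] += 0.3
        # Gentle decay so feelings subside when not re-aroused.
        for k in self.activations:
            self.activations[k] *= 0.9
        return dict(self.activations)

model = EmotionModel()
print(model.hear(tempo_bpm=160, dissonance=0.6, loudness=0.8))
print(model.hear(tempo_bpm=70, dissonance=0.1, loudness=0.3))
```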

Ten years after Minsky made this statement, Juslin and Sloboda published the first book length treatise on music and emotion since Meyer's (1956) work came into being. (Juslin and Sloboda, 2001). The papers in this book vary in subject and strength, and illustrate many opposing views. In the current context, one notable paper is that titled "Emotional Effects of Music: Production Rules". (Scherer and Zentner, 2001). The authors suggest that, "A parsimonious premise is that musical stimuli provokes emotion in a similar fashion to any other emotion-eliciting event. Thus, the mechanisms described by emotion psychologists may be also applicable to the study of emotion induction via music. There is an emerging consensus that emotion elicitation and differentiation is best understood by assuming a process of event evaluation, or appraisal, that models the way in which an individual assesses the personal significance of an event for its well being on a number of criteria and dimensions." (Scherer and Zentner, 2001, p.366).

This model of emotion is derivative of that originally set out by Ortony, Clore, and Collins (1988). While motivated largely by a concern for artificial intelligence applications, this theory constitutes the first attempt at a structural representation of human emotion. Their main goals "were to present an approach to the study of emotion that explains how people's perception of the world – their construals – cause them to experience emotions." Their working characterization viewed emotions as "valenced reactions to events, agents, or objects, with their particular nature being determined by the way in which the eliciting situation is construed." (Ortony, Clore, and Collins, 1988, pp.12-13).

The appraisal process involved in eliciting emotions is viewed to take place, “in a rudimentary, automatic fashion at lower levels of the CNS (mostly the limbic system), especially for evolutionarily ‘prepared’ stimuli, or in a more elaborated and more effortful process involving the cortical association regions of the CNS as well as the lower centres.” (Scherer and Zentner, 2001, P. 366). It is suggested that musical stimuli may activate low level emotional responses.

Of particular interest in the current study however is the role of memory in the interplay of music and emotion. "It has been suggested that expressive and physiological reaction patterns to emotion inducing events are stored in memory together with the experiential content. (Lang 1979; Lang et al. 1980). In consequence, it is often claimed that recall of past emotional experiences from memory and imagination can evoke similar emotional reactions as in the original experience." (Scherer and Zentner, 2001, p.369). It will be hypothesised here that this points directly to a large proportion of the perceived relationships between music and visual imagery. These relationships may be understood to exist largely as a by-product of memories in which music, visual and other imagery forms - such as kinaesthetic imagery - were bound by a common emotional token. At some later time, a similar musical experience may evoke the emotional token in memory, which in turn gives rise to a renewed experience of various visual and kinaesthetic imagery associated with that token.

7.9 Summary

Various composers and music scholars in the twentieth century devised music theories that better suited their approach to new musical techniques and materials. Edgard Varese conceived of sound as three dimensional bodies moving in space. Pierre Schaeffer developed a theory that fully integrated phenomenological techniques. His listening styles took into account various listening intentionalities ranging from passive to analytical. His employment of cross modal attributes in his typo-morphology work was revolutionary. Denis Smalley extended Schaeffer's work. Continental scholars such as Rolf Inge Godoy are approaching a new music theory built on a firm foundation of cognitive psychology. These new theories are also approaching an understanding of new music and interdisciplinary art that involves cross modal relationships. The relationship between music, gesture and visual imagery seems to be a central concern in the development of new theories. Recent research into emotion and music may be used as a basis for a further exploration of these relationships. In particular, an exploration of the relationships between music, emotion, memory and similarity, and their role in connecting music with imagery from other perceptual domains, needs to be undertaken. All of these areas are in their infancy, and it is hoped that scholars will begin the process of remedying this at some stage soon.


8 Schwarzchild

8.1 Introduction

To the author, "Schwarzchild" is often imagined as a synthesis of Punk, Jazz, Electro-acoustic and 20th Century atonal musics. Schwarzchild attempts to roll twentieth century music into a big, 3D asphalt ball. Schwarzchild's music could be imagined as Alban Berg improvising with a Saxophone plugged into a Marshall stack in virtual reality. It has been commented that Schwarzchild reflects a very inner space, whereas the other two works on the DVD reflect outer spaces more. Upon reflection this would seem to be true. Schwarzchild expresses the stress and toxicity of many years of dense urban living – with just a touch of existential angst. The program note for Schwarzchild provides a good introduction to the work:

"Schwarzchild explores the similarities possible between spatialised sonic and visual phenomena. It consists of objects and scenes that are determined by their sonic constituents. The production process incorporated a digital wind controller, a sampler, analogue synthesisers, phase vocoding techniques, a didgeridoo, an fm tuner, procedural 3D animation and visual effects software, 3D sound equipment and a supercomputing facility. The music was composed first, being largely an improvisation on a midi wind controller. This was then layered and edited with various other instruments until there were five separate parts. These parts were then fed to the 3D animation software Houdini, which processed the audio and midi data to create all motion in the 3D models. The trajectories for each of the five parts were then exported to Lake Technologies 3D sound

97

Time Space Texture: An Approach to Audio-Visual Composition

equipment. The result is a black and white stereoscopic 3D animation with a synthesised and spatialised soundtrack."

8.2 Making Schwarzchild

8.2.1 Introduction

This research project was begun at the start of 1998. The first task undertaken was a familiarisation with some new animation tools in the Houdini 3D animation software called CHOPs. These tools included many Digital Signal Processing (DSP) tools for treating animation, audio and midi data. All time based data could be used in both animation and sound synthesis outcomes. An audio-visual study was undertaken using these animation tools. This short animation was very similar to Heisenberg in many ways. It was colourful, bizarre, and featured sound and imagery created in Houdini. It was considered at the time to be "disturbing" and sublime, but the process involved in making it was devoid of any conceptual methodology that could be described. It was the result of a purely experimental and intuitive creative act, and its creation defied description. This was considered not to be in the interest of a research project which required lengthy documentation in a thesis. In any event, the goal in undertaking doctoral research was to understand how and why such pieces came into being, and were as effective as they were. An intuitive approach was abandoned, and the search began for some conceptual methodology.
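The general shape of a CHOPs-style workflow, musical data filtered and rescaled into animation channels, can be sketched in a few lines. The fragment below smooths an amplitude envelope and maps it onto a hypothetical position parameter. It is a generic, package-independent illustration of the idea, not code from the actual Houdini networks used for the works on the DVD.

```python
import numpy as np

def amplitude_to_channel(amplitude, smoothing=0.9, lo=0.0, hi=5.0):
    """Map an audio amplitude envelope onto an animation channel.

    A generic sketch of the audio-drives-motion idea described above:
    a one-pole low-pass smooths the envelope, which is then rescaled
    into the range of some 3D parameter (e.g. a height or scale channel)."""
    channel = np.empty_like(amplitude)
    state = 0.0
    for i, a in enumerate(amplitude):
        state = smoothing * state + (1.0 - smoothing) * a  # simple low-pass filter
        channel[i] = state
    # Normalise and rescale into the target parameter range.
    span = np.ptp(channel) or 1.0
    channel = (channel - channel.min()) / span
    return lo + channel * (hi - lo)

# Example: a decaying, pulsing envelope becomes a smooth motion curve.
frames = np.arange(240)                       # 10 seconds at 24 frames per second
envelope = np.abs(np.sin(frames / 6.0)) * np.exp(-frames / 120.0)
print(amplitude_to_channel(envelope)[:10])
```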

A respectable understanding of Synesthesia was developed in the months that followed, but the information gleaned from this study was not considered an exhaustive theoretical basis for works. (See (Lyons 2001) in Appendix A). Synesthesia did not reflect the kind of visual mental imagery that the author experienced when listening to music. It was this imagery that was ideally to be


understood and represented in animations. 3D animation tools were ideal for rendering this kind of imagery. Synesthesia, however, was best rendered with 2D paint-style packages. A search for other theories continued.

8.2.2 3D Sound Composition

A familiarisation with 3D sound was also developed during 1998 using the Lake Huron equipment at the Sydney Conservatorium of Music. In addition to having no clear conceptual approach to the creation of abstract audio-visual works, the author had no experience with the composition of music for 3D sound systems with multiple speaker arrays. The effects of distance and velocity were completely disruptive to all composition/production techniques developed for motionless sound sources conceived for stereophonic reproduction.

In a 3D sound system, musical dynamics are dependent on the proximity of a sound source to the listener. If the sound source is moving, its motion will affect its dynamic behaviour. All musical phrasing thereby becomes bound to spatialisation. Every sonic articulation needs to be conceived or imagined with its spatial behaviour in mind. Composition takes on distinctly concrete attributes when it cannot ignore the spatial behaviour of sound sources. Finding ways to integrate 3D sound spatialisation into a music composition made some very special demands on both the composer and the music composition systems required to complete works. At the time highly integrated systems were not available. Composition was undertaken in one studio, spatialisation in another, and animation somewhere else again. It was difficult to understand spatio-visual and sonic relationships when interactivity was so limited. This slowed the development of an intuited understanding of spatial sound.


8.2.3 3D Audio-Visual Composition

This was not, however, the most difficult problem encountered. The combination of spatial sound with the demands of cross-modal audio-visual composition eventually proved to be the most difficult challenge in creating the works on the DVD.

When Schwarzchild was begun, it was imagined that the production/composition process would be linear. The music would be created, then spatialised, and then combined with the imagery. There might be some feedback between the music spatialisation and the location of visual imagery, but it was not understood how this would in turn affect the music – which would then feed forward directly into the visual imagery. This naive approach can be seen in figure 8.1 below.

Figure 8.1 Naive approach to spatial audio-visual production with only one feedback stage.

When the production of Schwarzchild was begun, however, it was realised that it was not to be so simple. Spatial, cross-modal, audio-visual composition presented a profound intellectual hurdle. When it was first realised that the sound spatialisation would affect the music – which would then affect the visual representations – which in turn needed to feed back into the spatialisation, and then the music – something analogous to a cranial inversion almost ensued. This perfectly circular feedforward and feedback relationship between music, spatialisation and visual representation demanded that all aspects be conceptualised simultaneously in some stable working form. Ideas arrived at independently of one element or another invariably suffered badly when realised in the spatial, audio-visual context. Figure 8.2 below may be contrasted with Figure 8.1 above to understand this better.

Figure 8.2 Audio-visual production involving full feedforward/feedback.

This layer of complexity was completely unanticipated at the time Schwarzchild was begun, and its existence required a completely new approach to imagining audio-visual works. Scenes needed to be imagined very much in four-dimensional terms. Sound and imagery needed to be imagined in a way that was coherent with the spatial composition. There was a great danger in coming up with wonderful imagery for sonic passages that was divorced from concerns for spatial ecology. Similarly there was a great danger in coming up with music divorced from spatial and visual concerns. When the spatialisation was added the music might not work, and if it did, there was no guarantee that the associated visual imagery would work. For best results, working spatial audio-visual scenes needed to be completely pre-visualised prior to composition. The distances between the studios in which all three stages of Schwarzchild's production were completed made it impossible to interactively explore different combinations in realtime so as to intuit an understanding. A clear understanding of how to deal with this problem was not therefore really acquired until the Pan 3D sound system was developed for Heisenberg at the end of 2001. (See Appendix H).

In Schwarzchild therefore, a distinct spatial conservatism is present. The camera is locked off and motionless in every scene. Many objects in the various scenes are equally immobile. There are really only three objects which have dynamic spatial motion. The first is the crystal ball object; the second is a white comet-like ball of steam that is associated with a “windy”, band pass filtered white noise sound. The third is the “splat space ship” object that pops around the scene making short bursts of filtered noise at each location. See figure 8.3 below:

Figure 8.3 – “Splat Space Ship” object from Schwarzchild.

8.2.4 Schwarzchild Scene Breakdown

Figure 8.4 – Schwarzchild Scene Breakdown


Schwarzchild can be seen to break down into the following scenes: Titles, Head-1, Pulse Wall, Head-2, Mighty Water, Inverted Head, Didge-land, and end credits. These names can be associated with each section in figure 8.4 above.

8.3 Making Schwarzchild’s Music

8.3.1 The WX5

Schwarzchild was composed in part using a Yamaha WX5 MIDI wind controller. This instrument is similar to a soprano saxophone in some respects. It has breath pressure sensitivity, a pressure sensitive reed, and can be configured to have the same fingering as a saxophone. The main differences are of course that it only produces MIDI information, and is dependent on an external sound synthesis unit to create any sound. It also has the ability to generate an eight-octave range of tones – which is much more than a normal saxophone, and which requires an extended left thumb technique. The right thumb can also be used to control a pitch bend wheel. In Schwarzchild, this wheel was used to control the cutoff frequency of a resonant low pass filter. There is a picture of a WX5 in Figure 8.5 below.

Figure 8.5 - Yamaha WX5 as used in Schwarzchild


8.3.2 Mighty Water and the Heads

The music for Schwarzchild began as an improvisation using the WX5. The MIDI output of the WX5 was fed to an ASR10 sampler. The synthesis algorithm used to generate the rasping sound involved two oscillators set to “mono-mode” to achieve a gliding (glissando) transition between notes. The glide times of the two oscillators were set differently – one slower, and one faster. The effect when these two tones were combined in a distortion unit was a terrible beating as the slower tone approached the pitch of the faster tone. This distorted beating signal was then fed into a resonant low pass filter to achieve the “wah” sound.
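The rasping sound described above can be approximated in software. The following is a minimal sketch in Python (with NumPy and SciPy, which the original production did not use): two oscillators gliding between the same notes at different rates, summed, soft-clipped to stand in for the distortion unit, and passed through a low-pass filter. The glide times, cutoff and waveform are illustrative guesses rather than the actual ASR10 patch values.

import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def glide(f_start, f_end, dur, glide_time):
    # Frequency trajectory gliding exponentially from f_start towards f_end.
    t = np.arange(int(dur * SR)) / SR
    ratio = 1.0 - np.exp(-t / glide_time)
    return f_start * (f_end / f_start) ** ratio

def osc(freqs):
    # Sine oscillator driven by a per-sample frequency trajectory.
    return np.sin(2 * np.pi * np.cumsum(freqs) / SR)

# Two oscillators gliding between the same notes at different speeds:
# the slower one beats against the faster one as it catches up in pitch.
slow = osc(glide(220.0, 440.0, dur=2.0, glide_time=0.6))
fast = osc(glide(220.0, 440.0, dur=2.0, glide_time=0.1))

# "Distortion unit": soft clipping of the summed signal exaggerates the beating.
dirty = np.tanh(4.0 * (slow + fast))

# Low-pass "wah": a plain Butterworth filter stands in for the resonant filter;
# sweeping its cutoff would emulate the pitch-bend-wheel control described above.
b, a = butter(2, 1200.0 / (SR / 2), btype="low")
wah = lfilter(b, a, dirty)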

Some time was spent experimenting with this WX5 instrument / synthesis patch over the end of 1998 and early 1999. Eventually a suitable performance style was arrived at which involved slow legato phrases. Phrasing of this sort permitted the beating of the two oscillators to achieve the greatest effect. The emotional response to the sound created by the synthesis patch was a mild form of hostility. This in turn precipitated a highly dissonant approach to melody.

After a number of refinements to both sound and performance style, a final improvised take was recorded into the MIDI sequencer. This performance was then edited in the form of MIDI data. All MIDI data was time domain stretched within the sequencer. This gives the impression of a huge performer with extensive lung capacity and tremendous breath control. From this data, two main sections were eventually kept. The “head” and the “mighty water” section are both extracted from this performance. (See figure 8.4 above.) All “head” sections are variations or copies of the first. The second head was copied and pasted from the end of the first head and is identical in this regard. The third head is a transposed version of the second head. The first two phrases are also tonally inverted. After all this MIDI data was edited, it was used to drive a final performance of the ASR, which was recorded onto ADAT. These recordings and the original MIDI data were then transferred to computers in the spatialisation and animation studios.
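Time-domain stretching of MIDI data of this kind amounts to rescaling every event time (and duration) by a constant factor, which is what produces the impression of enormous lung capacity. A minimal sketch in Python, assuming note events are held as simple tuples rather than any particular sequencer's file format:

def stretch_events(events, factor):
    """Time-stretch MIDI-style note events by a constant factor.

    events: list of (start_seconds, duration_seconds, pitch, velocity) tuples.
    factor: 2.0 doubles the length of the performance.
    """
    return [(start * factor, dur * factor, pitch, vel)
            for (start, dur, pitch, vel) in events]

# Example: a short phrase stretched to twice its original length.
phrase = [(0.0, 0.5, 62, 90), (0.5, 0.5, 65, 84), (1.0, 1.5, 69, 100)]
stretched = stretch_events(phrase, 2.0)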

An additional version of the mighty water part was eventually created in which the voice was phase vocoded down an octave. In addition, all the main WX5 MIDI parts for the heads and mighty water section were re-recorded using a sine wave in the lowest audible register. All these parts were then combined with the original part to produce a very bass-heavy sonic quality.

A second melodic line was created to provide contrary motion to the original “head” and “mighty water” melodic parts. As the main part rises the secondary part falls, and vice versa. This part was performed using the same synthesis patch as the first line, but in a higher register. At times this part weaves in and around the 15 kHz Nyquist frequency of the sampled wavetable used in the synthesis patch. This is particularly audible in the “pulse-wall” section. In this section the pitch cracks between frequencies that are not part of the equal tempered tuning system.

An improvised performance on a Moog Prodigy provided a further unique musical part, which was added to the “mighty water” section. The Moog was manipulated with one hand controlling the rate of an LFO while the other hand controlled the pitch. The LFO was controlling both the filter and contributing to the pitch. Eventually a performance was recorded which was later cut and pasted into the mighty water section.

8.3.3 Pulse Room

At this point some listening tests were made. The music was listened to repeatedly, and it was decided that the composition needed to be broken up with at least two more sections. The idea for a room that contracted in on the audience in synch with a 20 Hz tone that rises up into the audible spectrum around 50 Hz had been in existence for some time. This tone was created using a sine tone in Csound. The data for the rising sine tone was written out of Csound as a k-stream and imported into the Houdini animation package where it was used to translate the wall towards, and away from, the camera. The comet object and “splat space ship” were added to this scene, and the previously mentioned cracking Nyquist tone was added.
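The rising pulse-wall tone and its control data can be sketched as follows. This is not the original Csound instrument; it is a Python approximation which assumes a linear rise from 20 Hz to 50 Hz and writes one control value per video frame (25 fps) for the wall translation, analogous to the exported k-stream. The file name and duration are placeholders.

import numpy as np

SR = 44100
FPS = 25            # video frame rate used for the animation
DUR = 30.0          # assumed length of the pulse-wall section

t = np.arange(int(DUR * SR)) / SR
freq = np.linspace(20.0, 50.0, t.size)             # 20 Hz rising towards the audible region around 50 Hz
tone = np.sin(2 * np.pi * np.cumsum(freq) / SR)    # the sub-bass sine itself

# One control value per video frame: the instantaneous frequency normalised
# to 0..1, which the animation package could map onto the wall's translation
# towards and away from the camera.
frame_step = SR // FPS
kstream = (freq[::frame_step] - 20.0) / 30.0

with open("pulsewall.chan", "w") as f:             # placeholder file name/format
    for value in kstream:
        f.write(f"{value:.6f}\n")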

The splat ship is a recording of an improvisation on an ARP “Axxe”. This small analogue synthesiser has a random noise function which can be used to trigger notes of random pitch, at random intervals. An ADSR filter envelope was added to these notes, and the resonant frequency of the filter was controlled during the performance. The “noise” preset waveform was used as a sound source.

8.3.4 Didge-Land

The other section that was added was the didge-land section. This was originally based on another pre-imagined idea in which a face made of mangrove roots sang the didgeridoo against a night sky. The didgeridoo has always been associated with the earth in the author's mind. The author has been in possession of a didgeridoo created by the Tiwi people on Bathurst Island since 1982, and has learnt to play it and to circular breathe. A recording of a performance on this didgeridoo was made. This recording was then transposed down an octave using the pvanal and pvoc phase vocoding tools in Csound. This was found to give the didgeridoo a much larger, “mouthy” sound.
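The octave-down transposition itself was done with Csound's pvanal and pvoc utilities. As an illustration only, the same operation (analyse, shift the spectral content down twelve semitones, resynthesise at the original duration) can be sketched with librosa's phase-vocoder based pitch shifting; the file names are placeholders and this is not the tool chain actually used.

import librosa
import soundfile as sf

# Load the (placeholder) didgeridoo recording at its native sample rate.
y, sr = librosa.load("didgeridoo.wav", sr=None)

# Phase-vocoder pitch shift: down one octave (-12 semitones) without changing
# the duration, giving the larger, "mouthy" quality described above.
octave_down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)

sf.write("didgeridoo_octave_down.wav", octave_down, sr)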

Another performance on the WX5 was then layered onto this recording. This performance involved playing a gentle repeating melodic figure of three tones at an increasing rate. At first, the tones are played so slowly that the repetition is inaudible. However, by the end of the section, the listener has become aware that the tone is repeating. One final variation of this melody augurs the end of the scene, and of the piece. A resonant low pass filter is swept across the considerable white noise content of this tone to create a sweeping wind effect which follows the phrasing of the melody.

The final musical part in the section is a recording of an improvisation using an FM radio tuner. This performance was completed by sweeping the radio tuning wheel across a bandwidth that happened to contain many frequencies which produced awful jackhammer-like noise. This noise was so violent and thunderous that it was found to have a valuable aesthetic potential. This recording was transposed down an octave and time stretched – once again using the Csound pvoc and pvanal phase vocoding tools. This sound was then located at a unique position in 3D space for each burst. The harsh jarring effect created by this sound and its random jumping spatial relocation is in sharp contrast to the sustained deep drone of the didgeridoo, and the gentle breathy repetition of the WX5 part. This contrast served to add interest, and to balance out the “energetics” (Rothfarb 1992) of the various parts in the section.

The effect of the FM tuner sound also delays the listener's awareness that the WX5 is playing the same simple three-note melody over and over again until quite late in the scene. This awareness of repetition arises subconsciously at first, and creates an emerging sense of comfort – in spite of the random blasts of FM noise and the general strangeness of the scene.

8.3.5 The Titles and Credits

The music for the end credits was originally built on a drum construction consisting of two different “drum and bass” samples. These combined sequences of polyrhythmic drumming aim to create an awareness of pulse, whilst limiting a sense of metre. Any consistent metrical “one” beat should be imperceptible when both passages of drumming are combined.

An improvisation on the WX5 was then layered over this drumming. Once again the aim of the phrasing in the WX5 was to avoid a strong sense of metre. The phrases are mostly metrical, but emphasise different beats in their arrival and offset.

The music for the titles and credits also features a performance of a bowed electric guitar. In this performance, a steel “slide” was used to bow the strings of a guitar with an open tuning and high-gain active pick-ups. The performance used in the titles is a short, copied extract from the entirety of the performance used in the credits.

8.4 Making Schwarzchild’s Animation

8.4.1 Pre-Visualisation

Once the music for Schwarzchild was in place, a test animation was completed. In this animation, rough trajectories for each object were developed, and a basic primitive object was assigned to each of these trajectories. These objects were then made to behave in a way that corresponded exactly to the conceptual understanding of cross-modal relationships gleaned from studies of synesthesia. This animation was rendered roughly, and put to tape. The tape was then taken to the spatialisation studio and played in conjunction with the spatial sound.

Upon watching the tape it became immediately apparent that this highly conceptual approach wasn’t going to result in a work that would win any Academy awards. It was extremely banal viewing to say the least. It had no resonance with the experience of the music, and certainly didn’t come near to the kind of imagery experienced mentally as a result of musical audition. Despite the fact that no hard theoretical basis existed with which to relate imagery of this detail to music, artistic license and intuitive interpretation firmly re-entered the creative equation at this point. A process of visualisation began.

8.4.2 Visualisation

The visualisation stage was underway by the middle of 1999. A primary process style of thought was first precipitated by undergoing a period of relaxation before listening to the music. Once in a dreamlike physiognomic state (Dailey 1994), the music was played at a reasonable level of volume. Repeated sessions of this visualisation were undertaken, and the visual imagery experienced during each session was written down. Where consistent forms of imagery were experienced during repeated visualisations, these forms were noted carefully.

This kind of visualised imagery is present in all sections of Schwarzchild. Not all components in each scene are, however, the pure product of a prior visualisation. In many cases, each sound in a scene was imagined with an associated image, but there was no room within the visual frame of the projection or computer monitor screen for all visualised imagery to be superimposed. This imaginal potential for simultaneous multiple perspectives has been noted many times since it was first described by Jean-Paul Sartre in the 1940s. (Sartre, 1972) (Cook, 1990, p.88). In any case, the problem of making multiple levels of imagery blend was overcome with some compromises.


8.4.2.1 Head 1, 2 and 3

In the head sections, the imagery associated with the main WX5 part was a singing black hole. Different cross sections down the length of this object responded to different frequencies in the main WX5 part. Each one of these cross sections moves and lightens according to the level of activity in the part of the frequency spectrum with which it was associated. The deepest frequencies are deep within the object, while the higher frequencies are at the lip.

In addition, the melodic contour of the main WX5 part was used to create the spine of a tunnel through which the camera travels. This tunnel is displayed as a seven-edged frame that looks like a spider’s web to some people. As the camera moves through this tunnel, edges flick into visibility once they come within a certain proximity of the camera. This object is used in each head section, and in the end credits.
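The proximity-triggered visibility of the tunnel edges amounts to a per-frame distance test between the camera and each edge. A minimal sketch of that logic in Python (the threshold and data layout are assumptions, not Houdini's internal mechanism):

import numpy as np

def edge_visibility(camera_pos, edge_positions, threshold=8.0):
    # Visible once an edge is within `threshold` units of the camera,
    # so edges "flick" on as the camera approaches them.
    camera_pos = np.asarray(camera_pos, dtype=float)
    edge_positions = np.asarray(edge_positions, dtype=float)
    distances = np.linalg.norm(edge_positions - camera_pos, axis=1)
    return distances < threshold

# Example: three edges along the tunnel spine; only the nearest is visible.
print(edge_visibility([0.0, 0.0, 0.0],
                      [[0, 0, 5], [0, 0, 12], [0, 0, 30]]))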

The imagery associated with the contrary motion part was devised in part using a visualisation process, and in part using the model of cross-modal perception provided by studies of synesthesia. (Lyons 2001) (See Appendix C). This part ended up being a kind of pulsing, semi-transparent fractal crystal.

The imagery associated with the filtered white noise based sounds ended up being a kind of white amorphous comet-like object. The sound of the ARP Axxe ended up being the “splat space ship” described earlier (Figure 8.3). The bass sounds in the heads were subsumed in the main WX5 part.

8.4.2.2 Pulse Wall

The wall in this section had been pre-visualised as a moving wall associated with a low sine wave whose rising and falling in pitch around the bottom of the audible spectrum accelerates. The crystal associated with the contrary motion part was carried over into this scene as well. In the end this object was present in every scene, and in this way tied the many sections together and gave some audience members a sense of narrative. The splat space ship is present in parts of this scene, as is the cloud comet. The main WX5 part re-emerges as an ascending sine tone towards the end of the scene.

8.4.2.3 Mighty Water

The Mighty Water scene was visualised as two overlapping objects. The main WX5 part was strongly perceived to be a massive wave of noise coming towards the audience from the distance. It was hoped to add foam and other effects to the lip of the breaking wave; however, time did not permit the development or rendering of the particle systems needed for this kind of effect. The other major visual layer was associated with the Moog part. This part intermittently pulses to life with a kind of low “moop – moop” sound. This was conceptualised as a kind of circular mouth-like object. Both objects were designed with watery qualities. To fit both objects within the frame, however, one needed to be above the other. In the end it was decided to invert the mighty wave object and place it above the Moog object. This is consistent with the physiognomic idea that bass-heavy sounds are “low”. The “splat space ship”, crystal and comet objects also travel through this scene.

8.4.2.4 Didgeland

As has been mentioned already, the didgeland scene had been pre-visualised. This scene involved the land buckling and rising according to the different spectral peaks of the didgeridoo recording. The pitch of the repetitive WX5 part was used to control the branching and flowering of an L-System. (Prusinkiewicz, 1990). The crystal object was stamped into this object as its flower type. Different numbers of crystal flowers bloom depending on how high the pitch is. The filter which sweeps across this repetitive melody was used to control a fan object which is part of the physical dynamic system which makes the curtains move. The amplitude of the repetitive filter object also makes the opacity and luminance of the curtains increase. In the didgeland scene the “splat space ship” is given a major work-over to make its appearance match the monstrous sonic scale of the FM noise. Both versions of the space ship share the same random motion.

8.4.2.5 Credits

The credits section was visualised as a kind of amorphous tunnel. The drums trigger lights which flash against the inside of the cave in a subtle kind of way. As before, the pitch of the main WX5 part is used to control the journey of the camera through a web-like tunnel.

8.4.2.6 Stereoscopy

As the animation in Schwarzchild developed, a number of stereoscopic image techniques were experimented with. In the end the red-blue anaglyph technique was decided upon for its cost effectiveness and portability. One of the drawbacks of this technique was that images with strong colour were badly filtered by the coloured lenses, which compromised the stereo effect. Because Schwarzchild was such a dark and grey piece of music anyway, it was decided to abandon colour at that point and create a mostly black and white animation with only very faint colour saturation. This also removed the need to associate hues with sonic objects. The subjective association of colours with sounds has been shown to vary dramatically from person to person. (Lyons 2001) (See Appendix A).
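A red-blue anaglyph frame can be assembled from separately rendered left- and right-eye images by taking the red channel from one eye and the green and blue channels from the other. A minimal sketch with the Pillow imaging library; the file names, and the choice of which eye feeds which channel, are assumptions rather than the pipeline actually used:

from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    # Combine left/right renders into a red-blue (red-cyan) anaglyph frame.
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    r, _, _ = left.split()       # red channel from the left-eye render
    _, g, b = right.split()      # green and blue channels from the right-eye render
    Image.merge("RGB", (r, g, b)).save(out_path)

# Example: one frame of a stereoscopic sequence.
make_anaglyph("left/frame_0001.png", "right/frame_0001.png", "anaglyph/frame_0001.png")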

8.4.3 Spatialisation

8.4.3.1 Introduction

Once the visualisation began to be realised as animations in Houdini, it became apparent that the spatialisation for each object needed to be completed as part of the animation process. Before this could happen, a script to export Houdini animation data in the form of Lake Huron “loc” scripts was needed. This script was completed in due course. (See Appendix J for the “loc” export script.) The problems inherent in animation for a limited screen space discussed in the author's 2002 paper (Appendix G – Section 2.2.3) were first noted at this time as well.
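The general shape of such an exporter is simple: sample an object's position once per frame and write time-stamped coordinates to a text file that the spatialisation system can read. The sketch below is only an illustration of that idea in Python; the output format shown is invented, not the actual Lake Huron “loc” syntax, for which see the export script in Appendix J.

FPS = 25  # animation frame rate

def export_trajectory(positions, out_path, fps=FPS):
    """Write a time-stamped x/y/z trajectory to a plain-text script.

    `positions` is a list of (x, y, z) tuples, one per animation frame,
    sampled from the animated object. The line format used here is a
    placeholder, not the real "loc" syntax.
    """
    with open(out_path, "w") as f:
        for frame, (x, y, z) in enumerate(positions):
            t = frame / fps
            f.write(f"{t:.3f} {x:.4f} {y:.4f} {z:.4f}\n")

# Example: a sound source moving along the x axis over three frames.
export_trajectory([(0.0, 0.0, 2.0), (0.4, 0.0, 2.0), (0.8, 0.0, 2.0)], "tr2.loc")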

At this point all the musical parts were reduced to as few basic parts as possible. This was necessary because of a limitation on the number of mono parts that could be spatialised by the Lake Huron at any particular time – at that time, a maximum of five. The musical parts in Schwarzchild were eventually reduced to five main parts, each of which shares a trajectory (TR1 – TR5):

Track Number   Sonic Identifier                 Visual Identifier
TR1            Main WX5 parts                   Black Hole, Mighty Water
TR2            The contrary motion WX5 parts    Crystal Object
TR3            The white noise based sounds     Comet
TR4            The random ARP and FM noises     Splat Space Ship
TR5            Bass sounds                      Moog and WX5 bass parts

The layout of these sounds in the Houdini object level display can be seen in figure 8.6 below. It should be noted that each child object inherits the transformations of any parent object it is connected to above it.


Figure 8.6 – Object level node layout used in Schwarzchild. All main trajectory columns (TR1-TR5) are visible.


8.4.3.2 Moving Objects

As the animations were developed, it was realised that only a few musical parts were going to be moving. These parts were the TR2 crystal, TR3 comet, and TR4 splat space ship. Trajectories were devised for these using the Houdini DSP channel operator (CHOP) tools. An example of one of these CHOP nets can be seen in figure 8.7 below:

Figure 8.7 – Node network used to spatialise TR2 sounds.


In this CHOP net, numerous input data and signal modifier CHOPs are used to generate outputs for spatialisation and object geometry animation. The input data types include MIDI data, amplitude envelopes, and multi-channel spectral analysis data. All signals are modified and combined in various ways before being exported.

8.4.3.3 Compositing

By late 1999, much of the rendering of Schwarzchild was near completion. Renders ran constantly on multiple CPUs at the Sydney Vislab facility. Dropped frames were identified visually. Schwarzchild was around 20,000 frames long, and there were in the area of five layers at most stages. In addition, everything was rendered twice for stereoscopic output. This involved the individual viewing of around 200,000 frames. This took months, and before Loucid was begun, an automated image verification script system was developed.

In late 1999, a new software release for the Lake Huron became available. This system, known as “Sonic Animator”, permitted multiple trajectories to spatialise sounds using the Huron hardware. Prior to this only one sound could be spatialised at a time. With this new system, a binaural recording was made of all five musical parts. In the end, only the TR2, TR3 and TR4 binaural recordings were used. The main WX5 part was treated using a standard stereo mixing and spatialisation technique involving stereo delays and reverberation. The bass parts were left completely dry. All the parts were then edited and mixed in a multi-track software system native to Silicon Graphics computers. The soundtrack for Schwarzchild is in fact a hybrid stereo/binaural recording. Only the moving parts are encoded with an HRTF filter.


Finally all the animations were rendered. The layers were composited together, and the final piece was output to Betacam SP tape. The hybrid stereo/binaural recording was used for the audio. 3D anaglyph glasses were acquired from a distributor in the USA, and Schwarzchild was ready for a performance.

This first performance took place at the Sydney Conservatorium Lake Huron 3D sound studio. An eight-speaker array was used to surround an audience of around twenty staff and students. A powerful projector was used to project the imagery against a far wall, and all audience members were provided with 3D anaglyph glasses. The sound and imagery were synchronised manually as there was no way to synchronise the video playback device, multitrack audio device and trajectory generation software. This problem would only be remedied when the “Pan” system was created.

8.5 A Schwarzchild Retrospective

In its original program note, Schwarzchild was described as, “the toxic progeny of a digital wind controller and a procedural visual effects package. … Schwarzchild might be described as an abstract asphalt aria exhumed from the psychic devastation of the 20th century collective unconscious.” In many regards, Schwarzchild is a “dark” work. Its timbres are harsh at times, and the psychological effect of some of its imagery is deliberately disconcerting.

In Edmund Burke’s words, “Whatever is fitted in any sort to excite the ideas of pain, and danger, that is to say, whatever is in any sort terrible, or is conversant about terrible objects, or operates in a manner analogous to terror, is a source of the sublime; that is, it is productive of the strongest emotion which the mind is capable of feeling.” (Furniss, 1993).


Burke’s description suggests that Schwarzchild might be a work that has sublime qualities. It certainly presents a somewhat bizarre and disconcerting take on nature. When I first saw Schwarzchild, I was happy that it was a coherent work that had acquired a form, and a mystery, that I had not fully intended. Something had emerged during the creation of the work that I had facilitated, but not consciously determined. While I enjoyed the work, and respected it for its particular qualities, I also observed numerous things that I would like to have done better, or at least differently. The camera is static all the way through the piece. The spatialisation is fairly primitive, and does not involve complex relationships between phrasing and spatialisation. There is no colour. The sound is all generated by analogue synthesisers or the ASR10; although there is a little complex software re-synthesis, there are none of the granular and FFT resynthesis timbres used in the soundscapes which I associate strongly with visual mental imagery. Finally, Schwarzchild is very long. This was mainly due to an awareness that PhD composition folios usually contain long works – a goal eventually judged unfeasible given the development time involved in creating 3D-AV works. Quality was given more priority than quantity by the time Heisenberg was made.

Schwarzchild is a first work of its sort by the author, and it has all the hallmarks of an early work. It is a rough diamond. When it was complete, three main areas were identified as needing development:



• Software synthesis
• Integrated audio, spatialisation, and animation systems.
• New rendering techniques to achieve greater photorealism.

Schwarzchild had its American debut at the Not Still Art festival in Brooklyn, New York on Saturday April 29th 2000. It has also been shown at the Electrofringe festival on August 9 2000, and at Live Wires on the 29th of October 2000.


9 Loucid

9.1 Introduction

9.1.1 The Visual Study. The Sonic Study

When Schwarzchild was completed, it was suggested by my supervisor, Dr Greg Schiemer, that I undertake a number of small studies rather than launch headlong into large works. Following the creation of Schwarzchild, three main areas had been identified which needed further research. The first was to better understand software synthesis techniques; the second was to acquire new skills in 3D modelling and materials. The third area was to develop an integrated audio composition, spatialisation, and animation system. A special studio was established at Sydney Vislab with which to pursue these goals. It was tentatively named the Vislab 3D-AV Studio. (See Figure 9.1 below.)

Efforts were made using the Vislab 3D-AV studio to develop an integrated spatial A/V system towards the end of 2000. Numerous attempts to achieve synchronisation between the audio, spatialisation and video hardware at Sydney Vislab failed. It was eventually decided to abandon the Lake Huron equipment and to begin writing a spatialisation system into Houdini using its DSP Channel Operators (CHOPs). Houdini already had animation and audio tools – the spatialisation system was all that it needed. Some early attempts were made to interface existing Csound spatialisation opcodes with Houdini in late 2000. The outcomes were deemed to be unsatisfactory, and the project was shelved temporarily. The multiple levels of complexity involved in this project were not really ready to be tackled until work on the “Pan” spatialisation system began in July 2001. (See Appendix H, and Section 10.1.3 of the current treatise.)

Figure 9.1 - The Vislab 3D-AV Studio

In the meantime, two projects were undertaken. The first was a digital synthesis study. The second was an animation study. The synthesis study was motivated by a feeling of dissatisfaction with the kinds of sounds created by the ASR10 sampler and the vintage analogue synthesisers used in Schwarzchild. There was a desire to create the kinds of sounds the author was hearing in various electro-acoustic music recordings. The product of the synthesis study is the Digital Synthesis Studies: 2000 CD included with this thesis. The animation study was concerned with the development of new 3D modelling and shader techniques. The product of the 3D animation study is the piece Loucid on the DVD.

9.1.2 Loucid - The Band

“Loucid” was originally the name of the band that performed the eponymously titled music in the Loucid animation. The author and his Loucid band-mates went to a secondary college together in Melbourne, and formed the band in 1987 to play at a school fete. By 1991 the band was performing original music and was quite adept at improvising coherent music together. The transient nature of rock lead singers meant that most rehearsals were largely instrumental. As an improvising instrumental ensemble, the band increasingly identified with the experimental aesthetic embodied in broadcasts on a local public radio station called 3RRR.

By 1995, Loucid was very tight musically, and professional-level recordings were undertaken at various locations. All of these recordings were engineered by the author. The author had been making recordings of the band since the late 1980s, and so the band and the author were quite comfortable with the recording process. In July 1995, a session was organised at a beach house on the famous “Great Ocean Road” on the southern coast of the state of Victoria in Australia. The beach house was situated on a hilltop overlooking the Southern Ocean. The image below was taken at Fairhaven from an elevation considerably higher than the house.


Figure 9.2 – Fairhaven Beach and the Great Ocean Road.

9.1.3 The Recording

The Fairhaven recording was conducted over an entire weekend in a highly relaxed way. All musicians set up their instruments in such a way that they had a view not only of each other, but also of the ocean vista below. Mattresses were arranged around the drums and amplifiers to provide some sound insulation for the four or five microphones positioned around the drum kit. The recording was made directly to ADAT digital multitrack tape. Multiple ADAT tapes were purchased, the idea being that the band could relax and play songs without the pressure of using up valuable tape. When everyone was ready to play, tape would be rolled, and we would forget about the fact that we were recording until we decided to have a break. Five 45-minute tapes were recorded by the end of the weekend. Most of the tracks were ten to twelve minute epics.


In 2000, an audio-visual studio was established at Sydney Vislab which featured a Macintosh G3 with a Digi-001 eight-channel I/O board. The author subsequently upgraded his old Atari “Notator” MIDI sequencing software license to a full license of Logic Audio Gold. The author’s ADAT machine and the tapes from the Fairhaven sessions were exhumed, and a selection of tracks was recorded to the hard drive of the G3. After a process of editing the bed tracks, saxophone and synthesis tracks were overdubbed. Eleven tracks were then mixed, mastered and burnt to CD.

9.1.4 The AC3 Opportunity

Around the time this CD was finished, a new supercomputing centre called AC3 was established at the Australian Technology Park in Sydney. The centre featured many of the fastest supercomputers in the southern hemisphere. In November 2000, a marketing representative of AC3 approached the author to render an animation with which AC3 could promote itself. The author was offered unlimited rendering time on the 64-CPU Silicon Graphics machine.

The amount of rendering required in the creation of Schwarzchild had severely stretched goodwill for the author’s work at Sydney Vislab. It seemed likely that nothing as CPU intensive as Schwarzchild was going to be permitted again in the future. It therefore seemed essential that this new opportunity be taken full advantage of. The problem was that no original computer music was ready for treatment in a new animated work, and there was still no integrated spatialisation system with which to develop a new work. The work would have to be stereo, and the only completed music on the author’s computer system at that time was the Loucid tracks. A track was selected, and production commenced at a tremendous pace. The 3D modelling and animation took only four weeks, the rendering about the same amount of time. Loucid was composited and put to tape in the last week of January 2001.


9.2 Making Loucid

9.2.1 The Improvising Saxophonist

The track selected from the CD for an animated interpretation had been tentatively titled “Fairhaven”. It was the first track off the ADATs to have saxophone overdubbed on it and then be mixed and completed. It was on the strength of this track that it was decided to make a whole album. Besides the arrangement, editing and production of the track, the author contributed synthesis and saxophone parts during his PhD candidature.

Two saxophone parts were recorded separately. The first track was not listened to when the second was recorded. Both parts were improvised, and recorded to hard drive in one take. It was then discovered that both parts worked together very well when combined. Good “accidents” like this seem to happen to the author a lot more than they used to. Loucid in particular is riddled with them. Some composers may call this a haphazard approach to an art that is supposed to be constructed in a deliberate way.

In the author’s view, the “reality” of life corresponds to an essence that is embedded in – what might be described briefly as – a chaotic, multidimensional and irregular cyclic repetition of experiential phenomena. The approach taken to music composition involves reducing such phenomena to some intelligible musical form, which responds to, and mirrors, the essential qualities of the original chaos-tinged patterns. Composition can be deliberated, but it must not be so rigid that it cannot take advantage of unexpected events. The author chooses to walk a line between deliberation and the unexpected. In this way, when chaotic elements arise in a composition they are immediately harnessed and deliberately put to good use.


This approach to the unexpected is a necessary skill for an improvising musician. Saxophonists in traditional ensembles in particular need to be open to a range of unexpected situations that a band can deliver. A good player will deal with any irregularities or mistakes so fluidly that they end up looking rehearsed or intended.

There are problems with this approach however. Often inspiration doesn’t come on call, and a mediocre outcome can result. It is possible, however, to improve the probability of a successful improvised performance in a number of ways. One way is with physical and psychological preparation. A second way is to switch one’s modes of thought from intuitive to analytical, and make deliberate use of a software sound editing system with a cut and paste function.

In regard to the first approach, preparation can be undertaken in any number of ways. The repetitive breathing involved in swimming, walking and dancing can be a successful physical preparation. Following this physical preparation, some meditation can be a good way to create the kind of mental “openness” described in phenomenological techniques. Some artists employ sleep deprivation. Some like a nice glass of Shiraz, others like a cocktail of drugs. In any case, when the right state of mind, body and spirit has been achieved, it is time to press the “record” button, and forget everything except the music.

In regard to the second approach, audio editing software systems are of tremendous use to sonic artists. Sound sculpting tools dramatically improve the probability that a good musical part can be salvaged from any improvised performance. It is one thing to be able to prepare well, but it is another to be able to cut and paste. With such tools, the best sections of multiple improvised performances can be edited together. Good sections can be kept, and unwanted sections removed. The entire part can be put into a room with acoustics that better suit the performance. Alternatively, the entire part can be resynthesised. A combination of these techniques can produce a composition that has good form and good expression.

Both preparation and cut’n’paste approaches were used in the preparation of the saxophone tracks for Loucid. In Loucid, the bass and drums are fairly repetitive, and this sets up a container in which the guitarist undertakes a highly free-form improvisation. As a saxophonist and a composer, the author sought to weave a melodic thread between the chaos of the guitar part and the regularity of the rhythm section. A measured response to each part was required.

As has been mentioned previously, the two saxophone tracks that were recorded for Loucid were later found to be complementary in a number of sections when combined. When one part was performing long legato phrases in the higher register, the other part seemed to be performing more dynamic passages in the middle register.

The Logic Audio software was also used to overdub a new finish for the saxophone part. Originally the saxophone had concluded the song on the tonic. It was then noted that the guitar also concluded on the tonic. For this reason a new ending for the saxophone part was recorded in which the saxophone repeats the semitone interval between the second and minor third. This decision was more rational than intuitive. It was already known that one way to avoid a big naff finish where everybody finishes on the tonic was to add a little suspended harmonic colour. This particular phrase denies the work complete closure, and begs some kind of ongoing emotional response. At the same time that this final phrase is unfolding, an emerging torrent of resynthesised train brakes rises in contrast.


9.2.2 The Synthesis

After the saxophones were added, the track was mixed and mastered and burnt onto a CD. Following a number of listens outside the studio, it was decided to add a synthesised section to the beginning and end of the piece. To begin with, a Risset-style endlessly rising tone was created for the beginning. (Risset, 1989). This was layered with some phase vocoded medieval choral music and field recordings of trains. All of this material had been time domain stretched using the pvoc and pvanal phase vocoding tools in Csound. It was noticed quickly that the time stretched choral music was in the same key as Loucid. Quite a long extract of this music was available, and so this was pasted into a couple of spare tracks. It was then manipulated until it was found to work particularly well throughout the entire length of the piece. Being more a subliminal “sweetener” than a main musical part, this choral track is not meant to be particularly noticeable. It is faded down during sections where the combination becomes dissonant. At the end of Loucid, some more phase vocoded field recordings were added. The track was then remixed and remastered.
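A Risset-style endlessly rising tone is built from several sine components spaced an octave apart, each glissing slowly upwards while a fixed spectral envelope fades it in at the bottom of the spectrum and out at the top, so that components wrap around inaudibly and the ensemble seems to rise forever. A compact Python sketch, with all constants chosen purely for illustration:

import numpy as np

SR = 44100

def risset_rise(duration=20.0, base=40.0, octaves=8, cycle=10.0):
    # Endlessly rising Shepard/Risset glissando as a mono float array.
    t = np.arange(int(duration * SR)) / SR
    out = np.zeros_like(t)
    for k in range(octaves):
        # Fractional octave position of this component: it rises by one octave
        # every `cycle` seconds, wrapping from the top back to the bottom.
        pos = (k + t / cycle) % octaves
        freq = base * 2.0 ** pos
        # Raised-cosine spectral envelope: silent at both spectral extremes,
        # so each component wraps around without an audible discontinuity.
        amp = 0.5 * (1.0 - np.cos(2 * np.pi * pos / octaves))
        out += amp * np.sin(2 * np.pi * np.cumsum(freq) / SR)
    return out / octaves

tone = risset_rise()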

9.2.3 Preparing the Audio for Analysis

Loucid presented some unique challenges from the point of view of animation data extraction. Schwarzchild had been animated almost entirely using Csound k-stream and MIDI data. It was thought to be a great challenge to see if the automated animation techniques employed in Schwarzchild could be applied to a live performance. To use the music to drive the animation, all the instruments needed to be isolated. Then each part needed to be reduced to spectral, amplitude and fundamental pitch data types.

The drums had been recorded onto four tracks, and there was a lot of spill from all the drums and both guitars. A series of band-pass filters and noise gates was necessary to extract the hi-hat, snare and kick drum from the spill. Each of these tracks was then manipulated further in Houdini to remove any unwanted events.

The bass guitar had been recorded to ADAT using a direct input (D.I.) feed from the back of the bass amplifier. The signal was quite free from spill for this reason. However, it was almost impossible to pull a useful amplitude envelope off the bass guitar due to the sustained signal power of the low frequencies. The attack of various notes did not register against the constant power of the lower frequencies. The rhythmic phrasing of the bass line was eventually extracted using a series of band-pass filters and noise gates to isolate the clicking sound of the plectrum. This was then re-enveloped to better approximate the perceived ADSR envelope of the bass part. The FFT tools in Houdini successfully determined the fundamental frequency of the bass guitar.
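The plectrum-click extraction just described can be sketched as a band-pass filter tuned above the sustained low frequencies, a crude gate, and a decaying envelope re-imposed on the detected clicks. The band edges, threshold and decay time below are illustrative guesses rather than the settings actually used:

import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def plectrum_envelope(bass, low_hz=2000.0, high_hz=6000.0,
                      threshold=0.05, decay_s=0.2):
    # Approximate rhythmic envelope of a bass part from its plectrum clicks.
    # Band-pass around the click energy, well above the sustained low frequencies.
    b, a = butter(2, [low_hz / (SR / 2), high_hz / (SR / 2)], btype="band")
    clicks = lfilter(b, a, bass)

    # Noise gate: keep only samples whose rectified level exceeds the threshold.
    gated = np.where(np.abs(clicks) > threshold, 1.0, 0.0)

    # Re-envelope: a one-pole decay turns the gated impulses into an
    # ADSR-like contour usable as an animation channel.
    env = np.zeros_like(gated)
    coeff = np.exp(-1.0 / (decay_s * SR))
    level = 0.0
    for i, g in enumerate(gated):
        level = max(g, level * coeff)
        env[i] = level
    return env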

The electric guitar was also recorded with a direct feed from a stereo guitar effects unit (Alesis Quadreverb GT). There was no spill from other instruments in these two tracks. A very broad amplitude envelope was easily acquired. The fundamental notes of the guitar’s very compressed and reverberated performance were difficult to establish, and very long averaging envelopes needed to be applied to the data to achieve a useful animation channel.

The two saxophone parts provided an amplitude envelope easily enough. An accurate fundamental pitch was more difficult to acquire. The metal mouthpiece and thick reed used by the author created many overtones and inharmonic partials which compromised the ability of the Houdini pitch CHOP to determine a fundamental pitch. This data was arrived at with some difficulty.


9.2.4 Animating Loucid

9.2.4.1 Pre-visualisation

Loucid was quite extensively pre-visualised. As with Schwarzchild, this involved a relaxation session to induce a hypnogogic state. All the camera moves were arrived at prior to animation by listening to the music in this way. As the music was listened to, an exact timing and a highly gestural camera move were written down. All the camera moves happen on beats and work very closely with the saxophone melody. In a way, the camera animation is quite like a dance.

Another significant event that was arrived at during this stage was the opening and flattening of the cavern to form a landscape. This was based in part on a study of the perception of spaces and containment in music. (Brower, 1997). It was found that the saxophone playing the “head” in the upper register for the first and last time created a crescendo whose emotional effect corresponded to an opening feeling. This seems to be a powerful effect.

The physical shapes and motion of the objects associated with each instrument were also arrived at during visualisation. There were however some considerable problems involved in trying to create visual objects that represented each instrument in a way that permitted each object to be visible at all times. In particular, the object associated with the bass was modified a number of times. At first it was going to be a semi-transparent film that sat in the foreground of the scene. This was found to obscure the rest of the scene in a way that dispersed the effectiveness of the other imagery. The problem of occlusion had occurred in Schwarzchild as well, but it was particularly difficult to resolve in Loucid.


A kind of formula that was also used in Heisenberg started to appear in Loucid. The bass object is always assigned to a large wall or ground object that rests in the background of a scene. The problems with occlusion and the bass object formula have also been documented in Lyons (2002) (See Appendix G Section 3.2).

9.2.4.2 Buckling Walls

The Loucid animation begins with an ascending journey towards a bright light between two buckling walls. The buckling walls respond to the music in a number of ways. The high frequency peaks of the phase vocoded train recordings cause the walls to bulge outwards. In addition the colour of the walls changes according to the note being sung by the tenor voice in the vocoded choral music. The camera rises up through the walls to parallel the upward motion of the endlessly rising Risset-style Shepard tones.

9.2.4.3 The Bass Cavern

As the rising bells of a Mark Tree ring out, a burst of blue and white particles erupts. The viewer then emerges within a cavernous space. This space expands and contracts as a function of the bass guitar's spectral content. Different cross sections of the cavern respond to different bandwidths of the bass. The overall amplitude of the bass guitar is mapped into the mouth of the cavern. In addition the fundamental pitch of the bass line is used to control the density of the texture on the walls of the cavern. As the bass line plays higher notes, the texture draws inwards. As it plays lower notes, the texture expands.

9.2.4.4 The Saxophone Object

The dominant focal point within the cavern is the object associated with the saxophone. This strange double-ended object appears to be like an old fashioned egg timer on its side with a mouth at either end. The end of the egg timer facing the camera is associated with the main saxophone part. The end facing away is associated with the secondary saxophone part, which can first be heard about halfway through the track. As the front mouth articulates the saxophone parts in the music, spindly arms embedded within the mouth weave the sound out into the cavern like the legs of a spider spinning a web. The shape of the mouth flattens and curls outward as the saxophone plays higher notes, and the whole model moves and gestures to accentuate the phrasing. The gesturing of the model is controlled by the fundamental of the saxophone line. Each note has a specific value. These changing values are fed into a counter. This value is then used as the seed in a random function. In this way no two seeds are ever repeated. The output of this random function is used as a rotation parameter for the saxophone mouth.

9.2.4.5 The Snowflake Phalanx

The saxophone object is ringed by a phalanx of snowflake-like objects, which re-configure with a new shape and in a new formation in synch with the progression undertaken by the bass guitar. These objects are all individual L-systems. Each snowflake branches in different ways each time the bassline progresses. In addition, their rotation on their individual axes is controlled by the bass line’s amplitude. As a collective group they also rotate around the saxophone object each time the bass progresses. Each time they stop, they form a new shape. The position they take is determined by a mathematical “pi” expression which makes use of a unique number that each snowflake has. This expression looks like this:

-chop("/ch/inst_bass/countenv/squarepitch")*$CY*$PI

What this expression is basically saying is, “Each time the bassline changes, rotate in an anticlockwise direction by a number of degrees which equals your individual number multiplied by PI.” The shapes formed by this expression are quite interesting. What is also striking is that the most interesting shapes formed by the somewhat randomised PI expression fall on significant passages of the music. The most notable of these is the shape they form at the very end. This was all pure chance.

9.2.4.6 The Guitar Streaks

Streaks of coloured light stream toward the camera from a distant source through the far end of the space. These streaks respond to the amplitude and fundamental pitch of the electric guitar in the music. At lower frequencies these streaks of light are deep red, but as the pitch of the guitar rises they become orange, yellow, blue and eventually white. More streaks emerge – and with a greater initial velocity – the louder the guitar plays.

9.2.4.7 The Drum Lights

Throughout the piece the cavern and its contents are illuminated by lights which respond dynamically to the drumming in the music. At the crescendo of the music, the cavernous space opens and flattens to become a beautiful landscape under a distant blue supergiant star, over which the final bars of the music unfold.

9.2.4.8 The Loucid Camera

The camera movements were recorded in real-time into Houdini in 35 sections with a standard computer mouse, and then enveloped using CHOPs to achieve smooth motion. Each of these 35 sections was then plugged into a switch, which cut between camera moves at predetermined beats. The output of this switch was then enveloped to achieve the seamless snapping motion between each move. The camera was in effect the only hand-animated object in the whole animation. The camera animation took only about five hours once the pre-visualisation was complete. Everything else besides the camera was controlled algorithmically by the music.


9.3 The Digital Synthesis Studies CD

The Digital Synthesis Studies CD was completed using three tools. Granular synthesis and re-synthesis was completed using the “Metasynth” software. Phase vocoding of various recordings was completed using the “Csound” pvoc and pvanal tools. Finally, all the resulting sounds were assembled in “Logic Audio Gold”.

The resynthesised sounds were mainly recordings off CDs. They have been stretched and often retuned to different scale systems. It would be impossible to work out their original source without being told. The straight granular synthesis was completed by painting various images in the spectrum window, and then smearing these paintings along the time axis. The waveform used to stamp at each pixel point was drawn in a window provided for this task.

The recordings used in the phase vocoding sounds were mostly made by the author, using a stereo microphone and a portable DAT machine. The sources include a shopping trolley, and various metal, glass and stone objects which are shaken, hit and manipulated in various ways. They also include recordings taken all across Sydney, including King Street in Newtown, Redfern train station, and the bus stop near Central Station.

The aim of this CD was not so much to produce an electro-acoustic masterpiece. It was purely to experiment with different sound generating techniques for use in the development of future works.


9.4 Loucid Retrospective

As discussed already, Loucid was undertaken primarily as a visual study. It was an experiment which explored the use of live sound sources to create automated animations. The fact that the piece also documents the author's animation, sound synthesis, sound engineering and saxophone performance skills was also appreciated.

An understanding of the problems involved in constructing spatial scenes full of moving objects, all of which need to be seen at all times by a moving camera, was developed in the making of Loucid. All the objects in the scene were arrived at as ideal visualisations of a sound source. All of these objects had to compete for space within the limited field of view of the camera and the screen. For this reason they frequently occluded each other, or were incompatible in some other way when combined. Various compromises needed to be made in order to avoid this problem. The visualisation of individual sound sources needs to take into account the fact that the visual range of a TV screen is much smaller than that of the imagination. For scenes to work well, all objects need to be visualised as a community in which all musical parts interact with each other in a non-occluding way.

A number of new techniques were developed in Loucid. Amongst these were new CHOP manipulation techniques in Houdini. New rendering techniques were also developed. Foremost amongst these was a multiple stage script that made sure every frame in a sequence was rendered correctly. This was necessary because a renderer will often "drop" a frame for various reasons. When this happens, frames are left blank or half rendered. If left uncorrected, flashes of black occur during playback of the animation at 25 frames per second.


The render script had a few stages. First it checked the many thousands of individual images for dropped or half-rendered frames. If a dropped or incomplete frame was encountered it was deleted. The next stage then analysed all the sequences of frames in a directory to see if there were frame numbers missing from the sequences. These missing values were then saved into a data array, from which new Houdini render scripts were generated to re-render those frames.
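The sketch below, in C++, shows the general shape of such a frame-checking pass, assuming image files named frame.0001.tif, frame.0002.tif and so on in a single directory. The naming scheme, the minimum-size test used to catch half-rendered files, and the function name are assumptions made for illustration; the author's actual multi-stage script deleted bad frames and then generated Houdini render scripts to re-render them.

    #include <cstdint>
    #include <cstdio>
    #include <filesystem>
    #include <set>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    // Return the frame numbers that are missing, after deleting any file that
    // is suspiciously small (i.e. probably a dropped or half-rendered frame).
    std::vector<int> findBadFrames(const fs::path& dir, int first, int last,
                                   std::uintmax_t minBytes) {
        std::set<int> present;
        for (const auto& entry : fs::directory_iterator(dir)) {
            const std::string name = entry.path().filename().string();
            int n = 0;
            if (std::sscanf(name.c_str(), "frame.%d.tif", &n) == 1) {
                if (entry.file_size() < minBytes)
                    fs::remove(entry.path());      // treat as a dropped frame
                else
                    present.insert(n);
            }
        }
        std::vector<int> bad;
        for (int n = first; n <= last; ++n)
            if (present.count(n) == 0) bad.push_back(n);
        return bad;
    }

    int main() {
        // Frames that fail the check would be fed to a re-render script.
        for (int n : findBadFrames("renders", 1, 7500, 100 * 1024))
            std::printf("re-render frame %d\n", n);
        return 0;
    }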

This saved an immense amount of time, and gave the author a real taste for the power of scripting and automating tasks. After Loucid was completed, this new lust for programming power resulted in a study of the C++ and Perl programming languages. This all came in handy when development of the "Pan" 3D sound spatialisation system began. (See Appendices H and I.)

Loucid had its debut on the ABC’s “rage” music video program on the 6th of April 2001, and was performed at the annual Australasian Computer Music Conference “Waveform 2001” on July 14th 2001.


10 Heisenberg

10.1 3D Sound in Houdini and Csound

10.1.1 The Seed

In the middle of 1999 a last-ditch effort was made to have Schwarzchild ready for a performance at the Australasian Computer Music Conference in Wellington, New Zealand. There were some difficulties at the time with the Lake Huron system, and an attempt was made to create a 3D sound spatialisation system in Houdini. During three days in which almost no sleep was had, a rough 8-channel spatialisation was created without the aid of any monitoring equipment other than a pair of stereo headphones. The outcome was not overly impressive; however the idea that Houdini could be used to spatialise 3D sound was revived in late 2000 when all other spatialisation options failed.

10.1.2 DSP Skills

By the time Loucid was completed, some scripting skills in UNIX cshell, zshell and perl had been acquired. Skills with the native Houdini hscript language - which is a derivative of UNIX cshell - had been in place for some time. After Loucid was completed, a period of research began into Csound and the C++ programming language. "The Csound Book" (Boulanger, 2000) was bought, as was a large book called "Practical C++" (McGregor, 1999). The Csound book was worked through from front to back, and the CD-ROMs that come with it were similarly scoured. Particular interest was given to the sound spatialisation and reverberation sections in the Csound Book. The practical C++ book was also read from front to back.

Not long afterwards a graphics workstation and a student license of the Houdini 3D animation software were bought. In July 2001, development work on synthesis algorithms in Houdini began. A new knowledge of digital audio signal processing made the raw DSP power of Houdini’s Channel Operators (CHOPs) more apparent. Synthesis patches in Houdini were built that explored FM, granular and FFT based synthesis techniques. Various pitch tables and algorithmic composition networks were also explored.
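For the record, the simplest of those techniques - two-operator FM - fits in a few lines of code. The sketch below renders one second of a frequency-modulated sine tone into a buffer; the carrier frequency, modulator ratio and index are arbitrary assumptions, and the code is not taken from the Houdini patches themselves.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    const double kPi = 3.14159265358979;

    // Render one second of simple two-operator FM: a modulating sine wave
    // varies the instantaneous frequency of a carrier sine wave.
    std::vector<float> fmTone(double carrierHz, double modHz, double index, double sampleRate) {
        std::vector<float> out(static_cast<std::size_t>(sampleRate));
        double phase = 0.0;
        for (std::size_t n = 0; n < out.size(); ++n) {
            double t = n / sampleRate;
            // Instantaneous frequency = carrier + (index * modHz) * sin(modulator phase).
            double freq = carrierHz + index * modHz * std::sin(2.0 * kPi * modHz * t);
            phase += 2.0 * kPi * freq / sampleRate;
            out[n] = static_cast<float>(std::sin(phase));
        }
        return out;
    }

    int main() {
        // A low carrier with a 3:2 modulator ratio and a moderate index.
        std::vector<float> tone = fmTone(261.63, 392.44, 4.0, 44100.0);
        return tone.empty() ? 1 : 0;
    }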

At some stage in August 2001, a copy of the Dodge and Jerse (1997) computer music book was discovered in the library. It was realised that the spatial sound system described within this book could be implemented using the DSP tools within Houdini. Further material on spatialisation was referred to in Roads (1990), Begault (1994) and Chowning (1971).

10.1.3 Pan

"Pan" is the name given to the spatialisation system developed by the author, which spatialises sounds in Csound using information from a Houdini animation. Pan worked with varying levels of success from around early October 2001. At that time, however, all the audio spatialisation was being rendered inside Houdini except for the Doppler shifts, which were completed by exporting all audio to Csound and then importing the Doppler-shifted audio back into Houdini. This system ended up being deprecated for various reasons related to processing time and memory usage in Houdini. A new stage of development began in early October, and by early November 2001 Pan had taken on its final form - although modifications were made up until May 2002.


In its final form, Pan completed all calculations involving the relationships between the camera, the sound generating objects, and their acoustic environment. This information was calculated at a low sample rate, and then exported to Csound where all audio sample rate processing took place. Pan was capable of calculating the following:

• Doppler shifted and spatialised unreflected sound sources.
• Six Doppler shifted and spatialised early reflections.
• Dynamic reverberation for varying room shapes and surface materials.
• Wall proximity determination.
• Enveloped source proximity limitation to avoid clipping.
• Attenuation scaling to simulate differing distance related decays for varying relative sound source amplitudes.

A minimal sketch of the distance and Doppler calculations at the heart of this processing is given below.
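In the spirit of the treatments in Dodge and Jerse (1997) and Chowning (1971), the following C++ sketch shows the kind of per-frame calculation involved for a single unreflected source: the distance from source to listener yields an inverse-distance gain, and the rate of change of that distance yields a Doppler pitch ratio. The speed of sound, the reference distance and the function names are assumptions for illustration; in Pan these values were computed inside Houdini CHOPs and exported to Csound rather than written in C++.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    double distance(const Vec3& a, const Vec3& b) {
        return std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }

    // Per-frame spatialisation values for one source relative to the listener.
    struct Spat {
        double gain;     // inverse-distance amplitude scale
        double doppler;  // pitch ratio to apply to the source
    };

    Spat spatialise(const Vec3& srcPrev, const Vec3& srcNow, const Vec3& listener,
                    double fps, double refDist = 1.0, double speedOfSound = 343.0) {
        double dPrev = distance(srcPrev, listener);
        double dNow  = distance(srcNow, listener);
        double radialVel = (dNow - dPrev) * fps;   // positive when receding
        Spat s;
        s.gain = refDist / (dNow < refDist ? refDist : dNow);   // clamp the near field
        s.doppler = speedOfSound / (speedOfSound + radialVel);  // > 1 when approaching
        return s;
    }

    int main() {
        Vec3 listener{0, 0, 0};
        Vec3 prev{40, 0, 0}, now{38, 0, 0};   // source approaching at 2 metres per frame
        Spat s = spatialise(prev, now, listener, 25.0);
        std::printf("gain %.3f, doppler ratio %.4f\n", s.gain, s.doppler);
        return 0;
    }

The six early reflections were presumably treated in the same way, with the reflected path lengths substituted for the direct distance before the reverberant sends were added.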

Below: Figure 10.2 – The Pan CHOP network within Houdini. All spatialisation preprocessing was completed here before being exported to Csound for audio processing.


The hscript section of Pan is rather large, and is included in Appendix H. Pan was designed to handle different spatialisation situations in as dynamic a way as possible. Because Houdini's hscript does not permit nested loops, hscript was used to write some perl scripts which in turn handle the generation of the Csound orchestras, for which nested loops were desirable.

In broad terms, the Pan hscript involves the following stages:

• Initialise hscript variables.
• Write script to save raw audio from Houdini. (Some music composition was completed in Houdini.)
• Write hscript to save direct distance channels from Houdini.
• Write hscript to save direct azimuth channels from Houdini.
• Write hscript to save speaker reverb amp channels from Houdini.
• Write hscript to save reverb time channel from Houdini.
• Write hscript to save reflected distance channels from Houdini.
• Write hscript to save reflected azimuth channels from Houdini.
• Use hscript to generate a perl script which generates the spatialisation orc file for Csound:
   • Perl directory existence tests for destination drives.
   • Initialisation of Csound header and global variables. Import audio, distance, angle and other CHOP data to buffers.
   • Direct sound source spatialisation for each sound source and speaker:
      • Doppler shifts
      • Distance related frequency attenuation
      • Distance related gain attenuation
      • Local/global reverberation ratio calculation
      • Panning for each sound
   • Early reflection calculation for each sound source and speaker:
      • Doppler shifts
      • Distance related frequency attenuation
      • Distance related gain attenuation
      • Panning for each sound
      • Reverberation of early reflections
   • Output direct and reverberated signals to disk.
• Write script to execute the perl orc file generator.
• Generate spatialisation sco files.
• Generate Csound orc and sco execution files.
• Generate hscript to execute the hscript that saves raw sound CHOP channels.
• Generate hscript to execute the hscript that saves azimuth, distance and other CHOP data channels.

A toy illustration of the script-writes-script pattern at the centre of this pipeline follows below.
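The C++ fragment below loops over sources and speakers and writes a skeleton Csound orchestra to disk. It is not the Pan generator itself (which was hscript writing perl writing Csound); the file name, the header values and the instrument layout are assumptions made purely to show the shape of the approach.

    #include <fstream>

    int main() {
        const int numSources  = 4;
        const int numSpeakers = 5;

        std::ofstream orc("pan_generated.orc");
        // Csound orchestra header (values are illustrative).
        orc << "sr = 44100\n"
            << "kr = 4410\n"
            << "ksmps = 10\n"
            << "nchnls = " << numSpeakers << "\n\n";

        // One instrument per source. The nested loop is the sort of construct
        // hscript could not express directly, which is why perl generated the
        // orchestra in the real system.
        for (int src = 1; src <= numSources; ++src) {
            orc << "instr " << src << "\n";
            for (int spk = 1; spk <= numSpeakers; ++spk) {
                orc << "  ; source " << src << " -> speaker " << spk
                    << ": doppler, attenuation, panning, reverb sends\n";
            }
            orc << "endin\n\n";
        }
        return 0;
    }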

10.2 Making Heisenberg

10.2.1 Introduction

In the discussion to come, the individual scenes in Heisenberg will be referred to using the names in figure 10.3 below:

Figure 10.3 – Heisenberg scene breakdown and nomenclature.

Heisenberg was very much developed from start to finish, scene by scene. Each scene completely used up the 100 gigabytes of hard disk space available on the author's workstation. It was not possible to have the components of more than one scene on the system at any one time. The scenes had to be completed, composited, and their components deleted. The Pole Huts and Blue Room scenes were a partial exception to this. Because they share geometry, and were the first scenes rendered, room on the system was found for them both simultaneously.

It is difficult to discuss the design of Heisenberg in terms of sound and imagery separately. Each scene was definitely designed with sound and vision in mind from the ground up. This is different from both Schwarzchild, and Loucid, which were largely visualisations of an existing piece of music.

Much has already been written about the making of Heisenberg in Lyons (2002) (see Appendix G). What is written here complements the discussion in that paper.

10.2.2 Pole Huts

10.2.2.1 Design

The first scene in Heisenberg was devised so as to showcase Pan as much as possible. It aimed to do this with a scene involving a camera moving amongst moving sound sources. The scene was evolved within Houdini. Audio-visual ideas gave rise to more audio-visual ideas, and each element evolved from another. First the huts themselves were arrived at, and then they were put on poles. This design derives from a strange house built by architects during the 1970s at Fairhaven, where Loucid was recorded.

When the huts were complete, it was felt that something in the upper area of the screen was needed visually. The slowly swinging L-system vines were then added. Next the camera path was created, and then the "fly" object that the camera chases and eats. It was then found that the camera move was so dynamic that disorientation occurred. There was also room in the auditory scene for a more sustained sonic part. For these reasons a stable reference object was needed, and the huge distant orange vortex was added.

10.2.2.2 Huts

The sounds for objects in this scene were arrived at in a number of ways. In most cases a raw source timbre was arrived at, which was then brought into Houdini where it was enveloped with animation data. The sound of the huts began as two sources which were combined. The first source involved a collection of granular synthesis sounds created in Metasynth. The second source involved phase vocoded female vocals. For each of the five huts, an amplitude envelope was created which controlled the attack, decay, sustain and release (ADSR) of the sounds. These ADSR envelopes were derived from the "sparse convolution" fractal noise setting in Houdini. These five fractal noise channels were fed in parallel to a trigger device which created an impulse each time the noise crossed a certain threshold for each channel. An ADSR envelope was then copied to each one of these impulses. Next a number of echoes were created of each ADSR envelope using a delay. These envelopes were then used to attenuate the combined source sounds for each hut. The envelopes were also fed into the global scale parameter of the 3D huts to make them expand each time the sound triggered.
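The trigger-and-envelope scheme just described can be sketched in code as follows: a noise channel sampled at a control rate is compared with a threshold, and each upward crossing stamps a copy of an ADSR envelope into a control channel, which can then attenuate the source sound and drive the scale of the hut geometry. The noise source (plain white noise rather than Houdini's "sparse convolution" noise), the threshold and the envelope segment lengths are all assumptions for illustration.

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Build an ADSR envelope from segment lengths given in control samples.
    std::vector<float> adsr(int a, int d, float sustain, int s, int r) {
        std::vector<float> env;
        for (int i = 0; i < a; ++i) env.push_back(static_cast<float>(i) / a);
        for (int i = 0; i < d; ++i) env.push_back(1.0f - (1.0f - sustain) * i / d);
        for (int i = 0; i < s; ++i) env.push_back(sustain);
        for (int i = 0; i < r; ++i) env.push_back(sustain * (1.0f - static_cast<float>(i) / r));
        return env;
    }

    // Stamp a copy of `env` at every upward threshold crossing of `noise`.
    std::vector<float> triggerEnvelopes(const std::vector<float>& noise,
                                        const std::vector<float>& env, float threshold) {
        std::vector<float> out(noise.size(), 0.0f);
        for (std::size_t i = 1; i < noise.size(); ++i) {
            if (noise[i - 1] < threshold && noise[i] >= threshold) {
                for (std::size_t j = 0; j < env.size() && i + j < out.size(); ++j)
                    out[i + j] = std::max(out[i + j], env[j]);  // overlapping hits hold the peak
            }
        }
        return out;
    }

    int main() {
        // Stand-in for one hut's fractal noise channel at 25 control values per second.
        std::vector<float> noise(25 * 60);
        for (float& v : noise) v = static_cast<float>(std::rand()) / RAND_MAX;
        std::vector<float> env = adsr(2, 3, 0.6f, 10, 12);
        std::vector<float> channel = triggerEnvelopes(noise, env, 0.95f);
        // `channel` would attenuate the hut's sound and feed its scale parameter.
        return channel.empty() ? 1 : 0;
    }

Because the same control channel drives both the audio attenuation and the geometric scale, the expansion of each hut stays locked to its own sound.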

10.2.2.3 Vines

The vines were created with an L-system and a displacement shader. The rules for the L-system turtle were as follows:



• Context Ignore: F+-
• Premise: F//F//F//A//
• Rule1: A=!"F//F//F//[B]//A$
• Rule2: B=&F//F//F//F//F//FC
• Rule3: C=//A//B//A//B[&F&//~F&F/&//&F&F&F///F~F&F&F&F~FJ]

Each L-system was then instanced a number of times to limit the memory overheads of having multiple copies. There is no sound associated with the vines, and they exist for purely visual design reasons. Their organicism makes them desirable in the author's efforts to avoid the clinicism of computer art.

10.2.2.4 The Fly

Like the Huts, the fly sounds were created from a combination of raw vocoded female vocal sounds which were enveloped with animation data. The Doppler shifts and spatialisation contribute the other features of this sound. Like the Huts, the amplitude envelope for the fly was created using fractal noise.

The chomping sound as the camera goes to bite down on the fly was created by detuning and time-domain stretching a slamming fridge door using phase vocoding techniques. This sound was created using a stereo microphone and Csound at the Vislab 3D-AV studio in late 2000. Because both the camera and fly animation were created almost randomly, it was necessary to create a proximity sensor to evaluate when the fly was close enough for the camera to chomp at it. This in part required some functions from the Pan CHOP network. Because the camera is always looking at the fly, it was possible to use the data that derives the amplitude of the centre speaker of the 5.1 array as a measure of proximity. When modified slightly, this data was able to trigger both the animation of the camera attacking and the chomp sound.


10.2.2.5 The Orange Vortex

The sound of the orange vortex was created by combining numerous recordings of a Chinese string instrument which had been time domain stretched using phase vocoding techniques. The Doppler shifts and other spatial effects were created afterwards by Pan.

10.2.3 The Blue Room

10.2.3.1 Introduction

The blue room was the first object to be modelled in the making of Heisenberg. It was created solely to test the ability of Pan's reverberation to properly respond to a room. It was then embellished until it achieved its final state. Eventually it was decided to have an introductory animation in which the camera flew around numerous such rooms. This is how the previous Pole Huts section came about. The blue room was the beginning however. The sound of the blue room is a highly stochastic composition. Fractal noise was used to generate the timing of all events. The waveforms are largely granular sounds created in Metasynth.

10.2.3.2 The Spinning Ball

The spinning ball had been thought about since observing in 1998 that the Lake Huron equipment could simulate the directionality of a sound emitting object. An example of how this could be employed had been considered for some time. The analogy of a celestial pulsar had always suggested itself, and this was somehow then reconceptualised as a spinning "death star" like object which emitted a beam of sound and light as it spun around.

The sound source which the spinning ball emits was created by enveloping a sustained, evolving granular synthesis sound created in Metasynth. Each time the envelope opens, the timbre of the sound has evolved somewhat. The envelope in this case was created using a low pass filter as well as plain amplitude scaling. The irregular rotation of the ball was arrived at using a sine function of a constant ramp that is muted according to the value of a fractal noise channel.

10.2.3.3 The Breathing Room

Like the spinning ball, the sound of the breathing room was created by enveloping a sustained, evolving granular synthesis sound. The envelope was created in an identical way to that of the spinning ball. The animation of the breathing room and spinning ball were muted by the function which triggers the onset of the green room.

10.2.3.4 The Fiery Dissolve

Visually and sonically, the fiery orange vortex that appears intermittently during the blue room scene is identical to the one used in the Pole Huts scene. This fiery orange vortex is triggered by a fractal noise function. As with many such noise functions, different seeds were experimented with until a “near enough” pattern was arrived at. Being able to visualise the resultant data channels in a Houdini Channel graph was of great value for making such decisions. Employing the same methods in a “blind” package like Csound would have been more difficult.

10.2.3.5 The Splatter

The short bursts of light and sound emitted by the smaller balls that jump around the blue room were created using a variation of the technique used for the breathing room and the main spinning ball. A sustained granular synth sound was enveloped at temporal intervals determined by fractal noise functions. Each time the envelope opens, however, the small balls jump to a randomised set of Cartesian co-ordinates before emitting a burst of light and sound.

10.2.3.6 The Screech

The five horizontal bars of light that screech from the background into the foreground and then back again were also controlled randomly, but use a dramatically different waveform to create the granular sound that they emit. It was imagined to be a very fiery sound; however occlusion problems required it to take on a somewhat complementary visual appearance. Various methods of transparency were experimented with before the final form was arrived at.

10.2.4 The Green Room

10.2.4.1 Introduction

The green room uses a variation on the model of the blue room. The room moves according to the low bass sound - which is the result of Brownian fractal noise being fed into a sine wave oscillator. The room geometry is subdivided dramatically so that there are many points in the geometry. A randomly generated letter is then copied to a random point number at randomised time intervals. Multiple copies of this room are created at varying distances to acquire a depth effect.
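One plausible reading of that bass mechanism is sketched below: a Brownian (random-walk) control value, updated at a low rate, slowly varies the frequency of a sine oscillator. The frequency range, step size and control rate are assumptions for illustration rather than the values used in the Houdini network.

    #include <cmath>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    const double kPi = 3.14159265358979;

    int main() {
        const double sr = 44100.0;     // audio rate
        const int controlDiv = 441;    // update the random walk 100 times per second
        double walk = 0.0;             // Brownian control value, clamped to [-1, 1]
        double phase = 0.0;
        std::vector<float> out(static_cast<std::size_t>(sr) * 10);  // ten seconds

        for (std::size_t n = 0; n < out.size(); ++n) {
            if (n % controlDiv == 0) {
                // Random walk: accumulate small random steps.
                walk += 0.05 * (2.0 * std::rand() / RAND_MAX - 1.0);
                if (walk > 1.0) walk = 1.0;
                if (walk < -1.0) walk = -1.0;
            }
            // Map the walk onto a low bass range of roughly 30 to 70 Hz.
            double freq = 50.0 + 20.0 * walk;
            phase += 2.0 * kPi * freq / sr;
            out[n] = static_cast<float>(std::sin(phase));
        }
        // `out` holds the wandering bass tone; the same walk value could also
        // drive the movement of the room geometry so that sound and motion match.
        return out.empty() ? 1 : 0;
    }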

10.2.4.2 The Flying Glitches

The flying green objects are the result of an L-system. There are two rendered layers composited over each other. One is the L-system itself, and one is a cookie cut out of the L-system projected along one axis onto a plane. This plane is then extruded, and treated with the same displacement map as the original L-System to give the same bumpy texture. When rendered the two are composited over each other.


The sound associated with the flying glitches is generated by enveloping a sound which started its life as an operating system file, but which has been converted to an audio file. Roughly fifty of these “glitchy” sounds were assembled by hand. A random function then makes a random selection from this set of sounds, and makes a copy at a random time point. All sounds are enveloped with an ADSR envelope which has a very sharp attack and quick decay to give a percussive effect. Each onset of a sound is associated with a randomised Cartesian co-ordinate which causes the sounds to jump from place to place. There are four sources which behave in this way.

10.2.5 The Red Room

The red room was created by looping a sequence of images which are mapped to dome shaped objects. The camera is given a Brownian noise animation path. The sounds emitted by each disk are the same as those of the fiery orange vortex used in the first two scenes. The spatialisation creates most of the sonic variation. This scene was devised as a kind of minimalist reprieve from the intensity of the previous scenes.

10.2.6 Fleshscape

10.2.6.1 Introduction

The fleshscape scene had its birth more as a piece of music than as a unified audiovisual scene. A sonic waveform involving a combination of phase vocoded female vocals and central Asian throat singers was the starting point. This was previsualised very much as a landscape of seething human flesh. A mass of tabla drumming was then added to the stretched vocal parts. This in turn was visualised as many arms drumming. The scene originally had the arms coming out of the landscape of human flesh; however, this was changed. In the end the entire scene was flipped upside down to make it work better with the final cloudscape scene.

When all this was somewhat in place it was decided to add the distant vortices, and the glowing spheres that rush past. This event was conceived as a cathartic section of music, and was deliberately timed to occur at the golden section of the entire piece.

The way that this scene was arrived at required the highest degree of integrated spatial audio-visual imagery ability. Experimentation with 3D sound and graphics had been underway for some years by this stage. Also, an immersion in the process of making Heisenberg meant that the author was starting to acquire a deeply intuited understanding of the creative possibilities and limitations native to such works.

The rotation of the camera in the fleshscape section is identical to that in the Red Room section, and so the transition between the two is smooth in terms of motion. The luminance of the red dome is initially used as a matte for the transition. Fractal noise was used to control the five-way transition between the sound and imagery of the red room and fleshscape scenes.

10.2.6.2 The Voices

The voices in the fleshscape section were a combination of two different types of vocal sound. There were five versions of each type. One sound of each type is associated with each of the skin coloured vortices in the scene. The first type of sound is that of a throat singer from central Asia. The second type is that of a woman singing Islamic prayers. Each recording was dramatically time stretched using phase vocoding techniques in Csound. The pairs were then created by combining one sound of each type in Logic Audio. The spatialisation was completed by Pan.

10.2.6.3 The Tablas

The tablas were assembled algorithmically. A selection set of 30 or so samples was drawn from a sample CD purchased some years earlier. Each of these was synchronised to a tempo of 93.75 bpm. This tempo is perfectly synchronised with the frame rate of the PAL television system. (See Appendix G - Section 2.2.2)
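The arithmetic behind that synchronisation is worth spelling out: 93.75 beats per minute is 93.75 / 60 = 1.5625 beats per second, and PAL video runs at 25 frames per second, so each beat lasts exactly 25 / 1.5625 = 16 frames. Every beat therefore lands precisely on a frame boundary, which is presumably why this particular tempo was chosen.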

Eight separate drum parts were then created. Each part used a particular sample for a duration decided by a random function. The selection of samples was also made randomly from amongst the set of 30 possible samples. Finally, each sample was spatialised randomly each time it was triggered, creating different location and distance effects. The effect is that of many tabla players in a room playing different lines at different times. The tablas begin after the glowing balls rush by. At first they are high pass filtered. Within a couple of bars a deep descending tone ushers in the full bandwidth drumming.

The tablas are represented by 16 arms which play all the drums that are spatialised in their radial sector. This was achieved using a variation of Pan. Instead of the drumming being spatialised for five speakers, sixteen are used. The amplitude of each speaker determines the activity of the arm at that sector. The back-swing was added to each drum attack by copying an ADSR envelope to each onset. Fractal noise was added to the slapping action to create the subtle floating and twitching motion of the various joints in the arms and hands. Different seeds were used to differentiate the fractal noise controlling every joint in every arm.


10.2.7 Cloudscape

10.2.7.1 Introduction

Like the Fleshscape scene, the Cloudscape scene was created by previsualising a section of audio. The audio contains two different types of sound: a constant pink noise, and a pulsing bell-like sound.

10.2.7.2 The Clouds

The constant pink noise that emanates from each of the five surround speakers was created in the Vislab 3D-AV studio during late 2000. This sound is a stretched field recording that was made of traffic in King Street Newtown, Sydney Australia in the middle of 2000. There are five separate versions of this recording – one for each satellite speaker.

This sound was thought to sound very white, yet it was dynamic and seething at the same time. It was associated with a waterfall of clouds. These clouds were created using a looping technique. Once the camera comes to a rest, there are only 300 frames of clouds that loop over and over. This was a very expensive render, and looping brought the render time down by a factor of 10.

10.2.7.3 The Blown Glass Bell

The pulsing bell-like sound was created in Houdini during June 2001 using an FFT-based synthesis process. The original sound is a sampled chord performed using a synthesis patch involving a combined FM synthesiser and vocal waveform. This sample is then converted to phase and magnitude channels by the Houdini "spectrum" CHOP. The original phase channel is then deleted. The magnitude is then fed into a despiking filter. This filter has a two second width in which it looks for samples which are radically different from those around it. The "tolerance" parameter is set with a variable which changes randomly each time the sound is retriggered. The filtered magnitude channel is then resynthesised from the spectral domain with a new phase channel which has a constant value of one. Each time the sound is retriggered, a different tolerance in the de-spiking filter causes the sound to acquire different harmonic values, creating a different timbre.
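The core of that process - replacing magnitude samples that differ too much from their neighbours, with a tolerance that changes on every retrigger - can be sketched as follows. This illustrates the despiking step only: the window length, the median-based comparison and the function name are assumptions, and the FFT analysis and resynthesis performed by the "spectrum" CHOP are not reproduced here.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Replace any magnitude sample that differs from the median of its
    // neighbourhood by more than `tolerance` with that median value.
    std::vector<float> despike(const std::vector<float>& mags, int halfWidth, float tolerance) {
        std::vector<float> out = mags;
        for (std::size_t i = 0; i < mags.size(); ++i) {
            std::size_t lo = i > static_cast<std::size_t>(halfWidth) ? i - halfWidth : 0;
            std::size_t hi = std::min(mags.size(), i + halfWidth + 1);
            std::vector<float> window(mags.begin() + lo, mags.begin() + hi);
            std::nth_element(window.begin(), window.begin() + window.size() / 2, window.end());
            float median = window[window.size() / 2];
            if (std::fabs(mags[i] - median) > tolerance)
                out[i] = median;   // the spike is flattened, altering the timbre
        }
        return out;
    }

    int main() {
        std::vector<float> mags = {0.1f, 0.1f, 5.0f, 0.1f, 0.1f};  // one obvious spike
        std::vector<float> clean = despike(mags, 2, 1.0f);
        return clean[2] < 1.0f ? 0 : 1;
    }

Because a new random tolerance is chosen each time the bell is retriggered, a different set of partials survives the filter each time, which is what gives each onset its own timbre.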

This sound was thought to sound like a glass bell. It was imagined that the glass was molten, and was in the process of being blown out of a tube as it solidified. This animation makes use of the spectrum of the sound and a fractal function to determine its varying shape. Sounds with low frequency content create an open bell, whereas high frequency sounds create a closed bell. Each onset of the bell has a different random seed that creates structural variations.

10.3 Heisenberg Retrospective

Heisenberg represents the achievement of all the research goals that existed when this PhD began. These included:



• To understand how and why music could be associated with visual imagery.
• To develop music involving advanced sound synthesis techniques.
• To develop photorealistic animations of my own design that explore the relationship between this music and visual imagery.
• To compose music for spatial sound reproduction.
• To learn more about programming a computer.
• To make high quality audio visual works.

Of all the works on the DVD, Heisenberg has had the best response from audiences. Schwarzchild was a first work, and Loucid was a rapidly developed study. Heisenberg, however, had at its disposal the best computer systems and the most development time. There was never any question of rushing its development for some deadline. It was worked on until it was completed to the composer's satisfaction. The only aspect of Heisenberg that concerns the author has to do with its form. Future works will try to be less episodic. The progression of audio-visual material will flow and develop in a more ongoing manner. This will require a much more comprehensive approach to pre-production and a little less hands-on improvisation.

Heisenberg had its debut performance at "Form Space Time" on 8-7-2002 as part of the annual Australasian Computer Music Association conference. It was then performed on 12-7-2002 at "Liquid Architecture" in Melbourne's Treasury Theatre. The next performance was on 4-10-2002 at "Electrofringe" as part of a workshop on DVD production. It was also performed at "Live Wires" on Wednesday 20th November at the Sydney Conservatorium of Music.


11 Epilogue

11.1 Summary

The creation of spatial audio-visual works which explore the relationship between spatial music and visual mental imagery offers numerous challenges. These challenges are both technical and creative. Spatialised computer music and 3D animation are both complex technical areas. Imagining spatial works which work equally well in both sonic and visual domains is also a challenge.

It is interesting that many of the creative challenges are related to technical limitations. The fruit of the imagination doesn’t always confine itself to the boundaries of a small screen. Many visual images that are created by the imagination demand an angle of projection that is equal to that of our eyesight. Similarly, five surround speakers do not provide the detailed spatial image that the real world provides.

A more serious problem has been the fact that this art is an original synthesis of various obscure creative disciplines, and is slightly divorced from former artistic precedents. It was therefore impossible to intuit an understanding of such works by watching and listening to many existing examples. The particular strengths and limitations of this genre of spatial audio visual works had to be arrived at almost entirely by experimentation. It is true that these works are like some existing arts. They are like a ballet, in that they are musical and spatio-visual - there is no gravity however, and any visual or auditory phenomena imaginable can be involved in the dance. There are also elements of architecture, sculpture and painting. But the works on the DVD move, and so they are not like these arts.


The works on the Music VR DVD are perhaps most like abstract animation works by artists such as Jordan Belson. The difficulties associated with representing music in a limited screen size also exist in abstract animations, but the problems with spatialised audio composition usually do not. Problems associated with making the sonic and visual elements in the works spatially coincident are a further level of difference. The easiest way to describe the works on the Music VR DVD is perhaps as a synthesis of the abstract animation tradition with techniques and ideas from virtual reality research.

Theoretically, there is much that the development of the works on the DVD brought to light about the nature of human cognition. Firstly, emotion and kinesthetic imagery are seen to play a major role in mediating music and visual imagery. Also, the human brain has anatomically separate areas which are responsible for language and image based cognition. Psychologists have been able to verify that this is embodied in different cognitive styles. People seem to favour one style or the other. This is particularly the case with the differences between linguistic and image based modes of cognition. Both approaches are employed in the works on the DVD; however there is a dearth of writing that relates image based modes of thought with creative practice. These image based modes of thought were particularly important to the author's creative approach - both in terms of an improvised approach to composition, and in the development of visual analogues of music. The relationship between image based modes of thought and creative endeavor therefore became the principal theoretical area of concern in this thesis.

It has been shown that different intentional positions can also provide different cognitive outcomes. Analytic approaches provide more language based outcomes, while passive or de-differentiated approaches provide more image based outcomes. The articulation of music also seems to have an effect on the employment of cognitive styles. Music which is homophonic and rapidly articulated seems to be associated with more linguistic modes of thought, while music which is highly timbral and expansively articulated seems to be associated with more visuo-spatial modes of thought. It should be noted that these categories are not exclusive, and kinaesthetic imagery associated with rapidly articulated music suggests its own unique visual counterparts - as exemplified in various classic Warner Brothers and Disney cartoons for instance.

Image based modes of thought have been associated with creativity and intuition. Formalist western music theory has traditionally favoured analytical and linguistic approaches to musical thought built on redundant attribute theories. Phenomenology provides an antidote to this approach, and in conjunction with cognitive science new music theories are being developed which are amenable to describing sonic and interdisciplinary works created with new technology.

The relationship between music and visual imagery may be seen to be mediated by emotional tokens stored with memories of visual and kinaesthetic imagery. Research into emotion is a new concern of Psychology, and an even more recent concern of music theory. There is much work still to be done to establish the relationship of music and emotion, let alone to understand relationships between music and visual imagery.

The work on the DVD contains a hybrid of analytic and image based approaches. The image based approach may be seen to be more central to design, while the analytic approach is more important to the technical execution of the pieces. Still - occasionally concerns related to the latter override those of the former.


11.2 Future Work

There are many areas related to this artform that require further investigation. In terms of understanding why and how people associate music with visual imagery, there is much research to be done in order to establish the existence of patterns in cognition. It would be particularly useful to have access to the results of a range of studies which tested the responses of various control groups of varying demographics to a huge range of different music. It would also be of value to have many broad phenomenological studies undertaken so as to establish the existence of relationships between particular types of music and particular types of imagery. The analysis of data from all such studies would need to take into account the fact that individual memory determines the content of mental imagery to a large degree. Archetypal relationships may however emerge.

In terms of creative work, there is easily a lifetime's work in the exploration of relationships between spatial sound and abstract animation. Improving technology will hopefully make such works more potent, and easier and faster to create. It is the author's aim to explore performance aspects of the art in the future. The feedback possible with realtime performance can accelerate the development of skills and interesting ideas which can then be used in larger works. Schwarzchild was not seen as a unified spatial, audio-visual experience by the author until one year from when work on the piece began. This was too long a period to wait to assess and develop ideas. This delay was dramatically reduced by the time Heisenberg was undertaken. Each section was able to be experienced with fully synchronised spatial sound at various stages during the development of Heisenberg. The work was able to be further refined and developed with ideas that were arrived at as a result of being able to preview sections before the work was completed. Realtime feedback would accelerate this development and evolution of ideas even further. In the end it is hoped that a high degree of improvisation will be possible in the creation of spatial audio-visual works. Sophisticated human-computer interface technology will be critical in the development of the level of control required to do this. An understanding of the relationship between music, emotion, and visual and kinaesthetic imagery will also be required.


12 Appendices


Appendix A – Visual Music Composition

Thoughts on a Non-Cartesian Approach to Visual Music Composition

Andrew D. Lyons. Composition Unit, The Sydney Conservatorium of Music, The University of Sydney, Sydney NSW 2000, Australia. http://www.tstex.com Email: [email protected]

I find it to be a useful heuristic premise when approaching visual music composition to conceptualise the creative material to be given form as a kind of unified ether of raw perceptual information, manifesting itself as both sound and vision. This heuristic premise is similar to the monist philosophy that all existence is part of one all inclusive substance. This idea was originally espoused by Benedictus De Spinoza in his 17th century critique of the philosophy of Rene Descartes - itself an antecedent of modern positivist philosophy.[1] Whilst there are also numerous differences, Spinoza's Monism may be seen to have more to do with various forms of Phenomenology,[2] which are often seen to be more useful in forming art theories than the positivist philosophy of the natural sciences.[3] The perception of art, after all, is highly subjective, and science and its philosophy postulate no explanation of the subject. From this it follows that it is a logical impossibility that a purely scientific process might one day create an accurate emulation of the human visual interpretation of music.

"The rational mind functions by separating subject from object, that is, the knower from the known... It works by analysis, a systematic processing technique that is based on the laws of logic." [4] We tend in the western world to favour an analytical approach to understanding which tends towards reduction rather than synthesis. This approach to understanding has been especially and increasingly common in Western Europe since the seventeenth century, that is, since the beginnings of modern science.

Reduction/Induction modelling has yielded humanity incredible means to control (and to destroy) our environment. For example, the knowledge of sound which physics and physiology have yielded us has allowed us to employ tools such as computers to create sound and aural percepts of a broad and powerful variety. This approach to knowing may however be seen to have shortcomings. The inherent shortcomings of the logical, rational mind include its complete inability to regard duration as anything other than a sequence of discrete moments in time. Our intuition tells us otherwise, as does physics, in which time is regarded as a continuum, closely connected to space. In the words of Albert Einstein: "I didn't arrive at my understanding of the fundamental laws of the universe through my rational mind." [5]

Motion through space is something which, like time, is beyond the grasp of the rational mind. This is well exemplified by what is known as Zeno's paradox: "Take the phenomenon of a flying arrow. It is easy to show," says Zeno, "that it does not really move. For at each instant of its flight it occupies one and only one point of space. This means that at each instant the arrow must be at rest, since otherwise it would not occupy a given point at that instant. But its whole course is composed of such points. Therefore the arrow does not actually move at all." Henri Bergson believes that the moral of Zeno's story is "not that motion is impossible, but rather that it is impossible for the intellect to comprehend motion." [6] It is also interesting to note that the rational mind can in no way comprehend paradoxical information, i.e. that the points at either end of a continuum can be the same point, such as the intersection of the infinite past and infinite future as they exist on the Schwarzchild radius (or event horizon) of a black hole.

Whilst it may not be possible for the rational mind to comprehend time and space as anything other than a sequence of static entities, it solves this shortcoming by developing something which may be called a relationship. It has long been known that any type of analysis uses a serial logical process to create factual, static relata which are organised into structures described as relationships - often in the form of differential equations. With this approach to music it has been suggested that "all music is algorithmic - whether the author is consciously aware of it or not." It may be argued that any analysis can only result in information which can be described in such terms, and so this idea is entirely tautological.

The tendency of western man to favour the application of materialist philosophy and reductive, analytic approaches to all existence is exemplified by the importance placed on static relata at the expense of the structural relationships from which they were extracted. It is also this tendency that blinds us in regards to relationships between music and image - we tend to fail to acknowledge the existence of anything that cannot be grasped or reduced by a rational process to discrete packets of information. According to Existential Phenomenology, we are experientially connected to both the Relata and the Relationships in their original unreduced form, and in the arts, this allows us to operate with them in an infinite number of ways.

Intuition, and the ancient Greek word "nous" from which our word knowledge and its variations are derived, describe a means of knowing requiring a "direct participation in the immediacy of experience. This can be accomplished only by making an effort to detach oneself from the demands of action, by inverting the normal attitude of consciousness and immersing oneself in the current of direct awareness. The result will be a cognition of reality such as intellectual concepts can never yield. In so far as this reality is communicable, it must be expressed in metaphors or "fluid concepts" quite different from the static abstractions of logic."

Intuition is compared by Henri Bergson to an act of empathy such as that which occurs whilst attending a performance which involves a character with whom one can identify. With an object such as a sound, or a concept such as time, it would be necessary for one to attribute it with not only an interior, but states of mind with which to empathise. This requires some imagination, but once achieved it is possible to empathetically place oneself inside that object or concept and apprehend its condition directly, first hand and in an absolute sense. [7] Once this has been achieved, it is possible to find resemblances between phenomena previously thought to be unrelated. In viewing the world and all that is in it as an unbounded web of similitudes, and by approaching all understanding through "ceaselessly drawing things together and holding them apart, in an interplay of sympathy and antipathy",[8] one returns to what was the essential approach to knowledge during the sixteenth century, before the rise of science and its exclusively reductive and inductive approaches to the acquisition of knowledge. Also, because intuition is a massively parallel process that deals mainly with mental images as opposed to sequences of rules, it is in many situations a much faster process than the serial approach taken by logical processes. Using intuition, one is able to approach the understanding of an object or concept, such as music, from the point of view, so to speak, of another party, such as an abstract 3 dimensional shape or space, and begin to form an understanding, via an appreciation of similitudes, of how a sound might appear if able to manifest itself as sense datum resulting from such an object.

This is of course a highly simplistic description of a process which by its very nature defies reduction into symbols such as words. It is interesting to note that to some, the difficulty involved in representing such ideas as words or mathematical languages might be given as proof of their non-validity. However it should be understood that "although language constitutes a most important symbolic mode for maintaining the technologically developed civilisations of industrial societies, it does not follow that reality, as comprehended through language, is more real than that articulated through other modes of communication, nor concomitantly does it follow that language is potentially an exclusive mode in the sense of all other non-verbally coded experience being reducible to verbal terms."[9]

Music and abstract animation are two non-verbal languages with unique communicative abilities, and while the metalanguages of art theory and musicology have developed some ability to describe these arts, they can never substitute for the art itself. Similarly, while the languages of logic and its technological extensions have provided us with numerous tools and instruments with which to understand and develop these art forms, it also cannot substitute for the art itself. In the arts, logic makes a good servant, but does not necessarily make a good master.

[1] Spinoza, Benedictus de, 1632-1677. Earlier philosophical writings: the Cartesian principles and Thoughts on metaphysics. Translated by Frank A. Hayes. Indianapolis: Bobbs-Merrill, 1963.
[2] Macann, Christopher E. Four phenomenological philosophers: Husserl, Heidegger, Sartre, Merleau-Ponty. London; New York: Routledge, 1993.
[3] Arnheim, Rudolf. To the rescue of art: twenty-six essays. Berkeley: University of California Press, 1992.
[4] Alpert, Dr Richard. Be Here Now. Lama Foundation: New Mexico, 1971. (p. 85)
[5] Einstein, Albert. Ether and the Theory of Relativity. An address delivered on May 5th, 1920, in the University of Leyden. In Ray Tomes' Cycles in the Universe: http://www.kcbbs.gen.nz/users/af/alt-faq.htm
[6] Bergson, Henri. An Introduction to Metaphysics. The Bobbs-Merrill Company, Indianapolis, 1980.
[7] ibid.
[8] Foucault, Michel. The Order of Things: An Archaeology of the Human Sciences. Random House-Pantheon, NY, 1970.
[9] Shepherd, John. Music as Social Text. Cambridge MA: Polity Press, 1991. (pp. 78-79)


Appendix B – Visual Music History

A Brief Historical Survey of Audio-Visual Intermedia Composition

Andrew D. Lyons. Composition Unit, The Sydney Conservatorium of Music, The University of Sydney, Sydney NSW 2000, Australia. http://www.tstex.com Email: [email protected]

Alchemical Foundations

Pythagoras had considered synesthesia to be the greatest philosophical gift and spiritual achievement. The earliest experiments in colour-music instrument design occurred in the alchemical haze of antiquity. Leonardo da Vinci made sketches of colour organs in the fifteenth century. Athanasius Kircher later triggered an interest in the area as a result of experiments with his "magic lantern" and the subsequent writings in his "Musurgia Universalis..." of 1662. John Locke also raised the subject in his 1690 publication, "An essay concerning human understanding."

Auricular Harpsichords

In his 1725 article, Louis-Bertrand Castel wrote, "Why not make ocular as well as auricular harpsichords? It is again to our good friend [Kircher] that I owe the birth of such a delightful idea." A working example of Castel's "Clavecin Oculaire", which made use of colour/sound correspondences described by Isaac Newton in his Optik of 1730, was never actually completed, although plans were set out and a model was constructed in 1734.

The Colour Organ

The incidence of colour organ design and construction accelerated during the nineteenth century, culminating in Alexander Rimington's "Colour Organ" of 1893. His book describing the colour organ and the colour theories on which it was based was published in 1911 as Colour-music: The Art of Mobile Colour. That same year the Russian composer Alexandr Scriabin completed his Prometheus, The Poem of Fire.

Scriabin's Prometheus

Prometheus was composed and scored for orchestra and a device to perform colour which he called the "Tastiera per luce." Scriabin had first recognised his feeling for sound/colour correspondences following conversations with Nicholai Rimsky-Korsakov, who also recognised such correspondences and utilised them in his orchestration. An account of Scriabin's experience of synesthesia can be found in a 1915 article published in the British Journal of Psychology. A description of both a successful performance of Prometheus and the workings of the "Tastiera per luce" used in a March 1915 performance in New York can be found in the edition of Scientific American published soon afterwards. Kenneth Peacock addresses both Scriabin's synesthesia and his Prometheus comprehensively in an article published in Leonardo in 1985.

Wassily Kandinsky

Being a fusion of two modalities, the development of colour music has been pursued by both synesthetic musicians and visual artists. The theory and works of Wassily Kandinsky were concerned to a large degree with abstracted musical forms. He describes an experience of Wagner's Lohengrin during the early 1890's: "All my colours were conjured up before my eyes. Wild, almost mad lines drew themselves before me. I did not dare to tell myself in so many words that Wagner had painted `my hour' in music. But it was quite clear to me that painting was capable of developing powers of exactly the same order as those music possessed."

Kandinsky moved to Munich in 1896 in order to realise this dream in the form of the abstracted art forms he describes in his numerous and highly influential treatises on abstract art. However "... Western readers may be interested to learn that while he was still in Russia during the years immediately following the revolution, Kandinsky was co-head of the Institute for Artistic Culture where he explored the objective regularities of the synthesis of music and painting and studied colour hearing (synesthesia), the psychological basis of this synthesis." Kandinsky completed his first non-representational painting and his treatise On The Spiritual In Art in 1910. Two years later an artist working in Paris had devised and commenced work on an abstracted musical animation that he called Coloured Rhythm.

Leopold Survage

"Leopold Survage was a familiar if taciturn figure in a circle of artists that included Picasso, Braque, Modigliani, Leger, Brancusi and others." Inspired by trends in abstract and cubist painting and the burgeoning development of cinematic techniques, Survage began work on his rhythm-colour "symphonies in movement." Survage said of his work in the area that, "Coloured music is in no way an illustration or an interpretation of a musical work. It is an autonomous art, although based on the same psychological principles as music." The numerous descriptions of Survage's 200 or so completed plates are all fairly ecstatic in nature. Many descriptions written at the time appear in Russett and Starr's Experimental Animation.


These descriptions, which are too lengthy to quote in full, and the few frames presented as examples, bear an uncanny similarity to descriptions of the coloured photisms resulting from chromaesthesia. Unfortunately Survage never found an animator to complete the frames between his abstracted key frames, and his work on coloured rhythm was never completed.

Wilfred's Clavilux

The prohibitive logistics of painting tens of thousands of frames to create a cinematic animation were avoided by makers of dedicated sound/colour generating instruments. Perhaps the most famous example of such instruments during the first half of this century was Thomas Wilfred's "Clavilux" of 1922. The Clavilux performed displays of prismatic colour that many compared to the shifting lights of the Aurora Borealis and the abstracted forms of Kandinsky's painting. Performances of his "Lumia" occurred throughout North America, Canada and Europe during his tour of 1924-1925. Most of Wilfred's performances were silent, however orchestral collaborations such as the performances of Rimsky-Korsakov's Scheherezade directed by Leopold Stokowski that occurred during 1926 in Philadelphia were perhaps exceptions.

Bentham's Light Console

Wilfred's Clavilux was a precursor to the numerous coloured-music instruments that were constructed in the decades that followed. One such notable instrument was Frederick Bentham's "Light Console." Bentham used his console to perform coloured light accompanied by phonograph records of such works as Stravinsky's The Firebird Suite and Scriabin's Prometheus. These instruments created works that left critical writers of the time short of words with which to describe their beauty. "For this new colour-art might very aptly be called music for the eye... it is colour and light and form and motion, but it is not painting, nor sculpture, nor pantomime..."


Walter Ruttman

Whilst such instruments were logistically amenable to real time interaction and live musical performance, animated kinetic art in cinematic forms was beginning to appear in the visual entertainment arena. In the spring of 1921, Walter Ruttman showed his short film Lichtspiel Opus 1 in Frankfurt, Germany. The film was reviewed by Bernhard Diebold in the Frankfurter Zeitung under the heading "A New Art, The Vision-Music of Films." Diebold and his young friend Oskar Fischinger were "Greatly impressed with the work", which according to William Moritz, Fischinger's biographer, was "not only the first abstract film to be shown in public, but also a film hand tinted in striking and subtle colours, with a live synchronous musical score composed especially for it." "The concepts of sound painting or tone colour seemed literally to fulfil their meaning; content and character of the musical piece express themselves, silently moving in the forms and colours of the continuous motion picture."

Opto-Acoustic Notation

By 1933 the visionary Bauhaus artist and teacher Lazlo Moholy-Nagy had already published several articles about the possibility of creating music recordings using film as a compositional medium, and made a short film, The Sound ABC. The utilisation of synthetic sound on film was also being explored by Oskar Fischinger in his development of an abstracted musical animation language. "Fischinger's basic aim was . . . to artistically inter-relate the sensory modes of sight and sound into a totally synesthetic experience." His "opto-acoustic notation" consisted of a series of geometric shapes that represented musical elements drawn onto scrolls of paper before being photographed onto the film soundtrack. The likes of Norman McLaren and Len Lye were involved in extending the field of synthetic sound before the commencement of the Second World War.

John and James Whitney


In 1940 John Whitney, a filmmaker, composer and technical innovator, and his brother James Whitney, a painter, began working with synthetic sound and a film animation language in order "to develop a unified bi-sensory relationship between film and music." Stimulated by the avant-garde filmmakers of France and Germany in the early nineteen twenties, these American artists created numerous films during the nineteen forties and fifties that utilised unique machinery designed and realised specifically for their purposes. One example of their creative heritage is the slit-scan photographic technique famous for such cinematic passages as the Stargate sequence of 2001: A Space Odyssey.

John Whitney and Computer Animation
After many years developing mechanical techniques with his brother James, John Whitney found himself making use of analogue computers sourced from military surplus centres as tools with which to continue his development of a bi-sensory relationship between film and music. His animations completed using this primitive technology were of sufficient quality to secure him long-term, ongoing support from IBM. With these tools he became a pioneer in the field of computer generated abstract musical animations.

In his book published in 1980, John Whitney states: "Music, as the true model of temporal structure, is most worthy of study among prior arts. Music is the supreme example of movement become pattern. Music is time given sublime shape. If for no other reason than its universality and its status in the collective mind, music invites imitation. A visual art should give the same superior shape to the temporal order that we expect from music." Whitney goes on to describe differential motion patterns as one means by which "would-be music/image innovators" may attempt to play colour "musically" on a two dimensional field such as a monitor, with time placed on the Cartesian z-axis as the principal axis of construction.


The Multi-Media Age
By the time that John Whitney's three sons were beginning to move into the field of experimental animation in the late 1960s, they were not alone. The multimedia age had dawned. The different arts were being presented together, at the same time, in a variety of settings on an unprecedented scale. Some members of this popular artistic movement took to describing their art as intermedia. It would seem that such terms as audio-visual intermedia, colour-music and kinetic art are all representative of the philosophy and technology of the age in which they were realised.

As Bulat M. Galayev explains: "The techniques of music-kinetic art . . . are not susceptible to unification; each music-kinetic art instrument is unique, as is any work of art." This suggests that with each new advance in technology a new name will be developed to take its place amongst the "colour harpsichords" of history. This problem is exacerbated by the lack of a cohesive tradition within the art. Musicians and painters, for example, acknowledge and revel in their traditions. Coloured-music and kinetic artists, however, are traditionally characterised by an ignorance of their predecessors matched seemingly only by their keenness to claim the art as their own and give it a new name. The resistance to both a fixed terminology and a fixed technique may, however, be seen as integral to the nature of the art, and perhaps also as the source of its attractiveness to artists.

Selected Bibliography
Aristotle. "De Anima." In The Works of Aristotle, edited by W. D. Ross. Oxford: Oxford University Press, 1928.
Dube, W.D. The Expressionists. London: Thames and Hudson, 1972.


Galayev, Bulat M. "The Fire of Prometheus: Music-Kinetic Art Experiments in the USSR." Leonardo 21.4 (1988): 383-396.
Goethe, J.W. von. "Zur Farbenlehre." In Theory of Colours, translated by C.L. Eastlake. London: Frank Cass and Co, 1967 (1810).
Iamblichus. De Vita Pythagorica. Translated by Thomas Taylor. London: J.M. Watkins, 1965.
Kandinsky, W. "On the Spiritual in Art." In Kandinsky: Complete Writings on Art, edited and translated by K.C. Lindsay and P. Vergo. London: Faber and Faber, 1982.
Kircher, Athanasius. Musurgia universalis. New York: Bärenreiter, 1988 (1662).
Kuenzli, Rudolph E., ed. Dada and Surrealist Film. New York: Willis Locker and Owens, 1987.
Moritz, William. "Abstract Film and Color Music." In The Spiritual in Art: Abstract Painting 1890-1985, edited by Maurice Tuchman, Judi Freeman and Carel Blotkamp. New York: Abbeville Press, 1986.
Peacock, K. "Synesthetic perception: Alexander Scriabin's color hearing." Music Perception 2 (1985): 498.
Peacock, Kenneth. "Instruments to Perform Color-Music: Two Centuries of Technological Experimentation." Leonardo 21.4 (1988): 399.
Plummer, H.C. "Colour music - a new art created with the aid of science." Scientific American 112 (1915): 343, 350-1.
Russett, Robert, and Cecile Starr. Experimental Animation: Origins of a New Art. London: Da Capo, 1988.


Scriabin, Aleksandr Nikolayevich. Prometheus: The Poem of Fire; Piano Concerto in F sharp minor. London: Phonodisc, 1972.
Whitney, John. Digital Harmony: On the Complementarity of Music and Visual Art. Peterborough, N.H.: Byte Books, 1980.
Youngblood, Gene. Expanded Cinema. London: Studio Vista, 1970.


Appendix C - Synaesthesia

Synaesthesia - A Cognitive Model of Cross Modal Association

Andrew D. Lyons. Composition Unit The Sydney Conservatorium of Music. The University of Sydney Sydney NSW 2000 Australia http://www.tstex.com Email: [email protected]

Abstract
The cognitive characteristics of the rare perceptual condition known as synesthesia provide a clinical insight into the relationship between the various human sensory modalities, and in particular into the relationship between audition and vision. Following a discussion of the nature of synesthetic perception, that nature is considered within the context of the relationship between the auditory and visual arts.

1 Introduction
Traditionally, the arts have been separated into disciplines delimited by medium and other criteria. Painting and music for example are delimited by, amongst other things, the different senses by which they are perceived - we hear music and see painting. One of the great dreams of the romantic tradition has been that works particular to each artistic discipline might be meaningfully represented in another artistic discipline. One of the great challenges to the inter-disciplinary translation of artworks has been the development of a system of mapping perceptual attributes between each of the five human senses. Whilst mapping between sculpture and painting may be achieved in a very literal way, mapping between music and painting has always presented itself as more of a challenge. A study of human perception, especially as it pertains to the relationship between audition and vision, can prove very useful toward this end.

2 Clinical Synaesthesia
2.1 Human Sensory Systems
"Human sensory systems mediate four attributes of a stimulus that can be correlated quantitatively with a sensation: Modality, intensity, duration and location." [1] The attributes of intensity, duration and location apply to all five sensory modalities: vision, hearing, touch, taste and smell. Each of these sensory modalities has sub-modalities, which in the case of vision include colour whilst in hearing they include pitch. Our perception of light arrives at the brain via a series of photo-receptive rods and cones in the eye. Audition on the other hand uses information gathered by mechano-receptive hair cells in the ear that measure vibrations in air pressure. The nature of the differences between the five modalities is suggested by the disparate nature of these sensory receptors. Whilst both photo-receptors and mechano-receptors measure intensity, location and duration, they both also measure a property of frequency. Pythagoras, Sir Isaac Newton, and numerous other physicists have hypothesised about the existence of a physical relationship between the frequencies of light and sound responsible for the sub-modalities of colour and pitch. However, explanations of the relationships that exist between sensory sub-modalities have been made in more recent times by psychologists and neurophysicists.

2.2 Definition of Synesthesia


"Synesthesia is a word derived from the Greek words 'syn' meaning together and 'aisthesis' meaning perception. It is used to describe the involuntary physical experience of a cross-modal association. That is, the stimulation of one sensory modality reliably causes a perception in one or more different senses." [2] For example, a synesthete will see coloured shapes projected into their field of vision as a result of auditory stimulation. 2.3 Does Synesthesia Exist? Synesthesia has in the past been considered a less than scientific area of research by some, due to its reliance on subjective sources for any observations of its nature. Scientists have recently been convinced of the existence of synesthesia and cite evidence in support: •

The impressive test-retest reliability in the consistency of colours triggered by different words (in the case of "coloured hearing").



The similarity of reports from different cultures and different times across the century.



The consistency of sex ratio (it is overwhelmingly a female condition).



The familial pattern to the condition.



The neuroimaging data (using PET) showing different cortical blood flow patterns in women with synaesthesia in comparison to women without the condition.

2.4 Clinical Diagnosis of Synesthesia
Psychologists and more recently neuropsychologists have documented the nature of synesthetic experience in a useful manner for over a hundred years. Varying criteria have been applied to the diagnosis of synesthesia, although in general psychologists have always differentiated clinical synesthesia from metaphor, literary tropes, sound symbolism, and deliberate artistic contrivances that sometimes employ the term "synesthesia" to describe their multi-sensory joinings. Dr. Richard Cytowic has proposed five criteria for the diagnosis of a type of clinical synesthesia called idiopathic or developmental synesthesia, as opposed to acquired forms of clinical synesthesia such as drug induced synesthesia, epileptic synesthesia, and synesthesia due to acquired brain lesions:
• Synesthesia is involuntary but elicited.
• Synesthesia is projected. If visual, a photism will appear outside the body in the region close to the face.
• Synesthetic percepts are durable and discrete. The associations for an individual synesthete are stable over their lifetime. If a sound is blue, it will always be blue.
• Synesthetic experience is memorable. Many synesthetes exhibit hypermnesis.
• Synesthesia is emotional in nature. A synesthetic experience is accompanied by a sense of noetic certitude.

2.5 Non Uniformity in Synesthetic Perception
In addition to these characteristics it should be added that there is no uniformity amongst the experiences of synesthetes. Each individual experiencing synesthesia experiences it in a unique form. "In fact, this rather glaring problem - that two individuals with the same sensory pairings do not report identical, or even similar, synesthetic responses - has sometimes been taken as 'proof' that synesthesia is not 'real.'" [3] Yet it remains that certain patterns have remained constant in the statistical information derived from scientific observation of synesthetic perception. Some of these patterns, such as the correspondence between pitch and visual brightness, have been documented repeatedly since they were first described in the experiments of Bleuler and Lehmann in 1881.


Lawrence E. Marks describes the situation thus: "One should not come away with the impression that all our knowledge about our sensory and perceptual experiences can be captured in a set of independent - or even interrelated verbal categories; nor that sensory/perceptual experiences themselves reduce in any simple manner to a list of attributes... Still, the study of synesthetic metaphor may serve as a useful model system. By being amenable to psychophysical analysis, synesthetic metaphors not only permit ready quantification, but enable us to assess development trends in the ways that at least certain aspects of such metaphors are interpreted... A psychophysics of synesthetic metaphor as described here may eventually reveal much about perception and language; but to appreciate the depth and extent of human metaphorical capacity will demand a psychological analysis that is as yet hardly dreamt in our philosophy."

2.6 Explanations of Synesthetic Perception
Over the past 200 years a number of hypotheses have been put forward to explain the cause of synesthesia. Current theories, however, in some way recognise the findings of recent neurological studies that suggest the possibility that the executive areas of the human brain, primarily in the frontal lobes, manifest a high degree of sensory integration. The Cross-Modal Transfer (CMT) hypothesis is now a widely accepted explanation for the occurrence of synesthesia, although it was radical when it was first proposed. The CMT hypothesis supports the view that detection of intersensory equivalence is present from birth, and that perceptual development is characterized by gradual differentiation.

2.7 The Neonatal Synaesthesia Hypothesis
The Neonatal Synaesthesia hypothesis builds on the CMT evidence, but suggests that early in infancy, probably up to about 4 months of age, all human babies experience sensory input in an undifferentiated way. Sounds trigger auditory, visual and tactile experiences all at once.


Following this early initial phase of normal synaesthesia, the different sensory modalities become increasingly modular. Adult synaesthesia has been suggested to be the result of a breakdown in the process of modularization, such that during infancy the modularization process was not completed. This of course implies that if not now, then at some time in the past, we have all experienced synesthetic perception.

3 Synesthesia and Art
3.1 Photisms in Coloured-Hearing Synesthesia
In coloured hearing synesthesia, a photism, usually coloured in some way, appears in the field of vision of a synesthete as a response to some form of aural stimulus. Synesthetic photisms usually vary in shape and colour according to the nature of the stimuli that triggered them. The examples below represent some different photisms.

As can be seen above, "Synesthetes never see complex dream-like scenes or have otherwise elaborate percepts. They perceive blobs, lines, spirals, lattices, and other geometric shapes." [4] Dr Richard Cytowic notes that the generic and restricted nature of synesthetic percepts bears a considerable likeness to a series of forms first described by Heinrich Klüver in the 1920s, known as Klüver's "form constants". [5] These generic shapes are common to synesthesia and hallucinations, and are frequently seen in primitive art.


Figure 2.2 - Klüver's Form Constants

Photism colour, brightness, symmetry, and shape have all been recorded to vary as a result of variation in musical stimuli. Tempo, for instance, affects the shape of a photism: the faster the music, the sharper and more angular the photism. That pitch has a direct effect on the size of a photism has also been recorded. It has been observed universally that photism size increases as auditory pitch decreases. In this way high pitched sounds produce small photisms and low pitched sounds produce synesthetic percepts that are large in size. Loudness also has an effect on the size of the photism perceived by a synesthete. Lawrence E. Marks shares his understanding of synesthetic response to music: "Just as the important dimensions of the auditory stimulus that are responsible for musical synesthesia can be quite complex, so too can be the synesthetic responses themselves." "Visual sensations aroused by music need not be limited or confined to simple spots of color. Often the entire visual field fills with colours that change over time with the music; some subjects report several colours simultaneously, each color reflecting a particular aspect of the music." [6]

3.2 Musical Perception in Chromaesthesia
It is of interest to open this section by quoting the concerns of one group of psychologists who conducted numerous investigations into synesthesia during the first half of the twentieth century. Published in The Journal of General Psychology in 1942, they write:


"Although it is generally agreed that relationships between visual and auditory experiences exist commonly in our language forms, nevertheless, we have no quantitative measure of just how common a given relationship between sound and sight actually is in the population. Such a measure would be useful to determine what per cent of an audience could be expected to grasp an artist's purpose if, for example, he represented the harmony of his musical composition by background and the melody by figures in his color-music production. Also it would facilitate the process of conventionalising associations between music and vision if one could determine quantitatively which of several acceptable ways of representing a melody, for instance, is already predominately in use - i.e., is considered appropriate by most people." [7]

At least 23 psychological publications between 1862 and 1974 concern themselves with correlations between sound composition and colour as a result of research directed at synesthesia. Many more have concerned themselves with studies of synesthesia triggered by speech stimuli. Research into these areas of synesthesia has furnished artists with some information with which to start developing the formulations suggested in the quote from 1942 above. In his article "On Coloured-Hearing Synesthesia", originally published in the Psychological Bulletin, Lawrence E. Marks compiled all the information extant on such matters into a series of tables printed below: [8]

3.3 Vowel Colour
The study of chromesthetic phenomena often concerns itself with associations triggered by speech rather than music. This is perhaps due to the fact that speech is pathologically superior in its ability to evoke a synesthetic response. The component of speech that bears the greatest influence on the nature of the induced response is the sound of vowels. Both areas have tremendous significance in mapping out perceptual parallels between the modalities of hearing and vision. Firstly, when it comes to reports on musical synesthesia, we find that the important principles of visual-auditory association that manifest themselves in colour music are basically the same principles that manifest themselves in coloured vowels - that is, the relations of visual brightness and size to auditory pitch and loudness. Secondly, in an article published in 1968, Wayne Slawson showed that artificial two-formant sounds are readily interpretable both as vowels and as musical notes, and that vowel quality and musical timbre depend in similar ways on the structure of the sound (formant frequency and spectrum envelope).

3.4 Slawson's Sound Color
Slawson went on to elaborate his comparison of vowel sounds to the field of musical timbre in his Sound Color of 1985. In this book, Slawson uses the four characteristics set out by Chomsky and Halle in 1968 in The Sound Pattern of English and adapts them so as to form a basis for organising musical timbre derived from varied sources. Besides being the only text to date to take up Arnold Schoenberg's 1911 request for a treatise on the subject of Klangfarbe, Sound Color forms an essential bridge between the colours commonly associated with vowel sounds and the formant composition of the synthesised sounds with which Slawson is largely concerned. Slawson indicates that sound colour is primarily a function of the frequencies of the first two resonances. He uses the three categories by which vowel features are organised - compactness, acuteness and laxness - changing compactness to openness, and adding a fourth category, smallness, which has no corresponding vowel feature.
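Slawson's claim that sound colour is primarily a function of the first two resonances can be illustrated with a toy calculation. The sketch below is an assumption-laden approximation rather than Slawson's own formulation: the formant ranges, the neutral reference values and the normalisation are invented for demonstration, treating openness as tracking the first formant, acuteness as tracking the second, and laxness as proximity to a neutral, schwa-like reference; the fourth category, smallness, is omitted.

```python
# Toy illustration only: the formant ranges, neutral reference values and the
# normalisation below are assumptions for demonstration; they are not taken
# from Slawson's Sound Color (1985), which should be consulted directly.
from math import hypot

NEUTRAL_F1, NEUTRAL_F2 = 500.0, 1500.0   # assumed schwa-like neutral reference (Hz)
F1_RANGE = (250.0, 900.0)                # assumed span of first-formant values (Hz)
F2_RANGE = (600.0, 2500.0)               # assumed span of second-formant values (Hz)

def normalise(value, lo, hi):
    """Clamp value into [lo, hi] and rescale it to the range 0.0-1.0."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def sound_colour(f1, f2):
    """Return rough 'sound colour' coordinates for a two-formant sound (Hz)."""
    openness = normalise(f1, *F1_RANGE)          # higher first formant -> more open
    acuteness = normalise(f2, *F2_RANGE)         # higher second formant -> more acute
    distance = hypot(f1 - NEUTRAL_F1, f2 - NEUTRAL_F2)
    laxness = 1.0 - normalise(distance, 0.0, 1200.0)   # nearer neutral -> more lax
    return {"openness": openness, "acuteness": acuteness, "laxness": laxness}

# An [i]-like sound (low F1, high F2) versus an [a]-like sound (high F1, low F2).
print(sound_colour(280.0, 2300.0))
print(sound_colour(750.0, 1200.0))
```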

4 Conclusion
The nature of synesthetic perception does not on its own provide artists with a template for mapping between the visual and auditory arts. Whilst it describes the constant nature of the relationships between brightness and pitch, brightness and volume, and photism shape and aural texture, it does not map hue to timbre or pitch in any way not related to a particular musical context. Also, because synesthetic photisms are two dimensional in nature, research into synesthesia can shed only limited light on relationships between music and objects with three dimensions.

5 Footnotes
[1] John H. Martin, "Coding and Processing of Sensory Information" in Principles of Neural Science, ed. Eric R. Kandel, James H. Schwartz and Thomas M. Jessel (London: Prentice Hall, 1991), 329.
[2] Richard E. Cytowic, "Synesthesia, phenomenology and neuropsychology: a review of current knowledge," Psyche 2.10 (1995), 1.
[3] ibid.
[4] Richard E. Cytowic, Synesthesia: A Union of the Senses (New York: Springer-Verlag, 1989), 138.
[5] Kluver, H. Mescal and Mechanisms of Hallucinations. Chicago: University of Chicago Press, 1966.
[6] Lawrence E. Marks, "On Coloured-Hearing Synesthesia," in Simon Baron-Cohen and John Harrison, eds., Synesthesia: Classic and Contemporary Readings (Oxford: Blackwells, 1996), 70.
[7] Karwoski, T. F., H. S. Odbert and Charles E. Osgood. "Studies in Synesthetic Thinking II: The Role of Forms in Visual Responses to Music." Journal of General Psychology 26 (1942): 205.
[8] Lawrence E. Marks, "On Coloured-Hearing Synesthesia," in Simon Baron-Cohen and John Harrison, eds., Synesthesia: Classic and Contemporary Readings (Oxford: Blackwells, 1996), 70.


6 Bibliography
Baron-Cohen, Simon. "Is There a Normal Phase of Synaesthesia in Development?" PSYCHE 2(27), June 1996.
Baron-Cohen, S., and J. Harrison, eds. Synesthesia: Classic and Contemporary Readings. Oxford: Blackwells, 1996.
Bleuler, E., and K. Lehmann. Zwangsmassige Lichtempfindungen durch Schall und Verwandte Erscheinungen. Leipzig: Fues's Verlag, 1881.
Chomsky, N., and M. Halle. The Sound Pattern of English. New York: Harper and Row, 1968.
Cytowic, R.E. "Synesthesia, phenomenology and neuropsychology: a review of current knowledge." In Synesthesia: Classic and Contemporary Readings, eds. S. Baron-Cohen and J. Harrison. Oxford: Blackwells, 1996.
Cytowic, R.E. Synesthesia: A Union of the Senses. New York: Springer-Verlag, 1989.
Frith, Christopher D., and Eraldo Paulesu. "Physiological basis of Synesthesia." In Synesthesia: Classic and Contemporary Readings, eds. S. Baron-Cohen and J. Harrison. Oxford: Blackwells, 1996.
Jacobs, L., A. Karpik, D. Bozian, and S. Gothgen. "Auditory-visual synaesthesia: sound induced photisms." Archives of Neurology 38 (1981): 211-16.
Karwoski, T. F., H. S. Odbert and Charles E. Osgood. "Studies in Synesthetic Thinking II: The Role of Forms in Visual Responses to Music." Journal of General Psychology 26 (1942).
Kluver, H. Mescal and Mechanisms of Hallucinations. Chicago: University of Chicago Press, 1966.


Marks, L. E. The Unity of the Senses: Interrelations among the Modalities. New York: Academic Press, 1978.
Marks, Lawrence E. "Categories of Perceptual Experience: A Psychophysicist Peruses Synesthetic Metaphors." In Modern Issues in Perception, edited by Hans-Georg Geissler. Amsterdam: Elsevier Science Publishers B.V., 1983. 351-352.
Marks, Lawrence E. "On Coloured-Hearing Synesthesia." In Synesthesia: Classic and Contemporary Readings, edited by S. Baron-Cohen and J. Harrison. Oxford: Blackwells, 1996.
Martin, John H. "Coding and Processing of Sensory Information." In Principles of Neural Science, edited by Eric R. Kandel, James H. Schwartz and Thomas M. Jessel. London: Prentice Hall, 1991.
Myers, C. S. "Two cases of Synesthesia." British Journal of Psychology 7 (1915): 112-17.
Newton, Isaac. Opticks. New York: Dover Publications, 1952 (1730).
Peacock, K. "Synesthetic perception: Alexander Scriabin's color hearing." Music Perception 2 (1985): 498.
Sabaneev, L. "The Relation between Sound and Colour." Music and Letters 10 (1926): 266.
Slawson, Wayne. "Vowel quality and musical timbre as functions of spectrum envelope and fundamental frequency." Journal of the Acoustical Society of America 43 (1968): 87-101.
Slawson, Wayne. Sound Color. University of California Press, 1985.
Stein, Barry M. The Merging of the Senses. Cambridge, Mass.: MIT Press, 1993.


Appendix D – Mysticism

Mysticism in Abstract Arts
Andrew D. Lyons
Composition Unit, The Sydney Conservatorium of Music, The University of Sydney, Sydney NSW 2000, Australia
http://www.tstex.com Email: [email protected]

"Surely there was a time when art and science did not exist, even as concepts. The activities now ascribed to these two disciplines must have been one and the same. When prehistoric man painted pictures of bison and antelope on the walls of his cave, it seems certain that he was creating sympathetic magic in an effort to understand and control his environment ... He was both proto-artist and proto-scientist." [1] This quote suggests a common root or intended outcome for both the arts and sciences. Philosophies concerning this root frequently involve a world view in which all arts can meet and be related. The world's great mystical traditions have often been employed to suggest new ways to attempt artistic abstraction. Mythical and mystical traditions have also served to provide thematic material for such works. Ideas common to most mystical views are that "the universe is a single, living substance; mind and matter also are one; all things evolve in dialectical opposition, thus the universe comprises paired opposites; everything corresponds in a universal analogy, with things above as they are below; imagination is real; and self realization can come by illumination, accident, or an induced state: the epiphany is suggested by heat, fire or light." [2]


The nature of the mystical experience suggested here "can be defined as being an 'infinite intimacy', a sense of fulfilment in which the subject is simultaneously aware of the limitless nature of the universe and yet of his infinite relationship to a force sensible as an identifiable personality. It is simultaneously the experience of everything and nothing, of knowing all yet being empty, of hearing within silence all sound. Different religious traditions identify this state individually – nirvana, mushir, Shambhala, Buddhahood, mystical union, alchemical marriage, Shekinah – yet it can be seen as a common goal of all esoteric teaching, an experience of oneness beyond the world of duality." [3]

A mystical tradition popular with artists at the turn of the century was the distillation of the world's esoteric belief systems into a science termed Theosophy. As the architect Claude Bragdon puts it in his book dealing with architecture as frozen music: "Theosophy, both as a philosophy, or system of thought, which discovers correlations between things apparently unrelated, and as a life, or system of training whereby it is possible to gain the power to perceive and use these correlations for worthy ends, is of great value to the creative artist, whose success depends on the extent to which he works organically, conforming to the cosmic pattern, proceeding rationally and rhythmically to some predetermined end." [4] ... "One of the advantages of a thorough assimilation of what may be called the theosophic idea is that it can be applied with advantage to every department of knowledge and human activity: like the key to a cryptogram it renders clear and simple that which before seemed intricate and obscure." [5]

It can easily be seen how such a philosophical approach to understanding could be of great interest to artists attempting to represent music as architecture or visual art. It is in fact well documented that the theosophical thought and mystic beliefs common at the turn of the century were pivotal in the genesis of non-representational or abstract art. The art historian Sixten Ringbom states summarily that "Abstract Art emerged as an attempt to express ideas about the ineffable." [6] Indeed, "visual artists from the generation born in the 1860s to contemporary times, have turned to a variety of anti-materialist philosophies, with concepts of mysticism or occultism at their core." [7] This fact is outlined at length in the 1986 publication The Spiritual in Art: Abstract Painting 1890-1985. [8] At the end of this compendium of essays dealing with the function played by interests in mystic traditions in the development of abstract art, the mystical interests of nearly a hundred prominent abstract artists are dealt with chronologically. This list of artists includes Jean Arp, Marcel Duchamp, Paul Gauguin, Augusto Giacometti, Victor Hugo, Wassily Kandinsky, Paul Klee, Yves Klein, Hilma af Klint, František Kupka, Kazimir Malevich, Franz Marc, Piet Mondrian, Edvard Munch, Francis Picabia, Jackson Pollock, Odilon Redon, and Jan Toorop.

Artists working with static media are not the only abstract artists interested in mystical or spiritual matters. In her 1994 PhD dissertation, Things of the Spirit: A Study of Abstract Animation, [9] Maureen Ruth Furniss "argues that many abstract animators, like artists in other fields, have cared for 'things of the spirit', and that the production of abstract animation has often been closely bound with an artist's search for understanding of the self and the mysteries of life on a more universal level. Primarily, this study focuses on artists Len Lye, Norman McLaren, Harry Smith, Oskar Fischinger, James Whitney and Jordan Belson. . . Included are discussions of various practices, tribal art and culture, speculative music, synesthesia, numerology, and alchemy. . ." [10]

In the musical world, theosophy played a major part in the formulation of the first and only example of an orchestral composition scored for orchestra and a synesthetic coloured light performance. Alexander Scriabin's Prometheus: The Poem of Fire [11] may be seen to be a direct result of the composer's active participation in the fin de siècle Parisian Theosophical society. A more extensive history of the influence of mystical traditions in colour music can be found in Dr. William Moritz's "Abstract Film and Color Music", a paper originally published in The Spiritual in Art: Abstract Painting 1890-1985.


It is a beautiful book which provides a most comprehensive analysis of the matter. [12] One final book which may be of interest is the Kevin Dann thesis, [13] which is an astoundingly comprehensive history of the perceived relationship between mysticism and synesthesia amongst artists. It has been very well researched from an historical point of view, and incorporates an incredible level of detail with masterful ease. It is, however, a highly tautologous thesis which at times fiercely berates artists who a hundred years ago were unaware of the modern clinical definition of synesthesia. Mysticism seems destined always to hover beyond the expanding horizon of science.

References
[1] Henning, Edward B. Creativity in Art and Science 1860-1960. Cleveland Museum of Art, 1987. (p. xiv)
[2] Bragdon, Claude. The Beautiful Necessity: Seven Essays on Theosophy and Architecture. Wheaton, Ill.: Theosophical Pub. House, 1978, c1939.
[3] Music and Mysticism. Contemporary Music Review, Vol. 14. London: Harwood Academic Publishers, 1996. (p. 3)
[4] Bragdon, Claude. The Beautiful Necessity: Seven Essays on Theosophy and Architecture. Wheaton, Ill.: Theosophical Pub. House, 1978, c1939.
[5] ibid.
[6] Maurice Tuchman, Judi Freeman and Carel Blotkamp, eds. The Spiritual in Art: Abstract Painting 1890-1985. New York: Abbeville Press, 1986.
[7] ibid.


[8] Maurice Tuchman, Judi Freeman and Carel Blotkamp, eds. The Spiritual in Art: Abstract Painting 1890-1985. New York: Abbeville Press, 1986.
[9] Furniss, Maureen Ruth. Things of the Spirit: A Study of Abstract Animation. (PhD diss.) University of Southern California, 1994.
[10] ibid. p. 1.
[11] Scriabin, Aleksandr Nikolayevich. Prometheus: The Poem of Fire; Piano Concerto in F sharp minor. London: Phonodisc, 1972.
[12] Moritz, William. "Abstract Film and Color Music." In The Spiritual in Art: Abstract Painting 1890-1985, edited by Maurice Tuchman, Judi Freeman and Carel Blotkamp. New York: Abbeville Press, 1986.
[13] Dann, Kevin. Bright Colours Falsely Seen: Synaesthesia and the Modern Search for Transcendental Knowledge. (PhD dissertation.) Rutgers, the State University of New Jersey - New Brunswick, 1995.


Appendix E – Schwarzchild

Schwarzchild—Abstract Electronic Music Theatre.

Andrew D. Lyons. Department of Music Composition The Sydney Conservatorium of Music The University of Sydney Macquarie Street Sydney NSW 2000 Australia http://www.tstex.com Email: [email protected]

Abstract
A brief examination of the theoretical basis of an electronic music theatre piece which explores the visual abstraction of musical perception. The representative theory of perception is isolated as a theoretical boundary limiting all useful knowledge to within perceptual psychology. The knowledge sourced from this field, as it exists within the piece, is then detailed. Finally, the ways in which this knowledge is implemented are described.

Keywords
Immersive, Abstract, Music Visualisation, Synesthesia, Physiognomic, Phenomenalism, Idealism.

1 Introduction
The representative theory of perception determines that works attempting to represent music visually must find any theoretical basis within perceptual psychology. The main understandings of psychology in relation to such representations are derived from psychological analysis of synesthetic and physiognomic perception. This points to "energy shape" as a significant parallel between the properties of varied modes of perception. By adopting a Monist, Idealist heuristic premise in relation to sense data, and employing an understanding of similitudes existing in the visonual space, works such as Schwarzchild may be conceptualised.

2 Perception in Philosophy
2.1 Phenomenalism and the Representative Theory of Perception
Anything that can be observed may be regarded as a phenomenon. The doctrine of phenomenalism states that "all statements about material objects can be analysed into statements about actual or possible sensations." [i] These sensations, or sense data, are the means by which we acquire knowledge of the material objects around us; it is however important to differentiate between sense data and the qualities of objects we imagine to be their cause. The sense data merely represent for us the external world and all that we can know about it. To Phenomenalists, the existence of all matter is dependent on it being observed.

2.2 Materialism
According to John Locke and many materialist philosophers since, it is only in what are referred to as the primary sensory qualities of shape, size, position and motion that sense data resemble their originals. The secondary qualities of sense data such as colour, brightness, shrillness and pitch are entirely qualities of human cognitive processes. Many philosophers since Bishop Berkeley have argued that no adequate reason is given for putting primary and secondary qualities on an entirely different footing by saying that shape and motion in the sense data resemble shape and motion in the material thing, whereas colour and pitch do not. To Berkeley, and other Empiricist philosophers, all our knowledge is derived entirely from direct experience. Berkeley extends this theory to regard all existence as being composed of "ideas", by which usage he means sense data. From this usage we have the term Idealism.

2.3 Idealism and "Physics from Fisher Information"
Many people in the western world are accustomed to regarding sense data, in complement with the understandings of science, as describing all reality in an absolute sense. It has been shown scientifically however that "through the very act of observing, we thus actually define the physics of the thing being measured." [ii] This situation has provoked scientists to pose the question: "Is the universe really a frolic of primal information and matter just a mirage?" [iii] Whilst it may be premature to affirm this statement as being beyond doubt, by approaching the analysis of matter utilising Fisher information, physicists have been provided with a means of explaining and uniting all the existing laws of physics, whilst providing the means to create new ones.

3 Useful Approaches to Material
3.1 Heuristic Premises
The composer found it a useful heuristic premise to objectify the creative material available as a kind of unified ether of raw perceptual information, manifesting itself coherently and concurrently as psychologically resonant sounds and visions. This particular heuristic premise may be seen to draw on the Monist and Idealist philosophies that all existence is part of one all-inclusive substance. The use of ether as a conceptual constant unifying temporal and spatial aspects parallels Einstein's consideration of general relativity to be an ether-based theory. The previously referred to theory of existence involving Fisher information may also be seen to be related.

3.2 Similitude
Once this premise has been intuited, it is possible to find resemblances between phenomena previously thought to be unrelated. In viewing the world and all that is in it as an unbounded web of similitudes, and by approaching all understanding through "ceaselessly drawing things together and holding them apart", [iv] in an interplay of sympathy and antipathy, one returns to what was the essential approach to knowledge during the sixteenth century, before the rise of science and its exclusively reductive and inductive approaches to the acquisition of knowledge.

3.3 Visonual Spaces
This emphasis on similitude also resembles the underlying premise of Servio Tulio Marin's visonual metaphorical construction, which is that "music can be heard through the eyes and seen through the ears." [v] Marin explains that, "According to Foucault's collateral, correlative, and complementary spaces (1969), and in Deleuzian and Guattarian (1980) terms as well as in Emily Hicks' model of holography (1991), the visonual metaphor results from the deterritorialisation of the aural and visual senses." [vi] Marin later elaborates thus: "In effect, a Foucaultian statement is to be associated not with the transmission of particular elements presupposed by it, which in the case of the Visonual Statement are Visual and Aural elements, but with the shape of the whole curve to which they are related," [vii] which in Marin's thesis refers to what he describes as the "energy shape" of a sound or image. Marin also draws on synesthetic and physiognomic examples.

4 Perceptual Psychology and its Implementation
4.1 Approaches to Implementation
Composition of sound and image from a shared conceptual space, for performance within synthesised spaces, demands particular approaches and understandings on the part of composers. Besides being able to organise pitch, rhythm and timbre over time, the composer must be able to create and coordinate physiognomic objects, as well as manage issues relating to the arrangement of all these elements within space. In the development of Schwarzchild, the complete absorption of all these considerations was necessary before the composer could begin to conceptualise a work which incorporated all these factors in an integrated manner. The aim of this revised conceptual approach was to create a work which succeeded in developing visual symbols which were specific to the musical and spatial content of the piece but which resonated perceptually in a universally powerful manner. In order to do this, research was undertaken into the universal aspects of cross-modal association, as derived from psychological studies of synesthetic and physiognomic perception. The findings of some of these studies and the ways in which they were implemented are set out below, but for more information refer to the author's ACMA98 paper.

4.2 Location
To begin with, the primary perceptual qualities of location and motion were determined by drawing on the physiognomic idea that high tones are exactly that, whilst low tones are similarly found to be spatialised below the medium frequency sounds, which occur at roughly eye level. With these generalised locations in place, functions involving frequency, amplitude, and spectral content were used to create differential functions to vary location on each Cartesian axis. The location of a visual source directly matched the location of a sound within the sound space.

4.3 Size
Visual size was determined from aural content using a product of amplitude and pitch, as characterises the size of photisms in synesthetic perception. Generally the louder the sound, the bigger the object, with low pitches being generally large and high pitches tending to be small. It was also possible, using the smearing and diffusion functions of the Lake Space Array software, to make sound sources larger or smaller within the sonic space in order to match the visual size of an object if so desired.

4.4 Shape
Shape similarly drew on understandings of physiognomic perception and synesthetic photisms. The local, textural timbre of a sound was therefore assigned to this quality.


In this way sharp, pointed sonic textures were matched with sharp, pointed objects, and rounded sounds were matched with rounded objects. Fuzzy objects were similarly textured visually. Transparency, which may be seen to be a subset of shape, was determined in a variety of ways, usually drawing on functions of timbre and amplitude.

4.5 Spatial Volume
The spaces within which the piece takes place were initially determined by a subjective judgement on the part of the composer, who determined by feel the spatial qualities of a particular section of music. This was however able to be reinforced through the spatial cues created by these spaces once the music was convolved within that space. At this stage, the relationship of space to music ceased to be particularly subjective.

4.6 Brightness
Of the secondary perceptual qualities, brightness may be seen to exist in both sound and vision as a function of what may be called intensity, itself a product of pitch and volume. The louder a sound the brighter the photism, with low pitches generally producing darker photisms than high pitches, which tend to produce bright ones.

4.7 Hue
Hue however was ignored completely, being a quality which varies so subjectively in relation to particular sounds between observers as to be rendered an obstructive addition to any attempt to make use of universal perceptual correspondences in an absolute sense.

4.8 Atmospheres
The atmospheric effects utilised within a particular space were used to reinforce the size and shape of a space and any desired sense of distance. The lighting for a space was determined using an intuitive experiential analysis of what the composer regarded to be the mood or "feel" of a section. It is the most subjective and aesthetically determined aspect of the animation.
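The correspondences set out in sections 4.2 to 4.6 above amount to a simple parameter mapping from audio features to visual attributes. The following Python sketch is illustrative only: the normalised feature ranges, the function name and the scaling used are assumptions made for demonstration, and are not the actual functions or software used in the production of Schwarzchild.

```python
# Illustrative sketch only: feature names, ranges and scaling constants are
# assumptions for demonstration, not the mappings used in the Schwarzchild pipeline.

def map_sound_to_visual(pitch, amplitude, sharpness):
    """Map normalised audio features (each in 0.0-1.0) to visual attributes.

    pitch     : 0.0 = lowest audible pitch, 1.0 = highest
    amplitude : 0.0 = silence, 1.0 = maximum loudness
    sharpness : 0.0 = rounded/smooth timbre, 1.0 = sharp/pointed timbre
    """
    # Location: high tones sit high in the visual field, low tones sit low,
    # with mid-frequency material at roughly eye level (y = 0).
    y_position = (pitch - 0.5) * 2.0          # -1.0 (low) .. +1.0 (high)

    # Size: louder and lower sounds yield larger objects, quieter and higher
    # sounds yield smaller ones (photism-like inverse relation to pitch).
    size = amplitude * (1.0 - pitch)

    # Brightness: treated as "intensity", a product of pitch and loudness.
    brightness = amplitude * pitch

    # Shape: sharp, pointed sonic textures map to angular geometry,
    # rounded sounds to rounded geometry.
    angularity = sharpness

    return {
        "y_position": y_position,
        "size": size,
        "brightness": brightness,
        "angularity": angularity,
    }


# Example: a loud, low, rounded sound produces a large, dark, rounded object
# placed low in the visual field.
print(map_sound_to_visual(pitch=0.2, amplitude=0.9, sharpness=0.1))
```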

5 Conclusion


Approaches to the visual representation of musical perception in a spatial, audio-visual performance environment require specific understandings of cross-modal perception and a particular approach to time, space and matter. The employment of an Idealist philosophical premise rather than a materialistic one facilitates the creation of such works. If the heuristic premises on which Schwarzchild is based are ever proved beyond doubt to be true, it may be regarded as a completely synthesised piece which represents the raw fabric of all existence as material with which to compose a virtual reality.

6 Footnotes
1 Marin, Servio Tulio. (1994). The Concept of the Visonual: Aural and Visual Associations in Twentieth Century Music Theatre. Diss. University of California San Diego. p. 32.
2 Whiteley, C.H. (1966). An Introduction to Metaphysics. Methuen + Co, London. p. 75.
3 Matthews, Robert. (1999). "I is the Law." New Scientist, No. 2171. Reed Business Information, Sydney. (pp. 24-28), p. 27.
4 ibid. Front Cover.
5 Tomlinson, Gary. (1993). Music in Renaissance Magic: Toward a Historiography of Others. University of Chicago Press, Chicago. p. 189.
6 Marin, Servio Tulio. p. xiii.
7 ibid.
8 ibid. p. 20.


7 Selected Bibliography
Bergson, Henri. (1980). An Introduction to Metaphysics. The Bobbs-Merrill Company, Indianapolis.
Davies, John Booth. (1978). The Psychology of Music. Stanford University Press, Stanford, California.
Deleuze, Gilles. (1988). "Foldings, or the Inside of Thought." In Foucault. Minneapolis: University of Minnesota Press.
Einstein, Albert. "Ether and the Theory of Relativity." An address delivered on May 5th, 1920, at the University of Leyden. In Ray Tomes' Cycles in the Universe: http://www.kcbbs.gen.nz/users/af/alt-faq.htm
Foucault, Michel. (1970). The Order of Things: An Archaeology of the Human Sciences. Random House-Pantheon, NY.
Frieden, Roy. (1999). Physics from Fisher Information. Cambridge University Press, NY.
Lyons, Andrew D. (1998). Evaluating New Tools and Techniques for Intermedia Composition and Production. Published in the Proceedings of the 1998 ACMA conference.


Appendix F – Gestalt Gesamtkunstwerk

Gestalt Approaches to the Virtual Gesamtkunstwerk.

Andrew D. Lyons Composition Unit The Sydney Conservatorium of Music. The University of Sydney Sydney NSW 2000 Australia http://www.tstex.com Email: [email protected]

Abstract
The basis for the differentiation of artistic disciplines is examined. The historical development of definitive criteria for Gesamtkunstwerke is briefly surveyed. Gestalt Psychology is discussed as an advantageous approach to cognition with which to conceptualise and design content for such works. Spatial audio-visual reproduction equipment is suggested to be an important technology for the realisation of such works. The author speculates as to the impact of such works on the individual artistic disciplines of music and architecture.

Keywords Art, Gesamtkunstwerk, Phenomenology, Gestalt Psychology, Virtual Reality

Introduction
Computer aided design software applications that converge and integrate data from various artistic disciplines constitute the first technology capable of the morphological freedom necessary in the creation of "Gesamtkunstwerke". Works which synthesise various artistic disciplines into Gesamtkunstwerke inherit dimensional attributes that demand the spatial, audio-visual reproduction technology required to create "Virtual Reality". A phenomenological approach to artistic material provides many means to suggest relationships between the attributes of differing artistic disciplines. Gestalt Psychology offers an understanding of aesthetic perception and cognition based on Phenomenology. Many major challenges in creating Gesamtkunstwerke can be solved with an approach to artistic material using Gestalt techniques. As electronically synthesised Gesamtkunstwerke proliferate, derivations of these works will no doubt have a profound impact on all artistic disciplines.

1 A Reduction and Re-synthesis of Art
1.1 Reduction
Contextual definitions reduce art into a schema of various disciplines based on these contexts. If a work is to synthesise all artistic disciplines into a great work, or Gesamtkunstwerk, a commonality must be synthesised from these contexts that need not adhere to the doctrine of any specific discipline. By tabulating numerous common bases for the classification of arts into disciplines, various differences and commonalities may be isolated. Table 1.1 provides examples for each different class and suggests the product of a synthesis of the differences, in the current context of the Gesamtkunstwerk.

Medium. Examples: arts that use words, tones, stones, paint on canvas, human bodies etc. Re-synthesis: synthesised media simulating all types of concrete media.
Dimension. Examples: arts that use space, time, or any other dimension for their main sphere of operation. Re-synthesis: all dimensions folded into each other.
Purpose. Examples: arts that are necessary, arts that are useful and arts that entertain. Re-synthesis: multiple purposes.
Residue. Examples: this reduction separates the arts into (1) theoretical arts that leave no traces behind them and are characterised by the study of things, (2) practical arts that consist of an action of the artist without leaving a product, and (3) productive arts that leave behind an object. Re-synthesis: productive art that leaves behind a concrete product that simulates other concrete products.
Semiological determinacy. Examples: painting and poetry can evoke determinate associations; music and architecture usually do not. Re-synthesis: blended referential and non-referential signification.

Table 1.1 - Modern western criteria for the classifications of art

1.2 Re-synthesis
A brief analysis of the re-synthesis entries of Table 1.1 reveals some of the qualities and problems inherent in the Gesamtkunstwerk. In particular it may be observed that the re-synthesis of some classes may be more difficult to achieve than others. Of the five classifications presented, the medium and the residue are - in this case - prescribed by the technology used for reproduction. This technology can incorporate enough information to accommodate multiple simultaneous purposes. The treatment of mixed signification presents some problems during the production of the Gesamtkunstwerk, given the strengths of existing visualisation software, although it is not too difficult to resolve theoretically. The fusion of dimensional qualities constitutes the major difficulty in the conceptualisation of the Gesamtkunstwerk.

1.3 Alternative Perspectives
There is no mathematical formula with which to translate one dimensional time into three dimensional space. Ultimately it is not necessary to approach the design of artistic works concerned with relating disparate artistic phenomena by using an approach based on the logical positivism philosophically fundamental to the natural sciences.


One of the great commonalities of all artistic disciplines is a concern with the subjective perception of human beings. Because science posits no explanation of the subject, being concerned only with physical objects, alternative schools of philosophy such as existential phenomenology generally offer more useful models of human consciousness with which to develop art doctrine. In the words of Henri Bergson, "Our perceptions give us the plan of our eventual action on things much more than that of things themselves." [1]

2 Das Gesamtkunstwerk
2.1 Wagner's Gesamtkunstwerk
The idea of the Gesamtkunstwerk or "great work" was first proposed in the late 1840s by Richard Wagner in his paper The Artwork of the Future. [2] For Wagner, his music theatre works realised the dream of the Gesamtkunstwerk by bringing together the great arts of Painting, Music and Drama as a unity. He expresses a poor opinion of "sister dance", and the stark functionality of his Bayreuth theatre may be seen as a testament to his ideas on Architecture as art. Despite Wagner's artistic preferences, the definition of the Gesamtkunstwerk stipulates that it be a fusion of all arts, qualitative evaluations of any discipline's right to inclusion aside. Therefore, if the arts are to be fused, then aspects of the Visual Arts, Architecture, Dance, Music, Sculpture and Theatre should all be present in equal measure before a work can be considered a Gesamtkunstwerk.

2.2 Approaching a Definition
At this point it becomes necessary to attempt to define, or perhaps re-define, what the Gesamtkunstwerk actually is. Ultimately Wagner's Gesamtkunstwerk was not a fusion of all arts but a combination of a few. If the "great work" is truly a fusion of all arts, would this not demand the semiological ambiguity and the representation of dimensional folding described in section 1.1? This was believed to be the case during the last decades of the nineteenth century, as the idea of the Gesamtkunstwerk evolved within the Zeitgeist of time and space that permeated European artistic thought at this time.


In the spirit of that age, the Gesamtkunstwerk was believed to constitute a fusion of all arts that would exhibit profound aesthetic resonance and even present itself as a metaphysical epiphany. Cubism, Abstract Art and Suprematism are all examples of such concerns in painting, while spatial and morphological concerns in Music, and temporal concerns in Architecture, also emerged as a result of this "culture of time and space." [3]

2.3 A Near Miss
Two young Parisians greatly influenced by these ideas were the architect Le Corbusier and the composer Edgard Varèse. These two men were prominent in the creation of an exhibit for the 1958 world fair that is often regarded as both a forerunner of Virtual Reality and as an example of the Gesamtkunstwerk. "Although a little building of brief life span, the 1958 Philips Pavilion, with its spectacle of amplified sound and rhythmically orchestrated light and colour, was a landmark in electronic media technology that concomitantly tested the limits of Architecture, both concrete and virtual. When seen against the buildings and arts of its time, when seen as Le Corbusier's synthesis of the arts, the Philips project assumes justified importance. While in some ways neither the Architecture nor the spectacle fully realized its complete potential, in other ways all aspects of the project were prescient. If the Philips project did not locate the precise point at which all the arts - traditional and electronic - would intersect some time in the future, it did provide the unquestionable directional signs toward that point." [4]


Figure: Le Corbusier shielding Edgard Varèse from Louis Kalff of the Philips Corporation as they stand beside the completed Philips Pavilion in Brussels, 1958.

2.4 A New Approach
At the turn of the twentieth century, the cognition of art was investigated by the Austrian philosopher and psychologist Christian von Ehrenfels. Ehrenfels was a Professor at the German University in Prague from 1896 until 1925.

Time Space Texture: An Approach to Audio-Visual Composition

formations such as spatial figures or melodies might be." [6] The paper began with a terminological proposal that the German word "Gestalt", which means shape, figure or form, should be generalised in a certain way. For Ehrenfels, a Gestalt quality, "is not a combination of elements but something new in relation to these, which exists together with their combination, but is distinguishable from it". [7] Ehrenfels recognised that Gestalten involving spatial shape could be analogous to Gestalten involving objects that have a complexity that is extended in time. The basis of Ehrenfel's approach did not involve a reduction of either melody or spatial figure to physical attributes in order to derive commonality. He regarded these simple artistic articulations rather as phenomena, and as such their structures were better understood as they presented themselves to consciousness, without recourse to theory, deduction, or assumptions of other disciplines such as the physical sciences. According to this approach, perception initially presents a unified whole or Gestalt which then reveals layers of elements in structured relationships.

This approach to knowledge is based on the ideas of Phenomenology, and with its various derivative schools of thought, Phenomenology constitutes a highly effective philosophy to employ in the creation of Gesamtkunstwerke. It provides the only tool with which to solve the problem of dimensional translation intrinsic to the successful realisation of the Gesamtkunstwerk. The Gestalt tradition in particular suggests various means by which to create strong associations between aural and visual phenomena in order to create profound illusions of unity.

3 Gestalt Psychology

3.1 The Berlin School

The emergence of Gestalt theory as a general theory of psychological phenomena, processes and application is recognised to have taken place in Berlin around 1912. The work of Max Wertheimer, Wolfgang Kohler, Kurt Koffka, and Kurt Lewin at this time established Gestalt Psychology as a major field of perceptual psychology. Drawing on Phenomenology as it does, Gestalt theory is opposed to the elementistic approach to psychological events found in associationism, behaviorism, and psychoanalysis. Methodologically, it involves a meaningful integration of experimental and phenomenological procedure and approaches phenomena without a reduction of experimental precision.

3.2 Gestalt Grouping

In his Laws of Organisation in Perceptual Forms, [8] Max Wertheimer explains that during the cognition of sensation, phenomena are initially parsed into groups. These groups are made on the basis of attributes such as those set out in Table 3.1.

Proximity: Things that are located in close proximity to each other are inferred to be a group.
Similarity: If objects are similarly spaced, then those of like shape will be regarded as being related.
Symmetry: The random arrangement of most objects in nature means that those that exhibit symmetry will be seen as being related.
Good Continuation: If objects are arranged in such a way that they are collinear, or appear to continue each other, they are grouped as a whole.
Common Fate: Objects that move together are most likely connected in some way.

Table 3.1 - Some forms of Gestalt grouping.

Gestalt groupings provide artists with a powerful means to create relationships between spatial phenomena that have audible or visible attributes. Of the five grouping types shown, Common Fate is the most powerful. A good example of grouping disparate phenomena using common fate in the present context would be to synchronise the motion through space of a source of light and sound.
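Purely as an illustration of the common fate principle (a sketch of my own in Python, not drawn from any system described in this thesis), a single trajectory function can drive both the position of an animated object and the panning azimuth of the associated sound, so that the two phenomena share their motion exactly:

import math

def trajectory(t):
    # A shared trajectory: a slow circle on the horizontal plane (metres).
    radius = 3.0
    return (radius * math.cos(0.5 * t), radius * math.sin(0.5 * t))

def frame_parameters(t):
    # Derive visual and audio parameters from the one trajectory,
    # so that light and sound exhibit common fate.
    x, y = trajectory(t)
    visual_position = (x, y)                         # drives the animated object
    audio_azimuth = math.degrees(math.atan2(y, x))   # drives the sound panner
    return visual_position, audio_azimuth

for frame in range(5):
    t = frame / 25.0                                 # a 25 fps timeline
    position, azimuth = frame_parameters(t)
    print(t, position, azimuth)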


3.3 Isomorphism

Gestalt grouping is not the only technique offered by Gestalt Psychology for creating low-level associations between phenomena sensed by different sensory modalities. A visible structure and an audible structure that share the same structure of operations and relations are said to be "isomorphic". In Gestalt psychology, a one-to-one correspondence between elemental attributes is not essential for relationships to be discerned; structural similarity is another powerful form of relationship. Such isomorphism may be regarded as a means by which to fold dimensional material within spatio-temporal Gesamtkunstwerke. This permits works to take forms other than the tubular representation of spacetime permitted by an approach based on the physical sciences.

4 The Future

4.1 The Silence of Speculation

It is interesting to speculate at this stage how the proliferation of the Gesamtkunstwerk as Virtual Reality will influence traditional art idioms.

Speculations regarding the impact of Virtual Reality on the arts have been published at an increasing rate over the last decade. Often, however, these speculations fail to consider that design aspects of more than one or two disciplines are involved within a Virtual Reality. The dominance of ocularity has meant that many art theorists have tended to envisage Virtual Reality as being as silent and mute as the cinematic arts were when they were first developed. Bound to conventions - technological and otherwise - established during the silent era, music is still part of the post-production process in most cinematic production. Virtual worlds can employ the power of musical technique to affect the perception of temporality. Furthermore, the idea of being "immersed" is itself largely derivative of our perception of the world as we hear it - hearing is the only sense which provides us with a circumambient sense of space. Ultimately, without the integration of sound as an integral design aspect, Virtual Reality will resemble a gaudy electronic version of a late 20th century shopping mall - complete with piped Music.


4.2 Music

In Virtual Reality sonic art has the chance to return to splendour. This will be largely dependent on sound art practitioners coming to terms en masse with the compositional implications of synthetic 3D sound pieces. In most Music, the source of a sound is usually spatially static. At a concert of orchestral Music, the string section doesn't fly through the air as they bow, and the brass section doesn't bob up and down ten feet below your chair. At a rock concert, although the guitarist might fly overhead, usually the P.A. system doesn't. At home, it is most common to listen to Music using two speakers, which allow content to appear to move only across the stereo field. As people become more used to Gesamtkunstwerke in which the motion of a sound source is an important and expected component of a piece, listeners will come to desire this in situations where there is no visual media. Musicians will change the way they conceptualise sound art pieces to bring Music closer to the other arts. The sonic palette available to Musicians in the 21st century will render many traditional lattice-based approaches to Music composition obsolete. Musicographical [9] categories of musical events such as texture, hue, intensity, mass, volume, and density will come to dominate categories such as pitch, rhythm and harmony, which are functionally dependent on instruments with static and limited colouration. The mnemonic system of Music notation will continue to serve the anthropological function of preserving musics which rely on it, but will largely make way for communication via recorded media and graphic systems of sonic representation in technologically advanced cultures.

4.3 Architecture

Architecture will not need to be a mute and static environment. Sounds may not need to be attributable only to concealed speakers: they may become integrated aspects of a liquid Architecture that incorporates sound installations. This may draw on models developed in the virtual context. William Mitchell predicts in his book City of Bits that there will be "profound ideological significance in the Architectural recombinants that follow from electronic dissolution of the traditional building types and of spatial and temporal patterns."[10] Using the perceptual-conceptual bridges of Gestalt Psychology, cross-fertilisation between artistic disciplines could accelerate greatly. This could be particularly so between the arts of time and space. Architects will find new ways of drawing on Musical form to create structures that are in some ways isomorphic to Musical compositions. Perhaps, much like Music, Architecture will abandon paper as a design medium and move entirely into the digital domain.

5 Conclusion

It is traditional amongst romantic thinkers to construe Gesamtkunstwerke as symbolic representations of higher truths - as though a successful synthesis of the arts would represent for humankind, phenomenally, the profound and unifying truths that science seeks to define mathematically. Were this the case, it would be an ambitious exercise to attempt the definition of an art that by all other dialectical approaches still defies description. All such ideas are of course fundamentally conjectural. That Gesamtkunstwerke have any definitive objective qualities at all is also open to debate. The interpretation of art is intrinsically subjective - and, as already noted, science and its philosophy posit no model of the subject for artists to draw on. This is indicative of the usefulness of Gestalt Psychology: Phenomenology can explain the existence of science, but science can't explain Phenomenology. Yet to completely absorb the phenomenological perspective requires almost an inversion of the dominant occidental world view. For many that wouldn't be such a bad thing.

6 Footnotes

[1] Henri Bergson. Creative Evolution. Trans. Arthur Mitchell. New York: H. Holt and Company, 1911.


[2] Richard Wagner. "Artwork of the Future." In Correspondence, Selected Letters of Richard Wagner. Translated and edited by Stewart Spencer and Barry Millington. London: J.M. Dent, 1987.
[3] Stephen Kern. The Culture of Time and Space 1880-1918. Cambridge, Mass.: Harvard University Press, 1983.
[4] Marc Treib. Space Calculated in Seconds. Princeton, N.J.: Princeton University Press, 1996. p. 33.
[5] Barry Smith. Foundations of Gestalt Psychology. Munchen, Wien: Philosophia Verlag, 1988. pp. 83-117.
[6] ibid. p. 12.
[7] ibid. p. 17.
[8] Max Wertheimer. "Laws of Organisation in Perceptual Forms" (1923). In Ellis, W. A Source Book of Gestalt Psychology. London: Routledge & Kegan Paul, 1938. pp. 71-88.
[9] Carlos Palombini. Pierre Schaeffer's Typo-Morphology of Sonic Objects. PhD Dissertation, University of Durham, School of Music, 1993. p. vi.
[10] William J. Mitchell. City of Bits: Space, Place and the Infobahn. Cambridge, Mass: MIT Press, 1995. p.

7 Bibliography

Arnheim, Rudolf. New Essays on the Psychology of Art. Berkeley: University of California Press, 1986.
Bragdon, Claude. The Beautiful Necessity - Architecture as Frozen Music. Wheaton, Ill.: Theosophical Pub. House, 1978 (c1939).


Martin, John H. "Coding and Processing of Sensory Information." In Principles of Neural Science, edited by Eric R. Kandel, James H. Schwartz and Thomas M. Jessel. London: Prentice Hall, 1991.
Mattis, Olivia. Edgard Varese and the Visual Arts. Diss. (Ph.D.): Stanford University, 1992.
Ong, Tze-Boon. Music as a Generative Process in Architectural Form and Space Composition. Diss. Rice University: Houston, Texas, 1994.
Priest, Stephen. Merleau-Ponty. London: Routledge, 1998.
Toy, Maggie, ed. Hypersurface Architecture II. Great Britain: John Wiley & Sons, 1999.
Yi, Dae-Am. Musical Analogy in Gothic and Renaissance Architecture. Diss. University of Sydney: Sydney, Australia, 1991.


Appendix G – Heisenberg – 3D-AV

Abstractly Related and Spatially Simultaneous Auditory-Visual Objects

Andrew D. Lyons
Composition Unit
The Sydney Conservatorium of Music
The University of Sydney
Sydney NSW 2000 Australia
http://www.tstex.com
Email: [email protected]

Abstract

This paper discusses various design issues related to the integrated synthesis of 3D sound and 3D graphics. Issues of particular concern are those related to a style of audio-visual integration in which the perceived attributes of sonic and visual phenomena are mapped to each other. The relationship between auditory and visual perception can draw on synesthesia, mental imagery and creativity. Unique problems result from the combination of these cross-modal design concerns and concerns for dynamic and realistic spatialisation. These problems are discussed within the context of design for reproduction involving traditional screen and multiple channel audio theatre systems. Works by the author are used as examples, including a recently completed 3D audio-visual work for DVD performance called "Heisenberg". The paper concludes with a call for greater treatment of visual artefacts involved in music cognition within music education.

1 Introduction


Works of the sort discussed here can be described using a schema of four design criteria. Issues related to these design criteria and their combination form the basis of all discussion in this paper. They will be referred to in this paper numerically as denoted in Table 1.1 below:

1. Spatialised Sound - The realistic and dynamic spatialisation of sonic objects for performance using multiple speakers or headphones.
2. Spatially Simultaneous Sonic and Visual Objects - The visual representation of spatialised sound source locations using 3D computer graphics and a single screen display.
3. Abstractly Related Sonic and Visual Objects - The representation of mental imagery resulting from music, and the abstraction into the visual domain of other cognitive artefacts attributable to sonic objects.
4. Effective Design - The development of audio-visual works which satisfy the fundamental design tenets of both sonic and visual arts.

Table 1.1 The four design criteria applied by the author.

2 Primary Problems

In the author's research, each of the first three criteria in Table 1.1 has been responsible for what may be considered simple or primary problems. Complex or compound problems begin to arise when attempts are made to integrate the first three criteria in a way that satisfies the demands of the fourth criterion. Because the compound problems often result from combinations of primary problems, each shall be discussed in turn.

2.1 Primary Problems Related to Sound Spatialisation

2.1.1 Design Criteria One

3D sound spatialisation is defined here as any system which is based on, or extends, the original John Chowning (1971) spatial synthesis system. 3D sound systems as defined here do more than just pan sounds around an n-speaker array. A 3D sound system should also synthesise distance cues, Doppler shifts, the six early reflections of sound off walls, and local and global reverberation, and perhaps include other features, in order to generate an aural virtual reality. (Begault 1994)
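The "six early reflections" mentioned above can be illustrated with the standard image-source idea for a rectangular room: each of the six walls contributes one first-order reflection, modelled as a mirror image of the source across that wall. The following Python sketch is purely illustrative and is not the Pan implementation described later; the room dimensions and speed of sound are assumed values.

import math

SPEED_OF_SOUND = 344.0  # metres per second, an assumed room-temperature value

def first_order_image_sources(source, room):
    # Six first-order image sources for a shoebox room with walls at 0 and L
    # on each axis: mirror the source position across each wall in turn.
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(source)
            img[axis] = 2.0 * wall - source[axis]
            images.append(tuple(img))
    return images

def reflection_delays(source, listener, room):
    # Arrival delay, in seconds, of each first-order reflection at the listener.
    return [math.dist(img, listener) / SPEED_OF_SOUND
            for img in first_order_image_sources(source, room)]

room = (8.0, 6.0, 3.0)   # an assumed 8 m x 6 m x 3 m room
print(reflection_delays(source=(2.0, 3.0, 1.5), listener=(5.0, 3.0, 1.5), room=room))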


For the creation of Heisenberg an automated 3D sound spatialisation system of this sort, dubbed "Pan", was designed.

2.1.2 Spatial Resolution in Sound Spatialisation

It is well known that the number of speakers available in a specific speaker array has a direct relationship to the degree of spatial precision achievable with that array. (Begault 1994) As more speakers are added to an array, each individual sound source can be differentiated more easily from others, and its location determined more accurately. (Begault 1994) The greatest spatial resolution in the ITU-R BS.775-1 5.1 speaker array corresponds to the sixty degree arc immediately in front of the audience - where three speakers and the visual display are normally located. The author observed that the movement of spatially coincident sounds and images across this frontal 60 degree arc precipitated a generalised perceptual plausibility which in turn enhanced the evocative power of the acousmatic space beyond the area of the screen. Besides the problematic limited spatial resolution inherent in five speaker arrays, the absence of speakers in the vertical plane might also be considered a limitation. The absence of speakers above and below the horizontal plane removes the ability to create sonic images above and below the audience. Heisenberg was subsequently animated so that all motion takes place not too far from a horizontal plane level with the camera.

2.1.3 Scale Issues and the Inverse Square Law

One problem associated with all spatial synthesis systems involves ambiguity surrounding the correct function with which to create distance attenuation effects. The inverse square law is widely suggested to be the correct function with which to attenuate the amplitude and spectral content of a sound source to create the illusion of distance. (Dodge 1997) (Roads 1997) (Begault 1994) In Moore (1990), however, the inverse cube law is stated to provide a more perceptually tenable relationship between distance and amplitude. It would seem that the exponent in this function holds the key to creating different relationships between distance and attenuation. It is well known that the volume of an extremely loud sound decays over greater distances than that of a quieter one. (Bregman 1990) When working with digitised sound files, however, it is not always good practice to digitally store such scales of amplitude, due to problems with either clipping or narrow bit resolution. By making the exponent in the distance inversion function a parameter called "scale", it was possible to work with an optimally sampled sound file and create either the perception of a nearby insect or a distant Jumbo Jet. This level of control makes itself most apparent in Heisenberg in the dramatic yet slowly shifting Doppler shifts created by fast-moving, loud, distant sound sources. Without the scale parameter, distant sound sources and the dramatic Doppler shifts created by large relative velocities would be inaudible. This is a feature not documented in some classic texts on spatialisation, nor implemented in some proprietary 3D sound systems.
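As a rough illustration of this point (a sketch of my own, not a transcription of Pan's code), a distance attenuation function can expose the exponent as a "scale" parameter; larger values attenuate more steeply with distance, while smaller values keep loud distant sources, and their Doppler shifts, audible:

def distance_gain(distance, scale=1.0, reference=1.0):
    # Amplitude multiplier for a source at `distance` (same units as `reference`).
    # scale = 1.0 gives the usual 1/r amplitude roll-off; larger values attenuate
    # more steeply, smaller values less steeply.
    d = max(distance, reference)   # clamp so very near sources do not blow up
    return (reference / d) ** scale

print([round(distance_gain(d, scale=0.5), 3) for d in (1, 2, 4, 8, 16)])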

2.2 Problems Related to Spatially Simultaneous Sonic and Visual Objects.

2.2.1 Design Criteria Two

In our day-to-day life we are able to attribute sounds causally to an event or object which is more often than not visible in some way. In this way our vision and hearing provide complementary information about the location of events around us. It has been shown that each modality influences the spatial localisation of a stimulus source established by the other. (Lewis, Beauchamp and DeYoe 2000) The synthesis of such spatial audio-visual relationships lends considerable authenticity to virtual environments. (Begault 1990) While there is a large amount of visual bias in situations when discrepancies between visual and auditory spatial location occur, (Welch and Warren 1980) the correlation of cues from each modality may serve to encourage and reinforce the perception of intended abstract relationships between auditory and visual objects.

2.2.2 Temporal Resolution

The main strength of Pan for the author is its ability to provide perfect temporal synchronisation between sounds, their trajectories and their animated representations. No external devices are needed to achieve accuracy within the range of one audio sample. This kind of accuracy does, however, seem a little like overkill when one considers the temporal resolution of the television screens for which the animation is prepared. With such displays there is no satisfactory means to visually represent a sound which has a dynamic spectral envelope that endures no longer than the minimum duration for which a frame can be displayed. This is one twenty-fifth of a second in the PAL format used in Australia and Britain, and roughly one thirtieth of a second for the NTSC format used in the United States. This problem becomes more insidious when percussive passages take place at a regular rate that is slightly out of phase with the redraw rate of whatever television standard is being used. For this reason all percussive passages in Heisenberg are synchronised with the 25 frames per second redraw rate of the PAL television standard. In most cases this means tempos of 93.75 bpm are used. While it remains impossible to accurately represent percussive sounds with any temporal detail, at least it is possible to match their onset and offset with the onset and offset of an image. Creating passages that synchronise in this way, and at this rate, creates an effect commonly referred to as strobing. Strobing and flicker effects are regarded favourably in some circles and not so favourably in others, and ideally screens and projection systems with much faster redraw rates will be available in the long term.
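For illustration of why that tempo aligns with the display: at 93.75 bpm one beat lasts 60 / 93.75 = 0.64 seconds, which is exactly 16 PAL frames of 0.04 seconds each, so a sixteenth-note subdivision (0.16 seconds) occupies exactly 4 frames and every percussive onset can be made to fall on a frame boundary. At 120 bpm, by contrast, a beat lasts 0.5 seconds, or 12.5 frames, so onsets drift in and out of phase with the redraw.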


2.2.3 Spatial Dimensions

When creating spatial works for cinema-style performances, one becomes aware of a great disparity between the spatial range available for representing objects sonically and the spatial range available for representing objects visually. Irrespective of spatial resolution, a sonic object can be located anywhere around the audience on a horizontal plane when using five surround speakers. The representation of all visual objects, however, must vie for space within whatever dimensions are afforded by the available screen.

2.2.4 Calibrating for Spatial Simultaneity

In a 3D animation in which the camera is both moving around a scene and panning from side to side, it is important that sound sources and their visual representations move together relative to the camera position and angle of rotation. If the camera pans to the left, then related sonic and visual objects should move off towards the right together, relative to the central axis of view. It is one of the functions of Pan to ensure that all sounds are spatialised relative to the position and rotation of the camera being used to shoot the animation. It is essential that the audio-visual field remains spatially coincident during camera transformations, because ultimately the position and rotation of the camera represents the spatial position and alignment of the audience during theatre performances. An extension of this problem is the need to spatially calibrate the visual field to match the rendered sound field. To calibrate the sound field spatially it should only be necessary to centre the circle of speakers around the supposed central audience position and arrange them using the angles indicated in the ITU 5.1 theatre sound specification. To calibrate the visual field, however, one should consider - before the animation is created - the angular relationship that exists between the central audience position and the side edges of the screen in the target theatre situation. This angle must then be used as the field of vision in the camera that shoots the 3D animation. In this way sounds and associated objects will appear to be spatially coincident. Figure 2.1 below may help explain this problem.

Figure 2.1 - Field of Vision and Spatial coincidence.
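A minimal sketch of this calibration step (my own illustration, with assumed variable names rather than Pan's) derives the camera's horizontal field of view from the screen width and the distance between the central audience position and the screen:

import math

def camera_field_of_view(screen_width, audience_distance):
    # Horizontal field of view (degrees) that makes the rendered image subtend
    # the same angle as the physical screen does from the central seat.
    half_angle = math.atan((screen_width / 2.0) / audience_distance)
    return math.degrees(2.0 * half_angle)

# e.g. a 4 m wide screen viewed from 5 m away:
print(round(camera_field_of_view(4.0, 5.0), 1))   # approximately 43.6 degrees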

2.3 Problems Related to Abstractly Related Sonic and Visual Objects

2.3.1 Design Criteria Three

It has been a major objective of the author's research to create animations that explore cross-modal exchanges between audition and vision, and which explore mental imagery associated with the experience of listening to acousmatic and computer music. The author appreciates that mental imagery takes highly individual forms, and the imagery of one person does not often resemble that of another when responding to the same piece of music. It is hoped, however, that research describing universal styles and recurring themes in mental imagery, synesthesia and other cross-modal categorisation systems will help offset this subjectivity.

A system involving this research has been utilised in the development of Schwarzchild, Loucid and Heisenberg.

2.3.2 Mental Imagery

When discussing mental imagery it is important to differentiate imagery specific to each sensory modality. Auditory, visual, kinesthetic, tactile, and olfactory imagery are all separate but related fields of cognitive psychology. (Richardson 1999) Such is the dominance of ocularity that in its broader usage "mental imagery" may be taken to mean the visual variety of mental imagery. The auditory kind of mental imagery is generally referred to as "auditory imagery" in this and other literature. (Reisberg 1992) It has been suggested that fifty percent of people experience mental imagery of some sort when listening to music. (Huron 1999) Recent PET and EEG scans of the human brain have confirmed the employment of neural areas usually associated with vision in the exercise of musical tasks. (Nakamura et al. 1999) (Platel et al. 1997) Psychologists regard music as being a powerful source of mental imagery. (Quittner 1980) The use of mental imagery in music therapy techniques such as the Bonny Method of Guided Imagery (Goldberg 1995) has created a body of research describing the features of such imagery. (Cho 2002) (Lem 1998) Some recurring themes in mental imagery in response to musical stimuli include "'Nature scenes' i.e. sun, sky, ocean, plant and animal, etc., 'Journey', 'I' and 'Emotion' i.e. happiness, sadness, depression, etc." (Cho 2001)

2.3.3 Synesthesia

While many experience mental imagery as a result of musical audition, only one in twenty-five thousand people experience synesthetic perception, in which coloured forms appear involuntarily in the field of vision in response to sonic stimuli. (Cytowic 1992) Auditory-visual synesthetic imagery may be described as the superimposition of geometric figures known as "photisms" over the normal field of vision. While being largely flat and figurative, synesthetic photisms do have a sense of extruded depth. Following a query by the author regarding dimensional features in synesthetic photisms, a synesthete named Sarah provided the following description of her synesthetic perception: "I have sound>colour synesthesia; to put it simply I see colours, and sometimes patterns, when hearing sound. To me these colours and patterns seem two-dimensional. When I hear words and simple sounds the colours are also simple; flat or graduating colours depending on what I'm hearing. When the sounds have some sort of rhythm, as with music or even poetry, these colours form moving patterns… To me it looks like a sort of filter or overlay. I'm not even sure sometimes if the colours are projected into space at all, or if I'm just seeing them "in my mind's eye" so to speak. I still see these colours with my eyes closed, for instance. In fact I see them more strongly with my eyes closed." A synesthete named Lisa then offered a description of her perception: "My sound->sight synesthesia is sometimes projected as flat, like Sarah's, but mine can just as easily be 3d with depth. For instance, I was riding in a friend's van the other day, and the side door wasn't all the way shut. It kept making this noise that to me looked like slightly asymmetrical orangey-yellow cylinders coming from the top left of my vision to somewhat near me on my bottom right. (It drove me nuts for almost an hour, 'til I was able to re-close it.)" There is a notable difference between these descriptions of synesthetic perception and the three-dimensional landscapes and journeys common in non-pathological mental imagery. (Cho 2001) For the development of spatially dynamic audio-visual works, other systems must be drawn on to complement any understanding of cross-modal exchange based on synesthetic perception.

2.3.4 A Systematic Approach

The author has devised a systematic approach to the visual representation of aural attributes which may assist others in developing such works. The first stage in this process involves reducing imagery to four main levels. This reduction involves an illustrative consideration of the neural dispersion of cognitive activity responsible for each of: synesthesia, spatial cognition, associative mental imagery, and causal analysis. The reduction of auditory-visual imagery using this schema permits the further analysis of each of the four main levels using more specialised modal translation systems.

Level - Location - Features
Level One - Sub-Cortical - Figurative
Level Two - Parietal - Spatial
Level Three - Right Frontal - Associative
Level Four - Left Frontal - Causal

Table 2.1 - The author's four part reduction of auditory-visual imagery. (It should be noted that the neural locations are very generalised.)

2.3.5 Level One - Synesthesia

The first level of the four part scheme in Table 2.1 describes relationships between sound and visual imagery purported to take place at a sub-cortical level. Many neurologists involved in research into Synesthesia believe that this is the area of the brain responsible for Synesthetic perception. (Cytowic 1989) (Baron-Cohen and Harrison 1996) Research into Synesthesia has suggested constant relationships between certain aural qualities and resultant visual percepts. (Marks 1978) In many of the exchanges between the modalities of audition and vision in synesthetic perception, intensity plays a significant role. (Marks 1978) In the context of aural perception, intensity is a product of both high frequency spectra and loudness. In visual perception intensity is a function of brightness. The relationship of aural intensity to visual intensity experienced by most synesthetes is similar to that of non-synesthetes. (Marks 1978) Some cross-modal relationships are set out below in Table 2.2:

Aural Feature - Visual Counterpart
High pitch - Small (bright) photism
Low pitch - Large (dull) photism
High loudness - Bright photism
Low loudness - Dull photism
"Coarse" sonic texture - Rough/sawtooth shapes
"Smooth" sonic texture - Smooth flowing shapes

Table 2.2 Mapping aural features to visual figurative features in synesthetic photisms.
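A minimal sketch of how correspondences of this kind might drive visual parameters follows; the mapping and its constants are my own illustrative choices, loosely based on Table 2.2 rather than on any implementation described in this thesis.

def photism_parameters(pitch_hz, loudness, roughness):
    # Map simple aural features to figurative visual features.
    # pitch_hz: fundamental or spectral centroid in Hz;
    # loudness and roughness: normalised 0.0 - 1.0.
    # Loosely follows Table 2.2: high pitch -> small/bright, loud -> bright,
    # coarse texture -> jagged edges.
    size = max(0.1, 2000.0 / max(pitch_hz, 20.0))      # higher pitch -> smaller photism
    brightness = min(1.0, 0.5 * loudness + 0.5 * min(pitch_hz / 4000.0, 1.0))
    edge_sharpness = roughness                          # coarse texture -> saw-tooth edges
    return size, brightness, edge_sharpness

print(photism_parameters(pitch_hz=2000.0, loudness=0.8, roughness=0.2))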

While it is not explicitly described in any cross-modal systems, intensity due to loudness can also be associated with the size of an object. This is perhaps related to the way that objects become larger and louder as they come closer to an observer. It may be noted that a direct relationship between loudness and size would at times conflict with the idea that objects with primarily high frequency spectra are small in visible size. There is also a potential for conflict in situations involving loud sounds with a low pitch - they need to be both bright and dull at the same time. The same applies to quiet, high frequency sounds. Besides these conflicts of attribution, the relationship between aural texture and visual texture would seem to be intuitive. Sounds with saw-tooth like amplitude envelopes precipitate saw-tooth shaped photisms, and sounds with smooth envelopes produce smooth photisms. Although not described in reference to synesthetic photisms, this sonic-visual topological equivalence is also known as "physiognomic perception". (Davies 1978) Colour is considered to be a highly subjective and widely varying visual attribute of sound by both synesthetes and non-synesthetes.

2.3.6 Level Two – Spatial Aspects

In a positional context, the relationship employed between space and music is the uniform co-existence of sonic and visual objects. While there is a large amount of visual bias in situations when discrepancies between visual and auditory spatial locations occur, (Welch and Warren 1980) the careful correlation of cues from each modality may serve to reinforce the perception of intended abstract relationships between auditory and visual objects. In a design context, however, the relationship between space and sound can become more complex. In classical literature, architecture is often described as having a relationship to music in the use of proportion. (Yi 1991) (Bragdon 1939) (Ong 1994) (Treib 1996) Comparisons based on the broader principle of gestalt isomorphism are also useful in formulating relationships between music and architecture. (Wertheimer 1938) (Lyons 2000) A dynamic variant of isomorphism may however be more useful in temporal works.


Such an idea has been expounded by Candace Brower (1998), in which Density 21.5 by Edgard Varese is described in terms of pathways, containment and blockage. This idea of containment and release was a central visual design feature in the author's 2000 animation "Loucid", and then again in Heisenberg. In some contexts the relationship between human gesture and music (Battey 1998) can be more useful in describing spatial-auditory relationships. This is for three reasons. The first is the powerful existential relationship between kinesthetic, auditory and visual cognition. (Priest 1998) The second is related to the fact that human gesture is at once spatial and temporal, whereas architecture is spatial but frozen in time. The third is the ability of human gesture to be performed within a confined visual field - such as that afforded by a visual display.

2.3.7 Level Three – Associative Imagery

The relationship of mental imagery to sound stimuli is highly subjective in nature. Generally the cognition of such imagery involves exchanges between encoded properties and associative memory, as described in Kosslyn's (1994) protomodel of visual perception. For this reason such imagery is delimited by one's visual experiences and mediated by the cognition of similarity. (Sloman 1999) Level three concerns mental imagery created by a low-level, primary-process style of association. (Dailey 1995) It is imagery that is associated in a fuzzy way with general auditory features during passive listening. Associative imagery as described here is that utilised in the Bonny Method of Guided Imagery. (Goldberg 1995) Level three imagery is usually derived from the overall gestalt of a sonic passage. No effort is made to isolate individual sounds or to determine their sources. The associated question in this stage is: "What does that sound remind me of?"

2.3.8 Level Four - Causal Attribution

This final level describes an analytical approach to attributing a sound source of initially unknown origin to an object or event. As opposed to the previous stage - which is passive, involuntary and not consciously directed - this stage is active, conscious, constructive and analytic.


It depends as much on memory as the associative imagery described above; however, the way in which memory is accessed is more directed. In this stage each individual sound source is addressed in series, and the question asked: "What physical object could be making that sound?" The overall nature of the scene is then constructed from the composite of each individual object.

2.3.9 Resolving the Four Levels

It may be apparent at this point that each level of the four part system would suggest different types of imagery. In the author's work different levels of the system are given priority in each scene. In some scenes - such as the green room and the last cloudfall scene in Heisenberg - associative imagery is the dominant guide to visual design. In other scenes the causal approach is taken. Generally, once this decision has been made, the detail of a scene is developed using the spatial and figurative levels. Imagery suggested by these two low levels is only implemented where it doesn't conflict with any imagery pre-determined by the causal or associative levels.

2.4 Problems Related to Effective Design

2.4.1 Limiting the Scope

Design aesthetics will only be discussed briefly, in reference to aspects of the author's own approach to design where it pertains to the sort of works and problems being discussed here.

2.4.2 Designing Sound Spatialisation

When animating sound sources, the dynamic and spectral qualities are dramatically modified by the proximity of a sound source to the camera/microphone position. The application of compression, dynamic loudness attenuation, or dynamic spectral filtering may compromise the illusion of moving sound sources. The relative loudness of sounds in a piece and certain rhythmic effects are dependent on sound source proximity. Doppler pitch shifts create a situation where a sound's pitch content is bent upward while approaching the camera and bent downward whilst travelling away.


The idea that this might be useful in any melodic sense is defeated by the fact that a sound can only move towards the camera for a certain amount of time before it collides with it. Similarly, a sound can only travel away from the camera for a certain amount of time until it is inaudible.
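For reference, the Doppler relationship being described is the standard one for a moving source and a stationary listener; the sketch below (my own, not Pan's code) gives the per-frame pitch ratio, with an assumed speed of sound.

SPEED_OF_SOUND = 344.0  # metres per second

def doppler_ratio(radial_velocity):
    # Pitch ratio heard at a stationary camera/microphone.
    # radial_velocity: rate of change of source-to-camera distance in m/s
    # (negative while approaching). Ratios above 1.0 mean the pitch is bent upward.
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity)

print(round(doppler_ratio(-20.0), 3))  # approaching at 20 m/s -> about 1.062
print(round(doppler_ratio(+20.0), 3))  # receding at 20 m/s  -> about 0.945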

2.4.3 Timbral Composition

In the development of sonic material for Heisenberg, primary sonic constituents were initially chosen on the basis of desirable timbral properties. In combining sounds the author's concerns are divided between various approaches. In a more analytic approach, consideration is given to the spectro-morphology of initially selected sounds. (Smalley 1986, 1997) An advantage of the elemental system of categorisation (Stewart 1987) is that it can be applied equally well to both sonic and visual objects. Throughout the author's animated works there are literal implementations of this system. Fire, water, clouds and landscapes are all visual representational components in various pieces. Also, elemental systems have the rare attribute of near universality in category systems across all human cultures. With elemental and spectro-morphological considerations in mind, sounds of differing natures are combined to produce balance, juxtaposition and other structural features where desired. By necessity this must be done with a mind to spatial trajectories and visual composition. In Heisenberg at least, fractal noise and other random procedures are then used to make decisions and generate detail at the microcosmic level.

2.4.4 Visual Concerns

A primary visual concern for the author is the composition of objects within the frame of the screen. In most scenes in Heisenberg, for example, attempts are made to ensure that there is a balanced distribution of visible objects within the screen area. At no time, for instance, is there nothing to be seen on-screen, and in most cases at least one instance of the majority of auditory objects can be seen. Ideally the composition of objects will be balanced at all times.


The choice of hue for a scene is generally developed using the elemental or associative cross-modal systems described above. In Heisenberg, the use of high levels of saturation and the tendency towards chromatic homogeneity within a section are based purely on aesthetic choices.

3 Complex Problems

3.1 Overview

With the concerns and problems native to each of the four separate design criteria established, it is now possible to consider conflicts arising from certain combinatorial permutations of these criteria.

3.2 Occlusion

3.2.1 Occlusion

The most significant complex problem for the author in the creation of works such as Heisenberg is that of visual occlusion. In visual perception, when opaque objects are superimposed, the foreground object always occludes those behind it. The closer an object is to the viewer, the larger its perceived size and its capacity to conceal objects behind it. In a scene in which objects are moving and changing size and shape, occlusion problems arise constantly. Some examples are discussed below.

3.2.2 Occlusion and Spatial Arrangement

Occlusion sets up limitations on the way in which objects can be arranged spatially. The position and size of an object must be taken into account when planning the animation of a scene. When combined with a concern for cross-modal mapping and physiognomic perceptual styles, a tendency arises for scenes to be full of layers of objects at increasing depth. In Schwarzchild, Loucid and Heisenberg, large dull objects – which resemble wall planes, ground planes or entire rooms – are used to represent low frequency sounds. As sounds become higher in frequency they tend to become smaller and move into the foreground.


Objects within the room must be animated spatially in such a way that they do not pass through the geometry associated with the low frequency sound object.

3.2.3 Occlusion and Audio-Visual Proximity

Occlusion problems affect the proximity of sonic objects to the camera during spatial animation. In the visual domain, if an object is too close to the camera it can not only conceal the rest of the scene, it can also intersect or overlap the position of the camera. In 3D animation, a geometry-camera intersection will create unwanted artefacts. In the auditory domain, proximity also creates loudness levels which may distort audio signals and even damage audio amplification equipment. To avoid such effects of proximity it is necessary to hand-animate objects carefully so that no camera collisions occur. In systems that are animated using algorithmic techniques, this can be achieved by converting Cartesian position coordinates to polar coordinates and setting a minimum distance limit between objects and the camera location. To avoid sudden collision-type motion when the minimum distance is reached, a Gaussian envelope filter can be applied to the distance data to smooth out sudden changes in motion.
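A sketch of the approach just described (illustrative Python rather than the author's actual channel setup) clamps the radial component of a camera-relative polar representation, and smooths a per-frame distance channel with a Gaussian kernel:

import math

def clamp_distance(positions, camera, min_distance=1.0):
    # Convert Cartesian positions to camera-relative polar form, clamp the
    # radius so the object never reaches the camera, and convert back.
    clamped = []
    for x, y in positions:
        dx, dy = x - camera[0], y - camera[1]
        r = max(math.hypot(dx, dy), min_distance)
        theta = math.atan2(dy, dx)
        clamped.append((camera[0] + r * math.cos(theta),
                        camera[1] + r * math.sin(theta)))
    return clamped

def gaussian_smooth(values, width=2.0, radius=5):
    # Smooth a per-frame distance channel with a Gaussian kernel so the clamp
    # does not introduce sudden collision-like changes in motion.
    kernel = [math.exp(-0.5 * (i / width) ** 2) for i in range(-radius, radius + 1)]
    total = sum(kernel)
    padded = [values[0]] * radius + list(values) + [values[-1]] * radius
    return [sum(k * padded[i + j] for j, k in enumerate(kernel)) / total
            for i in range(len(values))]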

3.3 Other Criteria Conflicts

3.3.1 Aural Immersion - Field of View Conflicts

One conflict between the four design criteria in Table 1.1 results from the aforementioned desire to frame, within a screen of limited size, an object associated with every sound audible within that scene. This of course creates a conflict of interest when one wishes to immerse the audience in sound and yet have the sound sources visible at the same time. In Heisenberg, this is solved by arranging multiple instances of certain objects around the camera. In this way the audience is immersed in sound, and at least one instance of each sound object is visible whichever way the camera points. The audience should even be able to infer the appearance of invisible instances from one visible instance of that object type. In Heisenberg, objects which exist in only one instance tend to be spatialised so that for at least fifty percent of the time they are clearly visible in the field of vision. Instances are usually different waveforms of a similar type, or a de-correlated and delayed version of the original instance. (Kendall 1995)
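One common way to produce such a de-correlated copy is random-phase, flat-magnitude FIR filtering; the sketch below is in the spirit of that technique rather than a transcription of Kendall's method or of the author's implementation.

import numpy as np

def decorrelate(signal, fir_length=512, seed=0):
    # Filter `signal` with a random-phase, flat-magnitude FIR, which preserves
    # the spectrum but breaks the waveform correlation between original and copy.
    rng = np.random.default_rng(seed)
    half = fir_length // 2
    phases = np.exp(1j * rng.uniform(-np.pi, np.pi, half - 1))
    spectrum = np.concatenate(([1.0], phases, [1.0], np.conj(phases[::-1])))
    fir = np.real(np.fft.ifft(spectrum))
    return np.convolve(signal, fir, mode="same")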

3.4 Feedback Loops in Audio-Visual Design

3.4.1 Overview

Problems like occlusion, which are due to simple conflicts of interest, can usually be solved with a little compromise. Difficulties arise when the manipulation of an attribute in either the visual or aural domain adversely affects related attributes in the other domain. Problems involving multiple conflicts of interest, compounded by a potential for feedback between spatial, auditory and visual attributes of a scene, can be a little trickier to conceptualise solutions for. Their solution usually involves tweaking spatial audio-visual relationships until the problematic system settles down and becomes acceptable in both audio and visual design domains.

3.4.2 Integrated Spatial Audio-Visual Composition

When spatial simultaneity and cross-modal exchanges are a creative concern, feedback between sonic, visual and spatial elements has the potential to enter infinite feedback loops. When adding a new sound to a sonic passage, the new sound may suggest a certain object, and while the new sound may blend nicely in amongst pre-existing audio material, the new object may not. Finding a spatial location for that object where occlusion isn't a problem will then become an issue. If the object is moving, altering its position will affect its Doppler shifts and other dynamic audio information. This in turn may affect the way that sound would look in the first place. Changing the object's look may fix this, so long as animation and other data doesn't need to be changed with the new visual features. Still, with the object added and altered thus, the overall sonic dynamic of the entire passage may have altered. Other objects in the scene may need to be altered and added to represent these changes faithfully.


The entire visual strata of the scene may in fact need to be re-designed. Besides any re-animation this may require, this may also in turn affect the perception of the new object relative to the new nature of the scene. The altered object type may not now be appropriate. This kind of thing can go on indefinitely, and the mental pre-visualisation of a solution which will permit the system to work on all levels will save a lot of time in trial and error. In the author's experience pre-visualised scenes are also generally the most satisfactory visually anyway.

3.4.3 Low Frequency and Reverberation

As described in section 3.2.2, low frequency sounds are often mapped into ground planes, wall planes or entire rooms. The animation of low frequency objects therefore has the potential to feed back into the reverberation in the scene. In turn, reverberation has the capacity to feed back into general visual qualities within the scene – especially if wall reflection Doppler shifting is enabled.

4 Conclusion – Problems in Music Education

The initial problem for many attempting to develop interdisciplinary works such as those described here might be the lack of a conceptual starting point. This may be due in part to the traditional absence of discussion about visual artefacts involved in music cognition in music scholarship and education. While the visual arts have a considerable tradition regarding the depiction of auditory phenomena visually, (Kandinsky 1912) visual-auditory relationships are not an integral part of traditional music scholarship. This may be related to the fact that mental imagery is experienced less often by trained musicians. (Huron 1999) This in turn has been suggested to be due to a shift in music cognition towards linguistic areas of the brain as linguistic abstractions of music are assimilated during traditional music education. (Crowder and Pitt 1992) Bio-musicologists have suggested that the emphasis in western music on reading notated music from left to right, as one would read a spoken language, has had a major influence on the cognition of music and the evolution of the art as a whole. (Wallin 1991) This is perhaps reflected in the popularity of linguistic and grammatical models of music, such as the generative theory of tonal music (GTTM) of Lerdahl and Jackendoff (1985). As Sir Thomas Beecham once quipped, "A musicologist is a man who can read music but can't hear it." While the system of Common Music Notation (CMN) underlying western music scholarship is still useful for musicians and composers working with traditional western music and instruments, it is not always applicable to the music of other cultures, or to many new types of electronic music. For some time musicologists have considered the possible shortcomings of describing music purely in terms derived from CMN. (Nattiez 1978) (Padham 1996) CMN and many models of music based on it are now increasingly regarded as outmoded and irrelevant by modern sonic artists working with new technology and a broad timbral palette. (Wishart 1986) More recently it has been shown that cognitive musicology relying on CMN fails to meet basic adequacy criteria. (Leman 1999) Many have argued that musical experience has ineffable qualities – that is, qualia that cannot be expressed in any other way. (Raffman 1993) There is in fact no a priori reason why the analysis and discussion of music should be limited to linguistic models based on CMN, when it has been shown repeatedly that a significant degree of musical experience does not reduce well to CMN or related linguistic models. Of the new music models developed in the 20th century, many make extensive use of visual analogies that do not involve CMN. (Mattis 1992) (Palombini 1992) (Smalley 1986, 1997) A visual approach to sonic art based on mental imagery is much more compatible with dominant dual coding theories of cognition, in which linguistic modes of cognition are complemented by mental imagery modes, and vice versa. (Paivio 1978) Music analysis based on such descriptive techniques might be described as Musicography. (Palombini 1992) (Lyons 1999)


The development of Musicography as a field within music research and education would do much to stimulate creativity in music culture. (Dailey 1994) While there can be no doubt that linguistic approaches to music discussion and education are valuable, there is increasingly no reason why they should not be complemented by analysis of the integral visual aspects of musical experience. Such a change to music scholarship would certainly do much to propagate musical concerns in future interdisciplinary artworks. The absence of such teaching will, however, continue to hamper the development of musicians seeking to work with new media and interdisciplinary art.

5 References

Baron-Cohen, S., and J. Harrison, eds. 1996. Synesthesia: Classic and Contemporary Readings. Oxford: Blackwells.
Battey, Bret. 1998. An Investigation into the Relationship between Language, Gesture and Music. http://staff.washington.edu/bbattey/Ideas/lang-gest-mus.html
Begault, Durand R. 1994. 3D Sound for Virtual Reality and Multimedia. Cambridge, MA: Academic Press.
Brower, Candace. 1997. "Pathway, Blockage, and Containment in 'Density 21.5'." Theory and Practice 22. New York: Columbia University Press.
Davies, John Booth. 1978. The Psychology of Music. Stanford University Press.
Bragdon, Claude. 1978 (c1939). The Beautiful Necessity - Architecture as Frozen Music. Wheaton, Ill.: Theosophical Pub. House.
Bregman, Albert S. 1990. Auditory Scene Analysis: The Perceptual Organisation of Sound. Cambridge, Mass.: Bradford Books, MIT Press.


Campbell, Joseph. 1957. Man and Time: Papers from the Eranos Yearbooks. New York: Princeton University Press.
Cho, Young-soo. 2001. "The Influence of Music on Individual Mental Imagery." Korean Journal of Music Therapy. Vol. 3, No. 1, pp. 31-49. http://www.ouimoi.com/mt/KAMT.htm

Cho, Young-soo. 2002. A Bibliography of UMI Dissertations Dealing with Mental Imagery. Referenced 3-5-2002. http://www.ouimoi.com/mt/UMI1.htm
Chowning, J. 1971. "The Simulation of Moving Sound Sources." Journal of the Audio Engineering Society. Vol. 19, No. 1.
Crowder, R.G.; Pitt, M.A. 1992. "Research on Memory/Imagery for Musical Timbre." In Reisberg, D. (Ed.) Auditory Imagery. Hillsdale, New Jersey: Lawrence Erlbaum.
Dodge, Charles; Jerse, Thomas A. 1997. Computer Music: Synthesis, Composition, and Performance. 2nd ed. New York: Schirmer Books; London: Prentice Hall International.
Cytowic, Richard E. 1989. Synesthesia: A Union of the Senses. New York: Springer Verlag.
Dailey, Audrey R. 1995. Creativity, Primary Process Thinking, Synesthesia, and Physiognomic Perception. Unpublished doctoral dissertation. University of Maine.
Goldberg, Frances Smith. 1995. "The Bonny Method of Guided Imagery." In Wigram, T.; Saperston, B.; West, R. (Eds.) The Art and Science of Music Therapy. Harwood Academic Publishers.


Harley, Maria Anna. 1994. Space and Spatialization in Contemporary Music: History, Ideas and Implementation. Montreal: McGill University. Unpublished doctoral dissertation.
Huron, D. 1999. "Music838 Exam Questions and Answers." Ohio State University. http://www.music-cog.ohiostate.edu/Music838/exam_questions_answers.html#Cognition

Kosslyn, S.M. 1994. Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press.
Leman, Marc. 1999. "Adequacy Criteria for Models of Musical Cognition." In J.N. Tabor (Ed.) Navigating New Musical Horizons. Westport, CT: Greenwood Publishing Company.
Lewis, J.W.; Beauchamp, M.S.; DeYoe, E.A. 2000. "A Comparison of Visual and Auditory Motion Processing in Human Cerebral Cortex." Cerebral Cortex Sep;10(9):873-88. http://www.mcw.edu/cellbio/bios/lewis.html
Lyons, Andrew D. 1999. A Course in Applied Musicography. Unpublished report. http://www.users.bigpond.com/tstex/Musicography.html

Lyons, Andrew D. 2000. Gestalt Approaches to the Gesamtkunstwerk. Unpublished paper. http://www.users.bigpond.com/tstex/gestalt.htm
Lyons, Andrew D. 2001. "Synaesthesia: A Cognitive Model of Cross Modal Association." Consciousness, Literature and the Arts. Spring 2001. http://www.users.bigpond.com/tstex/synaesthesia.htm

Marks, L. E. 1978. The Unity of the Senses: Interrelations among the Modalities. New York: Academic Press.


Marin, Servio Tulio. 1994. The Concept of the Visonual: Aural and Visual Associations in Twentieth Century Music Theatre. Unpublished doctoral dissertation. University of California San Diego.
Mattis, Olivia. 1992. Edgard Varese and the Visual Arts. Unpublished doctoral dissertation. Stanford University.
Moore, Richard F. 1990. Elements of Computer Music. Englewood Cliffs, N.J.: Prentice Hall.
Nattiez, Jean-Jacques. 1990. Music and Discourse: Toward a Semiology of Music. Translated from French by Carolyn Abbate. Princeton University Press.
Ong, Tze-Boon. 1994. Music as a Generative Process in Architectural Form and Space Composition. Unpublished doctoral dissertation. Rice University: Houston, Texas.
Priest, Stephen. 1998. Merleau-Ponty. London: Routledge.
Reisberg, D. (Ed.) 1992. Auditory Imagery. Hillsdale, New Jersey: Lawrence Erlbaum.
Richardson, J.T.E. 1999. Imagery. East Sussex, UK: Psychology Press Ltd.
Paivio, A. 1978. "The Relationship between Verbal and Perceptual Codes." In E.C. Carterette and M.P. Friedman (Eds.) Handbook of Perception: Vol. VIII. New York: Academic Press.
Palombini, Carlos. 1993. Pierre Schaeffer's Typo-Morphology of Sonic Objects. Unpublished doctoral dissertation. University of Durham.
Quittner, A.L. 1980. The Facilitative Effects of Music on Mental Imagery: A Multiple Measures Approach. Unpublished master's thesis. Florida State University, Tallahassee, Florida.


Roads, Curtis, et al. (eds.) 1997. Musical Signal Processing. Exton, PA: Swets & Zeitlinger.
Seashore, Carl E. 1967. Psychology of Music. New York: Dover Publications.
Shepard, Roger N.; and Cooper, Lynn A. 1982. Mental Images and Their Transformations. Cambridge, MA: MIT Press.
Sloman, Steven A. and Rips, Lance J. (eds.) 1999. Similarity and Symbols in Human Thinking. Cambridge, Mass.: MIT Press.
Smalley, Denis. 1986. "Spectro-morphology and Structuring Processes." In S. Emmerson, ed., The Language of Electroacoustic Music. London: Macmillan. pp. 61-93.
Smalley, Denis. 1997. "Spectromorphology: Explaining Sound-Shapes." Organised Sound, 2(2). Cambridge University Press.
Stein, Barry M. 1993. The Merging of the Senses. Cambridge, Mass.: MIT Press.
Stewart, R.J. 1987. Music and the Elemental Psyche. Wellingborough: Aquarian Press.
Summer, Lisa. 1985. "Imagery and Music." Journal of Mental Imagery. 9(4). New York: Brandon House.
Treib, Marc. 1996. Space Calculated in Seconds. Princeton, N.J.: Princeton University Press.
Tuchman, Maurice; Freeman, Judi and Blotkamp, Carel (eds.) 1986. The Spiritual in Art: Abstract Painting 1890-1985. New York: Abbeville Press.
Wertheimer, Max. 1938. "Laws of Organisation in Perceptual Forms." In Ellis, W. A Source Book of Gestalt Psychology. London: Routledge & Kegan Paul.


Wallin, Nils Lennart. 1991. Biomusicology: Neurophysiological, Neuropsychological and Evolutionary Perspectives on the Origins and Purposes of Music. Stuyvesant, NY: Pendragon Press.

Wishart, Trevor. 1985. On Sonic Art. York: Imagineering Press.

Yi, Dae-Am. 1991. Musical Analogy in Gothic and Renaissance Architecture. Unpublished Doctoral Dissertation. University of Sydney: Sydney, Australia.


Appendix H – Pan hscript

The Pan script below is a Houdini-native hscript which writes and executes a number of other hscripts and Perl scripts in order to interface data in a CHOP network in Houdini with Csound. The output is 3D spatialised sound for a 5.1 sound system. It should be noted that the reverb section in Pan is a modified version of one by Hans Mikelson, provided on the CD-ROMs that accompanied the Csound book (Boulanger, 2000).
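Before the listing itself, it may help to see in isolation the two distance cues that Pan automates for every moving object: amplitude attenuation on the order of 1/distance^scale, and a propagation delay of distance divided by the speed of sound, realised with the vdelay opcode. The short Csound instrument pair below is a minimal sketch of that idea only; it is not part of Pan or of the orchestras Pan generates. The file name "raw.wav", the use of f-table 2 for the distance curve, the fixed exponent of 2 and the speed of sound of 340 m/s are illustrative assumptions.

; Minimal sketch only - not the Pan-generated orchestra.
; Assumes the score loads a distance curve (metres, always > 0) into f-table 2
; and starts instr 1 and instr 2 together for the duration of the sound.
sr     = 44100
kr     = 441
ksmps  = 100
nchnls = 1

gasnd1 init 0

instr 1                                  ; read the (hypothetical) raw source into a global
  gasnd1 soundin "raw.wav"
endin

instr 2                                  ; apply the two distance cues to the source
  kindex line   0, p3, ftlen(2)          ; sweep through the distance table over the note
  kdist  table  kindex, 2                ; distance of the object from the listener
  adist  interp kdist
  atap   vdelay gasnd1, adist/340*1000, 5000   ; delay in ms = distance / speed of sound
  kamp   =      1/(kdist^2)              ; attenuation with a distance exponent of 2
  aout   =      atap * kamp
         out    aout
endin

Pan produces the same pair of operations for each object, but reads its distance and azimuth tables from channel data exported out of the Houdini CHOP network, and writes one such instrument per object and per reflection path.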

NB: To protect code formatting, the font size and page margins have had to be compressed.

#################################################
#
# Written by Andrew D. Lyons 2001.
#################################################
# INITIALISE VARIABLES
#################################################
setenv LENGTH=`(2^$EXP)/$AR`
setenv N_OBJECTS=`chopn("/ch/$NAME/pos_null")/3`
setenv FLENGTH=`rint((2^$EXP)/$AR*25)`
setenv KEXP=`chop("/ch/$NAME/kround_up/table_size")`
setenv MAXDEL=`chop("/ch/$NAME/max_delay/chan")`
set CS='"$CSOUND"'                          # set csound directory var
set snd="$NAME"                             # local var for dir and file names
set scale="$SCALE"                          # The sound level exponent for distance cues
set sndir="$HOME/project/sounds/$NAME"      # dir raw sounds to be written to
set fsndir="L:/Audio/Project/$snd"          # destination dir for final panned 5.1 sounds
set orcdir="$HOME/project/csound/$NAME"     # dir csound orc + sco files to go to
set chandir="$HOME/project/chans"           # dir doppler shift channels to go to
set cscript="$HOME/project/csound/scripts"  # csound scripts dir
set script="$HOME/project/scripts"          # hscript dir
set prog="$HOME/project/programs"           # perl and other programs
set hip="$HIPDIR/$HIPNAME"                  # The name and path of the current hip
set kr="$KR"                                # krate
set ar="$AR"                                # arate
set exp="$EXP"                              # exponent for a-length
set kexp="$KEXP"                            # exponent for k-length
set slength=[(2^$exp)/$ar]                  # sco length expression in secs - p3
set tlength=[2^$kexp]                       # ktable length in samples - f1
set alength=[2^$exp]                        # atable length in samples - f1
set otlength=(2^$kexp)                      # ktable length in samples for orc - f1
set tslength=[(2^$kexp)/$kr]                # ktable length in secs - f1
set olength=((2^$exp)/$AR)                  # orc length expression
set clength=\'(2^$exp)/$AR\'                # command line length expression
set n="$N_OBJECTS"                          # number of channels
set speakers="$SPEAKERS"                    # the number of speakers being panned to
set nchans=`$n*$speakers`                   # the number of control channels
set walls=6                                 # The number of walls to reflect sound off


set revchans=`$walls*$SPEAKERS`             # the number of amp chans for 6 early reflections
set refchans=`$walls*$N_OBJECTS`            # the number of delay channels for 6 early reflections
set revgain = `$GAIN*$REV_GAIN`             # Reverberation gain
set gain = $GAIN                            # Direct + Reflection gain
set nrev=`$walls*$speakers`                 # The number of reverberators needed.
set nazac=`$speakers*14+1`                  # The number of zak a-rate spaces needed.
set nkzac=`$n*3+1`                          # The number of zak k-rate spaces needed.
set cinst1=`4+$n+$speakers`                 # the number of Csound instruments in panner
set cinst2=`$refchans+$nrev+$speakers +3`   # the number of Csound instruments in panner
set maxdel=$MAXDEL                          # the largest size needed for a delay loop
set highgain=`$FILTSCALER*$HIGHGAIN+$HIGHADD`  # multiples for dist to create hpf filter freqs
set lowgain=`$FILTSCALER*$LOWGAIN+$LOWADD`     # multiples for dist to create lpf filter freqs
set ca=`1/$CA`                              # the inverse of the speed of sound
set rad=0.01745329252                       # Factor to convert degrees to radians
set ispeakers = `1/$speakers`               # Inverse of speaker number
set iwalls = `1/$walls`                     # Inverse of wall number
set in = `1/$n`                             # Inverse of object number
set b=`$cinst1+$refchans + $nrev`           # variables added with Houdini V5 for use in orc creation.
set c= `$cinst1+$refchans + $nrev + $speakers`  # variables added with Houdini V5 for use in orc creation.
set d=`$cinst1+$cinst2`                     # variables added with Houdini V5 for use in orc creation.

set y=1    # Csound sco table index number
set i=1    # Iterator for instrument-local loops
set f=1    # Iterator for "instr" numbers
set v=1    # Iterator for orc + sco file numbers

################################################# # WRITE SCRIPT TO SAVE RAW AUDIO ################################################# echo "setenv NPAN=$NPAN" > $script/raw1$snd.txt echo "set f=0" >> $script/raw1$snd.txt echo "set i=1" >> $script/raw1$snd.txt echo "" >> $script/raw1$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw1$snd.txt echo "opcook raw_audio" >> $script/raw1$snd.txt echo "while (\$f<$n)" >> $script/raw1$snd.txt echo " opparm adelete selnumbers ( \"\$f\" )" >> $script/raw1$snd.txt echo " opcook adelete" >> $script/raw1$snd.txt echo " opsave adelete \"$sndir/raw$snd\$i.wav\"" >> $script/raw1$snd.txt echo " echo \"saved raw$snd\$i.wav\"" >> $script/raw1$snd.txt echo " set f = \`\$f+1\`" >> $script/raw1$snd.txt echo " set i = \`\$i+1\`" >> $script/raw1$snd.txt echo "end" >> $script/raw1$snd.txt ################################################# # WRITE SCRIPT TO SAVE DIRECT DISTANCE CHANNELS ################################################# echo "set f=0" > $script/raw2$snd.txt echo "set i=1" >> $script/raw2$snd.txt echo "" >> $script/raw2$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw2$snd.txt echo "opcook distance" >> $script/raw2$snd.txt echo "while (\$f<$n)" >> $script/raw2$snd.txt echo " opparm ddelete selnumbers ( \"\$f\" )" >> $script/raw2$snd.txt echo " opcook ddelete" >> $script/raw2$snd.txt echo " opsave ddelete \"$chandir/dirdist$snd\$i.chan\"" >> $script/raw2$snd.txt echo " echo \"saved dirdist$snd\$i.chan\"" >> $script/raw2$snd.txt echo " set f = \`\$f+1\`" >> $script/raw2$snd.txt echo " set i = \`\$i+1\`" >> $script/raw2$snd.txt echo "end" >> $script/raw2$snd.txt ################################################# # WRITE SCRIPT TO SAVE DIRECT AZIMUTH CHANNELS ################################################# echo "set f=0" >> $script/raw2$snd.txt echo "set i=1" >> $script/raw2$snd.txt echo "" >> $script/raw2$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw2$snd.txt echo "opcook azimuth" >> $script/raw2$snd.txt echo "while (\$f<$n)" >> $script/raw2$snd.txt echo " opparm azidelete selnumbers ( \"\$f\" )" >> $script/raw2$snd.txt


echo " opcook azidelete" >> $script/raw2$snd.txt echo " opsave azidelete \"$chandir/dirazi$snd\$i.chan\"" >> $script/raw2$snd.txt echo " echo \"saved dirazi$snd\$i.chan\"" >> $script/raw2$snd.txt echo " set f = \`\$f+1\`" >> $script/raw2$snd.txt echo " set i = \`\$i+1\`" >> $script/raw2$snd.txt echo "end" >> $script/raw2$snd.txt ################################################# # WRITE SCRIPT TO SAVE SPEAKER REVERB AMP CHANNELS ################################################# echo "set f=0" >> $script/raw2$snd.txt echo "set i=1" >> $script/raw2$snd.txt echo "" >> $script/raw2$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw2$snd.txt echo "opcook rev_amps" >> $script/raw2$snd.txt echo "while (\$f<$walls)" >> $script/raw2$snd.txt echo " opparm rev_amp selnumbers ( \"\$f\" )" >> $script/raw2$snd.txt echo " opcook rev_amp" >> $script/raw2$snd.txt echo " opsave rev_amp \"$chandir/rev_amp$snd\$i.chan\"" >> $script/raw2$snd.txt echo " echo \"saved rev_amp$snd\$i.chan\"" >> $script/raw2$snd.txt echo " set f = \`\$f+1\`" >> $script/raw2$snd.txt echo " set i = \`\$i+1\`" >> $script/raw2$snd.txt echo "end" >> $script/raw2$snd.txt ################################################# # WRITE SCRIPT TO SAVE REVERB TIME CHANNEL ################################################# echo "set f=0" >> $script/raw2$snd.txt echo "set i=1" >> $script/raw2$snd.txt echo "" >> $script/raw2$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw2$snd.txt echo "opcook rev_length" >> $script/raw2$snd.txt echo "opsave rev_length \"$chandir/rev_length$snd.chan\"" >> $script/raw2$snd.txt echo "echo \"saved rev_length$snd.chan\"" >> $script/raw2$snd.txt ################################################# # WRITE SCRIPT TO SAVE REFLECTED DISTANCE CHANNELS ################################################# echo "set f=0" >> $script/raw2$snd.txt echo "set i=1" >> $script/raw2$snd.txt echo "" >> $script/raw2$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw2$snd.txt echo "opcook refdist" >> $script/raw2$snd.txt echo "while (\$f<$refchans)" >> $script/raw2$snd.txt echo " opparm refdist_del selnumbers ( \"\$f\" )" >> $script/raw2$snd.txt echo " opcook refdist_del" >> $script/raw2$snd.txt echo " opsave refdist_del \"$chandir/refdist$snd\$i.chan\"" >> $script/raw2$snd.txt echo " echo \"saved refdist$snd\$i.chan\"" >> $script/raw2$snd.txt echo " set f = \`\$f+1\`" >> $script/raw2$snd.txt echo " set i = \`\$i+1\`" >> $script/raw2$snd.txt echo "end" >> $script/raw2$snd.txt ################################################# # WRITE SCRIPT TO SAVE REFLECTED AZIMUTH CHANNELS ################################################# echo "set f=0" >> $script/raw2$snd.txt echo "set i=1" >> $script/raw2$snd.txt echo "" >> $script/raw2$snd.txt echo "opcf /ch/`$NAME`" >> $script/raw2$snd.txt echo "opcook azimuth1" >> $script/raw2$snd.txt echo "while (\$f<$refchans)" >> $script/raw2$snd.txt echo " opparm refazi_delete selnumbers ( \"\$f\" )" >> $script/raw2$snd.txt echo " opcook refazi_delete" >> $script/raw2$snd.txt echo " opsave refazi_delete \"$chandir/refazi$snd\$i.chan\"" >> $script/raw2$snd.txt echo " echo \"saved refazi$snd\$i.chan\"" >> $script/raw2$snd.txt echo " set f = \`\$f+1\`" >> $script/raw2$snd.txt echo " set i = \`\$i+1\`" >> $script/raw2$snd.txt echo "end" >> $script/raw2$snd.txt ################################################# # GENERATE PERL SCRIPT TO GENERATE PANNER ORC FILE


################################################# echo "" > $prog/pan$snd.perl ###################################### PERL SCRIPT VARIABLES echo "\$snd=\"$snd\";" >> $prog/pan$snd.perl echo "\$scale=\"$scale\";" >> $prog/pan$snd.perl echo "\$hip=\"$hip\";" >> $prog/pan$snd.perl echo "\$kr=$kr;" >> $prog/pan$snd.perl echo "\$ar=$ar;" >> $prog/pan$snd.perl echo "\$kexp="$KEXP";" >> $prog/pan$snd.perl echo "\$exp=$exp;" >> $prog/pan$snd.perl echo "\$slength=\"$slength\";" >> $prog/pan$snd.perl echo "\$tlength=\"$tlength\";" >> $prog/pan$snd.perl echo "\$otlength=\"$otlength\";" >> $prog/pan$snd.perl echo "\$tslength=\"$tslength\";" >> $prog/pan$snd.perl echo "\$olength=\"$olength\";" >> $prog/pan$snd.perl echo "\$clength=$clength;" >> $prog/pan$snd.perl echo "\$n=$n;" >> $prog/pan$snd.perl echo "\$speakers=$speakers;" >> $prog/pan$snd.perl echo "\$nchans=$nchans;" >> $prog/pan$snd.perl echo "\$cinst1=$cinst1;" >> $prog/pan$snd.perl echo "\$cinst2=$cinst2;" >> $prog/pan$snd.perl echo "\$refchans=$refchans;" >> $prog/pan$snd.perl echo "\$maxdel=$maxdel;" >> $prog/pan$snd.perl echo "\$ca=$ca;" >> $prog/pan$snd.perl echo "\$CS='\"\$CSOUND\"';" >> $prog/pan$snd.perl echo "\$sndir=\"$sndir\";" >> $prog/pan$snd.perl echo "\$fsndir=\"$fsndir\";" >> $prog/pan$snd.perl echo "\$orcdir=\"$orcdir\";" >> $prog/pan$snd.perl echo "\$chandir=\"$chandir\";" >> $prog/pan$snd.perl echo "\$cscript=\"$cscript\";" >> $prog/pan$snd.perl echo "\$script=\"$script\";" >> $prog/pan$snd.perl echo "\$prog=\"$prog\";" >> $prog/pan$snd.perl echo "\$nrev=\"$nrev\";" >> $prog/pan$snd.perl echo "\$walls =$walls ;" >> $prog/pan$snd.perl echo "\$nazac=\"$nazac\";" >> $prog/pan$snd.perl echo "\$nkzac=\"$nkzac\";" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "\$f=1;" >> $prog/pan$snd.perl echo "\$y=1;" >> $prog/pan$snd.perl echo "\$v=1;" >> $prog/pan$snd.perl echo "\$rad=$rad;" >> $prog/pan$snd.perl echo "\$revgain =$revgain ;" >> $prog/pan$snd.perl echo "\$gain =$gain ;" >> $prog/pan$snd.perl echo "\$iwalls =$iwalls ;" >> $prog/pan$snd.perl echo "\$ispeakers =$ispeakers ;" >> $prog/pan$snd.perl echo "\$in =$in ;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "########################################## DIRECTORY EXISTENCE TESTS" >> $prog/pan$snd.perl echo "unless ( -e \"L:/\" ) {" >> $prog/pan$snd.perl echo " die \"\t################## !! NETWORK CONNECTION TO AUDIO IS NOT OPEN !! ################## \n\";" >> $prog/pan$snd.perl echo " }" >> $prog/pan$snd.perl echo "unless ( -e \$sndir ) {" >> $prog/pan$snd.perl echo " mkdir( \$sndir );" >> $prog/pan$snd.perl echo " }" >> $prog/pan$snd.perl echo "unless ( -e \$fsndir ) {" >> $prog/pan$snd.perl echo " mkdir ( \$fsndir ); " >> $prog/pan$snd.perl echo " }" >> $prog/pan$snd.perl echo "unless ( -e \$orcdir ) {" >> $prog/pan$snd.perl echo " mkdir( \$orcdir );" >> $prog/pan$snd.perl echo " }" >> $prog/pan$snd.perl echo "########################################## HEADER" >> $prog/pan$snd.perl

echo "open(PAN,\">$orcdir/$snd.orc\");" >> $prog/pan$snd.perl echo "close(PAN);" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "open(PAN,\">>$orcdir/$snd.orc\");" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"sr =$AR\n\";" >> $prog/pan$snd.perl


echo "print PAN \"kr =$KR\n\";" >> $prog/pan$snd.perl echo "print PAN \"ksmps =`$AR/$KR`\n\";" >> $prog/pan$snd.perl echo "print PAN \"nchnls=1\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"zakinit

$nazac , $nkzac\n\";" >> $prog/pan$snd.perl

echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "while (\$i<=\$n) {" >> $prog/pan$snd.perl echo " print PAN \"gasnd\$i init 0\n\";" >> $prog/pan$snd.perl echo " ++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "while(\$i<=\$n) {" >> $prog/pan$snd.perl echo " print PAN \"gafilt\$i init 0\n\";" >> $prog/pan$snd.perl echo " ++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) {" >> $prog/pan$snd.perl echo " print PAN \"gadir\$i init 0\n\";" >> $prog/pan$snd.perl echo " ++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "while (\$i<=$refchans) {" >> $prog/pan$snd.perl echo " print PAN \"garef\$i init 0\n\";" >> $prog/pan$snd.perl echo " ++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo " print PAN \"gkrevlength

init 0\n\";" >> $prog/pan$snd.perl

echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "while (\$i<=$speakers) {" >> $prog/pan$snd.perl echo " print PAN \"gkrevamp\$i init 0\n\";" >> $prog/pan$snd.perl echo " ++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"gkindex init 0\n\";" >> $prog/pan$snd.perl #echo "print PAN \"gaindex init 0\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"gilowgain = $LOWGAIN\n\";" >> $prog/pan$snd.perl echo "print PAN \"gilowadd = $LOWADD\n\";" >> $prog/pan$snd.perl echo "print PAN \"gihighgain = $HIGHGAIN\n\";" >> $prog/pan$snd.perl echo "print PAN \"gihighadd = $HIGHADD\n\";" >> $prog/pan$snd.perl echo "print PAN \"gifiltscale= $FILTSCALER\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"gimaxdel= $maxdel\n\";" >> $prog/pan$snd.perl echo "++\$y;" >> $prog/pan$snd.perl


echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \";########################################## GLOBALS ########################################## \n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl

echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkindex line 2, p4, $otlength\n\";" >> $prog/pan$snd.perl #echo "print PAN \"\tkaindex line 2, p3, $olength\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tgkindex=kindex\n\";" >> $prog/pan$snd.perl #echo "print PAN \"\tgaindex=kaindex\n\";" >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "while (\$i<=\$n) {" >> $prog/pan$snd.perl echo ' print PAN " \taout$i soundin \"$sndir/raw$snd$i.wav\"\n";' >> $prog/pan$snd.perl echo " print PAN \" \tgasnd\$i = aout\$i\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl #echo "\$y=(\$y+\$n);" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tkreverb table gkindex, \$y\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tgkrevlength = kreverb\n\";" >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "++\$y;" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "while (\$i<=$speakers) {" >> $prog/pan$snd.perl echo " print PAN \" \tkreverb table gkindex, \$y\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tgkrevamp\$i = kreverb\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$y;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl

echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \";########################################## DIRECT ########################################## \n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "while (\$i<=\$n) {" >> $prog/pan$snd.perl echo " print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tkdist table gkindex, \$y\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tadist interp kdist\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tatap vdelay gasnd\$i, (adist * $ca * 1000 ), ( $maxdel * 5000 )\n\";" >> $prog/pan$snd.perl echo " print PAN \" \taout tone atap, (\$gain/(-(kdist^ \$scale ) * gifiltscale)* gilowgain + gilowadd)\n\";" >> $prog/pan$snd.perl echo " print PAN \" \taout atone aout, (\$gain/(kdist ^ \$scale ) * gifiltscale * gihighgain + gihighadd)\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tgafilt\$i = aout\n\";" >> $prog/pan$snd.perl


echo " print PAN \" \tkamp = (\$gain/(kdist^ \$scale ))\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tklamp = (1-1/sqrt(kdist^ \$scale))*\$revgain\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tkgamp = 1/sqrt(kdist^ \$scale)*\$revgain\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tzkw kamp, \$i\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tzkw klamp, (\$n+\$i)\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tzkw kgamp, (\$n*2+\$i)\n\";" >> $prog/pan$snd.perl echo " print PAN \"endin\n\";" >> $prog/pan$snd.perl echo " print PAN \"\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "++\$y;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl

echo "\$x=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "while(\$x<=\$speakers) {" >> $prog/pan$snd.perl echo "\$r=\$y;" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "while(\$i<=\$n) {" >> $prog/pan$snd.perl echo "print PAN \"\tkangle\$i table gkindex, \$r\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkamp\$i zkr \$i\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tklamp\$i zkr (\$n+\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkgamp\$i zkr (\$n*2+\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$r;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "if(\$x==1){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S2b>=kangle\$i)&&(kangle\$i>=$S1) ? cos(((kangle\$i-$S1)/($S2b-$S1))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S1>=kangle\$i)&&(kangle\$i>=$S5) ? sin(((kangle\$i-$S5)/($S1-$S5))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkout = (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (kout * kamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i interp (kout * klamp\$i+kgamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i = (arev\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==2){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S2b>=kangle\$i)&&(kangle\$i>=$S1) ? sin(((kangle\$i-$S1)/($S2b-$S1))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S3>=kangle\$i)&&(kangle\$i>=$S2) ? cos(((kangle\$i-$S2)/($S3-$S2))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkout = (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (kout * kamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i interp (kout * klamp\$i+kgamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i = (arev\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==3){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S3>=kangle\$i)&&(kangle\$i>=$S2) ? sin(((kangle\$i-$S2)/($S3-$S2))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S4>=kangle\$i)&&(kangle\$i>=$S3) ? cos(((kangle\$i-$S3)/($S4-$S3))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkout = (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (kout * kamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i interp (kout * klamp\$i+kgamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i = (arev\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==4){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S4>=kangle\$i)&&(kangle\$i>=$S3) ? sin(((kangle\$i-$S3)/($S4-$S3))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S5>=kangle\$i)&&(kangle\$i>=$S4) ? 
cos(((kangle\$i-$S4)/($S5-$S4))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkout = (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (kout * kamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i interp (kout * klamp\$i+kgamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i = (arev\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl


echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==5){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S5>=kangle\$i)&&(kangle\$i>=$S4) ? sin(((kangle\$i-$S4)/($S5-$S4))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S1>=kangle\$i)&&(kangle\$i>=$S5) ? cos(((kangle\$i-$S5)/($S1-$S5))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkout = (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (kout * kamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i interp (kout * klamp\$i+kgamp\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i = (arev\$i * gafilt\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "\$z=1;" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"\tarev=((\";" >> $prog/pan$snd.perl echo "print PAN \"arev\$z\";" >> $prog/pan$snd.perl echo "++\$z;" >> $prog/pan$snd.perl echo "while(\$z<=\$n) { " >> $prog/pan$snd.perl echo " print PAN \"+arev\$z\";" >> $prog/pan$snd.perl echo " ++\$z;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \") * $in)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tzawm arev, (\$speakers+\$x)\n\";" >> $prog/pan$snd.perl echo "\$z=1;" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"\tadir=((\";" >> $prog/pan$snd.perl echo "print PAN \"aout\$z\";" >> $prog/pan$snd.perl echo "++\$z;" >> $prog/pan$snd.perl echo "while(\$z<=\$n) { " >> $prog/pan$snd.perl echo " print PAN \"+aout\$z\";" >> $prog/pan$snd.perl echo " ++\$z;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \") * $in)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tgadir\$x = adir\n\";" >> $prog/pan$snd.perl echo " print PAN \"endin\n\";" >> $prog/pan$snd.perl echo " print PAN \"\n\";" >> $prog/pan$snd.perl echo "++\$x;" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "\$y=\$r;" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \";########################################## REFLECTIONS ########################################## \n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "while (\$i<=\$refchans) {" >> $prog/pan$snd.perl echo "\$x=1;" >> $prog/pan$snd.perl echo "while (\$x<=\$n) {" >> $prog/pan$snd.perl echo " print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tkdist table gkindex, \$y\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tadist interp kdist\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tatap vdelay gasnd\$x, (adist * $ca * 1000 ), ( $maxdel * 5000 )\n\";" >> $prog/pan$snd.perl echo " print PAN \" \taout tone atap, (\$gain/(-(kdist^ \$scale) * 0.9 * gifiltscale) * gilowgain + gilowadd)\n\";" >> $prog/pan$snd.perl echo " print PAN \" \taout atone aout, (\$gain/(kdist^ \$scale) * 0.9 * gifiltscale * gihighgain + gihighadd)\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tamp interp (\$gain/(kdist^ \$scale))\n\";" >> $prog/pan$snd.perl echo " print PAN \" \tgaref\$i = ( amp * aout )\n\";" >> $prog/pan$snd.perl echo " print PAN \"endin\n\";" >> $prog/pan$snd.perl echo " print PAN \"\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> 
$prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "++\$y;" >> $prog/pan$snd.perl echo "++\$x;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl


echo "" >> $prog/pan$snd.perl echo "\$x=1;" >> $prog/pan$snd.perl echo "\$g=0;" >> $prog/pan$snd.perl echo "while(\$x<=\$speakers) {" >> $prog/pan$snd.perl echo "\$c=1;" >> $prog/pan$snd.perl echo "\$r=\$y;" >> $prog/pan$snd.perl echo "\$w=1;" >> $prog/pan$snd.perl echo "while(\$w<=$walls) {" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "while(\$i<=\$n) {" >> $prog/pan$snd.perl echo "print PAN \"\tkangle\$i table gkindex, \$r\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$r;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "if(\$x==1){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S2b>=kangle\$i)&&(kangle\$i>=$S1) ? cos(((kangle\$i-$S1)/($S2b-$S1))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S1>=kangle\$i)&&(kangle\$i>=$S5) ? sin(((kangle\$i-$S5)/($S1-$S5))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * garef\$c)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$c;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==2){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S2b>=kangle\$i)&&(kangle\$i>=$S1) ? sin(((kangle\$i-$S1)/($S2b-$S1))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S3>=kangle\$i)&&(kangle\$i>=$S2) ? cos(((kangle\$i-$S2)/($S3-$S2))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * garef\$c)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$c;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==3){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S3>=kangle\$i)&&(kangle\$i>=$S2) ? sin(((kangle\$i-$S2)/($S3-$S2))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S4>=kangle\$i)&&(kangle\$i>=$S3) ? cos(((kangle\$i-$S3)/($S4-$S3))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * garef\$c)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$c;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==4){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S4>=kangle\$i)&&(kangle\$i>=$S3) ? sin(((kangle\$i-$S3)/($S4-$S3))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S5>=kangle\$i)&&(kangle\$i>=$S4) ? cos(((kangle\$i-$S4)/($S5-$S4))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * garef\$c)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$c;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "} elsif (\$x==5){ " >> $prog/pan$snd.perl echo "while(\$i<=\$n) { " >> $prog/pan$snd.perl echo "print PAN \"\tksnda = (($S5>=kangle\$i)&&(kangle\$i>=$S4) ? sin(((kangle\$i-$S4)/($S5-$S4))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tksndb = (($S1>=kangle\$i)&&(kangle\$i>=$S5) ? 
cos(((kangle\$i-$S5)/($S1-$S5))*90*$rad) : 0)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i interp (ksnda + ksndb)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taout\$i = (aout\$i * garef\$c)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "++\$c;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "\$z=1;" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"\taref=((\";" >> $prog/pan$snd.perl echo "print PAN \"aout\$z\";" >> $prog/pan$snd.perl echo "++\$z;" >> $prog/pan$snd.perl echo "while(\$z<=\$n) { " >> $prog/pan$snd.perl echo " print PAN \"+aout\$z\";" >> $prog/pan$snd.perl echo " ++\$z;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \") * $in)\n\";" >> $prog/pan$snd.perl


echo "print PAN \"\tzawm aref*\$iwalls, \$x\n\";" >> $prog/pan$snd.perl echo " print PAN \"endin\n\";" >> $prog/pan$snd.perl echo " print PAN \"\n\";" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "++\$w;" >> $prog/pan$snd.perl echo "++\$g;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "++\$x;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "\$y=\$r;" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \";########################################## REVERB ########################################## \n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) {" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tiamp = p4\n\";" >> $prog/pan$snd.perl echo "print PAN \"\timin = p5\n\";" >> $prog/pan$snd.perl echo "print PAN \"\timax = p6\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tirmax = p7\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tifiltpre = p8\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tifilt = p9\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tidens = p10\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tiinch = p11\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tioutch = p12\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tibool = p13\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tiq = sqrt(.5)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkvol = gkrevamp\$i\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tktime = gkrevlength\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkbool = ((ktime<(imin-10)) ? 0 : 1 )\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkamp = ktime-imin\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkamp limit kamp, 0, irmax-imin\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tkamp = iamp * kvol * (kamp / (irmax-imin))\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tamp interp kamp\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tif

kbool == 0 kgoto null\n\";" >> $prog/pan$snd.perl

echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev zar ( \$speakers + (iinch * \$speakers) + \$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "print PAN \"\ta1 echo "print PAN \"\ta2 echo "print PAN \"\ta3 echo "print PAN \"\ta4 echo "print PAN \"\ta5 echo "print PAN \"\ta6 echo "print PAN \"\titim1 echo "print PAN \"\titim2 echo "print PAN \"\titim3 echo "print PAN \"\titim4 echo "print PAN \"\titim5 echo "print PAN \"\titim6 echo "print PAN \"\tat1 echo "print PAN \"\tat2 echo "print PAN \"\tat3 echo "print PAN \"\tat4 echo "print PAN \"\tat5 echo "print PAN \"\tat6 echo "print PAN \"\tatim1 echo "print PAN \"\tatim2 echo "print PAN \"\tatim3 echo "print PAN \"\tatim4 echo "print PAN \"\tatim5 echo "print PAN \"\tatim6

init init init init init init = = = = = = oscil oscil oscil oscil oscil oscil = = = = = =

0\n\";" >> $prog/pan$snd.perl 0\n\";" >> $prog/pan$snd.perl 0\n\";" >> $prog/pan$snd.perl 0\n\";" >> $prog/pan$snd.perl 0\n\";" >> $prog/pan$snd.perl 0\n\";" >> $prog/pan$snd.perl ( rnd(1000)*0.001 * imax *1.75)\n\";" >> $prog/pan$snd.perl ( rnd(1000)*0.001 * imax *1.75)\n\";" >> $prog/pan$snd.perl ( rnd(1000)*0.001 * imax *1.75)\n\";" >> $prog/pan$snd.perl ( rnd(1000)*0.001 * imax *1.75)\n\";" >> $prog/pan$snd.perl ( rnd(1000)*0.001 * imax *1.75)\n\";" >> $prog/pan$snd.perl ( rnd(1000)*0.001 * imax *1.75)\n\";" >> $prog/pan$snd.perl itim1*.05, .50, 1, .2\n\";" >> $prog/pan$snd.perl itim2*.05, .56, 1, .4\n\";" >> $prog/pan$snd.perl itim3*.05, .54, 1, .6\n\";" >> $prog/pan$snd.perl itim4*.05, .51, 1, .7\n\";" >> $prog/pan$snd.perl itim5*.05, .53, 1, .9\n\";" >> $prog/pan$snd.perl itim6*.05, .55, 1 \n\";" >> $prog/pan$snd.perl itim1+at1+5\n\";" >> $prog/pan$snd.perl itim2+at2+5\n\";" >> $prog/pan$snd.perl itim3+at3+5\n\";" >> $prog/pan$snd.perl itim4+at4+5\n\";" >> $prog/pan$snd.perl itim5+at5+5\n\";" >> $prog/pan$snd.perl itim6+at6+5\n\";" >> $prog/pan$snd.perl


echo "print PAN \"\tig11 echo "print PAN \"\tig12 echo "print PAN \"\tig13 echo "print PAN \"\tig14 echo "print PAN \"\tig15 echo "print PAN \"\tig16

= = = = = =

.55*idens\n\";" >> $prog/pan$snd.perl .43*idens\n\";" >> $prog/pan$snd.perl -.41*idens\n\";" >> $prog/pan$snd.perl -.39*idens\n\";" >> $prog/pan$snd.perl .35*idens \n\";" >> $prog/pan$snd.perl -.33*idens \n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tig21 echo "print PAN \"\tig22 echo "print PAN \"\tig23 echo "print PAN \"\tig24 echo "print PAN \"\tig25 echo "print PAN \"\tig26

= = = = = =

-.42*idens \n\";" >> $prog/pan$snd.perl -.45*idens \n\";" >> $prog/pan$snd.perl .46*idens \n\";" >> $prog/pan$snd.perl .36*idens \n\";" >> $prog/pan$snd.perl -.33*idens \n\";" >> $prog/pan$snd.perl .34*idens \n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tig31 echo "print PAN \"\tig32 echo "print PAN \"\tig33 echo "print PAN \"\tig34 echo "print PAN \"\tig35 echo "print PAN \"\tig36

= = = = = =

.41*idens \n\";" >> $prog/pan$snd.perl .45*idens \n\";" >> $prog/pan$snd.perl .47*idens \n\";" >> $prog/pan$snd.perl -.42*idens \n\";" >> $prog/pan$snd.perl -.40*idens \n\";" >> $prog/pan$snd.perl -.39*idens \n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tig41 echo "print PAN \"\tig42 echo "print PAN \"\tig43 echo "print PAN \"\tig44 echo "print PAN \"\tig45 echo "print PAN \"\tig46

= = = = = =

-.43*idens \n\";" >> $prog/pan$snd.perl -.42*idens \n\";" >> $prog/pan$snd.perl -.45*idens \n\";" >> $prog/pan$snd.perl .47*idens \n\";" >> $prog/pan$snd.perl .43*idens \n\";" >> $prog/pan$snd.perl .41*idens \n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tig51 echo "print PAN \"\tig52 echo "print PAN \"\tig53 echo "print PAN \"\tig54 echo "print PAN \"\tig55 echo "print PAN \"\tig56

= = = = = =

-.43*idens \n\";" >> $prog/pan$snd.perl .42*idens \n\";" >> $prog/pan$snd.perl -.45*idens \n\";" >> $prog/pan$snd.perl .47*idens \n\";" >> $prog/pan$snd.perl -.43*idens \n\";" >> $prog/pan$snd.perl .41*idens \n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tig61 echo "print PAN \"\tig62 echo "print PAN \"\tig63 echo "print PAN \"\tig64 echo "print PAN \"\tig65 echo "print PAN \"\tig66

= = = = = =

.43*idens \n\";" >> $prog/pan$snd.perl -.42*idens \n\";" >> $prog/pan$snd.perl .45*idens \n\";" >> $prog/pan$snd.perl -.47*idens \n\";" >> $prog/pan$snd.perl .43*idens \n\";" >> $prog/pan$snd.perl -.41*idens \n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tarev echo "print PAN \"\taa1 echo "print PAN \"\taa2 echo "print PAN \"\taa3 echo "print PAN \"\taa4 echo "print PAN \"\taa5 echo "print PAN \"\taa6 echo "print PAN \"\ta1 echo "print PAN \"\ta2 echo "print PAN \"\ta3 echo "print PAN \"\ta4 echo "print PAN \"\ta5 echo "print PAN \"\ta6

tone

arev,

ifiltpre \n\";" >> $prog/pan$snd.perl

vdelay3 arev+ig11*a1+ig12*a2+ig13*a3+ig14*a4+ig15*a5+ig16*a6, atim1, 4000\n\";" >> $prog/pan$snd.perl vdelay3 arev+ig21*a1+ig22*a2+ig23*a3+ig24*a4+ig25*a5+ig26*a6, atim2, 4000\n\";" >> $prog/pan$snd.perl vdelay3 arev+ig31*a1+ig32*a2+ig33*a3+ig34*a4+ig35*a5+ig36*a6, atim3, 4000\n\";" >> $prog/pan$snd.perl vdelay3 arev+ig41*a1+ig42*a2+ig43*a3+ig44*a4+ig45*a5+ig46*a6, atim4, 4000\n\";" >> $prog/pan$snd.perl vdelay3 arev+ig51*a1+ig52*a2+ig53*a3+ig54*a4+ig55*a5+ig56*a6, atim5, 4000\n\";" >> $prog/pan$snd.perl vdelay3 arev+ig61*a1+ig62*a2+ig63*a3+ig64*a4+ig65*a5+ig66*a6, atim6, 4000\n\";" >> $prog/pan$snd.perl pareq pareq pareq pareq pareq pareq

aa1, 10000*ifilt, 0.5, iq, 2\n\";" >> $prog/pan$snd.perl aa2, 9000*ifilt, 0.5, iq, 2\n\";" >> $prog/pan$snd.perl aa3, 9500*ifilt, 0.5, iq, 2\n\";" >> $prog/pan$snd.perl aa4, 8500*ifilt, 0.5, iq, 2\n\";" >> $prog/pan$snd.perl aa5, 7000*ifilt, 0.5, iq, 2\n\";" >> $prog/pan$snd.perl aa6, 6000*ifilt, 0.5, iq, 2\n\";" >> $prog/pan$snd.perl

echo "print PAN \"\tarev dcblock (arev * ibool + ((a1+a2+a3+a4+a5+a6)* 0.1667 * amp))\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tzawm arev, ( \$speakers + (ioutch * \$speakers)+\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tzawm (arev*\$ispeakers), ( 13 * \$speakers+\$i)\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tnull:\n\";" >> $prog/pan$snd.perl echo " print PAN \"endin\n\";" >> $prog/pan$snd.perl echo " print PAN \"\n\";" >> $prog/pan$snd.perl echo "++\$f;" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \";########################################## OUTPUT ########################################## \n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl


echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) { " >> $prog/pan$snd.perl echo "print PAN \"\taref\$i zar \$i\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadirf\$i = (gadir\$i + (aref\$i * -\$in))\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadir\$i buthp adirf\$i, 90\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadir\$i buthp adir\$i, 80\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadir\$i buthp adir\$i, 70\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadir\$i buthp adir\$i, 60\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadir\$i buthp adir\$i, 50\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tadir\$i buthp adir\$i, 40\n\";" >> $prog/pan$snd.perl echo 'print PAN " \tfout \"$fsndir/direct$snd$i.wav\", 2, adir$i\n";' >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) { " >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp adirf\$i, 180\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 170\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 160\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 150\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 140\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 130\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "\$x=1;" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir=((\";" >> $prog/pan$snd.perl echo "print PAN \"aLFEdir\$x\";" >> $prog/pan$snd.perl echo " ++\$x;" >> $prog/pan$snd.perl echo "while (\$x<=\$speakers) {" >> $prog/pan$snd.perl echo " print PAN \"+aLFEdir\$x\";" >> $prog/pan$snd.perl echo " ++\$x;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \") )\n\";" >> $prog/pan$snd.perl echo 'print PAN " \tfout \"$fsndir/LFE_direct$snd.wav\", 2, aLFEdir\n";' >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo " ++\$f;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) { " >> $prog/pan$snd.perl echo "print PAN \"\tarevf\$i zar (13 * \$speakers+\$i)\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) { " >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i buthp arevf\$i, 90\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i buthp arev\$i, 80\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i buthp arev\$i, 70\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i buthp arev\$i, 60\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i buthp arev\$i, 50\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tarev\$i buthp arev\$i, 40\n\";" >> $prog/pan$snd.perl echo 'print PAN " \tfout \"$fsndir/rev$snd$i.wav\", 2, arev$i\n";' >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl

echo "" >> $prog/pan$snd.perl


echo "\$i=1;" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "while(\$i<=\$speakers) { " >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp arevf\$i, 180\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 170\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 160\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 150\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 140\n\";" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir\$i butlp aLFEdir\$i, 130\n\";" >> $prog/pan$snd.perl echo "++\$i;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl echo "\$i=1;" >> $prog/pan$snd.perl echo "\$x=1;" >> $prog/pan$snd.perl echo "print PAN \"\taLFEdir=((\";" >> $prog/pan$snd.perl echo "print PAN \"aLFEdir\$x\";" >> $prog/pan$snd.perl echo " ++\$x;" >> $prog/pan$snd.perl echo "while (\$x<=\$speakers) {" >> $prog/pan$snd.perl echo " print PAN \"+aLFEdir\$x\";" >> $prog/pan$snd.perl echo " ++\$x;" >> $prog/pan$snd.perl echo "}" >> $prog/pan$snd.perl echo "print PAN \" ) ) \n\";" >> $prog/pan$snd.perl echo 'print PAN " \tfout \"$fsndir/LFE_rev$snd.wav\", 2, aLFEdir\n";' >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo " ++\$f;" >> $prog/pan$snd.perl echo "print PAN \"instr \$f\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tzacl 0, $nazac\n\";" >> $prog/pan$snd.perl echo "print PAN \"\tzkcl 0, $nkzac\n\";" >> $prog/pan$snd.perl echo "print PAN \"endin\n\";" >> $prog/pan$snd.perl echo "print PAN \"\n\";" >> $prog/pan$snd.perl echo " ++\$f;" >> $prog/pan$snd.perl echo "close(PAN);" >> $prog/pan$snd.perl echo "" >> $prog/pan$snd.perl

################################################# # EXECUTE PERL ORC FILE GENERATOR ################################################ unix perl "$JOB/programs/pan$snd.perl" ################################################# # GENERATE PANNER SCO FILES ################################################# set i=1 set f=1 echo "" > $orcdir/$snd.sco echo "f1 0 65536 10 1" >> $orcdir/$snd.sco set f = `$f+1` echo "" >> $orcdir/$snd.sco echo "f$f 0 $tlength -23 \"$chandir/rev_length$snd.chan\"" >> $orcdir/$snd.sco set f = `$f+1` set i=1 echo "" >> $orcdir/$snd.sco while($i<=$speakers) echo "f$f 0 $tlength -23 \"$chandir/rev_amp$snd$i.chan\"" >> $orcdir/$snd.sco set i = `$i+1` set f = `$f+1` end set i=1 echo "" >> $orcdir/$snd.sco


while($i<=$n)
echo "f$f 0 $tlength -23 \"$chandir/dirdist$snd$i.chan\"" >> $orcdir/$snd.sco
set i = `$i+1`
set f = `$f+1`
end
set i=1
echo "" >> $orcdir/$snd.sco
while($i<=$n)
echo "f$f 0 $tlength -23 \"$chandir/dirazi$snd$i.chan\"" >> $orcdir/$snd.sco
set i = `$i+1`
set f = `$f+1`
end
set i=1
echo "" >> $orcdir/$snd.sco
while($i<=$refchans)
echo "f$f 0 $tlength -23 \"$chandir/refdist$snd$i.chan\"" >> $orcdir/$snd.sco
set i = `$i+1`
set f = `$f+1`
end
set i=1
echo "" >> $orcdir/$snd.sco
while($i<=$refchans)
echo "f$f 0 $tlength -23 \"$chandir/refazi$snd$i.chan\"" >> $orcdir/$snd.sco
set i = `$i+1`
set f = `$f+1`
end
echo "" >> $orcdir/$snd.sco
set i=1
set f=1
echo "i$f 0 $slength $tslength" >> $orcdir/$snd.sco
set f = `$f+1`
while($f<=$cinst1)
echo "i$f 0 $slength" >> $orcdir/$snd.sco
set f = `$f+1`
end
while($f<=$b)
echo "i$f 0 $slength" >> $orcdir/$snd.sco
set f = `$f+1`
end
while($f<=$c)
# inst start dur   amp  mintime maxtime maxamp prefilt filtfactor dens z_in z_out thru
echo "i$f 0 $slength 1    1    8     4     10000 0.9  0.55 0 1  0" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.05 8    17    12    9000  0.85 0.51 1 2  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.1  17   51    27    8000  0.8  0.53 2 3  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.2  51   93    71    7000  0.75 0.57 3 4  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.3  93   187   137   6000  0.7  0.52 4 5  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.4  187  333   260   5000  0.65 0.58 4 6  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.5  333  713   523   4000  0.6  0.52 5 7  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 1.7  713  1397  1055  3000  0.55 0.53 5 8  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 2.1  1397 3173  2285  2000  0.5  0.57 5 9  1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 2.5  3173 7139  5156  1000  0.45 0.52 6 10 1" >> $orcdir/$snd.sco
echo "i$f 0 $slength 3    7139 15871 11505 700   0.4  0.54 6 11 1" >> $orcdir/$snd.sco
set f = `$f+1`
end
while($f<=$d)
echo "i$f 0 $slength" >> $orcdir/$snd.sco
set f = `$f+1`
end
#################################################
# GENERATE CSOUND PANNER EXECUTION FILES
#################################################


echo "" > $cscript/$snd.csh echo "$CS/csound -+y -d -n $orcdir/$snd.orc $orcdir/$snd.sco" >> $cscript/$snd.csh

#################################################
# GENERATE HSCRIPT TO SAVE RAW SOUND CHANS
#################################################
#
echo "hscript $hip << ZZ" > $script/rawrun1$snd.csh
echo "source $script/raw1$snd.txt" >> $script/rawrun1$snd.csh
echo "ZZ" >> $script/rawrun1$snd.csh
#################################################
# GENERATE HSCRIPT TO SAVE AZIMUTH AND DISTANCE CHANS
#################################################
#
echo "hscript $hip << ZZ" > $script/rawrun2$snd.csh
echo "source $script/raw2$snd.txt" >> $script/rawrun2$snd.csh
echo "ZZ" >> $script/rawrun2$snd.csh


Appendix I – Csound 5.1 Panning Orc + Sco

Csound Spatialisation Orchestra File

The orchestra file below was created by Pan to spatialise the crystal object in Schwarzchild. Note that this orchestra spatialises a single sound; orc files that spatialise multiple sounds are up to four times as long. It should be noted that the reverb section in Pan is a modified version of one by Hans Mikelson, provided on the CD-ROMs that accompanied the Csound book (Boulanger, 2000).

sr      = 44100
kr      = 200
ksmps   = 220.5
nchnls  = 1

zakinit 71, 4

gasnd1      init 0

gafilt1     init 0

gadir1      init 0
gadir2      init 0
gadir3      init 0
gadir4      init 0
gadir5      init 0

garef1      init 0
garef2      init 0
garef3      init 0
garef4      init 0
garef5      init 0
garef6      init 0

gkrevlength init 0

gkrevamp1   init 0
gkrevamp2   init 0
gkrevamp3   init 0
gkrevamp4   init 0
gkrevamp5   init 0
gkindex     init 0

gilowgain   = 1000
gilowadd    = 22000
gihighgain  = 20
gihighadd   = 0
gifiltscale = 1

gimaxdel    = 0.0251594

;########################################## GLOBALS ##########################################

instr 1
    kindex  line    2, p4, ( 2^19 )
    gkindex=kindex
endin

instr 2
    aout1   soundin "I:/andrew/project/sounds/schz2/rawschz21.wav"
    gasnd1 = aout1
endin

instr 3
    kreverb table   gkindex, 2
    gkrevlength = kreverb
endin

instr 4
    kreverb table   gkindex, 3
    gkrevamp1 = kreverb
    kreverb table   gkindex, 4
    gkrevamp2 = kreverb
    kreverb table   gkindex, 5
    gkrevamp3 = kreverb
    kreverb table   gkindex, 6
    gkrevamp4 = kreverb
    kreverb table   gkindex, 7
    gkrevamp5 = kreverb
endin

;########################################## DIRECT ##########################################

instr 5
    kdist   table   gkindex, 8
    adist   interp  kdist
    atap    vdelay  gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 )
    aout    tone    atap, (1/(-(kdist^ 2 ) * gifiltscale)* gilowgain + gilowadd)
    aout    atone   aout, (1/(kdist ^ 2 ) * gifiltscale * gihighgain + gihighadd)
    gafilt1 = aout
    kamp = (1/(kdist^ 2 ))
    klamp = (1-1/sqrt(kdist^ 2))*0.166667
    kgamp = 1/sqrt(kdist^ 2)*0.166667
    zkw     kamp, 1
    zkw     klamp, (1+1)
    zkw     kgamp, (1*2+1)
endin

instr 6
    kangle1 table   gkindex, 9
    kamp1   zkr     1
    klamp1  zkr     (1+1)
    kgamp1  zkr     (1*2+1)
    ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0)
    ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0)
    kout = (ksnda + ksndb)
    aout1   interp  (kout * kamp1)
    aout1 = (aout1 * gafilt1)
    arev1   interp  (kout * klamp1+kgamp1)
    arev1 = (arev1 * gafilt1)
    arev=((arev1) * 1)
    zawm    arev, (5+1)
    adir=((aout1) * 1)
    gadir1 = adir
endin

instr 7
    kangle1 table   gkindex, 9
    kamp1   zkr     1
    klamp1  zkr     (1+1)
    kgamp1  zkr     (1*2+1)
    ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0)
    ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0)
    kout = (ksnda + ksndb)
    aout1   interp  (kout * kamp1)
    aout1 = (aout1 * gafilt1)
    arev1   interp  (kout * klamp1+kgamp1)
    arev1 = (arev1 * gafilt1)
    arev=((arev1) * 1)
    zawm    arev, (5+2)
    adir=((aout1) * 1)
    gadir2 = adir
endin

instr 8
    kangle1 table   gkindex, 9
    kamp1   zkr     1
    klamp1  zkr     (1+1)
    kgamp1  zkr     (1*2+1)
    ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0)
    ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0)
    kout = (ksnda + ksndb)
    aout1   interp  (kout * kamp1)
    aout1 = (aout1 * gafilt1)
    arev1   interp  (kout * klamp1+kgamp1)
    arev1 = (arev1 * gafilt1)
    arev=((arev1) * 1)
    zawm    arev, (5+3)
    adir=((aout1) * 1)
    gadir3 = adir
endin

instr 9
    kangle1 table   gkindex, 9
    kamp1   zkr     1
    klamp1  zkr     (1+1)
    kgamp1  zkr     (1*2+1)
    ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0)
    ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0)
    kout = (ksnda + ksndb)
    aout1   interp  (kout * kamp1)
    aout1 = (aout1 * gafilt1)
    arev1   interp  (kout * klamp1+kgamp1)
    arev1 = (arev1 * gafilt1)
    arev=((arev1) * 1)
    zawm    arev, (5+4)
    adir=((aout1) * 1)
    gadir4 = adir
endin

instr 10
    kangle1 table   gkindex, 9
    kamp1   zkr     1
    klamp1  zkr     (1+1)
    kgamp1  zkr     (1*2+1)
    ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0)
    ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0)
    kout = (ksnda + ksndb)
    aout1   interp  (kout * kamp1)
    aout1 = (aout1 * gafilt1)
    arev1   interp  (kout * klamp1+kgamp1)
    arev1 = (arev1 * gafilt1)
    arev=((arev1) * 1)
    zawm    arev, (5+5)
    adir=((aout1) * 1)
    gadir5 = adir
endin

;########################################## REFLECTIONS ##########################################

instr 11 kdist table gkindex, 10 adist interp kdist atapvdelay gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 ) aout tone atap, (1/(-(kdist^ 2) * 0.9 * gifiltscale) * gilowgain + gilowadd) aout atone aout, (1/(kdist^ 2) * 0.9 * gifiltscale * gihighgain + gihighadd) amp interp (1/(kdist^ 2)) garef1 = ( amp * aout ) endin


instr 12 kdist table gkindex, 11 adist interp kdist atapvdelay gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 ) aout tone atap, (1/(-(kdist^ 2) * 0.9 * gifiltscale) * gilowgain + gilowadd) aout atone aout, (1/(kdist^ 2) * 0.9 * gifiltscale * gihighgain + gihighadd) amp interp (1/(kdist^ 2)) garef2 = ( amp * aout ) endin instr 13 kdist table gkindex, 12 adist interp kdist atapvdelay gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 ) aout tone atap, (1/(-(kdist^ 2) * 0.9 * gifiltscale) * gilowgain + gilowadd) aout atone aout, (1/(kdist^ 2) * 0.9 * gifiltscale * gihighgain + gihighadd) amp interp (1/(kdist^ 2)) garef3 = ( amp * aout ) endin instr 14 kdist table gkindex, 13 adist interp kdist atapvdelay gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 ) aout tone atap, (1/(-(kdist^ 2) * 0.9 * gifiltscale) * gilowgain + gilowadd) aout atone aout, (1/(kdist^ 2) * 0.9 * gifiltscale * gihighgain + gihighadd) amp interp (1/(kdist^ 2)) garef4 = ( amp * aout ) endin instr 15 kdist table gkindex, 14 adist interp kdist atapvdelay gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 ) aout tone atap, (1/(-(kdist^ 2) * 0.9 * gifiltscale) * gilowgain + gilowadd) aout atone aout, (1/(kdist^ 2) * 0.9 * gifiltscale * gihighgain + gihighadd) amp interp (1/(kdist^ 2)) garef5 = ( amp * aout ) endin instr 16 kdist table gkindex, 15 adist interp kdist atapvdelay gasnd1, (adist * 0.00289855 * 1000 ), ( 0.0251594 * 5000 ) aout tone atap, (1/(-(kdist^ 2) * 0.9 * gifiltscale) * gilowgain + gilowadd) aout atone aout, (1/(kdist^ 2) * 0.9 * gifiltscale * gihighgain + gihighadd) amp interp (1/(kdist^ 2)) garef6 = ( amp * aout ) endin instr 17 kangle1 table gkindex, 16 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef1) aref=((aout1) * 1) zawm aref*0.166667, 1 endin instr 18 kangle1 table gkindex, 17 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef2) aref=((aout1) * 1) zawm aref*0.166667, 1 endin instr 19 kangle1 table gkindex, 18 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb)


aout1   =       (aout1 * garef3)
aref    =       ((aout1) * 1)
        zawm    aref*0.166667, 1

endin instr 20 kangle1 table gkindex, 19 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef4) aref=((aout1) * 1) zawm aref*0.166667, 1 endin instr 21 kangle1 table gkindex, 20 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef5) aref=((aout1) * 1) zawm aref*0.166667, 1 endin instr 22 kangle1 table gkindex, 21 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? cos(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? sin(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef6) aref=((aout1) * 1) zawm aref*0.166667, 1 endin instr 23 kangle1 table gkindex, 16 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef1) aref=((aout1) * 1) zawm aref*0.166667, 2 endin instr 24 kangle1 table gkindex, 17 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef2) aref=((aout1) * 1) zawm aref*0.166667, 2 endin instr 25 kangle1 table gkindex, 18 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef3) aref=((aout1) * 1) zawm aref*0.166667, 2 endin instr 26 kangle1 table gkindex, 19 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef4) aref=((aout1) * 1) zawm aref*0.166667, 2 endin


instr 27 kangle1 table gkindex, 20 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef5) aref=((aout1) * 1) zawm aref*0.166667, 2 endin instr 28 kangle1 table gkindex, 21 ksnda = ((360 >= kangle1)&&(kangle1 >= 330) ? sin(((kangle1-330)/(360-330))*90*0.01745329252) : 0) ksndb = ((30 >= kangle1)&&(kangle1 >= 0) ? cos(((kangle1-0)/(30-0))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef6) aref=((aout1) * 1) zawm aref*0.166667, 2 endin instr 29 kangle1 table gkindex, 16 ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0) ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef1) aref=((aout1) * 1) zawm aref*0.166667, 3 endin instr 30 kangle1 table gkindex, 17 ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0) ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef2) aref=((aout1) * 1) zawm aref*0.166667, 3 endin instr 31 kangle1 table gkindex, 18 ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0) ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef3) aref=((aout1) * 1) zawm aref*0.166667, 3 endin instr 32 kangle1 table gkindex, 19 ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0) ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef4) aref=((aout1) * 1) zawm aref*0.166667, 3 endin instr 33 kangle1 table gkindex, 20 ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0) ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef5) aref=((aout1) * 1) zawm aref*0.166667, 3 endin instr 34 kangle1 table gkindex, 21 ksnda = ((30 >= kangle1)&&(kangle1 >= 0) ? sin(((kangle1-0)/(30-0))*90*0.01745329252) : 0) ksndb = ((110 >= kangle1)&&(kangle1 >= 30) ? cos(((kangle1-30)/(110-30))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb)


aout1   =       (aout1 * garef6)
aref    =       ((aout1) * 1)
        zawm    aref*0.166667, 3

endin instr 35 kangle1 table gkindex, 16 ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0) ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef1) aref=((aout1) * 1) zawm aref*0.166667, 4 endin instr 36 kangle1 table gkindex, 17 ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0) ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef2) aref=((aout1) * 1) zawm aref*0.166667, 4 endin instr 37 kangle1 table gkindex, 18 ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0) ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef3) aref=((aout1) * 1) zawm aref*0.166667, 4 endin instr 38 kangle1 table gkindex, 19 ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0) ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef4) aref=((aout1) * 1) zawm aref*0.166667, 4 endin instr 39 kangle1 table gkindex, 20 ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0) ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef5) aref=((aout1) * 1) zawm aref*0.166667, 4 endin instr 40 kangle1 table gkindex, 21 ksnda = ((110 >= kangle1)&&(kangle1 >= 30) ? sin(((kangle1-30)/(110-30))*90*0.01745329252) : 0) ksndb = ((250 >= kangle1)&&(kangle1 >= 110) ? cos(((kangle1-110)/(250-110))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef6) aref=((aout1) * 1) zawm aref*0.166667, 4 endin instr 41 kangle1 table gkindex, 16 ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef1) aref=((aout1) * 1) zawm aref*0.166667, 5 endin


instr 42 kangle1 table gkindex, 17 ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef2) aref=((aout1) * 1) zawm aref*0.166667, 5 endin instr 43 kangle1 table gkindex, 18 ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef3) aref=((aout1) * 1) zawm aref*0.166667, 5 endin instr 44 kangle1 table gkindex, 19 ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef4) aref=((aout1) * 1) zawm aref*0.166667, 5 endin instr 45 kangle1 table gkindex, 20 ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef5) aref=((aout1) * 1) zawm aref*0.166667, 5 endin instr 46 kangle1 table gkindex, 21 ksnda = ((250 >= kangle1)&&(kangle1 >= 110) ? sin(((kangle1-110)/(250-110))*90*0.01745329252) : 0) ksndb = ((330 >= kangle1)&&(kangle1 >= 250) ? cos(((kangle1-250)/(330-250))*90*0.01745329252) : 0) aout1 interp (ksnda + ksndb) aout1 = (aout1 * garef6) aref=((aout1) * 1) zawm aref*0.166667, 5 endin

;########################################## REVERB ##########################################

instr 47
iamp     =       p4
imin     =       p5
imax     =       p6
irmax    =       p7
ifiltpre =       p8
ifilt    =       p9
idens    =       p10
iinch    =       p11
ioutch   =       p12
ibool    =       p13
iq       =       sqrt(.5)
kvol     =       gkrevamp1
ktime    =       gkrevlength
kbool    =       ((ktime<(imin-10)) ? 0 : 1 )
kamp     =       ktime-imin
kamp     limit   kamp, 0, irmax-imin
kamp     =       iamp * kvol * (kamp / (irmax-imin))
amp      interp  kamp
if kbool == 0 kgoto null
arev     zar     ( 5 + (iinch * 5) + 1)

a1 init 0 a2 init 0 a3 init 0 a4 init 0 a5 init 0 a6 init 0 itim1 = ( rnd(1000)*0.001 * imax *1.75) itim2 = ( rnd(1000)*0.001 * imax *1.75) itim3 = ( rnd(1000)*0.001 * imax *1.75) itim4 = ( rnd(1000)*0.001 * imax *1.75) itim5 = ( rnd(1000)*0.001 * imax *1.75) itim6 = ( rnd(1000)*0.001 * imax *1.75) at1 oscil itim1*.05, .50, 1, .2 at2 oscil itim2*.05, .56, 1, .4 at3 oscil itim3*.05, .54, 1, .6 at4 oscil itim4*.05, .51, 1, .7 at5 oscil itim5*.05, .53, 1, .9 at6 oscil itim6*.05, .55, 1 atim1 = itim1+at1+5 atim2 = itim2+at2+5 atim3 = itim3+at3+5 atim4 = itim4+at4+5 atim5 = itim5+at5+5 atim6 = itim6+at6+5 ig11 = .55*idens ig12 = .43*idens ig13 = -.41*idens ig14 = -.39*idens ig15 = .35*idens ig16 = -.33*idens ig21 = -.42*idens ig22 = -.45*idens ig23 = .46*idens ig24 = .36*idens ig25 = -.33*idens ig26 = .34*idens ig31 = .41*idens ig32 = .45*idens ig33 = .47*idens ig34 = -.42*idens ig35 = -.40*idens ig36 = -.39*idens ig41 = -.43*idens ig42 = -.42*idens ig43 = -.45*idens ig44 = .47*idens ig45 = .43*idens ig46 = .41*idens ig51 = -.43*idens ig52 = .42*idens ig53 = -.45*idens ig54 = .47*idens ig55 = -.43*idens ig56 = .41*idens ig61 = .43*idens ig62 = -.42*idens ig63 = .45*idens ig64 = -.47*idens ig65 = .43*idens ig66 = -.41*idens arev tone arev, ifiltpre aa1 vdelay3 arev+ig11*a1+ig12*a2+ig13*a3+ig14*a4+ig15*a5+ig16*a6, atim1, 4000 aa2 vdelay3 arev+ig21*a1+ig22*a2+ig23*a3+ig24*a4+ig25*a5+ig26*a6, atim2, 4000 aa3 vdelay3 arev+ig31*a1+ig32*a2+ig33*a3+ig34*a4+ig35*a5+ig36*a6, atim3, 4000 aa4 vdelay3 arev+ig41*a1+ig42*a2+ig43*a3+ig44*a4+ig45*a5+ig46*a6, atim4, 4000 aa5 vdelay3 arev+ig51*a1+ig52*a2+ig53*a3+ig54*a4+ig55*a5+ig56*a6, atim5, 4000 aa6 vdelay3 arev+ig61*a1+ig62*a2+ig63*a3+ig64*a4+ig65*a5+ig66*a6, atim6, 4000 a1 pareq aa1, 10000*ifilt, 0.5, iq, 2 a2 pareq aa2, 9000*ifilt, 0.5, iq, 2 a3 pareq aa3, 9500*ifilt, 0.5, iq, 2 a4 pareq aa4, 8500*ifilt, 0.5, iq, 2 a5 pareq aa5, 7000*ifilt, 0.5, iq, 2


a6 pareq aa6, 6000*ifilt, 0.5, iq, 2 arev dcblock (arev * ibool + ((a1+a2+a3+a4+a5+a6)* 0.1667 * amp)) zawm arev, ( 5 + (ioutch * 5)+1) zawm (arev*0.2), ( 13 * 5+1) null: endin instr 48 iamp imin imax irmax ifiltpre ifilt = idens iinch ioutch ibool iq kvol ktime kbool kamp kamp kamp amp if

= = = = = p9 = = = = = = = = = =

p10 p11 p12 p13 sqrt(.5) gkrevamp2 gkrevlength ((ktime<(imin-10)) ? 0 : 1 ) ktime-imin limit kamp, 0, iamp * kvol * (kamp / (irmax-imin)) interp kamp

kbool == 0 kgoto null

arev a1 a2 a3 a4 a5 a6 itim1 itim2 itim3 itim4 itim5 itim6 at1 at2 at3 at4 at5 at6 atim1 atim2 atim3 atim4 atim5 atim6 ig11 ig12 ig13 ig14 ig15 ig16 ig21 ig22 ig23 ig24 ig25 ig26 ig31 ig32 ig33 ig34 ig35 ig36 ig41 ig42

p4 p5 p6 p7 p8

zar init init init init init init

0 0 0 0 0 0

oscil oscil oscil oscil oscil oscil = = = = = = = = = = = = = = = = = = = = = = = = = =

= = = = = = itim1*.05, .50, 1, .2 itim2*.05, .56, 1, .4 itim3*.05, .54, 1, .6 itim4*.05, .51, 1, .7 itim5*.05, .53, 1, .9 itim6*.05, .55, 1 itim1+at1+5 itim2+at2+5 itim3+at3+5 itim4+at4+5 itim5+at5+5 itim6+at6+5 .55*idens .43*idens -.41*idens -.39*idens .35*idens -.33*idens -.42*idens -.45*idens .46*idens .36*idens -.33*idens .34*idens .41*idens .45*idens .47*idens -.42*idens -.40*idens -.39*idens -.43*idens -.42*idens

( 5 + (iinch * 5) + 2)

( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75)


irmax-imin


ig43 = -.45*idens ig44 = .47*idens ig45 = .43*idens ig46 = .41*idens ig51 = -.43*idens ig52 = .42*idens ig53 = -.45*idens ig54 = .47*idens ig55 = -.43*idens ig56 = .41*idens ig61 = .43*idens ig62 = -.42*idens ig63 = .45*idens ig64 = -.47*idens ig65 = .43*idens ig66 = -.41*idens arev tone arev, ifiltpre aa1 vdelay3 arev+ig11*a1+ig12*a2+ig13*a3+ig14*a4+ig15*a5+ig16*a6, atim1, 4000 aa2 vdelay3 arev+ig21*a1+ig22*a2+ig23*a3+ig24*a4+ig25*a5+ig26*a6, atim2, 4000 aa3 vdelay3 arev+ig31*a1+ig32*a2+ig33*a3+ig34*a4+ig35*a5+ig36*a6, atim3, 4000 aa4 vdelay3 arev+ig41*a1+ig42*a2+ig43*a3+ig44*a4+ig45*a5+ig46*a6, atim4, 4000 aa5 vdelay3 arev+ig51*a1+ig52*a2+ig53*a3+ig54*a4+ig55*a5+ig56*a6, atim5, 4000 aa6 vdelay3 arev+ig61*a1+ig62*a2+ig63*a3+ig64*a4+ig65*a5+ig66*a6, atim6, 4000 a1 pareq aa1, 10000*ifilt, 0.5, iq, 2 a2 pareq aa2, 9000*ifilt, 0.5, iq, 2 a3 pareq aa3, 9500*ifilt, 0.5, iq, 2 a4 pareq aa4, 8500*ifilt, 0.5, iq, 2 a5 pareq aa5, 7000*ifilt, 0.5, iq, 2 a6 pareq aa6, 6000*ifilt, 0.5, iq, 2 arev dcblock (arev * ibool + ((a1+a2+a3+a4+a5+a6)* 0.1667 * amp)) zawm arev, ( 5 + (ioutch * 5)+2) zawm (arev*0.2), ( 13 * 5+2) null: endin instr 49 iamp imin imax irmax ifiltpre ifilt = idens iinch ioutch ibool iq kvol ktime kbool kamp kamp kamp amp if

= = = = = p9 = = = = = = = = = =

p10 p11 p12 p13 sqrt(.5) gkrevamp3 gkrevlength ((ktime<(imin-10)) ? 0 : 1 ) ktime-imin limit kamp, 0, iamp * kvol * (kamp / (irmax-imin)) interp kamp

kbool == 0 kgoto null

arev a1 a2 a3 a4 a5 a6 itim1 itim2 itim3 itim4 itim5 itim6 at1 at2 at3 at4

p4 p5 p6 p7 p8

zar init init init init init init

oscil oscil oscil oscil

( 5 + (iinch * 5) + 3)

0 0 0 0 0 0 = = = = = = itim1*.05, .50, 1, .2 itim2*.05, .56, 1, .4 itim3*.05, .54, 1, .6 itim4*.05, .51, 1, .7

( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75)


irmax-imin


at5 oscil itim5*.05, .53, 1, .9 at6 oscil itim6*.05, .55, 1 atim1 = itim1+at1+5 atim2 = itim2+at2+5 atim3 = itim3+at3+5 atim4 = itim4+at4+5 atim5 = itim5+at5+5 atim6 = itim6+at6+5 ig11 = .55*idens ig12 = .43*idens ig13 = -.41*idens ig14 = -.39*idens ig15 = .35*idens ig16 = -.33*idens ig21 = -.42*idens ig22 = -.45*idens ig23 = .46*idens ig24 = .36*idens ig25 = -.33*idens ig26 = .34*idens ig31 = .41*idens ig32 = .45*idens ig33 = .47*idens ig34 = -.42*idens ig35 = -.40*idens ig36 = -.39*idens ig41 = -.43*idens ig42 = -.42*idens ig43 = -.45*idens ig44 = .47*idens ig45 = .43*idens ig46 = .41*idens ig51 = -.43*idens ig52 = .42*idens ig53 = -.45*idens ig54 = .47*idens ig55 = -.43*idens ig56 = .41*idens ig61 = .43*idens ig62 = -.42*idens ig63 = .45*idens ig64 = -.47*idens ig65 = .43*idens ig66 = -.41*idens arev tone arev, ifiltpre aa1 vdelay3 arev+ig11*a1+ig12*a2+ig13*a3+ig14*a4+ig15*a5+ig16*a6, atim1, 4000 aa2 vdelay3 arev+ig21*a1+ig22*a2+ig23*a3+ig24*a4+ig25*a5+ig26*a6, atim2, 4000 aa3 vdelay3 arev+ig31*a1+ig32*a2+ig33*a3+ig34*a4+ig35*a5+ig36*a6, atim3, 4000 aa4 vdelay3 arev+ig41*a1+ig42*a2+ig43*a3+ig44*a4+ig45*a5+ig46*a6, atim4, 4000 aa5 vdelay3 arev+ig51*a1+ig52*a2+ig53*a3+ig54*a4+ig55*a5+ig56*a6, atim5, 4000 aa6 vdelay3 arev+ig61*a1+ig62*a2+ig63*a3+ig64*a4+ig65*a5+ig66*a6, atim6, 4000 a1 pareq aa1, 10000*ifilt, 0.5, iq, 2 a2 pareq aa2, 9000*ifilt, 0.5, iq, 2 a3 pareq aa3, 9500*ifilt, 0.5, iq, 2 a4 pareq aa4, 8500*ifilt, 0.5, iq, 2 a5 pareq aa5, 7000*ifilt, 0.5, iq, 2 a6 pareq aa6, 6000*ifilt, 0.5, iq, 2 arev dcblock (arev * ibool + ((a1+a2+a3+a4+a5+a6)* 0.1667 * amp)) zawm arev, ( 5 + (ioutch * 5)+3) zawm (arev*0.2), ( 13 * 5+3) null: endin instr 50 iamp imin imax irmax ifiltpre ifilt = idens iinch ioutch ibool iq

= = = = = p9 = = = = =

p4 p5 p6 p7 p8 p10 p11 p12 p13 sqrt(.5)


kvol ktime kbool kamp kamp kamp amp if arev

= = = = =

gkrevamp4 gkrevlength ((ktime<(imin-10)) ? 0 : 1 ) ktime-imin limit kamp, 0, iamp * kvol * (kamp / (irmax-imin)) interp kamp

irmax-imin

kbool == 0 kgoto null zar

( 5 + (iinch * 5) + 4)

a1 init 0 a2 init 0 a3 init 0 a4 init 0 a5 init 0 a6 init 0 itim1 = ( rnd(1000)*0.001 * imax *1.75) itim2 = ( rnd(1000)*0.001 * imax *1.75) itim3 = ( rnd(1000)*0.001 * imax *1.75) itim4 = ( rnd(1000)*0.001 * imax *1.75) itim5 = ( rnd(1000)*0.001 * imax *1.75) itim6 = ( rnd(1000)*0.001 * imax *1.75) at1 oscil itim1*.05, .50, 1, .2 at2 oscil itim2*.05, .56, 1, .4 at3 oscil itim3*.05, .54, 1, .6 at4 oscil itim4*.05, .51, 1, .7 at5 oscil itim5*.05, .53, 1, .9 at6 oscil itim6*.05, .55, 1 atim1 = itim1+at1+5 atim2 = itim2+at2+5 atim3 = itim3+at3+5 atim4 = itim4+at4+5 atim5 = itim5+at5+5 atim6 = itim6+at6+5 ig11 = .55*idens ig12 = .43*idens ig13 = -.41*idens ig14 = -.39*idens ig15 = .35*idens ig16 = -.33*idens ig21 = -.42*idens ig22 = -.45*idens ig23 = .46*idens ig24 = .36*idens ig25 = -.33*idens ig26 = .34*idens ig31 = .41*idens ig32 = .45*idens ig33 = .47*idens ig34 = -.42*idens ig35 = -.40*idens ig36 = -.39*idens ig41 = -.43*idens ig42 = -.42*idens ig43 = -.45*idens ig44 = .47*idens ig45 = .43*idens ig46 = .41*idens ig51 = -.43*idens ig52 = .42*idens ig53 = -.45*idens ig54 = .47*idens ig55 = -.43*idens ig56 = .41*idens ig61 = .43*idens ig62 = -.42*idens ig63 = .45*idens ig64 = -.47*idens ig65 = .43*idens ig66 = -.41*idens arev tone arev, ifiltpre aa1 vdelay3 arev+ig11*a1+ig12*a2+ig13*a3+ig14*a4+ig15*a5+ig16*a6, atim1, 4000 aa2 vdelay3 arev+ig21*a1+ig22*a2+ig23*a3+ig24*a4+ig25*a5+ig26*a6, atim2, 4000


aa3 vdelay3 arev+ig31*a1+ig32*a2+ig33*a3+ig34*a4+ig35*a5+ig36*a6, atim3, 4000 aa4 vdelay3 arev+ig41*a1+ig42*a2+ig43*a3+ig44*a4+ig45*a5+ig46*a6, atim4, 4000 aa5 vdelay3 arev+ig51*a1+ig52*a2+ig53*a3+ig54*a4+ig55*a5+ig56*a6, atim5, 4000 aa6 vdelay3 arev+ig61*a1+ig62*a2+ig63*a3+ig64*a4+ig65*a5+ig66*a6, atim6, 4000 a1 pareq aa1, 10000*ifilt, 0.5, iq, 2 a2 pareq aa2, 9000*ifilt, 0.5, iq, 2 a3 pareq aa3, 9500*ifilt, 0.5, iq, 2 a4 pareq aa4, 8500*ifilt, 0.5, iq, 2 a5 pareq aa5, 7000*ifilt, 0.5, iq, 2 a6 pareq aa6, 6000*ifilt, 0.5, iq, 2 arev dcblock (arev * ibool + ((a1+a2+a3+a4+a5+a6)* 0.1667 * amp)) zawm arev, ( 5 + (ioutch * 5)+4) zawm (arev*0.2), ( 13 * 5+4) null: endin instr 51 iamp imin imax irmax ifiltpre ifilt = idens iinch ioutch ibool iq kvol ktime kbool kamp kamp kamp amp if

= = = = = p9 = = = = = = = = = =

p10 p11 p12 p13 sqrt(.5) gkrevamp5 gkrevlength ((ktime<(imin-10)) ? 0 : 1 ) ktime-imin limit kamp, 0, iamp * kvol * (kamp / (irmax-imin)) interp kamp

kbool == 0 kgoto null

arev a1 a2 a3 a4 a5 a6 itim1 itim2 itim3 itim4 itim5 itim6 at1 at2 at3 at4 at5 at6 atim1 atim2 atim3 atim4 atim5 atim6 ig11 ig12 ig13 ig14 ig15 ig16 ig21 ig22 ig23 ig24 ig25

p4 p5 p6 p7 p8

zar init init init init init init

0 0 0 0 0 0

oscil oscil oscil oscil oscil oscil = = = = = = = = = = = = = = = = =

= = = = = = itim1*.05, .50, 1, .2 itim2*.05, .56, 1, .4 itim3*.05, .54, 1, .6 itim4*.05, .51, 1, .7 itim5*.05, .53, 1, .9 itim6*.05, .55, 1 itim1+at1+5 itim2+at2+5 itim3+at3+5 itim4+at4+5 itim5+at5+5 itim6+at6+5 .55*idens .43*idens -.41*idens -.39*idens .35*idens -.33*idens -.42*idens -.45*idens .46*idens .36*idens -.33*idens

( 5 + (iinch * 5) + 5)

( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75) ( rnd(1000)*0.001 * imax *1.75)


irmax-imin


ig26 = .34*idens ig31 = .41*idens ig32 = .45*idens ig33 = .47*idens ig34 = -.42*idens ig35 = -.40*idens ig36 = -.39*idens ig41 = -.43*idens ig42 = -.42*idens ig43 = -.45*idens ig44 = .47*idens ig45 = .43*idens ig46 = .41*idens ig51 = -.43*idens ig52 = .42*idens ig53 = -.45*idens ig54 = .47*idens ig55 = -.43*idens ig56 = .41*idens ig61 = .43*idens ig62 = -.42*idens ig63 = .45*idens ig64 = -.47*idens ig65 = .43*idens ig66 = -.41*idens arev tone arev, ifiltpre aa1 vdelay3 arev+ig11*a1+ig12*a2+ig13*a3+ig14*a4+ig15*a5+ig16*a6, atim1, 4000 aa2 vdelay3 arev+ig21*a1+ig22*a2+ig23*a3+ig24*a4+ig25*a5+ig26*a6, atim2, 4000 aa3 vdelay3 arev+ig31*a1+ig32*a2+ig33*a3+ig34*a4+ig35*a5+ig36*a6, atim3, 4000 aa4 vdelay3 arev+ig41*a1+ig42*a2+ig43*a3+ig44*a4+ig45*a5+ig46*a6, atim4, 4000 aa5 vdelay3 arev+ig51*a1+ig52*a2+ig53*a3+ig54*a4+ig55*a5+ig56*a6, atim5, 4000 aa6 vdelay3 arev+ig61*a1+ig62*a2+ig63*a3+ig64*a4+ig65*a5+ig66*a6, atim6, 4000 a1 pareq aa1, 10000*ifilt, 0.5, iq, 2 a2 pareq aa2, 9000*ifilt, 0.5, iq, 2 a3 pareq aa3, 9500*ifilt, 0.5, iq, 2 a4 pareq aa4, 8500*ifilt, 0.5, iq, 2 a5 pareq aa5, 7000*ifilt, 0.5, iq, 2 a6 pareq aa6, 6000*ifilt, 0.5, iq, 2 arev dcblock (arev * ibool + ((a1+a2+a3+a4+a5+a6)* 0.1667 * amp)) zawm arev, ( 5 + (ioutch * 5)+5) zawm (arev*0.2), ( 13 * 5+5) null: endin ;########################################## OUTPUT ########################################## instr 52 aref1 zar 1 adirf1 = (gadir1 + (aref1 * -1)) adir1 buthp adirf1, 90 adir1 buthp adir1, 80 adir1 buthp adir1, 70 adir1 buthp adir1, 60 adir1 buthp adir1, 50 adir1 buthp adir1, 40 fout "L:/Audio/Project/schz2/directschz21.wav", 2, adir1 aref2 zar 2 adirf2 = (gadir2 + (aref2 * -1)) adir2 buthp adirf2, 90 adir2 buthp adir2, 80 adir2 buthp adir2, 70 adir2 buthp adir2, 60 adir2 buthp adir2, 50 adir2 buthp adir2, 40 fout "L:/Audio/Project/schz2/directschz22.wav", 2, adir2 aref3 zar 3 adirf3 = (gadir3 + (aref3 * -1)) adir3 buthp adirf3, 90 adir3 buthp adir3, 80 adir3 buthp adir3, 70 adir3 buthp adir3, 60 adir3 buthp adir3, 50 adir3 buthp adir3, 40 fout "L:/Audio/Project/schz2/directschz23.wav", 2, adir3 aref4 zar 4


adirf4 = (gadir4 + (aref4 * -1)) adir4 buthp adirf4, 90 adir4 buthp adir4, 80 adir4 buthp adir4, 70 adir4 buthp adir4, 60 adir4 buthp adir4, 50 adir4 buthp adir4, 40 fout "L:/Audio/Project/schz2/directschz24.wav", 2, adir4 aref5 zar 5 adirf5 = (gadir5 + (aref5 * -1)) adir5 buthp adirf5, 90 adir5 buthp adir5, 80 adir5 buthp adir5, 70 adir5 buthp adir5, 60 adir5 buthp adir5, 50 adir5 buthp adir5, 40 fout "L:/Audio/Project/schz2/directschz25.wav", 2, adir5 aLFEdir1 butlp adirf1, 180 aLFEdir1 butlp aLFEdir1, 170 aLFEdir1 butlp aLFEdir1, 160 aLFEdir1 butlp aLFEdir1, 150 aLFEdir1 butlp aLFEdir1, 140 aLFEdir1 butlp aLFEdir1, 130 aLFEdir2 butlp adirf2, 180 aLFEdir2 butlp aLFEdir2, 170 aLFEdir2 butlp aLFEdir2, 160 aLFEdir2 butlp aLFEdir2, 150 aLFEdir2 butlp aLFEdir2, 140 aLFEdir2 butlp aLFEdir2, 130 aLFEdir3 butlp adirf3, 180 aLFEdir3 butlp aLFEdir3, 170 aLFEdir3 butlp aLFEdir3, 160 aLFEdir3 butlp aLFEdir3, 150 aLFEdir3 butlp aLFEdir3, 140 aLFEdir3 butlp aLFEdir3, 130 aLFEdir4 butlp adirf4, 180 aLFEdir4 butlp aLFEdir4, 170 aLFEdir4 butlp aLFEdir4, 160 aLFEdir4 butlp aLFEdir4, 150 aLFEdir4 butlp aLFEdir4, 140 aLFEdir4 butlp aLFEdir4, 130 aLFEdir5 butlp adirf5, 180 aLFEdir5 butlp aLFEdir5, 170 aLFEdir5 butlp aLFEdir5, 160 aLFEdir5 butlp aLFEdir5, 150 aLFEdir5 butlp aLFEdir5, 140 aLFEdir5 butlp aLFEdir5, 130 aLFEdir=((aLFEdir1+aLFEdir2+aLFEdir3+aLFEdir4+aLFEdir5) ) fout "L:/Audio/Project/schz2/LFE_directschz2.wav", 2, aLFEdir endin instr 53 arevf1 zar (13 * 5+1) arevf2 zar (13 * 5+2) arevf3 zar (13 * 5+3) arevf4 zar (13 * 5+4) arevf5 zar (13 * 5+5) arev1 buthp arevf1, 90 arev1 buthp arev1, 80 arev1 buthp arev1, 70 arev1 buthp arev1, 60 arev1 buthp arev1, 50 arev1 buthp arev1, 40 fout "L:/Audio/Project/schz2/revschz21.wav", 2, arev1 arev2 buthp arevf2, 90 arev2 buthp arev2, 80 arev2 buthp arev2, 70 arev2 buthp arev2, 60 arev2 buthp arev2, 50 arev2 buthp arev2, 40 fout "L:/Audio/Project/schz2/revschz22.wav", 2, arev2 arev3 buthp arevf3, 90 arev3 buthp arev3, 80 arev3 buthp arev3, 70 arev3 buthp arev3, 60


arev3 buthp arev3, 50 arev3 buthp arev3, 40 fout "L:/Audio/Project/schz2/revschz23.wav", 2, arev3 arev4 buthp arevf4, 90 arev4 buthp arev4, 80 arev4 buthp arev4, 70 arev4 buthp arev4, 60 arev4 buthp arev4, 50 arev4 buthp arev4, 40 fout "L:/Audio/Project/schz2/revschz24.wav", 2, arev4 arev5 buthp arevf5, 90 arev5 buthp arev5, 80 arev5 buthp arev5, 70 arev5 buthp arev5, 60 arev5 buthp arev5, 50 arev5 buthp arev5, 40 fout "L:/Audio/Project/schz2/revschz25.wav", 2, arev5 aLFEdir1 butlp arevf1, 180 aLFEdir1 butlp aLFEdir1, 170 aLFEdir1 butlp aLFEdir1, 160 aLFEdir1 butlp aLFEdir1, 150 aLFEdir1 butlp aLFEdir1, 140 aLFEdir1 butlp aLFEdir1, 130 aLFEdir2 butlp arevf2, 180 aLFEdir2 butlp aLFEdir2, 170 aLFEdir2 butlp aLFEdir2, 160 aLFEdir2 butlp aLFEdir2, 150 aLFEdir2 butlp aLFEdir2, 140 aLFEdir2 butlp aLFEdir2, 130 aLFEdir3 butlp arevf3, 180 aLFEdir3 butlp aLFEdir3, 170 aLFEdir3 butlp aLFEdir3, 160 aLFEdir3 butlp aLFEdir3, 150 aLFEdir3 butlp aLFEdir3, 140 aLFEdir3 butlp aLFEdir3, 130 aLFEdir4 butlp arevf4, 180 aLFEdir4 butlp aLFEdir4, 170 aLFEdir4 butlp aLFEdir4, 160 aLFEdir4 butlp aLFEdir4, 150 aLFEdir4 butlp aLFEdir4, 140 aLFEdir4 butlp aLFEdir4, 130 aLFEdir5 butlp arevf5, 180 aLFEdir5 butlp aLFEdir5, 170 aLFEdir5 butlp aLFEdir5, 160 aLFEdir5 butlp aLFEdir5, 150 aLFEdir5 butlp aLFEdir5, 140 aLFEdir5 butlp aLFEdir5, 130 aLFEdir=((aLFEdir1+aLFEdir2+aLFEdir3+aLFEdir4+aLFEdir5 ) ) fout "L:/Audio/Project/schz2/LFE_revschz2.wav", 2, aLFEdir endin instr 54 zacl 0, zkcl 0,

71 4

endin


Csound Spatialisation Sco File

The Csound score file below was created by pan to spatialise the crystal object in Schwarzchild. This is the score file that goes with the orc file at the start of this section.

f1  0 65536  10  1
f2  0 [2^19] -23 "I:/andrew/project/chans/rev_lengthschz2.chan"
f3  0 [2^19] -23 "I:/andrew/project/chans/rev_ampschz21.chan"
f4  0 [2^19] -23 "I:/andrew/project/chans/rev_ampschz22.chan"
f5  0 [2^19] -23 "I:/andrew/project/chans/rev_ampschz23.chan"
f6  0 [2^19] -23 "I:/andrew/project/chans/rev_ampschz24.chan"
f7  0 [2^19] -23 "I:/andrew/project/chans/rev_ampschz25.chan"
f8  0 [2^19] -23 "I:/andrew/project/chans/dirdistschz21.chan"
f9  0 [2^19] -23 "I:/andrew/project/chans/dirazischz21.chan"
f10 0 [2^19] -23 "I:/andrew/project/chans/refdistschz21.chan"
f11 0 [2^19] -23 "I:/andrew/project/chans/refdistschz22.chan"
f12 0 [2^19] -23 "I:/andrew/project/chans/refdistschz23.chan"
f13 0 [2^19] -23 "I:/andrew/project/chans/refdistschz24.chan"
f14 0 [2^19] -23 "I:/andrew/project/chans/refdistschz25.chan"
f15 0 [2^19] -23 "I:/andrew/project/chans/refdistschz26.chan"
f16 0 [2^19] -23 "I:/andrew/project/chans/refazischz21.chan"
f17 0 [2^19] -23 "I:/andrew/project/chans/refazischz22.chan"
f18 0 [2^19] -23 "I:/andrew/project/chans/refazischz23.chan"
f19 0 [2^19] -23 "I:/andrew/project/chans/refazischz24.chan"
f20 0 [2^19] -23 "I:/andrew/project/chans/refazischz25.chan"
f21 0 [2^19] -23 "I:/andrew/project/chans/refazischz26.chan"

i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 i11 i12 i13 i14 i15 i16 i17 i18 i19 i20 i21 i22 i23 i24 i25 i26 i27 i28 i29 i30 i31

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

[ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100]

[ ( 2^19 ) /200]


i32 i33 i34 i35 i36 i37 i38 i39 i40 i41 i42 i43 i44 i45 i46 i47 i47 i47 i47 i47 i47 i47 i47 i47 i47 i47 i48 i48 i48 i48 i48 i48 i48 i48 i48 i48 i48 i49 i49 i49 i49 i49 i49 i49 i49 i49 i49 i49 i50 i50 i50 i50 i50 i50 i50 i50 i50 i50 i50 i51 i51 i51 i51 i51 i51 i51 i51 i51 i51 i51 i52 i53 i54

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

[ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100] [ ( 2^26 ) /44100]

1 1.05 1.1 1.2 1.3 1.4 1.5 1.7 2.1 2.5 3 1 1.05 1.1 1.2 1.3 1.4 1.5 1.7 2.1 2.5 3 1 1.05 1.1 1.2 1.3 1.4 1.5 1.7 2.1 2.5 3 1 1.05 1.1 1.2 1.3 1.4 1.5 1.7 2.1 2.5 3 1 1.05 1.1 1.2 1.3 1.4 1.5 1.7 2.1 2.5 3

1 8 17 51 93 187 333 713 1397 3173 7139 1 8 17 51 93 187 333 713 1397 3173 7139 1 8 17 51 93 187 333 713 1397 3173 7139 1 8 17 51 93 187 333 713 1397 3173 7139 1 8 17 51 93 187 333 713 1397 3173 7139

8 17 51 93 187 333 713 1397 3173 7139 15871 8 17 51 93 187 333 713 1397 3173 7139 15871 8 17 51 93 187 333 713 1397 3173 7139 15871 8 17 51 93 187 333 713 1397 3173 7139 15871 8 17 51 93 187 333 713 1397 3173 7139 15871

4 12 27 71 137 260 523 1055 2285 5156 11505 4 12 27 71 137 260 523 1055 2285 5156 11505 4 12 27 71 137 260 523 1055 2285 5156 11505 4 12 27 71 137 260 523 1055 2285 5156 11505 4 12 27 71 137 260 523 1055 2285 5156 11505

10000 9000 8000 7000 6000 5000 4000 3000 2000 1000 700 10000 9000 8000 7000 6000 5000 4000 3000 2000 1000 700 10000 9000 8000 7000 6000 5000 4000 3000 2000 1000 700 10000 9000 8000 7000 6000 5000 4000 3000 2000 1000 700 10000 9000 8000 7000 6000 5000 4000 3000 2000 1000 700

0.9 0.85 0.8 0.75 0.7 0.65 0.6 0.55 0.5 0.45 0.4 0.9 0.85 0.8 0.75 0.7 0.65 0.6 0.55 0.5 0.45 0.4 0.9 0.85 0.8 0.75 0.7 0.65 0.6 0.55 0.5 0.45 0.4 0.9 0.85 0.8 0.75 0.7 0.65 0.6 0.55 0.5 0.45 0.4 0.9 0.85 0.8 0.75 0.7 0.65 0.6 0.55 0.5 0.45 0.4


0.55 0.51 0.53 0.57 0.52 0.58 0.52 0.53 0.57 0.52 0.54 0.55 0.51 0.53 0.57 0.52 0.58 0.52 0.53 0.57 0.52 0.54 0.55 0.51 0.53 0.57 0.52 0.58 0.52 0.53 0.57 0.52 0.54 0.55 0.51 0.53 0.57 0.52 0.58 0.52 0.53 0.57 0.52 0.54 0.55 0.51 0.53 0.57 0.52 0.58 0.52 0.53 0.57 0.52 0.54

0 1 2 3 4 4 5 5 5 6 6 0 1 2 3 4 4 5 5 5 6 6 0 1 2 3 4 4 5 5 5 6 6 0 1 2 3 4 4 5 5 5 6 6 0 1 2 3 4 4 5 5 5 6 6

1 2 3 4 5 6 7 8 9 10 11 1 2 3 4 5 6 7 8 9 10 11 1 2 3 4 5 6 7 8 9 10 11 1 2 3 4 5 6 7 8 9 10 11 1 2 3 4 5 6 7 8 9 10 11

0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1
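The f2–f21 statements above each load one Houdini .chan export into a Csound function table via GEN23. As a minimal illustrative sketch only (instr 99 is hypothetical and not part of the piece), the orchestra reads these tables at control rate through its global index gkindex, in the manner of the spatialisation instruments shown earlier:

instr 99
kazi    table   gkindex, 9     ; direct-sound azimuth curve loaded by f9
kdist   table   gkindex, 10    ; first-reflection distance curve loaded by f10
; the spatialisation instruments map an azimuth value such as kazi onto
; sin/cos panning gains, and a distance value such as kdist onto delay time,
; filter cutoff and 1/(distance squared) amplitude scaling
endin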


Appendix J – Various Scripts

Various Scripts Used To Create the Music VR DVD Works

This first Perl script was used to create all of the letter nodes in Houdini, from which letters were randomly selected to create the green letters on the walls of the Green Room in Heisenberg:

$i=0;
$a=1;
$n=36;
@char=(1,2,3,4,5,6,7,8,9,0,"q","w","e","r","t","y","u","i","o","p","a","s","d","f","g","h","j","k","l","z","x","c","v","b","n","m");
open(SCRIPT,">D:/andrew/project/scripts/randomfont.hsh");
close(SCRIPT);
while ($a<=$n) {
    open(SCRIPT,">>D:/andrew/project/scripts/randomfont.hsh");
    print SCRIPT "opadd -n font font$a\n";
    print SCRIPT "opparm font$a type ( poly ) installer ( 0 ) file ( \'D:/andrew/fonts/newfonts/ELECTROH.pfa\' ) text ( $char[$i] ) hcenter ( on ) vcenter ( on ) t ( 0 0 0 ) s ( 1.4 1.4 ) kern ( 0 0 ) italic ( 0 ) lod ( 0.3 ) hole ( on )\n";
    print SCRIPT "opwire font$a -$i switch1\n";
    ++$i;
    ++$a;
}
close(SCRIPT);
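For illustration, the first pass through the loop above (with $a = 1 and $i = 0, so $char[$i] is 1) appends the following three lines to randomfont.hsh; later passes repeat the pattern for font2 through font36:

opadd -n font font1
opparm font1 type ( poly ) installer ( 0 ) file ( 'D:/andrew/fonts/newfonts/ELECTROH.pfa' ) text ( 1 ) hcenter ( on ) vcenter ( on ) t ( 0 0 0 ) s ( 1.4 1.4 ) kern ( 0 0 ) italic ( 0 ) lod ( 0.3 ) hole ( on )
opwire font1 -0 switch1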

This next hscript was used to export location information from Houdini to the Lake Huron, and was used in the creation of Schwarzchild's stereo/binaural audio track and early 8-channel performances of Schwarzchild.

# Written by Andrew Lyons 1998.
# Writes out a point's xyz information to a huron.loc format
# for the duration of the period between $FSTART and $FEND.

fcur 10001
fplayback -i off -r off -s 1
echo 'Connect 129.78.86.202' > $HOME/Lake/Schwarz1.loc
while $F < 22000
    echo 'Time' `(($FF-1)/2.5)` >> $HOME/Lake/Schwarz1.loc
    echo 'Loc 1' `point("/obj/geo1/channel1",0,"P",0)` `point("/obj/geo1/channel1",0,"P",2)` `point("/obj/geo1/channel1",0,"P",1)` >> $HOME/Lake/Schwarz1.loc
    echo 'Loc 2' `point("/obj/geo2/channel2",0,"P",0)` `point("/obj/geo2/channel2",0,"P",2)` `point("/obj/geo2/channel2",0,"P",1)` >> $HOME/Lake/Schwarz1.loc
    echo 'Loc 3' `point("/obj/geo3/channel3",0,"P",0)` `point("/obj/geo3/channel3",0,"P",2)` `point("/obj/geo3/channel3",0,"P",1)` >> $HOME/Lake/Schwarz1.loc
    echo 'Loc 4' `point("/obj/geo4/channel4",0,"P",0)` `point("/obj/geo4/channel4",0,"P",2)` `point("/obj/geo4/channel4",0,"P",1)` >> $HOME/Lake/Schwarz1.loc
    fcur `$FF+2.525`
end
echo '# End' >> $HOME/Lake/Schwarz1.loc
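As an indication of the output format only (the coordinate values below are placeholders, since they depend on the animated scene), the resulting Schwarz1.loc opens with a Connect line and then alternates a Time line with four Loc lines per sampled frame; note that the script writes each point's X, Z and Y components in that order, presumably to map Houdini's Y-up coordinates onto the Huron's convention:

Connect 129.78.86.202
Time 4000
Loc 1 <x1> <z1> <y1>
Loc 2 <x2> <z2> <y2>
Loc 3 <x3> <z3> <y3>
Loc 4 <x4> <z4> <y4>
Time 4001.01
...
# End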


Appendix K – Instructions for DVD Use

Instructions For DVD Use

The Music VR DVD will display two menus upon loading. The first is the "Audio selection menu"; the second is the "Program selection menu". This second "Program selection menu" is the "Main Menu" to which you will be returned if you press the "menu" button on your DVD player at any time. If you wish to return to the "Audio selection menu" you will need to eject the DVD, reinitialise the player and reload the DVD into the drive. Here is how to play the DVD:

1) Load the MusicVR DVD in a DVD drive.
2) Allow the DVD to deliver the first "Audio selection menu" without selecting the "menu" button on your DVD player.
3) Choose either "DTS 5.1" or "Stereo/binaural" audio. Stereo/binaural is the default option; DTS 5.1 is the preferred option for examination of the works.
4) The "Program selection menu" will appear, from which to choose an animation.
5) Choose an animation from the four available. The first program is the default, and the DVD will play through all four animations in sequence if left uninterrupted.

PAL Format Only

Please note that this DVD has no region limitations; it is, however, in PAL format only. Many domestic DVD players and televisions in the USA will play PAL format DVDs. If your computer has a DVD drive, you will be able to play the disc there should your domestic system not play PAL DVDs.


13 Bibliography Achterberg, J. (1985). Imagery in healing. Boston: Shambala. New Science Library. Adorno, Theodor W. (1956). Quasi una Fantasia, Essays on Modern Music, (Translated by Rodney Livingstone). London, New York: VERSO. Adorno, Theodor W. (1973) Philosophy of Modern Music. translated by Anne G. Mitchell and Wesley V. Blomster. New York: The Seabury Press. Adorno, Theodor W. (2002) Essays on Music. Richard Leppert (ed.) Susan H Gillespie (trans.) University of California Press. Anderson, Joseph D. (1996). The Reality of Illusion: An Ecological Approach to Cognitive Film Theory. Carbondale and Edwardsville: Southern Illinois University Press. Armstrong, E. (1990) “Evolution of the brain”, in G. Paxinos, (ed), The Human Nervous System. San Diego:Academic Press. Armstrong, E. (1991) “The Limbic System and Culture: An Allometric Analysis of the Neocortex and Limbic Nuclei.” Human Nature. 2:117 Arnheim, Rudolf. (1986) New Essays on the Psychology of Art. Berkeley, Los Angeles and London: University of California Press. Baron-Cohen, S., and J. Harrison. eds. (1996). Synesthesia: Classic and Contemporary Readings. Oxford: Blackwells. Barsalou, L.W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577-609. 275

Time Space Texture: An Approach to Audio-Visual Composition

Barsalou,

L.W.

(2003).

Perceptual

Bases

of

Cognition.

(http://userwww.service.emory.edu/~barsalou/Research/researchperception.html) Referenced 16-1-2003. Bartlett, F.C., (1932), Remembering: A study in experimental and social psychology, London: Cambridge University Press. Battey Bret. (1998). An investigation into the Relationship between Language, Gesture and Music. http://staff.washington.edu/bbattey/Ideas/lang-gestmus.html Begault, Durand R. (1994). 3D Sound for Virtual Reality and Multimedia. Cambridge, MA. Academic Press. Block N (ed) (1981a) Imagery. Cambridge MA: MIT Press. Block N (ed) (1981b) Readings in Philosophy of Psychology, Vol. 2. London: Methuen. Blood, A.J. & Zatorre, R.J. (2001). “Intensely pleasurable responses to music correlate with activity in brain regions implicated with reward and emotion.” Proceedings of the National Academy of Sciences, 98, 1181811823. Bonny, H. (1989). "Sound as Symbol: Guided Imagery and Music in Clinical Practice." Music Therapy Perspectives 6: 7-10. Bonny, H. (1993). "Body Listening: A new way to review the GIM tapes." Journal of the Association for Music and Imagery 2: 3-13. Bonny, H. L. (1987). "Reflections: Music - the language of immediacy." The Arts in Psychotherapy 14(3): 255-261.

276

Time Space Texture: An Approach to Audio-Visual Composition

Bonny, Helen Lindquist. (1994). “Twenty-one Years Later: A GIM Update.” Music Therapy Perspectives, 12(2), 70-74. Boulanger, Richard. (Ed.) (2000). The CSound Book: Synthesis, Composition, and Performance, Cambridge, MA: MIT Press. Bragdon, Claude. (1978, (c1939)). The Beautiful Necessity - Architecture as frozen Music. Wheaton, Ill.: Theosophical Pub. House. Bregman, Albert S. (1990). Auditory Scene Analysis: The Perceptual Organisation of Sound. Cambridge, Mass.: Bradford Books, MIT Press. Brittanica.

(2003).

Aesthetics.

Britannica

Premium

Service.

(http://www.britannica.com/eb/article?eu=108463) Referenced 25 Feb, 2003.

Also

at:

http://cyberspacei.com/jesusi/inlight/philosophy/aesthetics/Aesthetics.ht m Brower, Candace. (1997). "Pathway, Blockage, and Containment in 'Density 21.5'," Theory and Practice 22. New York: Columbia University Press. Campbell, Lawrence. (2002). The Sapir-Whorf Hypothesis. Referenced 21-112002. http://venus.va.com.au/suggestion/sapir.html Chalmers, David J. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness

Studies

2(3):200-219.

http://www.u.arizona.edu/~chalmers/papers/facing.html Chion, Michel. (1983). Guide des objets sonores - Pierre Schaeffer et la Recherche Musicale. Paris: Buchet Chastel.

277

Time Space Texture: An Approach to Audio-Visual Composition

Cho, Young-soo. (2001) "The influence of music on individual mental imagery." Korean

Journal

of

Music

Therapy.

Vol.

3,

No.

1,

pp.

31.49

http://www.ouimoi.com/mt/KAMT.htm Cho, Young-soo. (2002). A Bibliography of UMI Dissertations Dealing with Mental Imagery. Referenced 3-5-2002. http://www.ouimoi.com/mt/UMI1.htm Chowning, J. (1971). "The simulation of moving sound sources." Journal of the Audio Engineering Society. Vol 19, No.1. Clifton, Thomas. (1983). Music as heard : a study in applied phenomenology. New Haven : Yale University Press. Cook, Nicholas. (1990). Music, Imagination, and Culture. Oxford: Clarendon Press. Cook, Nicholas. (2002). 'Epistemologies of Music Theory'. In Thomas Christensen (ed.), The Cambridge History of Western Music Theory. Cambridge University Press. pp.78-105 Croce, Benedetto. (1922). Aesthetic as science of expression and general linguistic. Translated from the Italian by Douglas Ainslie. New York: Farrar, Straus and Giroux. Crowder, R.G; Pitt, M.A.; (1992). “Research on Memory/Imagery for Musical Timbre.” In. Reisberg, D. (Ed.) Auditory Imagery. Hillsdale, New Jersey: Lawrence Erlbaum. Cytowic, Richard E. (1989). Synesthesia: a Union of the Senses, New York: Springer Verlag. Dailey, Audrey, R. (1995). Creativity, Primary Process Thinking, Synesthesia, and Physiognomic Perception. Unpublished doctoral dissertation. University of Maine.

278

Time Space Texture: An Approach to Audio-Visual Composition

Davies, John Booth. (1978). The Psychology of Music. Stanford University Press. Dennett, D, C. (1988). “Quining qualia.” In Marcel A and Bisiach E (eds) Consciousness in Contemporary Science, pp. 42-77. New York: Oxford University Press. Reprinted in Lycan WG (1990) Mind and Cognition: A Reader, pp. 519-47. Cambridge: Blackwell. Deutsch, Diana. (1984). “Musical space”’. In R. Crozier and T Chapman (Eds) Cognitive Processes in the Perception of Art, New York: North Holland. Dodge, Charles & Jerse, Thomas A. (1997). Computer music : synthesis, composition, and performance. 2nd ed. New York : Schirmer Books ; London : Prentice Hall International. Erdonmez Grocke, D. E. (1999a). A Phenomenological Study of Pivotal Moments in Guided Imagery and Music (GIM) Therapy. Unpublished Doctoral Dissertation. University of Melbourne. Erdonmez Grocke, D.E.. (1999b). The music that underpins pivotal moments in Guided Imagery and Music. In T.Wigram and J.De Backer (Eds.) Clinical Applications of Music Therapy in Psychiatry. London: Jessica Kingsley. p. 197-210 Erdonmez, D. E. (1993). Music: A mega vitamin for the brain. In M. Heal and T. Wigram (Eds.) Music Therapy in Health and Education. London: Jessica Kingsley. Fodor, JA. (1975). The Language of Thought. New York: Crowell. Fodor, J. A. (1983). The Modularity of Mind. Bradford Books. MIT Press, Cambridge, MA.

279

Time Space Texture: An Approach to Audio-Visual Composition

Foucault, Michel. (1970). The order of things: An archaeology of the human sciences. Random House-Pantheon, NY. Furniss, Tom. (1993). Edmund Burke's Aesthetic Ideology. Cambridge University Press. Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin. Glass, Philip. (1974). The programme note for "Music In Twelve Parts" from the programme booklet for the "Meta-Music Festival", Berlin 1974. Godøy, R. I. (1999). “Cross-modality and conceptual shapes and spaces in music theory”. In I. Zannos (Ed.), Music and Signs (pp. 85-98). Bratislava: ASCO Art & Science. Godøy, R.I. & H. Jørgensen (Eds.), (2001). Musical Imagery. Lisse (Holland): Swets & Zeitlinger. Goldberg, F. S. (1992). Images of emotion: The role of emotion in Guided Imagery and Music. Journal of the Association for Music and Imagery, 1, 5-17. Goldberg, Frances Smith. (1995). “The Bonny Method of Guided Imagery.” In, Wigram, T; Saperston, B; West, R. (Eds.) The Art and Science of Music Therapy. Harwood Academic Publishers. Goldstone,

R.

L.,

&

Barsalou,

L.

(1998).

“Reuniting

perception

and

conception”. In S. A. Sloman and L. J. Rips (Eds.) Similarity and symbols in human thinking. (pp. 145-176). Cambridge, MA: MIT Press. Graves, Maitland. (1951). The Art of Color and Design. New York: McGraw-Hill.

280

Time Space Texture: An Approach to Audio-Visual Composition

Gregory, R. L., Haugeland, John. (1987). “Intentionality” in R. L. Gregory, ed., The Oxford Companion to the Mind, Oxford University Press. Grout, Donald. (1988). Palisca, Claude, V. A History of Western Music. London: WW Norton and Company. Hofstadter, Albert, and Richard Kuhns, eds. (1964). Philosophies of Art and Beauty, Selected Readings In Aesthetics From Plato to Heidegger. Chicago: University of Chicago Press. Holt, R, R. (1964). “Imagery: The return of the ostracized.” American Psychologist

19:

254-266.

Accessed

Jan

6

2003.

http://www.philosophypages.com/hy/6w.htm#quine Huron, D. (1999). “Music838 Exam Questions and Answers.” Ohio State University.

http://www.music-cog.ohio-

state.edu/Music838/exam_questions_answers.html#Cognition Ihde, D. (1986). Experimental Phenomenology: An Introduction. Albany: State University of New York. Ihde, Don. If phenomenology is an albatross, is postphenomenology possible? Stony Brook Research Philosophy Department. Viewed 19-11-2002. http://www.sunysb.edu/philosophy/new/research/ihde_3.html . Ihde, Don. (1976). Listening and Voice: A Phenomenology of Sound. Ohio University Printers. Ingarden, Roman. (1986). The work of Music and the Problem of its identity. Translated by Adam Czerniawski. Berkeley: University of California Press.

281

Time Space Texture: An Approach to Audio-Visual Composition

Janata, p. (2001). Neurophysiological mechanisms underlying auditory image formation in music. In R. I. Godøy & H. Jørgensen (Eds.), Musical Imagery. Lisse: Swets & Zeitlinger Publishers. Juslin, P. & Sloboda, J. A. (Eds.). (2001). Music and Emotion: Theory and Research. Oxford: Oxford University Press. Kandinsky, W. (1982 (c1912)). "On the Spiritual in Art" in Kandinsky: Complete Writings on Art, edited and translated by K.C. Lindsay and p. Vergo. London: Faber and Faber. Kendall, Gary S. (1995). “The Decorrelation of Audio Signals and Its Impact on Spatial Imagery.” Computer Music Journal. 19(4) p.71 Kleinginna, p. R. and Kleinginna, A. M. (1981). “A Categorized List of Emotion Definitions with Suggestions for a Consensual Definition” Motivation and Emotion. 5, pp.345-355. Kosslyn, S,M. (1980). Image and Mind. Cambridge, MA: Harvard University Press. Kosslyn, S.M. (1994). Image and Brain: The resolution of the imagery debate. Cambridge, MA: MIT Press. Kosslyn.

S.M.,

M.

Behrmann,

and

M.

Jeannerod,

Eds.

(1995).

The

Neuropsychology of Mental Imagery. New York: Pergamon. Lang,

P.

J.

(1979).

“A

bio-information

theory

of

emotional

imagery”.

Psychophysiology, 16, 495-512. Lem, A. (1998). “EEG reveals potential connections between selected categories of imagery and the psycho-acoustic profile of music.” The Australian Journal of Music Therapy. Volume 9. p.3.

282

Time Space Texture: An Approach to Audio-Visual Composition

Leman, Marc. (1999). “Adequacy Criteria for Models of Musical Cognition.” In J.N. Tabor (Ed.) Navigating New Musical Horizons. Westport CT: Greenwood Publishing Company. Lerdahl, Fred and Jackendoff, Ray. (1983). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press. Lewis, JW; Beauchamp, MS; DeYoe, EA. (2000). "A comparison of visual and auditory motion processing in human cerebral cortex." Cerebral Cortex Sep;10(9):873-88. Lyons, Andrew D. (1992). Sine Qua Non. (Video Music Theatre piece) duration: 12 minutes. Lyons, Andrew D. (1994). To Know, To Dare, To Will, To Be Silent. (Video Music Theatre piece) duration: 12 minutes. Lyons, Andrew D. (1999). A Course in Applied Musicography. Unpublished report. http://www.users.bigpond.com/tstex/Musicography.html Lyons,

Andrew

D.

(2000).

Gestalt

Approaches

to

the

Gesamtkunstwerk.

Unpublished paper. http://www.users.bigpond.com/tstex/gestalt.htm Lyons, Andrew D. (2002). “Abstractly Related and Spatially Simultaneous Auditory-Visual Objects.” Proceedings of the 2002 Australasian Computer Music

Conference.

pp.

71-81.

http://www.users.bigpond.com/tstex/ACMA2002.htm Lyons. Andrew D. (2001). “Synaesthesia: A Cognitive Model of Cross Modal Association." Consciousness, Literature and the Arts. Spring 2001. http://www.users.bigpond.com/tstex/synaesthesia.htm

283

Time Space Texture: An Approach to Audio-Visual Composition

Margolis, J. (1980). Art and philosophy. Atlantic Highlands, N.J.: Humanities Press. Marks, L. E. (1978). The Unity of the Senses: Interrelations among the Modalities. New York: Academic Press. Martin, John H. (1991). "Coding and Processing of Sensory Information" in Eric R Kandel, James H. Schwartz and Thomas M. Jessel. (eds). Principles of Neural Science. London: Prentice Hall. Martindale, C. (1989). "Personality, Situation and Creativity." In, J.A. Glover, R.R. Ronning, and C.R. Reynolds (Eds.) Handbook of Creativity. New York: Plenum. Martindale, C. (1991). “Creative imagination and neural activity”. In R. Kruzendorf & A. Sheikh (Eds.) Psychophysiology of Mental Imagery: Theory, Research and Application. Amityville, NY: Baywood. Martindale, C. and Hines, D. (1975). “Creativity and cortical activity activation during

creative,

intellectual,

and

EEG

feedback

tasks”.

Biological

Psychology, 3, 71-80. Mattis, Olivia. (1992). Edgard Varese and The Visual Arts. Unpublished Dissertation. Stanford University. McGregor , Rob. (1999). Practical C++. QUE publications. McNeill, David. (1992). Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press. Merleau-Ponty, Maurice. (1962). Phenomenology of Perception. Translated by Colin Smith. London: Routledge and K. Paul.

284

Time Space Texture: An Approach to Audio-Visual Composition

Meyer, Leonard B. (1956). Emotion and Meaning in Music. Chicago: Chicago University Press. Meyer, Leonard B. (1967). Music, the Arts and Ideas: Patterns and Predictions in Twentieth Century Culture. The University of Chicago Press, Chicago. Minsky, Marvin and Laske, Otto. (1991). A Conversation with Marvin Minsky. http://web.media.mit.edu/~minsky/papers/Laske.Interview.Music.txt Referenced: April 17, 2004 Minsky, Marvin. (2004). The Emotion Machine. Referenced: April 18, 2004. http://web.media.mit.edu/~minsky/E1/eb1.html Moore, Richard. F. (1990). Elements of computer music. Englewood Cliffs, N.J : Prentice Hall. Nakamura,

S.,

et

simultaneous

al.

(1999).

“Analysis

measurement

of

of

regional

music-brain cerebral

interaction blood

flow

with and

electroencephalogram beta rhythm in human subjects.” Neuroscience Letters, 275 (3 ), 222 –226. Nattiez, Jean-Jacques. (1990). Music and Discourse: Toward a Semiology of Music. Translated from French by Carolyn Abbate. Princeton University Press. Norman,Don. (2004). Emotional Design: Why we love (or hate) everyday things. New York: Basic Books. Ong, Tze-Boon. (1994). Music as a generative process in Architectural form and space composition. Unpublished Doctoral Dissertation. Rice university: Houston, Texas. Ortony, A., Clore, G.L., and Collins, A. (1988). The Cognitive Structure of Emotions. New York: NY: Cambridge University Press.

285

Time Space Texture: An Approach to Audio-Visual Composition

Osborne,

John

W.

Representation

(1989). of

“A

Extra

Phenomenological Musical

Ideas.”

Investigation

Journal

of

of

the

Experimental

Psychology. Vol 20(2). p.151-176. Padham, Max. (1996). Adorno Modernism & Mass Culture – essays on critical theory and music. London: Kahn and Averill. Paivio A (1971). Imagery and Verbal Processes. New York: Holt, Rinehart & Winston. Paivio A (1986). Mental Representations: A Dual Coding Approach. New York: Oxford University Press. Paivio, A. (1978d). “The relationship between verbal and perceptual codes”. In E.C. Carterette & M.p. Friedman (Eds), Handbook of perception: Vol. VIII, Perceptual coding (pp. 375-397). New York: Academic Press. Paivio, A. (1978). “The relationship between verbal and perceptual codes.” In E.C. Carterette and M.p. Friedman (Ed.) Handbook of Perception: Vol VIII.New York: Academic Press. Palombini, Carlos. (1993). Pierre Schaeffer’s Typo Morphology of Sonic Objects. University of Durham. Unpublished Doctoral Dissertation. Platel, H., Price, C., Baron, J. C., Wise, R., Lambert, J., Frackowiak, R. S. J., Lechevalier, B., & Eustache, F. (1997). The structural components of music perception - A functional anatomical study. Brain, 120, 229-243. Poincare, H. (1970). “Mathematical creation”. In; Vernon, p. E. (Ed.) Creativity: Selected Readings. Middlesex, England: Penguin. Priest, Stephen. (1998). Mearleau Ponty. London: Routledge.

286

Time Space Texture: An Approach to Audio-Visual Composition

Prusinkiewicz, Przemyslaw. (1990). The algorithmic beauty of plants. New York, Berlin : Springer-Verlag. Pylyshyn, Z. W. (1978). “Imagery and artificial intelligence.” from C.W. Savage, ed., Perception and Cognition. Issues in the Foundations of Psychology, Minnesota Studies in the Philosophy of Science, vol. 9, Minneapolis: University of Minnesota Press) pp. 19-55 Quittner, A.L. (1980). The facilitative effects of music on mental imagery: A multiple measures approach. Unpublished masters thesis. Florida State University, Tallahassee, Florida. Raffman, Diana. (1993). Language Music and Mind. Cambridge, M.A.: MIT Press. A Bradford Book. Reisberg, D. (Ed.) (1992). Auditory Imagery. Hillsdale, New Jersey: Lawrence Erlbaum. Richardson, JTE. (1999). Mental Imagery. Hove, U.K.: Psychology Press. Rider MS and Achterberg J: Effect of music-assisted imagery on neutrophils and lyphocytes. Boifeedback and Self-Regulation 14(#3):247-257, (1989. Rips, L.J., (1989). “Similarity, typicality, and categorization.” In: Vosniadou, S., Ortony,

A.

(Eds.)

Similarity

and

Analogical

Reasoning.

Cambridge

University Press, Cambridge. Risset, Jean-Claude. (1989). “Paradoxical sounds”. In Max V. Mathews and John R. Pierce, editors, Current Directions in Computer Music Research. Cambridge, Massachusetts : MIT Press. Roads, Curtis. et al. (eds.) (1997). Musical signal processing. Exton, PA: Swets & Zeitlinger. 287

Time Space Texture: An Approach to Audio-Visual Composition

Rollins, M. (1989). Mental Imagery: On the Limits of Cognitive Science. New Haven, CT: Yale University Press. Rothfarb,

Lee.

(1992).

"Hermeneutics

and

Energetics:

Music-Theoretical

Alternatives in the Early 1900's" Journal of Music Theory. (36). Yale University Press. Russcol, Herbert, (1972). The Liberation of Sound : An Introduction to Electronic Music , Englewood Cliffs, N.J., Prentice-Hall. Sartre, Jean Paul. (1972). The Psychology of the Imagination. Secaucus, New Jersey: Citadel Press. Scherer, K. R., & Zentner M. R. (2001). “Emotional effects of music: production rules.” In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion (pp. 361392). Oxford: Oxford University Press. Schneider, A. & Godøy, R. I. (2001). Perspectives and Challenges of Musical Imagery. In R. I. Godøy & H. Jørgensen (Eds.), Musical Imagery (pp. 237250). Lisse (Holland): Swets & Zeitlinger. Schoenberg ,Arnold. (1911). Harmonielehre, Universal Edition, Wien, (1978; translated by R. E. Carter, Theory of Harmony, Faber, London. Scruton, R. (1983). The aesthetic understanding. New York: Methuen. Shaywitz, et al. (1995). “Sex differences in the functional organization of the brain for language”. Nature ,373:607 –9. Shepard, RN, and Cooper, L. (1982). Mental Images and Their Transformations. Cambridge, MA: MIT Press.

288

Time Space Texture: An Approach to Audio-Visual Composition

Shepard, Roger N. (1964). "Circularity in Judgements of Relative Pitch." Journal of the Acoustical Society of America, 36(12): 2346-53.
Slawson, Wayne. (1985). Sound Color. University of California Press.
Slezak, P. (1995). "The 'philosophical' case against visual imagery." In Slezak, P., Caelli, T. and Clark, R. (Eds.), Perspectives on Cognitive Science, pp. 237-271. Norwood, NJ: Ablex.
Sloman, Steven A. and Rips, Lance J. (Eds.) (1999). Similarity and Symbols in Human Thinking. Cambridge, Mass.: MIT Press.
Sloman, Steven A. and Rips, Lance J. (1998). "Similarity as an explanatory construct." Cognition, (65): 87-101. Elsevier.
Smalley, Denis. (1996). "The Listening Imagination: Listening in the Electroacoustic Era." Contemporary Music Review, 13(2): 77-107. Harwood Academic Publishers.
Smalley, Denis. (1997). "Spectromorphology: Explaining Sound-Shapes." Organised Sound, 2(2). Cambridge University Press.
Smalley, Denis. (1986). "Spectro-morphology and Structuring Processes." In S. Emmerson (Ed.), The Language of Electroacoustic Music. London: Macmillan, pp. 61-93.
Spiegelberg, H. (1965). The Phenomenological Movement: A Historical Introduction. The Hague: Martinus Nijhoff.
Spurling, Laurie. (1977). Phenomenology and the Social World: The Philosophy of Merleau-Ponty and its Relation to the Social Sciences. London: Routledge & Kegan Paul Ltd.
Stein, Barry M. and Meredith, M. Alex. (1993). The Merging of the Senses. Cambridge, Mass.: MIT Press.
Stewart, R. J. (1987). Music and the Elemental Psyche. Wellingborough: Aquarian Press.
Strawson, P. F. (1959). Individuals. London: Methuen.
Summer, Lisa. (1985). "Imagery and Music." Journal of Mental Imagery, 9(4). New York: Brandon House.
Tarasti, Eero. (1994). A Theory of Musical Semiotics. Indiana University Press.
Thomas, Nigel J. T. (2002). Mental Imagery, Philosophical Issues About. http://www.calstatela.edu/faculty/nthomas/mipia.htm Referenced 19-11-2002.
Thomas, N. J. T. (1999). "Are Theories of Imagery Theories of Imagination? An Active Perception Approach to Conscious Mental Content." Cognitive Science, 23: 207-245. Online at: http://www.calstatela.edu/faculty/nthomas/im-im/im-im.htm
Treib, Marc. (1996). Space Calculated in Seconds. Princeton, N.J.: Princeton University Press.
Vaknin, Sam. (2003). Intuition. http://samvak.tripod.com/intuition.html Referenced February 2003.
Walker, R. (1987). "The effects of culture, environment, age, and musical training on choice of visual metaphors for sound." Perception & Psychophysics, 42(5): 491-502.
Wallin, Nils Lennart. (1991). Biomusicology: Neurophysiological, Neuropsychological and Evolutionary Perspectives on the Origins and Purposes of Music. Stuyvesant, NY: Pendragon Press.
Weber, Max. (1958). The Rational and Social Foundations of Music. Translated by Don Martindale, Gertrude Neuwirth and Johannes Riedel. Southern Illinois University Press.
Welch, Robert R. and Warren, David H. (1980). "Immediate Perceptual Response to Intersensory Discrepancy." Psychological Bulletin, 88(3): 638-667.
Werner, H. (1948). Comparative Psychology of Mental Development. New York: International Universities Press, Inc.
Wertheimer, Max. (1938). "Laws of Organization in Perceptual Forms." In Ellis, W. (Ed.), A Source Book of Gestalt Psychology. London: Routledge & Kegan Paul.
Wishart, Trevor. (1985). On Sonic Art. York: Imagineering Press.
Yates, F. A. (1966). The Art of Memory. London: Routledge & Kegan Paul.
Yi, Dae-Am. (1991). Musical Analogy in Gothic and Renaissance Architecture. Unpublished doctoral dissertation. University of Sydney: Sydney, Australia.
Zatorre, R. J. (2001). "Neural specializations for tonal processing." Annals of the New York Academy of Sciences, 930: 193-210.
Zuckerkandl, V. (1956). Sound and Symbol: Music and the External World. Translated by W. R. Trask. New York: Pantheon Books.
Zuidervaart, Lambert. (2002). Kant and Adorno in Hollywood: From Aesthetics to Cultural Theory. http://isis.csuhayward.edu/alss/alss/phil/ctr/Zuidervaart_abstract.rtf Referenced 24-11-2002.