Virtual Acoustic Technology: Its Role in the Development of an Auditory Navigation Beacon for Building Evacuation

 

Peter Rutherford

University of Strathclyde

Department of Architecture and Building Science

131 Rottenrow,

Glasgow G4 0NG

Tel: (0141) 552 4400 extension 3017

Fax: (0141) 552 3997

e-mail: P.Rutherford@strath.ac.uk

This paper addresses the issue of escape from unfamiliar, smoke-filled buildings such as department stores or hotels, where being blinded by smoke may result in the deaths of occupants. It proposes that concise auditory information may be applied to the escape procedure. To assess the feasibility of this proposal, predictive virtual acoustic techniques have been employed.

Keywords: navigation, localization, binaural room simulation

INTRODUCTION

The application of sound as an aid and safeguard for navigation has a long and varied history. From the days when whistles and bells served their purpose as warning signals to the wary mariner, the auditory mechanism has been used to resolve the deficiencies imposed by the visual system. Indeed, it may be stated that the ability to localize sound in enclosed environments is a very important function of the auditory system of humans. If, for example, one is sharing a dark cave with a sabre-toothed tiger, the ability to obtain the location of the tiger by listening to its growl is of considerable value!

However, because our brains and whole way of life are built around light and vision, we fear the prospect of blindness; of being trapped in darkness, debilitated and, as a result, helpless. One such example of complete visual deprivation is being trapped in an unfamiliar building which is on fire and smoke filled. The victim is familiar neither with the escape exits nor with the location of the fire itself, and is thus unable to form any kind of escape plan.

 

Aim

The aim of this paper is therefore to address this issue of escape from such buildings, proposing that it is possible to apply concise auditory information to the escape procedure, guiding the building's occupants to safety. Underlying the whole paper is a description of the predictive virtual acoustic techniques which were used to evaluate the feasibility of such a proposal. Using psychoacoustic modelling and acoustically 'rendering' the escape routes, a series of experiments was created which demonstrated the success of the navigation beacon. Clearly, exposing subjects to the dangers of a real fire situation in order to achieve an optimised navigation beacon is not an acceptable way to undertake research. Virtual acoustic technology therefore allowed such research to be conducted in the safety of the office, allowing for a far greater range of proposals than would normally have been available.

 

1. Building Egress and Wayfinding Conditions

In order to develop an auditory navigation beacon, it is necessary to look at several factors which influence successful evacuation from buildings. The following briefly outlines these factors.

As Passini states (1992, p.22), "Buildings are often difficult for wayfinding under normal conditions, but very particular and often critical wayfinding problems occur when buildings have to be urgently evacuated as in the case of fires." Familiarity with one's surroundings is therefore greatly needed when a sudden exit has to be made (Malhotra, 1987, p.57). If the routes to the various exits are known, then even when some exits are blocked by smoke the alternatives are not so difficult to find. However, where the occupants are not familiar with their surroundings, movement will be slow and wayfinding will prove difficult. It must be kept in mind that, unlike a drawing, no overall view of the complete escape route is available to a person walking along a corridor. Signs may be available to aid sighted navigation, but if the corridor is completely smoke filled then they are in themselves useless.

In addition, within a smoke-filled corridor the choice of escape action is restricted, resulting in hysteria or panic. Various authors agree (Sime, 1985, pp.697-724; Canter, 1980) that the "association of panic and fires does not hold as long as an escape route appears feasible to the victim." However, panic may be induced if 'there is a limited number of escape routes … some of these routes are affected by fire and smoke … some of these routes are as a result blocked' (Marchant, 1973). So, how does anyone stand any chance of escape under such a situation? With very limited wayfinding cues, and with many precipitating factors such as delayed warning and ambiguous messages, victims really do not stand any chance of escape. Consider for example the conventional fire alarm. It has been shown to be among the most inefficient means of getting people to leave a building (Proulx, 1991). It is too loud, over-alarming, unlocalizable and, worst of all, effectively cuts out people's ability to communicate verbally with each other.

 

There are methods available for aiding navigation, such as speech-based evacuation systems, emergency lighting and luminous escape systems placed on floors and skirting boards (Krokeide, 1988), but they may themselves be useless. Firstly, it is a misconception that people drop to the floor in a fire situation, as has been consistently shown in aircraft evacuation. Secondly, in the presence of irritant gases, such as ammonia-based combustion products, people cannot open their eyes.

 

2. Criteria Imposed on the Design of the Navigation Beacon

Having established a series of influencing factors which govern the need for a navigation beacon, it was necessary to draw up a shortlist of criteria which were to be the basis for its development. The following outlines these criteria:

 

3. Basic Theory of 3-D Sound Localization

Before describing the experiments which were conducted for this research, a brief introduction to the theories associated with three-dimensional sound localization will be given. As the victim of a fire situation must make wayfinding decisions in all directions (front, back, left, right), this section explains the processes involved. Although vertical (or elevational) localization will be discussed, it was not considered to be an important property for the navigation beacon. This is because in Great Britain, and most of Europe, people escape to the ground floor, unlike the USA where many buildings involve roof-top evacuation.

Auditory localization is in itself a relatively simple concept to explain. However, in order to describe it, some differentiation must be made between localization in the azimuthal (left / right) plane and localization in the vertical (front / overhead / back) plane.

3.1 Localization in the Horizontal Plane

Much of the research on human sound localization has derived from the classic duplex theory of Lord Rayleigh, which emphasised the role of two primary cues: interaural differences in time of arrival (ITDs) and interaural differences in intensity (IIDs). As will be described, these lateralization cues are highly frequency dependent. Figure 1 clearly indicates this theory.

Figure 1. The localization cues of interaural time an d intensity differences. 

The most important property of a bundle of sound waves propagating from source to receiver is the arrival of the first sound waves at the two ears. For example, source A is straight in front of the listener, so both the arrival time and the intensity are equal at the two ears, the path lengths being the same. However, source B is at 30° azimuth (to the right of the listener), thus the paths are now unequal: the sound at the left ear arrives not only later, but also quieter, than at the right. It is this path difference which is the basis for both the ITD and the IID. As mentioned, these cues are highly frequency dependent. For example, the ITD cue relates to the hearing system's ability to detect interaural phase differences below approximately 1 kHz. IIDs, however, do not work at such low frequencies, as sound diffracts quite easily around the head in this range and so suffers no significant attenuation. For IIDs to come into play, the waveform must have components with wavelengths smaller than the diameter of the head. Essentially, frequencies greater than 1.5 kHz will be attenuated at the left ear because the head acts as an obstacle, creating a shadow effect on the ear furthest from the source. As frequency gets higher (and wavelength smaller), the shadow gets greater (Middlebrooks and Green, 1991). Work done by Wightman and Kistler (1992) concluded that the interaural time difference was the dominant cue for lateralization, overriding interaural intensity differences if necessary.
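The ITD described above can be sketched numerically. The following is a minimal illustration (not part of the original experiments) using Woodworth's classic spherical-head approximation; the head radius of 8.75 cm and speed of sound of 343 m/s are assumed textbook values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source,
    via Woodworth's spherical-head approximation: (a/c)(theta + sin theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# Source B at 30 degrees azimuth: the far ear receives the wavefront
# roughly a quarter of a millisecond after the near ear.
itd = woodworth_itd(30.0)
```

For the 30° source of figure 1 this gives an ITD of about 260 microseconds, comfortably within the range the auditory system can resolve at low frequencies.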

The previous example can be likened to a person coming to a junction in a corridor, having to make a left or right decision as to where they are going. However, the question arises as to the possibility of the sound originating from behind them. In this case, both the ITD and the IID are the same as they would be from the front, so what does the ear do to disambiguate front from back?

3.2 Localization in the Vertical Plane and from Behind

The ear's ability to disambiguate sources from front to back, or from above and below, in cases where ITDs and IIDs cannot support this information is described by the role of spectral cues in localization. These spectral cues are provided by the way in which sound is absorbed and diffracted by the outer ear (or pinna) and torso. More specifically, the helix of the pinna (or folds) induces minute timing (phase) delays within a range of 0-300 microseconds, causing the spectral content of the sound at the eardrum to be slightly different to that of the source. Once again, these phase delays are highly frequency dependent. These microtime delays, resonances and diffractions can be translated into a mathematical model of the ear known as a Head Related Transfer Function, or HRTF (Wenzel et al., 1993; Begault, 1994, pp.51-82). This HRTF is different for each position of the sound source in relation to the listener, and also varies from listener to listener. The remaining sections discuss the experiments undertaken, which are based upon these theories.
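In the time domain an HRTF becomes a pair of head-related impulse responses (HRIRs), and spatialization reduces to convolution. The sketch below is illustrative only: the two-sample "HRIRs" are toy values standing in for measured data, and any real system (such as the Beachtron used later) would use responses hundreds of samples long.

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the left- and
    right-ear head-related impulse responses for one source position."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: the right ear leads by one sample and is louder,
# crudely mimicking a source to the listener's right.
hrir_l = np.array([0.0, 0.6])
hrir_r = np.array([1.0, 0.0])
stereo = apply_hrtf(np.array([1.0, 0.0, 0.0]), hrir_l, hrir_r)
```

Played over headphones, the delayed, attenuated left channel would be heard as a source displaced to the right.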

4. Experiment 1: An Investigation into 3-D Localization

This first experiment concentrated on the subject's ability to localize sound in free-field conditions for the following purposes:

It was thought that this first experiment would prove to be an important stepping stone in the development of the navigation beacon by giving a good idea as to what types of signals would prove the best for localization.

4.1 Subjects and Apparatus

Thirteen subjects (nine males and four females), all from the ABACUS unit of the department, participated in the experiment. All reported normal hearing. The experiment took place in a normal reverberant room (10 x 5 metres) which was partitioned off, using cloth-covered fibreboard, into a working area of 2 x 2 metres.

A Pentium PC was used for the experiment, with a spatializer board (the Beachtron) from Crystal River Engineering. Various signals were created using Syntrillium Software's Cool Edit for spatialization into three dimensions by the Beachtron card. The card was programmed in C, and its output sound files were dumped to Digital Compact Cassette, which gives a very high quality digital recording. This was necessary because, due to the DOS-based nature of the Beachtron, there was no way in which it could be used to play and record simultaneously. These waveforms were then dumped back into Cool Edit and trimmed to their desired lengths, making sure that the onset portion of the wave was not truncated in any way.

The main interface for the experiment was created in Visual Basic, which replayed the previously spatialized wavefiles over headphones (figure 2). The experiment was therefore non-real-time and had no tracking abilities. Essentially, the subjects played a wavefile from the right-hand menu and tried to judge its location on the hemisphere. Their response was then logged by the program for further analysis. The whole experiment presentation was as follows:

These frequencies were chosen so that the researcher could identify any possible spectrally dependent characteristics of auditory localization.

Figure 2. The Visual Basic user interface. On the right are the wavefile play buttons, on the left the hemisphere of possible locations.

4.2 Results and Discussion

The experiment generally did not work as well as expected. However, it did give some interesting points for discussion.

As this work was free-field and took no account of the nature of the space, which would affect localization, a further experiment was devised to find the optimum signal for the navigation beacon. The following section describes this experiment and the results gained.

5. Experiment 2: Influence of the Environment on the Navigation Beacon

It is known that the environment through which sound propagates strongly influences the ability to localize a sound source. In fact, the term precedence effect (or law of the first wavefront) has been used to describe such a phenomenon (Blauert, 1983). In essence, the precedence effect describes an important inhibitory mechanism of the auditory system that allows one to localize sound in the presence of reverberation. The second suite of experiments in this research therefore examined localization within an environmental context using prediction tools called EASE and EARS. These tools are used for binaural room simulation.
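The kind of stimulus used in classic precedence-effect demonstrations is easy to sketch: an identical "reflected" copy of a short burst is added a few milliseconds after the original, and listeners fuse the two, localizing at the leading burst. This is purely illustrative; the delay, gain and burst shape below are arbitrary choices, not parameters from this research.

```python
import numpy as np

def lead_lag_pair(fs=44100, lag_ms=5.0, lag_gain=0.5, click_len=32):
    """Build a lead/lag pair of identical short bursts: the lagging copy
    stands in for a wall reflection arriving lag_ms after the direct sound.
    Listeners localize at the leading burst (the precedence effect)."""
    click = np.hanning(click_len)                 # a short, smooth burst
    lag = int(fs * lag_ms / 1000.0)
    out = np.zeros(lag + click_len)
    out[:click_len] += click                      # direct (leading) sound
    out[lag:lag + click_len] += lag_gain * click  # delayed 'reflection'
    return out
```

As experiment 2 shows, when a room's reflections arrive in patterns that confuse this fusion mechanism, localization suffers.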

5.1 Binaural Room Simulation

Binaural room simulation allows us to authentically expose a person to a certain acoustic environment in order to evaluate its performance. Traditionally, such systems are used for the design of spaces for acoustical performances such as concert halls, simulating the auditory perception of a listener in a specific seat of such a hall. Over the past five or six years, the technique has developed to a state of maturity, providing a sound basis for auditory experimentation. Within this work, the program EASE (Electroacoustic Simulator for Engineers) has been used to calculate the impulse response of the room, and its sister program EARS (Electronically Auralized Room Simulation) to convolve the room response with HRTFs and an anechoic signal. The following figure illustrates the general concept behind such a system.

Figure 3. Outline of the architecture behind the binaural room simulation system used for developing the navigation beacon. Adapted from Blauert, 1997 (p.376).

Firstly, the room or corridor is geometrically modelled (in EASE) in terms of the positions, dimensions and orientations of its surfaces, including the assignment of a material to each surface. Each material has a specific absorption or reflectance characteristic which will determine the resultant frequency-dependent reverberation characteristics of the space. The sound sources (or in this case the navigation beacons) are then positioned within the space, including their directivity patterns. The listener (or listeners, as may be the case in multiple explorations) is finally positioned in the space. Using geometrical acoustic algorithms, namely ray-tracing (and if needed the mirror-image method), the propagation of sound from source to receiver is calculated. This propagation is affected by elementary filters such as absorption or reflection at the surfaces, or attenuation through air (although smoke is another important consideration). As a result, the components of the sound field which impinge upon the listener's head, namely the ratios between the direct sound and reflections of different orders, are simulated with respect to their direction of incidence and arrival times.
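The mirror-image method mentioned above can be reduced to a few lines for the simplest possible case. The sketch below (not EASE itself, merely a first-order illustration) mirrors a source in the two side walls of a corridor and returns the delay and relative amplitude of each path; the frequency-independent absorption coefficient and 1/r spreading law are simplifying assumptions.

```python
import math

def image_source_paths(src, rcv, width, absorption=0.1, c=343.0):
    """First-order image-source sketch for a corridor of given width.
    Positions are 2-D (x across the corridor, y along it). Returns
    (delay_s, relative_amplitude) for the direct path and the two
    side-wall reflections."""
    sx, sy = src
    # Mirror the source in each side wall (x = 0 and x = width).
    images = [(sx, sy), (-sx, sy), (2 * width - sx, sy)]
    gains = [1.0, 1.0 - absorption, 1.0 - absorption]
    paths = []
    for (ix, iy), g in zip(images, gains):
        d = math.hypot(ix - rcv[0], iy - rcv[1])
        paths.append((d / c, g / d))  # propagation delay; spreading * wall loss
    return paths

# Beacon on the centre line of a 2 m wide corridor, listener 5 m away.
paths = image_source_paths((1.0, 0.0), (1.0, 5.0), width=2.0)
```

Even this toy model shows the structure the precedence effect must cope with: reflections that arrive slightly later and slightly weaker than the direct sound.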

The next stage in the simulation is the inclusion of the listener's ears in the model (within EARS). Head-related transfer functions (measured from a dummy head) are convolved with each sound ray's direction of incidence to the ear. From this, the binaural impulse response of the room is given.

The final stage in the process is called auralization, or in other words, transferring all this binary data into audible sound. Here the dry signal (as would be presented by the navigation beacon) is convolved with the binaural impulse response. As a result, the effectiveness of any signal can be presented virtually, tested and re-tested in order to achieve its maximum navigation potential. However, as with the previous case, this simulation assumes a static situation (i.e. source and receiver are not moving), since a moving simulation is computationally very expensive. As will be discussed, this was once again a problem.
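The auralization step itself is just a pair of convolutions. The following sketch (again illustrative, not the EARS implementation) uses a toy binaural impulse response of a direct sound plus one reflection:

```python
import numpy as np

def auralize(dry, bir):
    """Auralization: convolve an anechoic ('dry') beacon signal with a
    binaural room impulse response (left row, right row) to produce the
    signal as heard at the listener's position."""
    left = np.convolve(dry, bir[0])
    right = np.convolve(dry, bir[1])
    return np.stack([left, right])

# Toy binaural IR: direct sound, plus one quieter reflection 5 samples later.
bir = np.zeros((2, 8))
bir[:, 0] = 1.0
bir[:, 5] = 0.3
wet = auralize(np.array([1.0, 0.5]), bir)
```

Swapping in a different dry signal re-tests the beacon in the same virtual room without recomputing the room response, which is what made the repeated signal comparisons of experiment 2 practical.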

5.2 Subjects and Apparatus

Eighteen subjects were used in this experiment, most of them having been involved with the previous one. Once again, a Visual Basic interface was created to replay wavefiles which had previously been auralized. Although there were 10 experiments in this set, only three criteria were sought to be explored, namely:

From this, 30 wavefiles were created, ranging from unfiltered broadband noise to pure tones. The following section summarises the results of these experiments.

5.3 Results and Discussion

It was obvious from the very beginning of the experiment that certain signals were yielding far better left / right localization performance than others. Figure 4 outlines the preferences of the subjects for the varying signal types. As can be seen, numbers 25, 35 and 38 consistently prove themselves far superior to any of the other signals presented.

Figure 4. Frequency of occurrence chart illustrating signal preferences

The question arises as to what types of signals were presented in this experiment. Firstly, let us consider that number 1 is pure broadband noise. Theoretically, this should have given excellent directional indication as to its location. However, consistent reports by the subjects indicated that the sound was too 'fuzzy' due to its large spectral content. When viewing these results in the light of work done by Hartmann (1983), it seems apparent that the corridor through which the sound was propagating was acting as a strong waveguide. Here, it is probable that the room geometry (long, narrow and low) re-orders the reflections, confusing them with the direct sound. As a result, there is a small breakdown in the precedence effect. Effectively, due to the vast spectral content of the broadband noise, the reflections from the walls acted as a masker, decreasing localization performance.

On the other hand, numbers 25 and 35 were tone pulses with a duration of 0.05 seconds, at frequencies of 400 and 4500 Hz. Following from the previous theory (and from the subjects' responses), they were always perceived as being much clearer and more 'to the left (or right)'. Unfortunately, as they were pure tones, all subjects had difficulty in perceiving whether they came from the front or the back. The simulated front / back experiments offered absolutely no correct directional judgements, all 30 sounds being perceived as originating from the rear.

Finally, number 38 is broadband noise which had been filtered, containing only frequencies below 1600 Hz, tones at 400, 4500 and 12000 Hz, and bandpassed noise from 10 to 14 kHz. Essentially, number 38 includes all the information necessary for left / right localization (the 1500 Hz barrier mentioned before for ITDs) and two bands of information which yield front / back responses (the tones and the bandpassed noise). As was also required, this signal leaves a clear spectral band in which conventional fire alarms can operate. In addition, there is also the possibility for people to communicate. Since numbers 25 and 35 had poor front / back localization ability, number 38 must be seen as the optimum signal for the beacon. One bonus result was the way in which number 38 was perceived. As the tones only appeared at the first impulse of the noise, they drew the subjects' attention to the navigation beacon, acting as an attention-cueing aid. In addition, the nature of the pulse also gave some idea of the distance from source to listener. The experiment was as a result a success, and concludes this research.
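A signal with the spectral make-up of number 38 can be synthesised along the following lines. This is a sketch of the recipe only: the FFT brick-wall filters, mixing gains and continuous (rather than onset-only) tones are my simplifications, not the processing actually performed in Cool Edit.

```python
import numpy as np

def beacon_signal(duration=0.5, fs=44100, seed=0):
    """Sketch of composite signal 38: low-passed noise (< 1600 Hz) for
    ITD-based left/right cues, plus tones at 400, 4500 and 12000 Hz and
    band-passed noise (10-14 kHz) for front/back spectral cues."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    t = np.arange(n) / fs
    freqs = np.fft.rfftfreq(n, 1 / fs)

    def band_noise(lo, hi):
        # Brick-wall band-limit white noise in the frequency domain.
        spec = np.fft.rfft(rng.standard_normal(n))
        spec[(freqs < lo) | (freqs > hi)] = 0.0
        return np.fft.irfft(spec, n)

    low = band_noise(0.0, 1600.0)
    high = band_noise(10000.0, 14000.0)
    tones = sum(np.sin(2 * np.pi * f * t) for f in (400.0, 4500.0, 12000.0))
    # Mixing gains are arbitrary illustrative choices.
    sig = low / np.max(np.abs(low)) + 0.3 * high / np.max(np.abs(high)) + 0.2 * tones
    return sig / np.max(np.abs(sig))
```

Note the deliberate spectral gap between roughly 1.6 and 10 kHz (apart from the two tones), which is what leaves room for conventional alarms and speech.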

Conclusions

It can certainly be said that, throughout the past three years of this research, it would have been almost impossible to conduct these experiments without the use of such virtual acoustic tools. Indeed, it would not be untrue to state that setting up real-life equivalents of the experiments described above would have taken the whole three years, never mind completing the research. Of course, as with any work, there is a considerable amount to be done in the future; of particular interest to the researcher is the way in which smoke adversely influences the propagation of sound in such enclosures (with wider applications in chemical plants etc.). However, it is hoped that this paper has given a sufficient outline of the tools and techniques involved in developing the navigation beacon, which will hopefully take an active role in saving lives in the future.

References

Begault, D. R., 3-D Sound for Virtual Reality and Multimedia, Academic Press, Cambridge, Massachusetts, 1994.

Blauert, J., Spatial Hearing: The Psychophysics of Human Sound Localization, MIT Press, Cambridge, Massachusetts, 1983 & 1997.

Bridges, A., Charitos, D., Rutherford, P., "Wayfinding, Spatial Elements and Spatial Support Systems in Virtual Environments," in CAAD: Towards New Design Conventions, Asanowicz, A., Jakimowicz, A. (eds), Technical University of Bialystok, Poland, 1997.

Canter, D. (ed.), Fires and Human Behaviour, John Wiley and Sons, New York, 1980.

Hartmann, W. M. (1983). "Localization of sound in rooms," Journal of the Acoustical Society of America, Vol. 74, No. 5, 1380-1391.

Krokeide, G., "An Introduction to Luminous Escape Systems," in Safety in the Built Environment, Sime, J. D. (ed), E. & F. Spon Ltd, London, 1988.

Malhotra, H. L., Fire Safety in Buildings (Report for the Department of the Environment), HMSO Publications, 1987, p.57.

Marchant, E. W. (ed.), A Complete Guide to Fire and Buildings, Medical and Technical Publishing Co. Ltd., Lancaster, 1973.

Middlebrooks, J. C., Green, D. M. (1991). "Sound localization by human listeners," Annual Review of Psychology, Vol. 42, 135-159.

Passini, R., Wayfinding in Architecture, Van Nostrand Reinhold, New York, 1992, p.22.

Proulx, G. (1991). "To prevent 'panic' in an underground emergency: why not tell people the truth?", Third International Symposium on Fire Safety Science, Elsevier, Essex.

Sime, J. D. (1985). "Movement toward the familiar: person and place affiliation in a fire entrapment setting," Environment and Behaviour, 17(6), pp.697-724.

Stevens, S. S., Davis, H., Hearing: Its Psychology and Physiology, Acoustical Society of America / American Institute of Physics, New York, 1983.

Wenzel, E. M., Arruda, M., Kistler, D. J., Wightman, F. L. (1993). "Localization using nonindividualized head-related transfer functions," Journal of the Acoustical Society of America, Vol. 94, No. 1, 111-123.

Wightman, F. L., Kistler, D. J. (1992). "The dominant role of low-frequency interaural time differences in sound localization," Journal of the Acoustical Society of America, Vol. 91, No. 3, 1648-1661.