Sunday, March 21, 2010

Human ear response to audio frequencies

Music and The Human Ear

This section contains information on the softest and loudest sounds we can hear, the range of frequencies we can hear, subjective vs. objective loudness, how we locate the source of a sound, and sound distortion. This section focuses mainly on the ear itself, but the brain is an integral part of the human hearing system. A separate section considers the function of the brain in more detail.

The human ear is a truly remarkable instrument. At one point in my life I designed Electronic Counter Measures (ECM) systems for the U.S. military. The primary function of an ECM system is self-defense: detecting an enemy before he (it's rarely a she) detects you. It is interesting to compare the characteristics of a good ECM system and human hearing:

Comparison of characteristics

Human hearing is a superior defensive system in every respect except source location accuracy. Note: Jourdain (page 23) states that human accuracy is 1-2 degrees in azimuth.
In contrast, a military system designed for communications (rather than detection) would typically have a much smaller ratio of highest-to-lowest frequency, no source location capability, and often a narrow directional coverage. For human communication a frequency ratio of 10:1 and a ratio of strongest to weakest signal of 10,000:1 would suffice. The far larger actual ratios strongly imply a purpose other than communication.
All of this tells me that the ear evolved primarily for self-defense (or perhaps hunting, as one reader pointed out), and language and enjoyment of music are delightful evolutionary by-products. A defensive purpose also suggests some direct hard-wiring between the ears and primitive parts of the brain, which may account for the powerful emotional impact of music - and its virtual universality among human cultures. A few years after writing this paragraph I found the very interesting book This is Your Brain on Music which confirms the speculation on wiring to primitive parts of the brain, but argues that music has a definite evolutionary function.

Soft Sounds and Loud Sounds

Acknowledgment: a good part of the material in the remainder of this section is derived from an excellent book, The Master Handbook of Acoustics by F. Alton Everest, and from the chapter he contributed to the Handbook for Sound Engineers. See references. These sources also contain much additional interesting material. David Worrall has posted his course notes of Physics and Psychophysics of Music on the web, which include an informative section on the physiology of hearing. A series of tutorial papers on hearing and other related topics has also been posted by HeadWize.
Sound pressure level (SPL) is given in dB SPL. This is a scale that is defined such that the threshold of hearing is close to 0 dB. The threshold of pain is about 135 dB. This is a logarithmic scale where power doubles for each 3 dB increase; the 135 dB difference between the thresholds of hearing and pain means the power doubles about 45 times - an increase of 32 trillion (32×10¹²) in the power level. This is an incredible dynamic range, and totally blows away anything human engineers are capable of creating. (Actually, in a Dec 99 newsgroup post Dick Pierce states that B&K 4138 microphones have a dynamic range of 140 dB, so I was underrating human engineers). At the low end of the range the ears lose function due to background noise. At 0 dB SPL noise created by blood flow in the ear is one source. It is shown elsewhere that the noise of molecules colliding with the eardrum is not far below this level. At the threshold sound level of 0 dB SPL Everest states that the eardrum moves a distance smaller than the diameter of a hydrogen molecule! At first I was incredulous when I read this, but it is consistent with the change in diameter of the balloon example used in the previous section. For a 0 dB SPL the change in balloon diameter is 6×10⁻¹⁰ inches, which is about 1/10 of the diameter of a hydrogen atom. The sensitivity of the ear is truly mind-boggling.
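The decibel arithmetic above is easy to verify. Here is a short Python sketch (my own illustration, not part of any original measurement) that converts the 135 dB range into a linear power ratio and a count of power doublings:

```python
import math

def db_to_power_ratio(db):
    """Convert a decibel difference into a linear power ratio."""
    return 10 ** (db / 10)

ratio = db_to_power_ratio(135)          # pain threshold vs. hearing threshold
doublings = 135 / (10 * math.log10(2))  # ~3.01 dB per power doubling
print(f"power ratio: {ratio:.3g}")      # ~3.16e13, i.e. about 32 trillion
print(f"doublings:   {doublings:.1f}")  # ~44.8, i.e. roughly 45
```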
Pressure is an objective physical parameter. The relationship of SPL to the subjective sensitivity to sound is discussed below. The human ear is most sensitive in a band from about 2,000-5,000 Hz. This is an important region for understanding speech, and could be construed to imply that hearing evolved to match speech. However, did the ear evolve to be sensitive to the speech frequency band, or did human speech evolve to match the band where the ear is most sensitive? (I read somewhere that babies cry in the frequency band where the ear is most sensitive). As measured by Voss and Allen, a typical eardrum absorbs about 75% of the incident sound energy at 5 kHz. The sensitivity vs. frequency behavior has a fair resemblance to the response of a piston load matched to the impedance of air, as shown in the physics section. Music levels vary from about 50 dB for quiet background music to maybe 120 dB for a very loud rock band. Subjectively, a 2-3 dB change in sound level is barely perceptible; if someone asks you to "turn up the volume a little," you will probably increase the sound by at least 3 dB. (Note that if you have a 100-Watt amplifier and it doesn't play loud enough, you need a 200-Watt amplifier to turn up the volume 3 dB. This can get very expensive very quickly). Interestingly there were some ABX test results on the web which indicate that a 0.3 dB difference in level can be detected (link no longer exists). However the test procedure allows switching between the two levels as much as you want before making a decision, and the test used pink noise for the sound. You can hear what a 3 dB difference sounds like yourself with sound files in the sound demo section.
A full orchestra can also hit a sound level of 110 dB and more, and then play a quiet passage at 20-30 dB. To reproduce this faithfully requires a recorded sound source capable of covering this 80+ dB dynamic range. (Everest quotes one researcher who claims a 118 dB range is required). A vinyl record is good for about 50-70 dB; a standard compact disc with 16-bit encoding can cover a 96 dB range, and the 24-bit DVD disk format a 144 dB range - in theory. Real D/A converters tend to be noise limited to a somewhat lower range.
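The bit-depth figures follow from the standard rule that each bit of resolution adds about 6.02 dB of theoretical dynamic range. A quick Python check:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit quantizer: 20*log10(2^n)."""
    return 20 * bits * math.log10(2)

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# 16-bit: 96.3 dB; 24-bit: 144.5 dB - matching the CD and DVD figures above
```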
A problematic aspect of music for a sound system designer is that there are brief transients ("spikes") in sound level that far exceed average power levels. Usually people talk about average, or root-mean-square (RMS) power. RMS power is really only important with respect to the generation of heat. In my opinion, peak power is far more important, since this is when a speaker could be driven into a non-linear region, and when an amplifier would clip. These two effects are major causes of distortion. Using Cool Edit 96, I recorded 10-20 second segments from Talking Heads "Burning Down the House," Diana Krall "All or Nothing at All," and Shostakovich Symphony #5. I then processed the cuts in Matlab, to generate the outputs of a 3-way crossover. The crossover frequencies are 300 and 3000 Hz. Both 1st order Butterworth and 4th order Linkwitz-Riley filters were modeled. Finally I calculated the average and peak power in each driver band, with results as shown in the tables below.

All powers are shown as a percentage of the same quantity in the unfiltered music. Note that the average power for the Butterworth adds to 100%, but the Linkwitz-Riley adds to less than 100%. The voltage output of a Linkwitz-Riley coherently adds to unity, but the power addition is less than unity. The peak power is obtained by computing the time-domain waveform of the signal output by the crossover. Then the peak value is found. Typically the peaks occur at different times for the tweeter, midrange, and woofer, so there is no physical significance to the sum of the three powers in this case. The startling result is that by far the greatest demands on peak power are in the midrange for the Krall and Shostakovich. The 4th order reduces the demands in the high and low bands, but there is little difference in the mid-band. Only the Talking Heads cut has a greater demand in the bass. It is also quite significant that even though the average tweeter power is low, the peak tweeter power is not all that much lower than other bands, and in fact is greater than the woofer in some cases!
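The crossover processing described above was done in Matlab on recorded music; a rough Python equivalent using scipy might look like the sketch below. White noise stands in for the copyrighted recordings, and plain 4th-order Butterworth filters stand in for the Butterworth/Linkwitz-Riley pair actually modeled, so the percentages will differ from the tables.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 5)   # white noise standing in for a music segment

def band(x, lo=None, hi=None, order=4):
    """Isolate one crossover band with a Butterworth filter."""
    if lo and hi:
        sos = butter(order, [lo, hi], btype='bandpass', fs=fs, output='sos')
    elif hi:
        sos = butter(order, hi, btype='lowpass', fs=fs, output='sos')
    else:
        sos = butter(order, lo, btype='highpass', fs=fs, output='sos')
    return sosfilt(sos, x)

# 300 Hz and 3000 Hz crossover points, as in the text
bands = {'woofer':   band(x, hi=300),
         'midrange': band(x, lo=300, hi=3000),
         'tweeter':  band(x, lo=3000)}

total_avg, total_peak = np.mean(x**2), np.max(x**2)
for name, y in bands.items():
    print(f"{name:8s} avg {100 * np.mean(y**2) / total_avg:5.1f}%  "
          f"peak {100 * np.max(y**2) / total_peak:5.1f}%")
```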
When I play the Talking Heads cut, my CLIO sound measurement system shows a peak sound level of 100 dB SPL in the room, and an average of around 95 dB. Judging from the oscilloscope connected to the amp outputs, the average amplifier output power appears to be about 17 watts. The ratio of peak power to RMS power was 40:1, 40:1 and 30:1 for the Talking Heads, Diana Krall, and Shostakovich cuts respectively. Therefore, for 17 watt RMS, the peak power demands are on the order of 700 Watts. This indicates that either my amps can put out peaks much higher than their rated power (possible, but I'm not sure), or they are clipping. There are demo files In the sound demo section which simulate clipping by tube and solid-state amplifiers. For more on this subject see the section on amplifier distortion.
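The peak-power arithmetic goes like this (a trivial Python restatement of the numbers above):

```python
import math

avg_watts = 17        # estimated average amplifier output for the cut
peak_to_rms = 40      # measured peak-to-RMS power ratio (Talking Heads cut)

peak_watts = avg_watts * peak_to_rms
crest_db = 10 * math.log10(peak_to_rms)
print(f"peak demand:  {peak_watts} W")      # 680 W - "on the order of 700"
print(f"crest factor: {crest_db:.0f} dB")   # 16 dB above the average level
```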
Jourdain (page 41) states that an orchestra produces 67 watts of acoustic power at full blast. Loudspeakers have efficiencies on the order of 0.5 to 2% converting electrical power to acoustic power. Even at 2% efficiency this implies that well over 3,000 watts of electrical power would be required to duplicate this sound level. Of course an orchestra plays in a large auditorium, and no doubt less power is needed for a small room. This still indicates that power requirements should not be underestimated.

The Audio Spectrum

A major criterion of a good sound system is its frequency response. The usual frequency range considered "hi-fi" is 20-20,000 Hz. These sample tones are audible with good loudspeakers or headphones, but many computer speakers will not reproduce them at all: a 100 Hz tone (12 kb wav file) and a 10,000 Hz tone (44 kb wav file). Yesterday I did a test using the very accurate signal generator built into my CLIO system. I can clearly hear, and certainly can feel, a 10 Hz tone. My sound system totally poops out below 10 Hz, so I can't test any lower than that. The lowest notes on organs and pianos are 16.4 and 24.5 Hz respectively. Testing at the other extreme, as a 61-year-old male (when I originally wrote this) I can hear a 13,500 Hz tone, but no higher. (It is generally agreed that women are more sensitive to high frequencies). However, good high frequency response is required to produce sharp transients, such as a snap of the fingers. I performed a test using a Ry Cooder CD, "Talking Timbuktu." Track 10 on this disk has some very sharp transients that just leap out at you from a good sound system. My pre-amp has a filter that cuts off frequencies above 12,000 Hz. With this filter in, the transients limp out rather than leap out. This shows that even though I cannot hear a pure tone in most of the range of frequencies cut out by the filter, I can clearly hear the difference in the sound quality of the transients. I repeated this test recently (at age 67) with a segment of this cut recorded as a .wav file and digitally processed with a 12 kHz filter. This time the test was a double-blind ABX test, and I can't reliably detect any difference (I can still hear a 13,000 Hz tone). I now doubt the validity of the earlier test. See the discussion on high frequency tests in the section on sound demos.
James Boyk at Caltech has posted an interesting paper on the frequencies generated by musical instruments between 20kHz and 102 kHz! He also cites a paper that states that people react to sounds above 26 kHz even when they cannot consciously hear the sound. Jourdain (page 42) states that sound can be heard up to 40 kHz if sufficiently loud (A knowledgeable reviewer of the book is skeptical about this claim. Unfortunately the link to the review no longer works).
The ear tends to combine the sound within critical bandwidths, which are about 1/6 octave wide (historically thought to be 1/3 octave). This has led to the practice of averaging frequency response over 1/3 octave bands to produce beautiful-looking frequency response curves. In my opinion this is misleading. Suppose a loudspeaker has a bad dropout (very weak response) over a narrow frequency range; the dropout will be totally obscured by averaging. But when a musical instrument plays a note that just happens to fall in the dropout notch, you will not be able to hear the note. See the example of a warts-and-all response (28.2 kb) vs. a 1/3 octave smoothed response (24.5 kb) from my final system measurements section. Since we can barely hear a 2-dB difference in sound level, it is reasonable to accept ±2 dB as an excellent level of performance for frequency response. In fact this is impossible to achieve in the real world, due to room acoustics (see the section on room acoustics). Personally I would say a more-or-less practical goal for a sound system installed in a room is a frequency response ±5 dB from 200-20,000 Hz, and maybe ±10 dB from 10-200 Hz. It is also worth noting that the ear itself has a quite variable frequency response, as shown by measured data on head-related transfer functions, and as discussed in the next section.
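You can see how 1/3-octave averaging buries a narrow dropout with a small Python sketch (a synthetic example of my own, not actual speaker data): a 15 dB notch near 1 kHz comes out much shallower after smoothing.

```python
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 600)  # log-spaced grid
resp_db = np.zeros_like(freqs)                           # idealized flat response
notch = (freqs > 1000) & (freqs < 1100)
resp_db[notch] = -15.0                                   # narrow 15 dB dropout

def octave_smooth(f, r_db, width=1/3):
    """Average the response over a band 'width' octaves wide at each point."""
    out = np.empty_like(r_db)
    for i, fc in enumerate(f):
        band = (f >= fc * 2 ** (-width / 2)) & (f <= fc * 2 ** (width / 2))
        out[i] = r_db[band].mean()
    return out

smoothed = octave_smooth(freqs, resp_db)
print(f"raw minimum:      {resp_db.min():.1f} dB")   # the full -15 dB notch
print(f"smoothed minimum: {smoothed.min():.1f} dB")  # much shallower after averaging
```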
What is the minimum audible change in frequency? I created two .wav files: case #1 was a series of 1/2 second tone bursts, all at a frequency of 800 Hz; for case #2 the bursts alternated between 800 and 805 Hz. I can reliably distinguish between these two cases in a double-blind test. This difference in frequency is less than 1/100 of an octave. I could also distinguish between 400 and 402 Hz. According to Jourdain (page 18) this is about normal for a young person; at age 61 I'm not supposed to be able to detect a difference of less than about 8 Hz at 400 Hz. But I can. (I repeated this test at age 67, and I still can do it). Sample files are described in the sound demo section. An interesting detail is that tone bursts that start and stop abruptly are easier to discriminate than bursts with a fade-in fade-out. I don't know if this is simply a timing issue, or if the brain is making use of the higher Fourier transform sidelobes that occur for a square window (the spectrum for a tapered burst is extremely narrow, the square burst spectrum has extensive sidelobes about 40 dB below the peak).
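For readers who want to reproduce the frequency-discrimination test, here is a Python sketch that generates the two tone-burst cases as .wav files. I originally made the files with other tools; the file names and the linear fade option here are my own choices.

```python
import math
import struct
import wave

fs = 44100

def tone_burst(freq, dur=0.5, fade=0.0):
    """One sine burst; 'fade' seconds of linear fade-in/out (0 = abrupt edges)."""
    samples = []
    for i in range(int(fs * dur)):
        t = i / fs
        a = 1.0 if fade == 0 else min(1.0, t / fade, (dur - t) / fade)
        samples.append(a * math.sin(2 * math.pi * freq * t))
    return samples

def write_wav(name, bursts):
    """Concatenate bursts into a 16-bit mono wav file."""
    data = b''.join(struct.pack('<h', int(32000 * s))
                    for burst in bursts for s in burst)
    with wave.open(name, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(fs)
        w.writeframes(data)

# Case #1: every burst at 800 Hz; case #2: bursts alternate 800 / 805 Hz.
write_wav('case1.wav', [tone_burst(800)] * 4)
write_wav('case2.wav', [tone_burst(f) for f in (800, 805, 800, 805)])
```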
For music the audio spectrum is divided into discrete notes. A brief discussion of the interesting subject of musical scales is given in a separate section.
Subjective vs. Objective Sound Levels
SPL is an objective measurement of sound pressure, or power in watts, and is independent of frequency. In 1933 Fletcher and Munson of Bell Labs did a study showing that subjective sound levels vary significantly from the SPL level. That is, when two tones were played at exactly the same SPL, one sounded louder than the other. And the results were very dependent on how loud the tones were to begin with. This is illustrated by the set of Fletcher-Munson curves [102 Kb]. The vertical axis is the objective SPL sound level. Each of the curves in the graph represents a constant subjective sound level, in units called "phons." The lowest curve is the minimum audible level of sound. As noted above, the ear is most sensitive around 2-5 kHz. To be audible at this minimum level, a sound at 20 Hz must be 80 dB (100 million times!) more powerful than a sound at 3 kHz.
Near the top, the curve at 100 phons is a fairly loud level. To sound equally loud at this level the sound at 20 Hz must be about 40 dB more powerful. This change in subjective response at different loudness levels means that music played softly will seem to be lacking in bass. For years pre-amps have come equipped with "loudness" controls to compensate for this. For me, part of "high fidelity" means playing music at the same level it was originally played, so this is all academic - but interesting nonetheless.

Source Location

An important characteristic of a sound system is the "sound image." An ideal system would create a vivid illusion of the location of each musical instrument. In designing a system it is important to understand, as well as current knowledge permits, how we locate the source of a sound. One thing that is clear is that the brain processes several different types of data to extract directional information. The data include:
shape of the sound spectrum at the eardrum
difference in sound intensity between the left and right ears
difference in time-of-arrival between the left and right ears
difference in time-of-arrival between reflections from the ear itself
A remarkable fact is that the pinna, the cartilage-filled structure surrounding the ear canal (commonly simply called the "ear"), is a vital part of direction sensing. Test subjects can be trained to locate sound using only one ear. But when the ridges of the pinna are gradually filled in, the ability is lost, in proportion to the filled-in area. Apparently the brain uses reflections from the ridges of the pinna (19.4 kb) to determine direction. The head and pinna have a major effect on the sound that arrives at the ear. This effect is mathematically represented by a head-related transfer function (HRTF). There are files in the sound demo section where a monophonic sound source is processed with HRTFs to synthesize sound arriving from various directions. The full HRTFs contain both the difference in sound intensity and the difference in time-of-arrival. There are two other demo files where only one of these two differences is retained. When I listen to these files I perceive the apparent direction almost equally well with all three files, indicating that the brain has a remarkable capability of making good use of whatever information it gets.
The significance of the pinna reflection experiments for a sound system designer is that time delays on the order of 0.1 millisecond can affect sound imaging. Time delays between the left and right ear are on the order of 0.5 milliseconds, and are quite important. On the other hand, researchers have found that echoes in the range of 1 to 50 milliseconds are lumped together by the brain with the direct sound, so they are not actually heard as distinct echoes. Delays greater than 50 milliseconds are heard as echoes. My own echo research is described in the sound demo section, and you can listen to the results yourself. Echoes in the range of 25 to 100 milliseconds give a "cavernous" quality to the sound. What is commonly called an "echo," a distinct repetition of the original sound, only occurs for echoes of 400 milliseconds or longer. Echoes in the range of 0.1 to 2 milliseconds do cause changes in the apparent direction of the source.
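A simple way to experiment with these delay ranges yourself is to mix a delayed copy back into a signal. A minimal Python/numpy sketch (my own illustration, not the demo files themselves):

```python
import numpy as np

fs = 44100

def add_echo(x, delay_ms, gain=0.5):
    """Mix a single delayed, attenuated copy of the signal back in."""
    d = int(fs * delay_ms / 1000)
    y = np.zeros(len(x) + d)
    y[:len(x)] += x          # direct sound
    y[d:] += gain * x        # the echo
    return y

# Delays in the three perceptual regimes discussed above:
# ~1 ms shifts the apparent direction, ~30 ms fuses with the
# direct sound, and ~500 ms is heard as a distinct repetition.
for delay in (1, 30, 500):
    print(delay, "ms ->", len(add_echo(np.zeros(fs), delay)), "samples")
```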
A regular CD sampled at 44.1 kHz is theoretically capable of reproducing frequencies up to 22 kHz, which corresponds to a transient duration of 0.05 milliseconds. However, as discussed in a recent paper by Mike Story (e-mail to request a copy), the anti-aliasing filters required to record within this band cause the transients to be blurred, in effect smearing the ability of our ears to distinguish direction. Mike reports that in listening tests 96 kHz recordings provide notably better spatial resolution. In the Handbook for Sound Engineers Steve Dove says anti-aliasing filters "...exhibit serious frequency dependent delay and convoluted frequency/phase characteristics... leaving mangled audio in their wake." He also advocates sampling around 100 kHz, and says the result is a more open and spacious sound. Humans perceive left-right direction more accurately than up-down direction. Presumably this is because we generally move in two dimensions along a more-or-less level surface. All of this information is important for the sound system designer, particularly regarding the control of sound diffraction and reflection, both of which can muddle the sound image.
Distortion is a commonly accepted criterion for evaluating high-fidelity sound equipment. It is usually understood to mean tones in the reproduced sound that were not present in the original sound. An ideal sound system component has a perfectly linear response: the ratio of output to input signal magnitude is always exactly the same, and the relative phase is constant, regardless of the strength of the signal. Any non-linear response (anything other than a linear response) produces distortion. It is commonly categorized as total harmonic distortion (THD) and intermodulation distortion. Harmonic distortion means that a pure 1000 Hz input tone results in spurious outputs at 2000 Hz, 3000 Hz, and other integer multiples of the input frequency. Intermodulation distortion means two input tones at 1000 Hz and 100 Hz result in spurious outputs at 900 Hz and 1100 Hz, among others.
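Both kinds of distortion are easy to demonstrate numerically. This Python sketch passes the two tones above through an invented weakly non-linear transfer curve (y = x + 0.1x² + 0.05x³, not a model of any real amplifier) and inspects the spectrum:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs       # exactly one second: 1 Hz per FFT bin
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 100 * t)

# An invented weakly non-linear transfer curve, NOT any real amplifier:
y = x + 0.1 * x**2 + 0.05 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(y)

def level(freq_hz):
    return spectrum[int(round(freq_hz))]

for f in (1000, 2000, 3000, 900, 1100):
    print(f"{f:5d} Hz: {level(f):.2e}")
# Harmonics appear at 2000 and 3000 Hz, and intermodulation
# products at 900 and 1100 Hz, none of which were in the input.
```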
The audibility of phase distortion is controversial. Some loudspeaker manufacturers, such as Dunlavy (apparently now out of business), cite flat phase response as a significant feature of their products. There is no question that under some artificial circumstances phase distortion is audible. Further discussion on the interesting topic of phase audibility can be found here.
So-called "Doppler" distortion is produced by the motion of the loudspeaker cone itself. This creates some harmonic distortion, but the most significant effect is intermodulation distortion. This class of distortion can only be reduced by reducing the cone motion. A large surface, such as the membrane of an electrostatic speaker, will produce very little Doppler distortion. See the analysis for a piston in a tube for technical details.
Also see the discussion above on "clipping."
Everest quotes research indicating that amplitude distortion has to reach a level of 3% to be audible. However this varies greatly depending on the distortion harmonic products, and on the sound source. More on this below. Good CD players, amplifiers and pre-amplifiers typically have distortion levels of 0.1% or less. (Tube amps typically have higher distortion). Loudspeakers are the weak link regarding distortion. It is hard to even get information on loudspeaker distortion since it looks embarrassing compared to the values advertised for electronics. I measured 2nd and 3rd harmonic distortion of my sound system end-to-end using my CLIO sound measuring system. Since speaker distortion dominates, this is essentially a measurement of speaker distortion. The measurement was made using one speaker; with two speakers the distortion would be the same, but the SPL levels would increase 6 dB for the two lower frequency bands, and 3 dB for the upper bands. The entire measured distortion curve at the higher power level is shown in the section on final system measurements.

Measured harmonic distortion

Distortion is universally considered to be bad, and it is perhaps not generally realized that musical instruments introduce overtones that have similarities to distortion. I imagine most music lovers are aware that all musical instruments produce a fundamental tone (the "note") and a series of overtones. The overtones are at frequencies higher than the fundamental tone, and give the sound a rich quality not possessed by a pure tone. Overtones are generally harmonics (integer multiples) of the fundamental frequency. The relative strength of the various harmonics gives the instrument its characteristic sound. You can hear a comparison of a real piano note [42 kb] and a tone [42 kb] with the same fundamental frequency, but lacking in overtones. There is also additional description of the spectrum of this note.
The ear is not perfectly linear and produces distortion. A short discussion of the non-linear behavior of the ear can be found in a separate section. Finally, air itself is non-linear, and harmonic distortion grows steadily as a wave propagates (see plane waves in the physics section). This is usually a very small effect, but can be significant in the throat of a horn speaker.
The subject of sound quality is not at all clear-cut. Even though tube amplifiers have higher measured distortion, a lot of knowledgeable people swear that they sound better. I finally dove into this subject in August 2006. I can clearly hear THD at 0.5% for a pure 440 Hz tone and the type of harmonics produced by a typical solid-state amp; for the type of harmonics produced by a single-ended triode amp I could not detect distortion until it reached a level of 10%. This amazing difference is covered in detail in the section on amplifier distortion. For music samples the difference is not quite as big, but is still quite significant. Many people have come to the conclusion that THD is a terrible way to judge amplifier quality, and I totally agree. Norman Koren, an advocate of tube amplifiers, has posted a very interesting commentary on the subject of distortion and the effect of feedback.

Human Hearing

The human ear is an exceedingly complex organ. To make matters even more difficult, the information from two ears is combined in a perplexing neural network, the human brain. Keep in mind that the following is only a brief overview; there are many subtle effects and poorly understood phenomena related to human hearing.
Figure 22-1 illustrates the major structures and processes that comprise the human ear. The outer ear is composed of two parts, the visible flap of skin and cartilage attached to the side of the head, and the ear canal, a tube about 0.5 cm in diameter extending about 3 cm into the head. These structures direct environmental sounds to the sensitive middle and inner ear organs located safely inside of the skull bones. Stretched across the end of the ear canal is a thin sheet of tissue called the tympanic membrane or ear drum. Sound waves striking the tympanic membrane cause it to vibrate. The middle ear is a set of small bones that transfer this vibration to the cochlea (inner ear) where it is converted to neural impulses. The cochlea is a liquid filled tube roughly 2 mm in diameter and 3 cm in length. Although shown straight in Fig. 22-1, the cochlea is curled up and looks like a small snail shell. In fact, cochlea is derived from the Greek word for snail.
When a sound wave tries to pass from air into liquid, only a small fraction of the sound is transmitted through the interface, while the remainder of the energy is reflected. This is because air has a low mechanical impedance (low acoustic pressure and high particle velocity resulting from low density and high compressibility), while liquid has a high mechanical impedance. In less technical terms, it requires more effort to wave your hand in water than it does to wave it in air. This difference in mechanical impedance results in most of the sound being reflected at an air/liquid interface.
The middle ear is an impedance matching network that increases the fraction of sound energy entering the liquid of the inner ear. For example, fish do not have an ear drum or middle ear, because they have no need to hear in air. Most of the impedance conversion results from the difference in area between the ear drum (receiving sound from the air) and the oval window (transmitting sound into the liquid, see Fig. 22-1). The ear drum has an area of about 60 mm², while the oval window has an area of roughly 4 mm². Since pressure is equal to force divided by area, this difference in area increases the sound wave pressure by about 15 times.
Contained within the cochlea is the basilar membrane, the supporting structure for about 12,000 sensory cells forming the cochlear nerve. The basilar membrane is stiffest near the oval window, and becomes more flexible toward the opposite end, allowing it to act as a frequency spectrum analyzer. When exposed to a high frequency signal, the basilar membrane resonates where it is stiff, resulting in the excitation of nerve cells close to the oval window. Likewise, low frequency sounds excite nerve cells at the far end of the basilar membrane. This makes specific fibers in the cochlear nerve respond to specific frequencies. This organization is called the place principle, and is preserved throughout the auditory pathway into the brain.
Another information encoding scheme is also used in human hearing, called the volley principle. Nerve cells transmit information by generating brief electrical pulses called action potentials. A nerve cell on the basilar membrane can encode audio information by producing an action potential in response to each cycle of the vibration. For example, a 200 hertz sound wave can be represented by a neuron producing 200 action potentials per second. However, this only works at frequencies below about 500 hertz, the maximum rate that neurons can produce action potentials. The human ear overcomes this problem by allowing several nerve cells to take turns performing this single task. For example, a 3000 hertz tone might be represented by ten nerve cells alternately firing at 300 times per second. This extends the range of the volley principle to about 4 kHz, above which the place principle is exclusively used.
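The volley-principle arithmetic can be restated in a couple of lines of Python (using the 500 hertz maximum firing rate quoted above):

```python
def neurons_needed(tone_hz, max_firing_rate=500):
    """Minimum number of neurons taking turns so every cycle gets a spike."""
    return -(-tone_hz // max_firing_rate)    # ceiling division

print(neurons_needed(200))    # 1 - a single neuron can track 200 Hz
print(neurons_needed(3000))   # 6 at 500 firings/s each (the text's example
                              # uses ten cells at a more relaxed 300/s)
```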
Table 22-1 shows the relationship between sound intensity and perceived loudness. It is common to express sound intensity on a logarithmic scale, called decibel SPL (Sound Pressure Level). On this scale, 0 dB SPL is a sound wave power of 10⁻¹⁶ watts/cm², about the weakest sound detectable by the human ear. Normal speech is at about 60 dB SPL, while painful damage to the ear occurs at about 140 dB SPL.

The difference between the loudest and faintest sounds that humans can hear is about 120 dB, a range of one million in amplitude. Listeners can detect a change in loudness when the signal is altered by about 1 dB (a 12% change in amplitude). In other words, there are only about 120 levels of loudness that can be perceived from the faintest whisper to the loudest thunder. The sensitivity of the ear is amazing; when listening to very weak sounds, the ear drum vibrates less than the diameter of a single molecule!
The perception of loudness relates roughly to the sound power raised to an exponent of 1/3. For example, if you increase the sound power by a factor of ten, listeners will report that the loudness has increased by a factor of about two (10^(1/3) ≈ 2). This is a major problem for eliminating undesirable environmental sounds, for instance, the beefed-up stereo in the next door apartment. Suppose you diligently cover 99% of your wall with a perfect soundproof material, missing only 1% of the surface area due to doors, corners, vents, etc. Even though the sound power has been reduced to only 1% of its former value, the perceived loudness has only dropped to about 0.01^(1/3) ≈ 0.2, or 20%.
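The cube-root law is easy to check with a one-function Python sketch:

```python
def relative_loudness(power_ratio):
    """Stevens-style power law: perceived loudness grows as power^(1/3)."""
    return power_ratio ** (1 / 3)

print(f"{relative_loudness(10):.2f}")    # ~2.15: 10x the power, ~twice as loud
print(f"{relative_loudness(0.01):.2f}")  # ~0.22: blocking 99% of the power
                                         # still leaves ~20% of the loudness
```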
The range of human hearing is generally considered to be 20 Hz to 20 kHz, but it is far more sensitive to sounds between 1 kHz and 4 kHz. For example, listeners can detect sounds as low as 0 dB SPL at 3 kHz, but require 40 dB SPL at 100 hertz (an amplitude increase of 100). Listeners can tell that two tones are different if their frequencies differ by more than about 0.3% at 3 kHz. This increases to 3% at 100 hertz. For comparison, adjacent keys on a piano differ by about 6% in frequency.

The primary advantage of having two ears is the ability to identify the direction of the sound. Human listeners can detect the difference between two sound sources that are placed as little as three degrees apart, about the width of a person at 10 meters. This directional information is obtained in two separate ways. First, frequencies above about 1 kHz are strongly shadowed by the head. In other words, the ear nearest the sound receives a stronger signal than the ear on the opposite side of the head. The second clue to directionality is that the ear on the far side of the head hears the sound slightly later than the near ear, due to its greater distance from the source. Based on a typical head size (about 22 cm) and the speed of sound (about 340 meters per second), an angular discrimination of three degrees requires a timing precision of about 30 microseconds. Since this timing requires the volley principle, this clue to directionality is predominately used for sounds less than about 1 kHz.
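The 30-microsecond figure follows from simple geometry. A Python sketch, assuming the common d·sin(θ)/c approximation for the interaural time difference:

```python
import math

head_width = 0.22   # meters, typical ear-to-ear distance (assumed)
c = 340.0           # meters/second, speed of sound

def itd_seconds(angle_deg):
    """Interaural time difference for a source angle_deg off dead-center."""
    return head_width * math.sin(math.radians(angle_deg)) / c

print(f"{itd_seconds(3) * 1e6:.0f} microseconds")  # ~34 us for 3 degrees
```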
Both these sources of directional information are greatly aided by the ability to turn the head and observe the change in the signals. An interesting sensation occurs when a listener is presented with exactly the same sounds to both ears, such as listening to monaural sound through headphones. The brain concludes that the sound is coming from the center of the listener's head!
While human hearing can determine the direction a sound is from, it does poorly in identifying the distance to the sound source. This is because there are few clues available in a sound wave that can provide this information. Human hearing weakly perceives that high frequency sounds are nearby, while low frequency sounds are distant. This is because sound waves dissipate their higher frequencies as they propagate long distances. Echo content is another weak clue to distance, providing a perception of the room size. For example, sounds in a large auditorium will contain echoes at about 100 millisecond intervals, while 10 milliseconds is typical for a small office. Some species have solved this ranging problem by using active sonar. For example, bats and dolphins produce clicks and squeaks that reflect from nearby objects. By measuring the interval between transmission and echo, these animals can locate objects with about 1 cm resolution. Experiments have shown that some humans, particularly the blind, can also use active echo localization to a small extent.

Lenny Z Perez M.

