
Pim Heerens studied Applied Physics at Delft University of Technology (TU Delft), the Netherlands, from 1961 to 1967. As a staff member of the Physics Department of TU Delft he received his PhD in 1979 for research into vacuum gauges, with close connections to capacitive sensors (for which he obtained patents in 1980), membrane technology and Laplace potential analysis. In 1999 he retired from TU Delft at the age of 59, mainly because of aggravated symptoms of Ménière’s disease. In 2001 he started as an independent researcher into the functioning of the human hearing sense, which has resulted in a number of new insights into this functioning, based on careful interpretation of experimental data combined with fundamental physics principles.
Scientific publications and other work of W.Chr. Heerens
Publications of W.Chr. Heerens as principal author or co-author
Publications of W.Chr. Heerens as author or co-author
Invited symposium and congress papers
Ph.D. theses with significant involvement of W.Chr. Heerens
Sander de Ru studied Medicine at the University of Utrecht and trained as an otolaryngologist (ENT doctor) at the University Medical Center in Utrecht. In 2005 he received his PhD for research into diagnostic aspects of surgery of the parotid gland (glandula parotidea). Sander is currently employed as an ENT doctor at the Central Military Hospital in Utrecht, Netherlands.
First, let me introduce myself with some brief information.
My name is Willem C. Heerens (November 1940).
I am a Dutch scientist, PhD and MSc in Applied Physics.
Emeritus Associate Professor, Department of Physics, Delft University of Technology, The Netherlands.
Honorary Professor, Altai State Technical University, Russia.
The scientific output during my research career can be seen in the list of papers on this page.
The content of the book is divided into nine chapters.
[Fig. 4: Transfer of the sound pressure signal to the sound energy frequency spectrum. Panels: sound pressure frequency spectrum, perilymph velocity frequency spectrum, sound energy frequency spectrum. From the presentation "Applying Physics Makes Auditory Sense: A New Paradigm in Hearing" by Willem Christiaan Heerens.]
[Slides: Relation between stimulus and what we hear. For 1 frequency; for 2 frequencies (1st and 2nd harmonics, amplitudes according to a 1/f relation). By Willem Christiaan Heerens.]
1. Willem C. Heerens & Jacob Alexander de Ru, Applying Physics Makes Auditory Sense: A New Paradigm in Hearing, 2010.
Dutch edition: Toepassen van Fysica Zinvol bij het Horen: Een Nieuw Gehoorparadigma, 2010.
I. Heerens W, Mangelinckx Y, de Ru J. Verification of calculations of residual pitch and beat phenomena by the reader, 2010.
II. Heerens W, Mangelinckx Y, de Ru J. The residual pitch and beat phenomena that can be heard in practice by the reader, 2010.
Heerens W, Mangelinckx Y, de Ru J. Perception calculations (software), 2010.
The associated software "Perception calculations" provides a tool to personally verify the predicted residual pitch and beat phenomena described in Chapter 3 of the booklet.
This program is ONLY available for computer systems running under Windows.
We also present the composed sound fragments without this program, so if you are not able to use the program you are invited to download the composed sound fragments and personally verify the predicted residual pitch and beat phenomena.
Presentation of composed sound fragments, E.2: Pitch perception in incomplete harmonic sound complexes (see documents I and II above).
In a PDF, Heerens presented the solution of the non-stationary Bernoulli equation, which is perfectly valid for the push-pull movements of the perilymph inside the scala tympani [ST] and scala vestibuli [SV], while the scala media [SM] embedded in between, filled with endolymph at rest, has substantial and therefore not negligible dimensions.
According to the rules of hydrodynamics, these dimensional conditions make the hypothesis that both the influence of the Reissner membrane and the contents of the SM can be ignored, so that the cochlear duct can be considered as a folded tube with only the BM as interface in between, definitely invalid.
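For orientation, the standard textbook form of the non-stationary (unsteady) Bernoulli relation for incompressible, inviscid flow along a streamline is p + ½·ρ·v² + ρ·∂φ/∂t = C(t), where ρ is the fluid density, v the flow velocity and φ the velocity potential. This is added here only as a reference point for the reader; the exact formulation Heerens uses is the one given in his own PDF.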
Just as the well-known promoter of physics, MIT professor Walter Lewin, does in his magnificent physics courses, Heerens has built his own demonstration equipment to show clearly what happens at the walls of a duct in which an alternating flow along the core direction is evoked.
The first experimental set-up is extremely simple, but for that very reason highly convincing.
As can be seen in the figure above, to mimic maximal compliance of the 'walls', in one of the experiments Heerens hung two sheets of paper on thin wires in an open frame so that they can move freely.
Between the two sheets he can evoke an alternating flow parallel to their surfaces by moving a spatula up and down.
And as shown in the next figure, he constructed a closed loop consisting of a tube and a bellows, the latter centrally subdivided by a plate, with which he can create a push-pull flow in the tube; in the upper branch of the tube a flexible yellow membrane is locally mounted in the wall, which registers what happens at the wall of the tube.
A wire cross is mounted closely in front of the membrane. Light striking from above casts a curved shadow of the wire cross on the membrane when the membrane moves away, that is, inwards into the tube, while during outward movement of the membrane the shadow is absent because the wire cross then rests on the bulging membrane.
And the results he found in both experiments?
The evoked motion patterns are exactly identical to what could be predicted from the theory Heerens has presented. The two sheets of paper do not move outwards at all. They move in the opposite direction, towards the core line of the alternating flow. And under a steady alternating stimulus (with constant amplitude) they both do so with a stationary deflection on which an alternating deflection with doubled frequency is superposed.
This indicates that both sheets experience an alternating and on average lower pressure evoked in the space between them.
This behavior is shown in the following multi-moment presentation:
The tube experiment likewise shows that the membrane in the wall always moves inwards, towards the core line of the tube. And superposed on a constant inward deflection the membrane also deflects periodically at double the original stimulus frequency.
This is given in the following impression:
Without any doubt this indicates that at least squaring of the input stimulus plays a dominant role.
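This doubling is exactly what simple squaring predicts. If the velocity of the alternating flow between the sheets is v(t) = V·sin(ωt), then v² = ½V²·(1 − cos(2ωt)): a constant part plus a component at exactly twice the stimulus frequency, which is the pattern of a steady deflection with a superposed 2f oscillation described above. This is a standard trigonometric identity added here as clarification; it mirrors, but is not literally quoted from, Heerens' own derivation.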
[Note: To make it even more convincing, see the video registration of the tube experiment below.]
[Video 3: Movement of a membrane, by the Bernoulli effect.]
All of us are on the same line regarding the new insights into the functioning of our hearing sense.
Keep reading and consider our insights and our review, in which the non-existence of traveling waves in the cochlea (neither forward nor backward traveling waves) and the appearance of pressure differences between scala tympani and scala vestibuli are once again examined more closely.
Applying physics makes auditory sense with a three-compartment cochlear model in which the cross sections of scala tympani and scala vestibuli are chosen equal in area. More importantly, we take into account the existing influence of the scala media, together with its consequences and Heerens' analytical model for the perilymph movement in the cochlear duct, for which the non-stationary Bernoulli effect results as the solution.
And if all of this is correct, then all solutions lead to the uniform conclusion that inside the cochlea the sound pressure signal, evoked in the outer ear canal, is transferred into a sound energy stimulus, and that this stimulus is responsible for the activation of the basilar membrane and for the signal to the brain.
For now the only clear and firm conclusion one can draw is: the medium in the tube moves as a whole. These experimental results, in combination with the theoretical solution of the non-stationary Bernoulli equation, are therefore one of the reasons why the transmission line concept cannot play a role here either.
Perhaps I could share an idea for further research.
Could we make actual and correct pressure measurements in the cochlea to reveal whether the non-stationary Bernoulli effect is a good description of the physics by which the cochlea isolates frequencies along its length?
[Figure: Organ of Corti operation. Inner hair cells are the leftmost row, outer hair cells are the other three rows.]
I would propose to use a pitot tube, with the sensor in the side wall [B in the figure, left side of the figure], to obtain actual and correct pressure measurements in the perilymph flow tube inside the cochlea.
[Video 1: Movement of the basilar membrane, by the Bernoulli effect.]
[Video 7: Movement of the basilar membrane at 2f.]
It is only logical that we ask your attention for Chapter 3 and, in connection with that chapter, the appendices of our booklet. Its title already explains it: 'Methods and experiments for verification'.
That chapter describes altogether 32 psychoacoustic experiments, entirely based on our hypothesis, for which the resulting sound impressions to be heard are predicted.
Simply because these predictions can be obtained by pure mathematical calculation.
Actually these monaural experiments (binaural experiments can be done as well) are not only psychoacoustic, and therefore subjective in origin; they go a step further towards the objectivity we really want. We can simply offer our subjects sound complexes for which we can calculate in advance, and therefore predict, what the subjects will tell us they hear. In this way our hypothesis can be tested with psychoacoustic experiments, and you can easily enlarge the number of such experiments, including phase phenomena.
As an example:
Heerens W, Mangelinckx Y, de Ru J. Perception calculations (software), 2010.
The associated software "Perception calculations" provides a tool to personally verify the predicted residual pitch and beat phenomena described in Chapter 3 of the booklet.
This program is ONLY available for computer systems running under Windows.
We also present the composed sound fragments without this program, so if you are not able to use the program you are invited to download the composed sound fragments and personally verify the predicted residual pitch and beat phenomena.
So, also: "... that is how pitch is perceived"?
If I also look at the complicated pitch perception example (E) given by De Cheveigné:
De Cheveigné A. (2005) Pitch perception models. In: Plack CJ, Oxenham AJ, Fay RR, Popper AN, editors. Pitch: Neural Coding and Perception. New York: Springer Science + Business Media, pp 169-233. ISBN 10: 0-387-2347-1.
then, in view of our discussion from W5 to W7, the corresponding sound energy frequency spectrum (W7) can be calculated according to our paradigm:
For the complicated pitch perception example given by De Cheveigné [40] the corresponding sound energy frequency spectrum can be calculated; this is shown in W7.
[Video 5: All this results in the explanation of a number of important auditory phenomena.]
Please do the following series of experiments.
And let me inform you in advance that the remarkable results heard by me (Heerens) were also heard, without exception, by all other not specifically trained observers whom I asked to do these experiments as well.
Despite my (Heerens') age plus a severe hearing loss due to Ménière's disease since 1985, I can still hear a 10 kHz pure sinusoidal beep tone.
However, if I take for instance the frequency series
10000 + 10004 + 10008 + 10012 + 10016 + 10020 + 10024 Hz
with all sine functions, I clearly observe a 4 Hz beat in that beep tone.
With alternating sine and cosine contributions I clearly observe an 8 Hz beat in the same beep tone.
This phenomenon, known in the literature for higher difference frequencies such as 100 Hz between successive lower-frequency harmonics, cannot be explained by simply adding up all contributions in a linear way to a single frequency of 10012 Hz that is modulated in such a simple manner.
This is because such a linear addition results in the combination
sin(2π·10012·t) × [1 + 2·cos(2π·4·t) + 2·cos(2π·8·t) + 2·cos(2π·12·t)].
If we look at the modulation factor
[1 + 2·cos(2π·4·t) + 2·cos(2π·8·t) + 2·cos(2π·12·t)]
we see three modulation frequencies, 4, 8 and 12 Hz, each with amplitude 2 as equal weight factor.
Together with the constant contribution 1, with weight factor 1, this can only result in a weird mixture of beat modulations at 8, 16 and 24 Hz.
[Here each modulation term contributes a doubled frequency to the beat, because the modulation factor swings between +100% and -100%.]
This calculated sum signal is definitely not what we hear.
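As a quick check of the linear-sum identity written out above, here is a minimal Python sketch (using numpy; the sample rate and one-second duration are arbitrary choices of mine, not values from the text) that compares the direct sum of the seven all-sine contributions with the carrier-times-modulation form:

```python
import numpy as np

fs = 96000                                  # assumed sample rate, Hz
t = np.arange(fs) / fs                      # one second of samples
freqs = 10000 + 4 * np.arange(7)            # 10000, 10004, ..., 10024 Hz

# Direct linear sum of the seven all-sine contributions
direct = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Carrier at 10012 Hz times the modulation factor reconstructed above
mod = (1 + 2 * np.cos(2 * np.pi * 4 * t)
         + 2 * np.cos(2 * np.pi * 8 * t)
         + 2 * np.cos(2 * np.pi * 12 * t))
product = np.sin(2 * np.pi * 10012 * t) * mod

print(np.max(np.abs(direct - product)))     # ~1e-12: the two expressions are identical
```

The printed maximum difference is at the level of numerical round-off, confirming that linear addition of the seven components is exactly a 10012 Hz carrier multiplied by the 4, 8 and 12 Hz modulation factor.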
Another confusing fact appears if we look at how these seven frequency contributions are related to the 4 Hz beat we hear with all-sine contributions, and to the 8 Hz beat we hear with alternating sine/cosine contributions to the sound complex.
On the Internet I found a figure from the presentation of A. Faulkner about the perception of pitch:
http://www.phon.ucl.ac.uk/courses/spsci/audper/Pitch_AUDL4007_2010.pdf
There, in slide 25, a clear picture of resolved and unresolved frequencies is given:
The frequencies 10000 Hz and above can be seen as the 2500th to 2506th harmonics of the 'pitch' of 4 Hz.
So there is no question about it: the entire sound complex is completely unresolved.
Changing the all-sine complex into the alternating sine/cosine complex would, for linear summation, result in an even much stranger modulation of a 10012 Hz beep, including a phase shift, which can never create the 8 Hz beat we actually hear.
In other words: it is absolutely clear that something different happens inside the cochlea.
I say in the cochlea, because we cannot expect our brain simply to calculate such phenomena from firing rates in the nerve fibers that are extremely poorly correlated to the frequencies. Firing rates that have no frequency relation to the offered sound and look more like stochastic contributions.
And if we compose the following sound complex with all sine functions:
10000 + 10004.0625 + 10008 + 10012.0625 + 10016 + 10020.0625 + 10024 Hz
[so the 2nd, 4th and 6th contributions are each shifted by the extremely small amount of 1/16th of a hertz]
we hear a very peculiar sound:
a beep with a variable beat that alternates every 8 seconds between a 4 Hz beat and an 8 Hz beat.
Changing the sine functions of the 2nd, 4th and 6th contributions into cosines does not really change the alternating beat in the sound that is then heard.
What we hear in both cases is a sound fragment with a period of 8 seconds that alternates between a 4 Hz beat and an 8 Hz beat.
Changing one 'mistuned contribution', for instance from 10012.0625 Hz into 10011.9375 Hz, or one of the other two contributions, only partially changes the depth of the beat, but not its rhythm.
And when all three of these contributions are 'mistuned' to lower frequencies:
10003.9375 Hz + 10011.9375 Hz + 10019.9375 Hz
the modulation sounds exactly like that with the 'mistuned' frequencies:
10004.0625 Hz + 10012.0625 Hz + 10020.0625 Hz
And finally the last step: we can even raise all frequency components by an amount that is not equal to the difference frequency of 4 Hz, so that the components are not even extremely high harmonics of the 4 Hz difference frequency.
Even this does not change anything noticeable in the heard sound.
Let me state it again:
a sound complex, in this example consisting of seven completely unresolved contributions, evokes hearing sensations that are held to be impossible in current pitch perception theory.
In the current hearing theory you would not expect a double beat phenomenon, neither for all-sine nor for alternating sine-cosine contributions.
To my knowledge this is a brand new anomaly within the paradigm of the current hearing theory.
But these phenomena are completely calculable and predictable, in all details, if we apply the hearing paradigm that I have formulated in the booklet mentioned earlier:
Applying Physics Makes Auditory Sense
I invite you to verify, or if you wish falsify, these experimental results by carrying out the experiments described by me.
I hope this will cause a lot of astonishment and excitement.
Kind greetings
Pim Heerens
by Willem Christiaan Heerens
You can carry out the following series of experiments:
Please download, from here, the software program with which these sound complexes can be properly calculated in the form of wav files.
This program is ONLY available for computer systems running under Windows.
[NOTE: The standard setting of the 1/f mode in this software program ensures that all the individually calculated frequencies contribute equal energy to the resulting sound pressure signal. This condition is very important for its influence on pitch calculations in case of larger differences between the contributing frequencies.]
by Willem Christiaan Heerens
At high frequencies, do we perceive differences between random and deterministic components?
There is a very simple answer to that question:
We definitely hear great differences. They depend on the 'composition' of the contributing sinusoids, but also on the length of the listening period.
And in such compositions both the choice of frequencies and the choice of phases have a strong influence.
For example:
Please calculate, with high resolution, the following three compositions of five sinusoids. (A minimal script sketch for generating such compositions as wav files follows after composition 3 below.)
1. 10,000 / 10,002 / 10,004 / 10,006 / 10,008 Hz, all sine contributions.
In that case you will hear the high tone that corresponds to 10,004 Hz, but with a strong 2 Hz beat.
2. 10,000 / 10,004 / 10,008 Hz as sine contributions, and 10,002 / 10,006 Hz as cosine contributions, so with a 90 degree phase shift.
In that case you will again hear the high tone that corresponds to 10,004 Hz, but now with a strong 4 Hz beat.
3. 10,000 / 10,002.0333 / 10,004 / 10,006.0333 / 10,008 Hz, all sine contributions.
In that case you will hear the 10,004 Hz tone again, but within a period of 30 seconds: the composition starts with a 2 Hz beat, after 7.5 seconds the beat gradually changes into a 4 Hz beat, after 15 seconds the beat is back at 2 Hz, at 22.5 seconds it is again at 4 Hz, and after 30 seconds the composition ends with a 2 Hz beat in the 10,004 Hz tone.
If you change the sine contributions at 10,002.0333 and 10,006.0333 Hz into cosines, the composition starts with a 4 Hz beat, is at 2 Hz at 7.5 s, at 4 Hz at 15 s, at 2 Hz at 22.5 s, and finally at 4 Hz at 30 s.
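As an illustration only (this is not the authors' Windows program), here is the minimal Python sketch referred to above; the sample rate, duration, output file name and the simple 1/f amplitude scaling are my own assumptions:

```python
import wave
import numpy as np

# Hypothetical stand-in for the authors' Windows-only program:
fs = 48000                                   # samples per second (assumed)
duration = 30.0                              # seconds (composition 3 needs 30 s)
t = np.arange(int(fs * duration)) / fs

# Composition 1: five all-sine contributions, 2 Hz apart around 10,004 Hz.
# For composition 2, change np.sin to np.cos for the 10,002 and 10,006 Hz entries.
components = [(10000.0, np.sin), (10002.0, np.sin), (10004.0, np.sin),
              (10006.0, np.sin), (10008.0, np.sin)]

# '1/f mode' stand-in: scale each amplitude by 1/f (for components this close
# together the weights are nearly equal anyway).
signal = sum((10000.0 / f) * func(2 * np.pi * f * t) for f, func in components)
signal /= np.max(np.abs(signal))             # normalize to full scale
pcm = (signal * 32767).astype(np.int16)      # 16-bit PCM samples

with wave.open("composition1.wav", "wb") as w:
    w.setnchannels(1)                        # mono
    w.setsampwidth(2)                        # 2 bytes = 16 bit
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
```

Playing the resulting file over headphones should let you listen for the beats described for each composition.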
For noise filtered by a narrow band-pass around 10 kHz it is known that we will hear just a 10 kHz tone, nothing more.
So regarding the question:
For example, do we perceive a difference between a few sinusoids around 10 kHz and band-pass filtered noise around the same frequency?
The answer is clear: although, according to existing perception theory, the different frequency contributions in the composition are entirely unresolved, we can hear differences related to different phase and frequency settings.
In this context we can look at August Seebeck’s statement that he published in the year 1844:
“How else can the question as to what makes out a tone be decided but by the ear?”
It was part of his answer to the erroneous hypotheses of Ohm about pitch perception in the famous Ohm-Seebeck dispute.
And we can add the following to it:
“After verifying the sound experiments, we are of the opinion that this theory - Applying Physics Makes Auditory Sense - is representative of the working principle of the human ear and of the cochlea.”
The sound experiments described above, with their indisputable results, are entirely based on the hearing theory of Heerens and J. A. de Ru in the booklet:
Applying Physics Makes Auditory Sense
Based on the concept in this booklet that our hearing sense differentiates and squares the incoming sound pressure stimulus, this mechanism evokes the sound energy frequency spectrum in front of the basilar membrane.
In that case Fourier series calculations show exactly the resulting frequency spectrum, including the 2, 4, 6 and 8 Hz difference frequency contributions, of which the 2 and 4 Hz frequencies are responsible for the beat phenomena.
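To make the 'differentiate and square' step concrete, here is a minimal numerical sketch (my own illustration, not the authors' software): it builds compositions 1 and 2 from the list above with equal amplitudes (the closely spaced 10 kHz components make the 1/f weighting nearly irrelevant), differentiates each component analytically, squares the result, and prints the relative magnitudes of the 2, 4, 6 and 8 Hz lines in the spectrum.

```python
import numpy as np

fs = 96000                              # assumed sample rate, Hz
t = np.arange(fs) / fs                  # exactly one second, so FFT bin k lies at k Hz

def energy_spectrum(components):
    """components: list of (frequency in Hz, 'sin' or 'cos'), all with unit amplitude."""
    # Differentiate analytically: d/dt sin -> 2*pi*f*cos, d/dt cos -> -2*pi*f*sin
    v = np.zeros_like(t)
    for f, kind in components:
        w = 2 * np.pi * f
        v += w * np.cos(w * t) if kind == "sin" else -w * np.sin(w * t)
    return np.abs(np.fft.rfft(v ** 2))  # square the velocity-like signal, then take its spectrum

comp1 = [(10000, "sin"), (10002, "sin"), (10004, "sin"), (10006, "sin"), (10008, "sin")]
comp2 = [(10000, "sin"), (10002, "cos"), (10004, "sin"), (10006, "cos"), (10008, "sin")]

for name, comp in (("all sine", comp1), ("alternating sin/cos", comp2)):
    spec = energy_spectrum(comp)
    vals = {k: float(spec[k]) for k in (2, 4, 6, 8)}
    top = max(vals.values())
    print(name, {k: round(v / top, 3) for k, v in vals.items()})
```

Running this prints approximately {2: 1.0, 4: 0.75, 6: 0.5, 8: 0.25} for the all-sine case and {2: 0.0, 4: 1.0, 6: 0.0, 8: 0.333} for the alternating case, i.e. a dominant 2 Hz difference component for composition 1 and a dominant 4 Hz component for composition 2, in line with the beats described above.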
In our view, however, the currently accepted hypothesis that stimuli are transferred in the cochlea via a transmission line mechanism is under heavy fire.
That hypothesis is even fundamentally erroneous. It is based on the interpretation of a 'wavy movement' with a short wavelength observed on the basilar membrane, from which, in combination with the accompanying vibration frequency and the equation v = f × λ [propagation velocity = frequency × wavelength], an extremely low propagation velocity for mechanical vibrations in the cochlea is derived.
The fundamental error made here is that the propagation velocity for mechanical vibrations and waves, and hence also for acoustical vibrations, is a physical quantity that is bound to the medium [solid, liquid or gas] in which the propagation takes place.
Because of that, the propagation velocity of the mechanical vibration, fixed by the medium involved, is the decisive quantity for the relation that will exist between the frequency and the wavelength.
And not vice versa, as has been applied for many decades in the scientific world of hearing.
With my non-stationary Bernoulli effect I (Heerens) have established that no traveling wave with an extremely small wavelength that transports vibration energy can propagate inside the cochlear duct, neither forwards nor backwards.
And this means that an active transmission line cannot exist.
Nor can there be OAE stimuli of which it is stated that they are transported backwards toward the middle ear cavity by backward traveling waves.
From this it is crystal clear that we do not believe, firm as a rock, in traveling waves, whether forwards or backwards, nor in oto-acoustic emission stimuli that are evoked anywhere on the basilar membrane, transferred in the cochlear duct to the middle ear system by means of backward traveling waves, and finally become detectable as sound in the outer ear canal.
In the course of time we have only become more and more convinced that the content of our presentations, together with the booklet, forms an extraordinarily strong basis for defending the new paradigm formulated by Heerens.
If all persons involved still have the intention of bringing our knowledge of the hearing sense to a higher level, and subsequently of developing possibilities for effective treatment of hearing disabilities for the benefit of the worldwide, gigantic group of hearing impaired people who still lack healing therapies, then we must cooperate extremely intensively to realize that goal.
Keep reading.
All these intentions are clearly expressed on this webpage.
... and transformed into deeds.
The second reason for rejecting the traveling wave concept is the following: Heerens has also studied the different possibilities for 'traveling waves' in the literature, looking especially at the conditions, parameters and geometrical dimensions under which such waves can exist.
In short (you do not need expensive literature retrievals, because you can read a summary of the possible wave forms on Wikipedia) there are three forms to distinguish:
1. Rayleigh waves
Rayleigh waves are a type of surface acoustic wave that travels on solid materials. Their typical speed is slightly less than that of shear waves, by a factor (dependent on the elastic constants) given by the bulk material; this speed is of the order of 2-5 km/s. For a sound signal with a frequency of 1000 Hz this means a minimal wavelength of approximately 2 meters. Since the BM has a length of approximately 35 millimeters, it is impossible to make a realistic combination for application in the cochlea.
Besides that, Rayleigh waves are surface waves for which the thickness of the material must be relatively large compared to the wavelength concerned. With a thickness of only a fraction of a millimeter for the BM, you can forget that this type of wave plays a role in the BM vibrations.
2. Love waves
In the field of elastodynamics, Love waves, named after A. E. H. Love, are horizontally polarized shear waves guided by an elastic layer that is "welded" to an elastic half space (a very thick part of bulk material) on one side while bordering a vacuum on the other side. In the literature one finds that the wavelength of these waves is relatively longer than that of Rayleigh waves. These conditions and parameters, too, are nowhere found in the cochlear partition.
3. Lamb waves
Lamb waves propagate in solid plates. They are elastic waves whose particle motion lies in the plane containing the direction of wave propagation and the plate normal (the direction perpendicular to the plate). In 1917 the English mathematician Horace Lamb published his classic analysis and description of acoustic waves of this type. The propagation velocities of the two possible Lamb wave modes are comparable with that of the Rayleigh wave, so they do not provide a possible basis for a traveling wave description inside the cochlea either.
In other words: we cannot make a realistic fit with Lamb waves inside the cochlea either. Of course everybody can persist in believing that the auditory experimental results registered so far justify the hypothesis that such types of waves can exist in the cochlea.
Then, however, you are forced to answer the following question:
On what underlying physical grounds is it possible that the material quantities and acoustic process parameters inside the cochlea are altered in such a way that the wavelength of 1.5 meter for a 1000 Hz stimulus in bulk perilymph fluid is reduced to less than 1.5 millimeter?
As can be seen from the Rayleigh, Love and Lamb waves, the circumstances and material properties cannot provide a scaling factor better than about 0.5 from the bulk material sound velocity to the wave type concerned.
Be aware that inside the cochlea a scaling factor of 0.001 or even smaller would have to be possible. This can be considered completely impossible.
What remains is exactly what Heerens stated: the described non-stationary Bernoulli effect, which provides the sound energy stimulus everywhere in front of the BM, drives the BM vibrations.
I (Yves Mangelinckx) have always wondered what drives the BM vibrations.
It is the everywhere present sound energy stimulus that drives the BM.
I (Heerens) have derived the analytical solution for the non-stationary, non-viscous, incompressible, time dependent to-and-fro movements directed along the core of the perilymph duct, because in that case the reduction of the complex set of Navier-Stokes equations to the non-stationary Bernoulli equation is fully permitted.
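For orientation only (a textbook-level remark, not a quotation from Heerens' derivation): for incompressible, inviscid flow the Navier-Stokes equations reduce to the Euler equations, and integrating the Euler equation along a streamline yields the unsteady Bernoulli relation quoted earlier, p + ½·ρ·v² + ρ·∂φ/∂t = C(t). The ½·ρ·v² term carries the squared velocity dependence that, for a sinusoidal perilymph movement, produces the constant plus doubled-frequency pressure pattern discussed above.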
[Figure: Organ of Corti operation. Inner hair cells are the leftmost row, outer hair cells are the other three rows.]
On this website you will find Supporting Material
Promotional Material and Downloads
ISBN 978-90-816095-1-7
Applying physics makes auditory sense
A New Paradigm in Hearing
Willem Chr. Heerens
and
J. Alexander de Ru
©2010 Heerens and De Ru
And finally we can explore the calculated results in real sound experiments.
For this purpose Yves Mangelinckx, co-author of the appendices, has developed a relatively simple, easy-to-use and efficiently operating software program.
[See also Appendix I or Appendix II]
Direct presentation of composed sound fragments as a result of our experiments. In case you are not able to use the calculation program mentioned in Appendix I, this Appendix II and the associated sound fragments, calculated by us with the program designed by Yves Mangelinckx, provide you with the possibility to listen to the predicted residual pitch and beat phenomena as described in Chapter 3 of this booklet. For each experiment described in Chapter 3 we have filled in the correct frequencies within the calculation program, and composed a sound complex fragment with a ten second duration. You are invited to download the composed sound fragments.
[Video 1: Movement of the basilar membrane, by the Bernoulli effect.]
[Video 7: Movement of the basilar membrane at 2f.]
Heerens and De Ru
“Not the end, but merely a beginning!”
“Bernoulli's Law”
“The incoming sound signal is transformed into the sound energy signal inside the cochlea. It is this signal that evokes both the mechanical vibrations in the basilar membrane and the corresponding electrical stimuli in the organ of Corti, stimuli that are subsequently sent to the brain in a frequency selective manner.”
“transforms = differentiates and squares (within the alternating perilymph movement), so yes! it transforms into the sound energy signal (inside the alternating perilymph movement)”
yes! (in the yellow-dotted path of the perilymph duct)
and only by the alternating perilymph movement
hydrodynamics inside the alternating movement
... forward - back - forward - back - forward - back ... (the alternating movement in the perilymph duct)
Presentation: Applying Physics Makes Auditory Sense, on Prezi
“Based on our insights derived from literature we arrive at two more basic principles that form the cornerstones of our model: namely, the fact that the attenuation of the eardrum and the ossicular chain are at the root of the extremely large dynamic range of our auditory sense, and the fact that the bone conduction phenomenon is actually the result of the push-pull movement of the perilymph fluid instead of the presumed deformation of the bony structures.”
“This revised study of the entire set of mechanisms and functions, actually a new and exciting paradigm, enables us to explain most if not all of the, thus far unsolved, major mysteries in the functioning of the auditory sense.”
The content of the book is divided into nine chapters.
As I mention in our booklet, Wever and Lawrence actually show that a cochlear microphonic effect related to a sound stimulus only occurs when there is movement of the perilymph, not when there is merely a pressure wave inside the perilymph.
The movement of this incompressible fluid column is only possible if the stapes/oval window and the round window move in opposite directions, just as has been observed so many times. So both the oval and the round window must deflect, together with the eardrum.
For lower sound pressure stimuli these deflections behave linearly. But for higher sinusoidal sound pressure stimuli on the eardrum, its deflections become nonlinear: they change from purely sinusoidal into deflections with increasingly 'flattened' large excursions. The same holds for the mechanical contributions of the oval and round windows. This means that the perilymph movement is no longer purely sinusoidal, but tends to develop in the direction of a smoothed block function, loaded with higher harmonics at the cost of the amplitude of the fundamental (center) frequency.
This is equivalent to harmonic distortion. After the differentiation [from perilymph displacement to perilymph velocity] and the squaring [from perilymph velocity to pressure differences, via the Bernoulli effect, on the walls of the perilymph duct] the basilar membrane is finally stimulated with the frequencies of the sound energy signal.
So next to a less than linearly increasing fundamental (center frequency) deflection of the basilar membrane at the center frequency resonance place, the higher harmonic contributions, caused by the nonlinear deflections of the three membranes involved, also start to evoke deflections of the basilar membrane closer to the round window, at the places with the corresponding higher harmonic resonance characteristics.
This complete process has nothing to do with the existence of an outer hair cell driven cochlear amplifier. It is simply nonlinear mechanical behavior of the membranes involved, i.e. the eardrum, the oval window and the round window.
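A minimal numerical sketch of this argument (my own illustration; the tanh saturation is only a stand-in for the 'flattened' membrane excursions described above, not a model taken from the booklet): it drives a saturating nonlinearity with a sinusoid at a low and a high level and prints the relative strength of the harmonics that appear.

```python
import numpy as np

fs = 8000                           # assumed sample rate, Hz
f0 = 100                            # assumed stimulus frequency, Hz
t = np.arange(fs) / fs              # exactly one second, so FFT bin k lies at k Hz

def harmonic_levels(drive):
    x = np.sin(2 * np.pi * f0 * t)
    # tanh saturation: nearly linear at low drive, 'flattened' excursions at high drive
    y = np.tanh(drive * x) / np.tanh(drive)
    spec = np.abs(np.fft.rfft(y))
    return {n: round(float(spec[n * f0] / spec[f0]), 4) for n in (2, 3, 5)}

print("low drive :", harmonic_levels(0.1))  # nearly sinusoidal: harmonics negligible
print("high drive:", harmonic_levels(5.0))  # flattened waveform: strong 3rd and 5th harmonics
```

Note that this symmetric stand-in nonlinearity produces only odd harmonics; an asymmetric membrane nonlinearity would add even harmonics as well.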
But please keep in mind that we have to abandon the current hypothesis that the ossicular chain with its lever is dimensioned to overcome the extreme difference in acoustic impedance between air and fluid.
There is nothing more than a small fluid column that 'oscillates' with a displacement of a few micrometers under the acoustic pressure stimulus. There is no need for the acoustic impedance transformation that is hypothesized in current hearing models.
Let me give a similar example: another nonlinear phenomenon that is unexplained in current hearing theory, and relatively closely related to the nonlinear behavior just mentioned, is the so-called half octave shift.
In their paper Cody and Johnstone have clearly described what is observed:
If the ear is exposed to a pure sinusoidal tone with an intensity of 75 dB or higher during a longer period of time, a difference in sensitivity is found between the situations before and after the exposure. The auditory sensitivity is temporarily decreased due to the exposure, but audiometry surprisingly shows that the difference does not occur at the frequency used, but at a frequency half an octave higher, and that independently of the primary frequency used.
Cody AR, Johnstone BM. (1981) Acoustic trauma: single neuron basis for the "half-octave shift". J Acoust Soc Am 70(3), Sept. 1981.
And now my explanation for this peculiar phenomenon:
[Video 8: Half octave shift.]
The relationship between frequency and wavelength is covered in any physics textbook.
About the interpretation of the equation which expresses that the wave velocity equals the frequency multiplied by the wavelength [in common wave propagation theory in physics]:
Take sound as an example.
The traveling wave equation states that the propagation velocity of the wave [in m/s] equals the frequency [in Hz] multiplied by the wavelength [in m].
This equation must be interpreted in the following way: the speed of a (sound) wave that moves through a medium does not depend on its frequency or its wavelength.
The speed of sound, and hence also the speed with which sound energy is transported, is a material constant: it depends only on a number of properties of the medium, and the only way to change that speed is to change the properties of the medium.
Once the speed of sound in a medium is fixed, the above equation expresses the relation between the (sound) frequency and the wavelength.
The two have an inverse relationship.
Given the frequency of the wave, the wavelength is equal to the speed (of sound) in the medium divided by the frequency.
Or in reverse:
Given the wavelength of the wave, the frequency is equal to the speed (of sound) in the medium divided by the wavelength.
In textbooks you can read: the speed of sound in fluids and solids is given by the square root of the bulk modulus [in pascal] divided by the density [in kg/m³]. As an indication: this results in a speed of 1858 m/s for glycerin and 870 m/s for paraffin oil.
You can see that the wave propagation velocity in a medium, the acoustic vibration frequency and the corresponding wavelength obey the common basic relation: the wave propagation velocity equals the acoustic vibration frequency multiplied by the corresponding wavelength.
This relation is one of the fundamental cornerstones of common wave propagation theory in physics.
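As a small worked check (my own illustration; the bulk modulus and density values are commonly quoted approximate figures, not taken from the text):

```python
import math

# c = sqrt(K / rho): speed of sound from bulk modulus K [Pa] and density rho [kg/m^3]
K_glycerin, rho_glycerin = 4.35e9, 1260.0     # approximate literature values
c_glycerin = math.sqrt(K_glycerin / rho_glycerin)
print(round(c_glycerin), "m/s")               # ~1858 m/s, matching the figure quoted above

# lambda = c / f: wavelength from propagation speed and frequency
c_perilymph = 1500.0                          # m/s, water-like value used later on this page
for f in (20.0, 1000.0, 20000.0):
    print(f, "Hz ->", round(c_perilymph / f, 3), "m")   # 75 m, 1.5 m, 0.075 m
```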
For the definition of a wave you can then look in the Webster Dictionary.
Webster Dictionary definition of a wave:
Webster's dictionary defines a wave as "a disturbance or variation that transfers energy progressively from point to point in a medium and that may take the form of an elastic deformation or of a variation of pressure, electric or magnetic intensity, electric potential, or temperature."
Be aware that the equation expressing that the wave velocity equals the frequency multiplied by the wavelength can easily lead to a completely erroneous interpretation.
For the physics in this equation you have to be aware of, for example, the following: measuring the wavelength of the 'wave' evoked by the frequency stimulus and then calculating the propagation speed of that 'wave' by multiplying wavelength by frequency has nothing to do with correct physics.
It is important to know that the speed of a sound wave moving through a medium does not depend on its frequency or its wavelength; the speed is given by the medium, namely by the square root of the bulk modulus divided by the density, as indicated above.
With this you can explore the relationship mentioned above.
And now we can look at: the content of the book
Applying physics makes auditory sense
And we can look at:
Significance of the present findings for the concept of a traveling wave
In a 1954 paper, Wever, Lawrence, and von Békésy reconciled some of their views on the nature of the traveling wave. They stated that when the cochlea is stimulated with a tone, a BM "displacement wave seems to be moving up the cochlea. Actually...each element of the membrane is executing sinusoidal vibrations...different elements...executing these vibrations in different phases. This action can be referred to as that of a traveling wave, provided that...nothing is implied about the underlying causes. It is in this sense that Békésy used the term ‘traveling wave’..." [pp. 511-513 of Wever et al. (1954)].
And we can look at:
Ren’s unintentional attack on Von Békésy’s “Traveling Wave Theory”
The paper of Ren is:
Ren T. Longitudinal pattern of basilar membrane vibration in the sensitive cochlea. Proceedings of the National Academy of Sciences (PNAS), December 24, 2002, vol. 99, no. 26, 17101-17106.
Experiment: laser interferometric measurements of the basilar membrane movement, in the 13.3-19 kHz area of the basilar membrane of a gerbil.
Results: the movement of the basilar membrane, from the higher frequency side towards the lower side, is restricted to 300 μm on either side of the point of maximum activity. The shape of the movement was exactly symmetrical around this point.
How do we have to interpret that "wavy" movement of the basilar membrane?
Here we have to observe the following facts of physics:
In a medium [gas, liquid, solid material] there exists a uniform relation between the propagation velocity v of sound or vibration, the frequency f and the wavelength λ of the sound or vibration wave:
v = f × λ
v is lowest in gases: in air 330 m/s.
v in water, and also in perilymph, is about 1500 m/s.
v is highest in solid materials, up to about 8000 m/s.
Together with the lowest [20 Hz] and highest [20,000 Hz] sound frequencies that we are able to hear, the wavelength in the perilymph therefore varies from 75 meters to 7.5 cm.
Always significantly larger than the size of the cochlea.
Consequences:
In the much shorter perilymph duct no "sound wave" can run.
The perilymph between the oval and round windows can only move forwards and backwards as a whole.
The tissue around the perilymph channel behaves more like a solid material than like a liquid.
That tissue would need a much larger size for a traveling wave.
Conclusion:
A traveling wave cannot propagate inside the cochlea.
But what kind of movement is observed then?
To answer that, we must first look at the way a single resonator moves.
A resonator consists of a body connected to a spring, and in practice it also possesses damping.
If the body is given a deflection against the spring force and is then released, it will move harmonically, with decreasing amplitude, around the equilibrium point.
The frequency in that case is known as the resonance frequency fr.
Let us now observe the reaction of a spring-mass system to a periodic stimulus.
If the resonator is driven into a vibrating movement, three different situations can exist, depending on the relationship between the stimulus frequency f and the resonance frequency fr:
f < fr: reduced, in-phase movement, with phase angle 0.
f = fr: increased movement due to resonance, but also a phase retardation with phase angle ½π.
f > fr: strongly reduced movement in the opposite direction, with phase angle π.
Next comes the remarkable mechanical layout of the basilar membrane:
The basilar membrane [BM] consists of an array of small resonators with gradually decreasing resonance frequencies from the round window up to the helicotrema.
In the case of an everywhere equal, in-phase stimulus on the entire BM, the following happens:
All parts of the BM with fr > f move in phase with the stimulus.
That movement becomes larger as fr approaches f, and its phase gradually retards.
At resonance the movement is large and there is a phase retardation of ½π.
All parts of the BM with fr < f move more and more in opposite phase to the stimulus, with increasingly reduced deflection.
And what phenomenon is comparable to this?
The "wave" in the stadium!
Depending on the quality factor at resonance, which is strongly coupled to the rate of damping, the moving area becomes smaller while the maximum deflection becomes larger.
On theoretical grounds it is no mystery that this "wavy movement" of the BM always runs from the round window [base] towards the helicotrema [apex] of the cochlea.
It is a locally bound reaction to a universally present stimulus.
Using the material specifications this behavior can be calculated perfectly.
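A minimal sketch of this picture (my own illustration with assumed parameter values for the quality factor, the stimulus frequency and the number of resonators; it is not the conformal-transformation calculation mentioned below): an array of independent damped resonators with logarithmically decreasing resonance frequencies is driven by one common in-phase sinusoidal stimulus, and the amplitude and phase lag of each resonator follow from the standard forced-oscillator response.

```python
import numpy as np

f_stim = 1000.0                      # common stimulus frequency, Hz (assumed)
Q = 20.0                             # assumed quality factor of each local resonator
n = 200                              # number of 'resonators' from base to apex
# Logarithmically decreasing resonance frequencies from base (20 kHz) to apex (20 Hz):
f_res = np.logspace(np.log10(20000.0), np.log10(20.0), n)

# Standard forced, damped oscillator response (up to a constant factor):
H = 1.0 / (f_res**2 - f_stim**2 + 1j * f_stim * f_res / Q)
amplitude = np.abs(H)
phase_lag = -np.angle(H)             # 0 below resonance, pi/2 at resonance, towards pi above it

for fr_target in (2000.0, 1100.0, 1000.0, 900.0, 500.0):
    i = int(np.argmin(np.abs(f_res - fr_target)))
    print(f"fr ~ {f_res[i]:7.1f} Hz: relative amplitude {amplitude[i] / amplitude.max():.3f}, "
          f"phase lag {phase_lag[i] / np.pi:.2f} pi")

# Plotting amplitude * cos(2*pi*f_stim*t - phase_lag) against position for successive
# instants t reproduces the apparent 'wave' running from base to apex described above.
```

The printed amplitudes peak at the resonator tuned to the stimulus frequency, while the phase lag sweeps from 0 through ½π to π along the array, which is exactly the stadium-wave-like pattern described in the text.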
And now we can look again at the content of the book
Applying physics makes auditory sense
and in particular at Figure 5 of the book, together with an animation of it:
Deflection profiles of the basilar membrane around fc in sequential steps of T/12.
[Video 4: Forced movements of a mass-spring system under a periodic stimulus.]
Indeed, the documented remarks of Von Békésy that calculating the behavior of the basilar membrane was far too complicated and would only lead to useless 'armchair theoretical speculations' are erroneous.
Using complex number mathematics, and especially conformal transformations, I (Heerens) could find the solution for the case of a homogeneous pressure stimulus on the basilar membrane; the result is presented in Fig. 5 of our booklet. And that result is in full agreement with the findings of Tianying Ren in his interferometer experiments on basilar membrane stimulation.
[These complete calculations will be part of the next book we intend to publish.]
If you study the history behind Georg von Békésy's Nobel prize, you will see that there were already serious doubts about his statements in those days. Earlier, in 1953-54, he had a serious dispute with the two other leading auditory authorities of that era, Glen Wever and Merle Lawrence, which ended in their combined paper that I have referred to in the booklet:
Wever EG, Lawrence M, Von Békésy G. (1954) A note on recent developments in auditory theory. Proc Natl Acad Sci USA 40: 508-512.
The main conclusion of that paper? It remains mysterious how the cochlea manages the transfer of acoustic energy into the electrical signals to the brain. And actually this mystery persists to the present day.
And what has happened so far with our contribution in the review processes of the different scientific journals? It follows exactly the timetable for a paradigm shift described by the philosopher of science Thomas Kuhn in his essay 'The Structure of Scientific Revolutions'.
Unhindered by his disdain, and as always following the curiosity that leads the way in science, one can do the following math:
Start by calculating the response to a sinusoidal pressure stimulation of a given frequency that acts uniformly on the basilar membrane, while this membrane is divided infinitesimally into an array of individual resonators with a logarithmically decreasing resonance frequency from base to apex.
The reason for this uniform pressure stimulation is that it has been shown that the perilymph moves as a whole fluid column along the front side of the basilar membrane, thus resulting in uniform pressure effects on the basilar membrane as well.
Making use of complex function theory and conformal transformations, this general vibrational transfer model of the basilar membrane, despite its complexity, offers an analytical solution.
[Complex function theory and conformal transformations: deflection profiles of the basilar membrane]
[These complete calculations will be part of the next book we intend to publish.]
[Video 2: Movement of the basilar membrane, by the Bernoulli effect.]
What's more, this solution has led to a very useful result:
it is in accordance with what Ren and his team observed in their direct laser interferometer measurements of basilar membrane movements.
Ren’s unintentional attack on Von Békésy’s Traveling Wave Theory
The paper of Ren is: Longitudinal pattern of basilar membrane vibration in the sensitive cochlea
Proceedings of the National Academy of Sciences - pnas.org PNAS | December 24, 2002 | vol. 99 | no. 26 | 17101-17106.
Experiment: Laser interferometric measurements of the basilar membrane movement, in the 13.3 – 19 kHz area of the basilar membrane of a gerbil.
Results: The movement of the basilar membrane, from the higher frequency side towards the lower side, is restricted to 300 μm on both sides of the point of maximum activity. The shape of the movement was exactly symmetrical around this point.
The authors of the manuscript "Applying Physics Makes Auditory Sense" have paid rather a lot of attention to the form of displacement, which corresponds with the form that Ren et al. have actually measured.
So, there is a discrepancy between the assumed travelling wave from current theories and the experimental results by Ren et al.
In their experiments Ren et al. observed a short "wave pattern", symmetrically divided on either side of the point of resonance. What's more, according to Ren et al., the movement of this observed wave pattern along the basilar membrane, running from base to apex, did not decrease in speed.
According to the manuscript "Applying Physics Makes Auditory Sense": due to the peculiar basilar membrane resonance possibilities found in practice, a uniform sinusoidal pressure stimulus results in a mirror symmetrical phase wave pattern that shows a propagating wave running from base to apex. And this waveform on the basilar membrane is identical to the one Ren et al. observed in their laser interferometer experiments on gerbils.
The reason for this phase dependent behavior is explained in more general terms in the manuscript "Applying Physics Makes Auditory Sense".
A detailed mathematical explanation and analytical calculation has been excluded from that manuscript, but is available.
[These complete calculations will be part of the next book we intend to publish.]
That is a phenomenon that always shows up in normal hearing persons: the higher the acoustic stimulus is offered above a certain level, the more the frequency selectivity is reduced.
In our hearing theory this is also an absolutely normal behavior. The explanation, based on physics, is even extremely simple.
The basilar membrane has a logarithmically distributed frequency resonance sensitivity. How that is tonotopically organized was already observed by Helmholtz: an array of tiny bar- or fiber-shaped organelles, perpendicular to the longitudinal core of the cochlear duct, is embedded in the basilar membrane, showing a structure like a harpsichord or, even better, a xylophone. The stiffness of the basilar membrane itself is also anisotropic: the coupling between those ‘xylophone’ bars is weaker than the stiffness in the direction along the bars.
If the basilar membrane is locally tuned in its resonance performance to be identical to the resonance frequency of the local bar, the entire structure looks like a frequency analyzer, based on locally distributed resonance.
Actually a very recent paper by Eze and Olson is in full agreement with my statements:
Eze N, Olson ES. (2011) Basilar membrane velocity in a cochlea with a modified organ of Corti. Biophys J 100(4): 858–67.
They describe in the abstract:
For small excursions of the basilar membrane the interaction between adjacent bars via the membrane tissue remains small, which leads to a relatively high resonance quality factor for each of the ‘bar dominated’ resonators.
However, if the applied resonance signal becomes larger, the limited stretching possibilities of the membrane tissue between adjacent bars affect the resonance of each of the two bars. This is because they become more closely coupled mechanical resonators that have different resonance frequencies.
In that case the resonance quality factor of each of the resonators decreases. And as shown in Fig. 3, the resonance peak spreads while the peak height is reduced.
Mathematically it can even be proven that the area under each curve, which is equal to the integral over the frequency ratio, is the same for each damping ratio.
The lower the quality factor, the higher the damping factor and the broader the peak curve around center frequency fc will be. [purple to red curves in Fig.3].
And in that case we have to deal with the fact that each of two adjacent frequencies is spreading its peak in a broader area around its center frequency.
The peaks will overlap each other, and as soon as the so-called ‘full-width-at-half-maximum’ value reaches the critical value for which no central dip between the two peaks exists, the two frequencies cannot be detected separately.
This is shown by the left and center peak pairs in Fig. 4. From left to right the selectivity conditions become better, and in the rightmost peak combination the two peaks can actually be observed as separated.
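A small numerical sketch of this peak-broadening argument, with purely illustrative numbers: two resonance curves with nearby center frequencies are summed, and the number of separate maxima in the combined response is counted for decreasing quality factor Q (increasing damping).

    import numpy as np

    def resonance_curve(f, fc, Q):
        """Normalized amplitude response of a resonator with center frequency fc and quality factor Q."""
        r = f / fc
        return 1.0 / np.sqrt((1 - r**2) ** 2 + (r / Q) ** 2)

    f = np.linspace(3000.0, 7000.0, 8000)          # Hz, frequency axis (illustrative range)
    fc1, fc2 = 4800.0, 5200.0                      # two adjacent stimulus frequencies (example values)

    for Q in (50.0, 10.0, 3.0):                    # from low damping (sharp peaks) to high damping
        combined = resonance_curve(f, fc1, Q) + resonance_curve(f, fc2, Q)
        interior = combined[1:-1]
        n_maxima = np.sum((interior > combined[:-2]) & (interior > combined[2:]))
        print(f"Q = {Q:4.0f}: {n_maxima} separate local maxima in the combined response")

With a high Q the two peaks remain separately visible; once the damping broadens them enough, the central dip disappears and only a single merged maximum remains.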
First I have to bring to your attention that in our hearing concept the possibility of frequency dependent nonlinearity in the movement of the eardrum, oval window and round window is taken into account as well.
For instance: let us observe what happens if we raise the amplitude of an acoustic stimulus in our concept.
The sound energy will rise as well, but in a quadratic way. And not only will the sound energy frequency spectrum be evoked on the basilar membrane, but also the time-average of the overall sound energy signal. That last signal is actually similar to the DC component in the cochlear potential variations: the signal that also varies by 6 dB if the acoustic pressure stimulus is varied by a factor of 2, as observed and reported by Wever and Lawrence, and later by Voss, Rosowski and Peake.
Wever EG, Lawrence, M. (1950) The acoustic pathways to the cochlea. JASA 22: 460-7.
Voss SE, Rosowski JJ, Peake WT. (1996) Is the pressure difference between the oval and round windows the stimulus for cochlear responses? JASA Sept 100(3): 1602-16.
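For reference, the 6 dB figure follows directly from the quadratic relation between pressure and energy; this is just the standard decibel arithmetic, not taken from the cited papers:

    E \propto p^{2} \quad\Longrightarrow\quad \Delta L = 10\,\log_{10}\!\frac{(2p)^{2}}{p^{2}} = 10\,\log_{10}4 \approx 6.02\ \mathrm{dB}.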
Based on observations, we have hypothesized in our theory that this signal is used as the control signal for the contraction of both musculus tensor tympani and musculus stapedius. This in order to set, via the eardrum and ossicular chain, the signal transfer to an optimal level for the inner ear system.
In different circumstances of the offered soundscapes on the one hand and different system errors in the outer ear, middle ear and inner ear on the other hand, this leads to logically predictable symptoms.
Let me show you a few of these results.
1) A normal hearing person in an airplane cabin during both departure and landing – in the period in which the cabin pressure is falling or rising, respectively, by at most 0.24 atm between ground level and altitude – experiences an unpleasant feeling of fullness in the ears and a certain reduction in hearing sensitivity.
In the current hearing theory this phenomenon is solely attributed to the pressure difference over the eardrum, caused by the change in cabin pressure, that loads the eardrum.
Chewing or swallowing, which opens the Eustachian tube, is advised and reduces the symptoms somewhat. But swallowing more frequently, which keeps the changes in pressure difference over the eardrum smaller than infrequent swallowing does, has by far not the influence we would expect, or even none at all.
In our hearing theory the explanation of this phenomenon is really different, but compelling.
In both situations, the increasing difference in pressure over the eardrum forces the eardrum to deflect in the direction of the ‘under-pressure’ area. That can be the middle ear cavity in case of landing and the environment in case of departure.
2) The same ‘fullness in the ear’ and the reduction in hearing abilities is experienced by Ménière patients.
The endolymphatic hydrops causes the basilar membrane to bend away from the tectorial membrane and stretches the hair bundles of the outer hair cells. Exactly what the DC contribution of a sound energy signal would do. It mimics the influence of a constant and too loud acoustical signal.
So this erroneous DC signal also passes on the signal to tension the middle ear muscles, creating ‘fullness in the ear’ and the symptom of reduced hearing.
But for Ménière patients another phenomenon also occurs inside the cochlea.
The extra stretching of the basilar membrane changes the mechanical properties of the membranous material. The tonotopical resonance frequency distribution of the membrane is changed – higher basilar membrane resonances shift towards the helicotrema – while the tonotopical distribution of the ‘xylophone‘ array (mentioned before) stays bound to its place.
This reduces the quality factor of the combined resonance performance, reduces the local basilar membrane deflection – causing an extra reduction in hearing – and broadens the resonance profiles on the basilar membrane – causing a reduction in frequency selectivity.
And as a sideline:
I only mention this to show you that apparently a major part of my hearing problems – in the current hearing theory always regarded as outer hair cell destruction, and thus incurable sensory-neural hearing loss – is reversible, at least to a significant percentage.
3) But the same holds for another example, the phenomenon of presbyacusis.
Prof. Robert Frisina from Rochester Institute of Technology, National Technical Institute for the Deaf, has reported experiments that show just the opposite effects related to my ‘treatment’ of Ménière’s disease.
He prescribed aldosterone to increase the neural activity in presbyacusis patients. And he reported improvements in hearing in the case of increased potassium intake in elderly people.
To me it looks as if presbyacusis is to some extent the opposite of Ménière’s disease.
My hypothesis here?
If the potassium concentration in the endolymph is constitutionally lowered in elderly people, the endolymph pressure will also be reduced and the basilar membrane will have lost some of its tension. This can result in a shift of the basilar membrane resonance distribution just opposite to that for Ménière patients. But the tuned xylophone structure remains at its location and the combined system of membrane and xylophone gets an increased damping factor.
The total resonance profile of the basilar membrane shifts in such a way that higher resonance frequencies reduce more than the lower ones. But again the overall decrease of the local resonance quality factor generates a decrease in frequency selectivity, comparable with that of Ménière patients.
Because in our hearing theory both Ménière patients and presbyacusis patients undergo resonance peak broadening, due to increased damping, which leads to a decrease in frequency selectivity, it isn’t remarkable that both groups of patients show diminished speech recognition capabilities.
And now, after these explanations:
I have a question:
Are you still so absolutely confident that the above mentioned phenomena cannot be explained in our new hearing paradigm? And, even more, do you still seriously doubt that our new hearing paradigm is capable of explaining existing auditory mysteries and anomalies, and of predicting new phenomena that are not possible in the current hearing theory?
For the moment, with the above explanation as a basis:
Basic physics, like the Bernoulli effect I have used in the booklet and in my explanations here above, provides even better hypotheses for the compressive nonlinearity on the basilar membrane and the level dependent frequency selectivity. Besides that: successive investigations of basilar membrane deflection phenomena indicate more and more saliently that such cochlear amplifier structures or organelles are not found in the cochlear duct. The less than one percent motility of hair bundles can hardly be regarded as a serious candidate for an approximately 60 – 70 dB auditory amplifier gain.
A) In general, it follows from our hearing theory that all types of dysfunction between the outer ear and the basilar membrane affect our hearing abilities in the linear domain. This means that even if such a dysfunction is frequency dependent, as long as it does not give a zero signal transfer – in the current hearing theory such dysfunctions are often, as a rule, ascribed to perceptive hearing disabilities – it can be compensated with a well-tuned filter array. And all present hearing aids are capable of performing such a compensation, of course with quality differences related to their design.
But that compensation isn’t possible with a present generation external hearing aid as soon as the disability in hearing is located in the basilar membrane, the organ of Corti, the nerve connection between the cochlea and the auditory cortex, and finally the parts of the brain involved in the auditory processes.
This is because we have to take into account that in the cochlea the two process steps of differentiation and squaring take place.
Let me give, for instance, the example of presbyacusis, originating in a reduced sensitivity for the higher frequencies in the basilar membrane or the organ of Corti.
Fig. 5 shows schematically what happens if we compensate the reduced higher frequency sensitivity with a conventional hearing aid.
For the patient involved, the red curve in Fig.5 shows the audiogram. Approximately 35 – 40 dB overall hearing loss and 70 dB for the tones of 4800 and 5200 Hz.
To obtain a flat sensitivity curve (blue in the figure) we have to use an overall amplification for the lower tones of 40 dB and for the frequencies of 4800 and 5200 Hz an amplification of 70 dB.
If we apply that with a conventional hearing aid the sound pressure contributions of the two higher frequencies will need an extra amplification of 30 dB above the overall amplification.
However, that 30 dB extra amplification of the sound pressure contributions of 4800 and 5200 Hz will also generate, via the Bernoulli effect, their difference frequency of 400 Hz on the basilar membrane, with the same 30 dB amplification as the two primary contributions.
But the area on the basilar membrane where this difference frequency contribution will evoke resonance doesn’t need that 30 dB extra amplification. That 400 Hz signal should be amplified, like all the other frequencies generated in that part of the auditory spectrum, with the overall 40 dB and not with the 70 dB that is actually in use.
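Where that 400 Hz contribution comes from in this paradigm can be written out in one line of trigonometry; the amplitudes a1 and a2 below are illustrative symbols for the two amplified primary contributions in the perilymph velocity:

    v(t) = a_1 \cos(2\pi f_1 t) + a_2 \cos(2\pi f_2 t), \qquad f_1 = 4800\ \mathrm{Hz},\ f_2 = 5200\ \mathrm{Hz},

    v^2(t) = \tfrac{1}{2}\left(a_1^2 + a_2^2\right)
           + \tfrac{1}{2} a_1^2 \cos(4\pi f_1 t)
           + \tfrac{1}{2} a_2^2 \cos(4\pi f_2 t)
           + a_1 a_2 \cos\!\big(2\pi (f_2 - f_1) t\big)
           + a_1 a_2 \cos\!\big(2\pi (f_2 + f_1) t\big).

The squared signal thus contains a term at f2 - f1 = 400 Hz whose size grows together with the amplification of both primaries, which is why the extra gain applied to the primaries also affects that region of the basilar membrane.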
The elderly presbyacusis patient will definitely complain about the annoying sound impression of the apparatus, which is comparable with recruitment, and after a short test period he/she will decide not to buy it.
A hearing aid that can compensate for these perceptive hearing problems must have a completely different functional scheme.
First we have to consider the following. If we measure the audiogram of a pure presbyacusis patient – so with purely perceptive hearing loss – by detecting the sensitivity thresholds of the standard frequency series 125 + 250 + 500 + 1000 + 2000 + 4000 + 8000 Hz, then according to the new hearing paradigm we actually measure the data belonging to the frequency-doubled series of basilar membrane stimulations, so to the series 250 + 500 + 1000 + 2000 + 4000 + 8000 + 16000 Hz. And it is this spectrum that is transferred in a deformed way to the hearing center in the brain.
With that knowledge we can now design the new hearing aid. It is clear that such a device must compensate in the quadratic environment, but must ‘translate’ this into a signal for the speaker, which is placed in the linear environment in front of the eardrum.
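As a purely illustrative sketch of the bookkeeping this implies (the loss values below are invented example numbers, and this is only one possible reading of the design, not a worked-out device):

    # Audiogram measured at the standard series corresponds, in this paradigm,
    # to basilar membrane stimulation at the doubled frequencies.
    audiogram_freqs_hz = [125, 250, 500, 1000, 2000, 4000, 8000]
    bm_freqs_hz = [2 * f for f in audiogram_freqs_hz]           # doubled series on the basilar membrane
    measured_loss_db = [40, 40, 40, 40, 40, 70, 70]             # example audiogram values (assumed)

    # The compensation therefore belongs to the quadratic (sound energy) domain at the doubled frequency,
    # and must afterwards be translated back into a linear pressure signal for the speaker.
    for f_audio, f_bm, loss in zip(audiogram_freqs_hz, bm_freqs_hz, measured_loss_db):
        print(f"audiogram {f_audio:5d} Hz  ->  energy-domain compensation of {loss} dB at {f_bm:5d} Hz on the BM")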
Of course extra facilities can be built in as well, like an automatic gain control, frequency spectrum reduction etc.
B) Another application exists in CI (cochlear implant) hearing equipment.
In the current hearing theory it is hypothesized that combination tones, missing pitches and other nonlinear processes are generated in the brain. Nevertheless CI users complain about the disappointing perception of music. Also for speech recognition they observe and report that the lower frequencies are missing.
In our hearing paradigm it is clear that combination tones, missing pitches and a lot of other nonlinear phenomena are evoked by the hydro-dynamical process in front of the basilar membrane. With the result that the electrical signal in the organ of Corti is proportional to the sound energy frequency signal.
So if we want to mimic that signal for CI users, we need to do exactly what the ear is doing.
My final remark so far:
Of course we have to consider that the technical facilities that can be offered will always remain a surrogate for normal hearing. What is really lost in the organ of Corti we cannot replace and compensate.
But if our hearing paradigm can be proven correct by realistic experiments, adapted hearing aids for CI users, presbyacusis patients and even Ménière patients with sensory-neural hearing loss will be the ultimate solution for their fitting problems.
Relationship between sound velocity, frequency and wavelength:
this is elementary physics and is covered in any physics textbook.
Velocity = frequency times wavelength.
This relation is particularly useful: it ties the three quantities together.
I can maybe add something about the interpretation of the equation.
Interpretation of the equation expressing that the wave velocity equals the frequency multiplied by the wavelength [in common wave propagation theory in physics].
Take for example (sound):
With the travelling wave equation: the propagation velocity of the wave [in m/s] equals the frequency [in Hz] multiplied by the wavelength [in m].
This equation must be interpreted in the following way: the speed of a (sound) wave that moves through a medium isn’t dependent on its frequency and its wavelength.
The speed (of sound) - hence also the speed with which (sound) energy is transported - is a material constant and it therefore only depends on a number of properties of that medium. And the only way to change that speed is to change the properties of the medium.
Once the speed (of sound) in a medium is determined the above mentioned equation expresses the relation between the (sound) frequency and the wavelength.
The two have an inverse relationship.
Or in reverse:
In textbooks you can read: the speed of sound in fluids and solids is given by the square root of the compressibility modulus [in pascal] divided by the density [in kg/m³]. As an indication: this results in a speed of 1858 m/s for glycerin and 870 m/s for paraffin oil.
You can see that the wave propagation velocity in a medium, the acoustic vibration frequency and the corresponding wavelength have the following common basic relation: the wave propagation velocity equals the acoustic vibration frequency multiplied by the corresponding wavelength.
This relation is one of the fundamental corner stones of common wave propagation theory in physics.
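A quick numerical check of these relations; the material constants below are approximate handbook-style values, inserted here only for illustration:

    import math

    def speed_of_sound(bulk_modulus_pa, density_kg_m3):
        """Speed of sound in a fluid: square root of the compressibility (bulk) modulus over the density."""
        return math.sqrt(bulk_modulus_pa / density_kg_m3)

    c_glycerin = speed_of_sound(4.35e9, 1260.0)   # roughly 1858 m/s, matching the value quoted above
    c_water = speed_of_sound(2.2e9, 1000.0)       # roughly 1483 m/s, close to the 1500 m/s of a water-like fluid
    print(f"glycerin: {c_glycerin:.0f} m/s, water-like fluid: {c_water:.0f} m/s")

    # Once the speed is fixed by the medium, the wavelength follows from velocity = frequency x wavelength
    for f_hz in (250.0, 1000.0, 4000.0):
        print(f"{f_hz:6.0f} Hz in a water-like fluid: wavelength = {c_water / f_hz:6.2f} m")

Changing the frequency only changes the wavelength; the speed stays a property of the medium.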
For the definition of a wave you can then look in the Webster Dictionary.
Webster Dictionary Definition of a Wave.
Webster's dictionary defines a wave as "a disturbance or variation that transfers energy progressively from point to point in a medium and that may take the form of an elastic deformation or of a variation of pressure, electric or magnetic intensity, electric potential, or temperature."
Be aware that the equation expressing that the wave velocity equals the frequency multiplied by the wavelength can easily lead to a completely erroneous interpretation.
For the physics in the equation you have to be aware of, for example, the following:
Measuring the wavelength in the ‘wave’ evoked by the frequency stimulus and subsequently calculating the propagation speed of the ‘wave’ by multiplying that wavelength by the frequency has, for example, nothing to do with correct physics.
So, it is important to know: the speed of a sound wave that moves through a medium does not depend on its frequency or its wavelength. The speed is a property of the medium itself, namely the square root of the compressibility modulus divided by the density, as stated above.
We really must remind you of the fact that a mechanical vibration – and the sound stimulus is such a vibration – in a fluid, in this case the water-like perilymph, will always propagate with the speed of sound, which here typically has the value of 1500 m/s.
“This three compartment cochlear model can account for elaborate modelling of the physics of the cochlea. It is well illustrated. Bernoulli's law is applied under quasi-static conditions.”
“By resonance in the basilar membrane, i.e. the frequency-place related distributed resonance capability, the stimulus can evoke simultaneously all the frequency contributions of the sound energy signal, including exact phase relation for each contribution, which will be sent to the auditory cortex.”
“The sound pressure variations in front of the eardrum evoke movement of the perilymph fluid in the cochlea. This transfer of acoustic pressure variations to perilymph velocity means that the incoming signal is differentiated in time.”
“And subsequently, it is the velocity of the perilymph fluid that causes pressure differences on either side of the Reissner membrane and basilar membrane based on Bernoulli's law.”
“Effectively this means that the sound signal is first differentiated and subsequently squared in the human ear. The pressure differences then set the basilar membrane into motion to stimulate the auditory nerves via the organ of Corti.”
“Here Bernoulli's law is applied under quasi-static conditions which is allowed because the low viscosity and incompressibility of the perilymph fluid and the low Reynolds number during the time dependent movements guarantee the necessary laminar flow conditions.”
The latter is based on the following conditions: the scala tympani and the scala vestibuli can be regarded as a tube filled with perilymph; an incompressible fluid of low viscosity. This fluid flows back and forth periodically, along a short trajectory that is aligned with the core direction of the tube.
The Reynolds number in the fluid is far below the threshold for turbulent flow conditions; hence the flow will be laminar.
When we start from the generally valid Navier-Stokes equation for hydrodynamic behavior, it follows that the above mentioned restrictive conditions allow us, without further restrictions, to reduce the complexity of the Navier-Stokes equation to the non-stationary Bernoulli equation.
Without introducing any additional errors, the spiraled cochlear partition can be unrolled and the hydrodynamic problem to be solved can be regarded as a one-dimensional flow, hence free of rotation.
That means it can be seen as a periodic potential flow in which the fluid velocity is the gradient of a so-called velocity-potential.
The fluid velocity distribution will be a solution of Laplace’s equation for the velocity-potential.
And the solution of the non-stationary Bernoulli equation in this case results in the typical Bernoulli relation between the pressure change Δ𝑝 in the fluid and the fluid velocity 𝑣:
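The equation itself is not reproduced at this point of the text; presumably it is the standard quasi-static form of the Bernoulli relation, in which the pressure change is proportional to the square of the fluid velocity:

    \Delta p = -\tfrac{1}{2}\,\rho\, v^{2},

with ρ the density of the perilymph: the faster the perilymph moves, the lower its pressure, in proportion to the square of the velocity.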
So after this analytical approach these results should also be found in calculations for the perilymph velocity distribution and the perilymph pressure distribution.
... in scala tympani and scala vestibuli as well. Also, for both scalae the same, above-mentioned equations are valid, because together the scalae form one stream tube.
... that equal sized cross sections of both scalae would give equal pressures.
... that a balance in the size of these cross sections leads to reduced sensitivity of the cochlea for sudden fast movements of the head.
Perhaps I could share some idea for further research.
If we could make actual and correct pressure measurements in the cochlea, these could reveal whether the non-stationary Bernoulli effect is a good description of the actual physics of how the cochlea isolates frequencies along its length.
[Figure: Organ of Corti operation. Inner hair cells are the leftmost row, outer hair cells are the other three rows.]
I would consider:
I would propose to use a pitot tube, with the sensor in the side wall [B in the next figure, left side of that figure], to actually obtain correct pressure measurements in the perilymph flow tube inside the cochlea.
From knowledge of decades ago, dating back to the nineteenth century:
Ohm's law of specific acoustic energies was the first biological application of Fourier's theorem.
Actually, it was already suggested in J. Müller's Handbuch der Physiologie des Menschen, Vol. II, Hölscher, Coblenz, 1838.
At that time there was no alternative but to think in terms of energy and mechanics rather than information and neurons.
The human was considered a machine.
Our research interest concerns the verification of the correct application of physics in explanations and hypotheses about hearing related processes.
Our research gives us many indications, and confirms the suspicion, that with regard to the auditory sense we really have to go all the way back to the analysis of the sound energy, as was already suggested by Ohm long ago.
This follows from the discovery of the non-stationary Bernoulli effect inside the cochlea, actually suggested in 2010: "the incoming sound pressure stimulus is differentiated – according to the transfer from sound pressure to perilymph velocity – and squared – according to the transfer from perilymph velocity to cochlear microphonics."
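A minimal numerical illustration of that differentiate-and-square chain (a generic signal-processing sketch with an arbitrary test tone, not the hydrodynamic calculation itself): a pure tone is differentiated and then squared, and the spectrum of the result shows exactly the DC component and the doubled frequency.

    import numpy as np

    fs = 48000.0                                   # Hz, sample rate (illustrative)
    t = np.arange(0, 0.5, 1 / fs)                  # half a second of signal
    f0 = 1000.0                                    # Hz, pure tone stimulus
    pressure = np.sin(2 * np.pi * f0 * t)          # sound pressure signal

    velocity = np.gradient(pressure, 1 / fs)       # differentiation: pressure -> 'perilymph velocity'
    energy = velocity ** 2                         # squaring: velocity -> 'sound energy' signal

    spectrum = np.abs(np.fft.rfft(energy))
    freqs = np.fft.rfftfreq(len(energy), 1 / fs)
    strongest = np.sort(freqs[np.argsort(spectrum)[-2:]])
    print(f"Strongest components at {strongest} Hz (expected: 0 Hz and {2 * f0:.0f} Hz)")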
Of course, the term acoustic energy is justified until the hair cells act (by the biological particulars of the human ear).
Correspondingly, one can check by physics to what extent inherited terminology like "acoustic energy" is still appropriate for the definition of acoustic energy in which the sound pressure is squared and then the logarithm is taken.
And this raises a corresponding question for investigation:
considering that 'the term acoustic energy is justified until the hair cells act', is the fact not then completely overlooked that the true physical value of the acoustic energy is proportional to both the square of the amplitude and the square of the frequency?
After all, is that not what explains the 1/f character of pleasant music?
The differentiation step in that transfer means that all frequency contributions in the sound pressure signal with 1/f amplitude ratios give equal contributions in the sound energy signal.
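Written out for a single contribution (with c an arbitrary constant), this is a one-line consequence of the differentiation step:

    p_i(t) = \frac{c}{f_i}\,\sin(2\pi f_i t) \quad\Longrightarrow\quad \frac{dp_i}{dt} = 2\pi c\,\cos(2\pi f_i t),

so after differentiation every contribution has the same amplitude 2πc, independent of f_i, and after squaring every contribution delivers the same energy.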
1/f spectra have the unique distinction of being "scale invariant" in the sense that the energy in an interval df is proportional to df.
The 1/f spectra in fact have the property that the energy available in an interval of width df is proportional to df but not to f; that is what the "scale invariant" attribute stands for. It is not the energy but the signal amplitude that scales with 1/f.
The sound spectrum usually offered to our hearing is also like that. And it turns out that very many natural sources have a 1/f spectrum distribution.
And then it is also consistent with the fact that each frequency interval of width df [difference in Hz] at a random location in the spectrum with frequency f [value in Hz] has an energy content that depends on the value of the frequency interval df but does not depend on the frequency f itself.
I have heard it said on a number of occasions that 1/f spectra are very commonly encountered among natural signals, and one might perhaps expect the auditory system to reflect this fact in its design.
In nature it is quite normal that noises are generated by noise sources, and the noise spectral energy density – the sound energy per frequency band of constant width – is constant.
But that also means that the sound signals provided by the sources themselves consist of frequency contributions whose amplitudes are inversely proportional to their frequency f, which leads to the so-called 1/f spectrum.
If the hearing of mammals differentiates – so that each frequency contribution f in the sound pressure signal produces a perilymph velocity contribution that is a factor f larger, which neutralizes the existing 1/f factor therein – and then squares, so that the final sound energy signal anywhere on the basilar membrane no longer depends on the frequency f, then the auditory sense is adapted in the best possible way to the perception of sound sources with a 1/f spectrum.
Measurements in classrooms and offices have generally found that the spectrum of summed background sounds rolls off (declines in amplitude) according to a 1/f function, somewhat similar to pink noise.
That is because both natural sources and most musical instruments emit sounds whose generated energy has a constant spectral energy density, and so they excite 1/f spectra.
Of course the 1/f rule does not hold without limit in long-term spectra. But sound sources that sound on indefinitely do not normally exist in nature either.
So 1/f is a reasonable approximation of the spectral energy density that occurs.
And now that Bernoulli effect inside the cochlea: differentiating and squaring.
From here, we can make use of the so-called 1/f relation for sounds found in nature. By this 1/f relation, the sound pressure amplitude p0i of a pure tone in a tone complex will be reciprocal to its frequency fi . Immediately, the reason for the preference for 1/f sound contributions becomes clear: The signal strength of each stimulus contribution on the BM (basilar membrane) becomes frequency independent. Not surprisingly, this well-established 1/f quality of sounds is a phenomenon that is omni-present in nature. The mammalian auditory sense shows a perfect adaptation to such sounds.
Sometimes that 1/f character holds only globally. That makes sense to me, because nature is never exact in every detail.
But certainly: a lot of 1/f behavior is associated with natural, speech and music sounds. And one wonders whether it affects the functioning of the auditory organ. To me it now looks rather the other way around: the mammalian auditory sense shows a perfect adaptation to such 1/f sounds.
When a theory suggests that our hearing differentiates and squares, then the brain, most obviously, receives evenly distributed sound energy frequency signals. So it benefits from the fact that the noise amplitude spectrum is of a 1/f nature.
In fact the occurrence of 1/f spectra and hearing then show a logical duality. And then, considering that 'the term acoustic energy is justified until the hair cells act': yes, the true physical value of the acoustic energy is indeed proportional to both the square of the amplitude and the square of the frequency, if one considers that the mammalian cochlea differentiates and squares the incoming sound pressure signal inside the cochlea itself. In terms of physics this is totally hydrodynamic in origin, contrary to the idea that an early neural mechanism, or processing in the brain, is responsible for it. Totally hydrodynamic in origin: the non-stationary Bernoulli effect inside the cochlea alone.
So now, to take a position on the terminology of acoustic energy, and on the consequences of this for audiologic research, let us look at:
The Fletcher-Munson curve:
This curve expresses the data for the sensitivity of the human hearing sense.
It shows the hearing threshold based on the sound pressure level dB scale [dB SPL].
In an equation this quantity is expressed as:
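The equation referred to is not reproduced here; presumably it is the standard definition of the sound pressure level,

    L_{\mathrm{SPL}} = 20\,\log_{10}\!\left(\frac{p_{\mathrm{rms}}}{p_{0}}\right)\ \mathrm{dB}, \qquad p_{0} = 20\ \mu\mathrm{Pa}.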
The Fletcher-Munson curve in a graph:
This curve shows a very remarkable effect: it is only flat in the frequency region between 3 and 4 kHz, a region that lies beyond the frequency domain most important for humans.
In a transfer from the sound pressure stimulus into the sound energy stimulus, the relation between the sound pressure quantity dB(SPL) and sound energy quantity dB(SEL) can be calculated as:
The final result is given by:
Or between 'sound energy level' and 'sound pressure level':
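The equations themselves are missing at this point; a sketch of the relation implied by the surrounding text, under the assumption that the sound energy contribution scales with the square of both the pressure amplitude and the frequency, would be:

    E \propto p^{2} f^{2} \quad\Longrightarrow\quad 10\,\log_{10}\frac{E(2f)}{E(f)}\bigg|_{p\ \mathrm{fixed}} = 20\,\log_{10}2 \approx 6.02\ \mathrm{dB\ per\ octave},

which is the size of the 6 dB/octave correction applied below.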
And this means that from dB(SPL) to dB(SEL) we have to correct the Fletcher-Munson curve with a subtraction of 6 dB/octave:
In the graph, the correction with 6 dB/octave gives the sound energy sensitivity curve, and the curve now shows the sensitivity for sound energy contributions.
Conclusions:
This audiological phenomenon is explained when we follow the paradigm in which the cochlea analyzes the sound energy frequency spectrum. In terms of physics, this is totally hydrodynamic in origin.
Here we have explained:
Modification of the Fletcher-Munson curve based on dB[SPL] scale into the dB[SEL] scale by a -6 dB/octave correction.
This all together forms the basis for the appreciation of 1 / f sound compositions.
In the hearing world and the sound recording world – based on the auditory sense – one makes use of the so-called dB[SPL] (Sound Pressure Level) values.
These are then found by making use of the so-called root-mean-square (RMS) average calculation.
See:
http://en.wikipedia.org/wiki/Acoustic_pressure#Sound_pressure_level and:
http://en.wikipedia.org/wiki/Root-mean-square
And then systematically – if one does not consider a differentiating and squaring process in the cochlea – one makes use only of the Fletcher-Munson curve.
The sensitivity curve [Fletcher-Munson curve] is a curve for the normal hearing sense.
In fact, if you look at the definition of dB SPL, which stands for "decibel sound pressure level", and you want to calculate everything correctly with those dB SPL values, you can also look at the influence of the frequency on them. If you do so, you will see that with the same sound pressure the value at 2000 Hz, compared to that at 1000 Hz, is a factor of 4, i.e. 6 dB, higher than what the existing definition of dB SPL gives.
That is because the existing definition overlooks the fact that, with the same pressure amplitude, the same mass present in each volume moves twice as rapidly at 2000 Hz as at 1000 Hz.
And the energy of motion is proportional to the square of the velocity.
The dB SPL scale should therefore be corrected for ear sensitivity by subtracting 6 dB per octave.
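The arithmetic behind this can be spelled out in a few lines; the reference frequency and the list of octave frequencies are illustrative choices:

    import math

    # Same pressure amplitude at 1000 Hz and at 2000 Hz: the moving mass oscillates twice as fast,
    # and the energy of motion goes with the square of the velocity.
    energy_ratio = (2000 / 1000) ** 2
    print(f"Energy ratio per octave: {energy_ratio:.0f}x = {10 * math.log10(energy_ratio):.2f} dB")

    # Applying the stated rule: subtract 6 dB for every octave above a chosen reference frequency.
    f_ref = 1000.0
    for f in (125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0):
        correction_db = -6.0 * math.log2(f / f_ref)
        print(f"{f:6.0f} Hz: correction {correction_db:+6.1f} dB")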
And that is again connected with the Fletcher-Munson curve, or zero dB isophone, the correction curve in use in audiometry.
If you apply the 6 dB per octave correction there, you suddenly see that most of that remarkable feature of the auditory sensitivity toward rising pitch in the most sensitive region of our hearing disappears.
That shows how closely these things are mutually related.
Video 6: Modification of the Fletcher-Munson curve from the dB[SPL] scale into the dB[SEL] scale by a -6 dB/octave correction.