Monitor Audio RX6 distortion - Page 2 - AVS Forum
post #31 of 51 - 01-22-2014, 11:23 AM - Heinrich S (Thread Starter)
Quote:
Originally Posted by Heinrich S View Post

It said the OHC of the cochlea is the primary cause of distortion. So then what about the middle ear? It says from 40 to 110 dB SPL the middle ear is quite linear, and does not result in noticeable distortion at normal listening levels. How do these two things relate? Further, it says the inner ear non-linearity does produce distortion, which can be heard and measured in the ear canal. Is that the OHC?

Basically what I'm trying to say is, if you don't listen to music at 110 dB, for peaks, it's not going to cause in-ear distortion according to what they say about the middle ear.

Arnyk, could you please respond to my questions regarding the in-ear distortion?
post #32 of 51 - 01-22-2014, 11:27 AM - arnyk
Quote:
Originally Posted by Heinrich S View Post

Quote:
Originally Posted by arnyk View Post

You got that right! ;-)

So that means that if I'm mechanically limited then it would be at low frequencies. If I'm thermally limited then it would be at mids and highs only?

Mechanical limiting tends to occur near the low end of the response range of any driver. That turns out to be the bass end of a woofer, and someplace in the midrange for the tweeter.

Thermal modulation can take place anywhere in the frequency range of a driver because it tends to follow the envelope of the music and not the actual music waveform.
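
To make the envelope-following idea concrete, here is a minimal Python sketch; the first-order thermal model and every constant in it (peak power, time constant, degrees per watt) are illustrative assumptions, not measurements of any real driver:

Code:
import numpy as np

fs = 48000
t = np.arange(0, 2.0, 1 / fs)
# a 3 kHz tone whose envelope swells and fades over two seconds
signal = np.sin(2 * np.pi * 3000 * t) * np.sin(np.pi * t / 2.0) ** 2

P_peak = 20.0    # watts into the coil at full level (assumed)
tau = 0.05       # coil thermal time constant, seconds (assumed)
k = 10.0         # degrees C of coil rise per watt of average power (assumed)
alpha = 0.0039   # temperature coefficient of copper resistance, per deg C

# Heating tracks the short-term average power -- the envelope -- because
# the thermal time constant spans many cycles of the actual waveform.
inst_power = P_peak * signal ** 2
temp_rise = np.zeros_like(t)
a = 1 / (tau * fs)
for i in range(1, len(t)):
    temp_rise[i] = temp_rise[i - 1] + a * (k * inst_power[i] - temp_rise[i - 1])

# Hot coil -> higher resistance -> less current for the same drive voltage.
gain = 1 / (1 + alpha * temp_rise)
print("gain at the hottest point: %.1f dB" % (20 * np.log10(gain.min())))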
post #33 of 51 - 01-22-2014, 11:28 AM - arnyk
Quote:
Originally Posted by Heinrich S View Post

Arnyk, could you please respond to my questions regarding the in-ear distortion?

I did but you don't seem to be able to follow it.
post #34 of 51 - 01-22-2014, 11:33 AM - Heinrich S (Thread Starter)
Quote:
Originally Posted by arnyk View Post

I did but you don't seem to be able to follow it.

But I don't think you did. The paper said SPL in the middle ear is linear from 40-110 dB. So what am I missing then? If I listen to music at below 110 dB peaks, you are telling me my ears can still suffer from in-ear distortions? The article is saying something different. It mentioned the OHC being the primary cause of non-linearity in the ear, but how does that affect me with regard to the middle ear, since it's linear over such a wide range?

Unless I sit right up close to a speaker to experience 110 dB, I don't think the SPL at normal seated distances would be enough to bring on ear distortion. That's what I read out of the article. Please, by all means, correct me if I'm wrong or if I missed something.
post #35 of 51 - 01-22-2014, 12:11 PM - Heinrich S (Thread Starter)
Quote:
Originally Posted by arnyk 
Tweeter voice coils are tiny and yet they may have to dissipate a lot of heat for their size. They have to be made out of copper or aluminum or something like them, and these materials experience very significant changes in resistance due to normal heating and cooling as they operate. Apply a lot of power to a tweeter and its voice coil resistance might double, so less current than expected is able to flow through it even though more voltage is applied to it. Voila! Thermal compression.

I assumed thermal compression was related to amplifier clipping.
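
Rough numbers for the voice-coil mechanism quoted above, as a minimal Python sketch; the 6 ohm resistance, the 20 V drive, and the temperature steps are illustrative assumptions, with only the copper temperature coefficient being a standard figure:

Code:
alpha = 0.0039   # temperature coefficient of copper resistance, per deg C
R_cold = 6.0     # nominal tweeter voice-coil resistance, ohms (assumed)
V = 20.0         # drive voltage held constant by the amplifier (assumed)

for temp_rise in (0, 100, 200, 256):
    R_hot = R_cold * (1 + alpha * temp_rise)   # hot coil, higher resistance
    P = V ** 2 / R_hot                         # less current, less power
    print("dT = %3d C   R = %5.2f ohm   P = %5.1f W" % (temp_rise, R_hot, P))
# At roughly +256 C the resistance has doubled and the delivered power has
# halved (-3 dB) with no amplifier clipping involved: thermal compression.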
post #36 of 51 - 01-22-2014, 10:12 PM - Heinrich S (Thread Starter)
Quote:
Originally Posted by Heinrich S View Post

But I don't think you did. The paper said SPL in the middle ear is linear from 40-110 dB. So what am I missing then? [...]
Quote:
Originally Posted by arnyk View Post

I did but you don't seem to be able to follow it.

???
post #37 of 51 - 01-23-2014, 07:42 AM - arnyk
Quote:
Originally Posted by Heinrich S View Post

Quote:
Originally Posted by arnyk View Post

I did but you don't seem to be able to follow it.

But I don't think you did. The paper said SPL in the middle ear is linear from 40-110 dB. So what am I missing then?

The ear is composed of the outer ear, the middle ear, and the inner ear. For hearing to be linear, they all have to be linear. The article said that the outer hair cells in the inner ear were nonlinear.

This web page plays a number of pairs of tones which the ear, through its own nonlinear distortion, can intermodulate to generate additional tones: http://www.phy.davidson.edu/fachome/dmb/wmpviz/beats/beats.htm
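
The same effect can be sketched in a few lines of Python; the polynomial below is a toy memoryless nonlinearity standing in for the ear's, not a cochlear model:

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # one second of signal
f1, f2 = 1000.0, 1200.0
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# toy memoryless distortion; the 0.2/0.1 coefficients are arbitrary
y = x + 0.2 * x ** 2 + 0.1 * x ** 3

spectrum = np.abs(np.fft.rfft(y)) / len(y)
for f in (f2 - f1, 2 * f1 - f2, f1, f2):    # difference tones, then inputs
    b = int(round(f))                       # 1 s capture -> 1 Hz per bin
    print("%6.0f Hz: %6.1f dB" % (f, 20 * np.log10(spectrum[b] + 1e-12)))
# The 200 Hz and 800 Hz lines exist in neither input tone; the nonlinearity
# created them, which is what the beats demo lets you hear.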
post #38 of 51 - 01-23-2014, 11:06 AM - amirm (AVS Addicted Member)
Quote:
Originally Posted by Heinrich S View Post

But I don't think you did. The paper said SPL in the middle ear is linear from 40-110 dB. So what am I missing then? If I listen to music at below 110 dB peaks, you are telling me my ears can still suffer from in-ear distortions? The article is saying something different. It mentioned the OHC being the primary cause of non-linearity in the ear, but how does that affect me with regard to the middle ear, since it's linear over such a wide range?

Unless I sit right up close to a speaker to experience 110 dB, I don't think the SPL at normal seated distances would be enough to bring on ear distortion. That's what I read out of the article. Please, by all means, correct me if I'm wrong or if I missed something.
Arny has you on a wild goose chase, and your precision questioning is dragging him further into it; he doesn't know how to dig his way out of it either :). The short answer is that this topic has nothing to do with the question you asked. Here is the long answer:

We are in elementary school here, trying to solve calculus problems. The science of how the ear works is extremely complex and under constant research. While there is a consensus view on some points, we still don't have a full model of how the ear works, and where we do, not everyone agrees.

Anyway, the way the ear works is that it has three parts. The outer ear gathers the sound and directs it into your ear. This part of the auditory system actually includes your face, torso, etc. These play a large role in how we hear frequencies above a few hundred hertz, and invalidate a lot of our common sense about room acoustics in the process. But that's for another thread. The inner ear is responsible for converting the sound pressure into electrical signals which the nerves then pick up and transmit to your brain for analysis. The middle ear is an impedance-matching system: it takes the sound pressure carried by the air in the ear canal and matches it to the fluid pressure used by the inner ear. I am grossly oversimplifying things here to get the message across.

The outer hair cells (OHCs) are the solution to a puzzle. The actual detection of sound pressure happens at the inner hair cells (IHCs). But we have a problem: that mechanism has a dynamic range of about 60 dB. This is computed using the maximum amount of movement possible at the high end, and the minimum floor set by thermal noise and the Brownian motion of air molecules. But we can hear 120 dB worth of dynamic range. Where do we find the other 60 dB? The OHCs are the answer. How they do that is disputed (as a matter of an electronic model). But at a high level, the OHCs, which by the way are hair cells connected at both ends, are able, through a process called polarization, to stiffen or loosen. This in turn changes the sensitivity of the IHCs. The polarization gets its directive from the voltage at the nerve cells of the IHCs. In other words, we have a feedback loop controlling a mechanical amplifier created by the OHCs.

Since the OHCs change their "gain" (volume) based on how strong the stimulus (sound) is and its frequency, this is no linear amplifier. It is an amplifier that puts out a different output depending on how loud the input is (i.e., it acts like a compressor) and what it contains. By definition then we have a non-linear system, and hence the point the author made in the article you linked to.

The actual mechanisms here are covered in graduate-level classes, and even there you are only going to get a taste of it. Understanding the research and mechanisms requires knowledge of electronics, acoustics, and, of course, medicine. So I have no expectation that you followed anything I said :).

So let me show you the real point, which we have known for some 70 years. Here are the Fletcher-Munson curves:

[Image: Fletcher-Munson equal-loudness contours]

These are curves that show equal loudness for a specific SPL vs. frequency. As we look at the bottom curve we see very high sensitivity in the middle, around 1 to 5 kHz, which is where the human voice sits (the most important thing in our lives to understand). That is the lowest level that we can detect. We see that the system does not have a flat frequency response, nothing remotely like the flat line of an electronic amplifier. The sensitivity in the mid frequencies is hugely higher than at low and high frequencies. In that region, though, the ear outperforms many microphones, so the statement Arny made that the ear sucks compared to a microphone is not correct as a flat statement.

Now look at the series of curves in that graph as you go up. Those are lines for increasing loudness levels. We see that the curve shape changes. This is the compressor in action. As the sound gets louder and louder, we no longer need the extra amplification for the mid frequencies. Again, your normal electronic amplifier doesn't have this characteristic. Its response (until its limit) does not change as the input signal rises. So we have our non-linearity in plain view now. If a signal rises from a low to a high level, the amount of signal transmitted to the brain varies and does not follow a 1:1 relationship. This causes "distortion."
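
That level-dependent gain can be sketched as a toy compressor in Python; the 60 dB knee and the 2:1 ratio are illustrative assumptions, not measured cochlear values:

Code:
def ear_like_compressor(level_in_db, knee_db=60.0, ratio=2.0):
    # linear up to the knee, then compressed: gain shrinks as level rises
    if level_in_db <= knee_db:
        return float(level_in_db)
    return knee_db + (level_in_db - knee_db) / ratio

for level_in in (20, 40, 60, 80, 100, 120):
    print("in %3d dB -> out %5.1f dB" % (level_in, ear_like_compressor(level_in)))
# 120 dB of input range is squeezed into 90 dB of output range here; the
# real cochlea squeezes roughly 120 dB into the IHCs' roughly 60 dB window.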

So why is all this a wild goose chase? Because what we listen to is created by another human, not a computer. That human had similar curves to ours. The distortion that is created by the ear also existed when the talent created the sound. We don't hear that loud kick drum as a microphone and measurement system would. We hear it as a human, and so does the person who created that sound/instrument. It is for this reason that no speaker, for example, tries to follow the inverse of these curves so as to give you a flat response. Of course we don't all hear exactly alike, but such is life with audio reproduction. We are not trying to build a copy machine -- we can't for audio given the system architecture that we have. Instead, we are trying to listen to art -- an interpretation that is pleasing to us. Distortions and all.

If your system characteristic is changing as you increase the volume, it is a system problem, not your ears. If it were your ears, we could never play anything loud and have it be pleasing.

Amir
Retired Technology Insider
Founder, Madrona Digital
"Insist on Quality Engineering"
post #39 of 51 - 01-23-2014, 04:14 PM - amirm (AVS Addicted Member)
Quote:
Originally Posted by bralas View Post

http://www.adx.co.nz/techinfo/audio/note128.pdf

Rane examines the subject of power amp clipping and amplitude compression, and argues that 'tweeter' damage is not a result of amplifier clipping (high-frequency) products.
Thanks for posting that. Rane white papers are usually a good read and this one was no exception. The author does the best job of explaining subjectively what happens when an amp runs out of horsepower. His theory of why tweeters get damaged sounds plausible at first, but I can't understand his proof. An amplifier works in the time domain, not the frequency domain. High frequencies ride on top of the low frequency swings. Should the low frequencies clip, the same thing will happen to the high frequencies as their tops get chopped off. I don't see why he thinks that the tweeter signal keeps getting higher and higher while the bass stays the same, with just the one amp powering both. Are you able to figure out his logic and explain what I am missing? Thanks in advance :).

Amir
Retired Technology Insider
Founder, Madrona Digital
"Insist on Quality Engineering"
post #40 of 51 - 01-23-2014, 06:05 PM - bralas

Greetings Amir,

I had to chew on this for a while myself. In fact, I am certain you will be able to further my perception on this as well.

Not sure if this is relevant, but an o-scope (something I use regularly) is a powerful tool that can also mislead: generate a really fast on/off pulse or rise/fall time and, spectrum-wise, the result is near-infinite harmonic products. These products are not displayed on the scope; we only see a nice-looking square waveform. I believe the Rane article is describing how nominal audio and "power" distribution is bundled about the lower frequencies, which is why the long-term power dissipation of the tweeter is much smaller when compared to long-term LF power dissipation.

Logic: the woofer is mechanically working relatively more, much more. The majority of the audio information seems to reside in the lower frequency spectrum. I believe this could correlate with the fact that the human voice frequencies reside there, suggesting evolution defines where most of the audio information lives. Finally, as the amplifier compresses, the HF has headroom to actually rise, while the LF can only clip?
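
One way to sanity-check the power-distribution claim is a quick Python sketch using pink noise as a stand-in for typical program material; the 1/f spectrum and the 3 kHz crossover are both assumptions:

Code:
import numpy as np

f = np.arange(20, 20000)   # audio band, 1 Hz steps
pink_psd = 1.0 / f         # pink noise: power per Hz falls as 1/f

total = pink_psd.sum()
woofer_share = pink_psd[f < 3000].sum() / total
print("below 3 kHz: %4.1f%% of the power" % (100 * woofer_share))
print("above 3 kHz: %4.1f%% of the power" % (100 * (1 - woofer_share)))
# Most of the long-term power lands on the woofer side here, matching the
# long-term-dissipation argument in the post above.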

post #41 of 51 - 01-23-2014, 07:03 PM - amirm (AVS Addicted Member)
Quote:
Originally Posted by bralas View Post

Greetings Amir,

[...] Finally, as the amplifier compresses, the HF has headroom to actually rise, while the LF can only clip?
Greetings back to you, Bralas :). Your explanation matches his. So you are correct, but I just can't make the analysis work.

Here is a simulation I just did in my audio workstation software. I combined two sine waves: one low frequency and one high. The former represents the signal that the woofer would get post crossover, and the higher one, what the tweeter sees, again post crossover. Here it is in "scope" or time view. On purpose I set the levels so that there is no clipping yet:

[Image: time-domain view of the combined low and high frequency tones]

As we see, the high frequency one is riding the much higher amplitude low frequency one (the wiggles on the big waveform). Confirming we have what we think we have, here is the spectrum:

[Image: spectrum of the unclipped two-tone signal]

I put an arrow on the high frequency sine wave. We see two nice spikes confirming what we created. The "cursor" values are shown for the high frequency component at the bottom, at about -34 dB.

Now let's amplify the waveform so as to cause it to clip. I am actually not pushing it to the limit of getting a square wave as the author had done, but enough to cause good clipping. Here it is in the time domain:

[Image: time-domain view of the clipped signal]

You now see the problem I have with his logic. The large waveform that comprised the low frequency is now indeed chopped off. But in the process we also chopped off the high frequency one that was riding on top of it. Notice the flat section is pretty much flat and doesn't carry high frequency components any larger than the low frequency one allows. Let's look at the spectrum:

[Image: spectrum of the clipped signal, tweeter component circled in red]

The waveform inside the ellipse in red is our former high frequency/tweeter component. We see that its level has not changed even though we amplified the whole signal enough to cause the waveform to badly clip. So his theory that the high frequency signal amplitude keeps going higher and higher does not hold. Once the signal clipped, the high frequency component got stopped in its tracks just the same. The amplifier would have run out of voltage supply and can't accommodate its additional swings above and beyond the low frequency one.

As expected we now have a ton of new distortion products, with most of them landing in the tweeter spectrum. If we sum the power of all of these additional components, they add up to more than our original single tweeter tone (again, in red). Add the signal and the new distortion products together and clearly we have much more power being pumped into the tweeter. That will cause it to work harder and potentially get damaged. Net net: what he says is not happening, is; and what he says should be happening, is not :).

If he is right, I must be missing something in this analysis. Do you see a hole?
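
For anyone who wants to repeat the experiment without audio workstation software, here is a minimal numpy re-run; the tone frequencies and levels are my assumptions, since the exact settings are not stated above:

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
lf = 0.9 * np.sin(2 * np.pi * 500 * t)     # the "woofer" tone
hf = 0.02 * np.sin(2 * np.pi * 8000 * t)   # the "tweeter" tone, about -34 dB
x = lf + hf

clipped = np.clip(3.0 * x, -1.0, 1.0)      # amplify into hard clipping

def hf_band_share_db(sig, f_lo=6000.0):
    # fraction of total power above f_lo, in dB
    s = np.abs(np.fft.rfft(sig)) ** 2
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    return 10 * np.log10(s[f >= f_lo].sum() / s.sum())

print("HF-band share before clipping: %6.1f dB" % hf_band_share_db(x))
print("HF-band share after  clipping: %6.1f dB" % hf_band_share_db(clipped))
# After clipping, the share of power above the crossover rises sharply:
# the new harmonics of the 500 Hz tone land squarely in the tweeter's band,
# even though the original 8 kHz component itself got no bigger.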

Amir
Retired Technology Insider
Founder, Madrona Digital
"Insist on Quality Engineering"
post #42 of 51 - 01-23-2014, 09:20 PM - Heinrich S (Thread Starter)
Quote:
Originally Posted by amirm View Post

Arny has you on a wild goose chase. [...] If your system characteristic is changing as you increase the volume, it is a system problem, not your ears. If it were your ears, we could never play anything loud and have it be pleasing.

Thanks for the reply Amirm! I think the more likely possibility is that I experienced loudspeaker compression. But my question is: is it possible, however unlikely, that at very high volumes (over 100 dB) I experienced compression in the ear at low frequencies, or is that not possible at all?
post #43 of 51 - 01-23-2014, 10:46 PM - amirm (AVS Addicted Member)
Quote:
Originally Posted by Heinrich S View Post

Thanks for the reply Amirm! I think the more likely possibility is that I experienced loudspeaker compression. But my question is: is it possible, however unlikely, that at very high volumes (over 100 dB) I experienced compression in the ear at low frequencies, or is that not possible at all?
I don't think that is a factor at all. My sense is that you are running out of amplification power. 75 watts is not much at all for high volume listening. The amplifier has a dynamic protection circuit and will current-limit at peak power. The current limiting will sound like weak bass. Indeed, this is my recommended test for whether you have enough amplifier power or not. Close your eyes and keep increasing the volume with a bass-heavy track. Remember the point at which you lose grip, as you say. Open your eyes and record the volume setting at which you hear the bass weaken. Lower the volume and start over again with your eyes closed. Once more, record the volume setting. If you manage to get consistent values for the volume within a few dB, then you are running out of amplification power.

I have done the exact test above with four amps of increasing power rating at my disposal. Stepping up to the next power level and repeating the test clearly showed the limit going up. In that test, it took a 400 watt/ch amplifier to get completely linear performance with no limiting. This was in a very large space, though, so you don't necessarily need this much power. I have also taken bookshelf speakers, driven them with a modest amp like yours, and then turbocharged them with a 300 watt/ch amp :eek: :D. The fidelity was definitely improved at higher volumes. Of course, I was careful not to blow the woofer out of the enclosure. :D

Bottom line: to get to the next step of the analysis, you need a more powerful amplifier to compare. I don't know any other easy way to get the answer. As I have explained in other threads, speaker sensitivity ratings are marketing numbers, so trying to do the paper math will not lead to proper conclusions.
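
For reference, this is the kind of paper math being warned about, in a short Python sketch; the sensitivity, power, and distance figures are assumed spec-sheet style values:

Code:
import math

sensitivity = 88.0    # dB SPL at 1 W / 1 m, from a spec sheet (assumed value)
watts = 75.0          # amplifier rating (assumed to be fully usable)
distance_m = 3.0      # listening distance (assumed)

peak_spl = sensitivity + 10 * math.log10(watts) - 20 * math.log10(distance_m)
print("theoretical peak: %.1f dB SPL" % peak_spl)   # about 97 dB here
# Looks comfortable on paper, but impedance dips, thermal compression, and
# optimistic sensitivity specs are exactly why the listening test above is
# more trustworthy than this number.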

Amir
Retired Technology Insider
Founder, Madrona Digital
"Insist on Quality Engineering"
post #44 of 51 - 01-24-2014, 02:28 AM - bralas
Quote:
Originally Posted by amirm View Post

If he is right, I must be missing something in this analysis. Do you see a hole?

I may (literally).

Note how time is integrated for the waveforms. As the low freq info nulls, the hi freq indeed now has "voltage" headroom. The signal is no longer clipped, and the lower freq is no longer manifest. So indeed the author is correct in my book. Your plots are snapshots taken when both LF & HF are occurring :cool:

Edit: I believe this could be reproduced on an o-scope by selecting the right timebase and a single trigger event... ta da!

post #45 of 51 - 01-24-2014, 04:53 AM - arnyk
Quote:
Originally Posted by amirm View Post


Here is a simulation I just did in my audio workstation software. I combined two sine waves: one low frequency and one high. The former represents the signal that the woofer would get post crossover, and the higher one, what the tweeter sees, again post crossover. Here it is in "scope" or time view. On purpose I set the levels so that there is no clipping yet:

[Image: time-domain view of the combined low and high frequency tones]

As we see, the high frequency one is riding the much higher amplitude low frequency one (the wiggles on the big waveform). Confirming we have what we think we have, here is the spectrum:

[Image: spectrum of the unclipped two-tone signal]

I put an arrow on the high frequency sine wave. We see two nice spikes confirming what we created. The "cursor" values are shown for the high frequency component at the bottom, at about -34 dB.

Now let's amplify the waveform so as to cause it to clip. I am actually not pushing it to the limit of getting a square wave as the author had done, but enough to cause good clipping. Here it is in the time domain:

[Image: time-domain view of the clipped signal]

You now see the problem I have with his logic. The large waveform that comprised the low frequency is now indeed chopped off.

First major error in logic. The low frequency waveform is not chopped off; it is subjected to waveform distortion. The phrase "chopped off" has a general meaning to native speakers of American English: the object that is chopped off is disconnected and removed. If someone has their hand chopped off, up until recently it was totally separated from the body, never to be a part of it again. But the low frequency tone remains, and is clearly seen in the "after" spectral analysis shown below.
Quote:
But in the process we also chopped off the high frequency one that was riding on top of it.

Second major error in logic, just like the first and similarly disproven by Amir's own spectral analysis:
Quote:
Let's look at the spectrum:

[Image: spectrum of the clipped signal, tweeter component circled in red]

The waveform inside the ellipse in red is our former high frequency/tweeter component. We see that its level has not changed even though we amplified the whole signal enough to cause the waveform to badly clip.

The reasonable question is not whether or not the low or high frequency waves are amplified by clipping - that no such thing happened is obvious. In fact both waves have lost some energy due to the clipping process.

I had to rerun the experiment above because it is made of whole cloth - the after waves were rescaled and therefore they don't show the proper results.

Here is the spectrum of the before wave to confirm that the correct experiment started out with the same very nonstandard wave:

[Image: spectrum of the "before" wave]
And here is the spectrum of the after wave, properly scaled, showing the actual loss of energy:

[Image: spectrum of the "after" wave, properly scaled]
The question is whether or not their amplitude can be increased if the volume is turned up further, and it can.
Quote:
If he is right, I must be missing something in this analysis. Do you see a hole?

The hole in the logic above is quite clear. Interestingly enough it appears that the error started with a misconception of the meaning of common words and phrases in the American English language.

What I take away is yet another example of how a problem that is well described is already partially solved, and the solution to a problem that is mistakenly described may be already hopelessly bungled.

Similar errors afflict the subsequent discussion of distortion in the ear. I provided listening tests that clearly demonstrate audible nonlinear distortion in the ear. They are what they are and cannot be dismissed without making an error.

This is one reason why technology is composed of both theory and practice. Theory can be totally in error and still look good on paper.

Practice must be used to confirm theory to avoid the possibility of mental errors. Actual practice has no mind, no conscience, no limits, no biases - it does what it does without reference to anybody's wishes or dreams.

The idea that the ear has built-in nonlinear distortion is well supported by current good science - there is a discussion of it in Zwicker and Fastl, for example. Anybody who has actually read and actually understands Z&F should know this. Read here:

http://link.springer.com/chapter/10.1007%2F978-3-540-68888-4_14#page-1
post #46 of 51 - 01-24-2014, 05:20 AM - bralas
Quote:
Originally Posted by arnyk View Post


First major error in logic. The low frequency waveform is not chopped off, it is subjected to waveform distortion.

Please analyze my reply above.

Thanks to Amir, my interpretation has been enhanced. Any doubt the LF 'peaks' are compromised?

post #47 of 51 - 01-24-2014, 05:54 AM - arnyk
Quote:
Originally Posted by bralas View Post

Quote:
Originally Posted by arnyk View Post

First major error in logic. The low frequency waveform is not chopped off, it is subjected to waveform distortion.
Please analyze my reply above:
Quote:
Note how time is integrated for the waveforms. As the low freq info nulls the hi freq indeed now has "voltage" headroom. The signal is no longer clipped, the lower freq no longer manifest. So indeed the author is correct in my book. Your plots are snapshot when both LF & HF are occurring:cool:

I agree with the point you are trying to make, but disagree with the following phrase:

"the lower freq no longer manifest"

As nonlinear transform theory predicts and as Amir's own evidence shows, the lower frequency continues to be present, only at a slightly lower level.
Quote:
Thanks to Amir, my interpretation has been enhanced. Any doubt the LF 'peaks' are compromised?

Of course both the LF and HF waves were compromised! Was there ever any doubt? ;-)

The point is that both signals remain, admittedly rather highly compromised by the large amount of clipping shown in the example. I'd estimate the clipping at about 50% of a non-peak signal, since real music has a far higher crest factor than the simplistic example involving two nonstandard pure tones. The example might have made more sense if it involved the SMPTE standard test signal (60 Hz & 7 kHz, mixed 4:1).

I would hope that no audiophile would play his audio system this highly distorted!

So we have an artificial and nonstandard example, sloppily prepared, mistakenly analyzed, and two sincere and insightful attempts to correct it.

I hope nobody remains confused by the mistaken conclusion that there are errors in the Rane paper on clipping. Similarly, nobody should be misled by the false claim that Zwicker and Fastl think that the ear has no nonlinear distortion.
post #48 of 51 - 01-24-2014, 12:29 PM - bralas

If you look back at the waveform diagram, you should note how the voltage is sampled over a relatively long period. The low freq's beginning and end are captured, followed in time by the remaining high freq. The low freq had decayed (to zero amplitude), thus leaving headroom for the high freq power to rise. To do this on an o-scope is advanced; the typical auto trigger will not work. One would have to trigger manually and observe as the trace strobes over the correct timebase. Just like a long time exposure for film, if you will?

post #49 of 51 - 01-24-2014, 01:15 PM - arnyk
Quote:
Originally Posted by bralas View Post

If you look back at the waveform diagram, you should note how the voltage is sampled over a relatively long period. The low freq's beginning and end are captured, followed in time by the remaining high freq. The low freq had decayed (to zero amplitude), thus leaving headroom for the high freq power to rise. To do this on an o-scope is advanced; the typical auto trigger will not work. One would have to trigger manually and observe as the trace strobes over the correct timebase. Just like a long time exposure for film, if you will?

You've lost me. This is what the .wav file looks like, properly sampled:

[Image: waveform of the full .wav file]
Both Amir (per his statement, above) and I are doing our work in audio editors, which gives us a lot of flexibility and makes the concept of triggering fairly irrelevant. We just position a sliding window over the waveform to suit our needs.
post #50 of 51 - 01-25-2014, 05:12 AM - bralas

I'm attempting to highlight that your "snapshot" represents a small sample in time compared to the Rane technical note #128, which is "recording" a much longer (100 millisecond) waveform. The low freq component, just like in real content, is an impulse/burst: significant, concentrated, yet momentary energy. Between these events are low freq nulls which leave ~100% of the voltage headroom. Rane is correct in my book: when one observes complex audio over time, there is indeed opportunity for the high freq spectrum energy to rise over the relative long term. Low freq and high freq energy magnitudes (voltage) merge, meaning as we keep turning up the volume, only the upper freq spectrum has room to increase.

post #51 of 51 - 01-25-2014, 05:41 AM - arnyk
Quote:
Originally Posted by bralas View Post

I'm attempting to highlight that your "snapshot" represents a small sample in time compared to the Rane technical note #128, which is "recording" a much longer (100 millisecond) waveform. The low freq component, just like in real content, is an impulse/burst: significant, concentrated, yet momentary energy. Between these events are low freq nulls which leave ~100% of the voltage headroom. Rane is correct in my book: when one observes complex audio over time, there is indeed opportunity for the high freq spectrum energy to rise over the relative long term. Low freq and high freq energy magnitudes (voltage) merge, meaning as we keep turning up the volume, only the upper freq spectrum has room to increase.

Given that the frequency of Amir's so-called low frequency waveform is about 500 Hz, all showing more data does is show more repetitions of the same thing and obscure details, but here is a 100 ms picture:

[Image: 100 ms view of the clipped two-tone waveform]

Focusing in on a 10 ms part of it to reduce the repetitive information:

[Image: 10 ms zoom of the clipped waveform]

And finally high-passing with a 6 kHz filter to show what happens to the high frequency part of the wave:

[Image: the clipped waveform high-passed at 6 kHz]
The last picture clearly shows that when there is clipping, the high frequency component of the wave is modulated but not obliterated by the low frequency wave.
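
That last step can be approximated in a few lines of Python; the tone choices and the 6 kHz cutoff are assumptions chosen to mirror the pictures, not the actual files used:

Code:
import numpy as np

fs = 48000
t = np.arange(int(0.1 * fs)) / fs                 # 100 ms, as in the post
x = 0.9 * np.sin(2 * np.pi * 500 * t) + 0.05 * np.sin(2 * np.pi * 8000 * t)
clipped = np.clip(3.0 * x, -1.0, 1.0)             # amplify into hard clipping

# crude brick-wall high-pass at 6 kHz via FFT
spec = np.fft.rfft(clipped)
freqs = np.fft.rfftfreq(len(clipped), 1 / fs)
spec[freqs < 6000] = 0
hf_only = np.fft.irfft(spec, n=len(clipped))

# short-term RMS of the high-passed residue in 2 ms windows
w = int(0.002 * fs)
rms = [np.sqrt(np.mean(hf_only[i:i + w] ** 2))
       for i in range(0, len(hf_only) - w, w)]
print("HF short-term RMS ranges %.4f to %.4f: modulated, not obliterated"
      % (min(rms), max(rms)))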