digital room correction done less than lazy - AVS Forum
post #1 of 15 Old 10-10-2012, 01:37 AM - Thread Starter
 
anwaypasible's Avatar
 
Join Date: Aug 2010
Location: illinois
Posts: 391
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 11
i plan on getting this job done for locals by placing an ad in the newspaper, but i only plan on traveling in a 30-40 mile radius.

since i cant physically be there for everybody (and those that i can be there for might already have a calibrated microphone) i figure i will share with you, and maybe together we can heighten the experience of movie watching together.


first of all let me point out the importance of getting your microphone calibrated, because i too used a microphone that was an analyzer mic but wasnt calibrated.. the difference after i got it back from calibration was remarkable to say the least.
(it allows your tool and your time to be less than a waste, because without the calibration.. the results are worthless, and sometimes the final result is worse than if you would have simply left it alone.)



i recommend putting each front speaker in the corner of the room to allow the room to fill up with sound.
because when the front speakers are placed closer together, physics doesnt allow the gap between the speaker and the corner to fill up with audio.. and the only audio that does get in there is the sloppy reflections.

to put it into perspective, air is a gas that moves kinda like water.
the only way to fill the room 'solid' is to place the speaker in a position that gives a fair chance to grip the water.

why is it even being brought up as important?
because the audio effects rely on the reflections of the walls to create the virtual speakers.
it is true, there are times when a speaker doesnt need a reflection to create a virtual speaker.. but this is only valid with lots of speaker cone movement.


here is a kindergarten example..
you throw out a sound that is positive phase, then throw out a negative phase of the same sound at twice the speed of the positive phase.
when those two pieces of air combine, they should cancel out.
but
what if the negative phase signal wasnt exactly opposite of the positive signal?
well that means the soundwave's amplitude will be almost cancelled out, but the audible energy will still be there at a lower perceived volume.
..what if you increase the gap between the two phases again?
the perceived volume will be louder again.
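setting aside the 'twice the speed' wording, the cancellation part of the example can be checked numerically: a perfect negative-phase copy cancels completely, while a slightly-wrong one leaves a quieter residual. a minimal numpy sketch (the tone frequency and the error amounts are arbitrary assumptions, not anything from the thread):

```python
import numpy as np

fs = 48000                              # sample rate (Hz), assumed
t = np.arange(fs) / fs                  # one second of samples
tone = np.sin(2 * np.pi * 100 * t)      # positive-phase 100 Hz tone

# a perfect negative-phase copy cancels the wave completely
perfect = tone + (-tone)

# an imperfect inverse (5% amplitude error, small phase error) does not..
# the amplitude is almost cancelled, but a quieter residual remains
imperfect = tone + (-0.95 * np.sin(2 * np.pi * 100 * t + 0.05))

print(np.max(np.abs(perfect)))    # 0.0
print(np.max(np.abs(imperfect)))  # roughly 0.07 -- much quieter, not silent
```

widening the error (bigger amplitude or phase gap) raises the residual again, which matches the 'louder again' observation above.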

so how does a virtual speaker work?
the above 'dimming' of the audible signal takes place, then one of two things happens:
1. the same speaker throws out another signal after some time has passed with a collision course for the first signal.. and when those two signals meet, the sound is heard.
2. the other speaker throws out the second signal with a collision course for the first signal.

see.. if the speaker cone was going to throw out the signal and try to smack the first signal in the rear, then that signal would need to travel with more speaker cone movement to catch up to the first one.
but thanks to diffusion audio engineers know the soundwaves are causing ripples in the water, and they also know those ripples will spread out, and they also know if those ripples hit a wall.. then those ripples will spread out farther.

if you had an empty room that was square, and you removed the ceiling as well as the floor.. then placed a single speaker in a hole in the wall.. that speaker could play a tone and the ripples in the water would go on and on with the same pattern (including the ripples bouncing off the wall).
looking at that, a sound engineer can visualize the lines of the ripples as they move across the air and bump into eachother.
so for the virtual speaker to work, all the audio engineer has to do is decide what two lines bump into eachother for the virtual speaker to be at that location .. and then calculate how long the second signal must wait to get the two lines of ripple to line up with eachother.
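whatever one makes of the ripple picture, the timing arithmetic in that last step is plain distance-over-speed-of-sound. a minimal sketch, where both path lengths are made-up example figures:

```python
C = 343.0  # speed of sound in air, m/s (approx, room temperature)

d_first = 4.2   # metres the first signal travels to the meeting point (assumed)
d_second = 2.6  # metres the second signal travels to the same point (assumed)

# hold the second signal back by the travel-time difference so both
# wavefronts arrive at the chosen meeting point at the same instant
wait_ms = (d_first - d_second) / C * 1000.0
print(f"second signal waits {wait_ms:.2f} ms")  # ~4.66 ms
```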

there is a choice.
because crashing a soundwave into eachother with a head-on collision is the most violent, choosing a line of ripple that crashes head-on can sometimes produce the loudest effect.
but
sometimes it wont because the soundwave had to bounce _____ number of times before it was properly aligned head-on .. and by then the soundwave has lost too much energy compared to using a line of ripple that is more sideways (known as a T collision).

for proof of validity, go here and read:
http://en.wikipedia.org/wiki/Virtual_surround


virtual speakers dont trick the brain using phase and time variables, the sound waves literally collide in the air at the location heard.
the only way to 'trick' the brain is to place a microphone inside an ear of a virtual dummy head and record the impulse response from behind the ear.
the problem with this is, it only works when the shape of your ear is the same (or close to identical) as the dummy head because the reflections that bounce off the skin (as well as the soundwaves that are blocked) is exactly what was recorded with the impulse response.
there is a whole bunch of timing and phase information recorded in the impulse response because of the reflections that bounce off the skin and the soundwaves that are blocked, and it is that exact information that your brain has been listening to since when you first began listening.
you cant listen through another persons ears and be able to locate a sound behind you because the timing and phase information doesnt match what your brain is experienced with.
(if they are somewhat the same shape, you will know if it is in front or behind you.. but you wont have any idea about the degree of angle the sound is ... and that means you wont know if it is 54 degrees behind you or 80 degrees)

it can be done by taking the most basic averaged shape of the ear and recording the impulse responses.
you then get direction about to the side or behind you, but the result wont be accurate enough to scare the living daylights out of you (also known as 'made you look' ).



with that said..
getting those ripples to fill the room equally is very important.
placing the front speakers next to the television stand far away from the wall doesnt give those ripples a fair chance to exist.
it means if one speaker does the job and relies on a reflection, the sound arrives late.. and sometimes the sound can be heard more than once in the gap between the speaker and the corner of the room.
(it isnt loud, but it is just as bad as the room ringing .. and we all know the room ringing doesnt give the details and character of the audio a silent chance to be heard)
...it forces audio engineers to use the other front speaker to get the virtual speaker to work.


here is the problem..
many people put their couch right up against the wall, and that means the two ripples must make contact with eachother before the ripples hit the wall behind the couch and pour out into the room like a broken water pipe.
...and this is exactly why people always say it doesnt matter if the two front speakers are next to the television stand or in the corner, because they both create a line of 'possible' virtual speaker from one edge of the listening position to the other.
(and that is why they always tell you to leave a little bit of speaker cone facing the outer edge of the couch to increase the line width)



why did i say something about pulling them apart then?
because of a few reasons:
1. you limit your width of the virtual speaker location, because the two ripples arent very far apart when using one signal from one speaker and another signal from the other speaker.
2. not all sound engineers will choose to use the other speaker to reinforce the virtual speaker, and when they do.. you wont be there to catch much of a difference (if any at all).


you start to realize the second signal can come from the rear speakers, and that is why the distance delay is absolutely critical.
but
really think about it..
why snack on a stereo fade that is only as wide as your television stand when you could be hearing it from one wall to the other?
(this is especially true if the distance between your listening position and the speakers is small)
but
they already tell you this.. if you want to increase the soundstage width, simply separate the speakers more.

you are going to thank me later when you realize not all of the ripples are counted from the second speaker, but the reflection off the wall is also used to help:
1. add amplitude to the sound
2. add detail & character, either because of signal to noise ratio .. or simply because the phase the sound engineer needs to shape the soundwave quickly and easily comes from the reflection off the wall.


look..
there is always going to be two different types of virtual speaker sounds:
1. the typical fade from left to right or front to back, and the special 'something inbetween'
2. the sound of rain that seems to transform the entire room into a landscape you can almost smell the change of wetness in the air.

if your speakers arent in the corner, be certain your room wont fill up with number 2.
(sure, some of you will get some of the effect.. but it will be with the added noise between the front speaker and the wall reducing your signal to noise ratio, as well as pouring out distorted ripples of soundwaves into the air causing a loss of detail heard by the ear)

when the pattern of ripples in the water change from that empty square room, you can expect the transferred results to be different.
rectangled rooms dont need to worry much because the major focus is 'cube' .. meaning four walls, a floor, and a ceiling.
yes..
those of you with your front speakers away from the corner are trying to 'delete' one of the four walls.
cubic doesnt work with 5 .. it works with 6 .


what do i mean when i say cubic?
referring back to the empty room without any floor or ceiling, watching those ripples in the water as they remain constant in time and frequency variances .. some type of mathematical function must capture the existence of those ripples since they are constantly the same and reliable.
..............well here is the issue:
TRIgonometry works with triangles, thus THREE variables.
but that room has four sources.
if you add the floor and the ceiling, that is six sources and a double of three.
but.. if you dont let the front speakers use the front wall as a reflection because there isnt any line-of-sight from the cone to the wall ... that brings the number of sources back down to five and trigonometry doesnt fit the bill once again.

why would i say anything about cubic then?
because cubic works in powers of four, and is the next step up from trigonometry.
how does four fit into six sources? .. easy, there are two remaining.. one for positive and one for negative (or one for +180 degrees of phase and one for -180 degrees of phase).
really.. it should make sense that those two free variables allow the surround effect to move through the air freely when added with trigonometry.
...how does that make sense you ask?
because if you have six and you are using seven.. that means there is always going to be an extra one that you can use as a layer onto the existing six.. and that is how the sound can be hidden as it moves to the physical location before it explodes in a collision like a firework shooting up into the sky and bursting into color & sparkles.

now..
if you can see how to use seven, you might as well go on and move up to two iterations of cubic, because that brings the total number up to eight.. and that gives you one for the left and one for the right.
(or one for positive phase and one for negative phase)
post #2 of 15 Old 10-10-2012, 02:34 AM - Thread Starter
 
okay..
now that i've told you the importance of the ripples in the water, including the reflected ripples in the water, i'm going to describe a little trick to get all of those ripples lined up in perfect time alignment.
in the end, your virtual speakers will be placed with high accuracy compared to the simple distance delay on the receiver.


what i did was this..
i put each speaker in the corner of the room, and one of them is pulled forward about a foot because of the computer and television stand in the way blocking the line-of-sight from the cone to the listening position.

i already had the distance measurements from the speaker to the listening position with my soundcard software.
so i grabbed an impulse response from the listening position.
then i went into the audio editor and trimmed the silence from the beginning and inversed the impulse response.. i applied it using a convolution filter.
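taken literally, those steps (trim the leading silence, flip the polarity, apply by convolution) look like the sketch below. worth flagging: a plain polarity flip is not a true inverse filter.. proper room correction would deconvolve the impulse response (e.g. regularized division in the frequency domain). the function name and the silence threshold are my own illustration, not anything from the thread:

```python
import numpy as np

def naive_inverse_filter(impulse_response, audio, silence_threshold=1e-4):
    """trim leading silence from a measured impulse response, flip its
    polarity, and convolve it with the audio -- the literal steps described.
    (a polarity flip is NOT a true inverse; real correction deconvolves.)"""
    ir = np.asarray(impulse_response, dtype=float)
    start = int(np.argmax(np.abs(ir) > silence_threshold))  # first loud sample
    inverted = -ir[start:]                                  # polarity inversion
    return np.convolve(audio, inverted, mode="full")

# toy check: an IR with 3 samples of silence, applied to a single click..
# the output is just the trimmed, flipped IR
out = naive_inverse_filter([0.0, 0.0, 0.0, 1.0, 0.5], [1.0, 0.0, 0.0])
print(out)  # first two samples are -1.0 and -0.5
```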

what i did next starts the whole difference.
i measured from the listening position to the side wall, and translated that distance to milliseconds.
i measured the distance from the listening position to the other side wall, and translated that to milliseconds.
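the distance-to-milliseconds translation is just the distance divided by the speed of sound. a tiny helper for both unit systems (the example distances are assumptions; the thread doesnt say whether feet or metres were measured):

```python
SPEED_M_S = 343.0    # speed of sound, metres per second (approx)
SPEED_FT_S = 1125.0  # the same speed in feet per second (approx)

def metres_to_ms(d: float) -> float:
    """convert a measured distance in metres to a delay in milliseconds."""
    return d / SPEED_M_S * 1000.0

def feet_to_ms(d: float) -> float:
    """convert a measured distance in feet to a delay in milliseconds."""
    return d / SPEED_FT_S * 1000.0

print(round(metres_to_ms(2.0), 2))  # 2 m    -> 5.83 ms
print(round(feet_to_ms(7.5), 2))    # 7.5 ft -> 6.67 ms
```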

i put a sound delay plugin next in the filter chain and dialed in the delay.
then i put the microphone next to the side wall and grabbed an impulse response file.
i put the microphone next to the other wall and grabbed the other impulse response file.
did the same routine in the audio editor and made it a stereo file, then loaded another convolution filter to go after the sound delay plugin and put the stereo file in the convolution plugin.

then i determined what was going to come next.. the back wall or the ceiling?
in my room, the back wall is a shorter distance.. so i measured the distance from the back wall to the listening position and translated that to milliseconds.
i put another sound delay plugin into the chain after the last convolution plugin and dialed in the delay again.

then i put the microphone up by the back wall and recorded another impulse response file.. loaded up another convolution plugin after the above sound delay plugin and loaded the file.
then i did the same thing for the ceiling with the sound delay and impulse response file.

after that..
i measured the distance from the listening position to behind the two front speakers, because soundwaves gather back there and it is useful to get rid of them with the impulse response file.
i put another sound delay plugin into the chain of filters and proceeded with recording the impulse response file from behind the speakers.
i didnt put the microphone directly behind the speakers, i kept the height between ear position and the middle point between the floor and the ceiling.
i did all the same with the audio editor and made it a stereo file.
then i loaded up another convolution plugin after the last sound delay and loaded the impulse response file.

oh i didnt stop there...
obviously the distance from listening position to behind the speaker is the same as behind the speaker to the listening position.
so i added another sound delay plugin and dialed in the delay.
then i put the microphone back into the listening position and grabbed another impulse response file.
put the convolution plugin in after the above sound delay and loaded the impulse response file.

but i didnt stop there either..
i went and did the side walls again ... the back wall again ... and the ceiling again.



see..
i believe it needs to be done with two layers, one for the push of the cone and one for the inward movement of the cone... and together they 'clamp' down on eachother to form a really nice tight bond.
because look,
the soundwaves behind the speakers are going to be opposite phase, especially in my case because i've got my midranges sitting on top of the speaker box AND the back of my speaker cabinet vibrates opposite of the speaker cone .. and those two things added up to some noise i could hear when i only did it with one layer.
and when i got back to putting the microphone into the listening position, i realized i took the soundwaves from behind the speakers (the front corners of the room) and pulled them to the listening position.
..that means there is a new set of soundwaves in the listening position that are opposite phase of everything else, and i should smear those soundwaves across the whole room instead of letting them lay and die in the listening position as they mixed with the reflections of the walls.
because the whole reason why i was grabbing the impulse response files in the first place was to inverse the reflections off the walls .. so it only made sense to do a second layer and get rid of the new reflections.

after i thought about it some more..
i zoomed in on what i was working with and decided it is a good idea to try three CLAMPED layers:
1. for the sine wave
2. for the transient wave
3. for the nuance wave

i'm already happy with the rationale of doing one layer for the positive and one layer for the negative.
but i still want to try doing two more clamped layers to see if those other two qualities improve any.

here is something that needs to be noted..
lay out all the plugins first, then dial in all of the distance delays.
you need to calibrate the equalizer with all of those distance delays running first.. because if you dont, then the calibration of the equalizer will be wrong when you add the delays.
(trust me.. my equalizer settings are different with all of the delays running compared to doing the calibration without any delays)


what all of that does is..
it uses the same exact principle of the distance delay on the home theater receiver.
you tell the soundwave 'when' .. and then you see the soundwave crashing into something and grab the impulse response file to inverse the collision, telling the soundwave 'what'
since the 'when' and the 'what' is known.. all of the function can happen in an organized way.
it tells the ripples in the water when they are going to crash into eachother much better than using the distance from the speaker to the listening position.. because using only one on the receiver gets the ripples to the listening position, but from that spot those ripples run off until they splash into the wall.
there isnt any input to tell the process when it is going to hit the wall, and that means the audio effects that rely on the reflection will be milliseconds apart.


see..
if the entire room was truly recorded, then there wouldnt be any ripples of excess audio at all.. and the sound would come from absolutely everywhere, in front.. to the side.. behind you.. above you.. below you.. all fully equal (and maybe .5 dB more 1ft from the speaker because that is where the amplitude starts).

my room has a doorway without a door and it is right in the back corner of the room.. that is the reason why i didnt bother measuring from the listening position to the rear corners.
if i had a door i would of included those measurements too.





another really important fact for an audiophile..
we all know a corner of the room traps soundwaves because of the chaos of them slamming into eachother and growing in amplitude.
my room is 7.5ft and there is a strip of wood paneling that is 7 inches wide.. the panel is attached at a 45 degree angle.
the front wall has it.. the two side walls have it.. but the rear wall doesnt have it because that is where you want the soundwaves to collect and cancel eachother out, behind your ears where your ears are pointed in the opposite direction so it isnt as bothersome or audible.
i've also got the regular 45 degree angle of 1 inch wood trim in the corners from the ceiling to the floor to help with the soundwaves that fit into the space (tweeters).
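as a rough way to put those panel sizes on a frequency scale: a full wavelength 'fits' a 7 inch strip at roughly 1.9khz, and a 1 inch trim at roughly 13.5khz (which does line up with the tweeter remark). real absorber/diffuser behaviour is far more involved than this one-line heuristic, so treat it only as a back-of-envelope check:

```python
SPEED_FT_S = 1125.0  # speed of sound, feet per second (approx)

def full_wavelength_freq(size_inches: float) -> float:
    """frequency whose full wavelength equals the given dimension.
    only a rough scale, not a treatment design formula."""
    size_ft = size_inches / 12.0
    return SPEED_FT_S / size_ft

print(round(full_wavelength_freq(7.0)))  # 7 in panel -> ~1929 Hz
print(round(full_wavelength_freq(1.0)))  # 1 in trim  -> ~13500 Hz
```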
post #3 of 15 Old 10-10-2012, 02:45 AM - Thread Starter
 
here is the waterfall before (notice the missing chunk at 60hz)

[waterfall graph]

here is the waterfall after (notice the missing chunk at 60hz is filled in)

[waterfall graph]

here is my edt before (edt is early decay time.. the ripples going over the microphone like an oscillating fan blowing air across you)

[edt graph]

here is my edt after (notice it is much smoother)

[edt graph]
post #4 of 15 Old 10-10-2012, 03:14 AM - Thread Starter
 
what is really great about how it works is..
the distance delay in the soundcard software makes the two soundwaves arrive at the same time.
but once i get to the side walls, the distance isnt the same anymore.. and all of those individual delays between the impulse response files are what shift the whole grid of the signal to get those ripples aligned in time without colliding into eachother.


i figure if i wanted to go a step further, i could actually do the distance like this..
measure from speaker to listening position, then add to that distance how far it is from the listening position to the side wall.

and then add distance from speaker to listening position to the distance of the back wall.

i guess that would time align the whole grid from bumping into eachother better.
but it is a matter of view.
because some will say why bother with the extra delay from speaker to listening position when your ears are what is in the listening position, and every reflection from that place in the room is what you want to get rid of.. and technically the distance from each speaker to the listening position has already been delayed, making it arrive at zero.
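the two bookkeepings being debated here differ only by a constant offset: the speaker-to-listening-position leg, which the receiver's distance delay has already compensated. in numbers (both distances are assumed figures):

```python
C = 343.0  # speed of sound, m/s (approx)

d_speaker_to_lp = 3.0  # assumed metres, speaker to listening position
d_lp_to_wall = 2.0     # assumed metres, listening position to side wall

delay_from_lp = d_lp_to_wall / C                        # measure from the listening position only
delay_full_path = (d_speaker_to_lp + d_lp_to_wall) / C  # include the first leg too

# the difference between the two schemes is exactly the first leg
extra_ms = (delay_full_path - delay_from_lp) * 1000.0
print(f"{extra_ms:.2f} ms")  # ~8.75 ms, i.e. 3 m / 343 m/s
```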

it is an introverted and extroverted joke about adding the extra distance delays from the speaker to the listening position.
what it is exactly that you are flipping inside out is called the 'sound field' .. and some oldschool audio engineers simply called it the LFO.
you want the soundfield calibrated for you, because all you are doing is taking the soundfield that was gifted to you by the distance delay that comes with the soundcard software or home theater receiver and stretching it out to the wall by supplying information about when it is going to hit the wall and what to expect when it gets there.


or perhaps maybe the joke is as simple viewed like this..
you are creating a chain.
it starts with the delay from the speaker to the listening position, and from there you simply split the 'sweet spot' because you technically already got rid of the sweet spot once you inversed the impulse response from that spot.
its like the collision didnt happen, and there is nothing but raw pressure flowing with inertia from the direction it came from.

two waves at about 45 degree angles is going to make contact with eachother and blow up like a round ball (i ignored the floor)
and that ball is going to expand in equal directions (especially because of the floor.. but it could carry on backwards a bit more since forward to backwards was 55% of the original path)

just remember that a chain of filters shoots like a laser as a constant stream.
the delay for the speaker to the listening position is already added onto the audio signal .. and hey.. that delay actually comes last as it throws everything else into motion.
post #5 of 15 Old 10-10-2012, 03:51 AM - Thread Starter
 
there are only two convolution plugins (that i have found) that do multichannel audio.
one of them is $120 with more knobs than necessary for what is needed.
the other one is free, but you gotta do the ENTIRE process again for every type of change to the audio format.
if you want to switch from 16bit stereo 48khz to 16bit stereo 44.1khz .. you gotta do it again for that sample rate.
if you want to switch from 16bit stereo 48khz to 24bit stereo 48khz .. you gotta do it again.
any change of bit depth .. any change of sample rate .. any change with the number of channels and that plugin will look for an impulse response file that has all of the matching data factors.

other plugins are nice.. if you did the impulse response file as 48khz and switch to 44.1khz .. the plugin keeps on going without prompting you for a new file.
but
those plugins are stereo only.

if you want to do what i did above for more than stereo speakers, that means you need a plugin for the front speakers.. another plugin for the rear speakers.. and another plugin for the center and subwoofer channel (and another plugin for the side speakers in a 7.1 setup).

VSTHOST wont let you put a separate stereo plugin on the individual channels.
it wont let you do it for the input and output pins individually.. and it wont let you do it in pairs like 'front' or 'rear'
and to make matters worse.. sometimes VSTHOST wont save the settings you put into the plugin and you gotta do it all over again .... and if you save it after you input all the settings again, the program still wont remember the settings when you re-open it.
the only way to get around it is to save each plugin's settings with the 'save bank' function and then use the 'load bank' function to recall each plugin's settings individually ...... or... start a new session and load each and every single plugin again.. then dial in all the settings and save the session as a whole and see how long it will stay saved.



multichannel users are not out of luck.
there is a program called 'liveprofessor' that is a vst host that will let you connect each stereo plugin to the individual input and output pins exactly the way you need.
there is a pro version and a free version.
the free version only supports four inputs and four outputs, and is limited to only 8 plugins.
the program only works with asio drivers, and that means you need virtual audio cable to send all the sound from the computer to asio4all to get the audio into liveprofessor.
but
liveprofessor is fast... i've got 17 plugins running for a double layer 'clamp' on my front speakers and my CPU usage shows 0% - 1%
you can find that software here:
www.ifoundasound.com

and if you hurry..
there is a beta going on until november 17th ? that will let you use the full functionality of the program.



i've gotta do an equalizer for the rear speakers and get the impulse response files for them before i load 'em into the plugin.. i just havent decided if i am going to do a separate equalizer for the front speakers, a separate equalizer for the rear speakers, and then just leave the soundcard equalizer off .. or if i am going to leave it on and simply adjust the rear speakers with an equalizer IF the peaks and dips arent much different, because i am not going to be asking for 10-20dB of gain when the soundcard equalizer has one of the frequencies set low and the other speakers need lots of boost at the same frequency.

i've watched a couple movies using the setup.
i use ac3filter as the audio decoder and i match the 7.3ms latency in liveprofessor with the -7.3ms in ac3filter for the audio to video synchronization .. i get better lip synchronization watching a movie on the computer than i do with my digital cable box connected straight to the soundcard's SPDIF input.
(no.. the SPDIF input doesnt get sent through the audio filters, i think because the soundcard grabs the digital signal and sends it directly to the digital to analog chip without ever touching the operating system)
post #6 of 15 Old 10-10-2012, 07:46 AM
ccotenj (AVS Addicted Member)
Quote:
Originally Posted by anwaypasible View Post

i recommend putting each front speaker in the corner of the room to allow the room to fill up with sound.
because when the front speakers are placed closer together, the properties of physics doesnt allow the gap between the speaker and the corner to fill up with audio.. and the only audio that does get in there is the sloppy reflections.

i stopped reading after this...

- chris

 

post #7 of 15 Old 10-10-2012, 01:10 PM - Thread Starter
 
Quote:
Originally Posted by ccotenj View Post

i stopped reading after this...

dont talk to me like that .
post #8 of 15 Old 10-10-2012, 01:57 PM
rboster (AVS Addicted Member)
members are certainly welcome to offer substantive critical comments about the posts, but do not get personal.

"Retired" AVS Moderator
post #9 of 15 Old 10-11-2012, 09:31 AM
rboster (AVS Addicted Member)
Quote:
Originally Posted by anwaypasible View Post

dont talk to me like that .

He said nothing wrong. Members do not have to agree with your comments. They are allowed to disagree to their hearts content. They are not allowed to make comments about the member/poster....just the content of their posts.

post #10 of 15 Old 10-11-2012, 12:53 PM
A9X-308 (AVS Special Member)
Quote:
3. for the nuance wave

What is a "nuance wave"? Could you please provide an engineering reference?
post #11 of 15 Old 10-12-2012, 04:24 AM
Tony_Montana (Senior Member)
It seems to be a very interesting thread.

I would like to see more examples, more graphs and measurements.


Additional information & references would be welcome too
post #12 of 15 Old 10-17-2012, 08:23 PM - Thread Starter
 
Quote:
Originally Posted by A9X-308 View Post

What is a "nuance wave"? Could you please provide an engineering reference?

well here is the definition for transient:
In acoustics and audio, a transient is a high amplitude, short-duration sound at the beginning of a waveform that occurs in phenomena such as musical sounds, noises or speech.[1][2] It can sometimes contain a high degree of non-periodic components and a higher magnitude of high frequencies than the harmonic content of that sound[citation needed]. Transients do not necessarily directly depend on the frequency of the tone they initiate.

to me, that says it isnt a full wave.. because full waves are sine.
that means it isnt a full wave, as it says 'short-duration'
it also says there can be more than one component.
so to put that in short, it isnt a full soundwave, and that makes me start with the next mathematical explanation.. a half of a soundwave (but it could actually be one-third or one-fourth, and if you can make it that far.. it could also be a smaller size).
since the smaller wave can happen at a high degree of component (existence with variable amplitude) .. i just call a transient a separate soundwave that isnt a full wave.

and with that said, i call a nuance wave a soundwave that is more cut up than a transient wave.
so if a sine is a full wave and a transient is a half wave, a nuance wave would then be one-third or one-fourth.
but
as i said, the size of the transient wave could also be one-third or one-fourth.. so in that case, if the transient is one-fourth.. then the nuance is smaller, like one-sixth or one-eighth of a soundwave.
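just to illustrate (my own sketch with made-up numbers, not from any reference): a full-cycle 60hz sine versus a half-cycle burst of the same tone. the burst is the 'short-duration, non-periodic' thing the quoted definition describes, and its energy smears across many frequency bins instead of sitting at 60hz:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second of samples, 1 Hz bin spacing

# a steady 60 Hz sine: periodic, energy concentrated in one bin
sine = np.sin(2 * np.pi * 60 * t)

# a "transient": one half-cycle of the same 60 Hz tone, then silence
half_period = int(fs / 60 / 2)
transient = np.zeros(fs)
transient[:half_period] = np.sin(2 * np.pi * 60 * t[:half_period])

S = np.abs(np.fft.rfft(sine))
T = np.abs(np.fft.rfft(transient))
print(np.argmax(S))  # → 60: the sine sits exactly on its bin
# the transient's spectrum instead spreads energy across many bins
```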

the definition for nuance is:
Nuance is a small or subtle distinction.

with that said,
a nuance can be to a transient what a transient is to a sine wave.. the nuance is just a more cut-up soundwave.
it goes like this..
1. full wave (sine)
2. one layer of cutup wave (transient)
3. second layer of cutup wave (nuance)

and that is how i like to think of nuance.
there really isn't anything i've read that says there are more than those three types of soundwaves.. and if there were, i would have put them in place/order.

it already says a transient is high amplitude, and a nuance is small but distinctive when compared to another soundwave.. meaning it is small, but can still be heard well enough to hear the difference.
it already says the transient is short-duration, and a nuance is small.. the reason they say 'distinctive' is that by small they mean really small, but not so small that you can't notice it.


see..
the reason i was talking about doing three different layers of clamped correction is that opening up the air to let the smaller soundwaves float through should make them more audible.

when talking about audio mastering, real MASTERS would want to use all three waves individually, or added up together, to create something like an effect.. maybe a final sound that sounds just the way they want, or a harmonic that helps with the stereo panning, or even something to cancel out some crosstalk between the two speakers.

doing the three layers isn't meant to keep all three types of soundwaves separate from each other, but to make the air less resistant, to let those smaller waves make it to your ears (or the walls).
post #13 of 15 Old 10-17-2012, 08:54 PM - Thread Starter
 
anwaypasible
Quote:
Originally Posted by Tony_Montana View Post

It seems to be a very interesting thread.
I would like to see more examples, more graphs and measurements.
Additional information & references would be welcome too

here is the before minimum phase of the 60hz area that improved on the waterfall:
[graph not shown]

here is the after minimum phase of the 60hz area:
[graph not shown]
you can see how it was rippled, and then it went flat.
the entire line of phase is supposed to do that.
when you look at digital room correction software and see the before and after frequency response, the phase is supposed to do the same thing.

maybe they've narrowed it down to a choice between the phase or the frequency amplitude and only the frequency amplitude gets corrected.
but the reality is, both of them are supposed to do it.

i think maybe the program generates a minimum phase impulse response filter, and that means there isnt any large amount of phase in the impulse that can be inverted and corrected.
but it works when i do it, and that means whatever the window and the gate are pointing at within the FFT, it is very small and effective.
see..
what i dont get is, if the 60hz area of phase will change.. then why not other places too?
and i think of it like this..
well maybe my room rings at 30hz and 60hz is the second (or first?) harmonic.. and the window and gate of the FFT are pointed at that harmonic instead of the first one.
and again.. maybe the room rings at 15hz the strongest, and that is the one frequency that should get the aim.

i havent used a different program that will allow me to adjust the window or gate of the FFT .. only predefined windows that already have settings adjusted, and for all i know they might all be using the same window and gate settings.
maybe it is only the gate setting, and not the window.
maybe adjusting the window is what changes it from one type to another, and therefore all of the window presets use the same gate settings.
because i read about the window being the method of FFT translation, and that is why the responses look different in the measurements.
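a quick numerical sketch of the window idea (my own toy example, not taken from any particular program): a tone that doesn't land exactly on an FFT bin leaks into far-away bins much more with a plain rectangular window than with a Hann window, which is one reason the responses can look different depending on the window preset:

```python
import numpy as np

fs = 48000
n = 4096
t = np.arange(n) / fs
# a 600.7 Hz tone that does not land exactly on an FFT bin (~11.7 Hz bin width)
x = np.sin(2 * np.pi * 600.7 * t)

rect = np.abs(np.fft.rfft(x))                  # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(n)))  # Hann window

def far_leakage(X, peak_bin, skirt=20):
    # strongest bin more than `skirt` bins away from the tone, relative to the peak
    far = np.concatenate([X[:peak_bin - skirt], X[peak_bin + skirt:]])
    return far.max() / X.max()

pb = int(round(600.7 * n / fs))  # the bin nearest the tone
print(far_leakage(rect, pb) > far_leakage(hann, pb))  # → True
```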


i think to professionally master a room, the window and gate have to start at the smallest, and then work their way up to the largest.
from what i gather about the window and gate settings..
its like you put the microphone in the middle of the room, and if the default settings tell the microphone to send the information that is directly in front of the tip of the microphone ...
well then adjusting those settings will send information further and further away from the microphone tip until you hit the wall.

kinda like saying the default settings are set to listen to the loudest (thus largest) soundwaves or to the softest (thus smallest) soundwaves.
and i really think that is an easy way to look at it.
but
i also read something about different orders of harmonics.. and when you really think about it, the room fills up with soundwaves of all different sizes.. so talking about anything more complex than the size of the soundwaves being focused on seems a bit silly.. but i think i can come up with something.

there is the reference signal, and the phase from that signal is known by the software.
then there is the signal coming from the speakers, and it is different in phase, but it should be the loudest largest soundwave.
and then there is the soundwave that bounces off of the wall once, and that wave will be smaller.. and the phase will also shift because of the bounce AND if it smashes into another soundwave to change the phase some more.
....and then the soundwave has bounced once, but bounces again .. and because of time, the soundwave is not as loud and it is smaller, and the phase of that wave will also be different as it bounces and collides with other soundwaves.

so there are really two ways to look at it.
1. you could look at the size of the soundwave and then check the phase of the wave.
2. you could look at the phase of the wave and find one that is closest to the reference signal, then check the size of the wave.

either one has two extremes: from largest to smallest.. and from closest to the reference signal to the farthest from it.
maybe number 1 is one extreme end of the window and gate function, and number 2 is the other extreme end of the window and gate function.


the most weird part about it is..
there is an option to use the 'minimum phase' export function for the impulse response.. so what is the other one without the 'minimum phase' option checked?
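for what 'minimum phase' usually means: a minimum-phase filter has the same magnitude response but the least possible phase/delay, so there is nothing 'extra' left over to invert. here is a sketch of the standard real-cepstrum construction (a generic textbook method.. i am not claiming this is what the program actually does):

```python
import numpy as np

def minimum_phase(h, n_fft=4096):
    """Minimum-phase version of filter h via the real cepstrum:
    same magnitude response, least possible phase lag."""
    H = np.fft.fft(h, n_fft)
    log_mag = np.log(np.maximum(np.abs(H), 1e-12))
    cep = np.fft.ifft(log_mag).real
    # fold the anticausal half of the cepstrum onto the causal half
    w = np.zeros(n_fft)
    w[0] = 1.0
    w[1:n_fft // 2] = 2.0
    w[n_fft // 2] = 1.0
    H_min = np.exp(np.fft.fft(cep * w))
    return np.fft.ifft(H_min).real[: len(h)]

# a maximum-phase filter (the "worst" phase for its magnitude response)
h = [0.5, 1.0]
print(np.round(minimum_phase(h), 3))  # ≈ [1.0, 0.5]: same magnitude, least phase
```

an export without the 'minimum phase' box checked would presumably keep whatever excess phase was actually measured instead of throwing it away.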

i think the measurement sweep is really easy to comprehend.
you really don't need a timing reference as a loopback, because the sound will simply come out of the speakers when it does, and the microphone started recording when you pressed go.. so when the microphone starts to receive sound, the loudest, largest wave is the 'direct' signal and everything else is the 'wet' signal.
when you can separate those two.. then you can look at the 'wet' signal and see just how different it is when compared to the 'dry' signal.
and i think that is how you get all the echo information with a valid time reference.
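that dry/wet split can be sketched as a simple time gate on the measured impulse response (toy numbers of my own, not from any particular program): everything within a few milliseconds of the loudest peak is treated as the direct 'dry' arrival, the rest as the 'wet' reflections:

```python
import numpy as np

fs = 48000
# a toy impulse response: direct sound at t=0, one reflection 10 ms later
ir = np.zeros(fs // 2)
ir[0] = 1.0                 # direct (dry) arrival, the loudest peak
ir[int(0.010 * fs)] = 0.4   # a quieter bounce off a wall

# time-gate: everything within 5 ms of the largest peak is "dry",
# the remainder is the "wet" (reflected) part
peak = int(np.argmax(np.abs(ir)))
gate = int(0.005 * fs)
dry = ir[: peak + gate]
wet = ir[peak + gate:]
print(np.abs(dry).max(), np.abs(wet).max())  # → 1.0 0.4
```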
post #14 of 15 Old 10-18-2012, 05:37 PM
baniels
More speaker cone travel/excursion isn't going to help the 2nd wave catch up to the first. It will just give it more amplitude. The speed of sound isn't so pliable.
Quote:
Originally Posted by anwaypasible View Post


see.. if the speaker cone was going to throw out the signal and try to smack the first signal in the rear, then that signal would need to travel with more speaker cone movement to catch up to the first one.
post #15 of 15 Old 10-18-2012, 07:52 PM - Thread Starter
 
anwaypasible
i can agree.
most people don't think of speakers and audio in terms of pockets of air pressure.. because that pocket of air pressure usually leaks sound that is audible at much, much lower pressures.

the speed of sound is blistering fast.
that is how an entire room can fill up with audio from one thump of a speaker.


see.. it isnt about a pocket of air leaving the speaker cone, it is about a pocket of air touching the speaker cone.

and the only real way to visualize it is to look at it like gelatin.
a speaker cone near the floor in the front of a room can suck backwards (inward cone movement) and cause a disturbance in the gelatin all the way at the opposite side of the room, up by the ceiling.. that is how the gelatin wiggles.

but it is possible to separate the sound in a gelatin and make the opposite corner stop wiggling just because the gelatin was touched.
it is possible to separate the sound much sooner than the corner of the room, because that is exactly how the virtual speakers work.

because gas is like a gelatin, but softer, the air can have pockets inside of it.
the really tricky part is getting the audio pocket to stop bleeding out sound as the pocket travels through the air.
some air conditions won't allow pockets as well as others, and it mostly comes down to the pliability of the gelatin.. whether it is tough and hard to wiggle (the pocket will be harder to explode).. or too soft and refusing to clump together (the pocket will refuse to be created).


your point is true when you consider that a pocket of air will bleed audio, the entire gelatin will wiggle, and that makes it harder for the 2nd wave to do any kind of catching up to the 1st wave.
the only way to do it is to somehow slow down the 1st soundwave.. and sometimes higher amplitude for the first and lower amplitude for the 2nd is exactly the way to do it; it depends on the pressure and pliability.
other places might be the exact opposite.
and if you really wanted an extra boost, you could find the right amplitude.. and then switch to two soundwaves combined as one to create the harmonic 'target' tone.

there have been a whole bunch of speakers that work without large cone movement, and they provide a really good example of air being a gelatin.
and there is always the question.. if it was a gelatin, then why does the sound move really easy and fast .. and why can i hear it instead of it sounding muffled like a blanket is thrown over the speaker?

well for those with that question.. you are already here reading this, therefore it should be simple for you to inverse the wiggle of gelatin and call it air.

the really funny part is..
looking at it all at a sub-atomic level, it is all particles no matter if it is a gas or gelatin.
but
smoke rings can do it.. how do you think smoke rings stay together instead of melting away the shape? .. it is because the smoke is in its own little pressurized pocket, and there really isn't much pressure there right?


the truth is..
soundwaves dont simply leave a speaker cone and disperse 'everywhere' .. and that is why you see rings coming out of a speaker cone in videos.
that is why they try to get you to point the speaker cone towards the wall with a certain degree of angle.
because if you get all your angles perfect, the phase in the listening position will start to show the biggest opposition to all of the phase in the area outside of the listening position.
it is an old valid method to try and reduce some ringing of the room.. but only because half of the room is used to do a phase cancellation of the other half of the room.
it really shouldnt have anything to do with the audio effects, because the time between dry and wet signal simply cannot be the same or averaged with the large number of different room sizes.
and that is the main reason why 'close your eyes and spray' doesn't work very well with audio effects.

it isnt all done with time domain.. it is also done with the phase domain.
one person walking at 3mph will travel from point A to point B because of the 3mph speed.
one person moving at 5mph pushing another person at 3mph (a 2mph boost) will travel the same point A to point B and will arrive faster because of the 5mph speed.

the pushing energy doesnt get lost sideways unless the person in front refuses to move.
the reason people get that idea is that they think air is like water.. and that with more amplitude, the ripples of the waves get bigger but don't move any faster.
but even the ripples in the water move faster; it is just harder to see because the wave is usually not big enough to give a visual difference.
the best way to see it is to watch a boat go by and look at the big fast waves first, then watch the waves get smaller and slower.


just like a vehicle..
the car only has one real amplitude, and that is the speed it travels.
you can think of the extra cone excursion at louder volumes as more force, but really it is more speed.

zoom in on the cone with me.
the cone has to move from point A to point B at the perfect time to claim what frequency it is.
it does not matter how fast or how slow the cone moves AS LONG AS the point A to point B time is perfect for the frequency.

imagine this.. say X frequency requires the time between point A and point B to be exactly 10 seconds.
and lets say the cone has to move 100mph to get from point A to point B in exactly 10 seconds.
with that information,
i can tell the cone to slow down to 50mph for exactly half of the trip's time, and then tell the cone to travel at 150mph for the other half of the time.
see..
minus 50mph = a 50mph gap missing.
that means adding 50mph to the normal 100mph will get the missing 50mph back for the second 50% and will force the cone to get to point B in the 10 seconds required.

it works easily for sine waves.. but start looking at more complex soundwaves and the processing is the same size but needs to be faster.
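the 50mph/150mph arithmetic does check out, with one catch worth spelling out: the two speeds have to each last half of the time, not half of the distance. a quick check with the numbers above:

```python
# the stroke in the example: 10 seconds at 100 mph covers 1000 mph-seconds
total_time = 10.0
distance = 100.0 * total_time

# half of the TIME at 50 mph, half at 150 mph: the gap is paid back exactly
d = 50.0 * (total_time / 2) + 150.0 * (total_time / 2)
print(d == distance)  # → True: averages out to 100 mph

# but half of the DISTANCE at each speed takes longer than 10 seconds,
# because more time is spent on the slow half (average only 75 mph)
t = (distance / 2) / 50.0 + (distance / 2) / 150.0
print(round(t, 2))  # → 13.33 seconds, not 10
```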

it boils down to this..
you have to love the air enough to 'know' it.
and smoke rings are the easiest way to see that it is possible.