CD vs. Vinyl...the DeathMatch! - Page 4 - AVS Forum
post #91 of 2578 Old 06-27-2007, 07:10 PM
Senior Member
 
classic77's Avatar
 
Join Date: Apr 2005
Location: Australia
Posts: 319
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by m. zillch View Post

Instead of a computer they should have used a properly designed DBT on humans! No, I'm not kidding.

I think they did, and it was 98%. I'm not sure; I'll check it again tonight.
classic77 is offline  
post #92 of 2578 Old 06-27-2007, 07:13 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by m. zillch View Post

Now you are making more sense to me. I'm glad you are in agreement then that we are keeping the liars out of this equation and assume people will honestly do their best to try to distinguish A from B (or whether X is A or B).

The point is if the human ear can't statistically differentiate between A and B, then to human perception they sound the same. They may be different in other ways: they may look different, taste different, smell different (mmmm polycarbonate), BUT THEY DON'T SOUND DIFFERENT TO THE HUMAN EAR! They also may or may not measure differently on test instruments. But that doesn't matter.

Nope, I still don't quite agree! If the subject fails, it does not prove "THEY DON'T SOUND DIFFERENT TO THE HUMAN EAR". It proves nothing. Nothing gained and nothing lost, zilch! For starters, it's not about differentiating between A and B; it's about picking X from A and B.

From: http://www.hydrogenaudio.org/forums/...howtopic=16295

Rule 1: It is impossible to prove that something doesn't exist. The burden of proof is on the one claiming that a difference can be heard.

It's impossible to prove that something doesn't exist!
classic77 is offline  
post #93 of 2578 Old 06-27-2007, 07:18 PM
AVS Special Member
 
m. zillch's Avatar
 
Join Date: Oct 2006
Posts: 3,832
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 29 Post(s)
Liked: 103
Quote:
Originally Posted by classic77 View Post

Nope, I still don't quite agree! If the subject fails, it does not prove "THEY DON'T SOUND DIFFERENT TO THE HUMAN EAR". It proves nothing. Nothing gained and nothing lost, zilch! For starters, it's not about differentiating between A and B; it's about picking X from A and B.

From: http://www.hydrogenaudio.org/forums...showtopic=16295

Rule 1: It is impossible to prove that something doesn't exist. The burden of proof is on the one claiming that a difference can be heard.

It's impossible to prove that something doesn't exist!

Please fix your link, it's not working for me.

In A/V reproduction accuracy, there is no concept of "accounting for taste". We don't "pick" the level of bass any more than we get to pick the ending of a play. High fidelity is an unbiased, neutral, exact copy (or "reproduction") of the original source's tonal balance, timing, dynamics, etc..

m. zillch is online now  
post #94 of 2578 Old 06-27-2007, 07:33 PM
Senior Member
 
classic77's Avatar
 
Another example: picture A is different from picture B, this time sight instead of sound. Let's take a 1920×1080 (Full HD) still picture; that's 2,073,600 pixels. Picture A is this picture. Picture B is picture A with one of the 2,073,600 pixels changed from blue to a slightly darker blue. The viewer is then presented with picture X, which could be picture A or picture B. What do you think the chances are of that viewer noticing that one slightly changed pixel and identifying whether X is A or B? Fat chance, really; well, who am I to say for sure, we could test it. Let's say the results are 52/48 correct/incorrect after 100 trials. Obviously this is not enough, so A != B is not proven. But we can't assume A = B. In this case, we know that A != B, because we changed it!

Does this make sense? This test (assuming the results went roughly as I estimated) could not even prove what we already know. We knew A != B, but the viewer could not verify it.
classic77 is offline  
post #95 of 2578 Old 06-27-2007, 07:48 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by m. zillch View Post

Please fix your link, it's not working for me.

Link fixed.
classic77 is offline  
post #96 of 2578 Old 06-27-2007, 08:04 PM
AVS Special Member
 
m. zillch's Avatar
 
Quote:
Originally Posted by classic77 View Post

A!=B

You have to explain this nomenclature to me. On my monitor it shows up as letter A, exclamation point, equal sign, B... is this like an equal sign with a diagonal slash through it, meaning "not equal to"?

Link still not working. Maybe I need cookies turned on, or to be a member? I take it it's an entry on hydrogen's forum from some lay person like us, not a scientist, right? Could you please cut and paste the info instead? Thanks.

In A/V reproduction accuracy, there is no concept of "accounting for taste". We don't "pick" the level of bass any more than we get to pick the ending of a play. High fidelity is an unbiased, neutral, exact copy (or "reproduction") of the original source's tonal balance, timing, dynamics, etc..

m. zillch is online now  
post #97 of 2578 Old 06-27-2007, 08:12 PM
AVS Special Member
 
CharlesJ's Avatar
 
Join Date: Jan 2006
Posts: 3,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 43 Post(s)
Liked: 89
Quote:
Originally Posted by classic77 View Post

Yes, but in a DBT measurements are irrelevant; that is, if one can accurately pick X from A and B. Measurements can be used to form a hypothesis. How small a change in components is audible? So many variables; very hard, if not impossible, to determine.

I used "measurement" as you used an equal sign. Of course measurement is not required to do a DBT. But still, if the outcome is null, it doesn't mean they are equal, just not audibly different.


Quote:
Originally Posted by classic77 View Post

I'm not sure what you mean here sorry.

What I meant by A=A is that when no changes are made across trials and the same component is played against itself, A does equal A, yet under DBT people still perceive differences at a rather high rate. So even when A=A, people claim otherwise.


Quote:
Originally Posted by classic77 View Post

Anyway here's an interesting article where a "Blind Test" (I assume it was a double blind test) got bi-wiring accepted by a speaker designer. I found this as I own a couple of his speakers. http://www.legendspeakers.com.au/inf.../biwiring.html

Unless I missed it, I didn't see any evidence that it was blind, how many trials were used, how many correct guesses were made, etc. So it is unreliable at best. I have corresponded with at least one speaker company, or actually have seen a response from its engineering department, saying they use it because the marketing department insisted, not because of engineering.
CharlesJ is online now  
post #98 of 2578 Old 06-27-2007, 08:15 PM
AVS Special Member
 
CharlesJ's Avatar
 
Quote:
Originally Posted by classic77 View Post

Also remember that in a DBT a listener must choose! Even if they have no idea, they can't say "I can't tell the difference"; they must answer A or B.


Well, in a same/different test there are only two choices, same or different. If they cannot tell, then they would get that trial wrong. But they do have a 50% chance of getting it right.
CharlesJ is online now  
post #99 of 2578 Old 06-27-2007, 08:34 PM
AVS Special Member
 
CharlesJ's Avatar
 
Quote:
Originally Posted by classic77 View Post

So how can one say that this proves that CD and Vinyl are absolutely no different, equal, when I might have just lied my ass off? You can't.


Be careful here. Don't forget that you are comparing a recording of a vinyl to CD, then comparing that CD to the vinyl. That is what has been discussed. Past tests of this have shown null results.
CharlesJ is online now  
post #100 of 2578 Old 06-27-2007, 08:36 PM
AVS Special Member
 
CharlesJ's Avatar
 
Quote:
Originally Posted by JorgeLopez11 View Post

Classic77.

Don't get confused. DBTs are extensively used to make million-dollar decisions in the food industry, medicine, and other areas where sensory tests are mandatory. They are actually conclusive!

I think Mr. Hirvonen's master's thesis would be a great help for you in understanding ABX tests and subjective audio test methodology.

http://www.acoustics.hut.fi/publicat...rvonen_mst.pdf

Enjoy!


That's a good find. Good thing it was in English.
CharlesJ is online now  
post #101 of 2578 Old 06-27-2007, 08:44 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by CharlesJ View Post

I used "measurement" as you used an equal sign. Of course measurement is not required to do a DBT. But still, if the outcome is null, it doesn't mean they are equal, just not audibly different.

Wrong, if the results are null, the outcome is null.
classic77 is offline  
post #102 of 2578 Old 06-27-2007, 08:45 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by CharlesJ View Post

Be careful here. Don't forget that you are comparing a recording of a vinyl to CD, then comparing that CD to the vinyl. That is what has been discussed. Past tests of this have shown null results.

I'll take your word for these results. If the results are null, the outcome is null.
classic77 is offline  
post #103 of 2578 Old 06-27-2007, 08:46 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by CharlesJ View Post

Be careful here. Don't forget that you are comparing a recording of a vinyl to CD, then comparing that CD to the vinyl. That is what has been discussed. Past tests of this have shown null results.

An ABX test does not compare A to B; the listener must pick whether X is A or B.
classic77 is offline  
post #104 of 2578 Old 06-27-2007, 08:47 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by CharlesJ View Post

Unless I missed it, I didn't see any evidence that it was blind, how many trials used, how many correct guesses made, etc. So, it is unreliable at best. I have corresponded with at least one speaker company, or actually have seen a response from the engineering department, that they use it because the marketing department insisted, not because of engineering.

I am in no way endorsing this test or its results. It's just an example.
classic77 is offline  
post #105 of 2578 Old 06-27-2007, 08:48 PM
Senior Member
 
classic77's Avatar
 
classic77 is offline  
post #106 of 2578 Old 06-27-2007, 08:50 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by CharlesJ View Post

Well, in a same/difference test, there are only two choices, same or different. If they cannot tell, then they would get that trial wrong. But, they do have a 50% chance of getting it right

This is not an ABX test. Is it a DBT of another kind?
classic77 is offline  
post #107 of 2578 Old 06-27-2007, 08:53 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by m. zillch View Post

You have to explain this nomenclature to me. On my monitor it shows up as letter A exclamation point equal sign B.....................is this like an equal sign with a diagonal slash through it meaning "not equal to"?

Link still not working. maybe I need cookies turned on or to be a member? I take it it's an entry on hydrogen's forum from some lay person like us, not a scientist right? could you please cut and paste the info instead? Thanks.

!= means "not equal to".

I'm not a member of this site.

From: http://www.hydrogenaudio.org/forums/...howtopic=16295


Blind listening tests
-------------------------

The question is to know whether a given factor, cable, speaker stand, mp3 codec, etc., that we will call a "tweak", has an effect on the sound or not, without, for the time being, caring whether the effect is positive.
An ABX test can give us an answer.

In this kind of test, the listener has access to three sources labeled A, B, and X. A and B are the references: the audio source with and without the tweak, for example the WAV file and the MP3 file. X is the mystery source; it can be A or B. The listener must identify it by comparing it to A and B.

But if the listener says that X is A, and X actually is A, what does this prove?
Nothing, of course. If you flip a coin behind my back and I state that it's heads, and I'm right, it doesn't prove the existence of para-psychic abilities that let me see behind my back. It's just luck, nothing more!
That's why a statistical analysis is necessary.

Let's imagine that after the listener has given his answer, the test is run again, choosing X at random each time, 15 more times. If the listener gives the correct answer all 16 times, what does it prove? Can it be luck?
Yes it can, and we can calculate the probability of that happening. For each trial there is one chance out of two of getting the right answer, and 16 independent trials are run. The probability of getting everything correct by chance is then 1/2 to the power 16, that is 1/65536. In other words, if no difference is audible, the listener will get everything correct one time out of 65536 on average.
We can thus choose the number of trials according to the tweak tested, the goal being a chance-success probability lower than the prior likelihood that the tweak actually has an audible effect.
For example, if we compare two pairs of speakers, it is likely that they won't sound the same, so we can be content doing the test 7 times; there will be 1 chance out of 128 of getting a "false success". In statistics, a false success is called a "type I error". The more the test is repeated, the less likely type I errors are to happen.
Now, if we put an amulet beside a CD player, there is no reason it should change the sound, so we can repeat the test 40 times. The probability of success by chance will then be about one in a trillion (2 to the power 40). If it ever happens, there is necessarily an explanation: the listener hears the operator moving the amulet, or the operator always takes more time to start the playback once the amulet is away, or maybe the listener perceives a brightness difference through his eyelids if it is a big dark amulet, or he can smell it when it is close to the player...

Let p be the probability of getting a success by chance. It is generally accepted that a result whose p value is below 0.05 (one out of 20) should be seriously considered, and that p < 0.01 (one out of 100) is a very positive result. However, this must be weighed against the context. We saw that for very implausible tweaks, like the amulet, it is necessary to get a very small p value, because between the expected probability of the amulet working (say one out of a billion) and the probability of the test succeeding by chance (1 out of 100 is often chosen), the choice is obvious: it's the test that succeeded by chance!
Here's another example where numbers can fool us. If we test 20 cables, one by one, in order to know whether they have an effect on the sound, and if we count p < 0.05 as a success, then even in the case where no cable has any actual effect on the sound, since we run 20 tests, we should still expect on average one accidental success among the 20! In this case we absolutely cannot say that the cable affects the sound with a probability of 95%, even though p is below 5%, since that success was expected anyway. The test failed, that's all.
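The 20-cables point can be made concrete with a short Python sketch of the multiple-comparisons arithmetic (the 0.05 threshold and 20 tests are the figures from the paragraph above):

```python
alpha = 0.05   # per-test significance threshold (p < 0.05 counts as "success")
n_tests = 20   # number of cables tested, one test each

# Expected number of accidental "successes" if no cable has any effect:
print(n_tests * alpha)  # 1.0

# Probability of at least one accidental success among the 20 tests:
print(1 - (1 - alpha) ** n_tests)  # about 0.64
```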

But statistical analyses are not limited to simple powers of 2. If, for example, we get 14 right answers out of 16, what happens? It is perfectly possible to calculate the probability of that happening, but mind that what we need here is not the probability of getting exactly 14/16, but the probability of getting 16/16, plus the one of getting 15/16, plus the one of getting 14/16.
An Excel table gives all the needed probabilities: http://www.kikeg.arrakis.es/winabx/bino_dist.zip . It is based on a binomial distribution.
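The binomial tail that spreadsheet tabulates can also be sketched in a few lines of Python (a minimal stand-in for the linked Excel table, not part of it):

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """Probability of getting k or more successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of 14 or more correct out of 16 by pure guessing:
# C(16,14) + C(16,15) + C(16,16) = 120 + 16 + 1 = 137 favorable outcomes.
print(binomial_tail(16, 14))  # 137/65536, about 0.0021
```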

Now, how do we set up the listening test so that its result, if positive, is really convincing? There are rules to observe if you don't want, in case of a success, to have all your opponents laugh at you.

Rule 1: It is impossible to prove that something doesn't exist. The burden of proof is on the one claiming that a difference can be heard.
If you believe that a codec changes the sound, it is up to you to prove it by passing the test. Someone claiming that a codec is transparent can't prove anything.

Rule 2: The test should be performed under double-blind conditions.
In hardware tests, this is the most difficult requirement to meet. Single blind means that you can't tell whether X is A or B other than by listening to it. Double blind means that nobody in the room or the immediate surroundings can know whether X is A or B, in order to avoid any influence, even unconscious, on the listener. This complicates the operations for hardware testing: a third person can lead the blindfolded listener out of the room while the hardware is switched. High-quality electronic switches have been made for double-blind listening tests ( http://sound.westhost.com/abx-tester.htm ): a chip chooses X at random, and a remote control allows comparing it to A and B at will.
Fortunately, in order to double-blind test audio files on a computer, some ABX programs are freely available. You can find some in our FAQ.

Rule 3: The p values given in the table linked above are valid only if the two following conditions are fulfilled:
-The listener must not know his results before the end of the test, except if the number of trials is decided before the test; otherwise, the listener would just have to look at his score after every answer and stop the test when, by chance, the p value drops low enough for him.
-The test is being run for the first time; if it is not, all previous results must be pooled in order to get the overall result. Otherwise, one would just have to repeat the series of trials as many times as needed to get, by chance, a small enough p value.
Corollary: only give answers of which you are absolutely certain! If you have the slightest doubt, don't answer anything. Take your time. Take pauses. You can stop the test and continue another day, but never try to guess by "intuition". If you make some mistakes, you will never have the occasion to do the test again, because anyone will be able to accuse you of making the numbers say what you want by "starting again until it works".
Of course you can train yourself as many times as you wish, provided that you firmly decide beforehand that it will be a training session. If you get 50/50 during a training session and then can't reproduce this result, too bad for you: the results of the training sessions must be thrown away whatever they are, and the results of the real test must be kept whatever they are.
Once again, if you take all the time needed, be it one week of effort for only one answer, in order to get a positive result at the first attempt, your success will be mathematically unquestionable! Only your hi-fi setup or your blind-test conditions may be disputed. If, on the other hand, you run again a test that once failed, because since then your hi-fi setup was improved or there was too much noise the first time, you can be sure that someone, relying on statistical laws, will come and question your result. You will have done all this work in vain.

Rule 4: The test must be reproducible.
Anyone can post fake results. For example, someone who sells trinkets that supposedly improve the sound, like oil for CD jewel cases or cable sheaths, could very well pretend to have passed a double-blind ABX test with p < 0.00001, so as to make people talk about his products.
If someone passes the test, others must check whether this is possible by passing the test in their turn.


We have seen what an ABX test is, with the associated probability calculation, which is perfectly suited to testing the transparency of a codec or the validity of a hi-fi tweak. But this is only the ABC of statistical testing.
For example, in order to compare the quality of audio codecs like MP3 in larger-scale tests, the more sophisticated ABC/HR tests are used (see http://ff123.net/abchr/abchr.html ). Each listener has two sliders and three buttons for every audio codec tested. A and B are the original and the encoded file; the listener doesn't know which one is which. C is the original, which stands as a reference. Using the sliders, he must give a mark between 1 and 5 to A and B, the original in theory getting 5.
A probability calculation then allows us not only to know whether the tested codec audibly alters the sound, but also to estimate the relative quality of the codecs for the set of listeners involved, still under double-blind conditions, with a probability calculation giving the relevance of the result. These calculations, according to the needs of the test, can be performed with the Friedman method, for example, which gives a ranking for each codec, or with ANOVA, which gives an estimate of the subjective quality perceived by the listeners on the 1-to-5 scale.

Note that this kind of statistical analysis is mostly used in medicine, and that to get authorization, any drug must prove its efficacy in double-blind tests against placebo (both the physicians and the patients are unaware whether the pill is a placebo or the medication; the drug must not only prove that it works, but that it works better than a placebo, because a placebo alone works too), and the decision is based on mathematical analyses such as the ones we just saw. So these are not hastily made guidelines for hi-fi tests; they are general testing methods used in scientific research, and they remain entirely valid for audio tests.
classic77 is offline  
post #108 of 2578 Old 06-27-2007, 09:03 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by classic77 View Post

Another example: picture A is different from picture B, this time sight instead of sound. Let's take a 1920×1080 (Full HD) still picture; that's 2,073,600 pixels. Picture A is this picture. Picture B is picture A with one of the 2,073,600 pixels changed from blue to a slightly darker blue. The viewer is then presented with picture X, which could be picture A or picture B. What do you think the chances are of that viewer noticing that one slightly changed pixel and identifying whether X is A or B? Fat chance, really; well, who am I to say for sure, we could test it. Let's say the results are 52/48 correct/incorrect after 100 trials. Obviously this is not enough, so A != B is not proven. But we can't assume A = B. In this case, we know that A != B, because we changed it!

Does this make sense? This test (assuming the results went roughly as I estimated) could not even prove what we already know. We knew A != B, but the viewer could not verify it.

I eagerly await anyone's opinion on this example. And in case anyone thinks computers are not useful and we should rely only on humans: a computer could compare the pictures bit for bit and pick the sucker every time!
classic77 is offline  
post #109 of 2578 Old 06-27-2007, 09:54 PM
AVS Special Member
 
m. zillch's Avatar
 
Quote:
Originally Posted by classic77 View Post

!= is not equal to

Im not a member of this site.

From: http://www.hydrogenaudio.org/forums/...howtopic=16295

This link works; thanks for the cut and paste though.

I like that you bring up a new category here, pixels, but I would need to know more: is the odd blue pixel in the middle of a field of white, or of other similar blues? Is the image in motion or static? Are you told which pixel to investigate? I hope you don't mind, but I'm going to change your pixels into grains of sand in 3 separate cups. Cup A holds 2,000,000 grains of sand, B holds 2,000,001, and X could be either 2,000,000 or 2,000,001 for each trial. The test subject has to determine whether X=A or X=B by judging their weights with his hands alone. A super-precision scale could tell the difference, but a human can't. Ideally you then get test subjects who might be better than average at judging the weight of things, let's say diamond merchants. If we test hundreds and hundreds of them and none gets better than, say, a 52/100 correct score, then we assume that to average human perception A=B [why you don't agree with this baffles me].

So what if that 52/100 score was a 60? 70? 80? Then you keep testing that guy! In science it has to be repeatable. The 1/20 figure that guy mentioned is what I've used myself when I did an ABX test. Good night.

In A/V reproduction accuracy, there is no concept of "accounting for taste". We don't "pick" the level of bass any more than we get to pick the ending of a play. High fidelity is an unbiased, neutral, exact copy (or "reproduction") of the original source's tonal balance, timing, dynamics, etc..

m. zillch is online now  
post #110 of 2578 Old 06-27-2007, 09:58 PM
AVS Special Member
 
m. zillch's Avatar
 
Quote:
Originally Posted by classic77 View Post

I eagerly await anyone's opinion on this example. An in case anyone thinks computers are not useful and we should only rely on humans, a computer could compare bit for bit the pictures and pick the sucker every time!

If you want to know "is it worth it to buy more expensive B over lower price A? " Then it doesn't matter if a computer can tell a difference, only if a human can.

In A/V reproduction accuracy, there is no concept of "accounting for taste". We don't "pick" the level of bass any more than we get to pick the ending of a play. High fidelity is an unbiased, neutral, exact copy (or "reproduction") of the original source's tonal balance, timing, dynamics, etc..

m. zillch is online now  
post #111 of 2578 Old 06-27-2007, 10:06 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by m. zillch View Post

If you want to know "is it worth it to buy more expensive B over lower price A? " Then it doesn't matter if a computer can tell a difference, only if a human can.

Yep.
classic77 is offline  
post #112 of 2578 Old 06-27-2007, 10:30 PM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by m. zillch View Post

This link works; thanks for the cut and paste though.

I like that you bring up a new category here, pixels, but I would need to know more: is the odd blue pixel in the middle of a field of white, or of other similar blues? Is the image in motion or static? Are you told which pixel to investigate? I hope you don't mind, but I'm going to change your pixels into grains of sand in 3 separate cups. Cup A holds 2,000,000 grains of sand, B holds 2,000,001, and X could be either 2,000,000 or 2,000,001 for each trial. The test subject has to determine whether X=A or X=B by judging their weights with his hands alone. A super-precision scale could tell the difference, but a human can't. Ideally you then get test subjects who might be better than average at judging the weight of things, let's say diamond merchants. If we test hundreds and hundreds of them and none gets better than, say, a 52/100 correct score, then we assume that to average human perception A=B [why you don't agree with this baffles me].

So what if that 52/100 score was a 60? 70? 80? Then you keep testing that guy! In science it has to be repeatable. The 1/20 figure that guy mentioned is what I've used myself when I did an ABX test. Good night.


Your questions raise all the issues: what kind of music was played? How long did it go for? How well was it recorded? Remastered? All that kind of stuff; so many variables.

Relating to the example, I did say a still, not a moving, picture. What kind of picture is it? Are there lots of blues in it? Is pic A all blue and B all blue with just one pixel a slightly darker blue (probably not too hard to spot)?

Yes, yes, it's all up to interpretation, isn't it! This way the guy who sells the sand could sell bags of 2,000,000 grains instead of 2,000,001 grains because it's cheaper, and none of his customers could tell the difference. But that doesn't mean 2,000,000 equals 2,000,001 mathematically, 'cause it ain't.
classic77 is offline  
post #113 of 2578 Old 06-28-2007, 05:19 AM
 
PULLIAMM's Avatar
 
Join Date: Aug 2005
Location: Oklahoma City
Posts: 8,516
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 14
Quote:
Originally Posted by m. zillch View Post

"...because after all, the test subjects may be deliberately sabotaging the results and are selecting A or B at random, even though in truth they can quite clearly hear which is which, but they don't want their tester to know that they can tell the difference."

Is that your point?

The type of lying that annoys me is not deliberate sabotage but, rather, that of those with inflated egos who claim to hear things they cannot, simply to make themselves appear superior.
PULLIAMM is offline  
post #114 of 2578 Old 06-28-2007, 05:32 AM
Senior Member
 
classic77's Avatar
 
Quote:
Originally Posted by PULLIAMM View Post

The type of lying that annoys me is not deliberate sabotage but, rather, that of those with inflated egos who claim to hear things they cannot, simply to make themselves appear superior.

So deliberate sabotage is a type of lying that doesn't annoy you?
classic77 is offline  
post #115 of 2578 Old 06-28-2007, 05:53 AM - Thread Starter
AVS Special Member
 
JasonColeman's Avatar
 
Join Date: May 2003
Location: In Stereo
Posts: 3,246
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Of course not, PULLIAMM is the master of deliberate sabotage... Just look at his "well-intentioned" interest in vinyl in the other vinyl/turntable threads...

J.

Custom Built For The Win!
JasonColeman is offline  
post #116 of 2578 Old 06-28-2007, 05:55 AM
PULLIAMM
Quote:
Originally Posted by classic77 View Post

So deliberate sabotage is a type of lying that doesn't annoy you?

It does, but not nearly as much as the other kind.
PULLIAMM is offline  
post #117 of 2578 Old 06-28-2007, 07:39 AM
AVS Addicted Member
Chu Gai
Join Date: Sep 2002
Location: NYC area
Posts: 14,732
Quote:
Originally Posted by classic77 View Post

It's impossible to prove that something doesn't exist!

You're absolutely right. Consider though, classic77, that there'll be two camps here. One that has gotten pitifully tired of all the nulls. One that's selling the difference.

With regards to your visual example, look up what technique was used to discover the planet Pluto.
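For the record, Pluto was found with a blink comparator: an instrument that flips rapidly between two photographs of the same star field so that anything that moved or changed jumps out at the eye. The one-darker-pixel picture example earlier in the thread is the same idea, and in software it reduces to a pixel-wise diff. A minimal sketch in Python (the nested-list "images" are just a hypothetical stand-in for real image data):

```python
def changed_pixels(img_a, img_b):
    """Return the coordinates at which two equally sized images differ.
    Images here are 2-D lists of pixel values."""
    return [(r, c)
            for r, row in enumerate(img_a)
            for c, val in enumerate(row)
            if img_b[r][c] != val]

a = [[0, 0], [0, 0]]
b = [[0, 0], [0, 1]]           # one subtly different pixel
print(changed_pixels(a, b))    # [(1, 1)]
```

An instrument (or a diff) finds the odd pixel instantly; whether a human observer can *see* it is the separate, perceptual question being argued here.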

"I've found that when you want to know the truth about someone that someone is probably the last person you should ask." - Gregory House
Chu Gai is online now  
post #118 of 2578 Old 06-28-2007, 04:49 PM
Senior Member
classic77
Quote:
Originally Posted by Chu Gai View Post

You're absolutely right. Consider though, classic77, that there'll be two camps here. One that has gotten pitifully tired of all the nulls. One that's selling the difference.

With regards to your visual example, look up what technique was used to discover the planet Pluto.

I thought Pluto was no longer a planet?
classic77 is offline  
post #119 of 2578 Old 06-28-2007, 05:01 PM
Senior Member
classic77
Quote:
Originally Posted by Chu Gai View Post

You're absolutely right. Consider though, classic77, that there'll be two camps here. One that has gotten pitifully tired of all the nulls. One that's selling the difference.

With regards to your visual example, look up what technique was used to discover the planet Pluto.

Still you MUST consider this! Who in the hell buys Vinyl to copy to CD to listen to, other than for testing? 0.001% of the population? When you think about it, it's a pretty useless test even if someone could pick whether it's CD or Vinyl every time. At least comparing 320 kbps MP3 to CD is actually useful, because people are using these formats in real-world listening.

Now if in fact Vinyl is only better because, on average, the albums put out are remastered more carefully than their CD counterparts, well, this COULD mean that in real-world listening, on average, the listening experience on Vinyl is better. Not because of the format, but because of the effort put into the remastering process.

One of my favorite recordings on CD is of a 1958 performance of Beethoven's 9th. There's tape hiss in the background but the recording and re-master are so good that you quickly forget about it, getting caught up in the fantastic enveloping sound.

Forget audiophile snobbery; I very rarely see it on these forums. I for one am tired of all the cynical snobbery around here, and I don't think I'm the only one.
classic77 is offline  
post #120 of 2578 Old 06-28-2007, 05:47 PM
AVS Special Member
m. zillch
Join Date: Oct 2006
Posts: 3,832
Quote:
Originally Posted by classic77 View Post

Your questions raise all the issues. Like what kind of music was played? How long did it go for? How well was it recorded? Remastered? All that kind of stuff, so many variables.

Relating to the example, I did say a still picture, not a moving one. What kind of picture is it? Are there lots of blues in it? Is pic A all blue and B all blue with just one pixel a slightly darker blue (probably not too hard to spot)?

Yes, yes, it's all up to interpretation, isn't it! This way the guy who sells the sand could sell bags of 2,000,000 grains instead of 2,000,001 because it's cheaper, and none of his customers could tell the difference. But that doesn't mean that 2,000,000 is equal to 2,000,001 mathematically, 'cause it ain't.

Sorry I missed the "still" picture part.

Here's a hypothetical question for you. Say you conduct an ABX DBT where people are 100% honest (no liars, no cheating, etc.), the test material is "perfect" (say, designed by extraterrestrials/God/whatever), the test setup and methodology are also perfect, and your sample of people is, get this, the entire human population. Everyone gets an exactly 50% correct score. Hey, it's my hypothetical situation, so I get to design it! Here's my question to you: Do you still feel the results are inconclusive because "you can't prove a negative"? Please explain.
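For anyone who wants to put a number on "statistically differentiate": ABX results are normally scored with an exact binomial test against chance (p = 0.5). A minimal sketch in Python; the function name is made up for illustration, and 16 trials is just an example:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of scoring
    at least `correct` out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A guesser is expected to land near 50% correct.
print(abx_p_value(8, 16))   # ~0.60: entirely consistent with guessing
print(abx_p_value(14, 16))  # ~0.002: strong evidence a difference was heard
```

Note the asymmetry this makes explicit: a high score is evidence *for* an audible difference, while a chance score is merely a failure to find one, which is exactly the "can't prove a negative" point being argued here.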

Also, to my way of thinking, the test is to see whether, to human perception, A = B. Not whether A = B in regards to weight, pixels, etc. You agree, right?

In A/V reproduction accuracy, there is no concept of "accounting for taste". We don't "pick" the level of bass any more than we get to pick the ending of a play. High fidelity is an unbiased, neutral, exact copy (or "reproduction") of the original source's tonal balance, timing, dynamics, etc.

m. zillch is online now  
Closed Thread CD Players & Dedicated Music Transports