
Processor Blind Listening Test

38K views 503 replies 59 participants last post by  SteveCallas 
#1 ·
Yesterday, a few other enthusiasts and I got together to conduct a processor blind listening test. Our goal was to determine whether or not the different DACs and analog preamp stages of different receivers and pre/pros can affect sound quality. Amplification was not being tested, just processor sound quality, so a separate amp was used to do all of the amplification. The units we used, in my opinion, were a good representation of the various levels of product that most of us in this hobby will consider.


Processors:

Pioneer VSX 1014 - Essentially the same unit as the newer 1015, this receiver has become the standard for entry-level receivers. Plenty of features, decent amp section, and a reputation as being great for movies and not so great for music.

Harman Kardon AVR 635 - Not a high-end receiver by any means, but definitely regarded as a step up from entry level. Tons of features, a beefy amp section, and a reputation as one of the top receivers in terms of sound quality.

Audio Refinement Pre-2DSP - A dedicated AV preamp processor that is regarded as another step up from receivers. While this unit is not the most expensive and doesn't have the longest list of features, it is regarded as one of the most musical pre/pros, with great sound quality.


Amplifier:
PS Audio HCA 2 - A quality 2-channel amp that is also well regarded for its sound quality.


Speakers:
Totem Acoustics Forrests


CD Player:
Panasonic DVD S77


Cables:

RS Gold analog stereo, Dayton Audio digital coax, DIY dual 14 gauge twisted pair speaker wire


The participants were Jon, his girlfriend Gudrun, my friend Tyler, and myself. ---k---, a member from htguide.com, and another professor from Purdue were all scheduled to come as well, but they wimped out... buncha wimps. Again, since we were only interested in testing processing, we used the digital coax output from the cd player to each unit and then the analog preouts from each unit to the amplifier. All of the equipment (aside from the speakers, of course) was kept in a second room with the doors shut, so neither it nor the moderator was visible to the listeners - the speaker cables ran under the door. Jon will post a few pictures of the setup when he replies to this thread. We made sure to eliminate all variables that may affect the sound that aren't related to the actual processing, so EQ, tone controls, distance settings, subwoofer functions, etc., in each processor were turned off. The units simply had to decode the incoming digital stream and send full range signals out to the speakers with no post processing. We didn't use a subwoofer because each unit may have different crossover slopes or bass management methods, which could affect what we heard in ways not attributable to the DACs or analog preamp stage. We didn't test surround sound quality because I have already had discussions with an algorithm engineer from Dolby about whether different DSPs can affect what info is steered to different channels or whether they can affect sound quality.


The units were calibrated to each other by using a test cd with a wide band pink noise tone and a digital RS meter mounted on a tripod placed at the main seating position. We plugged in the L channel output from one unit and adjusted the master volume until we registered 66 dB from the tone. Then we unplugged the L channel, plugged in the R channel, and adjusted the individual R channel settings until it also read 66 dB. Then we plugged in both channels and measured the output, which in Jon's room was 70 dB. We did this with each unit until we had identical output levels.
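As a back-of-the-envelope check on that calibration (my own calculation, not part of the original procedure), the expected combined level of two equal sources can be sketched with a quick dB power sum:

```python
import math

def db_sum(*levels_db):
    """Combine uncorrelated sound sources by summing their acoustic power."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# Two uncorrelated 66 dB channels should sum to roughly 69 dB (+3 dB);
# fully correlated (identical) signals would sum to 72 dB (+6 dB), so the
# measured 70 dB in-room is plausible for pink noise plus room effects.
combined = db_sum(66, 66)
print(round(combined, 1))  # → 69.0
```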


Our test consisted of two parts. The first part was to test whether or not there were sound quality differences between units - not which unit we preferred, not what differences we noticed, JUST whether or not we heard differences. The second part, which was to be conducted only if our first test statistically proved to us that differences did exist (based on a 70% accuracy threshold), was to test which unit had the best sound quality based on our preferences. We wrote out the three combinations of units - HK vs PI, PI vs AR, and HK vs AR - on three strips of paper, folded them up, and placed them in a basket. Three of us reached into the basket and selected a piece of paper, which we then put in our pockets. When it was someone's turn to be moderator, they opened up the paper and were allowed to use only those two units for playback during their test. This way, nobody knew which pieces of equipment were being tested when. The fourth person (who won this spot through rock, paper, scissors) had free rein to use all three units during their testing.


The procedure for each test is written down on our log sheets, so see attached. If anyone has any questions or if it is not clear enough, feel free to ask. I'd write it all out here again, but then this post would be nearly twice as long, and none of us want that. So just take a look at the attached log sheets, maybe zoom in a bit, and read the procedure. One difference is that we did not have to start on a silent track; as you'll read later, I inserted 3.5 seconds of silence at the beginning of each track, so the mod just had to cue up the track number and press play. The audible difference part consisted of four tests with three listeners and one moderator for each test, and the moderator changed for each test. There were three listening positions (left, middle, and right), and the listeners rotated their seats between tests so everyone got a chance at each seat. Listeners were not allowed to speak of the test or share any impressions at all until all four tests were done. The moderator could not be seen or heard in the other room behind the doors. We all did a few dry runs as both listeners and mods so that everyone was clear on how to conduct the test - it is difficult to explain, but very easy and intuitive in practice.


The songs we chose for the audible difference testing were selected back in January. Each participant chose a couple of songs that they both enjoyed and were confident that they knew very well. They then sent me these songs, and I isolated a ~35 second clip from each song that we agreed captured its essence and covered a range that we felt would make any audible differences easy to distinguish. I compiled these clips, in addition to the full songs, onto CDs and sent them out to each participant in early February. By doing this, each participant was able to listen to and become very familiar with the songs and exact clips we would be using for this audible difference testing for over three months. Basically, by the time the test finally took place, the participants knew the samples through and through. A note of interest is that I received the HK 635 earlier this week and found that it will mute the first second or so of playback from a digital stream, so at the last minute, I had to pull up the clips again and add 3.5 seconds of silence to the beginning of each clip. In doing this, I eliminated any chance of this oddity tipping us off as to whether the HK was being used. Taking this into consideration, I think we successfully covered all aspects of the test that could have possibly kept it from truly being blind.


Before we get to the results, I just want to make some points clear so we can avoid some of the nastiness that resulted from our last test. Whether you agree or disagree with our results is fine, just don't try to convince us otherwise, as we just spent a 10-hour day testing. We aren't trying to pass off our test as a given fact in every single circumstance for every single person, but our results are fact in this listening room with this equipment with these people. If you disagree with some part of the methodology, that is fine, just politely express it as a logical point and I will address it. If you don't agree with our results, DO NOT try to find imaginary faults in our test to justify yourself.


The raw results from the audible differences test showed that as a group, we were correct 61 times out of 120, or 51% accuracy. To break it down by comparison:


HK vs PI - correct 21 out of 39, or 54% accuracy

PI vs AR - correct 15 out of 36, or 42% accuracy

HK vs AR - correct 19 out of 30, or 63% accuracy


To break that down further, these are the results we got when removing the trials in which the moderator chose to use the same unit twice in a row. In other words, these results are purely of the direct comparison of switching from one unit to the other, and because of that, the most significant in our opinion.


HK vs PI - correct 18 out of 33, or 55% accuracy

PI vs AR - correct 9 out of 24, or 38% accuracy

HK vs AR - correct 10 out of 18, or 56% accuracy


To examine it a different way, here are the results by person:


Jon - correct 14 out of 30, or 47% accuracy

Gudrun - correct 16 out of 30, or 53% accuracy

Tyler - correct 12 out of 30, or 40% accuracy

Steve - correct 19 out of 30, or 63% accuracy


No combination resulted in 70% or greater accuracy, and no single person achieved greater than 70% accuracy. Because of this, and because we agreed afterwards that it was very difficult to pick out anything to base a decision on, we did not continue on with the sound quality preference testing.
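As a rough sanity check on these numbers (my own back-of-the-envelope calculation, not part of the original analysis), an exact one-sided binomial test shows how likely it is to score at least this well by pure guessing, assuming independent trials with a 50/50 chance on each:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """One-sided binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Group total: 61 correct out of 120 trials
print(round(p_at_least(61, 120), 2))  # ≈ 0.46 — indistinguishable from guessing

# Best individual score: 19 correct out of 30
print(round(p_at_least(19, 30), 2))   # ≈ 0.10 — still not significant
```

So even the best individual result is comfortably inside what coin-flipping would produce.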


The closest we came to statistically proving there were audible differences was with the HK vs the AR, using the song Arousing Thunder by Grant Lee Buffalo, which has some bass from a drum being struck throughout the clip. As a group, we were correct 12 out of 15 times, or 80% accurate. Tyler had actually taken down a few notes during this test, and on Trials 3 and 5 he jotted that the second playback had heavier or deeper bass - the HK was used for the second playback on both of those trials. Later in the evening, we did a quick test of the HK using its internal amplification vs the AR using the PS Audio amp, and I also noted that the first playback had more punch to the bass - it turned out to be the HK as well. Unfortunately, I don't know how much significance we can draw from only 15 samples on that combination with that song. Our collective score of the HK vs the AR never got higher than 63%. If we had more time, we could have examined this further, but it was already into the night and we needed to refit the baseplate on Jon's kickass subwoofer.
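For that 12-of-15 run specifically, the probability of guessing at least that well by chance works out to under 2% (my own quick calculation, with the caveat Steve raises: the comparison was singled out after the fact from several song/unit combinations, so the number is optimistic):

```python
from math import comb

# P(X >= 12) for X ~ Binomial(15, 0.5): exact tail sum over the top 4 outcomes
tail = sum(comb(15, k) for k in range(12, 16)) / 2**15
print(round(tail, 3))  # → 0.018
```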


To be honest, the results were pretty surprising to me. Had you asked me prior to a few months ago whether DACs made a difference, I would have said no. But in doing my research for a new receiver purchase, I came upon several firsthand user reviews from this website and others, some from users whose opinions I really respect, claiming that different DACs truly do make a difference. So in the last few months, I thought for sure we would be able to identify differences... I guess not. If we were able to measure level matched outputs of the same clips from two units on a computer screen, we might notice that small differences do exist, but in actual practice, they were not readily discernible. Will this test affect my purchasing decision as I claimed it would for months leading up to it? Yes. My HK 635 has a couple of glitches and needs to go back. Since I will be using a Carvin hd1800 to power my mains, this test proves to me I can buy a less expensive receiver and still get the same sound quality from the processing. A Pioneer 1015 might be the ticket.


UPDATE: A Pioneer 1015 was NOT the ticket lol. Yamaha HTR 5890 did the trick.


As a side blind test, one that I have always wanted to do but never got around to, mainly because I hadn't drunk a soda in years, we tested Pepsi vs Coke. There was a pretty big difference between the two that we all picked up on: one had a lot more carbonation and a hint of citrus, the other was sweeter and smoother, almost more syrupy tasting. The only problem was that Gudrun and I assumed Coke was the more carbonated soda, so we were incorrect, but it still stands that the difference between the two is quite evident.


Big thanks to Jon for hosting and providing us with a nice spread of food. And Jon, though I said it like 20 times yesterday, that audio rack looks great! I want to get started on mine asap.

 
#3 ·
Steve, outstanding work. Greatly appreciated.


I find the H/K "bass-thumping" thing quite interesting. Sounds like the H/K may have emphasized the bass a little bit. It would be interesting to use a signal generator to confirm that. For example, you could record sine tones at various frequencies to a CD and use the RS meter to measure how loudly each pre-amp reproduces each tone.
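The sine-tone suggestion above is easy to try at home. As a sketch (file names, frequencies, and durations are my own choices, not from the thread), the Python standard library alone can generate test-tone WAV files suitable for burning to a CD:

```python
import math
import struct
import wave

def write_sine_wav(path, freq_hz=50.0, seconds=5, rate=44100, amplitude=0.5):
    """Write a mono 16-bit PCM sine tone, e.g. for burning to a test CD."""
    n_frames = int(seconds * rate)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)     # mono
        wf.setsampwidth(2)     # 16-bit samples
        wf.setframerate(rate)
        frames = b"".join(
            struct.pack(
                "<h",
                int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / rate)),
            )
            for i in range(n_frames)
        )
        wf.writeframes(frames)

# A few bass-region tones for comparing pre-amp output levels with an SPL meter
for f in (31.5, 50, 80, 125):
    write_sine_wav(f"tone_{f}hz.wav", freq_hz=f)
```

Playing each tone through each processor at the same master-volume setting and reading the meter would show whether one unit really is hotter in the bass.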


Again, great work.
 
#4 ·
Great post, thanks. Your conclusion supports Stereo Review's conclusion when they did a double-blind test on the sound quality of amplifiers back in the early '80s. They could not reliably pick out a Pioneer receiver from a Mark Levinson amplifier. An amplifier is complex whereas a wire is simple; draw your own conclusions.
 
#6 ·

Quote:
I find the H/K "bass-thumping" thing quite interesting. Sounds like the H/K may have emphasized the bass a little bit. It would be interesting to use a signal generator to confirm that. For example, you could record sine tones at various frequencies to a CD and use the RS meter to measure how loudly each pre-amp reproduces each tone.

The bass wasn't necessarily louder with the HK; I felt it had slightly more impact. Tyler's exact notes are "#2 deeper" for TRIAL 3 and "#2 heavier" for TRIAL 5. Mind you, this was a very, very small difference. It still seems kinda surprising to me that everything sounded virtually identical, but when I stop to think of what a DAC is designed to do, as long as it is working correctly and the sampling rate is the same between different DACs, there really is no reason they should sound different.


Thanks for the kind words guys.
 
#9 ·
After analyzing the results of the first test, we couldn't go into the second test with confidence, as we would have just been guessing then. We all had confidence in the testing procedure, so we all openly accepted the conclusion that there are no audible differences as a result of the processing of these three units.
 
#11 ·

Quote:
As a side blind test, one that I have always wanted to do but never got around to, mainly because I haven't drank a soda in years, we tested Pepsi vs Coke. There was a pretty big difference between the two that we all picked up on, one had a lot more carbonation and had a hint of citrus, the other was sweeter and smoother, almost more syrupy tasting. Only problem was that Gudrun and I assumed Coke was the more carbonated soda, so we were incorrect, but it still stands that the difference between the two is quite evident.

You DRANK soda?


Not just one, but BOTH?


Who are you and what have you done with SteveCallas?!?!
 
#12 ·

Quote:
Originally Posted by SteveCallas /forum/post/0


The bass wasn't necessarily louder with the HK; I felt it had slightly more impact. Tyler's exact notes are "#2 deeper" for TRIAL 3 and "#2 heavier" for TRIAL 5. Mind you, this was a very, very small difference. It still seems kinda surprising to me that everything sounded virtually identical, but when I stop to think of what a DAC is designed to do, as long as it is working correctly and the sampling rate is the same between different DACs, there really is no reason they should sound different.

Well, I don't believe that all DACs will sound the same, because there are some really really cheesy ones out there to be had. The Wikipedia page on Digital-to-Analog converters does a decent job of providing a summary on the operation and performance-defining characteristics of a DAC:

http://en.wikipedia.org/wiki/Digital...alog_converter


What I will agree with, however, is that once you've achieved some basic level of performance, any further improvement in the performance of the DAC is no longer audible, or is only audible under ideal circumstances to a very small number of people. And I believe that to be the case with all three of the processors you tried. If the HK truly did sound different, I would think it was built in or a system glitch rather than a characteristic of the DAC it is using.
 
#16 ·
Nice write up. I wish I could have been there. Unfortunately, I had to be at a charity walk for one of my best friends.


The results are somewhat surprising to me also. Just like you, Steve, I have long thought that once you reached a specific level of performance, the differences from minor improvements would be hard to hear. But reading about how specific people (who I won't mention now) swear by their Benchmark DAC-1 had made me start to really reconsider this. Maybe there is a difference???


I know that Jon has treated his room. Maybe he wants to speak about that a little so that we can understand whether the room could have corrupted your tests.
 
#17 ·
Unlike a self-proclaimed golden ear, I have and will continue to post the results from my physical hearing evaluations (needed because I occasionally spend a few days in one of our manufacturing plants) on this board.
I posted the first one back in my Funny Story Ascend thread, and I had a second one not too long ago. When I get back to work next week, I will post the results from that one.

Quote:
Who are you and what have you done with SteveCallas?!?!

I know, I broke my ~6 year streak. If it's worth anything though, I only had about 4 ounces of each.....and it was in the name of science



---k---, we will be planning another one in the not too distant future (which in our case will probably be another 6 months lol) where we will compare the Ascend 340s to his Totem Forrests to the Modula MTs he is beginning to build - blind and level matched of course. If only I lived a bit closer, say 2 hours instead of 4.5, I would be willing to bring my Boston VR3s as well. You gotta make it to that one.
 
#18 ·
Hi Folks,


My turn to chime in here. First, I'll say that we had a really great time.
I'm very glad everyone came over. And if any of you out there ever wonder about one type of audio gear versus another, I highly recommend doing some testing yourself. It's quite fun, getting a bunch of enthusiasts together.


At the end of the day, all the gear sounded really similar. We each said that we had a difficult time picking out any differences. And the statistics seem to show that we weren't able to pick things out too well.


Of course, our results are only valid for the specific gear and conditions under which they were tested. Please refrain from telling us that our grand conclusion that all receivers and processors sound the same is not valid, because we're not making that claim at all. The last time we did a blind amp test:
http://www.avsforum.com/avs-vb/showt...ght=blind+test

Things got a little nasty. So it would be nice if we could keep this discussion civil. The usual disclaimers apply: your mileage may vary, not available in all locations, do not try this at home, some side effects may occur, professional driver on a closed course, allow 6-8 weeks for delivery...


One clarification from Steve's first post: Steve, Tyler, and I had heard all the songs and clips prior to the test. Gudrun had not. So we had one listener without prior knowledge of the music and others with it.


Maybe we can get someone with a better knowledge of statistics (outlier2- was that his name?) to comment on our numbers.


For what it's worth, we're not a group of people who think that all audio gear necessarily sounds the same. For example, we all think that speakers sound incredibly different. From one manufacturer to another, for example, we have all found speakers to vary quite widely in characteristics. So we're not out trying to prove that cheap gear is just as good as more expensive gear.


I think the test conditions were pretty good. We had all the receivers, etc. in a separate room from the speakers, with a closed door between. So we really had no hint as to what was going on as far as the switching went. I also found it good to use so many different song samples for the testing, to help tease out any potential differences with new songs. And it was interesting that all 3 listeners seemed to think that one song in particular (Arousing Thunder by Grant Lee Buffalo) showed more differences. I was moderating on that test, so I didn't get to hear it myself. We all agreed that quicker switching times might be nice. Easier said than done, though.


That was fun! Maybe we'll be doing more testing in the future. I sense a receiver versus separates test on the horizon...



-Jon
 
#19 ·

Quote:
Originally Posted by SteveCallas /forum/post/0


Later in the evening, we did a quick test of the HK using it's internal amplification vs the AR using the PS Audio amp, and I also noted that the first playback had more punch to the bass - it turned out to be the HK as well.

For what it's worth, I also got to have a blind listen to this test. It's very quick and very preliminary, so I wouldn't make too much of this. But it seemed to me that one setup of the two had a little more bass thump and a little more clarity overall. Just a teeny bit, but I kind of thought it was there. When they told me which it was... I thought the AR and PS Audio combo was the one with the better clarity and more bass. Steve and Tyler thought it went the other way. So the only thing I would take from that is we need a receiver versus separates test in the future.
 
#20 ·

Quote:
Originally Posted by ericgl /forum/post/0


Thanks for your efforts and report Steve. I 'upgraded' from a HK 240 as a pre-pro to a Pre-2 a while back, and while I was hoping for you to report a big improvement, I suspected there wouldn't be any.


Good work.

Interesting. So can you discuss anything you noticed, or did not, in the switch from the HK to the Pre-2?
 
#21 ·

Quote:
Originally Posted by ---k--- /forum/post/0


I know that Jon has treated his room. Maybe he wants to speak about that a little so that we can understand whether the room could have corrupted your tests.

The room treatments haven't come along too far. Just been busy. The room is about 13'x19'. There are wide pocket door openings to one other room and one hallway. (And another, but that was closed for the test.) There is a 9x11 carpet in the middle. Otherwise, it's wood floors and ceilings and walls of plaster (or something). I have 2 room treatment panels hanging. Not quite at the first reflection points, but as close as I can get them. Each panel is 1' wide and 5' tall, made of 3" thick Owens Corning rigid 703 fiberglass, in 1x4 pine frames in which I have cut about 22 2.5" circles. I've got plans to add 9 more panels to the room. But first I want to build 3 speakers (one center and 2 rears).
 
#22 ·

Quote:
Originally Posted by Bhagi Katbamna /forum/post/0


Great post, thanks. Your conclusion supports Stereo Review's conclusion when they did a double-blind test on the sound quality of amplifiers back in the early '80s. They could not reliably pick out a Pioneer receiver from a Mark Levinson amplifier. An amplifier is complex whereas a wire is simple; draw your own conclusions.

Interesting. Do you have a link to that? Or info to help me dig up that old issue? I subscribe to Stereophile now. And for now, they seem to be very much against blind testing. Maybe because of the result you mention?
 
#23 ·
Yesterday, myself and a few other enthusiasts got together to conduct a processor blind listening test.


A confirmation of old news



This is linked to the amplifier sonic debate: two matched amplifiers yield the same type of percentages in blind testing. Zero out processing, match the electronics, and watch people fail miserably.


Snip.... {The word Steve in the post below is not a reference to the Steve here}


Richard Clark;
actually guys i can't take claim to be the first to make this a scientific double blind test--------as far as i know that honor goes to David Clark of DLC design-------he's the same guy that also designed the DUMAX speaker testing machine that is so popular---------David first did this in the early 80's and published the results in several of the popular audio mags of the day--------naturally it went over like a lead balloon--------Steve is like 99% of the audio world------they honestly believe that amps create sonic character independent of response, distortion, filters, etc--------they believe that sonic character is the result of "something beyond our ability to measure" and that specs are inadequate to describe the sound quality of an amp--------i do believe this is the case with speakers--------specs are useful but i strongly believe no one can totally evaluate a speaker with only measurements------and i also believe they all sound different---------i do however believe i can define the SQ ability of an amp with measurements---------if there were something beyond our ability to measure in amps we should be able to prove it with a careful double blind test---------think most of the world doesn't believe a watt is a watt?????--------read the mag reviews of amps--------even our mag------sonic virtues are attributed to amps EVEN IF THEY ARE SIMPLE GAIN BLOCKS----------once again i'll relate the test conducted by David Clark at the Los Angeles AES (Audio Engineering Society) show in the late 80's---------David did this test with the help of the Absolute Magazine over a 3 day period---------over 200 professional audio engineers took the test----------the test amps were straight gain blocks (basic amps with no signal filters and/or processing)--------everyone was confident they could easily pass the test------the Absolute Mag was there in support of their journalistic claims and subjective test reports and approved the test set-up and procedures---------the amps were 
a generic Crown PSA-2 (about 1K at the time), a class A Threshold (about 10K at the time), and an OTL tube amp (about 15K at the time)--------final results at the end of the sessions????--------49/51---------as an additional note they also tested exotic Monster wire against solid 12ga THHN like is used to wire a house--------results of the wire????------49/50 as well---------did anyone learn anything???--------not at all--------the Absolute Sound decided that tests were invalid since they forced one to use a different side of the brain compared to when they are "enjoying music"--------there were also a lot of other creative excuses----------when i got involved in car audio the "amp sound" thing was at least as strong as it is today-------at the time dave and i were doing seminars and i felt it would be of good value to teach installers that an amp wasn't gonna make or break the sound of their install--------the highest regarded amp of the day for "sound quality" was soundstream---------it was advertised as a class A amp with sweet sonic virtue-----no one ever called BS on the class A claim not even the mags-------it was as class A/B as every other amp on the market----------and nearly everyone believed it---------now remember THIS WAS BACK WHEN 90% OF CAR AMPS WERE STRAIGHT GAIN BLOCKS----------finally after doing multiple seminars and fine tuning my test (by tuning i mean setting the matching thresholds based on listeners abilities) i offered the 10k prize to emphasize how strongly i believed this and how fundamental to audio education I felt it was----------over the years car amps evolved into a market where 95% of them have filters and processing devices and folks seem to feel the test is proof of the obvious since they think i am trying to show that amps sound the same with the processing bypassed----------this guy Steve is from the part of the audio world where they maintain that amps sound different even if they have flat response and similar 
power-----------why do i want $10K for wasting a weekend teaching someone something they will never admit even if they lose the test----------------ten years ago I would have jumped at the chance to embarrass this guy--------but no more---------thousands of losers later and equally as many excuses for failing "I TIRE OF THIS" ...............RC


archived data;
http://www.carsound.com/forum/forumdisplay.php?f=16
 
#24 ·
The SoundStream amps... they were also the ones that came out with a "Low Impedance" switch to handle low impedance drivers, because amplifiers capable of handling low impedance drivers were da snitz. It was later revealed that the switch was simply a current limiter that prevented the amplifier from self-destructing when presented with a low impedance load.


In other words, that SoundStream amplifier *LOST* power while driving a low impedance load, unlike other true high current amplifiers capable of driving low impedance loads, like the Orion HCCAs.


SoundStream does make good quality stuff, but they have done so many of these underhanded things that I am happy to have never owned a single piece of their gear during my years of competing in local IASCA sound-offs.
 
#25 ·
I have to agree with the above.


An excellent offering, but highly flawed and unable to be extrapolated to any pre/pro vs. receiver comparison in terms of providing the best surround experience.


I think it could be done with a surround track as the test media, but it would be difficult. Music may not be the best media when trying to determine which pre/pro does best with movies.


Ideally, if you had a reference amp / surround speaker setup and some trained listeners, you'd be in better shape. But what a pain in the ass.


First we have to define what makes an ideal surround experience.


1. Ability to create a holosonic environment

2. Dialog intelligibility at high levels and with concomitant loud action

3. channel separation

4. Big soundstage

5. Dynamic range (now this could be measured).

6. etc.


Now, most receivers have low-powered amps, which would leave some speakers with less than ideal dynamics or lacking headroom. At least with a pre/pro, you can match your speakers ideally with the amp. I can only imagine the heat in a receiver with 4 of my large QSCs and 2 Crown K2s at full bore. I am sure it would fry the preamp and cause audible distortion...
 
#26 ·

Quote:
Originally Posted by thebland /forum/post/0


An excellent offering, but highly flawed and unable to be extrapolated to any pre/pro vs. receiver comparison in terms of providing the best surround experience.


I think it could be done with a surround track as the test media, but it would be difficult. Music may not be the best media when trying to determine which pre/pro does best with movies.


First we have to define what makes an ideal surround experience.


1. Ability to create a holosonic environment

2. Dialog intelligibility at high levels and with concomitant loud action

3. channel separation

4. Big soundstage

5. Dynamic range (now this could be measured).

6. etc.

Amen. While the results of Steve's test can be helpful in a stereo playback context, they become irrelevant for surround sound purposes. I know from experience that HK's EZ Set EQ system has helped lift my surround sound experience to a level greater than any high dollar separates I've owned. Movie playback is completely engrossing and bass integration is seamless. These (and thebland's other bullet points) are things that cannot be judged using stereo playback, and the results of Steve's test ignore these very important factors.
 