Originally Posted by TuteTibiImperes
The first is the measurement of how you hear, which is obtained from sticking the microphones in your ears. This is the personalized part.
The second is the measurement of how a particular room and set of speakers sound. This part seems like it should not have to be personalized; it could be obtained through a mic at a set position in that room.
By taking the data from the room/speaker measurements and filtering it through the lens of the personalized ear measurements, why wouldn't it be possible to recreate what it would sound like if you had actually been measured in that room? From reading on the Kickstarter page, it sounds like that's something they're going for as an option with this.
Taking it a step further, it seems like it could be possible to split it up into three parts: your own ear measurements, measurements of particular sets of speakers captured anechoically, and measurements of various rooms.
That would really be the ultimate application for this tech - an automated calibration that measures your own ears/head-related transfer function, and can then apply that to any set of virtual speakers and rooms you want to layer on top of it.
For example, you could measure your own hearing/head response, then download speaker data files to create a virtual surround system of B&W 802 D3s, Focal Utopias, Magnepan 20.7s, JL Gothams, or whatever other speakers/subs were in the database, then download the room response file for Skywalker Ranch's Stag Theater, the El Capitan, or whatever other room/theater you wanted, and have the Smyth unit apply the proper algorithms to make it sound like you're there listening to those speakers in that setting.
It doesn't sound like it's quite there yet, but possibly moving in that direction, and certainly pretty cool even where it is right now.
I believe your thoughts here are absolutely on target, both as to how the math behind all of this really works and as to the true potential for creating a PRIR from its underlying raw elements.
While the current "analog" method of PRIR creation captures the sum total of how the calibration microphones "hear" the room, speakers, and equipment, as filtered by your own head and ears, in truth there are three separate and discrete components working together to produce this single "analog" result:
(a) your own hearing/head response to any sound, which is truly unique to you (i.e. your corrective "prescription ear-glasses", giving you "20/20 reference hearing" no matter what your auditory characteristics are),
(b) the "pure" anechoic characteristics of the electronics, cabling and speakers which are reproducing the sounds you are hearing, and
(c) the "colored" characteristics of the listening environment room/theater/auditorium: its size and shape; floor, wall, and ceiling materials and treatments; sound baffles; horizontal/vertical speaker placement around the listening position; etc.
I think you're absolutely right that the above three "independent tracks", obtained separately and uniquely, can be "mixed together" digitally, producing a PRIR that reflects precisely how you and your own head/ears WOULD HAVE HEARD sound from those electronics and speakers in that listening environment, had you actually been there with calibration microphones in your ears and created a PRIR the old-fashioned "analog" way.
Seems completely plausible that (b) and (c) can be "acquired" or bought independently, and then "mixed" together with (a), your own "prescription ear-glasses lens filter" that precisely describes your personal auditory system. Voila! A PRIR is born for you.
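To make the "mixing" idea concrete: if each of the three tracks is treated as an impulse response, combining them is just convolution, and convolution is associative, so the order of mixing doesn't matter. This is only a toy sketch with made-up impulse responses; it is not Smyth's actual PRIR format or algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical impulse responses (all values illustrative, one channel only):
fs = 48000
hrtf = np.zeros(256); hrtf[0] = 1.0; hrtf[40] = 0.3        # (a) personal head/ear response
speaker = np.zeros(512); speaker[0] = 0.9; speaker[10] = -0.2  # (b) anechoic speaker response
room = np.zeros(4096); room[0] = 1.0; room[2400] = 0.4     # (c) room reflections/reverb

# Convolution is associative, so the three "tracks" can be measured
# separately and mixed in any order into one composite response:
composite = fftconvolve(fftconvolve(speaker, room), hrtf)

# Rendering: convolve the dry program audio with the composite response,
# as if a single "analog" calibration had been done in that room.
audio = np.random.randn(fs)          # one second of test signal
binaural = fftconvolve(audio, composite)
```

The associativity is the key point: it's why (a), (b), and (c) can live in separate files, sourced from different places, and still combine into one result.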
The math involved in a PRIR clearly makes this possible. I had Smyth correct my AIX 7.1 PRIR because Lorr had mistakenly set one of the speaker angles incorrectly in the A8 Realiser setup prior to my calibration session. He keeps his own personal copy of the PRIRs he measures for people, and in reviewing mine he noticed that one of the speakers was set wrong. In theory, playback through this PRIR would have simulated that speaker at the incorrect virtual location. Smyth was able to "reverse out" the mathematical effect of the incorrect speaker-angle setting in the PRIR, accomplishing a "virtual relocation" of the speaker over to where it really was relative to where I was sitting. So now sound coming from that "virtual speaker" in my headphones appeared to come at me from the proper angle. Amazing! PRIR post-processing.
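One way to picture that "reversing out" operation: if the measured response factors (in the frequency domain) into a head-response-at-some-angle term times everything else, then dividing out the factor for the mistaken angle and multiplying in the factor for the true angle relocates the virtual speaker. This is purely an illustration of the math with random placeholder signals, not Smyth's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
rest = rng.standard_normal(n)    # hypothetical speaker + room portion
wrong = rng.standard_normal(n)   # hypothetical head response at the mistaken angle
right = rng.standard_normal(n)   # hypothetical head response at the true angle

size = 2 * n                     # zero-pad so circular convolution equals linear
measured = np.fft.rfft(rest, size) * np.fft.rfft(wrong, size)

# Deconvolve the wrong-angle factor, convolve in the right one:
corrected = measured / np.fft.rfft(wrong, size) * np.fft.rfft(right, size)

# Back to an impulse response (a real system would regularize the division
# so bins where the spectrum is near zero don't blow up):
corrected_ir = np.fft.irfft(corrected, size)
```

The point is simply that convolution is invertible in principle, which is what makes this kind of PRIR post-processing mathematically sensible.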
It's completely reasonable that the above three "tracks" could similarly be captured separately, mixed together, and even manipulated as desired (e.g., adding or removing "room reverb", which is already possible when playing back through a given PRIR).
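As a toy example of one such manipulation: "removing room reverb" could amount to zeroing out the late tail of the room track before it is mixed in, leaving only the direct sound and early reflections. All timings and amplitudes here are illustrative.

```python
import numpy as np

fs = 48000
room = np.zeros(fs // 2)               # half a second of hypothetical room response
room[0] = 1.0                          # direct sound
room[int(0.020 * fs)] = 0.5            # early reflection at 20 ms
room[int(0.250 * fs)] = 0.3            # late reverb at 250 ms

cutoff = int(0.080 * fs)               # keep only the first 80 ms
drier_room = room.copy()
drier_room[cutoff:] = 0.0              # late tail removed; direct sound
                                       # and early reflections survive
```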