

·
aka jfinnie
Joined
·
5,444 Posts
I guess the Lumagen hardware architecture is better suited to exact frame rate generation than the PC architecture. Frame drops to me are more objectionable, even if the Envy is able to tonemap a couple of frames slightly better in The Meg.
Yes, the Lumagen has specific hardware - a PLL "flywheel" circuit - that when in Genlock mode is able to lock onto the input clock and create an output clock that is synchronous to it (either the same clock or some divisor / multiplier thereof, as might be required if scaling). Downside is that some display devices aren't tolerant of a clock that is derived in this way, and input switching takes longer. So the out of box behaviour is genlock off. I've always enabled it on my units.

In a typical PC the video input and output clocks are not locked; they free-run and you have to do "something" to reconcile the difference, e.g. drop or add frames. I guess VRR on HDMI 2.1 may provide some opportunity to deal with this in interesting ways, though of course it isn't supported as yet by any of the major HT projectors...
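To illustrate the free-running case, here is a minimal sketch (my own illustration with made-up numbers, not Lumagen's or madVR's actual logic) of the "something" that has to happen: track the phase error between the source frame rate and the display's real refresh rate, and repeat or drop a frame whenever that error exceeds half a frame period.

```python
# Sketch only (hypothetical values): reconcile a free-running display clock with
# the source frame rate by repeating or dropping frames when drift accumulates.

def reconcile(source_fps: float, display_hz: float, seconds: float):
    frame_period = 1.0 / source_fps
    phase_error = 0.0          # seconds of accumulated drift
    repeats = drops = 0
    for _ in range(int(seconds * display_hz)):        # one iteration per vsync
        phase_error += 1.0 / display_hz - frame_period
        if phase_error > frame_period / 2:             # display running slow -> drop a frame
            drops += 1
            phase_error -= frame_period
        elif phase_error < -frame_period / 2:          # display running fast -> repeat a frame
            repeats += 1
            phase_error += frame_period
    return repeats, drops

# e.g. 23.976 fps content on a display that actually refreshes at 23.978 Hz, over 2 hours
print(reconcile(24000 / 1001, 23.978, seconds=2 * 3600))
```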
 

·
Premium Member
Joined
·
806 Posts
If I could buy one this second, I would definitely lean Lumagen. It can do what I'm looking for full stop. Sounds like with the Envy a lot of that is up in the air and may never be included.
If that's ("It can do what I'm looking for full stop") the case, then I would recommend the Lumagen as well.
I'm pretty sure the Envy will activate the additional GPU outputs one day, and perhaps this will allow dual-display 3D, but at the moment that's highly speculative and I definitely wouldn't take a bet on it.
 
  • Like
Reactions: Technology3456

·
Registered
Joined
·
2,675 Posts
If that's ("It can do what I'm looking for full stop") the case, then I would recommend the Lumagen as well.
I'm pretty sure the Envy will activate the additional GPU outputs one day, and perhaps this will allow dual-display 3D, but at the moment that's highly speculative and I definitely wouldn't take a bet on it.
Even if it does, would they be in sync, or does that require it to use the quatro card inside the box? Or, as long as it's dual outputs from the same box, will it sync?
 

·
Premium Member
Joined
·
806 Posts
Even if it does, would they be in sync, or does that require it to use the quatro card inside the box? Or, as long as it's dual outputs from the same box, will it sync?
You might be right that a Quadro is needed. I talked with madshi about the possibility of extending the geometry correction to more than one display output, to allow a geometry-corrected edge-blending stack with multiple projectors, but he said this would most likely require a Quadro card because of their ability to sync multiple outputs.
So I guess the same would apply to your use case.
 
  • Like
Reactions: Technology3456

·
Registered
Joined
·
2,675 Posts
You might be right that a Quadro is needed. I talked with madshi about the possibility of extending the geometry correction to more than one display output, to allow a geometry-corrected edge-blending stack with multiple projectors, but he said this would most likely require a Quadro card because of their ability to sync multiple outputs.
So I guess the same would apply to your use case.
I was one letter off maybe being right :ROFLMAO: . "Quatro" lol. Maybe if you need four of them, then I was right :ROFLMAO: :ROFLMAO:

Thanks for your help. Sounds like Lumagen would work better for what I'm doing. Once I get some other aspects of my HT sorted out, I will see whether the $ is left. But except for that, I would go that direction right now.
 

·
Registered
Joined
·
3,321 Posts
According to Nvidia you don't even need Quadros to sync two outputs; they are only needed to sync two cards, and even that is not accurate anymore.
Given that 3D wasn't even planned at the start and was only added because users asked for it, you should move on from the Envy until the feature is confirmed.

Sync offsets on a PC have been handled for decades now; it is trivial to reach days or more of continuous playback without a dropped or repeated frame. Even setups with a calculated "infinity" are not rare.
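For context on where those "days of continuous playback" figures come from, here is a back-of-the-envelope calculation (my own illustrative numbers, not any particular renderer's readout): the renderer measures the display's real refresh rate against the content's frame rate, and the reciprocal of the difference is the time between corrections. A deviation too small to measure rounds to that "infinity".

```python
# Illustrative arithmetic only (numbers made up): the smaller the measured clock
# deviation, the longer between repeated/dropped frames.

content_fps = 24000 / 1001                  # 23.976... fps film content
measured_refresh = 23.976030                # hypothetical measured display refresh

drift_frames_per_second = abs(measured_refresh - content_fps)
hours_between_corrections = 1 / drift_frames_per_second / 3600
print(f"{hours_between_corrections:.1f} hours per repeated/dropped frame")
```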
 

·
Registered
Joined
·
2,675 Posts
According to Nvidia you don't even need Quadros to sync two outputs; they are only needed to sync two cards, and even that is not accurate anymore.
Given that 3D wasn't even planned at the start and was only added because users asked for it, you should move on from the Envy until the feature is confirmed.

Sync offsets on a PC have been handled for decades now; it is trivial to reach days or more of continuous playback without a dropped or repeated frame. Even setups with a calculated "infinity" are not rare.
How would it be done without a Quadro? I'm not sure if the fact 3D wasn't planned is a bad thing, or if the fact they added it just because one person requested it is a good thing. Maybe if a second person requested it, who knows what could happen. I mean, if they already added it from just one request, that's pretty cool.

What exactly did they add, and what is broken on it? There is right-eye and left-eye passive 3D demuxing. There is top/bottom. Did they add all that stuff, or one but not the others? It would also be cool to find comparisons of the GeoBox G-602 and the Envy from someone who can compare the geometry correction in each. Since the GeoBox is really only worthwhile for either 3D stacks or edge-blending applications, most people have no reason to own one, so it's hard to find comparisons.

Something that slipped through the cracks earlier in the thread: is there any cheap external box that can add custom delays to a signal? Although it might be hard to calculate exactly the delay to set, so it's always better to have it baked in and not have to fool around with it.
 

·
Registered
Joined
·
3,321 Posts
madVR had a lot of rare-case 3D options before the Envy was released. There was no demuxing of both eyes, and there is no word that the Envy can do anything other than frame-packed 3D.

After that, 3D died; I mean it was removed from the Nvidia driver.
Everyone was outraged at Nvidia because they were "always" watching 3D. A couple of months later barely anyone cared anymore, and even the tricks to get 3D working with the new drivers were pretty much ignored by those who "use 3D every day".

Then madshi said something like "3D works again and the new version will work with 3D without hacks"; that never happened and no one even asks about it anymore.
That's the madVR story (3D still works with AMD).
The Envy can do frame-packed 3D, and that's all I know. If you want more, move on, or get a feature confirmation before you consider this device.

The madVR Envy and madVR (or rather, a PC) are not comparable: on a PC I can add whatever shader I want, I can even write my own and it will work; the Envy, on the other hand, is a closed product.
 

·
Premium Member
Joined
·
806 Posts
The Envy can do frame-packed 3D, and that's all I know.
It works with the older 2xxx GPU based Envys - with the newer 3xxx GPUs, 3D is currently broken and madshi is working on a fix right now.
 
  • Like
Reactions: Technology3456

·
aka jfinnie
Joined
·
5,444 Posts
Just bear in mind that @Technology3456 isn't just trying to do 3D frame packed display, they want 2 synchronous individually 3DLUT corrected alternate eye outputs. So although the availability or not of frame packed 3D on the Envy is important, there are significant issues which I think fundamentally make it likely not a workable option at the moment, even if 3D makes it back in. Talk of custom shaders, Quadro cards etc only serve to confuse with respect to the Envy, as none of them are options that are open to the Envy user.

It does seem however to be a problem that could be ripe for trying to be addressed on a custom built PC with appropriate hardware, but that really is a discussion for elsewhere and one I can't add to as I've not done that sort of thing in recent times. However, done that way, no 3D capable output hardware is required as the eye-splitting would happen in software, you would just need to solve the synchronous outputs issue, and the projectors they are using are all 1080p, so the output hardware requirements may not be that high. Whether or not that kind of thing is within @Technology3456 's capability to assemble, I have no idea.
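As a rough illustration of what "eye-splitting in software" means (purely a sketch with made-up buffer shapes, not a feature of any existing player): a top-and-bottom 3D frame is just two pictures stored in one buffer, so separating them and handing each half to its own output is conceptually trivial. The genuinely hard part, which this sketch does not solve, is keeping the two outputs scanning out in lock-step.

```python
import numpy as np

# Sketch only (hypothetical buffer layout): split a top-and-bottom 3D frame into
# separate left-eye and right-eye images, one per projector output.

def split_top_bottom(frame: np.ndarray):
    half = frame.shape[0] // 2
    left_eye = frame[:half]          # top half assumed to carry the left-eye image
    right_eye = frame[half:]         # bottom half assumed to carry the right-eye image
    return left_eye, right_eye

frame = np.zeros((2160, 1920, 3), dtype=np.uint8)    # two 1080p images stacked vertically
left, right = split_top_bottom(frame)
print(left.shape, right.shape)                        # (1080, 1920, 3) twice
```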
 

·
Registered
Joined
·
2,675 Posts
Just bear in mind that @Technology3456 isn't just trying to do 3D frame packed display, they want 2 synchronous individually 3DLUT corrected alternate eye outputs. So although the availability or not of frame packed 3D on the Envy is important, there are significant issues which I think fundamentally make it likely not a workable option at the moment, even if 3D makes it back in. Talk of custom shaders, Quadro cards etc only serve to confuse with respect to the Envy, as none of them are options that are open to the Envy user.

It does seem however to be a problem that could be ripe for trying to be addressed on a custom built PC with appropriate hardware, but that really is a discussion for elsewhere and one I can't add to as I've not done that sort of thing in recent times. However, done that way, no 3D capable output hardware is required as the eye-splitting would happen in software, you would just need to solve the synchronous outputs issue, and the projectors they are using are all 1080p, so the output hardware requirements may not be that high. Whether or not that kind of thing is within @Technology3456 's capability to assemble, I have no idea.
In the Ultimate 3D thread there are a lot of posts of people trying to do things with PCs and running into lots of issues, but most of those posts were 10 years ago, so who knows. The PC I own is one I found the different parts for on PCPartPicker, and then followed the instructions to assemble it, and against all odds, it worked :D But I have no clue about modding software and all that stuff. Ever since I got this PC and a 144Hz monitor, every time I watch a YouTube video there is screen tearing, and it doesn't go away when I enable v-sync, or change the monitor output to 60Hz, or enable g-sync, or with many combinations I've tried. This is just an example of how little I know about making things function correctly on the PC software side of things. But can I follow instructions and guides? Yes, at least until I do what is in the guide and it still doesn't work, or something works differently on my computer than in the guide; then I have to ask for help.
 

·
Registered
Joined
·
2,675 Posts
Probably a dumb question, but has anyone used both a Lumagen and madVR at the same time? Playing a disc off an HTPC, using madVR on the HTPC to do, let's say, noise reduction, then sending the signal out to a Lumagen to do noise reduction a second time, but with its own different algorithm?

I am imagining sifting liquid by pouring it through a sifter: pouring it through just one sifter, with, say, square-shaped holes, will not have as big an effect as also pouring it through a second sifter with round-shaped holes. Each is doing the same function, yes, so it could be redundant, but since each is slightly different, the first would catch some things and the second would catch others. I'm not saying VPs work like this, just explaining the idea behind the question to help people answer.
 

·
aka jfinnie
Joined
·
5,444 Posts
Probably a dumb question, but has anyone used both a Lumagen and madVR at the same time? Playing a disc off an HTPC, using madVR on the HTPC to do, let's say, noise reduction, then sending the signal out to a Lumagen to do noise reduction a second time, but with its own different algorithm?

I am imagining sifting liquid by pouring it through a sifter: pouring it through just one sifter, with, say, square-shaped holes, will not have as big an effect as also pouring it through a second sifter with round-shaped holes. Each is doing the same function, yes, so it could be redundant, but since each is slightly different, the first would catch some things and the second would catch others. I'm not saying VPs work like this, just explaining the idea behind the question to help people answer.
You might make some improvement, and you might make the image worse; you wouldn't know until you tried some combinations. But it is rare for people to chain many processing devices, partly because most only have one device, but also because while processing may address some issue, it can equally add other small artefacts that on their own may not be objectionable, yet, being artificial, may be exaggerated by subsequent processing.

Best to use sparingly. I'm very much of the opinion that in video processing, less is more. Apply just the correction you need to the very specific issue you have.

In your specific example: noise reduction, for the most part, I find unnecessary. The noise in most quality sources befitting a decent HT is not there because the transport is unable to be noise-free; it's usually character from the transfer. The Radiance Pro doesn't have noise reduction (the original Radiance did, I think courtesy of the Gennum chip, but I always had it disabled).
 

·
Premium Member
Joined
·
806 Posts
What I could imagine is letting different VPs tackle different issues.
So, for instance, let each one do the task it is obviously better at, or cover functionality the other VP lacks entirely.
That was discussed as a possible solution in the Envy thread: take some workload off the smaller Envy Pro and let an HTPC with madVR do some pre-processing.
But I wouldn't throw two VPs at the same issue and let both do e.g. noise reduction.
 
  • Like
Reactions: Technology3456

·
Registered
Joined
·
2,675 Posts
Ty both for explaining. Sounds like it's not worth it except maybe with one or two combinations of features. If you have ideas of any you would recommend combining, please post them. I noticed in the Darbee thread a lot of people say it has the best... whatever it does... some people called it the best "upscaling" not sure if that's accurate or not... so maybe that's a feature worth combining.

I also have a separate processing question, hopefully a new and interesting one that if it's not possible now, at least has the potential for big innovation in the future. I posted it in the HDR Duo Plus thread, but I think this might actually be the better topic for it so I'll move it here, although it relates equally to both.

Question: the way the Sim2 HDR Duo Plus has a built-in double stack and uses it to expand the dynamic range by simultaneously projecting a darker image from one projector and a brighter image from the other, is it possible, with the right processors, to do that "custom" with any two matching projectors? To have one processor make a dark tonemap for one projector, and the other processor send a bright one to the other, to replicate this aspect of what the HDR Duo Plus does?

I say "this aspect" because clearly the great stack-alignment that the Duo Plus achieves due to its custom design, as well as the blackened light path for the darker of the two projectors and the zero lens drift, etc, cannot be easily duplicated with a custom stack. I am only asking about the part that involves sending an optomized dark version of the image to one projector and an optomized bright version to the other, using external processors. Might that be possible with madvr or Lumagen, or something else?
 

·
Premium Member
Joined
·
806 Posts
Well, I can't think of a reason why that shouldn't work (if I understand the question right) with two Envys/Lumagens:
You double the signal via an HDMI splitter, feed a separate path into each of them and then let them do their work with different settings.
 
  • Like
Reactions: Technology3456

·
Registered
Joined
·
2,675 Posts
Well, I can't think of a reason why that shouldn't work (if I understand the question right) with two Envys/Lumagens:
You double the signal via an HDMI splitter, feed a separate path into each of them and then let them do their work with different settings.
So you mean send the same HDR, but have one projector set really dark, and the other really bright? Have the projectors do the work?

That could work, but I wouldn't know how to calibrate each above and below the middle ground. What I was asking is, can each processor make a different HDR calculation or gamma curve (or everything together) for each projector? To have the processors control it instead of the projectors? Maybe the processors could calculate how much to offset one projector from the other. Maybe they could take the proper HDR data that would normally work for one projector, then split it into two sets of data, one per projector: one with only the upper 50%, the other with only the lower 50%, or something.

I don't really know how the HDR Duo Plus does it, but however it's done, the question is how to replicate it with external processors and two projectors.
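To make the "upper 50% / lower 50%" idea concrete, here is a toy sketch (entirely my own illustration, not how the Sim2 unit or any processor actually does it) of deriving two complementary tone curves from one target curve: one biased toward shadows for the dim projector and one toward highlights for the bright projector, such that their summed light output approximates the target.

```python
import numpy as np

# Toy illustration only (not Sim2's or any VP's actual algorithm): take a target
# tone curve mapping scene luminance to combined on-screen luminance and split it
# into a "dark" curve and a "bright" curve whose light adds back up to the target
# when the two projectors are stacked.

scene_nits = np.linspace(0, 1000, 11)                  # HDR scene values
target_screen_nits = 100 * (scene_nits / 1000) ** 0.5  # target combined output (toy curve)

dark_share = np.clip(target_screen_nits, 0, 30)        # dim projector handles the low end
bright_share = target_screen_nits - dark_share         # bright projector adds the rest

for s, d, b in zip(scene_nits, dark_share, bright_share):
    print(f"scene {s:6.0f} nits -> dark proj {d:5.1f} + bright proj {b:5.1f} nits")
```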
 

·
aka jfinnie
Joined
·
5,444 Posts
Ty both for explaining. Sounds like it's not worth it except maybe with one or two combinations of features. If you have ideas of any you would recommend combining, please post them. I noticed in the Darbee thread a lot of people say it has the best... whatever it does... some people called it the best "upscaling" not sure if that's accurate or not... so maybe that's a feature worth combining.
Darbee is not upscaling.

Personally, there are no devices I'd recommend combining and "stacking" with features operating on the same aspect of the image.

For a while waaaay-back-when I used a couple of EEcolor boxes as LUT holders with a DVDO Iscan Duo - one for my TV and one for my projector. But that was because the DVDO had the dual outputs but didn't have the 3DLUT function - so they were being combined to make up for a lack of a feature, not a perceived benefit of one feature implementation over another. That system just about worked but was a bit flaky and showed up everything that is bad about overcomplicating your video chain.

Question: the way the Sim2 HDR Duo Plus has a built-in double stack and uses it to expand the dynamic range by simultaneously projecting a darker image from one projector and a brighter image from the other, is it possible, with the right processors, to do that "custom" with any two matching projectors? To have one processor make a dark tonemap for one projector, and the other processor send a bright one to the other, to replicate this aspect of what the HDR Duo Plus does?

I say "this aspect" because clearly the great stack-alignment that the Duo Plus achieves due to its custom design, as well as the blackened light path for the darker of the two projectors and the zero lens drift, etc, cannot be easily duplicated with a custom stack. I am only asking about the part that involves sending an optomized dark version of the image to one projector and an optomized bright version to the other, using external processors. Might that be possible with madvr or Lumagen, or something else?
I'm sure it is technically possible with enough skill, but it seems you're venturing into systems that, for DIY, appear so complicated that there is probably more chance of making the image worse than better without becoming an expert in the field. This seems a common theme...!

My understanding of that particular system is that success depends on a very complex and sophisticated mechanical setup (which you are quick to dismiss), expert calibration, and crucially the two projectors are NOT identical, so it isn't really applicable to two standard common-or-garden projectors, and you can't really brush that aspect aside. They have different lenses at least, one of which is tweaked for contrast. Without there being an optical difference between the two projectors I believe there's no point in this setup.

Obviously, there is nothing to stop you splitting an HDMI signal, doing two different tone maps in different video processors, and sending them to two projectors. But what have you achieved if they're not optically different?

Perhaps something similar might be achieved if you had two projectors with moveable irises (such as two JVC family units) and set each iris differently. But stacking two JVCs is definitely not for the faint-hearted, as they are known to have some drift and slop in their lens mechanisms.
 

·
Premium Member
Joined
·
806 Posts
So you mean send the same HDR, but have one projector set really dark, and the other really bright? Have the projectors do the work?

That could work, but I wouldn't know how to calibrate each above and below the middle ground. What I was asking is, can each processor make a different HDR calculation or gamma curve (or everything together) for each projector? To have the processors control it instead of the projectors?
Yes, that's what I mean. The processors get the same input signal but have different settings and send a different image to the projector attached to each.
You need to check whether the specific feature set is available for the setup you have in mind, but of course you can split the incoming signal into two separate paths with two processors, each with different settings for the projector attached to it.
 
  • Like
Reactions: Technology3456