

·
aka jfinnie
Joined
·
4,867 Posts
Sounds like the plus points for A lens just get bigger and bigger... :p (I'm lucky I don't need the light in my small room).

Still, I guess if it is a big deal then it sounds like a useful feature for the niche of a niche of a niche market (high end projection systems... that use A lens... that have enough distortion to be bothered by it...).
 

·
Premium Member
Joined
·
565 Posts
Still, I guess if it is a big deal then it sounds like a useful feature for the niche of a niche of a niche market (high end projection systems... that use A lens... that have enough distortion to be bothered by it...).
Well, IMHO it's a sort of home-made problem. Initially, a curved screen was the tool to correct the distortion of an A-lens.
However, this has gone a bit in the opposite direction: people like the look of the curved screen and now try to find ways to correct it. For a moderate curve, an A-lens would be able to do that. So you're actually doing it the other way round: instead of using a curved screen to correct the A-lens distortion, you use an A-lens to correct the distortion introduced by the curved screen.
This works for a moderate curve that matches the A-lens, but if you're going for a "curvier" screen, you need other tools to correct it. And with a video processor that can do all the geometry correction and also handle the screen adjustment (16:9 <> cinemascope), you don't need an A-lens at all.

Anyway, you could avoid all the hassle in the first place by just not using a curved screen - but doesn't the fun start when it gets difficult? ;)
 

·
Registered
Joined
·
2,991 Posts
Speaking of curved screens and A-lenses... does anyone remember the actual ratio to achieve perfect pincushion for a 40-degree curved screen, assuming you use a lens like the ISCO IIIL? I remember there was an Excel file that gave the amount of pincushion depending on throw ratio and screen size, but I can't remember if anyone actually posted the ratio for perfect alignment of the pincushion to a 40-degree curved screen.
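For a rough sense of the numbers, here's a back-of-envelope Python sketch of the geometry - not the actual spreadsheet formula. It assumes image height at any point simply scales with the distance from the lens to that point along the ray, and it ignores the ISCO IIIL's own distortion profile; the screen width and throw ratio below are just example values:

Code:
import math

def pincushion_fraction(throw_m, screen_width_m, arc_deg):
    """Rough extra image height at the screen edge vs. the centre, as a fraction.

    Pure geometry approximation: height at a point scales with the straight-line
    distance from the lens to that point. The lens's own distortion is ignored.
    """
    half_w = screen_width_m / 2.0
    if arc_deg > 0:
        # radius and depth of a circular-arc screen of the given chord width
        radius = screen_width_m / (2.0 * math.sin(math.radians(arc_deg) / 2.0))
        depth = radius * (1.0 - math.cos(math.radians(arc_deg) / 2.0))  # edge pulled toward projector
    else:
        depth = 0.0  # flat screen
    d_centre = throw_m
    d_edge = math.hypot(throw_m - depth, half_w)
    return d_edge / d_centre - 1.0

# Illustrative numbers only: 3.0 m wide scope screen, 1.45x throw ratio
width = 3.0
throw = 1.45 * width
print("flat screen edge growth: %.1f%%" % (100 * pincushion_fraction(throw, width, 0)))
print("40-degree curve edge growth: %.1f%%" % (100 * pincushion_fraction(throw, width, 40)))

With those example numbers the 40-degree curve roughly cancels the purely geometric edge growth; the real sweet spot still depends on how much pincushion the lens itself adds at your throw ratio.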
 

·
Premium Member
Joined
·
1,618 Posts
This is what you're doing anyway when using a physical mask: you zoom out to the extent that the distortion ends up in the mask. Currently, I'm doing it that way in my own install.
That might be a solution for smaller screens and/or a slight degree of curving, but with a larger screen and more curve, this isn't a feasible option.
At full screen, yes, you don't need any masking. I'm sorry if I wasn't clear; what I was talking about was removing the curve from the sides of the visible image when using an aspect ratio inside the screen, such as 16:9. The DCR lens is designed for a flat screen, and even at short throw I don't find the slight curve an issue, but some do.
 

·
Registered
Joined
·
19,063 Posts
Sounds like the plus points for A lens just get bigger and bigger... :p (I'm lucky I don't need the light in my small room).

Still, I guess if it is a big deal then it sounds like a useful feature for the niche of a niche of a niche market (high end projection systems... that use A lens... that have enough distortion to be bothered by it...).
I don't find the barrel distortion with my DCR lens to be great enough to need this. It's off in the masking on the sides. That's why there is masking. No need for any re-scaling or other solutions with potential side effects that affect the picture. IMO, it's a non-issue.
 

·
Premium Member
Joined
·
585 Posts
I don't find the barrel distortion with my DCR lens to be great enough to need this. It's off in the masking on the sides. That's why there is masking. No need for any re-scaling or other solutions with potential side effects that affect the picture. IMO, it's a non-issue.
Same here Craig - I don't notice barrel distortion with my DCR lens either, but I do notice the big, bright picture. This is a feature that I would not use with my A lens. But, some may want it.
 

·
Registered
Joined
·
19,063 Posts
Same here Craig - I don't notice barrel distortion with my DCR lens either, but I do notice the big, bright picture. This is a feature that I would not use with my A lens. But, some may want it.
The DCR lens allows me to run scope films in low laser for 1080p and mid laser for 4K, rather than mid laser / high laser, and still get a much brighter picture than not using one. That adds as much as 10,000 hours of life to my projector before it hits 1/2 brightness. That's worth using one right there.
 

·
Premium Member
Joined
·
3,078 Posts
The DCR lens allows me to run scope films in low laser for 1080p and mid laser for 4K, rather than mid laser / high laser, and still get a much brighter picture than not using one. That adds as much as 10,000 hours of life to my projector before it hits 1/2 brightness. That's worth using one right there.
Craig,

If I had a Scope screen I'd definitely have the Panamorph lens.

Terry
 

·
Registered
Joined
·
244 Posts
I don't find the barrel distortion with my DCR lens to be great enough to need this.
Yes, me neither, and I am at a 1.45 throw. However, I don't like the effect the DCR has on pixel alignment. I am forced to use fine pixel alignment (zone) and it opens a whole new can of worms.
 

·
Registered
Joined
·
2,589 Posts
I don't find the barrel distortion with my DCR lens to be great enough to need this. It's off in the masking on the sides. That's why there is masking. No need for any re-scaling or other solutions with potential side effects that affect the picture. IMO, it's a non-issue.
There is always rescaling with your lens, and in pretty much every direction. This is how you get to use the full 4096 by 2160 panel of your projector. Obviously you are not bothered by it, nor by its side effects, so it would seem rather unlikely that you would be bothered if the scaling were also used to correct geometry.
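To put rough numbers on that, here's a quick sketch - it assumes a DCR-style 1.25x horizontal-expansion lens (going from memory) and a 2.39:1 film letterboxed in a standard 3840x2160 UHD frame, so treat the exact factors as assumptions:

Code:
# Rough pixel math for why a scope title gets rescaled on both axes with the lens.
SRC_W, SRC_H = 3840, round(3840 / 2.39)   # active scope image inside the UHD frame (~3840x1607)
PANEL_W, PANEL_H = 4096, 2160             # full native panel
LENS_EXPANSION = 1.25                      # assumed optical horizontal stretch

h_scale = PANEL_W / SRC_W                  # ~1.07x horizontal rescale
v_scale = PANEL_H / SRC_H                  # ~1.34x vertical rescale
projected_ar = (PANEL_W * LENS_EXPANSION) / PANEL_H

print(f"active source: {SRC_W}x{SRC_H}")
print(f"scale to panel: {h_scale:.2f}x horizontal, {v_scale:.2f}x vertical")
print(f"aspect ratio after the lens: {projected_ar:.2f}:1")

So both axes are already being resampled just to fill the panel; the lens then restores roughly 2.37:1 optically.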
 

·
Registered
Joined
·
19,063 Posts
There is always rescaling with your lens, and in pretty much every direction. This is how you get to use the full 4096 by 2160 panel of your projector. Obviously you are not bothered by it, nor by its side effects, so it would seem rather unlikely that you would be bothered if the scaling were also used to correct geometry.
But that would be yet more scaling, for something that doesn't currently need fixing IMO.
 

·
Registered
JVC RS4500 | ST130 G4 135" | MRX 720 | MC303 MC152 | 6.1.4: B&W 802D3, 805D3, 702S2 | 4x15 IB Subs
Joined
·
9,563 Posts
Bingo! Sorry to hear he is taking the torch for their owners thread. Not worth it to me to unignore sadly.
That would be my opinion as well. At least the thread wasn't started with massive fonts and colors.
 

·
Registered
JVC RS4500 | ST130 G4 135" | MRX 720 | MC303 MC152 | 6.1.4: B&W 802D3, 805D3, 702S2 | 4x15 IB Subs
Joined
·
9,563 Posts
The Envy Pro has only 1 input and 1 output. Is that right? Is that a joke?! You don’t want to run the video through a receiver, as it definitely degrades the image (I tried it with my Denon 8500 and I can confirm that even with all processing Off, the image suffers when routing it through another component). So you want your VP to act as your HDMI input switch too.
Not sure where you are getting this information. You just state it as an outright fact that video through a receiver definitely degrades the image. Please back that up; I'll call pure BS on that. You're the same guy that was arguing that the Sony 295ES was higher contrast than the JVCs, I think. You need to check your facts before adamantly arguing things.
 

·
Registered
Joined
·
2,589 Posts
But that would be yet more scaling, for something that doesn't currently need fixing IMO.
Well, once you no longer have 1:1 pixel mapping, it is just about mapping to another pixel area. Obviously, with no visible distortion to begin with it would not make any sense to play with geometry correction, but with a more deeply curved screen, or a projector with an anamorphic lens that is very close to the screen, it would make a lot more sense.
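For anyone curious what that kind of geometry correction amounts to once you're already off 1:1 mapping, here's a minimal numpy sketch of the idea: pre-squeezing each column a little more toward the left and right edges so that the vertical stretch at the screen cancels out. The 2% figure, the quadratic falloff and the nearest-neighbour resampling are all placeholders for illustration, not what any real processor does:

Code:
import numpy as np

def precorrect_pincushion(frame, max_correction=0.02):
    """Vertically squeeze each column, more so toward the left/right edges.

    If the screen/lens combination stretches the picture vertically at the
    sides (pincushion), shrinking those columns in the source by the same
    amount cancels it out. Nearest-neighbour sampling keeps the sketch short;
    the outermost rows end up repeating edge content where masking would sit.
    """
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    centre_y = (h - 1) / 2.0
    centre_x = (w - 1) / 2.0
    ys = np.arange(h)
    for x in range(w):
        edge = abs(x - centre_x) / centre_x          # 0 at the centre column, 1 at the edges
        squeeze = 1.0 - max_correction * edge ** 2   # squeeze edge columns by up to 2%
        src_y = np.round(centre_y + (ys - centre_y) / squeeze)
        src_y = np.clip(src_y, 0, h - 1).astype(int)
        out[ys, x] = frame[src_y, x]
    return out

# Toy example on a small random frame (a real 4K frame works the same way, just slower)
test = np.random.randint(0, 255, (540, 960, 3), dtype=np.uint8)
corrected = precorrect_pincushion(test)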
 

·
aka jfinnie
Joined
·
4,867 Posts
Replying here as I think it is a more appropriate place to discuss this kind of thing.
So there are basically 3 levels of chips,

ASIC - developed at the hardware level for certain functions: data comes in, is processed, then comes out. But you can't change anything; once the chip is made there is no flexibility. Extremely fast, no flexibility.

FPGA - like an ASIC, processing is done in the chip, but it is flexible and can be programmed for different needs. Also extremely fast (depending on the need it may not be as fast as an ASIC) and very flexible. Extremely fast and flexible comes at a price.

X86/ARM processor (Intel, AMD, etc.) - relies on software to tell it what to do, which is slower, but you can do almost anything with them. Slower, with ultimate flexibility.
Not really.

ASICs are just any chip foundried with a fixed function. Processors and GPUs are technically ASICs - just that their fixed functions come together in such a way that they implement one or more programmable processors.
FPGAs are different in that they are composed of IO, logic and memory that can be linked together in (almost) arbitrary ways to implement specific functions, via a programmable fabric (which often can only be re-configured at boot, though some partial reconfiguration is possible in some devices).

If you happen to need the operations implemented exactly as they are in the CPU/GPU, it is likely to be the fastest option available (as the processor manufacturers are at the bleeding edge of process advances), but the reality is most jobs don't need things exactly as implemented in a CPU or GPU, so it gets much harder to benchmark... A CPU implemented in a similarly affordable FPGA would be slower.

There are whole papers on trying to compare the raw performance, performance per $ and performance per watt, and you need to know so much about the workload it is generally speaking impossible for the layman to have any idea whether a given tech is better for a given workload.

There are many other factors in selecting devices - such as what clocking reqs you have, what IO you need, thermal considerations, what tech your engineers have expertise to deliver a solution on... The list goes on. Attempting to reduce these to binary "is A better than B" is pointless.


Not really a fair comparison though, because no one buying any volume of FPGAs is paying anything at all remotely like the retail price - the difference between retail and what is known as "supported pricing" is MASSIVE - whereas PC-market parts are commodities with very little margin available for a PC-building shop, and the pricing is very well known.

I'd imagine the net result is probably a not massively different overall cost.
Not to quibble, but you just repeated what I said in a different way. I didn't address price vs performance since that wasn't his question; he wanted to know why FPGAs are so expensive.



ASICs are just any chip foundried with a fixed function

vs

ASIC - developed at the hardware level for certain functions: data comes in, is processed, then comes out. But you can't change anything; once the chip is made there is no flexibility


FPGAs are different in that they are composed of IO, logic and memory that can be linked together in (almost) arbitrary ways to implement specific functions, via a programmable fabric (which often can only be re-configured at boot, though some partial reconfiguration is possible in some devices).

vs

FPGA - like an ASIC, processing is done in the chip, but it is flexible and can be programmed for different needs.


And CPUs, as I stated and you stated, are software dependent, so if your software needs line up with them then great, but for certain functions, even if you had software to process it, a CPU/GPU would be much slower than an ASIC developed to do the same process. Examples can be found throughout the networking industry, or in crypto mining.
The part I was most objecting to was the ranking of the performance of each solution in the description thereof, as I think this becomes misleading to the layman. What they are is separate from how they perform, because the performance is so application specific.

Arguments on "t'interweb" like to get reduced to "this is better because it's got an FPGA and FPGAs are faster" on the basis of someone reading that FPGAs are "extremely fast" and CPUs are "slower" and I don't think that element of what you wrote is helpful, or even accurate.

In general it is the flexibility of the FPGA coupled to lower volumes that makes it expensive (for what it is), not really the raw "speed". And when you look at application of these chips to your examples of mining and networking - two areas where they have "faster" performance; they're usually faster not because the silicon is quicker and can do the same operations quicker, but actually because these are particularly "hardware friendly" applications. A lot of the operations you need to do in something like an ethernet switch or an SHA256 hash are not doing any calculation at all in a procedural way if you are good at implementing an FPGA / ASIC - they leverage the fact that the FPGA or ASIC can do many logical operations in 0 cycles just by being wired up in the right way. It is the flexibility to do things differently that makes them faster at certain loads...

Even in the realm of crypto mining, where there have been notable large successes for ASICs and FPGAs - there are coins which do not mine on ASIC or FPGA systems well (eg Monero) so you can't even count on generalized "this type of solution is faster for this type of application".
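To make the "wired up in the right way" point concrete, here's one of the small sigma functions from the SHA-256 message schedule in Python. On a CPU each rotate, shift and XOR is an instruction executed in sequence; in an FPGA or ASIC the rotates are literally just how the wires are routed between registers, so only the XORs cost any logic at all:

Code:
# sigma0 from the SHA-256 message schedule: ROTR(x,7) XOR ROTR(x,18) XOR SHR(x,3)
def rotr(x, n, bits=32):
    # 32-bit rotate right; on silicon this is a fixed permutation of the wires
    return ((x >> n) | (x << (bits - n))) & 0xFFFFFFFF

def sha256_sigma0(x):
    # Half a dozen instructions on a CPU; in hardware the two rotations are free
    # wiring, leaving only the XOR gates to actually consume logic.
    return rotr(x, 7) ^ rotr(x, 18) ^ ((x >> 3) & 0xFFFFFFFF)

print(hex(sha256_sigma0(0x6A09E667)))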
 

·
Registered
Joined
·
267 Posts
Replying here as I think it is a more appropriate place to discuss this kind of thing.



The part I was most objecting to was the ranking of the performance of each solution in the description thereof, as I think this becomes misleading to the layman. What they are is separate from how they perform, because the performance is so application specific.

Arguments on "t'interweb" like to get reduced to "this is better because it's got an FPGA and FPGAs are faster" on the basis of someone reading that FPGAs are "extremely fast" and CPUs are "slower" and I don't think that element of what you wrote is helpful, or even accurate.

In general it is the flexibility of the FPGA coupled to lower volumes that makes it expensive (for what it is), not really the raw "speed". And when you look at application of these chips to your examples of mining and networking - two areas where they have "faster" performance; they're usually faster not because the silicon is quicker and can do the same operations quicker, but actually because these are particularly "hardware friendly" applications. A lot of the operations you need to do in something like an ethernet switch or an SHA256 hash are not doing any calculation at all in a procedural way if you are good at implementing an FPGA / ASIC - they leverage the fact that the FPGA or ASIC can do many logical operations in 0 cycles just by being wired up in the right way. It is the flexibility to do things differently that makes them faster at certain loads...

Even in the realm of crypto mining, where there have been notable large successes for ASICs and FPGAs - there are coins which do not mine on ASIC or FPGA systems well (eg Monero) so you can't even count on generalized "this type of solution is faster for this type of application".
I would disagree on the accuracy of the statement in the context of the subject we are talking about which is video processing. From what I've seen a general purpose CPU could not keep up with a properly designed/programmed ASIC/FPGA in real time processing.

I would agree on the lower volumes of scale being a factor. FPGAs are more expensive because of their flexibility but also their speed, or we could say their ability to shortcut, or lower latency, however you want to think about it. If they weren't "faster" than a general purpose CPU in certain tasks/algos/solutions then they wouldn't be as popular.

Generally speaking, where FPGAs or ASICs are utilized, they are used because they are the better solution - my point is that there is a reason they exist. Price, performance, heat/thermals, and size of the silicon itself all come into play in why a particular solution is chosen. But for high performance / real time processing (where I was aiming) they are chosen over a general purpose CPU because they are (generally) the faster solution for that task. There are situations/tasks where it makes no sense, and may be impossible, to utilize an FPGA or ASIC.

Also with regards to Monero the devs for that crypto currency have forked over the years to keep ASICs from being able to mine, as well as using randomized algorithms to keep ASICs out. So it isn't that ASICs couldn't be used from a technical perspective, it is that devs are actively trying to keep them out.
 

·
aka jfinnie
Joined
·
4,867 Posts
I should preface this with both that I own an FPGA based Radiance Pro, and work in pro audio EE design where pretty much all the boards in a box have FPGAs... I have no axe to grind against the tech...

I would disagree on the accuracy of the statement in the context of the subject we are talking about which is video processing. From what I've seen a general purpose CPU could not keep up with a properly designed/programmed ASIC/FPGA in real time processing.
But where do GPUs sit in your 3 types of chips? I had assumed you were lumping them in with CPUs, as you didn't acknowledge their existence as a separate category... As we're in a thread about video processing, it doesn't really make sense to be drawing comparisons to devices (CPUs) that aren't typically used for video processing (in general).

I'll certainly agree that the Lumagen does far, far better in terms of real-time latency through the system (which may be important; I'm not totally convinced all audio systems can compensate for the 150 ms video delay of the Envy plus the display delay). If you were doing a real-time computer vision process where, for instance, a mechanical operation could only happen once you'd analyzed the video of the current position, this would be a crucial and massive performance benefit for the FPGA, but this isn't that application.

Plus, owing to dedicated HW it has the capability to maintain input and output clock synchronization (ironically though the use of this advantage isn't always recommended), which the Envy can't do (though it has tricks to simulate getting close by opportunely dropping or adding blank frames where least noticeable).

For many graphics and video loads, on many metrics, GPUs can outclass FPGAs for throughput, particularly if cost is added in. Of course, just having more performance in one or more areas in a particular chip, vs in a different chip, does not a better end-user result make. (I am yet to go into my theatre and say... you know what, I wish I had more TFLOPS!)

Also with regards to Monero the devs for that crypto currency have forked over the years to keep ASICs from being able to mine, as well as using randomized algorithms to keep ASICs out. So it isn't that ASICs couldn't be used from a technical perspective, it is that devs are actively trying to keep them out.
I guess you can argue they've crafted the solution to keep the ASICs etc. out, but regardless of what they did, the net result is that until some technical leap is made, ASICs and FPGAs can't economically be used, because they'd need to implement general purpose processors, which they're worse at. It is merely held out as an example of where crude performance generalisations don't hold up.
 