I should preface this by saying both that I own an FPGA-based Radiance Pro and that I work in pro-audio EE design, where pretty much every board in a box has an FPGA on it... I have no axe to grind against the tech...
I would disagree on the accuracy of the statement in the context of the subject we're talking about, which is video processing. From what I've seen, a general-purpose CPU could not keep up with a properly designed/programmed ASIC or FPGA in real-time processing.
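To put rough numbers on what "keeping up" means here, a back-of-envelope sketch; the ops-per-pixel figure is purely my own assumption, not anything from either vendor's spec sheet:

```python
# Back-of-envelope pixel-rate arithmetic for real-time 4K60 processing.
# The per-pixel cost is an assumed figure for a modest scaling/tone-mapping kernel.
width, height, fps = 3840, 2160, 60
pixels_per_sec = width * height * fps              # ~498 million pixels/s

ops_per_pixel = 100                                # assumption, not a measured number
total_ops_per_sec = pixels_per_sec * ops_per_pixel

print(f"{pixels_per_sec / 1e6:.0f} Mpix/s -> {total_ops_per_sec / 1e9:.0f} Gops/s sustained")
# A fixed-function FPGA/ASIC pipeline handles this as a streaming dataflow at line rate;
# a general-purpose CPU has to find that throughput in software every frame without
# ever missing a deadline, which is a very different problem.
```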
But where do GPUs sit in your three types of chips? I had assumed you were lumping them in with CPUs, as you didn't acknowledge their existence as a separate category... Since this is a thread about video processing, it doesn't really make sense to be drawing comparisons to devices (CPUs) that aren't typically used for video processing in the first place.
I'll certainly agree that the Lumagen does far, far better in terms of real-time latency through the system (which may be important; I'm not totally convinced all audio systems can compensate for the 150 ms video delay of the Envy plus the display's own delay). If you were doing a real-time computer-vision task where, for instance, a mechanical operation could only happen once you'd analysed the video of the current position, that would be a crucial and massive performance benefit for the FPGA, but this isn't that application.
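To illustrate why I'm sceptical about the compensation, a toy lip-sync budget; only the 150 ms Envy figure is from the discussion above, the display latency and the audio system's maximum delay are hypothetical round numbers:

```python
# Toy lip-sync budget. Only the 150 ms Envy figure comes from the discussion above;
# the display delay and AVR limit are assumed values for illustration.
envy_delay_ms = 150            # video processor delay (from the discussion above)
display_delay_ms = 75          # assumed panel/projector processing delay
avr_max_audio_delay_ms = 200   # assumed maximum lip-sync delay the audio system offers

total_video_delay_ms = envy_delay_ms + display_delay_ms
shortfall_ms = max(total_video_delay_ms - avr_max_audio_delay_ms, 0)

print(f"video path delay: {total_video_delay_ms} ms, uncompensated: {shortfall_ms} ms")
# If the audio system's delay adjustment tops out below the total video delay,
# audio permanently leads video and no menu setting can fully correct it.
```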
Plus, owing to dedicated HW, it can maintain input and output clock synchronization (though, ironically, using this advantage isn't always recommended), which the Envy can't do (it does have tricks to get close by opportunely dropping or adding blank frames where least noticeable).
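For a sense of what dropping or adding frames implies in practice, here's a toy calculation; the clock-error figure is my own guess, not anything madVR has published:

```python
# How often an unsynchronized processor has to repeat (or drop) a frame.
# Illustrative figures: a 24.000 Hz source into an output clock that is
# 100 ppm fast. The Envy's real clock tolerances are not public.
source_fps = 24.000
clock_error_ppm = 100
output_fps = source_fps * (1 + clock_error_ppm / 1e6)

# The output consumes frames slightly faster than the source supplies them,
# so one extra (repeated/blank) frame is needed roughly every 1/error frames.
frames_between_corrections = 1 / (clock_error_ppm / 1e6)
seconds_between_corrections = frames_between_corrections / source_fps

print(f"one corrective frame every ~{frames_between_corrections:.0f} frames "
      f"(~{seconds_between_corrections / 60:.1f} minutes)")
# With genlocked input/output clocks (the dedicated-HW approach) the two rates
# track each other exactly and no corrective frames are ever needed.
```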
For many graphics and video loads, on many metrics, GPUs can outclass FPGAs for throughput, particularly once cost is factored in. Of course, just having more performance in one or more areas on a particular chip, versus on a different chip, does not a better end-user result make. (I have yet to walk into my theatre and say... you know what, I wish I had more TFLOPS!)
Also, with regard to Monero: the devs of that cryptocurrency have forked it over the years to keep ASICs from being able to mine, as well as using randomized algorithms to keep ASICs out. So it isn't that ASICs couldn't be used from a technical perspective; it's that the devs are actively trying to keep them out.
I guess you can argue they've crafted the solution to keep the ASICs etc. out, but regardless of how they did it, the net result is that until some technical leap is made, ASICs and FPGAs can't be used economically, because they'd need to implement general-purpose processors, which they're worse at. It is merely held out as an example of where crude performance generalisations don't hold up.