Originally Posted by naschbac
The PowerPC 750 lineage of CPUs still relies ENTIRELY on the PPC 60x bus protocol and signaling specification. The boost in performance the 750 saw over the 604 series, despite the 604 being the more formidable design, was due almost entirely to an L2 cache implementation that resided off the 60x bus rather than on it, as in every previous design. As you may be aware, this was called "backside" cache.
Despite the differences between the 60x-series and 750-series CPUs, they both used the same bus interface.
First, you make it sound as if the 750, aka the G3, wasn't a greatly improved CPU over the old 60x series - and the G3 was the base of Gekko, the old GC CPU.
The current one is a 0.09 micron (90nm) SOI chip, most likely a slightly modified version - i.e. cache size etc. - of the PowerPC 750CL, the latest, most energy-efficient iteration of the 750GX, announced by IBM last October.
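For what it's worth, here's a back-of-the-envelope average-memory-access-time sketch of why backside L2 helped so much; the cycle counts and miss rates are made-up illustrative numbers, not measured 604/750 figures:

[code]
# Rough AMAT (average memory access time) model. All cycle counts and
# miss rates below are illustrative guesses, NOT measured 604/750 figures.
def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, dram):
    # AMAT = L1 hit + (L1 misses serviced by L2) + (L2 misses going to DRAM)
    return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * dram)

# L2 sitting on the shared 60x bus: every L2 access pays bus latency.
frontside = amat(1, 0.05, 20, 0.3, 60)

# "Backside" L2 on its own dedicated bus: much shorter L2 hit time.
backside = amat(1, 0.05, 6, 0.3, 60)

print(f"frontside L2: {frontside:.2f} cycles/access")  # 2.90
print(f"backside  L2: {backside:.2f} cycles/access")   # 2.20
[/code]

Even with everything else held equal, cutting the L2 hit latency alone buys a sizable chunk of the 750's advantage.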
With respect to the "die size" of both the CPU and GPU in the Wii: the only reports I can find peg the Wii's CPU die at just under half the size of the GameCube's Gekko die - 18.9 mm² versus 43 mm², respectively. Which makes complete sense considering the fab process dropped from 180nm to 90nm.
Errr, how so? If it scaled ideally with the process - which it doesn't, due to higher leakage at smaller geometries - it should be a quarter of the size, shouldn't it?
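Putting numbers on it, using the die sizes quoted above (the quarter figure assumes a perfect optical shrink, which real chips never quite achieve):

[code]
# Sanity check on the die-size numbers quoted above.
old_area, new_area = 43.0, 18.9      # mm²: Gekko vs. the reported Wii CPU die
ideal = (90 / 180) ** 2              # linear dimensions halve, so area quarters

print(f"ideal shrink factor: {ideal:.2f}")                  # 0.25
print(f"ideal shrunk Gekko : {old_area * ideal:.1f} mm²")   # ~10.8
print(f"actual area ratio  : {new_area / old_area:.2f}")    # ~0.44
[/code]

So at ~0.44x instead of 0.25x, the reported Wii die is nearly twice what an ideal shrink of Gekko would be - which leaves room for added cache or logic, or simply for structures that don't scale.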
Besides, I just found an EXCELLENT summary of the rumors here - you might wanna read it, digest it, then come back and edit your comments.
I've read it, and the last question is really the only one that talks about the hardware, and all he does is voice disappointment that current Wii titles aren't doing what was already solved and possible on the GameCube. Specifically, he cites one of their studio's own franchises, and I ultimately have to agree with him.
However, it says NOTHING at all about the Wii having some phenomenally improved system bandwidth.
Errr, c'mon, it's not rocket science...
1. How could an "insane fillrate" be possible without a memory improvement?
2. The "GDDR3" title alone makes it quite obvious it's a vastly improved memory architecture.
Besides this you have the crazy-fast 1T-SRAM etc. - see the quick sketch below.
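A quick back-of-the-envelope on the fillrate/bandwidth link - the numbers below are illustrative assumptions, NOT official Wii specs:

[code]
# Why a high fillrate is meaningless without memory bandwidth behind it.
# All numbers are illustrative assumptions, NOT official Wii specs.
fillrate = 1.0e9         # hypothetical pixels per second
color_bytes = 4          # 32-bit color write per pixel
z_bytes = 4 + 4          # 32-bit Z read + write per pixel (simplified)

needed = fillrate * (color_bytes + z_bytes)
print(f"framebuffer traffic alone: ~{needed / 1e9:.0f} GB/s")   # ~12 GB/s
[/code]

And that's before a single texel is fetched - so an "insane fillrate" sitting on top of an unimproved memory system simply isn't possible.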
As I said before, I've not found anything that indicates the Wii has abandoned the 60x bus interface it would have rightly inherited from the GameCube.
Which, let me remind you, doesn't prove or disprove anything, as we're mostly talking about graphics memory, not system memory (let alone your apparent inability to find most of the information I'm citing here).
You are grossly misunderstanding the role AGEIA is playing here. In addition to their PC-based accelerator card, they make a software SDK with several branches that provides a common API for developing physics in applications. It's essentially the same kind of thing as Havok, only they're using it as a point of leverage to get developers writing for their SDK, so that when/if those titles deploy on Windows they get the benefit of a PhysX accelerator card if one is present.
In that vein, AGEIA does some work in their SDK to try and leverage the resources of the system it's running on - like utilizing spare SPEs on the PlayStation 3.
To that end, it is entirely possible that AGEIA's SDK for the Wii leverages the GPU in some manner to help with physics code. This would be no different from what both ATI and NVIDIA are pursuing with their standard GPU lines as each generation becomes more programmable.
Doing this, however, limits the resources available for actual graphics processing, for obvious reasons.
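That "use whatever the platform offers" pattern is easy to picture. Here's a hypothetical sketch of the shape of it - none of these class or function names come from AGEIA's actual SDK:

[code]
# Hypothetical sketch: one physics API, platform-specific backends underneath.
# None of these names are from AGEIA's real SDK; this only shows the idea.

class CpuBackend:
    # Fallback: integrate (position, velocity) bodies on the main CPU.
    def step(self, bodies, dt):
        return [(p + v * dt, v) for p, v in bodies]

class AcceleratedBackend(CpuBackend):
    # Stands in for a backend that offloads work to spare SPEs (PS3) or to
    # GPU shader time (Wii/PC) - cycles that graphics could otherwise use.
    def step(self, bodies, dt):
        # A real backend would upload the bodies to the accelerator here.
        return super().step(bodies, dt)

def make_backend(platform):
    # The developer codes against one API; the SDK picks the backend.
    return AcceleratedBackend() if platform in ("ps3", "wii") else CpuBackend()

sdk = make_backend("wii")
print(sdk.step([(0.0, 2.0)], dt=1 / 60))    # [(0.0333..., 2.0)]
[/code]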
Posting your personal opinion as fact can make you look silly when facts unknown to you show up and turn your well-thought-out opinion upside down - I have a feeling you probably missed this interview as well.
Since you also missed the "insane fillrate" part in my earlier link, this time I'll quote the relevant part here:
IGN: Is the hardware as easy to use on the Wii as it was with the GameCube? The two systems are very similar in structure, we're told.
Konami: Yes, the structure is very similar to the GameCube's, but you already knew that. The development was not that difficult, as the Wii system has built-in physics simulation. That helped the process.
FYI: I assumed it was quite obvious AGEIA did not supply the hardware (physically), only some IP.
Both Bluetooth and WiFi take CPU resources to manage. Neither is a smart interface like a SCSI, FireWire, or SATA host controller, which truly offer a series of "fire-and-forget" operations. Both still require a fair amount of control and oversight by processes that run on the CPU... even if they run at the kernel level.
The information coming off the Wiimote is entirely different from a signal generated by a hard-wired digital controller. Working out how to manage the data describing changes in acceleration and spatial orientation is more intensive than simply receiving the interrupt that says "the A button was pressed, do -> this."
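To make that contrast concrete, here's a toy sketch - the smoothing constant and sample values are made up for illustration:

[code]
import math

# A digital button is one interrupt, one action - and you're done.
def on_button_a():
    print("A pressed -> jump")

# Accelerometer data needs continuous per-sample work: filtering out hand
# jitter, then trigonometry to estimate orientation - every single frame.
class MotionFilter:
    def __init__(self, alpha=0.2):      # alpha is a made-up smoothing constant
        self.alpha = alpha
        self.state = (0.0, 0.0, 1.0)    # start at rest, gravity on +z

    def update(self, sample):
        # Exponential low-pass filter to smooth the raw acceleration.
        self.state = tuple(self.alpha * s + (1 - self.alpha) * f
                           for s, f in zip(sample, self.state))
        x, y, z = self.state
        # Estimate pitch from the direction gravity points in the filtered data.
        return math.degrees(math.atan2(x, math.hypot(y, z)))

f = MotionFilter()
for accel in [(0.0, 0.0, 1.0), (0.3, 0.0, 0.95), (0.5, 0.0, 0.86)]:
    print(f"pitch ~ {f.update(accel):.1f} deg")   # runs per sample, not per press
[/code]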
I can't see anything that disproves my argument on this - see my driver notes.
First, where did you get the idea that the Wii runs Linux? Kiyoshi Saruwatari's blog post was admitted to be a prank of some sort.
While that's true, I think it's pretty safe to say that N did not evolve into some kind of MS or Apple or DEC and did not develop their own kernel, drivers, and everything else from the ground up, but simply took some available kernel base and put a stripped-down system with a GUI on it.
Add to this the fact that N wanted to create the most energy-saving yet cheapest configuration, so you can throw out any NT-family kernel and all the expensive kernels - what you're left with are the open-source *nixes: the most widely supported and flexible Linux, or perhaps some BSD variant.
Again, while I don't have a statement from N to link to, based on pure logic I believe the Wii OS is based on a heavily stripped-down version of one of these kernels, most likely some kind of Linux.
Secondly, everything you listed is more than what the GameCube was running. While both are minimalist compared to what either of us is typing these posts on, the Wii OS is observably more robust, and thus more resource-hungry, than the GameCube OS.
Yes, and the GC was running a G3 paired with pretty limited hardware, and was never meant to be an internet machine, etc. - simply not a match for the Wii.
Other than specious speculation, information inferred from empty developer comments, and a misunderstanding of how a software SDK translates into "physics acceleration", I'm seriously failing to find your insurmountable proof.
Other than pointing out the misunderstood role of the CPU "bus topology", the difference physics IP makes, providing apparently fresh information on the existence of built-in physics simulation, and linking an explanation of the missed/miscalculated point behind the die dimensions - let alone the importance of the graphics memory architecture - I can't really make any further comment...