Scott Wasson over at TR had expressed some doubts that putting a GPU into a CPU socket would actually be desirable, and I've seen the same doubts expressed elsewhere in response to our AMD-ATI article. The reasoning goes that a PCIe video card with high-bandwidth GDDR3 will outperform a GPU sitting in a cHT socket, where it has to share a pool of DDR2 with the CPU. As far as it goes, that's correct: putting 512MB of GDDR3 on a daughtercard with the GPU and linking it to the CPU over PCIe will get you more performance than just dropping a GPU into a cHT socket. But that approach is just throwing hardware and money at the problem. The point of a cHT-compatible GPU that works as a drop-in replacement for a second Athlon is that it's much, much cheaper and less wasteful than a dedicated daughtercard (with its own dedicated pool of GDDR3), and the performance is still pretty good. So from a price/performance standpoint, glueless cHT and a shared CPU-GPU memory pool will beat the more expensive daughtercard solution by a large enough margin to make it attractive to many gamers.

The other issue that I want to address is this article over at the Inquirer. Clearly, Charlie has some of the same information that I have about Intel's various internal research initiatives. Intel is a big, research-driven company with many teams working on many different types of projects at any given moment. I know for a fact that they have teams looking at Cell-like designs that combine DSP and general-purpose cores. They're also looking at low-power x86 cores, Niagara-style chip multiprocessing, and lots of other exotic stuff.