On the day after Sheldon Cooper declares "Ubuntu: you are my favorite Linux-based operating system" (followed immediately by the word 'crap' in my house), three new pieces of news on hybrid computing models cross my desk.
The first is a story in Scientific American on the work Wu Feng and his colleagues are doing at Virginia Tech: http://www.scientificamerican.com/article.cfm?id=opencl-smooths-supercomputing Wu has long been on the leading edge of defining power-efficient computing and the placement of data next to compute.
The second was the announcement from IBM & NVIDIA that you can buy GPUs in an IBM server chassis. When IBM does it, it must be real? Or has the cancellation of the next generation of Cell left Dave Turek grasping at straws?
I'm going to close with another reference to Wu, the prolific Hokie. His group published a paper last month identifying missing genes in the microbial DNA sequenced to date. The compute model was an "ephemeral supercomputer" of 12k cores spread across seven sites within the US. This may not look like hybrid computing, but it is about moving data independently of compute in order to manage the latencies between the compute nodes.
Neither the computing Wu's team is doing nor the Tesla examples from NVIDIA tackle the really hard, communication-dependent problems such as multiphase physics or simulation with dynamic meshing, but they do demonstrate that there are plenty of problems to be tackled with intelligent programming and massively parallel processing.
I still believe in hybrid computing models and explicit data placement. Though my friend Mark may feel he can show that the density and efficiency of general-purpose compute engines will subsume the specialty silicon market, we are certainly not there yet.