The current accelerated computing work is going full force. The options exist and the barrier to experimentation is low. This is very good for Accelerated Computing.
However, I don't think the motivations are pure. The chief reason for working on silicon outside mainstream x86 is fear of many-core: the expectation that x86 complexity will keep increasing dramatically while performance stagnates, and that the cost of overcoming x86 many-core complexity is unknown.
The presentations are by smart, motivated people who are exploring the alternatives. What they had in common was the following:
- Current scaling options are running out. All presenters showed scale-up curves on dual- and quad-core x86 CPUs, and all of those curves flatten asymptotically.
- The compute is data-driven. That is to say, there is a lot of data to be worked upon - and it is increasing.
- Scaling performance beyond current x86 cores is going to be more expensive than historical trends suggest. Complexity of application management is emerging as both a motivator for and a barrier to Accelerated Computing.
- They need to touch the compute kernels anyway. If they are going to rewrite the compute-intensive sections, why not try the code on a different piece of silicon or ISA? They have been moving away from hardware-specific code anyway.
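The asymptotic scale-up in the first point is what an Amdahl's-law model predicts: once core counts grow, the serial fraction of the workload dominates. A minimal sketch, assuming hypothetical serial fractions (not taken from any presenter's data):

```python
# Amdahl's-law sketch: ideal speedup versus core count for a workload
# with a fixed serial fraction. The serial fractions used below are
# illustrative assumptions, not measurements.

def speedup(cores, serial_fraction):
    """Ideal speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for f in (0.05, 0.10, 0.25):
    curve = [round(speedup(n, f), 2) for n in (1, 2, 4, 8, 64)]
    print(f"serial fraction {f:.2f}: {curve}")
```

Even with only 10% serial work, the speedup can never exceed 10x no matter how many cores are added - which is why the dual- and quad-core curves already look asymptotic.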
Mainstreaming Accelerated Computing will not happen without addressing the complexity of systems and application management. I don't know who is really working on this... Do you?