Friday, August 3, 2012

Marketing Stealth

I have a lot of responsibilities at our little, stealthy company called Tonian. I am responsible for products, marketing and assisting with business development and technology meetings. It is the kind of project I like best - wide-ranging with huge potential. The hard part is finding enough time for the important, but not time-critical, things. Highest on that list is general marketing. I've launched a Tonian blog and we're working on building out a set of resources and positioning around our leadership in pNFS, but I am lucky to spend one day a week on it. We don't have a product to sell (yet), but we know what we want to stand for. We are going to define what Software Defined Storage really should mean. And it isn't the Storage Hypervisor. This isn't a trivial task and it will take time. So here I sit in Ben Gurion Airport for the 3rd time in 4 months. No customers here. I owe engineering a note on a feature. The board presentation for next week is in my CEO's hands. Time to think about marketing in stealth mode as I cross the Atlantic.

Sunday, May 6, 2012

From little bits to big bits

An update from my world. I left the publicly traded mid-sized company to jump back into the small pond. I'm still thinking about data. Not the little bits that need to be correctly ordered and available in CPUs, but the bigger data that needs to be available in milliseconds over the network. I'm on the road more, but enjoying the adventure at Tonian. Blogs to come will include musings on Open Compute/Storage, the development of big "S" Standards, how virtualization changes the organization and the conflicts of being a company in stealth mode. Stay tuned...

Friday, February 10, 2012

Conflicted about HSAs....

I am having a very mixed reaction to the recent public announcements of AMD's Heterogeneous System Architecture (HSA) roadmaps. As anyone who stumbles across this blog should know, I did a lot of work in that area. I really do (or is it "did"?) think it represents a real future of computing silicon. It should reduce overall power and is the roadmap to create truly powerful single-chip computers that do everything. Building hotter FPUs or more cache or more generic cores is a race of diminishing returns.

We already have significant sections of modern x86 CPUs dedicated to special-purpose functions. Vector instructions with SSE started us down the path, but now we have instruction sets and silicon that speed up encryption and other low-level functions. The advantages of this model are a single instruction set and, most importantly, a flat memory space.

It is the overhead of memory copy that actually eats up most of the advantage of dedicated silicon. The problem has to be large enough to be worth shipping around the data. Phil Rogers, Mark Hummel & others at AMD know this well. But so do the teams at NVidia, Intel and every other silicon company in the world.
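
To make the copy overhead concrete, here is a back-of-envelope sketch of the trade-off. Every rate in it is an illustrative assumption of mine (roughly PCIe-generation numbers), not anyone's measured silicon:

    # Back-of-envelope: when is it worth shipping data to an accelerator?
    # Every number here is an illustrative assumption, not a measurement.

    PCIE_BYTES_PER_SEC = 8e9    # assumed host<->device copy bandwidth
    HOST_FLOPS = 50e9           # assumed host compute rate
    ACCEL_FLOPS = 500e9         # assumed accelerator compute rate

    def worth_offloading(bytes_moved, flops_needed):
        """True if copy-over, compute, copy-back beats staying on the host."""
        host_time = flops_needed / HOST_FLOPS
        accel_time = 2 * bytes_moved / PCIE_BYTES_PER_SEC + flops_needed / ACCEL_FLOPS
        return accel_time < host_time

    print(worth_offloading(bytes_moved=1e6, flops_needed=1e6))  # False: copy dominates
    print(worth_offloading(bytes_moved=1e6, flops_needed=1e9))  # True: compute-dense enough

The copy cost is paid per byte no matter how fast the accelerator is, so only kernels with enough compute per byte moved come out ahead.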

Unfortunately, the silicon industry has also shown that stamping out small, power-efficient general-purpose cores is easy. The difference in power consumption between onload processing on dedicated general-purpose cores (e.g. on a NIC) and purpose-built silicon is shrinking. However, the design overhead of developing that silicon has not. There are only a handful of compute problems worth solving in custom silicon, but there should be thousands of compute functions to which it is worth dedicating a core.

AMD has an approach to coherent memory across the different silicon environments. I know enough of the people involved to be confident the solution is elegant and functional. I have deep confidence that it can be game changing in the HPC space where uber-FLOPS still matter and adoption is a matter of compiling in the libraries.

Unfortunately, I don't think it is industry changing. Current software trends are not toward getting more from a single program, but toward dividing the problem into smaller, general-purpose compute elements. Software architects realized it was too hard to copy the data, so they moved the compute to the data. Yes, I'm talking about Map/Reduce.
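
A toy illustration of the idea in plain Python (the block contents and function names are mine, nothing Hadoop-specific):

    from collections import defaultdict
    from itertools import chain

    # Toy word count in the Map/Reduce style. On a real cluster the map
    # step runs on the node that already holds each block, so only the
    # tiny (word, 1) pairs ever cross the network.

    def map_block(block):
        return [(word, 1) for word in block.split()]

    def reduce_pairs(pairs):
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    blocks = ["the quick brown fox", "the lazy dog", "the quick end"]
    print(reduce_pairs(chain.from_iterable(map_block(b) for b in blocks)))
    # {'the': 3, 'quick': 2, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1, 'end': 1}

The data never moves; only the small intermediate results do. That is the inversion that undercuts the case for heroic memory copies.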

AMD's HSA and Map/Reduce represent two directions for software's use of silicon. In my opinion, the need to speed up specific algorithms has already been circumvented by software architecture. AMD is shooting where the duck was, not where it is going. That is the problem with silicon engineering in the current age. It doesn't move fast enough.

That's why Dan Reed, Burton Smith and others are working for MSFT these days and Peter Ungaro is giving the keynote at a Big Data conference. They are trying to shoot ahead of the duck.

If AMD can improve memory efficiency (e.g. garbage collection) and/or messaging primitives (e.g. collectives), they may not change the industry, but they will certainly have a competitive advantage in the modern age of distributed software architectures.
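
By collectives I mean primitives like MPI's allreduce, the kind of operation every iteration of a distributed job waits on. A minimal sketch, assuming an MPI runtime with the mpi4py bindings installed:

    from mpi4py import MPI  # assumes an MPI runtime plus the mpi4py bindings

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank contributes a local partial result; allreduce combines them
    # and hands every rank the global sum in one collective operation.
    local = rank + 1
    total = comm.allreduce(local, op=MPI.SUM)
    print("rank %d of %d: total = %d" % (rank, comm.Get_size(), total))
    # e.g. mpiexec -n 4 python collectives.py  ->  total = 10 on every rank

Hardware that shaves latency off that exchange pays on every single iteration of a distributed job, which is exactly the kind of win that matters in this world.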

http://www.hpcwire.com/hpcwire/2012-02-09/amd_opens_up_heterogeneous_computing.html

Thursday, February 2, 2012

Thinking about Start-ups?

Navin Thadani recently left Red Hat and decided to talk about the start-up experience.

It's a classic tale of failure, woe & iteration, but with a happy ending. It makes a couple of particularly nice points about the ease of getting someone to care about you.

http://nthadani.wordpress.com/2012/02/02/a-retrospective-analysis-on-the-road-to-red-hat/

Sunday, January 29, 2012

Happy New Year

A belated New Year greeting. It has been a crazy year for accelerated computing and the deployment of special purpose hardware.

1. The HPC world clearly has embraced GPUs as a route to getting more FLOPS. It is unclear if any of it actually adds up to a lot more science, but the tools are in the hands of the experts.

2. Though we have many more cores, not a single new paradigm for using them has emerged. Libraries are being optimized for MPI on the system, and interest in OpenMP has increased. These modest incremental updates are, in a word, useless.

3. Virtualization in the real world has arrived. With most jobs able to max out a single socket, virtualization of clusters is starting to make sense - not just for deployment scenarios, but also for job scheduling.

4. The problem right now isn't memory bandwidth. It's I/O. Big Data style solutions that move the compute to the data are going to be increasingly popular. However, isn't this what seismic processing has been doing for years? What is really going to be new?

5. SSDs are mainstream. But the reference architectures by use case are still being developed.

6. The rise of PaaS should create opportunities for more specialized deployments, similar to the combined software and hardware solution of Amazon's latest DB efforts. NoSQL on SSDs... what next?

Thoughts for January. Tune in for more in February.

doug