Sunday, November 15, 2015

Good to be IBM

This week I will be at Supercomputing in Austin (#SC15, for those who know). It will be wonderful to be back among the brightest minds I've ever met. Those drawn to SC15 are dedicated to solving hard problems with insight and validated computational results. They are rigorous and creative thinkers.

Among the attendees at SC15 are people who modeled turbulent flow problems well before airlines saved billions with winglets. They built huge systems to understand climate change before it was a political debate. I will be meeting with scientists and engineers who use the computational power behind the nuclear stockpile to advance drug research and protein folding. SC15 represents those who are relentless in using the latest technology to solve difficult problems.

However, they share an almost universal design flaw: the SC15 crowd is rarely interested in easy or good enough. This puts them at odds with most of the rest of the technology world, which brings me to my point.

The biggest impact of technology is not computational, nor algorithmic, nor mechanical. It is and will be the human ability to interact, absorb, interpret, and control. This is the age of design - and Apple is our Merlin.

My revisionist history of modern IBM starts with Watson - an engineering approach to solving this problem. You have seen the IBM Watson ads: algorithms and systems so smart that a child, a genius, and Bob Dylan can all relate to them.

But what if you change the paradigm to focus on the people first and the engineering second?

Here is the answer in a NY Times article on pervasive design thinking at IBM. The shift from starting with the engineering constraints to starting with the human interaction is profound. Systems and software developed to do what a human wants to do, rather than to solve an engineering problem, represent innovation, not incremental evolution.

If IBM can continue down this path, it will be a good time to be IBM.

I may work for IBM, but these thoughts are my own. Any resemblance to IBM statements is merely coincidence. 

Thursday, November 5, 2015

The important big data

IBM took another step in software-defined storage this week, announcing Spectrum Scale 4.2 (and a lot more). This product has a long history of leading performance in scale-out technical computing (and it is still growing in that market - see IBM at SC15). However, this release goes well beyond that installed base.

It used to be that the important big data was the raw data from which great decisions could be made. The old important big data was things like manufacturing simulation, financial risk analysis, genomics, and seismic analysis. Things computers could crunch!

The new important big data is user behaviors, digital marketing results, output from 1000s of sensors, healthcare records, and images - lots and lots of images, videos, and digitized voice. Some of it is big, but most of it we used to just throw away. This data isn't just crunched. It's massaged, interpreted, visualized, and factored. The inputs are messy and the outputs can be surprising.

The new important big data needs to be accessed via object interfaces, because RESTful APIs are easier to program against. The spectrum of HDFS tools is needed to walk through it, plus the various commercial plugins to validate and visualize it. The new release of Spectrum Scale has unified object, HDFS transparency, and the traditional file support. For the new important big data, file servers don't cut it.
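To make the "easier to program" point concrete, here is a minimal sketch of storing and reading an object over a Swift-style REST API (Spectrum Scale's unified object support is built on OpenStack Swift). The endpoint, container, and token below are hypothetical placeholders; in a real deployment they would come from your authentication service.

```python
import requests

# Hypothetical endpoint and token - in a real Swift-based deployment these
# would come from your authentication service (e.g. Keystone).
ENDPOINT = "https://objectstore.example.com/v1/AUTH_demo"
HEADERS = {"X-Auth-Token": "example-token"}

# Store a sensor reading as an object: one authenticated HTTP PUT,
# no filesystem mount or special client library required.
url = ENDPOINT + "/sensors/reading-0001.json"
resp = requests.put(url, headers=HEADERS,
                    data=b'{"sensor": "s1", "value": 42.0}')
resp.raise_for_status()

# Read it back from anywhere with a plain HTTP GET.
resp = requests.get(url, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```

No mounts, no drivers - any client that speaks HTTP can get at the data, and with the unified support in 4.2 that same data can also be reached through the traditional file interface or walked with Hadoop tools.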

Going even further, the new important data isn't just for experts. Spectrum Scale has a UI - and a pretty good one by the look of it. (See Bob Oesterlin's post on his first impressions.) The new important big data needs to be quick, adaptable, and multi-application - and easy to use.

Spectrum Scale 4.2 - a good example of why it isn't just GPFS anymore. 

Cross-posted to LinkedIn