Tuesday, April 8, 2008

HPCSW: Since when are MPI & SPMD the same?

YAHPCC: Yet Another High Performance Computing Conference

I attend a number of these, and there seem to be more every year. This post is about the HPC Science Week (HPCSW), sponsored by several government agencies plus some vendors.

However, as I prepped this I realized I also missed another one the same week: Suzy Tichenor and the Council on Competitiveness hosted an HPC Application Summit. Here's coverage from HPCwire. My highlight from Michael's write-up is:
"There was also extensive discussion of how best to conceive a software framework for integrating codes that must work together to run multiphysics models. Codes written in-house have to work with codes provided by independent software vendors and open-source codes being built by far-flung communities. A software framework could be the solution."

This was also a theme at HPCSW. Some people make rash statements about 1000-core chips and others talk about new programming languages, but everyone agrees that humans cannot program at the anticipated level of future complexity. Since we can't make cores go faster, we're going to have to use more of them, and there is a limit to how much concurrency one person can manage.

There are a few themes that are becoming inescapable.

Layers
There will be experts, near-experts, and the rest of us. Experts will want to be as close to the silicon as possible, while the rest of us are mostly interested in it just working. This is going to require layers.

Data Movement is Expensive
We have plenty of computation; moving the data to it is the hard part. We need methods that minimize the cost of data movement. Asynchronous threads, hardware synchronization, and message queues are in vogue.
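The flavor of the message-queue style can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the talks: data is handed to a compute thread once through a queue, rather than shared and repeatedly locked.

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # Consume work items asynchronously; a None sentinel ends the stream.
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(item * item)  # stand-in for real computation

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in range(4):
    inbox.put(n)    # data moves to the compute thread once
inbox.put(None)     # sentinel: no more work
t.join()

results = sorted(outbox.get() for _ in range(4))
```

The point of the sketch is that the queue is the only shared structure; the data itself is owned by one thread at a time, which is the property that keeps data movement (and synchronization cost) explicit and bounded.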

Is it a Framework, or a Library?
The data is also more complicated, so a library doesn't seem to be sufficient. Programmers need constructs that handle data parameters and other odd bits. Libraries demand rigor from the caller; frameworks are application modules with an architecture and external hooks. (See Guido's blog and comments for more on this.) Accelerated computing will take frameworks.
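The distinction can be made concrete with a toy Python sketch (all names here are hypothetical, not from the post). With a library, the application calls in and keeps control of the flow; with a framework, the application registers hooks and the framework owns the control flow, calling back out to in-house, vendor, and open-source modules at defined points.

```python
# Library style: the application calls in and retains control.
def apply_step(step):
    return [step(x) for x in range(3)]

# Framework style: inversion of control.  The framework owns the
# time-step loop and calls registered hooks at each step.
class SimulationFramework:
    def __init__(self):
        self.hooks = []

    def register(self, hook):
        self.hooks.append(hook)
        return hook

    def run(self, steps):
        state = 0
        for _ in range(steps):
            for hook in self.hooks:   # external hooks fire every step
                state = hook(state)
        return state

fw = SimulationFramework()

@fw.register
def physics_module(state):   # stand-in for an in-house code
    return state + 1

@fw.register
def isv_module(state):       # stand-in for a vendor-supplied code
    return state * 2

final = fw.run(3)
```

This is exactly the multiphysics-integration shape from the Tichenor summit quote above: each code plugs into a hook, and the framework, not any one code, decides when each runs.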

More User Control
Only the programmer knows... According to the pundits, future operating systems will allow the user to schedule threads, manage data access patterns, and so on.
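One way to picture "the programmer schedules the threads" is explicit task routing: instead of letting a scheduler scatter tasks, the programmer steers tasks that touch the same data to the same worker. A hypothetical Python sketch (the routing rule here is just `task % NUM_WORKERS`, purely illustrative):

```python
import threading
import queue

NUM_WORKERS = 2
inboxes = [queue.Queue() for _ in range(NUM_WORKERS)]
results = [[] for _ in range(NUM_WORKERS)]

def worker(wid: int) -> None:
    # Each worker drains only its own inbox; None ends the stream.
    while True:
        task = inboxes[wid].get()
        if task is None:
            break
        results[wid].append(task * 10)  # stand-in for real work

threads = [threading.Thread(target=worker, args=(w,))
           for w in range(NUM_WORKERS)]
for t in threads:
    t.start()

# The programmer, not the scheduler, decides which worker gets
# which task -- here, by a fixed routing rule.
for task in range(6):
    inboxes[task % NUM_WORKERS].put(task)
for box in inboxes:
    box.put(None)
for t in threads:
    t.join()
```

On a real system the payoff comes when the routing rule matches the data layout, so each worker's data stays warm in its own cache; the sketch only shows the control structure.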

More on all these in the near future...

2 comments:

Amir said...

In order to minimize data movement, "place and route" will become a fundamental component of all multicore operating systems. The ratio of arc length between worst- and best-case placement for two communicating threads in an NxN multicore array is of order 2N, and this factor grows with each arc added to the DFG.
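Amir's 2N figure can be checked with a small Python sketch, assuming the mesh distance between two tiles is the Manhattan distance (the value of N below is illustrative, not from the comment):

```python
def manhattan(a: tuple, b: tuple) -> int:
    # Hop count between two tiles on a mesh interconnect.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

N = 8
best = manhattan((0, 0), (0, 1))           # adjacent tiles: 1 hop
worst = manhattan((0, 0), (N - 1, N - 1))  # opposite corners: 2(N-1) hops
ratio = worst / best                       # grows like 2N
```

So for two communicating threads the worst placement costs 2(N-1) hops against 1 for the best, and each additional arc in the DFG multiplies the opportunities for a bad placement.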

Systems with linear pipelines and systolic structures won't burden a placement algorithm very much, and for irregular pipelines we'll be able to use the traditional algorithms from FPGA and ASIC design to see some nice performance boosts. But it is impractical to optimize placement of dynamically spawned pipeline components, since optimizing a thread's placement may take longer than the thread's lifetime.

The RAW project at MIT worked on a lot of this.

Unknown said...

I had to look up DFG (data flow graph) before I decided whether I agreed with Amir.

Although I am certain his use of the phrase "place & route" will be a red flag for some, his point about the importance of data locality in the thread context is right on.