Tuesday, March 4, 2008

O&G HPC: Afternoon on Storage

The afternoon session is on storage growth. In the Oil & Gas market, the discussion quickly turns to really big chunks of data.

Presenters are:
Jeff Denby of DataDirect Networks
Per Brashers of EMC NAS SW Engineering
Larry Jones of Panasas
Sean Cochrane of Sun Microsystems
Tom Reed of SGI
Dan Lee of IDC HPTC


Jeff talked about...
* SATA is 'the way to go' in geophysics
> The design has some issues: lower reliability, including silent data corruption (a checksum sketch follows this list)
* InfiniBand is becoming popular
* For DDN, a petabyte shipment is not unusual
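
Silent data corruption on commodity SATA is usually countered above the drive with end-to-end checksums. A minimal sketch of the idea, assuming zlib's crc32() (a real call); the block size and the write/read framing are illustrative, not any vendor's implementation:

```c
/* End-to-end integrity check: compute a CRC32 when a block is written
 * and verify it when the block is read back. Illustrative only. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define BLOCK_SIZE 4096   /* illustrative block size */

/* Compute the CRC32 of one data block. */
static uLong block_checksum(const unsigned char *buf, size_t len)
{
    uLong crc = crc32(0L, Z_NULL, 0);   /* initial CRC value */
    return crc32(crc, buf, (uInt)len);
}

int main(void)
{
    unsigned char block[BLOCK_SIZE];
    memset(block, 0xA5, sizeof block);

    uLong stored = block_checksum(block, sizeof block);  /* at write time */

    /* ... block travels through controller, cable, and disk ... */

    if (block_checksum(block, sizeof block) != stored)   /* at read time */
        fprintf(stderr, "silent data corruption detected\n");
    else
        printf("block verified, crc=%lx\n", stored);
    return 0;
}
```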

The state of the art is 6GB/s for network-attached storage.


Per talked about how the architecture of the pNFS NAS stack differs from traditional NAS architectures and from EMC's MPFS.
* The advantage of pNFS is that metadata takes a separate route from the data
* MPFS adds awareness of the caches in the client and the storage array to improve throughput.
* Concurrency increases because locking is at the byte level, not the file level (see the locking sketch after this list)
* IB is about 650MB/s; quad Ethernet is 300 to 400MB/s
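
The byte-range point is easy to see with POSIX locks, which work the same way (this is an analogy, not the pNFS wire protocol itself). A minimal sketch with fcntl(); the file name and offsets are illustrative:

```c
/* Byte-range locking with POSIX fcntl(): two writers can hold locks on
 * disjoint regions of one file at once, where a whole-file lock would
 * serialize them. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock region = {
        .l_type   = F_WRLCK,    /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,          /* lock bytes 0..4095 only */
        .l_len    = 4096,
    };

    if (fcntl(fd, F_SETLKW, &region) == -1) { perror("fcntl"); return 1; }

    /* ... write within the locked region; another process can
     * concurrently lock and write bytes 4096..8191 ... */

    region.l_type = F_UNLCK;    /* release the range */
    fcntl(fd, F_SETLK, &region);
    close(fd);
    return 0;
}
```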

Larry talked about...
* Panasas uses an object-based iSCSI SAN model using the "DirectFlow" protocol
* Parallel I/O for Windows apps is underway (a generic parallel I/O sketch follows this list)
* A Landmark paper said DirectFlow improves application performance by greatly reducing CPU time spent waiting on data.
* Reliability is important
* Also supports pNFS (NFS 4.1)
* Targeting 12GB/s
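
For context on what "parallel I/O" means to an application, here is a minimal MPI-IO sketch of the access pattern systems like DirectFlow and pNFS are built to serve: every rank writes a disjoint stripe of one shared file. This is generic MPI-IO, not Panasas's proprietary API; the file name and stripe size are illustrative:

```c
/* Each MPI rank writes its own stripe of one shared file; no rank
 * blocks another. */
#include <mpi.h>
#include <string.h>

#define STRIPE (1 << 20)   /* 1 MiB per rank, illustrative */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[STRIPE];
    memset(buf, 'A' + (rank % 26), STRIPE);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: each rank lands at its own offset. */
    MPI_Offset offset = (MPI_Offset)rank * STRIPE;
    MPI_File_write_at_all(fh, offset, buf, STRIPE, MPI_CHAR,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```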


Sean talked about...

* He leads HPC storage out of the CTO office
* He presented the Sun best practices
* He described two bottlenecks: metadata, and connecting cluster storage to the archive. Sun places dedicated boxes (Thumpers and 4100s, respectively) at those pain points.

Tom talked about...
* Humans: interactive, serial, costly interrupts, open loop, non-deterministic, expensive
* "Time Slicing and Atrophy make a bad lifestyle cocktail"
* Current storage solutions can't serve everyone

Dan says...
* HPC server revenue grew to over $11B in 2007 - it's mostly clusters
* Year-over-year growth of 15% - double-digit growth over the last 5 years
* Oil & Gas reached $650m in 2007
* HPC storage exceeded $3.7B in 2006 at a faster growth than HPC servers
* pNFS is getting close to general availability.
* It eliminates custom clients
* A key driver is letting several threads access the same file system concurrently (sketched below)
* The block and object versions may not be included in the final spec.
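
That key driver is the everyday POSIX pattern below: several threads reading disjoint regions of one shared file at once. A minimal sketch using pread(), whose explicit offset keeps each thread independent; the file name, thread count, and chunk size are illustrative:

```c
/* Concurrent readers on one file. pread() takes an explicit offset,
 * so threads never race on the shared file position as read() would. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 4
#define CHUNK    4096

static int fd;   /* one descriptor shared by all threads */

static void *reader(void *arg)
{
    long id = (long)arg;
    char buf[CHUNK];

    ssize_t n = pread(fd, buf, CHUNK, (off_t)id * CHUNK);
    printf("thread %ld read %zd bytes\n", id, n);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    fd = open("seismic.dat", O_RDONLY);   /* illustrative file name */
    if (fd < 0) { perror("open"); return 1; }

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, reader, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    close(fd);
    return 0;
}
```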


===
Q (Keith Gray): What's the largest number of concurrent clients & largest file?
A (Jeff of DDN): ORNL and LANL, 25k clients. Throughput for GPFS at Livermore is 100s of GB/s
A (Per of EMC) 12k clients
A (Larry of Panasas) 8k clients at Intel and growing. LANL with 150GB/s
A (Sean of Sun): 25k Lustre clients, about 120GB/s at CEA (same as DDN); 10s of petabytes on SAM-FS (tape)
A (SGI): depends on the file system

Q: What's the next technology?

A: (Sean of Sun) Likely flash for heavy metadata access
Q: Availability?
A: (Sean of Sun) Something from Sun later on this year. Can't comment on sizes, stay tuned.
A: (Per of EMC) EMC already has flash as an option. Good for small random IO ops. Good for metadata, but not for throughput

Q: What happened to next-gen technologies, like Cray's old fast RAM disks?

Comment from the audience... Look at fabric cache from SiCortex
A: (DDN): The duty cycle of flash under heavy writes is about 2 years, so it doesn't map well to what we have (back-of-envelope arithmetic after this exchange). DDN is waiting on phase-change memory to emerge.
A: (Panasas): The storage models are closer to workflow models, not raw data transfer. That usage model works well with fast cache.
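
DDN's two-year figure is easy to reproduce with back-of-envelope endurance arithmetic. A sketch with assumed numbers (none of these are vendor figures): capacity times program/erase cycles gives total writable bytes, divided by the sustained write rate:

```c
/* Back-of-envelope flash endurance. All inputs are illustrative
 * assumptions, not vendor data. */
#include <stdio.h>

int main(void)
{
    double capacity_gb  = 64.0;      /* device capacity, GB (assumed) */
    double pe_cycles    = 100000.0;  /* SLC program/erase cycles (assumed) */
    double write_gb_day = 8640.0;    /* sustained 100 MB/s all day (assumed) */

    double total_writes_gb = capacity_gb * pe_cycles;
    double lifetime_years  = total_writes_gb / write_gb_day / 365.0;

    printf("endurance: %.1f years\n", lifetime_years);  /* ~2.0 years */
    return 0;
}
```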

Q: Are the storage vendors *really* going to get behind pNFS and drive it?

A (Sun): Yes, with Lustre and ZFS backend file systems
A (Panasas): There are pluggable modules in the standard, which allow customization.
A (EMC): Yes, and our shipping code should be very close to the final spec.

1 comment:

Strypdbass said...

Interesting that there was no mention of memory subsystems. We have higher-performance processors and more storage requirements. It would seem some of the interesting memory technology today would be part of a balanced solution.