Where Hard Drives Are Headed


The first hard drive roared to life in 1956. Designed and built by IBM, it was as big as a refrigerator and loaded with 50 two-foot aluminum platters, each coated with an iron oxide paint that served as the magnetic storage medium. The 1200rpm drive – dubbed the 305 RAMAC – was accessed by a pneumatic read/write mechanism and provided a whopping 5MB of storage space.


Flash forward to 2002, and the more things change, the more they stay the same. Millions of transistors have been added – the device itself has shrunk as dramatically as the hero of a grade-B science fiction movie – but the basic mechanics remain the same. Look ahead three to five years, and what is perhaps even more surprising is that no immediate successor to this amazingly durable technology seems close to realization. “I spent my early career looking for replacements for magnetic recording, and I think 10 years ago those replacements were closer than they are today,” says Vic Jipson, an executive vice president for Maxtor, Milpitas, CA.


“Call me skeptical, but I’ve been hearing about [hard drive] replacement technologies for a long time and they haven’t happened,” says Mark Geenen, president of the research firm TrendFocus, Los Altos, CA. “I just don’t see there being any significant competing technology in PCs and servers for a long time.”


What accounts for this amazing longevity? Continuous improvement, mainly. Most impressive has been the increase in areal density – the number of bits per square inch of storage space – which rose by 60% per year from 1991 through 1996 and by a stunning 100% per year from 1997 to 2002. Hard drive performance increases have been less spectacular, but new caching techniques and plummeting costs per gigabyte have neutralized competition from ultra-fast solid state disks, which continue to cost hundreds of times more than conventional drives.
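
For a sense of how quickly those rates compound, the short Python sketch below simply multiplies out the 60% and 100% annual figures quoted above; the five-year spans are approximations of the periods mentioned, and nothing here comes from a vendor roadmap.

# Illustrative compound-growth arithmetic for the areal-density figures above.
def cumulative_growth(annual_rate, years):
    """Return the total multiplier after compounding annual_rate for `years` years."""
    return (1.0 + annual_rate) ** years

early = cumulative_growth(0.60, 5)   # roughly 1991-1996 at 60% per year
late = cumulative_growth(1.00, 5)    # roughly 1997-2002 at 100% per year

print(f"60% per year for 5 years  -> ~{early:.0f}x areal density")
print(f"100% per year for 5 years -> ~{late:.0f}x areal density")
print(f"combined                  -> ~{early * late:.0f}x over the decade")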


Peer far enough down the road, of course, and the outlines of the hard drive’s successors begin to emerge, including new storage schemes involving lasers or plastic disks. But over the next three to five years, the hard drive almost certainly will stand alone as the workhorse of computer storage. The general consensus is that, starting next year, areal density increases will slip back to 60% year over year, but that will still result in terabyte drives by 2005. For mainstream servers and desktops, however, the most important hard drive improvements will derive from smarter interfaces and a new breed of high-performance 2 1/2-inch drives.


Drowning in gigabytes

Server and desktop hard drives have different requirements, but their immediate futures hinge on similar market forces. On the desktop, hard drive capacity has already far outstripped necessity, according to TrendFocus’ Geenen. “The disk drive industry has for years been deluding itself that, oh God, people need 40, 60, 80, 100GB of storage,” he says. “That’s beyond the lunatic fringe. I don’t know what subspecies of the race would demand 120GB on their desktop.”


As most storage professionals know, there’s even less motivation to buy large-capacity drives for servers. The higher the areal density, the more data a platter can hold – and the more a platter’s read/write head must skip around to access that data. Today, wide interfaces, low CPU utilization and high rotational speeds have largely made transfer rates a moot point – the average ATA hard drive can stream half a dozen MPEG2 videos simultaneously without choking. Instead, the number of I/Os per second (I/Ops) is the main constraint on server hard drive performance – which drops as the number of gigabytes per read/write head increases.


That makes I/Ops – not capacity – the real frontier for future server drives. Currie Munce, advanced hard disk drive technology director for IBM, says that fact is reflected in a pattern prevalent today: “People willing to trade dollars per gigabyte in exchange for I/Ops per gigabyte,” so they connect smaller-capacity drives to RAID controllers that stripe data for fast access.
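
The trade-off Munce describes comes down to simple arithmetic: spreading the same capacity across more spindles multiplies the available I/Ops. The sketch below makes that concrete with invented per-drive figures; none of the capacities or I/Ops numbers come from an actual product.

# Rough illustration of trading dollars per gigabyte for I/Ops per gigabyte.
# All capacities and I/Ops figures are invented for illustration.
def iops_per_gb(drive_count, gb_per_drive, iops_per_drive):
    total_gb = drive_count * gb_per_drive
    total_iops = drive_count * iops_per_drive   # striping spreads requests across spindles
    return total_iops / total_gb

one_big = iops_per_gb(drive_count=1, gb_per_drive=144, iops_per_drive=300)
many_small = iops_per_gb(drive_count=4, gb_per_drive=36, iops_per_drive=300)

print(f"one 144GB drive : {one_big:.2f} I/Ops per GB")
print(f"four 36GB drives: {many_small:.2f} I/Ops per GB")  # same capacity, four times the spindles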


The road to higher I/Ops rates is paved with fast-spinning drives working in parallel. Instead of areal density barriers, the main obstacles are rotational speed, physical space, cooling concerns and power consumption. As an example, Munce recounts an anecdote about a server farm once planned for the Seattle suburbs, which would have sucked up one-eighth of Seattle’s total power capacity. Watts per gigabyte, he says, is an underrated metric.
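
Watts per gigabyte is trivial to compute, as the sketch below shows; the wattage and capacity figures are stand-ins chosen for illustration, not measurements of real drives.

# Watts per gigabyte for two hypothetical drives (all figures are stand-ins).
drives = [
    {"name": "15K enterprise drive (assumed)", "capacity_gb": 36, "watts": 15.0},
    {"name": "7200rpm ATA drive (assumed)", "capacity_gb": 120, "watts": 10.0},
]

for d in drives:
    print(f'{d["name"]}: {d["watts"] / d["capacity_gb"]:.3f} W/GB')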


That’s one reason why hard drive technologists agree that lower-power drives with form factors 2-1/2 inches or smaller represent the next major step in hard drive evolution. “The normal response in the disk drive industry over the last 45 years has been: ‘When the mechanics get tough, the tough get smaller’,” says Maxtor’s Jipson.

But hasn’t the hard drive already gotten smaller? Laptops typically use 2 1/2-inch models, while IBM’s Microdrive for consumer devices sports a tiny, 1-inch platter. Munce says, “Mobile drives today are optimized for high shock and minimum power operation for longer battery life.” Not to mention that mobile drives spend most of their time idle – as opposed to server drives that must run flawlessly 24×7. To be viable in server applications, Munce believes that mini drives must be “re-optimized” for reliability and performance, something that IBM is “certainly looking at.”


A clear leader in this area is Seagate, Scotts Valley, CA, which already uses 2 1/2-inch platters in its 15,000rpm SCSI drives. “The principal reason for going to smaller discs in enterprise 15K devices is to hold power consumption at a practical level – and also to improve time to data by reducing the physical distances that need to be covered,” says Nigel Macleod, senior vice president for Seagate’s Advanced Concepts Labs. But Macleod believes Seagate may have already reached the practical limit for rotational speed, noting that pushing drives faster than 15,000rpm produces “diminishing returns” in improved access times. And he offers no road map for drives with platters smaller than 2 1/2 inches.


The path to miniaturization has several obstacles, the immediate one being the fine-line photolithography required to fabricate read/write heads. “Previously, the disk drive industry lagged the semiconductor industry relative to line-width requirements in photolithography by two to three years,” says Geenen. “Right now, we’re pushing the envelope as to what they can do. And hitting the next generation of recording heads will require tolerances smaller than anything the semiconductor industry can currently provide.”


Thanks to IBM, the areal density of magnetic media itself shouldn’t be a gating factor in miniaturization for some time. In 2001, the company unveiled its antiferromagnetically-coupled (AFC) media, a multilayer scheme currently used in mainstream drives that will result in drives with areal densities of 100 gigabits per square inch – or desktop drives topping 400GB – by 2003. Technologists speculate that AFC will be pushed even further, to 150Gb per square inch. Beyond that, perpendicular recording will be required.
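
Translating areal density into drive capacity is mostly geometry. The rough sketch below assumes a usable recording band on a 3.5-inch platter and a platter count; both are illustrative guesses, and real drives give up some raw capacity to formatting and servo information.

# Back-of-the-envelope: areal density to drive capacity (geometry and platter count assumed).
import math

areal_density_gbit_per_in2 = 100               # the AFC figure cited above
outer_radius_in, inner_radius_in = 1.84, 1.0   # assumed usable band on a 3.5-inch platter
surfaces = 6                                    # assume three platters, both sides used

band_area_in2 = math.pi * (outer_radius_in**2 - inner_radius_in**2)
gbits_per_surface = areal_density_gbit_per_in2 * band_area_in2
capacity_gb = gbits_per_surface * surfaces / 8  # 8 bits per byte

print(f"~{capacity_gb:.0f}GB raw capacity under these assumptions")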


ATA meets the back office

While mini server drives may take a while to reach the mainstream, Serial ATA II – a new drive interface spec that should be finalized in the second half of this year – may have a big impact on server-based storage as early as next year. An extension to the Serial ATA spec – which stipulates higher transfer rates and easier device installation for desktop IDE drives – Serial ATA II will pose a direct challenge to today’s parallel SCSI by adding server and networked storage features to ATA drives.


Serial ATA, just like plain old ATA, is a one-to-one controller-to-drive architecture rather than a full-fledged, parallel storage bus like SCSI or Fibre Channel (FC). But Serial ATA reduces the number of leads in the controller-to-drive connection, enabling manufacturers to consolidate perhaps as many as eight ATA controllers on a single die. More to the point, Serial ATA II will at last provide enhanced IDE (EIDE) drives with a switched architecture, enabling multiple servers to be connected together at ATA interface speeds.
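
One way to see what point-to-point links buy is to compare aggregate bandwidth against a shared bus, as in the toy calculation below; the link speeds and drive count are round numbers chosen for illustration rather than figures from either spec.

# Toy comparison: one shared parallel bus vs. a dedicated serial link per drive.
drive_count = 8
shared_bus_mb_s = 320   # illustrative shared-bus bandwidth, split among all drives on it
per_link_mb_s = 150     # illustrative per-drive serial link bandwidth

print(f"shared bus: {shared_bus_mb_s / drive_count:.0f}MB/s per drive when all {drive_count} drives are busy")
print(f"point-to-point: {per_link_mb_s}MB/s per drive, {per_link_mb_s * drive_count}MB/s aggregate")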


The Serial ATA II spec isn’t complete yet, but observers believe that chipsets will be priced well below equivalent SCSI chipsets. In part, that’s because Serial ATA II in its initial version won’t attempt to emulate the sophistication of the SCSI command set. But a second version of the Serial ATA II spec – slated for development in 2003 – may well include command queuing and reordering for routing simultaneous requests among multiple drives – the key SCSI advantage in server I/Ops today. According to the non-profit Serial ATA Working Group, devices compliant with the second version of the Serial ATA II spec should appear on the market by 2004.
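
Command queuing and reordering is, at bottom, a scheduling problem: the drive accepts several outstanding requests and services them in an order that minimizes head movement rather than in arrival order. The sketch below shows a classic elevator-style reordering by block address; it is a generic illustration, not the algorithm defined by Serial ATA II or SCSI.

# Generic sketch of request reordering (elevator-style), not any vendor's algorithm.
def reorder_requests(queue, head_position):
    """Service queued block addresses in one sweep from the current head position."""
    ahead = sorted(lba for lba in queue if lba >= head_position)
    behind = sorted((lba for lba in queue if lba < head_position), reverse=True)
    return ahead + behind   # sweep outward first, then work back

pending = [9000, 120, 6400, 300, 7800]
print(reorder_requests(pending, head_position=5000))
# -> [6400, 7800, 9000, 300, 120]  (far less back-and-forth than arrival order)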


But can ATA drives match the reliability of SCSI models? Absolutely, says Maxtor’s Jipson, who observes that Network Appliance already employs ATA drives in its NearStore line of storage solutions. “We believe that ATA drives – when properly done – can take reliability off of the table as a concern,” Jipson says. However, he says I/Ops-intensive applications will continue to demand high-rpm SCSI drives.


TrendFocus’ Geenen agrees that Serial ATA II drives won’t knock high-end SCSI or FC off the top of the stack, but he thinks the effect on the server drive market as a whole could be earthshaking. “If the Serial ATA promise holds,” he says, “a lot of companies will choose to walk away either partially or fully from SCSI.”


New life for SCSI

Seagate’s Macleod disagrees, arguing that Serial ATA “simply does not have the functionality and reliability levels needed for mission-critical class data storage.” Macleod acknowledges that Serial ATA could make sense for near-line storage and tape replacement, “but, in general, Serial ATA is a poor choice for truly mission-critical applications.”


Instead, Macleod believes that the forthcoming Serial-Attached SCSI (SAS) specification – which he sees as a “natural evolution” of current parallel SCSI technology – will have a better chance at succeeding parallel SCSI as the enterprise-class interface of choice. Championed by Compaq, Seagate, IBM, and Maxtor, SAS will exploit some of the development done on Serial ATA, including smaller connectors and a lower command-set overhead. According to the Serial Attached SCSI Working Group, the ultimate goal is to create a universal connector for hard drives, so a single server or storage appliance could accommodate Serial ATA and SAS devices.


The first SAS products are expected to appear in 2004. The new specification will enable 128 devices to be attached to one SCSI bus, a giant step up from the current 16-device limit. And while the fastest current SCSI interface tops out at 320MB/s, top-end SAS devices will sport 600MB/s interfaces by 2005 – twice the top speed proposed for Serial ATA II devices.


Of course, faster interfaces simply raise the ceiling on throughput; it’s up to the drive itself to fill that capacity. In fact, today’s parallel SCSI drives rarely bump up against the prevalent 160MB/s interface limit. But as Macleod says, “Interface performance is critical in accessing cached data.” Over the next few years, he anticipates that Seagate and others are likely to add larger, smarter buffers to hard drives to increase performance. Already, the incremental performance benefits delivered by the large 8MB buffer in Western Digital’s Caviar Special Edition drives have attracted industry attention.
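
A drive buffer is essentially a small cache sitting in front of the platters. The sketch below models a tiny read cache with least-recently-used eviction; it is a conceptual illustration, not a description of any drive's actual firmware.

# Conceptual read-cache sketch with LRU eviction; not any drive's real firmware.
from collections import OrderedDict

class DriveBuffer:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block number -> cached data

    def read(self, block, read_from_platter):
        if block in self.blocks:                 # cache hit: no mechanical access needed
            self.blocks.move_to_end(block)
            return self.blocks[block]
        data = read_from_platter(block)          # cache miss: go to the platter
        self.blocks[block] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict the least recently used block
        return data

buf = DriveBuffer(capacity_blocks=4)
print(buf.read(7, read_from_platter=lambda b: f"data-{b}"))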


A near future of McDrives?

IBM’s Munce notes that new hard drive standards always present multiple opportunities for innovation. For example, future Serial ATA II specs might piggyback support for the development of intelligent drives that understand more about the data they store. We could then see drives that write to disk intelligently – with, say, large media files on the outer tracks for faster transfer rates and small, frequently-accessed files on the inner tracks where everything is packed closer together.
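
Such a placement policy could be expressed as a simple rule: big streaming files go to the outer tracks, where sustained transfer rates are highest, and small, frequently read files go to the inner tracks. The heuristic sketched below is purely hypothetical; the size and access-count thresholds are invented.

# Hypothetical placement heuristic along the lines described above.
# Thresholds and zone names are invented for illustration.
LARGE_FILE_BYTES = 100 * 1024 * 1024   # treat files over ~100MB as streaming media
HOT_ACCESS_COUNT = 50                  # treat files read more than 50 times as "hot"

def choose_zone(size_bytes, access_count):
    if size_bytes >= LARGE_FILE_BYTES:
        return "outer tracks"          # highest sequential transfer rate
    if access_count >= HOT_ACCESS_COUNT:
        return "inner tracks"          # small, frequently accessed files, per the scheme above
    return "default zone"

print(choose_zone(size_bytes=700 * 1024 * 1024, access_count=3))   # -> outer tracks
print(choose_zone(size_bytes=8 * 1024, access_count=200))          # -> inner tracks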


Enterprise drives already use Self-Monitoring, Analysis and Reporting Technology (SMART) to alert IT to potential problems before they occur. “We see SMART’s role expanding to provide our engineers and quality technologists with information to enable ever-more-reliable storage devices,” says Macleod.
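
Conceptually, SMART exposes attribute values that the host polls and compares against thresholds. The sketch below checks a few made-up attribute readings against made-up thresholds; real attribute names, scales, and reporting interfaces vary by vendor and are not modeled here.

# Conceptual SMART-style check: made-up attributes and thresholds, not a real interface.
attributes = {
    "reallocated_sectors": {"value": 12, "threshold": 36},
    "spin_retry_count": {"value": 0, "threshold": 1},
    "temperature_celsius": {"value": 52, "threshold": 55},
}

for name, a in attributes.items():
    # In this toy model, reaching the threshold means "alert before a failure occurs".
    status = "ALERT" if a["value"] >= a["threshold"] else "ok"
    print(f"{name}: value={a['value']} threshold={a['threshold']} -> {status}")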


But added intelligence of this kind, which would require substantial R&D, has no specific time frame. IBM’s Almaden research facility is packed with all sorts of great ideas – some of which have made it to market and some of which never will. According to Geenen, even miniaturization may take a while to progress further. “There are a lot of issues that may slow it down, not the least of which is that it will take a significant level of investment and risk-taking to push form-factor downsizing,” he says. “And there are very few companies today that are equipped to fund or to take that risk.” As Geenen points out, IBM’s recent decision to spin off its hard drive operation into a joint venture with Hitachi speaks volumes about the commoditization of mass storage.


True, areal density will continue to increase dramatically. But high-performance server hard drives will be unlikely to exploit those advances, because more bits per square inch means fewer I/Ops per gigabyte – unless high-speed drives get a whole lot smaller in a hurry, which no one anticipates. But that doesn’t mean huge-capacity drives will be left out of the data center, says Seagate’s Macleod. “For insanely high I/Ops systems, 15K drives are the answer – and in the future, even faster drives or parallel arrays of tiny drives will be the direction. For less I/O-intensive applications like backup storage, larger, high-capacity, slower drives provide a much more cost-effective answer.”


Storage technologists agree that the magnetic hard drive will endure at least until the end of the decade. By that time, new storage schemes – involving lasers or plastic discs – may replace magnetic drives in many applications. Remember, though, rumors of the hard drive’s demise have been greatly exaggerated before.


The trend, says Macleod, will be toward drives with ever more application-specific performance and cost characteristics. And for mainstream storage systems, the immediate future appears to lie in cheaper, incrementally faster drives that use redundancy and better monitoring to provide fault tolerance at lower cost. New storage technology just doesn’t appear to be right around the corner, astonishing increases in capacity notwithstanding. But cheaper, smaller and slightly faster hardware? You can always count on the computer industry for that.


Eric Knorr is a freelance writer working in Northern California.