SSD Myths: 5 Myths About SSD Issues Debunked

In Backup & Archiving Hardware by Michael Grecco

Solid-state drives are no longer brand-new technology; they have been around the block, so to speak. SSDs are mature now, and the technology has surpassed hard disk drives in performance, manageability and overall value for enterprise storage.

Although SSDs have a firm foothold in the data center mainstream, misconceptions persist around usage, performance, and cost.

IT pros and storage admins regularly reach for solid-state drives (SSDs) to replace hard disk drives (HDDs). Once they break down the myths behind SSD issues, they'll find that SSDs are a boon to storage management and computing and can improve data center efficiency.

SSD Myth 1: SSDs aren’t big enough

One of the lingering SSD issues has been complaints about smaller capacity; in fact, SSDs have surpassed bulk HDDs in capacity. A 2.5-inch 32 TB SSD is already shipping, and IT pros can expect 50 TB or more in the near future. With HDDs stuck at 16 TB or less, SSDs will use fewer enclosures and less power and deliver an all-around better result. SSD capacity has become a non-issue in today's data center.
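To put the enclosure math in perspective, here is a minimal sketch; the 1 PB capacity target and the 24-bay enclosure size are illustrative assumptions, while the drive capacities come from the figures above.

```python
import math

TARGET_TB = 1000          # hypothetical 1 PB raw capacity target (assumption)
BAYS_PER_ENCLOSURE = 24   # assumed 24-bay 2.5-inch enclosure

def drives_and_enclosures(drive_tb: float) -> tuple[int, int]:
    """Return (drive count, enclosure count) needed to hit the capacity target."""
    drives = math.ceil(TARGET_TB / drive_tb)
    return drives, math.ceil(drives / BAYS_PER_ENCLOSURE)

for label, size_tb in [("32 TB SSD", 32), ("16 TB HDD", 16)]:
    drives, shelves = drives_and_enclosures(size_tb)
    print(f"{label}: {drives} drives across {shelves} enclosures")
# 32 TB SSD: 32 drives across 2 enclosures
# 16 TB HDD: 63 drives across 3 enclosures
```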

SSD Myth 2: SSDs are too expensive

Another of the major SSD issues is drive cost. Pricing for these drives dropped rapidly over the past few years, then flattened because of product delays during the move to 3D NAND fabrication. With that transition now behind the industry, we can expect prices to drop again.

Even so, there is still a price gap compared to HDDs. Bear in mind that servers with SSDs can do much more work, and do it faster, so the cost differential is more than offset. And don't forget that compression will lower the per-terabyte price of SSDs well below that of HDDs.
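To see how that offset works, here is a minimal back-of-the-envelope sketch; the dollar-per-terabyte figures and the 5x compression ratio (discussed later in this article) are illustrative assumptions, not quoted prices.

```python
# Illustrative figures only; real street prices vary.
ssd_price_per_tb = 80.0    # assumed raw $/TB for an enterprise SSD
hdd_price_per_tb = 20.0    # assumed raw $/TB for an enterprise HDD
compression_ratio = 5.0    # the ~5x ratio discussed later in this article

# Effective $/TB of usable data on the SSD once compression is applied.
ssd_effective = ssd_price_per_tb / compression_ratio
print(f"SSD: ${ssd_price_per_tb:.0f}/TB raw, ~${ssd_effective:.0f}/TB with 5x compression")
print(f"HDD: ${hdd_price_per_tb:.0f}/TB raw")
```

Under these assumed numbers, the compressed SSD comes in below the HDD per usable terabyte, before taking any credit for the extra work each server can do.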

Moreover, the price difference often discussed in industry circles conflates SATA HDDs with enterprise-performance SSDs. SATA solid-state units are already less than half the price of a comparably sized SAS drive yet outperform it by a big margin, which is why SAS drives are in decline. Of course, there are some high-priced NVMe drives as well, but all indications are that they're closing quickly on SATA SSD prices for equivalent capacity.

SSD Myth 3: SSDs aren’t durable

There’s a bit of truth behind the myth that SSDs wear out, but today’s SSD products are specified to last many years. SSDs last longer these days thanks to better electronics, signal processing, and smarter failure detection and correction.

In addition, SSDs are rated for light or heavy write workloads, measured in whole-drive writes per day (DWPD). Drives built for heavier write loads simply have more spare space allocated for over-provisioning, which increases cost or reduces usable capacity.

Specifications for some HDDs include an equivalent writes-per-day figure, and the numbers are usually not much different from SSD specs, implying that HDDs aren't immune to wearing out either. The bottom line is that SSDs are as reliable as HDDs and a whole lot faster.
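To make the drive-writes-per-day rating concrete, here is a small worked example; the capacities, DWPD ratings and five-year warranty below are hypothetical, chosen only to show the arithmetic.

```python
def lifetime_writes_tb(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total terabytes that can be written over the warranty period at the rated DWPD."""
    return capacity_tb * dwpd * 365 * warranty_years

# Hypothetical read-intensive and write-intensive enterprise SSDs.
for label, capacity_tb, dwpd in [("read-intensive", 7.68, 1.0), ("write-intensive", 6.4, 3.0)]:
    tbw = lifetime_writes_tb(capacity_tb, dwpd)
    print(f"{label}: {capacity_tb} TB at {dwpd} DWPD -> ~{tbw:,.0f} TB written over 5 years")
```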

SSD Myth 4: SSDs can simply replace HDDs in existing arrays

This is another one of the SSD issues that may vex IT pros and storage admins. The problem is that today's SSDs are so fast that typical array controllers can only keep up with a handful of them. Arrays were designed around the I/O performance of HDDs, which are roughly 1,000x slower at random I/O and as much as 100x slower at sequential operations.

Array controllers are designed to consolidate many slow HDD data streams into a couple of moderately fast Fibre Channel links, so they become a serious bottleneck with SSDs. Use SSD-centric storage appliances instead, and consider a multichannel 100 GbE storage backbone.
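A rough sketch of why the controller becomes the choke point: the random-I/O figures below are ballpark assumptions, not measurements, but they show how few SSDs it takes to saturate a controller built for HDD workloads.

```python
# Ballpark random-read figures (assumptions, not measurements).
HDD_IOPS        = 200        # ~random IOPS from one 7,200 RPM HDD
SSD_IOPS        = 500_000    # ~random IOPS from one enterprise NVMe SSD
CONTROLLER_IOPS = 1_000_000  # assumed ceiling of a legacy dual-controller array

print(f"HDDs needed to saturate the controller: {CONTROLLER_IOPS // HDD_IOPS}")  # 5000
print(f"SSDs needed to saturate the controller: {CONTROLLER_IOPS // SSD_IOPS}")  # 2
```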

There’s a similar bottleneck problem with servers, as the old Serial-Attached SCSI (SAS) and Serial Advanced Technology Attachment (SATA) interfaces just can’t keep up with drive speed. The new NVMe protocol is much faster, and it also reduces system overhead dramatically by consolidating interrupts and simplifying queue management. IT pros and storage admins are moving to NVMe over Ethernet as a way to share drives across a whole server cluster, speeding up HCI systems.

SSD Myth 5: SSD management is complicated

Another of the early SSD issues was write amplification, an artifact of how blocks are deleted on an SSD. Instead of simply being tagged onto a list of free blocks, flash cells must be reset to an unwritten state before the block can be rewritten. The complication is that flash can only be erased in large erase blocks, typically 2 MB or more, so any valid data in the erase block must first be rewritten elsewhere.
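The arithmetic behind write amplification is easy to sketch: if an erase block still holds valid pages when it is reclaimed, those pages have to be copied before the erase, so the flash absorbs more writes than the host issued. The block size and fill levels below are illustrative.

```python
def write_amplification(valid_kib: int, host_write_kib: int) -> float:
    """NAND bytes written divided by host bytes written when reclaiming one block.

    Any still-valid data in the erase block must be copied elsewhere before the
    erase, so the flash writes (valid + host) bytes to serve the host's write.
    """
    return (valid_kib + host_write_kib) / host_write_kib

# A 512 KiB host write into a block that was already cleared vs. one that is
# half full of valid data (erase block assumed to be ~2 MiB, as above).
print(write_amplification(valid_kib=0, host_write_kib=512))     # 1.0 -- no extra writes
print(write_amplification(valid_kib=1024, host_write_kib=512))  # 3.0 -- 3x amplification
```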

Performing that erase-and-copy cycle at the moment the server issues a write slows the write considerably, even though a fast memory buffer holds the data as soon as it arrives from the server. It's best if the block has been cleared well in advance using the TRIM command, which tells the drive which blocks no longer hold valid data. TRIM support is built into the driver, but you may need to verify that it has been turned on in the OS. With TRIM, writes stay as fast as when the drive was empty.
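On Linux, one quick way to confirm that the OS can pass TRIM (discard) commands to a drive is to read the block layer's sysfs attributes; this is a minimal, Linux-only sketch, and the device name is just an example.

```python
from pathlib import Path

def supports_trim(device: str) -> bool:
    """True if the Linux block layer reports discard (TRIM) support for the device.

    A non-zero discard_max_bytes means the kernel can send TRIM/discard
    commands down to this drive.
    """
    attr = Path(f"/sys/block/{device}/queue/discard_max_bytes")
    return attr.exists() and int(attr.read_text()) > 0

if __name__ == "__main__":
    dev = "nvme0n1"  # example device name; adjust for your system
    print(f"{dev}: TRIM supported = {supports_trim(dev)}")
```

Whether TRIM actually runs is a separate question: many distributions schedule it periodically (for example, via a weekly fstrim job), so it's worth checking that the job is enabled as well.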


In this same vein, don't defragment an SSD; it just wastes time and I/O cycles and reduces drive life. The reason is simple: because of the write process, blocks end up scattered across the whole drive, but unlike an HDD, there's no latency penalty for reaching any one of them.

On the other hand, do look at compressing the data on your SSDs. This will increase performance even further, as it typically means writing and reading 5x fewer blocks; it will also increase effective drive capacity by around 5x.

There is another boost if the drives are in any type of networked storage system, since the network data load is also reduced by 5x if compression and decompression are done on the server. This is a huge money saver, and it is only possible because SSDs have a lot of extra I/O cycles that can be used for background compression.
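As a rough sanity check on that kind of ratio, the sketch below compresses a sample buffer with Python's zlib and reports the ratio, the resulting effective capacity of a drive and the implied reduction in data moved; the sample data is deliberately repetitive, and real workloads will compress more or less well depending on their content.

```python
import zlib

def compression_report(data: bytes, drive_tb: float = 32.0) -> None:
    """Print compression ratio, effective drive capacity and I/O reduction for data."""
    compressed = zlib.compress(data, 6)
    ratio = len(data) / len(compressed)
    print(f"compression ratio:  {ratio:.1f}x")
    print(f"effective capacity: ~{drive_tb * ratio:.0f} TB on a {drive_tb:.0f} TB drive")
    print(f"blocks (and network bytes) moved: ~{100 / ratio:.0f}% of the uncompressed amount")

# Repetitive sample data; ratios on real workloads vary widely.
sample = b"timestamp,host,metric,value\n" * 50_000
compression_report(sample)
```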