Tag Archives: SSDs

FreeBSD ZFS vs. Linux EXT4/Btrfs RAID With Twenty SSDs


With FreeBSD 12.0 running great on the Dell PowerEdge R7425 server with dual AMD EPYC 7601 processors, I couldn’t resist using the twenty Samsung SSDs in that 2U server to run some fresh FreeBSD ZFS RAID benchmarks, along with reference figures from Ubuntu Linux using its native Btrfs RAID capabilities and EXT4 atop MD-RAID.

FreeBSD 12.0 with ZFS at its default settings was tested on a single disk, followed by ZFS striped across all twenty disks, and then ZFS in RAIDZ1, RAIDZ3, and RAID10 configurations.
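
As a rough reference, those pool layouts map onto zpool commands along the lines of the sketch below. This is illustrative only: the article does not publish its exact commands, and the device names (da0 through da19) and the pool name are assumptions.

```python
import subprocess

# Hypothetical FreeBSD device names; adjust to match the actual system.
DISKS = [f"da{i}" for i in range(20)]

LAYOUTS = {
    "stripe": DISKS,                                           # striped across all twenty disks, no redundancy
    "raidz1": ["raidz1", *DISKS],                              # single parity across all twenty disks
    "raidz3": ["raidz3", *DISKS],                              # triple parity across all twenty disks
    "raid10": [arg for pair in zip(DISKS[0::2], DISKS[1::2])   # ten two-way mirrors, striped together
               for arg in ("mirror", *pair)],
}

def create_pool(name: str, layout: str) -> None:
    """Create a ZFS pool with one of the tested layouts (requires root)."""
    subprocess.run(["zpool", "create", name, *LAYOUTS[layout]], check=True)

# Example: create_pool("bench", "raidz3")
```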

When switching the Dell PowerEdge R7425 server over to Ubuntu 18.10 and upgrading to the Linux 4.20 kernel, Btrfs was tested on a single disk, in a RAID10 configuration, and in a RAID0 configuration. EXT4 using Linux software RAID (MD-RAID) was benchmarked as well on a single disk, in RAID10, and in RAID0 across the twenty Samsung 860 EVO SSDs. Default mount options were used and other settings were kept at their OS vendor defaults. ZFS on Linux benchmarks will follow once the upcoming ZoL 0.8 release is available.
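
On the Linux side, a comparable sketch of how the Btrfs and EXT4/MD-RAID layouts might be created is shown below; again, the device names (/dev/sdb through /dev/sdu) are assumptions, and everything else stays at distribution defaults as described above.

```python
import subprocess

# Hypothetical Linux device names: /dev/sdb ... /dev/sdu
DISKS = [f"/dev/sd{chr(ord('b') + i)}" for i in range(20)]

def make_btrfs(level: str) -> None:
    """Create a Btrfs filesystem spanning all twenty drives (level: 'raid0' or 'raid10')."""
    subprocess.run(["mkfs.btrfs", "-f", "-d", level, "-m", level, *DISKS], check=True)

def make_ext4_on_md(level: str) -> None:
    """Build a Linux software RAID array with mdadm, then format it with EXT4 defaults."""
    subprocess.run(["mdadm", "--create", "/dev/md0", f"--level={level}",
                    f"--raid-devices={len(DISKS)}", *DISKS], check=True)
    subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)

# Example: make_btrfs("raid10")  or  make_ext4_on_md("10")
```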

Via the Phoronix Test Suite, a variety of storage I/O benchmarks were run on this Dell PowerEdge R7425 server with dual EPYC 7601 processors and twenty Serial ATA 3.0 SSDs.


6 Reasons SSDs Will Take Over the Data Center


The first samples of flash-based SSDs surfaced 12 years ago, but only now does the technology appear poised to supplant hard drives in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are as much as 1,000x faster than hard-disk drives for random I/O.

Part of the answer is a misunderstanding that overlooks whole systems and focuses instead on individual storage elements and CPUs. It led the industry to fixate on cost per terabyte, when the real focus should have been the total cost of a solution with or without flash. Simply put, most systems are I/O bound, and the use of flash inevitably means needing fewer systems for the same workload. That typically offsets the cost difference.
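
A back-of-the-envelope sketch of that argument, using made-up prices purely for illustration (none of these figures come from the article):

```python
# Illustrative only: all prices and server counts below are assumptions.
# The point is that total solution cost, not $/TB of the drives, is what matters.
def solution_cost(servers_needed: int, server_cost: float, storage_cost_per_server: float) -> float:
    return servers_needed * (server_cost + storage_cost_per_server)

# An I/O-bound workload that needs 10 HDD-based servers might need only 4 flash-based servers.
hdd_total = solution_cost(servers_needed=10, server_cost=8_000, storage_cost_per_server=2_000)
ssd_total = solution_cost(servers_needed=4,  server_cost=8_000, storage_cost_per_server=6_000)
print(hdd_total, ssd_total)   # 100000 vs 56000: the flash solution wins despite dearer drives
```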

The turning point in the storage industry came with all-flash arrays: simple drop-in devices that instantly and dramatically boosted SAN performance. This has evolved into a model of two-tier storage with SSDs as the primary tier and a slower, but cheaper, secondary tier of HDDs.

Applying the new flash model to servers provides much higher server performance, just as price points for SSDs are dropping below enterprise hard drive prices. With favorable economics and much better performance, SSDs are now the preferred choice for primary tier storage.

We are now seeing the rise of Non-Volatile Memory Express (NVMe), which aims to replace SAS and SATA as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far more than its predecessors. In the last year, NVMe pricing has come close to SAS drive prices, making the solution even more attractive. This year, we’ll see most server motherboards supporting NVMe ports, likely as SATA-Express, which also supports SATA drives.

NVMe is internal to servers, but a new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from a server out to arrays of NVMe drives and to all-flash and other storage appliances, complementing, among other things, the new hyper-converged infrastructure (HCI) model for cluster design.

The story isn’t all about performance, though. Vendors have promised to produce SSDs with 32TB and 64TB capacities this year. That’s far larger than the biggest HDD, which is currently just 16TB and stuck at a dead end, at least until HAMR is worked out.

The brutal reality, however, is that solid state opens up form-factor options that hard disk drives can’t achieve. Large HDDs will need to stay in the 3.5-inch form factor. We already have 32TB SSDs in a 2.5-inch size, plus new form factors such as M.2 and the “ruler” (an elongated M.2), which allow for a lot of capacity in a small appliance. Intel and Samsung are talking about petabyte-sized storage in 1U boxes.

The secondary storage market is slow and cheap, making for a stronger barrier to entry against SSDs. The rise of 3D NAND and new Quad-Level Cell (QLC) flash devices will close the price gap to a great extent, while the huge capacity per drive will offset the remaining price gap by reducing the number of appliances.

Solid-state drives have a secret weapon in the battle for the secondary tier. Deduplication and compression become feasible because of the extra bandwidth in the whole storage structure, effectively multiplying capacity by factors of 5X to 10X. This drops the cost of QLC-flash solutions below HDDs in price per available terabyte.
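
As a hedged illustration of the price-per-available-terabyte point, using the 5:1 reduction ratio cited above but with placeholder prices that are assumptions, not vendor data:

```python
# Illustrative only: the raw $/TB figures are placeholders, not quotes from any vendor.
def effective_cost_per_tb(raw_cost_per_tb: float, data_reduction_ratio: float) -> float:
    """Price per *available* terabyte once dedupe/compression multiply the raw capacity."""
    return raw_cost_per_tb / data_reduction_ratio

qlc_flash    = effective_cost_per_tb(raw_cost_per_tb=100.0, data_reduction_ratio=5.0)  # $20 per available TB
nearline_hdd = effective_cost_per_tb(raw_cost_per_tb=25.0,  data_reduction_ratio=1.0)  # $25/TB, no reduction
print(qlc_flash < nearline_hdd)  # True: flash wins per available TB despite a higher raw price
```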

In the end, perhaps in just three or four years flash and SSDs will take over the data center and kill hard drives off for all but the most conservative and stubborn users. On the next pages, I drill down into how SSDs will dominate data center storage.


Hard Disk Drives Cling to Life as SSDs Take Off


Despite recent developments in hard-disk drive technology, solid-state drives are on the way to becoming the solution of choice for enterprise storage. SSDs have cost more than HDDs, which held back adoption considerably over the last few years, but the situation changed this year, as I predicted.

The secondary storage market is characterized by high-capacity 3.5-inch hard drives. The segment is price-sensitive rather than performance-oriented and, thanks to hard-drive capacity growth over the last half-decade, remains the primary market for HDDs.

The HDD capacity growth curve has, however, stalled out. Moving beyond 14TB, the capacity Western Digital announced in October and the largest HDD to date, will prove technically difficult. The common approach to larger capacities has been Heat-Assisted Magnetic Recording (HAMR), where a laser is used to magnetically “soften” the area being written. Vendors promise to reach 100TB capacity in eight years, for example, but the technology is very difficult to get right, never mind produce in volume.

WD said it is using an alternative approach, MAMR, which uses microwaves to achieve the softening. The company promises to ship its first MAMR product in 2019, which may prove a bit optimistic. The problem with either approach is that we already have 32TB SSDs, which fit in a smaller 2.5-inch footprint, effectively half the size of these bulk HDDs.

SSD capacity is evolving rapidly, with 3D NAND, die stacking, and QLC cell architectures all poised to drop prices in 2018 and to make even larger capacities available. We can certainly expect 64TB SSDs in 2018 and perhaps even the 100TB units several vendors have promised.

These large SSDs will likely carry a dollars-per-terabyte premium over HDDs for a while, but the capacity increase and reduced size mean far fewer appliances will be needed to store secondary, cold data. Moreover, non-traditional drive packaging such as Intel’s elongated M.2-style “ruler” blades, with 32 blades of 32TB each, promises 5PB in a 1U appliance when combined with compression.
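
The 1U figure checks out arithmetically, assuming roughly the 5:1 data reduction discussed elsewhere in the piece:

```python
# 32 "ruler" blades of 32TB each is about 1PB raw per 1U;
# a 5:1 reduction ratio (an assumption carried over from the compression discussion) gives ~5PB effective.
blades, tb_per_blade, reduction = 32, 32, 5
raw_tb = blades * tb_per_blade             # 1024 TB, roughly 1 PB raw per 1U
effective_pb = raw_tb * reduction / 1000
print(raw_tb, effective_pb)                # 1024 TB raw, ~5.1 PB effective
```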

Already this year, perceptions around SSD costs have been impacted by a realization that the much greater throughput possible with SSD primary storage means that fewer servers are needed to run a given workload, with the resulting savings more than offsetting extra pricing for the drives. Also, the cost of NVMe flash drives has moved close to SAS and SATA pricing, resulting in NVMe now being the interface of choice in servers and even desktops.

At the same time, SSD-based all-flash arrays (AFAs) have displaced the RAID array as the preferred approach in networked storage. Here, because of the high bandwidth available, AFAs support compression of both the primary data they store and the secondary data stream being offloaded to other storage appliances. For most applications, compression results in a 5:1 effective multiplication of the raw capacity. Because HDD primary storage is way too slow, compression is not a viable option for RAID arrays.

A third trend benefiting SSDs is the growth of hyperconverged infrastructure. Based originally on SSDs, HCI has migrated to NVMe SSDs to obtain the response times and throughput those drives bring. Led by Excelero, the next step in HCI is direct connection of NVMe drives to the RDMA Ethernet fabric of the cluster of nodes, removing latency and providing very high throughput. Excelero’s approach opens up directly connecting future NVMe Ethernet drives to the cluster fabric, allowing a great deal of parallelism in the cluster storage design.

The result of all of these trends is that this year the battle for market share is heating up, with SSD pulling ahead strongly in the enterprise drive class. At the same time, SAN-based primary storage is in decline, with RAID array sales falling quarter by quarter.

Can desktops hold the HDD market up for a while? I just bought a new system with a mid-market motherboard; it has four slots for M.2 NVMe SSDs! Gamers will generally go for speed, especially when the price of an NVMe drive is identical to its SATA equivalent.

One bump in the road for SSDs is flash die production capacity. The conversion to 3D NAND was more difficult for suppliers than expected, causing shortages in the first half of this year. With that problem moving into the history column and real capacity gains coming from the recent innovation of die stacking for 3D NAND, 2018 will see supply moving closer to demand. Still, demand for NAND die will be high, so shortages may persist during the transition from HDDs.

With few factors in their favor, hard drives are looking to go the way of the dodo. This isn’t going to be an overnight phenomenon; radical changes like this take half a decade or more to complete, and even at the end there will be a market for legacy systems.

 




6 Ways SSDs Are Cheaper Than Hard Drives


With all the hype and counter-hype on the issue of solid-state drives versus hard-disk drives, it’s a good idea to step back and look at the whole pricing picture. This is a confluence of the relative cost per TB of flash die versus HDD assemblies, the impact of SSD performance on server count for a given workload, and the differential in markups by OEM vendors to their end users.

The capacity of flash die has been increasing at an explosive rate over the last year. The “simple” concept of stacking flash cells in the third dimension, coupled with stacking these 3D die on top of each other to make a “super-die,” has grown capacity by as much as 256 times per flash chip. To put this in perspective, HDD capacity took over 20 years to achieve what SSDs have done in a single year.

I believe SSDs beat HDDs in most use cases today based on total cost of ownership. I’m not just talking about power savings, which are typically $10 or $12 per year. SSDs are blindingly fast, and that makes jobs run fast, too. The result is that you need fewer servers, and in many cases those savings offset the additional cost of the SSDs.

TCO calculations and the cost comparison between SSDs and HDDs are complicated by model class and by vendors’ drive markup approaches. Traditionally, we distinguished enterprise drives with dual-port SAS interfaces from nearline drives with SATA. That distinction has fallen apart with SSDs. Many storage appliances don’t need enterprise dual-port drives, while NVMe is replacing SAS, and soon SATA, as the SSD interface. For many applications, low-cost SSDs are adequate for the job, which changes buying patterns.

Typical OEM vendor markup ratios run as high as 14X for SSDs, making them even more expensive than raw cost would suggest compared with HDDs, which typically see markups of 10X or less. COTS systems are starting to drive these markups down, while buying directly from drive makers (if you are a major cloud service provider) or from master distributors (for mere mortals) opens the door to much lower SSD prices.

There are underlying trends in IT that factor into the cost of storage. First, we are rapidly migrating away from the traditional mainstay of storage, the RAID array, to more compact storage appliances that carry much more software content and, with fewer SSDs, can deliver much more data. Second, the new storage appliances use the high bandwidth of SSDs or flash to compress stored data as a background job; HDDs are too slow to do this. The result is much more storage for the same price.

Let’s look more closely at these factors that make SSDs more economical in the long run.
