Tag Archives: Storage

Haiku OS Picks Up An NVMe Storage Driver


Back in the BeOS days of the ’90s, NVM Express solid-state storage obviously wasn’t a thing, but the open-source Haiku OS it inspired now has an NVMe driver.

Haiku, which aims to be an open-source OS based on BeOS, now has support for NVMe SSDs. The driver didn’t make last September’s Haiku R1 beta, but it is now found within the latest development code.

As of the latest Haiku code, NVMe SSDs should be fully usable under the BeOS-inspired operating system. More details via Haiku.org.

Cloud Storage and Policies: How Can You Find Your Way? | IT Infrastructure Advice, Discussion, Community

Cloud storage is one of the hottest topics today, and rightfully so: new services seem to be added daily. Storage makes up one of the most attractive categories of cloud services, so it is only natural to look for business problems it can solve.

The reality is that storage in the cloud is a whole new discipline. Completely different. Like forget everything you know and let’s start from the beginning. Both Amazon Web Services and Microsoft Azure have many different storage services. Some are like what we have used on-premises, such as Azure File Storage and AWS Elastic Block Store. These resemble traditional file shares and block storage on-premises, yet how they are used can make a very big difference to your experience in the cloud. There are more storage services in the cloud (such as object storage, gateways, and more) that differ from what has traditionally been used on-premises, and that is where it gets interesting.

Let’s first identify why organizations want to leverage the cloud for storage. This may seem a needless step, but it is more critical than ever: the fundamental reason should be that the cloud is the right platform for the storage need. Supporting reasons will also include cloud benefits such as these:

No upfront purchase: This differs from the on-premises storage practice of purchasing for future capacity needs (best guesses, overspending, or badly missed targets are common with that practice!).

Effectively unlimited capacity: Any mathematician will quickly point out that the cloud is not truly unlimited, but from most customers’ perspective the cloud provides effectively unlimited storage options.

Predictable pricing: While not exactly linear, consumption pricing for cloud storage is fairly predictable.

These are some of the good reasons to embrace cloud storage, but beyond the reasons to move, the strong advice is to look at storage policies and usage so that there are no surprises in the future. Part of this is looking at the economics across the complete scope of use. Too often, pricing is seen only as consumption per month. Take AWS S3, for example: S3 Standard storage prices the first 50 TB per month at $0.023 per GB (pricing as of March 2019, US East (Ohio) region). But other aspects of using the storage should absolutely be considered as well, for example the following (a rough cost sketch follows these points):

Getting data into the cloud is often overlooked, but it has a cost as well. This makes how data is written to the cloud important: is data sent in small increments (more write operations or PUT requests), or in relatively fewer, larger increments? This can change the cost profile.

Egress is when data is read out of a cloud storage location, and that has a cost too. One practical optimization is to leverage solutions that retrieve only the pieces of data you need rather than entire datasets.

Deleting data is interesting to think about, not for costs per se, but it should be considered: data in the cloud lives as long as you pay for it, so give thought to ensuring no dead data is left living in the cloud.
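
To make the economics concrete, here is a rough, illustrative monthly cost sketch in Python. Only the $0.023 per GB storage rate comes from the pricing cited above; the PUT, GET, and egress rates are placeholder assumptions and should be replaced with the current published prices for your region.

    # Rough monthly cost sketch for S3 Standard storage (illustrative only).
    # Only the $0.023/GB-month storage rate is taken from the article; the
    # PUT/GET and egress rates below are placeholder assumptions.
    STORAGE_PER_GB = 0.023   # first 50 TB tier, US East (Ohio), March 2019
    PUT_PER_1000 = 0.005     # assumed placeholder rate
    GET_PER_1000 = 0.0004    # assumed placeholder rate
    EGRESS_PER_GB = 0.09     # assumed placeholder rate

    def monthly_cost(stored_gb, puts, gets, egress_gb):
        """Estimate one month's bill from capacity, request counts, and egress."""
        return (stored_gb * STORAGE_PER_GB
                + puts / 1000 * PUT_PER_1000
                + gets / 1000 * GET_PER_1000
                + egress_gb * EGRESS_PER_GB)

    # 10 TB stored, written as many small objects vs. fewer large ones:
    print(monthly_cost(10_240, puts=5_000_000, gets=1_000_000, egress_gb=500))
    print(monthly_cost(10_240, puts=50_000, gets=1_000_000, egress_gb=500))

The only difference between the two calls is how the data was written, which is exactly the cost-profile effect described above.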

But what can organizations do to manage cloud storage from a policy perspective? In a way, some of the same practices as before can be applied, but also leverage the frameworks the cloud platforms provide to help manage usage and consumption. AWS Organizations is a good example, providing policy-based management of multiple AWS accounts; it streamlines account management, billing, and control of cloud services. Similar capabilities exist in Azure with Subscription and Service Management along with Azure RBAC.
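
As a minimal sketch of what policy-based management looks like in code, the snippet below uses boto3 to list the accounts in an AWS Organization and attach an existing service control policy to one of them; the policy ID and account ID are hypothetical placeholders.

    import boto3

    # Minimal sketch: enumerate the accounts in an AWS Organization and attach
    # an existing service control policy (SCP). The IDs below are placeholders.
    org = boto3.client("organizations")

    for account in org.list_accounts()["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])

    org.attach_policy(
        PolicyId="p-examplepolicyid",   # hypothetical SCP ID
        TargetId="123456789012",        # hypothetical member account ID
    )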

By taking a responsible look at new cloud services in light of what we have learned in the past, coupled with the new frameworks available in the cloud, organizations can easily and confidently embrace cloud storage services, not only answering the right-platform question but also managing the storage in a way that lets CIOs and decision makers sleep at night.


Assess USB Performance While Exploring Storage Caching | Linux.com

The team here at the Dragon Propulsion Laboratory has kept busy building multiple Linux clusters as of late [1]. Some of the designs rely on spinning disks or SSD drives, whereas others use low-cost USB storage or even SD cards as boot media. In the process, I was promptly reminded of the limits of external storage media: not all flash is created equal, and in some crucial ways external drives, SD cards, and USB keys can be fundamentally different.

Turtles All the Way Down

Mass storage performance lags that of working memory in the Von Neumann architecture [2], with the need to persist data leading to the rise of caches at multiple levels in the memory hierarchy. An access-speed gap of three orders of magnitude between levels makes this design decision essentially inevitable wherever performance is at all a concern. (See Brendan Gregg’s table of computer speed in human time [3].) The operating system itself provides the most visible manifestation of this design in Linux: any RAM not allocated to a running program is used by the kernel to cache reads from and buffer writes to the storage subsystem [4], leading to the often-repeated quip that there is really no such thing as “free memory” in a Linux system.
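
One way to see that “no free memory” behavior for yourself is to read /proc/meminfo and compare the truly free pages with what the kernel is holding as buffers and page cache. A minimal sketch (Linux only):

    # Minimal sketch: show how much RAM the kernel is using as page cache.
    # Reads /proc/meminfo (Linux only); values there are reported in kB.
    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.strip().split()[0])  # drop the "kB" suffix
        return info

    m = meminfo()
    for field in ("MemTotal", "MemFree", "Buffers", "Cached"):
        print(f"{field:10s} {m[field] / 1024:10.1f} MiB")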

An easy way to observe the operating system (OS) buffering a write operation is to write the right amount of data to a disk in a system with lots of RAM, as shown in Figure 1: a rather improbable half a gigabyte’s worth of zeros is written to a generic, low-cost USB key in half a second, but a 30-second delay follows when the system is forced to sync [5] to disk.
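
The same effect can be reproduced without dd: the sketch below writes half a gigabyte of zeros and times the write separately from the sync. The target path is an assumption; point it at a file on a mounted USB key to reproduce the delay described above.

    import os, time

    # Sketch of the Figure 1 experiment: the write returns quickly because the
    # data lands in the page cache; the explicit sync waits on the slow device.
    # TARGET is an assumption -- point it at a file on the USB key under test.
    TARGET = "/mnt/usbkey/zeros.bin"
    CHUNK = b"\0" * (1024 * 1024)            # 1 MiB of zeros

    start = time.time()
    with open(TARGET, "wb") as f:
        for _ in range(512):                  # 512 MiB in total
            f.write(CHUNK)
    print(f"write returned after {time.time() - start:.2f}s")

    start = time.time()
    os.sync()                                 # flush dirty pages to the device
    print(f"sync finished after {time.time() - start:.2f}s")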

Read more at ADMIN magazine

Working with the Container Storage Library and Tools in Red Hat Enterprise Linux | Linux.com

How containers are stored on disk is often a mystery to users working with containers. In this post, we’re going to look at how container images are stored and some of the tools that you can use to work with those images directly: Podman, Skopeo, and Buildah.

Evolution of Container Image Storage

When I first started working with containers, one of the things I did not like about Docker’s architecture was that the daemon hid the information about the image store within itself. The only realistic way someone could use the images was through the daemon. We were working on the atomic tool and wanted a way to mount the container images so that we could scan them. After all, a container image was just a mount point under devicemapper or overlay.

The container runtime team at Red Hat created the atomic mount command to mount images under Docker, and this was used within atomic scan. The issue was that the daemon did not know about this, so if someone attempted to remove the image while we had it mounted, the daemon would get confused. The locking and manipulation had to be done within the daemon. …

Container storage configuration is defined in the storage.conf file. For container engines that run as root, the storage.conf file is stored in /etc/containers/storage.conf. If you are running rootless with a tool like Podman, then the storage.conf file is stored in $HOME/.config/containers/storage.conf.
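
A minimal sketch of that lookup, assuming only the two locations described above (a real engine also honors environment overrides, which are omitted here):

    import os

    # Sketch of the storage.conf lookup described above: rootful engines read
    # the system-wide file, rootless ones read the per-user file under $HOME.
    def storage_conf_path():
        if os.geteuid() == 0:
            return "/etc/containers/storage.conf"
        return os.path.join(os.path.expanduser("~"),
                            ".config/containers/storage.conf")

    print(storage_conf_path())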

Read more at Red Hat blog

VMware vSphere Storage Types

VMware vSphere supports different types of storage architectures, both internal (in this case the controller is crucial and must be on the HCL) and external, with shared SAS DAS, SAN FC, SAN iSCSI, SAN FCoE, or NFS NAS (in those cases the HCL is fundamental for the external storage, the fabric elements, and the host adapters).

For local storage, with vSphere 6.x it’s possible to use USB disks, not only as boot disks but also to run VMs. Note, however, that USB datastores are simply not supported by VMware.

Storage types at the VM logical level

There are different types of virtual disks depending on the provisioning method, pre-allocated or dynamic. The types of virtual disks have remained largely the same since vSphere 4.0:

  • Eager zeroed thick VMDK: An eager zeroed thick disk has all space allocated and wiped clean of any previous content on the physical media at creation time. Such disks may take longer to create than other disk formats. The entire disk space is reserved and unavailable for use by other VMs.
  • Thick or lazy zeroed thick VMDK: A thick disk has all space allocated at creation time. This space may contain stale data on the physical media. Before writing to a new block, a zero first has to be written, increasing the input/output operations per second (IOPS) on new blocks compared to eager zeroed disks. The entire disk space is reserved and unavailable for use by other VMs.
  • Thin VMDK: Space required for the thin-provisioned virtual disk is allocated and zeroed on demand as space is used. Unused space is available for use by other VMs.

You can choose the disk provisioning type during virtual disk creation, and you can later change the type using a cold VM migration across two datastores, or using Storage vMotion (if you have at least the ESXi Standard edition). Note that you can also change the type of each individual disk by choosing Configure per disk in the new HTML5 client.


There are also Raw Device Mapping (RDM) disks, where a disk at the ESXi level is mapped 1:1 to a VM (similar to a passthrough mode), with two different compatibility modes (virtual or physical). Except for building guest clusters (clusters across VMs on different hosts), there is usually no need to use these types of disks.

There is no significant difference in sequential I/O performance between the different types of virtual disks. For random I/O, thin VMDKs have the worst performance and the highest latency (for lazy zeroed thick, it depends on whether a new block has to be written).

Storage types at the VM physical level

To access block devices such as virtual disks (VMDK), virtual CD/DVD-ROM drives, or other SCSI devices, each VM uses storage controllers; at least one is added by default when you create a VM.

There are different types of controllers available for a VM running on ESXi, described as follows:

  • BusLogic: This is one of the first emulated SCSI virtual controllers available in VMware ESX. It is now a legacy controller used mainly for legacy operating systems. It does not support VMDKs larger than 2 TB.
  • LSI Logic Parallel: This was formerly known as LSI Logic and was the other SCSI virtual controller originally available in VMware ESX, used for operating systems such as Windows Server 2003.
  • LSI Logic SAS: This was introduced in vSphere 4.0, and is the evolution of the parallel driver, working as a SAS virtual controller and used in Windows Server 2008 or newer.
  • VMware Paravirtual (or PVSCSI): This was introduced in vSphere 4.0 and is a SCSI virtual controller designed to support very high throughput with minimal processing cost, working not in emulation mode but in paravirtual mode (it requires VMware Tools to be recognized).

Other virtual controllers are also possible in a VM, such as AHCI SATA (introduced in vSphere 5.5), IDE, and USB controllers, but these are usually for specific cases (for example, SATA or IDE are typically used for virtual DVD drives).

Note: When you create a VM, the default controller is optimized for good performance and compatibility. The controller type depends on the guest operating system (its driver is usually included in the operating system), the device type, and sometimes the VM’s compatibility level. But sometimes you can choose a different controller to improve performance, like PVSCSI (useful for VMDKs with a high I/O load) or the new type available in vSphere 6.5.
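
As a hedged pyVmomi sketch of choosing a different controller programmatically, the function below adds a PVSCSI controller to an existing VM; vm is assumed to be a vim.VirtualMachine obtained from an already established connection, and the bus number is an arbitrary choice. The same ConfigSpec pattern applies to other controller types, including the virtual NVMe controller discussed below.

    from pyVmomi import vim

    # Sketch: add a VMware Paravirtual (PVSCSI) controller to an existing VM.
    # 'vm' is assumed to be a vim.VirtualMachine from an existing pyVmomi session.
    def add_pvscsi_controller(vm, bus_number=1):
        dev_spec = vim.vm.device.VirtualDeviceSpec()
        dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        dev_spec.device = vim.vm.device.ParaVirtualSCSIController()
        dev_spec.device.busNumber = bus_number
        dev_spec.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

        spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
        return vm.ReconfigVM_Task(spec=spec)   # returns a vSphere task to wait on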

With ESXi 6.5 and VM virtual hardware version 13, you can now also use a virtual NVMe controller. Virtual NVMe devices have reduced guest I/O processing overhead (more than 50% lower compared to AHCI SATA virtual devices), which allows more VMs per host or more transactions per minute. Each virtual machine supports up to 4 NVMe controllers and up to 15 devices per controller.

Virtual NVMe controllers are supported on vSphere 6.5 only on the following guest operating systems:

  • Windows 7 and 2008 R2 (hotfix required, refer to https://support.microsoft.com/en-us/kb/2990941)
  • Windows 8.1, 2012 R2, 10, 2016
  • RHEL, CentOS, and NeoKylin 6.5 and later; Oracle Linux 6.5 and later
  • Ubuntu 13.10 and later
  • SLE 11 SP4 and later
  • Solaris 11.3 and later
  • FreeBSD 10.1 and later
  • Mac OS X 10.10.3 and later
  • Debian 8.0 and later

You can add a new NVMe virtual controller using the vSphere Web Client (this is not yet possible from the HTML5 web client) as shown in the following steps:

  1. Right-click on the virtual machine in the inventory and select the Edit Settings option
  2. Click the Virtual Hardware tab, and select NVMe Controller from the New device drop-down menu
  3. Click on Add
  4. The controller appears in the Virtual Hardware devices list
  5. Click OK


For more information on NVMe, see also KB 2147714—Using Virtual NVMe with ESXi 6.5 and virtual machine Hardware Version 13 (https://kb.vmware.com/kb/2147714).

For more information on PVSCI, see also KB 1010398—Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters (https://kb.vmware.com/kb/1010398).

Storage types at the ESXi logical level

At a high level, VMware vSphere accesses all storage through datastores: a logical paradigm that abstracts all storage types, much as a common operating system uses drive letters or mount points to access a filesystem.

VMware vSphere 6.x has the following four main types of datastore (a short pyVmomi sketch after this list shows how each type appears in the API):

  • VMware FileSystem (VMFS) datastores: All block-based storage must first be formatted with VMFS to transform a block service into a file-and-folder-oriented service
  • Network FileSystem (NFS) datastores: This is for NAS storage
  • VVol: This was introduced in vSphere 6.0 and is a new paradigm for accessing SAN and NAS storage in a common way, better integrating and consuming storage array capabilities
  • vSAN datastore: If you are using the vSAN solution, all your local storage devices can be pooled together into a single shared vSAN datastore
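
As a minimal sketch of how those datastore types surface in the API, the following pyVmomi snippet lists every datastore visible to a connection along with its type (VMFS, NFS, vsan, or VVOL); the host name and credentials are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Sketch: list every datastore and its type. The host and credentials are
    # placeholders; certificate checking is disabled only to keep this short.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            capacity_gb = ds.summary.capacity / 1024**3
            print(f"{ds.name:30s} {ds.summary.type:6s} {capacity_gb:10.1f} GB")
    finally:
        Disconnect(si)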

New datastores can be provisioned from the new HTML5 client, starting from a data centre, a cluster, or a host: just right-click on the object, choose Storage, and then New Datastore.


For local disks, if you have configured the right RAID level from the controller (remember that ESXi does not provide software RAID features), you can just format the logical disks with a VMFS datastore.

For external storage, before adding a new datastore you must first configure the ESXi host, the fabric (if present), and the storage itself. This depends on the storage type and vendor and will be discussed later. You cannot directly add a vSAN datastore; the vSAN configuration is quite different, but the final result will be a vSAN datastore with its own format.

Of course, on the same host you can have multiple datastores, of different types as well.


At the datastore level, there isn’t any difference between DAS and SAN; both are just block-based storage and become VMFS datastores. The functional difference is that a SAN disk can be shared across multiple hosts, while local DAS disks cannot (though there are also shared SAS storage systems that are formally classified as DAS).

Storage types at the ESXi physical level

Excluding vSAN, which has a specific configuration, at the physical level we can have three different main types of storage:

  • Block-based storage accessed by a hardware adapter: This includes DAS storage or SAN FC storage.
  • Block-based storage accessed by a software adapter: This is the case for SAN iSCSI storage when the software initiator is used. Here you first need to properly configure the network connectivity; after that, it becomes very similar to the first case.
  • NFS storage: Here you first have to configure the IP network connectivity to your storage and then connect the NFS datastore.

For the physical storage adapters, VMware ESXi supports several types of protocols and technologies (refer to the hardware compatibility list to check the supported level):

  • Fibre Channel Host Bus Adapter (FC HBA): This is the common, historical way to implement FC-based storage, using a dedicated fabric.
  • iSCSI HBA: These are specialized PCIe cards that implement completely in hardware the entire iSCSI stack, reducing the load of the host CPU.
  • CNA adapters for FCoE or iSCSI: These are mostly 10 Gbps (or faster) Ethernet adapters providing hardware (or hardware-assisted) FCoE or iSCSI functionality on converged (or dedicated) networks.
  • RDMA over Converged Ethernet (RoCE): This is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. Starting with vSphere 6.5, RoCE-certified adapters can be used for converged networks.
  • InfiniBand HCA: Mellanox Technologies InfiniBand HCA device drivers are available directly from Mellanox Technologies. Mostly used for the network part rather than the storage part, these adapters can be interesting in converged networks and also in vSAN implementations.

This tutorial is an excerpt from “Mastering VMware vSphere 6.5” by Andrea Mauro, Paolo Valsecchi & Karel Novak and published by Packt. Get the ebook for just $9 until Aug. 31.
