Choosing a Cloud Provider: 8 Storage Considerations


Amazon Web Services, Google Cloud, and Microsoft Azure dominate the cloud service provider space, but for some applications it may make sense to choose a smaller provider that specializes in your application class and can deliver a more finely tuned solution. No matter which cloud provider you choose, it pays to look closely at the wide variety of cloud storage services on offer to make sure they will meet your company’s requirements.

The big cloud providers offer two major classes of storage: local instance storage, available with selected instance types, and a selection of network storage options for persistent storage and for sharing data between instances.
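
To make the distinction concrete, the boto3 sketch below (assuming AWS, with a hypothetical instance ID and placeholder sizes) provisions a network-attached EBS volume and attaches it to a running instance; such a volume persists independently of the instance, whereas local instance-store disks vanish when the instance is terminated.

    import boto3

    # A minimal sketch, assuming AWS. Network (EBS) volumes are provisioned
    # independently of any instance and survive termination; local
    # instance-store disks do not. The instance ID and zone are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",   # must match the target instance's AZ
        Size=100,                        # GiB
        VolumeType="gp3",                # general-purpose SSD class
    )

    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # hypothetical instance
        Device="/dev/sdf",
    )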

As with any storage, performance is a factor in your decision-making process. There are many shared network storage alternatives, including storage tiers ranging from very hot to freezing cold; within the top tiers there are further differences depending on the replica count you choose, as well as variations in the price of copying data to other spaces.
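
To see how tiering is typically driven in practice, here is a hedged boto3 sketch using AWS S3 as the example: lifecycle rules migrate objects from the hot default class down through colder, cheaper ones as they age. The bucket name is a placeholder, and other providers expose equivalent controls under different names.

    import boto3

    # A sketch of tier management on S3: objects transition from the hot
    # default class to colder classes over time. Bucket name is a placeholder.
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-down-with-age",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 30,  "StorageClass": "STANDARD_IA"},   # warm
                    {"Days": 180, "StorageClass": "GLACIER"},       # cold
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # "freezing"
                ],
            }]
        },
    )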

The very hot tier is moving to SSDs, and even here there are differences between NVMe and SATA drives, which cloud tenants typically see as different IOPS levels. For large instances and GPU-based instances, the faster choice is probably better, though this depends on your use case.
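
If you want to sanity-check what a given instance and volume actually deliver, a crude random-read probe like the Python sketch below yields a relative IOPS figure. It is only a rough gauge: the OS page cache inflates results unless the test file is much larger than RAM, and a dedicated tool such as fio is the rigorous way to measure.

    import os
    import random
    import time

    def measure_random_read_iops(path, block_size=4096, reads=5000):
        """Issue random 4 KiB reads against a file and report reads/second.
        A rough, cache-skewed probe for comparing volumes, not a benchmark."""
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        try:
            start = time.perf_counter()
            for _ in range(reads):
                offset = random.randrange(0, size - block_size)
                os.pread(fd, block_size, offset)
            elapsed = time.perf_counter() - start
        finally:
            os.close(fd)
        return reads / elapsed

    # Hypothetical mount point for the volume under test.
    print(measure_random_read_iops("/mnt/data/testfile"))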

At the other extreme, the cold and “freezing” tiers, the choice is between disk and tape, which impacts data retrieval times: with tape, retrieval can take as much as two hours, compared with just seconds for disk.
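
Archive retrieval is typically an asynchronous, request-and-wait operation rather than a simple read. As one concrete example, restoring an object from Amazon’s Glacier-class storage with boto3 looks roughly like the sketch below; the bucket and key are placeholders, and actual wait times vary by tier and provider.

    import boto3

    # Request a temporary restore of an archived object. The call returns
    # immediately; the object only becomes readable once the restore job
    # finishes (minutes to hours, depending on the retrieval tier).
    s3 = boto3.client("s3")
    s3.restore_object(
        Bucket="example-bucket",
        Key="archives/2017-backup.tar",
        RestoreRequest={
            "Days": 7,  # how long the restored copy remains available
            "GlacierJobParameters": {"Tier": "Standard"},  # or "Expedited"/"Bulk"
        },
    )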

Data security and vendor reliability are two other key considerations when choosing a cloud provider that will store your enterprise data. Continue on to get tips for your selection process.

(Image: Blackboard/Shutterstock)




8 Infrastructure Trends Ahead for 2018


The cloud is making inroads into the enterprise, but on-premises IT infrastructure remains a critical part of companies’ IT strategies. According to the Interop ITX and InformationWeek 2018 State of Infrastructure study, companies are continuing to invest in data center, storage, and networking infrastructure as they build out their digital strategies.

The survey, which polled 150 IT leaders and practitioners from a range of industries and company sizes, found that 24% of respondents said their organization plans to increase IT infrastructure spending by more than 10% in the next year. Twenty-one percent plan to spend 5% to 10% more on IT infrastructure than last year, while 18% expect to increase spending by no more than 5%.

Twenty-seven percent of IT leaders surveyed said their organizations plan to increase the build-out or support of IT infrastructure to pursue new business opportunities. Another 30% cited increased workforce demands as the driver for a bigger focus on infrastructure.

Enterprises are investing in a variety of technologies to help them achieve their digital goals and keep up with changing demands, according to the study. Storage is a huge focus for companies as they try to keep pace with skyrocketing data growth. In fact, the rapid growth of data and data storage is the single greatest factor driving change in IT infrastructure, the survey showed.

Companies are also focused on boosting network security, increasing bandwidth, adding more servers to their data centers, and building out their WLANs.

At the same time, they see plenty of challenges ahead in modernizing their infrastructure, including the cost of implementation, lack of staff expertise, and security concerns.

Read on to find out what organizations are planning for their IT infrastructure in the year ahead. For the full survey results, download the complete report. Learn more about infrastructure trends at Interop ITX in Las Vegas, April 30-May 4. Register today!

(Image: Connect world/Shutterstock)




Object Storage: 8 Things to Know


Object storage is one of the hottest technology trends, but it isn’t a particularly new idea: the concept surfaced in the mid-1990s, and by 2005 a number of alternatives had entered the market. Resistance from the entrenched file (NAS) and block (SAN) vendors, coupled with the unfamiliarity of a new interface method, slowed adoption of object storage. Today, with the brilliant success of Amazon Web Services’ S3 storage system, object storage is here to stay and is making huge gains against older storage methods.

Object storage is well suited to the new data environment. Unstructured data, which includes large media files and so-called big data objects, is growing at a much faster rate than structured data and, overall, data itself is growing at a phenomenal rate.

Experience has taught us that traditional block systems become complex to manage at relatively low scale, and that the concept of a single pool of data breaks down as the number of appliances increases, especially if the pool crosses the boundaries of different equipment types. Filers have hierarchies of file folders that become cumbersome at scale, while today’s thousands of virtual instances make file-sharing systems clumsy.

An inherent design feature of object stores is the distribution of objects across all of the storage devices, or at least across subsets when the cluster holds a large number of devices. This removes a design weakness of the block/file approach, where the failure of an appliance, or of more than a single drive, could cause a loss of data availability or even of the data itself.

Object stores typically use an algorithm such as CRUSH to spread chunks of a data object out in a known and predictable way. Coupling this with replication, and more recently with erasure coding, means that several nodes or drives can fail without materially impacting data integrity or access performance. The object approach also effectively parallelizes access to larger objects, since a number of nodes will all be transferring pieces of the object at the same time.
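
CRUSH itself is Ceph’s algorithm, but its key property, namely that any client can compute where a chunk lives without consulting a central lookup table, is easy to illustrate with the simpler rendezvous (highest-random-weight) hashing sketched below in Python. This is an illustrative stand-in, not Ceph’s implementation; CRUSH additionally accounts for device weights and failure domains.

    import hashlib

    def place_chunk(chunk_id, nodes, replicas=3):
        """Deterministically pick `replicas` nodes for a chunk. Every
        client ranks nodes by hash(chunk_id + node) and takes the top
        few, so placement is known and predictable with no central map."""
        ranked = sorted(
            nodes,
            key=lambda n: hashlib.sha256(f"{chunk_id}:{n}".encode()).hexdigest(),
            reverse=True,
        )
        return ranked[:replicas]

    nodes = [f"node{i}" for i in range(8)]
    print(place_chunk("object-42/chunk-0", nodes))  # same answer on any client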

There is now a good number of software-only object storage products, all installable on a wide variety of COTS hardware platforms; these include the popular Ceph open source solution, backed by Red Hat. The combination of any of these software stacks and low-cost COTS gear makes object stores attractive on a price-per-terabyte basis compared to traditional proprietary NAS or SAN gear.
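
Because Ceph’s RADOS Gateway speaks the de facto standard S3 API, the same client code that targets AWS can be pointed at a self-hosted cluster on COTS hardware. A brief boto3 sketch; the endpoint URL and credentials are placeholders for your own deployment.

    import boto3

    # Talk to a self-hosted, S3-compatible object store (here, a hypothetical
    # Ceph RADOS Gateway endpoint) using the standard AWS SDK.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.internal:7480",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
        aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
    )
    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())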

Object storage is evolving to absorb the other storage models by offering a “universal storage” model in which object, file, and block access portals all talk to the same pool of raw object storage. Universal storage will likely deploy as object storage, with the other two access modes used to present file or block secondary storage behind, say, all-flash arrays or filers. In the long term, universal storage looks to be the converging solution for the whole industry.

This trend is enhanced by the growth of software-defined storage (SDS). Object stores all run natively in a COTS standard server engine, which means the transition from software built onto an appliance to software virtualized into the instance pool is in most cases trivial. This is most definitely not the case for older proprietary NAS or SAN code. For object stores, SDS makes it possible to scale services such as compression and deduplication easily. It also opens up rich services such as data indexing.
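
Deduplication in particular falls naturally out of the object model, because chunks can be keyed by their content. The toy Python sketch below shows the core idea; production SDS stacks add variable-size chunking, compression, and reference counting on top.

    import hashlib

    def dedup_store(chunks, store):
        """Content-addressed storage in miniature: each chunk is keyed by
        its SHA-256 digest, so identical chunks are physically stored once
        and later references simply reuse the existing key."""
        refs = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # write payload only if new
            refs.append(digest)
        return refs

    store = {}
    refs = dedup_store([b"aaaa", b"bbbb", b"aaaa"], store)
    print(len(refs), len(store))  # 3 logical references, 2 unique chunks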

Continue on to get up to speed on object storage and learn how it’s shaking up enterprise storage.

(Image: Kitch Bain/Shutterstock)



