
7 Enterprise Storage Trends for 2018


Enterprises today are generating and storing more data than ever, and the trend shows no sign of slowing down. The rise of big data, the internet of things, and analytics are all contributing to the exponential data growth. The surge is driving organizations to expand their infrastructure, particularly data storage.

In fact, the rapid growth of data and data storage technology is the biggest factor driving change in IT infrastructure, according to the Interop ITX and InformationWeek 2018 State of Infrastructure study. Fifty-five percent of survey respondents chose it as one of the top three factors, far exceeding the need to integrate with cloud services.

Organizations have been dealing with rapid data growth for a while, but are reaching a tipping point, Scott Sinclair, senior analyst at ESG, said in an interview.

“If you go from 20 terabytes to 100 terabytes, that’s phenomenal growth but from a management standpoint, it’s still within the same operating process,” he said. “But if you go from a petabyte to 10 or 20 petabytes, now you start talking about a fundamentally different scale for infrastructure.”

Moreover, companies today see the power of data and understand that they need to harness it in order to become competitive, Sinclair said.

“Data has always been valuable, but often it was used for a specific application or workload. Retaining data for longer periods was more about disaster recovery, having an archive, or for regulatory compliance,” he said. “As we move more into the digital economy, companies want to leverage data, whether it’s to provide more products and services, become more efficient, or better engage with their customers.”

To support their digital strategy, companies are planning to invest in more storage hardware in their data centers, store more data in the cloud, and investigate emerging technologies such as software-defined storage, according to the 2018 State of Infrastructure study. Altogether, they’re planning to spend more on storage hardware than other infrastructure.

Read on for more details from the research and to find out about enterprise storage plans for 2018. For the full survey results, download the complete report.

(Image: Peshkova/Shutterstock)




Big Data Storage: 7 Key Factors


Defining big data is actually more of a challenge than you might think. The glib definition talks of masses of unstructured data, but the reality is that it’s a merging of many data sources, both structured and unstructured, to create a pool of stored data that can be analyzed for useful information.

We might ask, “How big is big data?” The answer from storage marketers is usually “Big, really big!” or “Petabytes!”, but again, there are many dimensions to sizing what will be stored. Much big data becomes junk within minutes of being analyzed, while some needs to stay around. This makes data lifecycle management crucial. Add to that globalization, which brings foreign customers to even small US retailers. The requirements for personal data lifecycle management under the European Union General Data Protection Regulation go into effect in May 2018, and the penalties for non-compliance are draconian, even for foreign companies: up to 4% of global annual revenue.
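To make lifecycle management concrete, here is a minimal sketch of automated retention rules, assuming an AWS S3 bucket and the boto3 SDK; the bucket name and prefixes are hypothetical. Scratch analysis output expires after 30 days, while retained source data is tiered to cold storage:

```python
# Sketch: automated data lifecycle rules for a hypothetical S3 bucket.
# Assumes boto3 and a bucket named "analytics-landing"; adjust to your environment.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-landing",
    LifecycleConfiguration={
        "Rules": [
            {   # Scratch output from analytics jobs: delete after 30 days.
                "ID": "expire-scratch",
                "Filter": {"Prefix": "scratch/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
            {   # Retained source data: move to cold storage after 90 days.
                "ID": "tier-retained",
                "Filter": {"Prefix": "retained/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```

Rules like these are only the mechanical half of the job; the policy itself, what to keep, for how long, and for whom, still has to come from the business and its regulators.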

For an IT industry just getting used to the term terabyte, storing petabytes of new data seems expensive and daunting. This would most definitely be the case with RAID storage arrays; in the past, an EMC salesman could retire on the commissions from selling the first petabyte of storage. But today’s drives and storage appliances have changed all the rules about the cost of capacity, especially where open source software can be brought into play.

In fact, there was quite a bit of buzz at the Flash Memory Summit in August about appliances holding one petabyte in a single 1U chassis. With 3D NAND and new form factors like Intel’s “Ruler” drives, we’ll reach the 1 PB goal within a few months. It’s a space, power, and cost game changer for big data storage capacity.

Concentrated capacity requires concentrated networking bandwidth. The first step is to connect those petabyte boxes with NVMe over Ethernet, running today at 100 Gbps, but vendors are already in the early stages of 200 Gbps deployment. This is a major leap forward in network capability, but even that isn’t enough to keep up with drives designed with massive internal parallelism.
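To put those numbers in perspective, a quick back-of-the-envelope calculation (ignoring protocol overhead, congestion, and drive-side limits) shows how long it takes just to fill one of those petabyte boxes over the network:

```python
# Rough estimate: time to move 1 PB over a 100 Gbps and a 200 Gbps link,
# ignoring protocol overhead, congestion, and drive-side limits.
PETABYTE_BITS = 1e15 * 8          # 1 PB expressed in bits

for link_gbps in (100, 200):
    seconds = PETABYTE_BITS / (link_gbps * 1e9)
    print(f"{link_gbps} Gbps: ~{seconds / 3600:.1f} hours to transfer 1 PB")

# Roughly 22 hours at 100 Gbps and 11 hours at 200 Gbps; a single link
# still cannot keep pace with dozens of highly parallel NVMe drives.
```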

Compression of data helps in many big data storage use cases, from removing repetitive images of the same lobby to deduplicating repeated chunks of Word files. New compression methods that use GPUs can handle tremendous data rates, giving those petabyte 1U boxes a way of quickly talking to the world.
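For illustration only, the sketch below shows the basic chunk-level deduplication idea in plain Python. Production appliances use content-defined chunking and GPU or hardware acceleration, but the principle of storing each repeated chunk only once is the same:

```python
# Sketch: fixed-size chunk deduplication. Real appliances use content-defined
# chunking and hardware acceleration; this only illustrates the idea.
import hashlib

CHUNK_SIZE = 4096          # bytes per chunk (illustrative)
chunk_store = {}           # sha256 digest -> chunk bytes

def dedup_write(data: bytes) -> list[str]:
    """Split data into chunks, store unique chunks, return the recipe of hashes."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # store only if unseen
        recipe.append(digest)
    return recipe

def dedup_read(recipe: list[str]) -> bytes:
    """Reassemble the original data from its chunk recipe."""
    return b"".join(chunk_store[d] for d in recipe)

# Two files sharing a boilerplate header consume the shared chunks only once.
header = b"standard corporate header " * 200
recipe_a = dedup_write(header + b"report A body")
recipe_b = dedup_write(header + b"report B body")
print(f"{len(chunk_store)} unique chunks stored, "
      f"{len(recipe_a) + len(recipe_b)} chunk references")
```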

The exciting part of big data storage is really a software story. Unstructured data is usually stored in a key/value format layered on top of traditional block IO, an inefficient approach that tries to mask several mismatches. Newer designs range from extended metadata tagging of objects to storing data natively in an open-ended key/value format on a drive or storage appliance. These approaches are embryonic, but the value proposition seems clear.
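To make the contrast concrete, here is a toy, in-memory sketch of the kind of key/value-plus-metadata interface these newer designs expose, as opposed to addressing fixed-size blocks. The names and tags are purely illustrative:

```python
# Sketch: a toy object store exposing a key/value interface with extended
# metadata tags, in contrast to fixed-size block addressing. In-memory only;
# drive- or appliance-native designs push this below the block layer.
from dataclasses import dataclass, field

@dataclass
class StoredObject:
    value: bytes
    metadata: dict = field(default_factory=dict)   # open-ended tags

class ObjectStore:
    def __init__(self):
        self._objects: dict[str, StoredObject] = {}

    def put(self, key: str, value: bytes, **tags) -> None:
        self._objects[key] = StoredObject(value, dict(tags))

    def get(self, key: str) -> bytes:
        return self._objects[key].value

    def find(self, **tags) -> list[str]:
        """Return keys whose metadata matches all of the given tags."""
        return [k for k, o in self._objects.items()
                if all(o.metadata.get(t) == v for t, v in tags.items())]

store = ObjectStore()
store.put("sensor/123/2018-01-05", b"...readings...", source="iot", retention="90d")
print(store.find(source="iot"))   # -> ['sensor/123/2018-01-05']
```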

Finally, the public cloud offers a home for big data that is elastic and scalable to huge sizes. This has the obvious value of always being right-sized to enterprise needs, and AWS, Azure, and Google have all added a strong list of big data services to match. With huge instances and GPU support, cloud virtual machines can emulate an in-house server farm effectively, making a compelling case for a hybrid or public cloud-based solution.

Suffice it to say, enterprises have a lot to consider when they map out a plan for big data storage. Let’s look at some of these factors in more detail.

(Images: Timofeev Vladimir/Shutterstock)




7 Ways to Secure Cloud Storage


Figuring out a good path to security in your cloud configurations can be quite a challenge. This is complicated by the different types of cloud we deploy – public or hybrid – and the class of data and computing we assign to those cloud segments. Generally, one can create a comprehensive and compliant cloud security solution, but the devil is in the details and a nuanced approach to different use cases is almost always required.

Let’s first dispel a few myths. The cloud is a very safe place for data, despite FUD from those who might want you to stay in-house. The large cloud service providers (CSPs) run a tight ship, simply because they’d lose customers otherwise. Even so, we can assume their millions of tenants include some malevolent actors, whether hackers, government spies, or commercial thieves.

At the same time, don’t make the common assumption that CSP-encrypted storage is safe. If the CSP relies on drive-based encryption, don’t count on it: security researchers in 2015 uncovered flaws in a particular hard drive product line that rendered its automatic encryption useless. That’s the lazy man’s encryption! Do it right and encrypt in the server with your own key set.
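One way to follow that advice is client-side encryption, sketched below using the Python cryptography package. The data, bucket, and object names are hypothetical, and in practice the key would live in your own KMS or HSM rather than in the script:

```python
# Sketch: encrypt in your own server with your own key before uploading,
# rather than relying on drive-based encryption at the provider.
# Assumes the "cryptography" package; bucket/object names are hypothetical.
from cryptography.fernet import Fernet

# In practice the key lives in your own KMS or HSM, never alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"quarterly revenue projections"
ciphertext = cipher.encrypt(plaintext)

# Upload the ciphertext (not the plaintext) to cloud storage, e.g. via boto3:
#   boto3.client("s3").put_object(Bucket="corp-archive", Key="q3.enc", Body=ciphertext)

# Only holders of the key can recover the data, regardless of what the CSP does.
assert cipher.decrypt(ciphertext) == plaintext
```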

Part of the data security story is that data must maintain its integrity under attack. It isn’t sufficient to have one copy of data; just think what would happen if the only three replicas of a set of files in your S3 pool were all “updated” by malware. If you don’t provide a protection mechanism for this, you are likely doomed!
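A common protection mechanism, sketched here under the assumption of an S3 bucket and the boto3 SDK (the bucket name is hypothetical), is object versioning, so a malicious “update” never silently replaces the last good copy:

```python
# Sketch: keep prior versions of every object so a malware "update" cannot
# destroy the only good copy. Bucket name is hypothetical; assumes boto3.
import boto3

s3 = boto3.client("s3")

# Every overwrite now creates a new version instead of replacing the old one.
s3.put_bucket_versioning(
    Bucket="corp-archive",
    VersioningConfiguration={"Status": "Enabled"},
)

# After an incident, earlier versions are still there to roll back to.
versions = s3.list_object_versions(Bucket="corp-archive", Prefix="finance/")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["LastModified"])
```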

We are so happy with the flexibility of all the storage services available to us that we give scant consideration to what happens to, for example, instance storage when we delete the instance. Does it get erased? Or is it just re-issued? And if erasure is used on an SSD, how can we get around the internal block reassignment mechanism that just moves deleted blocks to the free pool? A tenant using the right software tool can read those blocks. Your CSP may have an elegant solution, but good governance requires you to ask and to understand whether the answer is adequate.
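One mitigation you can apply yourself, whatever the CSP does with recycled media, is cryptographic erasure: encrypt each dataset under its own key and destroy the key at deletion time. A minimal sketch, with hypothetical dataset names, again using the cryptography package:

```python
# Sketch: cryptographic erasure. If every dataset is encrypted under its own
# key, destroying the key renders leftover blocks on recycled media unreadable,
# regardless of how the SSD's internal block reassignment behaves.
from cryptography.fernet import Fernet

dataset_keys = {}   # dataset id -> key; in practice held in your own KMS

def write_dataset(dataset_id: str, data: bytes) -> bytes:
    key = Fernet.generate_key()
    dataset_keys[dataset_id] = key
    return Fernet(key).encrypt(data)   # this ciphertext is what lands on media

def crypto_shred(dataset_id: str) -> None:
    """'Erase' the dataset by discarding its key."""
    del dataset_keys[dataset_id]

blob = write_dataset("vm-42-scratch", b"temporary working set")
crypto_shred("vm-42-scratch")
# Any residual copy of `blob` on reassigned flash blocks is now just noise.
```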

Governance is a still-evolving facet of the cloud. There are solutions for the data you store yourself, complete with automated analysis and event reporting, but the rise of SaaS and all the associated as-a-Service flavors leaves open the question of where your data is and whether it complies with your high standards.

The ultimate challenge for cloud storage security is the human factor. Evil admins exist or are created within organizations, and a robust, secure system needs to accept that fact and protect against it with access controls, multi-factor authentication, and a process that identifies any place where a single disgruntled employee could destroy valued data. Be paranoid; it’s a case of when, not if!
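As one concrete guardrail along those lines, the sketch below (hypothetical policy name, assuming AWS IAM and boto3) denies destructive storage actions unless the caller authenticated with multi-factor authentication:

```python
# Sketch: deny destructive S3 actions unless the caller used MFA.
# Policy and names are illustrative; assumes AWS IAM and boto3.
import json
import boto3

mfa_guard = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeletesWithoutMFA",
        "Effect": "Deny",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion", "s3:DeleteBucket"],
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="deny-deletes-without-mfa",
    PolicyDocument=json.dumps(mfa_guard),
)
```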

Let’s dig deeper into the security challenges of cloud storage and ways you can protect data stored in the cloud.

(Image: Kjpargeter/Shutterstock)




7 Myths About How the Internet Works


The internet is a vast and complicated set of interconnected networks, tying internet service providers, cloud service providers and enterprises together. While the cloud is an exciting new technology that is changing the way the world watches videos, hails taxis, uses money, and shares pictures, it’s not clear how these service providers work together in the background to create the value we all enjoy.

Cloud computing enables companies to create real-time transactions and collaborate to produce applications that are valuable for the real world. However, while cloud computing sounds like it is the same thing as the internet, it’s actually a metaphor. Cloud computing uses the internet and obscures the interconnecting infrastructure, platforms, and applications to make transactions seamless, immediate, and convenient for the entire interconnected world. 

Thanks to this obfuscation, there is a great deal of historical fact and fiction about the origins of the internet, networking, computing, and the interlocking pieces that have melded together, producing myths about how the internet actually works. Let’s take a look at some of these internet myths.

(Image: nednapa/Shutterstock)

Jim Poole is the Vice President for Global Ecosystem Development at Equinix. His mission is to explore new and emerging digital ecosystems, with a focus on how Equinix customers can use interconnection to strategic advantage. Prior to his current role, Jim served as the Vice President for Global Service Provider Marketing, where he was responsible for vertical strategy, messaging, and sales activation. Jim has more than 20 years of experience in the ICT industry and has held executive-level positions at Roundbox, Savvis, C&W Americas, dynamicsoft and UUNET.


