10 Silly Data Center Memes


[Security Breach Report] Overall Impact of & Steps to Prevent Breaches

Despite the escalation of cybersecurity staffing and technology, enterprises continue to suffer data breaches and compromises at an alarming rate. How do these breaches occur? How are enterprises responding, and what is the impact of these compromises on the business? This report offers new data on the frequency of data breaches, the losses they cause, and the steps that organizations are taking to prevent them in the future.

Surviving the IT Security Skills Shortage

Cybersecurity professionals are in high demand — and short supply. Find out what Dark Reading discovered in its 2017 Security Staffing Survey and get strategies for weathering the shortage. Download the report today!


Data Center Architecture: Converged, HCI, and Hyperscale


A comparison of three approaches to enterprise infrastructure.

If you are planning an infrastructure refresh or designing a greenfield data center from scratch, the hype around converged infrastructure, hyperconverged infrastructure (HCI) and hyperscale might have you scratching your head. In this blog, I’ll compare and contrast the three approaches and consider scenarios where one infrastructure architecture would be a better fit than the others.

Converged infrastructure

Converged infrastructure (CI) incorporates compute, storage and networking in a pre-packaged, turnkey solution. The primary driver behind convergence was server virtualization: extending its flexibility to the storage and network components as well. With CI, administrators could use automation and management tools to control the core components of the data center, allowing a single admin to provision, de-provision and make any compute, storage or networking change on the fly.
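To make that single-management-plane idea concrete, here is a minimal Python sketch. The ConvergedBlock class and its provision_vm method are hypothetical illustrations of the pattern, not any vendor’s actual API.

```python
# Minimal sketch of the "single management plane" behind converged infrastructure.
# All class and method names are hypothetical, not a real vendor API.
from dataclasses import dataclass, field


@dataclass
class ComputePool:
    free_vcpus: int

    def allocate(self, vcpus: int) -> None:
        assert vcpus <= self.free_vcpus, "not enough vCPUs in the block"
        self.free_vcpus -= vcpus


@dataclass
class StoragePool:
    free_gb: int

    def allocate(self, gb: int) -> None:
        assert gb <= self.free_gb, "not enough capacity in the block"
        self.free_gb -= gb


@dataclass
class NetworkFabric:
    vlans: set = field(default_factory=set)

    def attach(self, vlan_id: int) -> None:
        self.vlans.add(vlan_id)


@dataclass
class ConvergedBlock:
    """One pre-integrated CI block: compute, storage and network behind one interface."""
    compute: ComputePool
    storage: StoragePool
    network: NetworkFabric

    def provision_vm(self, vcpus: int, disk_gb: int, vlan_id: int) -> None:
        # A single admin action touches all three silos at once.
        self.compute.allocate(vcpus)
        self.storage.allocate(disk_gb)
        self.network.attach(vlan_id)


block = ConvergedBlock(ComputePool(128), StoragePool(10_000), NetworkFabric())
block.provision_vm(vcpus=4, disk_gb=200, vlan_id=110)
```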

Converged infrastructure platforms use the same silo-centric infrastructure components as traditional data centers; they’re simply pre-architected and pre-configured by the manufacturer. The glue that unifies the components is specialized management software. One of the earliest and most popular CI examples is the Virtual Computing Environment (VCE), a joint venture of Cisco Systems, EMC, and VMware that developed and sold converged infrastructure solutions of various sizes under the Vblock name. Today, Vblock systems are sold by the combined Dell-EMC entity, Dell Technologies.

CI solutions are a great choice for infrastructure pros who want an all-in-one system that’s easy to buy and arrives pre-packaged from the factory. CI is also easier from a support standpoint: if you maintain support contracts on your CI system, the manufacturer will assist in troubleshooting end-to-end. That said, many vendors are shifting their focus toward hyperconverged infrastructure.

Hyperconverged infrastructure

HCI builds on CI. In addition to combining the three core components of a data center, hyperconverged infrastructure leverages software to integrate compute, network and storage into a single unit rather than relying on separate components. This architecture offers performance advantages and eliminates a great deal of physical cabling compared to silo- and CI-based data centers.

Hyperconverged solutions also provide far more capability in terms of unified management and orchestration. The mobility of applications and data is greatly improved, as is the setup and management of functions like backups, snapshots, and restores. These operational efficiencies make HCI architectures more attractive from a cost-benefit standpoint than traditional converged infrastructure solutions.

In the end, a hyperconverged solution is all about simplicity and speed. A great use case for HCI would be a new virtual desktop infrastructure (VDI) deployment. Using the orchestration and automation tools available, you have the ideal platform to easily roll out hundreds or thousands of virtual desktops.
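As a rough illustration of that kind of rollout, here is a hedged Python sketch. The HciOrchestrator class, its clone_vm method, and the golden-image name are hypothetical stand-ins for whatever orchestration API your HCI platform exposes.

```python
# Hedged sketch of a bulk VDI rollout driven by an HCI orchestration layer.
# The orchestrator class and image/VM names below are illustrative, not a product API.

GOLDEN_IMAGE = "win10-vdi-gold"   # hypothetical master desktop image
DESKTOP_COUNT = 500


class HciOrchestrator:
    def __init__(self) -> None:
        self.vms: list[dict] = []

    def clone_vm(self, source_image: str, name: str, vcpus: int, ram_gb: int) -> None:
        # On a real HCI platform this would be an API call; clones are largely
        # metadata operations on the shared software-defined storage layer,
        # which is what makes rolling out hundreds of desktops fast.
        self.vms.append(
            {"name": name, "image": source_image, "vcpus": vcpus, "ram_gb": ram_gb}
        )


orchestrator = HciOrchestrator()
for i in range(1, DESKTOP_COUNT + 1):
    orchestrator.clone_vm(GOLDEN_IMAGE, name=f"vdi-{i:04d}", vcpus=2, ram_gb=4)

print(f"Provisioned {len(orchestrator.vms)} virtual desktops from {GOLDEN_IMAGE}")
```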

Hyperscale

The key attribute of hyperscale computing is the de-coupling of compute, network and storage software from the hardware. That’s right, while HCI combined everything into a single chassis, hyperscale decouples the components.

This approach, as practiced by hyperscale companies like Facebook and Google, provides more flexibility than hyperconverged solutions, which tend to grow in a linear fashion. For example, if you need more storage on your HCI system, you typically must add a node blade that includes both compute and built-in storage. Some hyperconverged solutions are better than others in this regard, but most fall prey to linear scaling problems if your workloads don’t scale in step.

Another benefit of hyperscale architectures is that you can manage both virtual and bare metal servers on a single system. This is ideal for databases that tend to operate in a non-virtualized manner. Hyperscale is most useful in situations where you need to scale-out one resource independently from the others. A good example is IoT because it requires a lot of data storage, but not much compute. A hyperscale architecture also helps in situations where it’s beneficial to continue operating bare metal compute resources, yet manage storage resources in elastic pools.
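To see why independent scaling matters, here is a back-of-the-envelope Python sketch. The node sizes and the storage-heavy workload are illustrative assumptions, not figures from any particular product.

```python
# Illustrative comparison of HCI's linear scaling versus hyperscale's independent scaling
# for a storage-heavy (IoT-style) workload. All sizes below are assumptions.
import math

HCI_NODE = {"vcpus": 40, "storage_tb": 20}   # compute and storage always come together
STORAGE_NODE_TB = 60                          # hyperscale: storage can be added by itself
COMPUTE_NODE_VCPUS = 40

need = {"vcpus": 80, "storage_tb": 600}       # lots of data, little compute

# HCI: whichever resource runs out first drives the node count.
hci_nodes = max(
    math.ceil(need["vcpus"] / HCI_NODE["vcpus"]),
    math.ceil(need["storage_tb"] / HCI_NODE["storage_tb"]),
)
stranded_vcpus = hci_nodes * HCI_NODE["vcpus"] - need["vcpus"]

# Hyperscale: each resource scales on its own.
storage_nodes = math.ceil(need["storage_tb"] / STORAGE_NODE_TB)
compute_nodes = math.ceil(need["vcpus"] / COMPUTE_NODE_VCPUS)

print(f"HCI: {hci_nodes} nodes, with {stranded_vcpus} vCPUs bought but unused")
print(f"Hyperscale: {compute_nodes} compute nodes + {storage_nodes} storage nodes")
```

Under these assumptions, HCI needs 30 nodes just to satisfy the storage requirement and strands more than a thousand vCPUs, while the hyperscale design buys storage and compute separately.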




Data Center Transformation at ConocoPhillips


IT leaders at ConocoPhillips were already working on a major data center consolidation initiative before oil prices plummeted. The company couldn’t keep adding storage and servers; it just wasn’t sustainable, especially for a company that was looking to get serious about the cloud. The industry downturn added urgency to their efforts.

That meant taking dramatic action to cut IT operating costs and save jobs, according to Scott Duplantis, global IT director of server, storage and data center operations at ConocoPhillips. The transformation, which focused on two data centers in the US, included fast-tracking adoption of newer technology like all-flash arrays with full-time data reduction, and refreshing compute platforms with control-plane software that manages virtual CPU and memory allocations.

All the hard work combined with a fearless approach to data center modernization paid off: The company reduced its data center footprint by more than 50%, slashed its SAN floor space consumption by 80%, cut its power and cooling costs by $450,000 a year, improved reliability, and saved jobs along the way, all in about 30 months.

“We have fewer objects under management, which means not having to add staff as we continue to grow,” Duplantis said. “Our staff can do a better job of managing the infrastructure they have, and it frees them up to pursue cloud initiatives.”

ConocoPhillips’ data center transformation initiative earned first place in the InformationWeek IT Excellence Awards infrastructure category.

Reducing the storage footprint

For its storage-area network, network-attached storage, and backup and recovery, ConocoPhillips traditionally relied on established storage vendors. The SAN alone had 62 racks of storage between the two data centers.

ConocoPhillips decided that flash storage was the way to go and conducted a bakeoff between vendors that had the features it wanted: ease of management, data deduplication and compression, replication, and snapshotting. The company wound up choosing a relatively new vendor to supply all-flash storage for its SAN, and buying AFAs from one of its incumbent vendors for its NAS. The company also focused on buying larger controllers, which, when combined with the flash, provided better performance and reduced the number of objects the staff has to manage.

The work reduced raw SAN storage from 5.6 to 1.8 petabytes. Altogether, the consolidation cuts down on object maintenance and support contracts tied to storage hardware.

Improved power and cooling efficiency from the flash storage adoption has ConocoPhillips reevaluating how its data centers are cooled. “We have to do some reengineering in our data centers to accommodate for almost half of the power footprint they had, and a significant drop in heat because these all-flash arrays don’t generate much heat at all,” Duplantis said.

The company also is relearning how to track and trend storage capacity needs; with full-time data reduction, measuring capacities has become a bit tricky.
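As a rough illustration of why trending gets tricky, here is a small Python sketch. The 1.8 PB raw figure echoes the number above, but the used capacity, logical data written, and reduction ratio are illustrative assumptions, not ConocoPhillips measurements.

```python
# Why capacity planning changes with full-time data reduction: the usable ("effective")
# capacity depends on a dedupe/compression ratio that moves with the data mix.
raw_total_tb = 1800        # installed raw flash (roughly the 1.8 PB figure above)
raw_used_tb = 1200         # raw capacity actually consumed after reduction (assumed)
logical_written_tb = 3000  # what the hosts believe they have stored (assumed)

ratio = logical_written_tb / raw_used_tb        # current reduction ratio, here 2.5:1
effective_total_tb = raw_total_tb * ratio       # only holds if future data reduces as well
remaining_logical_tb = effective_total_tb - logical_written_tb

print(f"Current reduction ratio: {ratio:.1f}:1")
print(f"Projected effective capacity: {effective_total_tb:.0f} TB logical")
print(f"Headroom if the ratio holds: {remaining_logical_tb:.0f} TB logical")
```

The forecast is only as good as the assumption that new data will reduce at the same ratio, which is exactly the part that is hard to trend.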

While some argue that flash has a limited lifecycle, ConocoPhillips has experienced improved SAN storage reliability, Duplantis said. 

Server consolidation

On the compute side, ConocoPhillips deployed faster, more powerful servers, along with the control-plane technology that automates the management of CPU and memory workloads. Virtual server densities shot up dramatically, from 20:1 to 50:1.
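A quick bit of arithmetic shows what that density jump means for the number of hosts under management; the 2,000-VM fleet size below is an illustrative assumption, not a company figure.

```python
# Host count needed for a hypothetical 2,000-VM fleet at the old and new densities.
import math

vms = 2000
hosts_before = math.ceil(vms / 20)   # 20:1 density -> 100 hosts
hosts_after = math.ceil(vms / 50)    # 50:1 density -> 40 hosts
print(hosts_before, hosts_after)     # 100 40, i.e. 60% fewer hosts to manage
```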

The control-plane technology, from a startup, provides a level of optimization that goes beyond human scale, according to Duplantis. Combined with the flash storage, it’s helped cut performance issues to near zero.

“You really can’t just stick with the mainstream players,” he advised. “In the industry today, a lot of the true innovation is coming out of the startup space.”

Lessons learned

While the data center modernization project went smoothly for the most part, without disrupting end users, there were some hiccups with the initial flash deployment. Duplantis said the company was pleased with the support they received from the vendor, which was especially important given that the vendor was newer.

Internally, the data center transformation did require a culture shift for the IT team. IT administrators become attached to the equipment they manage, so they need to see a lot of proof that the new technology is reliable and easy to manage, Duplantis said.

“Today, we understand mistakes are made and technology can fail,” he said. “Once they saw they could take a chance and wouldn’t be in trouble if it didn’t work perfectly, they could breathe easy.”

The fact that jobs were saved amid the economic downturn, despite all the cost-cutting measures, turned employees into champions for the new technology, he said. “They see they’re part of the process that helped save jobs, save costs, and increase reliability.”

Looking ahead

ConocoPhillips plans to continue to right-size its storage and virtual server environments; the process is now just part of the corporate DNA. On the virtual side, the team examines the number of hosts every month and decides whether to keep them on premises or queue them for the cloud, Duplantis said.

The team also is working to build up its cloud capability to ensure it’s ready when the economy picks up and the company increases its drilling activity. “We want to be nimble and agile when the business needs it,” he said.




Hot Storage Skills For The Modern Data Center


The world of data storage is evolving faster than dinosaurs after the asteroid struck. Much of the old storage “theology” is on the chopping block as we move to a world of solid-state, software-defined, open source, cloudy appliances and leave RAID arrays behind. That inevitably means the skills needed to be a successful storage administrator are changing too.

Let’s first look at some timelines. Solid state is already mainstream, and 2017 will see a massive jump in usage as 3D NAND hits its stride. With the industry promising 100 TB 2.5-inch SSDs in 2017, even bulk storage is going to shift away from hard-disk drives. Software-defined storage (SDS) is really just getting started, but if its networking equivalent (SDN) is any guide, we can expect to see it gain traction quickly.

Open source code, such as Ceph and OpenStack, is already a recognized business alternative. Cloud storage today is mainstream as a storage vehicle for cold data, but still emerging for mission-critical information. This year, we can expect OpenStack hybrid clouds to transition to production operations with the arrival of new management tools and approaches to storage.

Coupled with these storage changes are several transitions under way in servers and networking. The most important is the migration of virtual instances to the container model. Containers not only impact server efficiency; the ability to manage them and to integrate data and network storage resources across the hybrid environment is going to be an in-demand skill in the next-generation data center.

One poorly understood but important issue is how to tune performance in the new environment. We are still getting the wheels to turn on much of this new technology, but at some point the realization will hit that a well-tuned data management approach will prevent many of the performance and security ills that could otherwise arise.

In this environment, demand for many traditional storage skills will decline. With cloud backup and archiving rapidly becoming standard, anything to do with traditional backup and tape libraries has to top the list of skills on the way out. Tape has been declared dead regularly for decades, but now the low prices and built-in disaster recovery benefits of the cloud make any tape-based approach impractical.

RAID-based skills are in the same boat. Array sales are dropping off as small Ethernet appliances make for more flexible solutions. In fact, the block-IO model, which struggles to scale, is in decline, replaced by REST-based object storage. Skills ranging from building Fibre Channel SANs to managing LUNs and partitions will be needed less and less as the traditional SAN declines, though IT is conservative, so the SAN will fade away rather than disappear overnight.

NAS access is in many ways object storage with a different protocol for requesting the objects. While the file model will tend to stick around, just as block-IO will take time to go away, it will increasingly be offered on an object platform, which means that a NAS admin will need to become skilled in object storage approaches.
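For admins coming from a NAS or SAN background, the day-to-day shift looks something like the following Python sketch, which writes and reads an object through an S3-compatible REST API (for example, a Ceph RADOS Gateway). The endpoint URL, credentials, and bucket and key names are placeholders.

```python
# Storing and retrieving data as objects over an S3-compatible REST API,
# the access pattern that increasingly replaces LUNs and file shares.
# Endpoint, credentials, and names below are placeholders for your own environment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:7480",  # placeholder S3-compatible gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="archive")
s3.put_object(
    Bucket="archive",
    Key="projects/2017/q1-report.pdf",
    Body=b"...file bytes...",
)

obj = s3.get_object(Bucket="archive", Key="projects/2017/q1-report.pdf")
data = obj["Body"].read()
print(f"Retrieved {len(data)} bytes from the object store")
```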

