
Combining Data Center Innovations to Reduce Ecological Footprints


The big tech companies are vying for positive coverage of their environmental initiatives. Microsoft just promoted its achievements in renewable energy, which will comprise 60 percent of the company’s electricity usage by the end of the year. Facebook made headlines for a forthcoming 100 percent renewable-powered facility in Los Lunas, New Mexico, while both Apple and Google claim 100 percent carbon neutrality.

These green milestones are important, but renewables represent only one environmental solution for the data center industry. Energy-intensive technologies, such as AI and blockchain, complicate the quest for clean, low-impact electricity generation. Additionally, the sector remains a large consumer of the planet’s other resources, including water and raw materials. Unfortunately, the search for energy efficiency can negatively affect other conservation efforts.

Current State of Play on the Search for Energy Efficiency

A case in point is adiabatic cooling, which evaporates water to ease the burden on HVAC systems. At a time when 2.7 billion people suffer from water scarcity, this approach can lead to intense resource competition, such as in Maharashtra, India, where drinking water had to be imported as thirsty colocation facilities proliferated.

Bolder strategies will be necessary to deliver the compute power, storage capacity, and network connectivity the world demands with fewer inputs of fossil fuels, water, rare earth metals, and other resources. Long range, there is hope for quantum computing, which has the potential to slash energy usage by more than 20 orders of magnitude over conventional technologies. This could cut Google’s annual burn rate, for instance, from gigawatt-hours to the nanowatt-hour range, reducing the need to produce more solar panels, wind turbines, and hydropower stations along the way.

Commercial launches – such as IBM’s Q System One – notwithstanding, the quantum moonshot still lies at least a decade away by most accounts, and the intervening barriers are significant. Quantum calculations remain vulnerable to complex errors, new programming approaches are required, and the nearest-term use cases tend toward high-end modeling, not replacing the standard web server or laptop.

Green Technology Solutions Closer to Earth

Fortunately, other technologies are nearer at hand and more accessible for the average data center, colocation provider, or even regional office. For example, AI-based tools are being trained as zombie killers, using machine learning to improve server allocation and power off the estimated 25% of physical servers and 30% of virtual servers that are running but doing nothing. Repurposing underutilized IT assets not only saves energy, it also delays new equipment purchases.
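To make the idea concrete, here is a minimal sketch of the kind of utilization screening such tools automate. The thresholds, metric names, and sample data are illustrative assumptions, not values from any particular DCIM or machine-learning product.

```python
# Minimal sketch: flag "zombie" candidates from utilization samples.
# The 5% CPU and 1% network thresholds and the sample data are illustrative
# assumptions, not values from any specific DCIM or machine-learning product.

from statistics import mean

def find_zombie_candidates(servers, cpu_threshold=5.0, net_threshold=1.0):
    """Return servers whose average CPU and network utilization stayed
    below the thresholds across the whole sampling window."""
    candidates = []
    for name, samples in servers.items():
        avg_cpu = mean(s["cpu_pct"] for s in samples)
        avg_net = mean(s["net_pct"] for s in samples)
        if avg_cpu < cpu_threshold and avg_net < net_threshold:
            candidates.append(name)
    return candidates

# Example: 30 days of daily utilization averages per server (made-up numbers).
fleet = {
    "app-01": [{"cpu_pct": 42.0, "net_pct": 15.0}] * 30,
    "legacy-07": [{"cpu_pct": 1.2, "net_pct": 0.3}] * 30,
}
print(find_zombie_candidates(fleet))  # ['legacy-07']
```

In practice the interesting part is what a human does with the candidate list: verifying that a quiet server really has no dependents before powering it off or repurposing it.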


Then there is liquid cooling, well known from the industry’s mainframe origins. Although many companies won’t be able to redesign their facilities along the lines of Facebook’s custom builds, hardware manufacturers are delivering off-the-shelf liquid-cooled products. Rear-door heat exchangers and direct-to-chip cooling can help lower PUE from 1.5 or more down toward 1.1, and immersion cooling can deliver power savings of up to 50 percent. These technologies also enable greater density, which means doing more with less space, a good thing given that land, too, is a natural resource.
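As a rough illustration of what that PUE improvement means in practice, here is a back-of-the-envelope calculation. The 1 MW IT load is an assumed figure chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope facility energy at two PUE levels for the same IT load.
# The 1 MW (1,000 kW) IT load is an assumed figure for illustration only.

it_load_kw = 1000  # IT equipment draw

for pue in (1.5, 1.1):
    facility_kw = it_load_kw * pue           # PUE = total facility power / IT power
    overhead_kw = facility_kw - it_load_kw   # cooling, power distribution, etc.
    print(f"PUE {pue}: total {facility_kw:.0f} kW, overhead {overhead_kw:.0f} kW")

# PUE 1.5: total 1500 kW, overhead 500 kW
# PUE 1.1: total 1100 kW, overhead 100 kW  (about 80% less overhead energy)
```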

Consolidation trends will shift more of the environmental burden to the few outfits with pockets deep enough to do the seemingly impossible: sink data centers in the ocean for natural cooling, launch them into space, and “accelerate” workloads with the earliest, sure to be exorbitantly expensive, quantum computers ready for mission critical applications.

What’s Next for the “Green” Data Center

None of today’s available technologies, from AI-driven DCIM systems to advanced load balancers, is a panacea. With blockchain’s intense processing demands and consumers’ insatiable appetite for technology, among other pressures, the IT industry faces numerous forces working against its efforts to shrink resource consumption and carbon emissions.

While we await a breakthrough with the exponential impact of quantum computing, we will have to combine various solutions to drive incremental progress. In some cases, that will mean a return of cold storage to move rarely accessed information off powered storage arrays in favor of tape backups and similar “old school” methods. In others, it will mean allowing energy efficiency and component recyclability to tip the balance during hardware acquisition decisions. And in still others, newer edge computing applications may integrate small, modular pods that work on solar-wind hybrid energy systems.

Hopefully, the craving these dominant tech players display for positive environmental headlines, paired with a profit motive rewarding tiny efficiency gains achieved at hyperscale, will continue to propel advances in green solutions that can one day be implemented industry-wide.




How to Get Your Data Center Ready for 100G


Today, the focus for many data centers is accommodating 100 Gbps speeds: 28 percent of enterprise data centers have already begun their migration.

Here are three considerations to guide upgrade projects that account for both the current and future states of your data center.

1) Understand your options for 100G links

Understanding the options for Layer 0 (physical infrastructure) and what each can do will help you determine which best matches your needs and fits your budget.

There are several options, depending on your existing fiber plant and the lengths of your runs.

For example, if you’re at 10G right now and you have a fiber plant of OM3 with runs up to 65 meters, and you’re trying to move to 100G, you have two options (SWDM4 and BiDi) for staying with your legacy infrastructure.

On the other hand, if you’re at 10G and trying to get to 100G and you have a fiber plant of OM4 with many runs longer than 100 meters, you’ll need to upgrade these runs to single-mode fiber.
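A rough sketch of the decision logic in those two scenarios might look like the following. The distance cutoffs and optic names are simplified assumptions; actual reach depends on the specific transceivers and fiber grade, so check the datasheets before committing.

```python
# Simplified sketch of the 100G media-selection logic described above.
# Distance cutoffs are assumptions; verify against your optics' datasheets.

def options_for_100g(fiber_type, run_length_m):
    """Return candidate 100G optics for an existing fiber run."""
    if fiber_type in ("OM3", "OM4"):
        if run_length_m <= 70:
            return ["100G-SWDM4", "100G-BiDi"]      # reuse the multimode plant
        return ["upgrade this run to single-mode fiber"]
    if fiber_type == "SMF":
        if run_length_m <= 500:
            return ["PSM4 (parallel)", "CWDM4 (duplex)"]
        return ["CWDM4 (duplex)"]                    # PSM4 reach ends around 500 m
    return ["unknown fiber type"]

print(options_for_100g("OM3", 65))    # legacy multimode still works
print(options_for_100g("OM4", 150))   # needs single-mode
```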

But there’s more.

For the longer runs, you have an option of using duplex or parallel SMF runs – which to choose? For “medium” length runs (greater than 100m but less than 500 meters), the extra cost of installing parallel vs. duplex SMF is moderate, while the savings in being able to use PSM4 optics instead of CWDM4 can be large (as much as 7x).

Bottom line: do your own cost analysis. And don’t forget to consider the future: parallel SMF has a less expensive upgrade path to 400G. Added bonus: the individual fiber pairs in parallel fibers can be separated in a patch panel for higher-density duplex fiber connections.
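Here is a sketch of the kind of per-link comparison that cost analysis might involve. Every price and length below is a placeholder assumption; substitute quotes from your own vendors and installers before drawing conclusions.

```python
# Illustrative per-link cost comparison for a 300 m single-mode run at 100G.
# Every price below is a placeholder assumption; substitute real quotes.

RUN_LENGTH_M = 300
FIBER_COST_PER_M = 0.15          # assumed installed cost per fiber-metre

links = {
    "Duplex SMF + CWDM4":  {"fibers": 2, "optic_cost_per_end": 700},
    "Parallel SMF + PSM4": {"fibers": 8, "optic_cost_per_end": 250},
}

for name, cfg in links.items():
    optics = 2 * cfg["optic_cost_per_end"]                   # one optic per end
    fiber = cfg["fibers"] * RUN_LENGTH_M * FIBER_COST_PER_M  # extra strands cost more
    print(f"{name}: ${optics + fiber:,.0f} per link")

# Duplex SMF + CWDM4:  $1,490 per link
# Parallel SMF + PSM4: $860 per link
```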

2) Consider your future needs before choosing new fiber

Again, it’s best to upgrade your data center with the future in mind. If you’re laying new fiber, be sure to consider which configuration will offer the most effective “future proofing.”

For long runs, you may be better off using parallel SMF. However, there’s a point at which the cost of the extra fiber outweighs the benefit of cheaper optics, so be sure to do the calculations for your own data center.

And remember: planning for future needs is a business decision as much as a technical one, so you’ll want to consider questions like these:

How soon will you need to upgrade to 400G, based on the elapsed time between your 1G-to-10G and 10G-to-100G upgrades? (A rough way to extrapolate is sketched after these questions.)

Is upgrading to 100G capability right now the best move, given the planned direction of your business?
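As a planning aid for the first question, the sketch below naively extrapolates the next upgrade window from past upgrade intervals. The years are made-up examples, and the output is a conversation starter, not a forecast.

```python
# Naive extrapolation of the next upgrade window from past upgrade intervals.
# The years below are made-up examples; treat the output as a planning prompt,
# not a forecast.

upgrade_years = {"1G": 2008, "10G": 2014, "100G": 2019}

years = list(upgrade_years.values())
intervals = [later - earlier for earlier, later in zip(years, years[1:])]
avg_interval = sum(intervals) / len(intervals)

print(f"Average interval between upgrades: {avg_interval:.1f} years")
print(f"Rough 400G planning horizon: around {years[-1] + avg_interval:.0f}")
```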

3) Consider the evolution of data center technology

Technology solutions get cheaper the longer they’re on the market. So if your data center can wait two to three years to make upgrades, then waiting may be the most cost-effective option.

For example, a smaller form factor is coming out soon. In the next two to three years, 100G will be moving to the SFP-DD form factor, which is denser than QSFP28, meaning more ports per rack unit; that's good for tight server closets and for anyone paying by the square foot for colocation.

SFP-DD ports are also backwards-compatible with SFP+ modules and cables, allowing users to upgrade at their own pace. So even if you’re not ready for all 100G ports, you can upgrade the switch but still use your existing 10G SFP+ devices until you need to upgrade them to 100G.

Proceed with caution

Upgrading a data center means managing a lot of moving pieces, so there’s plenty of room for things to go wrong. Consider this example: a data center manager noticed that his brand-new 25G copper links (server to switch) were performing poorly – dropping packets and losing the link.

Remote diagnostics showed no problems, so he decided to physically inspect the new installation. He found that, since the last inspection, the installers had used plastic cable ties to attach all the cables to the racks. That was fine for old twisted-pair cables, but the new 25G twinax copper cables are highly engineered and have strict specs on bend radius and crush pressure.

The tightly cinched cable ties bent the cables and put pressure on the jacketing, which actually changed the cables’ properties and caused intermittent errors. All the cables had to be thrown away and replaced – obviously, not a very cost-effective endeavor.

So, if you’re weighing your options, think through performance, cost, loss budgets, distance, and the other factors above as you upgrade your data center to 100G.




New World of Edge Data Center Management


Big changes are happening with data center management as emphasis shifts from core to edge operations. The core is no less important, but the move to the edge opens new challenges as the environment becomes more complex. IT management roles, and the supporting tools and infrastructure, must change in line with the transition to new edge data centers.

A new world of data center management is being driven by rapid growth in edge computing environments and “non-traditional” IT, with analysts forecasting that 80% of enterprise applications will be cloud based by 2025. Underpinning these drivers is a hunger for data that yields actionable insights and an increased focus on customer experience. Whether for internal users or external clients, services will be hosted and accessed from multiple locations. Where and how a service is delivered is of no concern to the user; only quality of service matters.

Minimize Complexity in Edge Data Center Management for Better Business Outcomes

For IT teams, the shift is away from equipment management to application provision and service delivery – wherever and whenever the user wants it. The challenge for IT professionals is to deliver a seamless user experience.

This new focus is accelerated by both internal and external forces. Internally, business has traditionally had little interest in IT operations. Today, there is even less concern about what IT is – the business is really only interested in what it does and how much it costs. IT teams are being told to focus on running applications on behalf of the business, not operating the data center on behalf of the IT department. At the same time, the business expects the management of the infrastructure assets to be automated and efficiently provisioned from the centralized hub to the edge.

Externally, rapid and monumental changes in multi-cloud delivery, edge computing, and AI are creating new challenges and opportunities for IT management. For example, new applications such as AI will ingest data in the cloud, on premises, and at the edge at volumes previously unseen. AI is about driving business value and cannot be constrained by equipment failures or sub-optimal performance. Ensuring the infrastructure is available and performing as required will demand new levels of management and monitoring visibility.
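At its simplest, that visibility means a central process that polls each edge site for health status and surfaces anything degraded in one place. The sketch below illustrates the pattern; the URLs and JSON fields are assumptions for illustration, not the API of any real management platform.

```python
# Minimal sketch: poll a health endpoint at each edge site and surface the
# results in one place. URLs and JSON fields are illustrative assumptions,
# not the API of any real management platform.

import json
from urllib.request import urlopen

EDGE_SITES = {
    "branch-nyc": "http://10.1.0.5:8080/health",
    "branch-sfo": "http://10.2.0.5:8080/health",
}

def collect_status(sites, timeout=2):
    """Return a site -> status map, marking unreachable sites explicitly."""
    report = {}
    for name, url in sites.items():
        try:
            with urlopen(url, timeout=timeout) as resp:
                data = json.load(resp)
            report[name] = data.get("status", "unknown")
        except OSError:
            report[name] = "unreachable"
    return report

if __name__ == "__main__":
    for site, status in collect_status(EDGE_SITES).items():
        print(f"{site}: {status}")
```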

Take Advantage of the Evolving Edge Ecosystem to Meet Business Demands

IT was once relatively simple: keep up with the latest tech industry advances that make it into product sets, and invest wisely in those with the clearest roadmaps. Today, with a focus on business outcomes and fewer resources, you need automation, AI, and supporting technology to help manage both the edge and the data center.

To respond to business’ demands for fast and accurate information, IT as a service has focused on application delivery, not infrastructure management. The choices available on how to deliver a particular application have never been greater and increasingly involve cloud hosting and edge solutions.

The Future Lies with Visibility at the Edge

In a widely read blog post, Gartner’s Dave Cappuccio provided a vision for the future of the data center, declaring that by 2025 the enterprise data center as we know it today will be dead.

Gartner’s obituary for the data center is timely and may prove to be correct. Cappuccio recognizes that it is not yet time to issue the last rites, and he is wise enough not to greatly exaggerate reports of its immediate demise. But there is no exaggeration in reports of the need to transition to the next stage. That starts with gaining visibility of all infrastructure operations from the cloud to the edge. A successful edge data center management strategy should include a cloud-based management platform that offers visibility across the entire IT infrastructure. Coupled with a data lake informed by the expertise of power and cooling specialists, this gives IT teams time to focus on more strategic activities that drive business success.





Data Center Tech ‘Graduation:’ What IT Pros Have Learned


As schools around the country hold graduation ceremonies, classic songs like Green Day’s “Good Riddance (Time of Your Life)” will be sung, played, or reminisced about by students everywhere as they reflect on fond memories and lessons learned in school. Graduation is a symbol of transition and change, a milestone that represents growth, progress, and transformation.

Just as education fosters growth in students, digital transformation drives progress in an organization and ultimately leads to innovations in the data center, but not without a few lessons learned from setbacks and failures.

In the spirit of graduation season, we asked our THWACK IT community to tell us what technology they “graduated” to in 2018. According to the SolarWinds 2018 IT Trends Report, 94% of surveyed IT professionals indicated that cloud and/or hybrid IT is the most important technology in their IT organization’s technology strategy today. But what else have organizations experimented with over the last year? Check out some of the most popular technologies that THWACK community members tell us they have implemented this past year, in their words.





6 Reasons SSDs Will Take Over the Data Center


The first samples of flash-based SSDs surfaced 12 years ago, but only now does the technology appear poised to supplant hard drives in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are as much as 1,000x faster than hard-disk drives for random I/O.

Part of the reason has been a misunderstanding that fixates on individual storage elements and CPUs while overlooking whole systems. It led the industry to focus on cost per terabyte, when the real comparison should have been the total cost of a solution with or without flash. Simply put, most systems are I/O bound, and using flash inevitably means needing fewer systems for the same workload, which typically offsets the price difference.
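A rough sketch of that "fewer systems" arithmetic follows. The IOPS figures and server prices are illustrative assumptions, not benchmark results, but they show how a higher per-drive price can still produce a cheaper overall solution.

```python
# Rough total-cost comparison for an I/O-bound workload.
# The IOPS figures and server prices are illustrative assumptions.

import math

WORKLOAD_IOPS = 400_000

configs = {
    "HDD-based server": {"iops": 20_000,  "cost": 15_000},
    "SSD-based server": {"iops": 200_000, "cost": 20_000},
}

for label, server in configs.items():
    servers_needed = math.ceil(WORKLOAD_IOPS / server["iops"])
    total_cost = servers_needed * server["cost"]
    print(f"{label}: {servers_needed} servers, ${total_cost:,}")

# HDD-based server: 20 servers, $300,000
# SSD-based server: 2 servers, $40,000
```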

The turning point in the storage industry came with all-flash arrays: simple drop-in devices that instantly and dramatically boosted SAN performance. This has evolved into a model of two-tier storage with SSDs as the primary tier and a slower, but cheaper, secondary tier of HDDs.

Applying the new flash model to servers provides much higher server performance, just as price points for SSDs are dropping below enterprise hard drive prices. With favorable economics and much better performance, SSDs are now the preferred choice for primary tier storage.

We are now seeing the rise of Non-Volatile Memory Express (NVMe), which aims to replace SAS and SATA as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far more than its predecessors. In the last year, NVMe pricing has come close to SAS drive prices, making the solution even more attractive. This year, we’ll see most server motherboards supporting NVMe ports, likely as SATA-Express, which also supports SATA drives.

NVMe is internal to servers, but a new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from a server out to arrays of NVMe drives and to all-flash and other storage appliances, complementing, among other things, the new hyper-converged infrastructure (HCI) model for cluster design.

The story isn’t all about performance, though. Vendors have promised to produce SSDs with 32 and 64TB capacity this year. That’s far larger than the biggest HDD, which is currently just 16TB and stuck at a dead-end at least until HAMR is worked out.

The brutal reality, however, is that solid state opens up form-factor options that hard disk drives can’t match. Large HDDs will need to remain in the 3.5-inch form factor. We already have 32TB SSDs in a 2.5-inch size, plus new form factors such as M.2 and the “ruler” (an elongated M.2), which allow a lot of capacity in a small appliance. Intel and Samsung are talking about petabyte-sized storage in 1U boxes.

The secondary storage market is slow and cheap, making for a stronger barrier to entry against SSDs. The rise of 3D NAND and new Quad-Level Cell (QLC) flash devices will close the price gap to a great extent, while the huge capacity per drive will offset the remaining price gap by reducing the number of appliances.

Solid-state drives have a secret weapon in the battle for the secondary tier. Deduplication and compression become feasible because of the extra bandwidth in the whole storage structure, effectively multiplying capacity by factors of 5X to 10X. This lowers the cost of QLC-flash solutions below HDDs in price per available terabyte.
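At its core, deduplication is content-addressed chunk storage: identical chunks are stored once and referenced many times. The minimal sketch below uses fixed-size chunks and SHA-256 digests; production arrays use variable-size chunking, inline compression, and far more metadata, so treat this only as an illustration of the principle.

```python
# Minimal fixed-size-chunk deduplication sketch. Production arrays use
# variable-size chunking, compression, and far more metadata than this;
# the numbers here only illustrate the principle.

import hashlib
import os

CHUNK_SIZE = 4096
chunk_store = {}  # digest -> chunk bytes, each unique chunk stored once

def write_deduplicated(data):
    """Store data as a list of chunk digests, keeping each unique chunk once."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)
        recipe.append(digest)
    return recipe

# Writing the same 1 MiB payload twice consumes the space of one copy.
payload = os.urandom(1024 * 1024)
write_deduplicated(payload)
write_deduplicated(payload)   # second copy adds no new chunks

logical = 2 * len(payload)
physical = sum(len(c) for c in chunk_store.values())
print(f"Dedup ratio: {logical / physical:.1f}x")   # 2.0x
```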

In the end, perhaps in just three or four years, flash and SSDs will take over the data center and kill off hard drives for all but the most conservative and stubborn users.



