6 Ways to Recycle Your IT Gear for Earth Day


We all love our smartphones, computers, tablets, and gadgets. Some of us wait in long lines the moment the latest tech hits the shelves, while others upgrade only when our old devices finally kick the bucket. Either way, we are all inevitably left with obsolete technology that we need to discard. The hardware, batteries, cables, and accessories often become burdensome because we are not sure how to recycle them. As digital transformation continues to permeate IT professionals’ data centers, the same is true of legacy infrastructure that is either rendered obsolete by new technology like cloud computing or is simply due for an upgrade.

Recycling properly can take time that IT professionals may not have since they’re busy keeping organizational processes running smoothly, which means the environment often takes a backseat as old tech collects dust in the supply closet.

In the spirit of Earth Day this Sunday, SolarWinds polled its THWACK community of more than 145,000 IT professionals and collected their best tips and tricks for recycling or disposing of older hardware in an environmentally friendly way.

Here are some of the best ways to reuse and recycle old technology this Earth Day, along with advice on how to be more green by reducing your data center footprint.

(Image: ipopba/iStock)




Data Protection in the Public Cloud: 6 Steps


While cloud security remains a top concern in the enterprise, public clouds are likely to be more secure than your private computing setup. This might seem counterintuitive, but cloud service providers enjoy economies of scale that allow them to spend much more on security tools than any large enterprise, while the cost of that security is diluted across millions of users to fractions of a cent.

That doesn’t mean enterprises can hand over all responsibility for data security to their cloud provider. There are still many basic security steps companies need to take, starting with authentication. While this applies to all users, it’s particularly critical for sysadmins: a password compromise on an admin’s phone could be the equivalent of handing over the corporate master keys. For admins, multi-factor authentication is essential for secure operations, and smartphone-based biometrics are the latest wave in the second or third factor of that authentication. There are a lot of creative strategies here.
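To make the idea concrete, here is a minimal sketch of verifying a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It assumes the third-party pyotp library, and the secret shown is a placeholder for illustration, not a real credential.

```python
# pip install pyotp
import pyotp

# Each admin gets a per-account secret, provisioned once (often via QR code).
# "JBSWY3DPEHPK3PXP" is a placeholder base32 secret for illustration only.
secret = "JBSWY3DPEHPK3PXP"
totp = pyotp.TOTP(secret)

# The authenticator app and the server derive the same six-digit code
# from the shared secret and the current 30-second time window.
print("Current code:", totp.now())

# On login, verify the submitted code; valid_window=1 tolerates one
# step of clock drift between the phone and the server.
submitted = input("Enter the code from your authenticator app: ")
print("Accepted" if totp.verify(submitted, valid_window=1) else "Rejected")
```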

Beyond guarding access to cloud data, what about securing the data itself? We’ve heard of major data exposures that occurred when a set of instances was deleted but the corresponding data wasn’t. Those orphaned files linger, unmonitored and often unprotected, until someone stumbles across them and finds some interesting reading. This is pure carelessness on the part of the data owner.

There are two answers to this issue. For larger cloud setups, I recommend a cloud data manager that tracks all data and spots orphan files. That should stop the wandering buckets, but what about the case when a hacker gets in, by whatever means, and can reach useful, current data? The answer, simply, is good encryption.
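To give a flavor of what such a tool does, here is a minimal sketch that flags AWS S3 buckets missing from a known inventory. The bucket names and the KNOWN_BUCKETS list are hypothetical, and a real cloud data manager would also track objects, owners, and access policies.

```python
# pip install boto3
import boto3

# Hypothetical inventory of buckets that map to live workloads.
KNOWN_BUCKETS = {"prod-app-data", "analytics-archive"}

s3 = boto3.client("s3")

# Any bucket not tied to a live workload is a candidate orphan.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    if name in KNOWN_BUCKETS:
        continue
    # Peek at the contents so the report shows whether data is at risk.
    listing = s3.list_objects_v2(Bucket=name, MaxKeys=1)
    status = "holds data" if listing.get("KeyCount", 0) else "empty"
    print(f"Possible orphan: {name} ({status})")
```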

Encryption is a bit more involved than running PKZIP on a directory. AES-256 encryption or better is essential, and key management is crucial: having one admin hold the only key is a disaster waiting to happen, while writing the key on a sticky note goes to the opposite extreme. One option offered by cloud providers is drive-based encryption, but this fails on two counts. First, drive-based encryption usually has only a few keys to select from and, guess what, hackers can readily find a list of them on the internet. Second, the data has to be decrypted by the network storage device to which the drive is attached, then re-encrypted (or not) as it’s sent to the requesting server. There are lots of security holes in that process.

End-to-end encryption is far better: the data is encrypted and decrypted with a key held by the requesting server, so it never travels or rests in the clear. This stops downstream security vulnerabilities from being an issue while also adding protection against packet sniffing.
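As a minimal sketch of what that looks like in practice, the snippet below encrypts data with AES-256-GCM before it ever leaves the server, using Python’s third-party cryptography package. The payload is a placeholder, and a production system would layer proper key management on top.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key. In production this would come from a key
# management service, never from a sticky note or a lone admin.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"quarterly financials"  # placeholder payload
nonce = os.urandom(12)               # must be unique per message

# Encrypt locally; only ciphertext ever reaches the cloud store.
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption happens only on a server that holds the key. GCM also
# authenticates the data, so any tampering raises an exception.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```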

Data sprawl is easy to create in the cloud, and it opens up another security risk, especially when a great deal of cloud management is decentralized to departmental computing or even to individual users. Cloud data management tools address this much better than written policies. It’s also worthwhile to consider adding global deduplication to the storage management mix, since it reduces the exposure footprint considerably.
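The mechanism behind global deduplication is straightforward: identical chunks of data are detected by content hash and stored only once. Here is a toy sketch of the idea; real systems chunk data far more cleverly and index hashes across the whole storage pool.

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once, keyed by its SHA-256 digest."""
    store = {}   # digest -> the single stored copy of the chunk
    refs = []    # per-chunk references into the store
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        refs.append(digest)
    return store, refs

# Three logical chunks, two of them identical, collapse to two stored copies.
store, refs = dedupe([b"VM image block", b"VM image block", b"log data"])
print(f"{len(refs)} logical chunks -> {len(store)} stored chunks")
```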

Finally, the whole question of how to back up data is in flux today. Traditional backup and disaster recovery have moved from in-house tape and disk methods to the cloud as the preferred storage medium. The question now is whether a formal backup process is the proper strategy, as opposed to snapshot or continuous backup systems. The snapshot approach is growing, due to the value of small recovery windows and limited data loss exposure, but there may be risks in not having separate backup copies, perhaps stored in different clouds.
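For instance, cloud snapshots can be scripted in a few lines. This sketch creates a point-in-time snapshot of an AWS EBS volume with boto3; the volume ID is a placeholder.

```python
# pip install boto3
import boto3

ec2 = boto3.client("ec2")

# "vol-0123456789abcdef0" is a placeholder volume ID.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly point-in-time snapshot",
)

# Block until the snapshot is usable for recovery.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot ready:", snapshot["SnapshotId"])
```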

On the next pages, I take a closer look at ways companies can protect their data when using the public cloud.

(Image: phloxii/Shutterstock)




6 Ways to Transform Legacy Data Storage Infrastructure


So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever — and generally inexpensive — ways to answer the question, keep your job, and even get a pat on the back.

SSDs and flash are game-changers, so they need to be incorporated into your storage infrastructure. SSDs beat enterprise-class hard drives from a cost perspective because they speed up your workload and reduce the number of storage appliances and servers needed. It’s even better if your servers support NVMe, since the interface is becoming ubiquitous and will replace both SAS and (a bit later) SATA, simply because it’s much faster and carries lower overhead.

As for RAID arrays, we have to face the harsh reality that a RAID controller can only keep up with a few SSDs. The answer is either an all-flash array, with the RAID arrays kept for cool or cold secondary storage, or a move to a new architecture based on either hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 storage in a system. They are designed to bolt onto an existing SAN and require minimal configuration changes to function. Typically, all-flash boxes have smaller capacities than the RAID arrays they front, but they have enough I/O cycles to do near-real-time compression, coupled with the ability to down-tier (demote) colder data to the old RAID arrays.

With an all-flash array, which isn’t outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combination of flash and secondary storage that usually delivers 5X effective capacity due to compression. Just tell the CEO how many RAID arrays and drives you didn’t buy. That’s worth a hero badge!

The idea of a flash front end works for desktops, too. Use a small flash drive for the OS (the C: drive) and store colder data on those 3.5” HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has evolved into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with the other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn’t that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality RING software, the result is a decent object storage solution, especially if compression and global deduplication are supported.
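Once such a cluster is up, applications talk to it through a standard object API. Below is a minimal sketch using boto3 against a Ceph RADOS Gateway’s S3-compatible endpoint; the endpoint URL, credentials, and bucket name are all placeholders.

```python
# pip install boto3
import boto3

# Placeholder endpoint and credentials for a Ceph RADOS Gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.example.local:7480",
    aws_access_key_id="CHANGE_ME",
    aws_secret_access_key="CHANGE_ME",
)

# Object storage reuses the familiar bucket/object model.
s3.create_bucket(Bucket="archive")
s3.put_object(Bucket="archive", Key="backups/db.dump", Body=b"placeholder")
print(s3.get_object(Bucket="archive", Key="backups/db.dump")["Body"].read())
```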

Likely by now, you are using public clouds for backup. Consider “perpetual” storage using a snapshot tool or continuous backup software to reduce your RPO and RTO. Use multi-zone operations in the public cloud to converge DR onto the perpetual storage setup, as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.
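As one way to picture the multi-zone piece, this sketch copies an existing AWS EBS snapshot to a second region so a regional failure doesn’t take the backups with it; the snapshot ID and regions are placeholders.

```python
# pip install boto3
import boto3

# Run the copy from the destination region so a regional outage
# cannot take the backups down along with the primary data.
ec2_west = boto3.client("ec2", region_name="us-west-2")

copy = ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder ID
    Description="DR copy of nightly snapshot",
)
print("DR snapshot:", copy["SnapshotId"])
```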

On the software front, the world of IT is migrating to services-centric software-defined storage (SDS), which allows scaling and chaining of data services via a virtualized microservice concept. Even older SANs and server drives can be pulled into the methodology, with software making all the legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If it prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, run it through one of the “zip” programs, and choose the encrypted-file option. That saves a lot of space and gives you encryption in one pass.
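Here is a minimal sketch of that compress-and-encrypt step. It assumes the third-party pyzipper library, which adds AES support to the standard zipfile interface; the archive name, file contents, and password are placeholders.

```python
# pip install pyzipper
import pyzipper

PASSWORD = b"use-a-real-secret-here"  # placeholder password

# Compress cold data and encrypt it with AES in one pass.
with pyzipper.AESZipFile(
    "cold-data.zip", "w",
    compression=pyzipper.ZIP_LZMA,
    encryption=pyzipper.WZ_AES,
) as archive:
    archive.setpassword(PASSWORD)
    archive.writestr("old_reports.csv", b"placeholder cold data")

# Reading the archive back requires the same password.
with pyzipper.AESZipFile("cold-data.zip") as archive:
    archive.setpassword(PASSWORD)
    print(archive.namelist())
```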

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.

(Image: Production Perig/Shutterstock)




6 Hot Tech Trends That Will Impact the Enterprise in 2018


The start of a new year always brings a flood of forecasts from technology pundits for what might happen in the next 12 months. For some reason, 2018 triggered even more prognostications from tech experts than usual. We received dozens of predictions for networking, storage, and data center trends that IT pros should expect to see this year.

After sorting through them, we noticed a pattern: many experts predict more of the same. The trends and hot technologies of 2017, such as machine learning and automation, will continue to influence IT infrastructure in 2018, but the pace and intensity of innovation and adoption seem likely to increase.

“It’s no secret that AI and machine learning are driving a lot of the innovation across the various ecosystems and technology domains that IT cares about,” Rohit Mehra, program VP of network infrastructure at IDC, said in a webcast on the firm’s 2018 predictions for worldwide enterprise infrastructure.

In fact, the rapid incorporation of AI into the workplace will mean that by 2021, more than half of enterprise infrastructure will use some form of cognitive and artificial intelligence to improve productivity, manage risk, and reduce costs, according to IDC.  

To be sure, 2018 will be another year of rapid change for IT infrastructure. Read ahead for six key tech trends that infrastructure pros should keep an eye on in the months ahead.

(Image: alleachday/Shutterstock)




6 Ways SSDs Are Cheaper Than Hard Drives


With all the hype and counter-hype on the issue of solid-state drives versus hard-disk drives, it’s a good idea to step back and look at the whole pricing picture. This is a confluence of the relative cost per TB of flash die versus HDD assemblies, the impact of SSD performance on server count for a given workload, and the differential in markups by OEM vendors to their end users.

The capacity of flash die has been increasing at an explosive rate over the last year. The “simple” concept of stacking flash cells in the third dimension, coupled with stacking these 3D die on top of each other to make a “super-die,” has grown capacity by as much as 256 times per flash chip. To put this in perspective, HDD capacity took more than 20 years to achieve the kind of growth SSDs have managed in a single year.

I believe SSDs beat HDDs in most use cases today based on total cost of ownership. I’m not just talking about power savings, which typically run $10 or $12 per year. SSDs are blindingly fast, and that makes jobs run fast, too. The result is that you need fewer servers, and in many cases those savings offset the additional cost of the SSDs.
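To make the server-count argument concrete, here is a toy TCO calculation; every figure in it is a hypothetical illustration, not a measured benchmark.

```python
# Toy TCO comparison: all figures are hypothetical illustrations.
SERVER_COST = 8_000                 # per server, USD
HDD_SERVERS, SSD_SERVERS = 10, 7    # servers needed for the same workload
HDD_STORAGE_COST = 30_000           # drive cost for the HDD configuration
SSD_STORAGE_COST = 55_000           # drive cost for the SSD configuration
POWER_SAVING_PER_YEAR = 12          # USD saved per drive swapped to SSD
DRIVES, YEARS = 40, 5

hdd_tco = HDD_SERVERS * SERVER_COST + HDD_STORAGE_COST
ssd_tco = (SSD_SERVERS * SERVER_COST + SSD_STORAGE_COST
           - DRIVES * POWER_SAVING_PER_YEAR * YEARS)

# Needing three fewer servers offsets the premium on the SSDs here.
print(f"HDD setup: ${hdd_tco:,}   SSD setup: ${ssd_tco:,}")
```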

TCO calculations and the cost comparison between SSDs and HDDs are complicated by model class and by vendors’ markup approaches. Traditionally, we distinguished enterprise drives with dual-port SAS interfaces from nearline drives with SATA, but this distinction has fallen apart with SSDs. Many storage appliances don’t need enterprise dual-port drives, while NVMe is replacing SAS, and soon SATA, as the SSD interface. For many applications, low-cost SSDs are adequate for the job, which changes buying patterns.

Typical OEM vendor markups run as high as 14X for SSDs, making them even more expensive than raw cost would suggest, compared with HDDs, which typically see markups of 10X or less. COTS systems are starting to drive these markups down, while buying directly from drive makers (if you are a major cloud service provider) or from master distributors (for mere mortals) opens the door to much lower SSD prices.

There are underlying trends in IT that factor into the cost of storage. First, we are rapidly migrating away from the traditional mainstay of storage, the RAID array, to more compact storage appliances that carry much more software content and, with fewer SSDs, are able to deliver much more data. Second, the new storage appliances use the high bandwidth of SSDs to compress stored data as a background job; HDDs are too slow to do this. The result is much more storage for the same price.
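As a rough illustration of that effective-capacity effect, the snippet below compresses a sample block with Python’s built-in zlib and reports the ratio; real appliance ratios depend heavily on the data, and the sample here is hypothetical.

```python
import zlib

# Hypothetical sample block: repetitive, structured data compresses well.
block = b"2018-01-02,server42,OK,latency=3ms\n" * 4096

compressed = zlib.compress(block, level=6)
ratio = len(block) / len(compressed)

print(f"Raw: {len(block)} bytes, compressed: {len(compressed)} bytes")
print(f"Effective capacity multiplier: {ratio:.1f}x")
```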

Let’s look more closely at these factors that make SSDs more economical in the long run.

(Image: jules2000/Shutterstock)


