Tag Archives: Data

Outdated Data Center Risks: How Organizational Mandates Help


Organizations often run on outdated data centers because they live in fear that an upgrade will cause an outage. While there’s no denying a network outage can be devastating for business, that hesitancy should not stand in the way of upgrading the data center and improving organizational efficiency.

The answer: If organizations get into the routine of regularly updating their data center technology, they can transform their networks to ensure security and efficiency, as well as enable digital transformation.

The dangers of outdated data centers

Ironically, outdated data centers actually increase the risk of creating an outage. Why? From new security measures to expanded storage capabilities, data center changes build over time and often complicate operations.

Older data centers rely on network operators to enact these changes manually – a tedious, time-consuming, and error-prone process. And without a single source of truth documenting all previous changes, the current state of the network drifts away from the architect’s original design intent. Operators brought on to manage the network aren’t always clear on how it was configured, what the original intent was, or why certain changes were made over time, which heightens risk.

When operators are unsure whether the configuration changes they perform may break something, major outages can occur. From a security standpoint, confusion about how the network was configured means operators don’t know whether the network meets the organization’s data requirements or whether any changes might push it out of compliance.

Simply put, outdated data centers are risky and inefficient. Organizations that keep switches and other equipment in service longer than they should spend much more on their data centers in the long run. Often these organizations are forced to rent additional space and draw more power – all while getting lower performance from the network than they would have if they had upgraded.

An outdated data center puts organizations at a competitive disadvantage like never before. Since 2000, 52 percent of Fortune 500 companies have gone bankrupt or been acquired because of digital disruption. At the end of the day, organizations of all sizes require consistent data center updates to remain ahead of the competition.

An organizational mandate leads to automation and efficiency

So, what can organizations with outdated data centers do to improve their networks and technology? Organizations should create mandates requiring upgrades to their data centers at a regular cadence. Without a mandate in place, leaders can be wary of updating their networks because the process of updating, if not done on a regular basis, is perceived as risky.

Organizations need to create a strategic plan for periodic upgrades and ensure the proper steps are taken through ongoing training. This practice should include IT teams investigating the latest technologies and vendors as well as training on the latest switches on the market.

Setting aside a transformation budget and dedicated resources embeds updates in routine processes. When IT teams set explicit plans for modernization every few years, they can improve performance by three to five times, delivering scalable, agile operations at a lower cost.

And if an organization leverages software with automation capabilities, it can take human error out of the loop. This type of self-documenting software works from a single source of truth that records the intent, configurations, and state of the network in a central location. Additionally, if a change pushed across the network has an unintended result, it can be rolled back to a previous version via automation.
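
To make that rollback idea concrete, here is a minimal, hypothetical sketch of a version-tracked configuration store: every applied change becomes a new revision, and reverting simply restores an earlier one. The ConfigStore class and its methods are illustrative assumptions, not any particular vendor’s automation API.

```python
from copy import deepcopy


class ConfigStore:
    """Single source of truth: keeps the original intent plus every revision."""

    def __init__(self, initial_config: dict):
        self.revisions = [deepcopy(initial_config)]  # revision 0 = original intent

    @property
    def current(self) -> dict:
        return deepcopy(self.revisions[-1])

    def apply_change(self, change: dict) -> int:
        """Record a new revision built from the current one plus the change."""
        new_revision = self.current
        new_revision.update(change)
        self.revisions.append(new_revision)
        return len(self.revisions) - 1

    def rollback(self, to_revision: int = -2) -> dict:
        """Revert to a previous revision (default: the one before the latest)."""
        restored = deepcopy(self.revisions[to_revision])
        self.revisions.append(restored)
        return restored


store = ConfigStore({"vlan": 10, "mtu": 1500})
store.apply_change({"mtu": 9000})  # intended change
store.rollback()                   # unintended result? restore the prior state
print(store.current)               # {'vlan': 10, 'mtu': 1500}
```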

Get into the practice of regular updates

A modern data center ensures that processes run smoothly while mitigating potential risk or margin of error. If organizations want to rise to the demands of digital transformation, better protect the security of their data and maintain a competitive edge, they need to create a practice around updating their data center technology.

Mansour Karam is VP Products at Juniper Networks.




Can Your Organization Benefit from Edge Data Centers?


Edge data centers are rapidly gaining popularity for a simple reason: they deliver faster services with minimal latency.

Edge data centers are facilities, positioned close to the customers they serve, designed to efficiently deliver cloud computing resources and cached content to end users. These facilities typically connect to a larger central data center or to multiple data centers. By processing data and services as close as possible to end users, edge computing allows organizations to reduce latency and improve overall performance.
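
As a rough illustration of that proximity advantage, the sketch below routes a request to whichever edge site reports the lowest round-trip time, falling back to the central data center only when no edge site is close enough. The site names and latency figures are hypothetical.

```python
from typing import Optional

# Hypothetical sites and measured round-trip times in milliseconds.
sites = {
    "edge-chicago": 8,
    "edge-dallas": 21,
    "central-dc": 64,
}


def pick_site(rtts: dict, max_edge_rtt_ms: int = 50) -> Optional[str]:
    """Choose the lowest-latency edge site, else fall back to the central DC."""
    edge_rtts = {name: rtt for name, rtt in rtts.items() if name.startswith("edge-")}
    usable = {name: rtt for name, rtt in edge_rtts.items() if rtt <= max_edge_rtt_ms}
    if usable:
        return min(usable, key=usable.get)
    return "central-dc" if "central-dc" in rtts else None


print(pick_site(sites))  # edge-chicago
```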

For IT leaders looking to deliver both resilience and performance, edge computing provided by edge data centers has the potential to be a transformative technology. IT market research firm IDC predicts that the global edge computing market will reach $250 billion by 2024, a compound annual growth rate of 12.5 percent. Gartner, meanwhile, forecasts that by 2025 approximately 75 percent of enterprise-generated data will be created and processed outside of the traditional data center or cloud.

Multiple benefits

Edge data centers are valued for their ability to perform local data gathering and processing while maintaining a high level of availability. Organizations that design applications and business processes to run independently can keep critical business functions alive through failures, said Carl Fugate, cloud and edge network center of excellence lead at business and IT consulting firm Capgemini Americas. “When designed properly, this can mean business as usual or maybe slightly reduced business functionality in the event of simple failures, like the loss of WAN connectivity,” he noted. “There are also key benefits for IoT where the loss of data, or the inability to process data and respond in real time, can render systems ineffective or unavailable.”

Edge data centers can deliver performance and efficiency benefits to offices, teams, retail locations, and other widely distributed sites. “Edge data centers remove much of the complexity that comes with forcing all offices, locations, and workers to go through a non-edge centralized data center,” said David Linthicum, chief cloud strategy officer at Deloitte Consulting.

Edge data centers also appeal to organizations with business or technology functions that can’t rely on conventional WAN data connections, as well as entities that require real-time processing and storage of locally generated data. “We see this in manufacturing, distribution, energy, hospitality, and retail, where services can’t be fully reliant on centralized services,” Fugate said. “We also frequently see this in environments with operational technology (OT) networks, where sites have machines and processes that are interdependent locally.”

Getting started

Organizations considering a move to edge computing should begin their journey by inventorying their applications and infrastructure. It’s also a good idea to assess current and future user requirements, focusing on where data is created and what actions need to be performed on that data. “Generally speaking, the more susceptible data is to latency, bandwidth, or security issues, the more likely the business is to benefit from edge capabilities,” said Vipin Jain, CTO of edge computing startup Pensando. “Focus on a small number of pilot projects and partner with integrators/ISVs with experience in similar deployments.”
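
One way to operationalize that guidance is a simple scoring pass over the application inventory, weighting each workload’s sensitivity to latency, bandwidth, and security. The workloads, scales, and weights below are made-up placeholders, not a prescribed methodology.

```python
# Score each workload's edge suitability on a 1-3 sensitivity scale per factor.
workloads = [
    {"name": "video-analytics", "latency": 3, "bandwidth": 3, "security": 2},
    {"name": "payroll-batch",   "latency": 1, "bandwidth": 1, "security": 3},
    {"name": "pos-terminal",    "latency": 3, "bandwidth": 2, "security": 3},
]

WEIGHTS = {"latency": 0.5, "bandwidth": 0.3, "security": 0.2}


def edge_score(workload: dict) -> float:
    """Weighted sensitivity score; higher suggests a stronger edge candidate."""
    return sum(workload[factor] * weight for factor, weight in WEIGHTS.items())


for w in sorted(workloads, key=edge_score, reverse=True):
    print(f"{w['name']}: {edge_score(w):.1f}")
```

Workloads that score highest would be natural candidates for the small number of pilot projects Jain recommends.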

Fugate recommended examining business functions and processes and linking them to the application and infrastructure services they depend on. “This will ensure that there isn’t one key centralized service that could stop critical business functions,” he said. “The idea is to determine what functions must survive regardless of an infrastructure or connectivity failure.”

Fugate also advised determining how to effectively manage and secure distributed edge platforms. “The consolidation of services to [the] cloud has made management and security easier through the tools offered by cloud platforms that may not be available with some edge deployments,” he observed. “It’s important to take this into consideration, as things such as patching, backups, and security can be much harder to implement and manage at remote sites.”

The key considerations in planning a highly federated data center model are automation, security, and resilience, said Simon Pincus, vice president of engineering at network monitoring company Opengear. “Recent high-profile failures of content delivery networks (CDNs) have shown how much damage can be done to a business if a service isn’t reliable,” he noted. “From the first planning session, organizations should consider failure scenarios and how networks and services will be managed and restored.”

Pincus also suggested that edge network designers should consider separating the management plane from their primary network to allow operations to continue even when primary connectivity is lost across a distributed network. “Ideally, the management plane will support network automation to provide reliable deployment, reconfiguration, and monitoring,” he said.

Takeaway

The diversity and number of platforms available for edge deployments allow for a great deal of flexibility. “In order to ensure the broadest potential for your organization, focus on developing teams and partnerships capable of building cloud-independent solutions,” Jain recommended. “There are a number of common architectures and open-source technologies available that you can use to increase the pace of development and also avoid vendor lock-in.”




Gaining Control Over Data Decay


Time takes its toll on everything, and enterprise data is no exception. As databases expand and multiply, a growing number of organizations are facing the prospect of data decay.

Data decay refers to data that is no longer useful, states Kathy Rudy, chief data and analytics officer for technology research and advisory firm ISG. “This can include not only data that’s outdated, but incomplete, inaccurate or duplicative.”

Data never sleeps

If a house isn’t properly maintained, decay can claim it within just a few years, observes Goutham Belliappa, vice president of AI engineering at IT services and consulting firm Capgemini Americas. “Data decay occurs in much the same way, when a lack of maintenance and continuous attention lead to irrelevant data sets that are no longer useful or are disorganized.”

A typical example of data decay is when a sales or prospecting contact list fails to reflect the fact that key individuals have shifted roles or moved to a different company. “Interacting with decayed lists like this can waste up to 70% of an organization’s prospecting efforts,” Belliappa says. “On the other hand, if some of that energy were diverted to contact list curation, the interaction efficiency could increase by over 300%.”

Data decay can also occur when files are improperly catalogued, particularly when the individuals responsible for retiring a vintage data group are unaware that the asset even exists, notes Robert Audet, director and data management leader at business and technology consulting firm Guidehouse. The same holds true when it’s unclear exactly who is responsible for retiring specific data assets.

Since decay is all but inevitable for many types of data, enterprises should consider deploying management and mastering strategies that are designed to keep pace with the fluid nature of enterprise databases. “Data entropy results in over 70% of B2B data decaying per year,” Belliappa observes. “For example, if B2B contacts are not managed for one year, less than one-third of the contacts will be relevant.”
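
The arithmetic behind that figure is straightforward: at a 70 percent annual decay rate, less than a third of an unmanaged contact list survives a single year. The starting list size below is a hypothetical placeholder; the rate comes from the quote above.

```python
ANNUAL_DECAY_RATE = 0.70  # share of B2B contact data decaying per year (per the quote)
contacts = 10_000         # hypothetical unmanaged contact list

for year in range(1, 4):
    contacts = int(contacts * (1 - ANNUAL_DECAY_RATE))
    print(f"After year {year}: ~{contacts} contacts still relevant")

# Prints roughly 3,000 after one year, 900 after two, and 270 after three.
```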

Read the rest of this article on InformationWeek.




The Importance of Having a Good Data Destruction Policy


Our world is becoming increasingly data-driven, and as a result, vast amounts of data are being gathered every day. This data can often include personal and/or sensitive information regarding consumers and clients. In an effort to protect the privacy of internet users, governments around the world are introducing regulations and laws intended to protect privacy.

These laws provide for crippling fines for companies that do not abide by them. It is all too often assumed that responsibility for data safety and security relates only to data that is being actively used; however, inactive data sets are also protected under the law. This raises an important question: how can companies responsibly destroy data when they no longer need it?

Why is data destruction important?

Data destruction refers to the complete destruction of existing data. It is important to differentiate between simply deleting data and destroying it. It is vital that companies implement a reliable and proven data destruction policy so that data that is no longer required is destroyed fully and thus rendered useless.

Data and privacy protection legislation like the European GDPR holds data users (companies and/or individuals) responsible for the safe use and storage of the data they hold. This extends to both active and dormant data sets. The legislation also requires that data be disposed of in such a way that it is irretrievable, and companies must be able to prove that they have done everything reasonably possible to destroy data fully in order to discharge their responsibilities under the law. Noncompliance can result in fines severe enough to force a company out of business, and companies also face significant reputational risk should a data breach occur.

Is deleting data enough?

No, simply deleting data from hard drives and other storage mediums does not constitute data destruction. Although it may seem like deleted data has been destroyed, it is, in fact, still possible to retrieve deleted data from hard drives. Should a hacker gain access to old hard drives and manage to restore deleted data, the company could still be held liable for not destroying the data properly.

How can data be deleted securely?

Since storage media like hard drives are expensive, companies may wish to recycle drives and use them to store new data. In this case, it is important to make sure the existing data is removed by using specialized software that completely destroys it, overwriting it with meaningless ones and zeros. By following this procedure, a company would be in the clear when it comes to laws like the GDPR, and as added assurance, professional data destruction companies can issue a certificate proving that the data destruction procedure was carried out successfully.
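
For illustration only, the sketch below performs a single overwrite pass on an ordinary file, replacing its contents with zero bytes before removing it. Real drive sanitization is done with certified tools that handle wear-leveled media such as SSDs, hidden areas, multiple passes, and verification; the file name here is hypothetical.

```python
import os


def overwrite_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place with zero bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite down to the storage device
    os.remove(path)


# Usage (hypothetical file name):
# overwrite_file("old_customer_export.csv", passes=3)
```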

What are the key elements of a good data destruction policy?

While it is clear that a good data destruction policy is important, what exactly constitutes a good data destruction policy might not be so clear. Some of the key parts of a good data destruction policy are:

Tracing: Tracing allows data managers to keep track of exactly where storage media and the data they contain are. This is essential because it makes it possible to verify that all hard drives and/or other storage media are accounted for. Tracing is also useful in situations where storage media leave the direct control of a company, for example, when they are sent for destruction or data erasure. By keeping a log of serial numbers, data controllers can verify that all storage media have been returned, as sketched after this list.

Access control: Access control matters in every aspect of data management, but it is especially important when it comes to data destruction. It is not unheard of for physical drives to be swapped out or stolen during transport, so drives must always be stored safely and securely.
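
A minimal sketch of the tracing check mentioned above: log the serial number of each drive that leaves the company’s control and reconcile that log against what comes back. The serial numbers are invented for illustration.

```python
# Drives logged out for destruction versus drives confirmed returned/certified.
sent_for_destruction = {"WX41A7019283", "WX41A7019311", "WX41A7020054"}
returned_certified = {"WX41A7019283", "WX41A7020054"}

missing = sent_for_destruction - returned_certified
if missing:
    print(f"Unaccounted-for drives: {sorted(missing)}")
else:
    print("All storage media accounted for.")
```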

Conclusion

Data destruction is an often-overlooked part of a company’s operations, but it is vital for those who wish to minimize the risk of data breaches. Implementing a good data destruction policy today can save a lot of trouble further down the line and help avoid serious financial consequences.

Milica Vojnic is a Senior Digital Marketing Executive at Wisetek.




What Is OpenIDL, the Open Insurance Data Link platform?


OpenIDL is an open-source project created by the American Association of Insurance Services (AAIS) to reduce the cost of regulatory reporting for insurance carriers, provide a standardized data repository for analytics, and serve as a connection point for third parties delivering new applications to members. To learn more about the project, we sat down with Brian Behlendorf, General Manager for Blockchain, Healthcare and Identity at the Linux Foundation; Joan Zerkovich, Senior Vice President, Operations at AAIS; and Truman Esmond, Vice President, Membership & Solutions at AAIS.