Tag Archives: Approach

A Smarter Approach for Enterprise Cloud Migration | IT Infrastructure Advice, Discussion, Community


Modern enterprises need cloud-first strategies to stay competitive in today’s business environment. It’s no surprise that, according to Gartner, cloud shift across key enterprise IT markets will increase from 19 to 28 percent by 2022. Furthermore, IDG found that organizations have invested an average of $3.5 million in cloud applications, platforms, and services.

There are many compelling reasons for this, including a fundamental need for organizations to be more agile or to minimize costs. With cloud solutions, companies can also take advantage of quality, scalable, elastic storage. By contrast, an on-premises solution requires sufficient investment in hardware, software, and IT staff to cover peak storage requirements. The cloud also offers more flexibility for redundancy, elastic scalability for compute, rapid roll-out of new applications, and ready access to advanced cloud-based services like Machine Learning (ML) and Artificial Intelligence (AI).

Even when considering these benefits and available budget, there are still concerns around cloud migration. The main issue: cloud migration has traditionally been a complex and risky process. Depending on the size of the migration project, it can become a labor-intensive process and a drain on internal IT resources. So, how do organizations minimize the complexity and speed the time to return on investment (ROI)? By having a holistic view of the current legacy systems and a carefully mapped plan for implementation.

Reviewing internal infrastructure

Migrating to the cloud provides both application and cost advantages. However, the process must be properly planned. As part of the migration process, it is important to understand what makes up an organization’s existing content, including unstructured data, metadata, and custom components. 

When it comes to migrating to the cloud, a basic ‘lift and shift’ strategy of moving an existing Enterprise Content Management (ECM) platform can be ineffective and risky. As a result, organizations will need to adopt an approach that provides an uninterrupted service while also de-risking the overall migration. Although there is no such thing as ‘a push-button migration,’ a programmed approach that kick-starts the migration process can enable businesses to continue the migration at their own pace.

Outlining a clear strategy for migration

To help enterprises successfully move off outdated, legacy platforms, while mitigating the risk of migrating content to the cloud, organizations should follow a step-by-step approach that comprises three key components:

1) Plan for needed tools: To make the most of data from multiple repositories, organizations should have access to certain tools that allow them to plan and execute moving ECM systems to the cloud. Key instruments include dashboards, analytics, content services connectors, and migration servers.

2) Enlist experts: Utilize experienced consultants, who have planned and delivered many large-scale, legacy system migrations, and integrate best practices and transformation initiatives during the migration journey. They can help lift the weight off an organization’s IT team and provide needed direction.

3) Establish a robust process: Adopt a robust migration process that’s focused on efficiency and risk-mitigation. Audit the existing system, set up the migration tools and processes, and educate users on how to manage the migration.

Setting a firm timeline

As an enterprise builds a migration plan, it is also important to understand the migration timeline. For example, are there specific applications that must be migrated by a set date, or is a phased migration an option? To adopt a phased, step-by-step approach, organizations should prioritize migrating recently accessed content first and leave content that has not been accessed for some time to a later phase. Grouping data by last access time or by application/department and then phasing the program is also a good way to support user adoption of the new system while de-risking the migration. One definite recommendation is not to let the timeline waver too much, as you risk the process extending far longer than originally intended and delaying the ROI.
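As a concrete illustration of grouping content by last access, the short Python sketch below buckets files under a directory tree into migration phases by the number of days since they were last read. The thresholds, directory path, and phase names are assumptions for illustration only, and the approach presumes access times are being recorded (a noatime mount would need a different signal, such as modification time).

```python
import os
import time
from collections import defaultdict

# Illustrative thresholds (assumptions): content touched within 90 days goes
# first, content idle for up to a year goes next, everything else goes last.
PHASES = [(90, "phase-1"), (365, "phase-2"), (float("inf"), "phase-3")]

def plan_phases(root):
    """Group files under `root` into migration phases by last access time."""
    now = time.time()
    plan = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            days_idle = (now - os.stat(path).st_atime) / 86400
            for limit, phase in PHASES:
                if days_idle <= limit:
                    plan[phase].append(path)
                    break
    return plan

if __name__ == "__main__":
    # "/data/ecm-export" is a placeholder for an exported content repository.
    for phase, files in sorted(plan_phases("/data/ecm-export").items()):
        print(f"{phase}: {len(files)} files")
```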

Ensuring a complete migration  

Typical migrations can incorporate billions of documents stored across a multitude of repositories, databases, and file stores. As organizations start to analyze their content, methods for moving content from on-premises to the cloud should also be reviewed. For small content migrations, streaming the data over a high-speed Internet connection may be sufficient; it will not be for large-scale migrations, where physical transfer appliances such as Amazon Snowball are often preferred.
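For the streaming case, a minimal Python sketch using boto3 is shown below; the bucket name, prefix, and local path are placeholders, and AWS credentials are assumed to already be configured in the environment. Anything beyond a modest data set would favor a physical appliance instead.

```python
import os
import boto3  # assumes AWS credentials are already configured

s3 = boto3.client("s3")

def stream_to_s3(local_root, bucket, prefix=""):
    """Upload a local content tree to S3 over the network; only practical for
    small migrations where the data set fits the available bandwidth window."""
    for dirpath, _, filenames in os.walk(local_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = prefix + os.path.relpath(path, local_root).replace(os.sep, "/")
            s3.upload_file(path, bucket, key)  # boto3 handles multipart uploads

# Placeholder names for illustration only.
stream_to_s3("/data/ecm-export/phase-1", "example-migration-bucket", "phase-1/")
```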

Both techniques come into play as organizations consider the initial migration and any content deltas after the initial migration is complete. While many enterprises still see migration to the cloud as a daunting process, careful planning and a strategic approach help ensure success. Making the move to the cloud will not only simplify employees’ daily workloads, it will also further an organization’s digital goals while providing a better experience for customers.




A Pragmatic Approach to Network Automation | IT Infrastructure Advice, Discussion, Community


The evolution of network automation has been fraught with early challenges and setbacks. The promises of software-defined networking (SDN) and network functions virtualization (NFV) went largely unfulfilled: inconsistent vendor implementations, limited equipment resources, unexpected complexity, and a lack of expertise stalled the efforts of early adopters of these concepts. As a result of those early hiccups, organizations have course-corrected and are pursuing more manageable and concrete network automation initiatives, focusing on simpler goals, a more targeted scope, faster time to value, and the creation of an abstraction layer over underlying equipment capabilities.

However, many networking teams already have collections of standard operating procedures, template config files, and scripts in varying languages, and are unsure how to rationalize those existing investments and embark on a path toward true network automation. What is the key to success? The six steps below outline a pragmatic approach to developing a network automation strategy that incorporates existing initiatives while still planning for future innovation.

Start with the end in mind

Before prescribing practical steps and building blocks for a network automation strategy, it is important to note that a pragmatic approach does not limit or impact long-term strategic initiatives like Machine Learning and Artificial Intelligence. Rather, it builds a foundation on which an organization can continue to innovate.

However, to lay the proper foundation for your network automation strategy, it’s important to start with the end in mind. Today’s complex networks require automation capabilities that span multiple networking domains, from traditional physical networks and next-generation programmable networks to SD-WANs, cloud networks, and more. A successful network automation strategy must therefore allow for flexibility across those domains without having to re-train, re-develop, or rip and replace existing technologies.

Define use cases

It’s all about the use case. Typically, network-related activities fall into a few main buckets, such as network operations and maintenance, configuration management, service orchestration, and policy management. As use cases stack up, something simple like operations and maintenance can lead to automation of device-specific configuration. As use cases around device configuration and lifecycle become manageable, users can pursue service orchestration and, ultimately, policy management. Taking a stepping-stone approach to network automation helps NetOps teams gain confidence while seeing results.

Address network domains

Next, an organization must decide how to apply those use cases to specific domains. For example, some teams may be looking to automate activities within the physical infrastructure, like a branch or a data center switch, while others may have a pressing need to address network automation in virtual environments residing in cloud infrastructure. Determining the right combination of domain and use case provides a strong starting point for planning automation projects that pay immediate dividends.

Determine sources of truth

Historically, organizations have tried to centralize all network-related data in either a CMDB or an inventory platform. However, today’s networks are more distributed than ever, especially when multiple domains make up the network ecosystem, so the source of truth for network and inventory data will be highly fragmented across systems based on specific data sets. Using the source of truth that is most accurate for each use case and domain is an essential step when planning network automation. In a world where the velocity of network services is growing rapidly, assuming that networks are static will result in a setback, especially when controllers or orchestrators are in place that help NetOps teams make dynamic changes within the network. Ultimately, successful automation is driven by good data; organizations that want to succeed at network automation must focus on choosing the right sources of truth for that data as part of their overall approach.

Identify integrations

Organizations beginning a network automation journey will soon realize that without robust integrations, automating any activity is difficult. Once NetOps teams determine the sources of truth for their use cases, the next step is building integrations to each of those systems. The good news is that several vendors are proactively building robust integrations for their systems, reducing the need for customers to burn cycles doing so as part of their automation plan. Being able to integrate, either by leveraging DevOps platforms or directly through REST APIs, allows organizations to accelerate their efforts toward delivering successful automation for networking activities.
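As a sketch of the REST API route, the Python snippet below pulls a device list for one site from a hypothetical inventory (source-of-truth) system; the URL, token, query parameters, and response fields are all assumptions, since every IPAM, CMDB, or controller exposes its own interface.

```python
import requests

# Hypothetical source-of-truth endpoint and token, for illustration only.
INVENTORY_URL = "https://inventory.example.com/api/v1/devices"
API_TOKEN = "replace-with-a-real-token"

def fetch_devices(site):
    """Return the device records for one site from the inventory system."""
    resp = requests.get(
        INVENTORY_URL,
        params={"site": site},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# A follow-on automation step could render configs or open change tickets
# from this data; here we just print the assumed fields.
for device in fetch_devices("branch-042"):
    print(device["hostname"], device["mgmt_ip"])
```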

Understand personnel roles and skillsets

Finally, it’s important to understand who will perform which activities within these network automation systems, and what skillsets are needed to deliver successful automation initiatives and relevant ROI. NetOps skillsets including familiarity with network scripting and data formats (examples include Ruby, Python, or YAML), DevOps and orchestration tools (Ansible, Puppet, NSO), network device communication protocols and modeling languages (like NETCONF and YANG), public cloud networking (AWS, Azure, and associated APIs), and software development principles (such as agile and CI/CD) are all important for a successful network automation deployment. A correct assessment and selection of team members with the right skillsets is a seemingly trivial yet crucial step of an automation approach.
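To make the NETCONF/Python pairing concrete, here is a minimal sketch using the ncclient library to pull a device’s running configuration; the host, credentials, and relaxed host-key checking are placeholders suited to a lab device, not production settings.

```python
from ncclient import manager  # pip install ncclient

# Placeholder lab device; NETCONF conventionally listens on TCP port 830.
with manager.connect(
    host="192.0.2.10",
    port=830,
    username="netops",
    password="example-password",
    hostkey_verify=False,  # acceptable in a lab, not in production
) as session:
    running = session.get_config(source="running")
    print(running)  # NETCONF <rpc-reply> containing the running configuration
```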

When planning a network automation strategy and roadmap, understand that it is acceptable to start slowly and make incremental progress toward comprehensive automation. Don’t feel pressure to boil the ocean with your initiatives; rather, pursue simpler goals and a narrower scope. Automation efforts that result in faster time to value will build the confidence and competency needed to tackle more sophisticated and complex networks. Start with the pragmatic approach outlined in this article and reduce the chances of the project being derailed or stalled.

 




An (Ir)rational Approach to Interoperability – Interop 2019 Keynote | IT Infrastructure Advice, Discussion, Community


In his Interop19 keynote address, Brian McCarson, Vice President & Senior Principal Engineer, Industrial Solutions Division, Internet of Things Group, Intel Corporation, discusses the many challenges of trying to drive openness in a world of walled gardens. Those challenges are well worth taking on: proprietary and open systems are not mutually exclusive, and a rational approach to interoperability can strike a balance that enables both consumer choice and proprietary value. Designing with openness and partnerships in mind will deliver the greatest value for your enterprise and the ecosystem.




OpenSUSE’s Spectre Mitigation Approach Is One Of The Reasons For Its Slower Performance



OpenSUSE defaults to IBRS for its Spectre Variant Two mitigation rather than the Retpolines approach, and that is one of the reasons for the distribution’s slower out-of-the-box performance compared to other Linux distributions.

A Phoronix reader pointed out this opensuse-factory mailing list thread citing a “huge single-core performance loss” on a Lenovo laptop when using openSUSE. There’s a roughly 21% loss in single-threaded performance attributable to the Spectre Variant Two mitigation, which itself isn’t surprising, as we’ve shown time and time again the performance costs of the Spectre/Meltdown mitigations.

OpenSUSE’s kernel is using IBRS (Indirect Branch Restricted Speculation) with the latest Intel CPU microcode images, while most Linux distributions rely upon Retpolines as return trampolines. IBRS has the potential to incur more of a performance loss than Retpolines; it has been known to take a greater performance hit due to its more restricted speculation behavior when paired with the updated Intel CPU microcode.

Switching over to Retpolines for the workload in question restored the performance, per the mailing list discussion.

OpenSUSE users wanting to use that non-default approach can opt for it using the spectre_v2=retpoline,generic kernel command line parameter, which matches the behavior of most other Linux distributions’ kernels.
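For readers who want to verify what their own kernel selected, a small Python sketch is below, assuming a Linux system that exposes the standard sysfs vulnerability files. On openSUSE, switching defaults would also involve appending spectre_v2=retpoline,generic to the kernel command line in the bootloader configuration (for example, GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub) and regenerating the GRUB configuration before rebooting.

```python
# Minimal check of which Spectre Variant Two mitigation the running kernel
# selected, read from the standard sysfs vulnerabilities interface.
SYSFS_PATH = "/sys/devices/system/cpu/vulnerabilities/spectre_v2"

with open(SYSFS_PATH) as f:
    status = f.read().strip()

# A Retpolines kernel typically reports something like
# "Mitigation: Full generic retpoline, ...", while an IBRS-based kernel
# names IBRS in the mitigation string instead.
print(status)
```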

As for openSUSE changing its defaults, at least from the aforelinked mailing list discussion it doesn’t appear that their kernel engineers have any interest in changing the Spectre mitigation default; instead, they simply blame the poor performance on Intel.

Some have also suggested that the openSUSE installer pick up a toggle informing users of security vs. performance preferences to better provide sane, informed defaults, but so far we haven’t seen any action taken to make that happen. It would make sense, though, considering some of openSUSE’s conservative defaults do have performance ramifications compared to most other Linux distributions, as we’ve shown in past benchmarks, albeit written off by openSUSE as “mostly crap.”

Previously, a barrier to Retpolines usage was the need for Retpolines compiler support, but that support has now been available for quite some time. There were also reported Retpolines issues with Skylake in the past, but those appear to have been resolved as well.


Fedora’s FESCo Approves Of A “Sane” Approach For Counting Fedora Users Via DNF



Monday’s weekly Fedora Engineering and Steering Committee (FESCo) meeting approved a means for the DNF package manager to integrate some user-counting capabilities, as long as it’s a “sane” approach and not the UUID-driven proposal originally laid out.

Originally the plan was to come up with a new UUID identifier system just for counting Fedora users, so those in the Fedora project and at Red Hat could have a better idea of the number of Fedora users and other insights. But the concept of a unique identifier for Fedora users wasn’t well received, even though it was designed not to track users or reveal other personal information.

Worked out over the past month was a new privacy-minded plan for counting users via DNF that relies upon a “countme” bit that will be updated roughly weekly and will not involve any UUID as originally envisioned. See that earlier article for more details on the current plan.
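To illustrate the privacy trade-off, here is a purely conceptual Python sketch of a “countme”-style counter, not the actual DNF implementation (whose details were still being finalized at the time): the client reveals only a coarse weekly signal rather than a per-installation UUID.

```python
from datetime import date

# Conceptual illustration only: a weekly "countme" signal with no unique ID.
COUNTME_EPOCH = date(2019, 1, 1)  # hypothetical reference date

def week_number(today=None):
    """Coarse week index since the reference date."""
    return ((today or date.today()) - COUNTME_EPOCH).days // 7

def should_send_countme(last_sent_week, today=None):
    """Send the flag at most once per week; nothing identifies the machine."""
    return week_number(today) > last_sent_week

# Example: the repo metadata request would carry only this coarse flag.
if should_send_countme(last_sent_week=10):
    print("append countme flag to this week's metadata request")
```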

During Monday’s FESCo meeting, the members voted in favor of the plan as long as “the actual implementation is sane.” That was laid out in the meeting minutes.

We’ll see if this new DNF “countme” user counter gets wrapped up in time for this spring’s Fedora 30 release or will be delayed until Fedora 31 in the autumn. At the FESCo meeting, they also officially approved having GCC 9 be the default system compiler, which was widely expected anyhow given their preference for always shipping the latest GNU compiler; in fact, the developers had already landed the new compiler in Rawhide in its near-final state.