Category Archives: Stiri IT Externe

Red Hat Enterprise Linux 7.5 Released » Linux Magazine


Red Hat has released Red Hat Enterprise Linux (RHEL) 7.5, which has a strong focus on hybrid cloud. As the market evolves, so does Red Hat. In 2014, Red Hat signaled a shift in focus from the datacenter to mobile and cloud, and it has since acquired companies like FeedHenry and CoreOS to strengthen its mobile and cloud portfolio.

Now RHEL, Red Hat's cash cow, reflects this changing focus. RHEL 7.5 offers enhanced security and compliance controls, in addition to better integration with Microsoft Windows infrastructure both on-premises and in Microsoft Azure.

Companies are mixing environments that span on-premises infrastructure, public cloud, and private cloud. RHEL 7.5 tries to reduce the complexity, especially in terms of security, that comes with such a hybrid environment. Red Hat Enterprise Linux 7.5 has enhanced software security controls to mitigate risk while also complementing, rather than hindering, IT operations.

Red Hat said that a major component of these controls is security automation through the integration of OpenSCAP with Red Hat Ansible Automation. This is designed to enable the creation of Ansible Playbooks directly from OpenSCAP scans, which can then be used to implement remediations more rapidly and consistently across a hybrid IT environment. Sensitive data can also now be better secured across varied environments with enhancements to Network-Bound Disk Encryption that support automatic decryption of data volumes.
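
To make this concrete, here is a minimal, hypothetical sketch of that workflow in Python: it shells out to the oscap command (from the openscap-scanner package) and asks it to emit an Ansible remediation playbook for a chosen profile. The data-stream path and profile ID are examples and will vary with your SCAP content.

    # Hypothetical sketch: generate an Ansible remediation playbook with OpenSCAP.
    # The data-stream path and profile ID are assumptions; adjust them for your content.
    import subprocess

    datastream = "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"  # from scap-security-guide
    profile = "xccdf_org.ssgproject.content_profile_pci-dss"         # example hardening profile

    # Ask oscap to emit an Ansible playbook containing remediations for the profile.
    subprocess.run(
        ["oscap", "xccdf", "generate", "fix",
         "--fix-type", "ansible",
         "--profile", profile,
         "--output", "remediation-playbook.yml",
         datastream],
        check=True,
    )

The resulting playbook can then be applied to many hosts with ansible-playbook, which is how scan results translate into consistent remediation across a hybrid environment.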

RHEL 7.5 also brings production-ready container tooling, including full support for Buildah, an open source utility designed to help developers create and modify Linux container images without a full container runtime or daemon running in the background.
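
As a rough illustration of that daemonless workflow, the hypothetical sketch below drives Buildah from Python via subprocess; the base image name and the package being installed are placeholders.

    # Hypothetical sketch: build a container image with Buildah, no daemon required.
    # The base image and installed package are placeholders.
    import subprocess

    def buildah(*args):
        """Run a buildah subcommand and return its stdout."""
        result = subprocess.run(["buildah", *args], check=True,
                                capture_output=True, text=True)
        return result.stdout.strip()

    # Create a working container from a base image.
    container = buildah("from", "registry.access.redhat.com/rhel7")

    # Modify it: install a package and set the default command.
    buildah("run", container, "--", "yum", "install", "-y", "httpd")
    buildah("config", "--cmd", "/usr/sbin/httpd -DFOREGROUND", container)

    # Commit the result as a new image and remove the working container.
    buildah("commit", container, "my-httpd-image")
    buildah("rm", container)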

RHEL 7.5 is available for multiple architectures, including x86, IBM Power, IBM z Systems, and 64-bit Arm. While RHEL is available by subscription, a 30-day evaluation version can be downloaded and used for free.

Sources: https://www.redhat.com/en/about/press-releases/red-hat-strengthens-hybrid-clouds-backbone-latest-version-red-hat-enterprise-linux

https://access.redhat.com/products/red-hat-enterprise-linux/evaluation




Disaster Recovery in the Public Cloud


Find out about the options for building highly available environments using public cloud providers, along with the benefits and tradeoffs.

I’ve had the opportunity to speak with many users about their plans for public cloud adoption; these discussions frequently revolve around how to avoid being impacted by potential cloud outages. Questions come up because public cloud outages do occur, even though they happen less frequently now than they may have in the past, and customers are concerned about mitigating the risk of disruption.

Thankfully, every major public cloud vendor offers options for building highly available environments that can survive some type of outage. AWS, for example, suggests four options that leverage multiple geographic regions. These options, which are also available with the other public cloud vendors, come with different price points and deliver different recovery point objectives (RPO) and different recovery time objectives (RTO).

 

Companies can choose the option that best meets their RPO/RTO requirements and budget. The key takeaway is that public cloud providers enable customers to build highly available solutions on their global infrastructure.

Let’s take a brief look at these options and review some basic principles for building highly available environments using the public cloud. I’ll use AWS for my examples, but the principles apply across all public cloud providers.

First, understand the recovery point objective (RPO) and recovery time objective (RTO) for each of your applications so you can design the right solution for each use case. Second, there's no one-size-fits-all solution for leveraging multiple geographic regions. There are different approaches you can take depending on RPO, RTO, the cost you are willing and able to incur, and the tradeoffs you are willing to make. Some of these approaches, using AWS as the example, include:

  • Recovering to another region from backups – Back up your environment to S3, including EBS snapshots, RDS snapshots, AMIs, and regular file backups. Since S3 replicates data only to availability zones within a single region by default, you'll need to enable cross-region replication to your DR region. You'll incur the cost of transferring and storing data in a second region, but won't incur compute, EBS, or database costs until you need to go live in your DR region. The trade-off is the time required to launch your applications; a brief sketch of copying backups to the DR region follows this list.
  • Warm standby in another region – Replicate data to a second region where you'll run a scaled-down version of your production environment. The scaled-down environment is always live and sized to run the minimal capacity needed to resume business. Use Route 53 to switch traffic over to your DR region when needed, then scale the environment up to full capacity. With this option, you get faster recovery, but incur higher costs.
  • Hot standby in another region – Replicate data to a second region where you run a full version of your production environment. The environment is always live, and invoking full DR involves switching traffic over using Route 53. You get even faster recovery, but also incur even higher costs.
  • Multi-region active/active solution – Data is synchronized between both regions and both regions are used to service requests. This is the most complex to set up and the most expensive. However, little or no downtime is suffered even when an entire region fails. While the approaches above are really DR solutions, this one is about building a true highly available solution.
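
As a concrete illustration of the first option, the hypothetical sketch below uses boto3 to copy an EBS snapshot and an RDS snapshot from the primary region into the DR region; the snapshot IDs, the RDS snapshot ARN, and the region names are placeholders.

    # Hypothetical sketch: copy backups from the primary region to the DR region.
    # Snapshot IDs, the RDS snapshot ARN, and region names are placeholders.
    import boto3

    PRIMARY_REGION = "us-east-1"
    DR_REGION = "us-west-2"

    # Copy an EBS snapshot into the DR region (the call is made against the DR region).
    ec2_dr = boto3.client("ec2", region_name=DR_REGION)
    ec2_dr.copy_snapshot(
        SourceRegion=PRIMARY_REGION,
        SourceSnapshotId="snap-0123456789abcdef0",
        Description="Nightly DR copy",
    )

    # Copy an RDS snapshot into the DR region; the source is referenced by its ARN.
    rds_dr = boto3.client("rds", region_name=DR_REGION)
    rds_dr.copy_db_snapshot(
        SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:mydb-nightly",
        TargetDBSnapshotIdentifier="mydb-nightly-dr",
        SourceRegion=PRIMARY_REGION,
    )

S3 cross-region replication itself is configured once on the bucket (versioning plus a replication rule) rather than per backup, so it is not shown here.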

One of the keys to a successful multi-region setup and DR process is to automate as much as possible. This includes backups, replication, and launching your applications. Leverage automation tools such as Ansible and Terraform to capture the state of your environment and to automate the launching of resources. Also, test repeatedly to ensure that you're able to successfully recover from an availability zone or region failure. Test not only your tools, but your processes.
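
The switchover step itself is a good candidate for automation. As a hypothetical sketch, the snippet below re-points a DNS record at the DR region's load balancer with boto3; the hosted zone ID, record name, and endpoint are placeholders (a production setup would more likely use Route 53 failover routing with health checks).

    # Hypothetical sketch: fail over by re-pointing DNS at the DR region's endpoint.
    # The hosted zone ID, record name, and DR endpoint are placeholders.
    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={
            "Comment": "Fail over to DR region",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com.",
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "dr-lb.us-west-2.elb.amazonaws.com"}],
                },
            }],
        },
    )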

Obviously, much more can be said on this topic. If you are interested in learning more about disaster recovery in the cloud, you can see me in person at the upcoming Interop ITX 2018 in Las Vegas, where I will present, “Saving Your Bacon with the Cloud When Your Data Center Is on Fire.” 

Get live advice on networking, storage, and data center technologies to build the foundation to support software-driven IT and the cloud. Attend the Infrastructure Track at Interop ITX, April 30-May 4, 2018. Register now!

 



Red Hat Celebrates 25th Anniversary with a New… » Linux Magazine


Red Hat was founded in 1993, two years after Linus Torvalds announced the Linux kernel. The company just celebrated its 25th anniversary in March 2018.

Red Hat was co-founded by Bob Young as ACC Corporation to sell Linux and Unix accessories. One year later, Marc Ewing created a Linux distribution called Red Hat Linux. Later, Young acquired Ewing’s business and created what we know as Red Hat today.

Red Hat pioneered a business model around Linux and open source as it moved away from selling coffee mugs and merchandise and toward a subscription-based business. The company went public in 1999 and, fast-forwarding to 2018, reported revenue of roughly $3 billion for 2017.

Celebrating its anniversary in true open source fashion, Red Hat announced a brand-new GitHub page that catalogs its open source projects. “The page will try to list every known free and open source project hosted on GitHub in which Red Hat staffers directly participate as part of their work,” Red Hat community analyst Brian Proffitt writes in a blog post.




Red Hat Celebrates 25th Anniversary » Linux Magazine


Red Hat was founded in 1993, two years after Linus Torvalds announced the Linux kernel. The company just celebrated its 25th anniversary in March 2018.

Red Hat was co-founded by Bob Young as ACC Corporation to sell Linux and Unix accessories. One year later, Marc Ewing created a Linux distribution called Red Hat Linux. Later, Young acquired Ewing’s business and created what we know as Red Hat today.

Red Hat pioneered a business model around Linux and open source as it moved away from selling coffee mugs and merchandise and toward a subscription-based business. The company went public in 1999 and, fast-forwarding to 2018, reported revenue of roughly $3 billion for 2017.

While Linux subscriptions remain Red Hat's core business, the company is fast evolving into a cloud player, with revenue from emerging technologies growing around 50%.

Celebrating its anniversary in a pure open source manner, Red Hat announced a brand new GitHub page to host the source code of all of its projects. “The page will try to list every known free and open source project hosted on GitHub in which Red Hat staffers directly participate as part of their work. As you can see, it’s gotten off to a good start,” wrote the company in a blog post.


