Tag Archives: VMware

VMware vSphere Storage Types


VMware vSphere supports different types of storage architectures, both internal (in this case the local controller is crucial and must be on the HCL) and external, with shared SAS DAS, SAN FC, SAN iSCSI, SAN FCoE, or NFS NAS (in those cases the HCL is fundamental for the external storage, the fabric elements, and the host adapters).

For local storage, with vSphere 6.x it is possible to use USB disks, not only as boot disks but also to run VMs. Note, however, that USB datastores are not supported by VMware.

Storage types at the VM logical level

There are different types of virtual disks depending on the provisioning method, pre-allocated or dynamic. The types of virtual disks have remained largely the same since vSphere 4.0:

  • An eager zeroed thick disk has all space allocated and wiped clean of any previous content on the physical media at creation time. Such disks may take a long time during creation compared to other disk formats. The entire disk space is reserved and unavailable for use by other VMs.
  • Thick or lazy zeroed thick VMDK: A thick disk has all space allocated at creation time. This space may contain stale data on the physical media. Before a new block is written for the first time, a zero has to be written, increasing the input/output operations per second (IOPS) consumed on new blocks compared to eager zeroed disks. The entire disk space is reserved and unavailable for use by other VMs.
  • Thin VMDK: Space required for the thin-provisioned virtual disk is allocated and zeroed on demand as space is used. Unused space is available for use by other VMs.

You choose the disk provisioning type during virtual disk creation, but you can change it later using a cold VM migration between two datastores, or using Storage vMotion (if you have at least the ESXi Standard edition). Note that you can also change the type of each individual disk by choosing Configure per disk in the new HTML5 client.
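To make the mechanism concrete, here is a minimal pyVmomi sketch (not taken from the book) that converts a VM's first disk to eager zeroed thick while relocating it to another datastore; the vCenter address, credentials, VM name, and datastore name are placeholders, and the `diskBackingInfo` field of the disk locator is what carries the new provisioning type.

```python
# Minimal pyVmomi sketch (not from the book): convert a VM's first virtual
# disk to eager zeroed thick while relocating it to another datastore.
# vCenter address, credentials, VM name, and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='***', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, 'demo-vm')
target_ds = find_by_name(vim.Datastore, 'datastore2')

# Pick the first virtual disk attached to the VM.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# The disk locator carries the new backing (provisioning) type.
locator = vim.vm.RelocateSpec.DiskLocator()
locator.diskId = disk.key
locator.datastore = target_ds
locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    thinProvisioned=False, eagerlyScrub=True, diskMode='persistent')

task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(disk=[locator]))
# A real script would wait for the task to finish before disconnecting.
Disconnect(si)
```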


There are also Raw Device Mapping (RDM) disks, where a disk at the ESXi level is mapped 1:1 to a VM (similar to a passthrough mode), with two different types of compatibility (virtual or physical mode). Except for building guest clusters (clusters across VMs on different hosts), there is normally no need to use this type of disk.

There is no significant difference in sequential I/O performance between the different types of virtual disks. For random I/O, thin VMDKs have the worst performance and higher latency (for lazy zeroed thick disks, it depends on whether you are writing to a new block).

Storage types at the VM physical level

To access block devices, such as virtual disks (VMDK), virtual CD/DVD-ROM drives, or other SCSI devices, each VM uses storage controllers; at least one is added by default when you create a VM.

There are different types of controllers available for a VM running on ESXi, which are described as follows:

  • BusLogic: This is one of the first emulated SCSI virtual controllers available in VMware ESX. It is now a legacy controller used mainly for legacy operating systems, and it does not support VMDKs larger than 2 TB.
  • LSI Logic Parallel: Formerly known simply as LSI Logic, this was the other SCSI virtual controller originally available in VMware ESX, used for operating systems such as Windows Server 2003.
  • LSI Logic SAS: Introduced in vSphere 4.0, this is the evolution of the parallel driver, working as a SAS virtual controller and used in Windows Server 2008 or newer.
  • VMware Paravirtual (or PVSCSI): Introduced in vSphere 4.0, this is a SCSI virtual controller designed to support very high throughput with minimal processing cost, working not in emulation mode but in paravirtual mode (it requires VMware Tools in order to be recognized).

Other virtual controllers are also possible in a VM, such as AHCI SATA (introduced in vSphere 5.5), IDE, and USB controllers, but usually only for specific cases (for example, SATA or IDE controllers are usually used for virtual DVD drives).

Note: When you create a VM, the default controller is optimized for good performance and compatibility. The controller type depends on the guest operating system (its driver is usually included in the operating system), the device type, and sometimes the VM's compatibility level. But sometimes you can choose a different controller to improve performance, like PVSCSI (useful for VMDKs with high load) or a new type available in vSphere 6.5.
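As an illustration of switching to PVSCSI by script, the following is a minimal pyVmomi sketch; it assumes `vm` is a `vim.VirtualMachine` object obtained from an existing connection (as in the earlier sketch) and that the guest has the PVSCSI driver available.

```python
# Sketch: add a VMware Paravirtual (PVSCSI) controller to an existing VM.
# `vm` is assumed to be a pyVmomi vim.VirtualMachine object obtained from an
# already-established connection (see the earlier relocation sketch).
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    controller = vim.vm.device.ParaVirtualSCSIController()
    controller.busNumber = bus_number
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
    controller.key = -101  # temporary negative key; vCenter assigns the real one

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = controller

    # Returns a vim.Task that should be waited on before attaching disks to it.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```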

With ESXi 6.5 and VM virtual hardware version 13, you can now also use a virtual NVMe controller. Virtual NVMe devices have reduced guest I/O processing overhead (more than 50% lower compared to the virtual AHCI SATA controller), which allows more VMs per host or more transactions per minute. Each virtual machine supports up to 4 NVMe controllers and up to 15 devices per controller.

Virtual NVMe controllers are supported on vSphere 6.5 only on the following guest operating systems:

  • Windows 7 and 2008 R2 (hotfix required, refer to https://support.microsoft.com/en-us/kb/2990941)
  • Windows 8.1, 2012 R2, 10, 2016
  • RHEL, CentOS, and NeoKylin 6.5 and later; Oracle Linux 6.5 and later
  • Ubuntu 13.10 and later
  • SLE 11 SP4 and later
  • Solaris 11.3 and later
  • FreeBSD 10.1 and later
  • Mac OS X 10.10.3 and later
  • Debian 8.0 and later

You can add a new NVMe virtual controller using the vSphere Web Client (it is not yet possible from the HTML5 client) as shown in the following steps; a scripted alternative is sketched after the list:

  1. Right-click on the virtual machine in the inventory and select Edit Settings option
  2. Click the Virtual Hardware tab, and select NVMe Controller from the New device drop-down menu
  3. Click on Add
  4. The controller appears in the Virtual Hardware devices list
  5. Click OK
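For a scripted alternative (assuming virtual hardware version 13 or later and an existing pyVmomi connection), a minimal sketch looks like this; `vm` is again assumed to be a `vim.VirtualMachine` object.

```python
# Sketch: add a virtual NVMe controller (requires virtual hardware version 13+).
# `vm` is assumed to be a pyVmomi vim.VirtualMachine object.
from pyVmomi import vim

def add_nvme_controller(vm, bus_number=0):
    controller = vim.vm.device.VirtualNVMEController()
    controller.busNumber = bus_number
    controller.key = -102  # temporary negative key

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = controller

    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))
```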


For more information on NVMe, see also KB 2147714—Using Virtual NVMe with ESXi 6.5 and virtual machine Hardware Version 13 (https://kb.vmware.com/kb/2147714).

For more information on PVSCI, see also KB 1010398—Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters (https://kb.vmware.com/kb/1010398).

Storage types at the ESXi logical level

At a high level, VMware vSphere accesses all storage using datastores—a logical paradigm that abstracts all storage types, much like a common operating system uses drive letters or mount points to access a filesystem.

VMware vSphere 6.x has the following four main types of datastore:

  • VMware FileSystem (VMFS) datastores: All block-based storage must first be formatted with VMFS to transform a block service into a file- and folder-oriented service
  • Network FileSystem (NFS) datastores: These are used for NAS storage
  • VVol: Introduced in vSphere 6.0, this is a new paradigm to access SAN and NAS storage in a common way while better integrating and consuming storage array capabilities
  • vSAN datastore: If you are using the vSAN solution, all your local storage devices can be pooled together into a single shared vSAN datastore

New datastores can be provisioned from the new HTML5 client, starting from a data center, a cluster, or a host; just right-click on the object, choose Storage, and then New Datastore.


For local disks, if you have configured the right RAID level on the controller (remember that ESXi does not provide software RAID features), you can simply format the logical disks with a VMFS datastore.
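As a sketch of that step with pyVmomi (assuming a `host` object of type `vim.HostSystem` and that the host actually reports an eligible local disk), the flow is: query the available disks, ask the host for a pre-filled create specification, set a volume name, and create the datastore.

```python
# Sketch: format an eligible local disk with VMFS and create a datastore.
# `host` is assumed to be a pyVmomi vim.HostSystem object; the disk selection
# (first available disk) is deliberately naive and should be adapted.
from pyVmomi import vim

def create_vmfs_datastore(host, ds_name='local-vmfs-01'):
    ds_system = host.configManager.datastoreSystem
    disks = ds_system.QueryAvailableDisksForVmfs()
    if not disks:
        raise RuntimeError('No disks available for VMFS on this host')
    disk = disks[0]
    # Ask the host for a pre-filled create specification for this device.
    options = ds_system.QueryVmfsDatastoreCreateOptions(devicePath=disk.devicePath)
    spec = options[0].spec
    spec.vmfs.volumeName = ds_name
    return ds_system.CreateVmfsDatastore(spec=spec)  # returns the new vim.Datastore
```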

For external storage, before adding a new datastore you must first configure the ESXi host, the fabric (if present), and the storage itself. This depends on the storage type and vendor and will be discussed later. You cannot directly add a vSAN datastore; the vSAN configuration is quite different, but the final result will be a vSAN datastore with its own format.

Of course, the same host can have multiple datastores, even of different types.


At the datastore level, there isn't any difference between DAS and SAN; both are just block-based storage and become VMFS datastores. The functional difference is that a SAN disk can be shared across multiple hosts, whereas a local DAS disk cannot (although there are also shared SAS storage systems that are formally classified as DAS storage).

Storage types at the ESXi physical level

Excluding vSAN, which has a specific configuration, at the physical level we can have three main types of storage:

  • Block-based storage accessed by a hardware adapter: This includes DAS storage or SAN FC storage.
  • Block-based storage accessed by a software adapter: This is the case for SAN iSCSI storage when the software initiator is used. Here, you first need to configure the network connectivity properly; after that, it becomes very similar to the first case.
  • NFS storage: Here you first have to configure the IP network connectivity to your storage and then connect the NFS datastore.

For the physical storage adapters, VMware ESXi supports several types of protocols and technologies (refer to the hardware compatibility list to check the supported level); a quick way to inventory them is sketched after the list:

  • Fibre Channel Host Bus Adapter (FC HBA): This is the common and historical way to implement an FC-based storage, but using a dedicated full fabric.
  • iSCSI HBA: These are specialized PCIe cards that implement completely in hardware the entire iSCSI stack, reducing the load of the host CPU.
  • CNA adapters for FCoE or iSCSI: These are mostly 10 Gbps (or greater) Ethernet adapters providing hardware (or hardware assisted) FCoE or iSCSI functionality on converged (or also dedicated) networks.
  • RDMA over Converged Ethernet (RoCE): This is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. Starting with vSphere 6.5, RoCE-certified adapters can be used for converged networks.
  • InfiniBand HCA: Mellanox Technologies InfiniBand HCA device drivers are available directly from Mellanox Technologies. Mostly used for the network part rather than the storage part, they can be interesting in converged networks, and also in vSAN implementations.
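As mentioned above, a quick way to inventory the adapters a host actually presents is to read `host.config.storageDevice.hostBusAdapter`; the following minimal pyVmomi sketch (assuming a `host` object from an existing connection) prints each adapter with its device name, model, and driver.

```python
# Sketch: list the storage adapters (HBAs) an ESXi host presents.
# `host` is assumed to be a pyVmomi vim.HostSystem object.
from pyVmomi import vim

def list_storage_adapters(host):
    for hba in host.config.storageDevice.hostBusAdapter:
        # The object's class indicates the adapter family (FC, iSCSI, block, ...).
        kind = type(hba).__name__
        print(f'{hba.device:10} {kind:35} model={hba.model} driver={hba.driver}')
```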

This tutorial is an excerpt from “Mastering VMware vSphere 6.5” by Andrea Mauro, Paolo Valsecchi & Karel Novak and published by Packt. Get the ebook for just $9 until Aug. 31.




Shuttleworth on Ubuntu 18.04: Multicloud Is the New Normal


By Jack M. Germain

Apr 29, 2018 5:00 AM PT

Canonical last week released the Ubuntu 18.04 LTS platform for desktop, server, cloud and Internet of Things use. Its debut followed a two-year development phase that led to innovations in cloud solutions for enterprises, as well as smoother integrations with private and public cloud services, and new tools for container and virtual machine operations.

The latest release drives new efficiencies in computing and focuses on the big surge in artificial intelligence and machine learning, said Canonical CEO Mark Shuttleworth in a global conference call.

Ubuntu has been a platform for innovation over the last decade, he noted. The latest release reflects that innovation and comes on the heels of extraordinary enterprise adoption on the public cloud.

The IT industry has undergone some fundamental shifts since the last Ubuntu upgrade, with digital disruption and containerization changing the way organizations think about next-generation infrastructures. Canonical is at the forefront of this transformation, providing the platform for enabling change across the public and private cloud ecosystem, desktop and containers, Shuttleworth said.

“Multicloud operations are the new normal,” he remarked. “Boot time and performance-optimized images of Ubuntu 18.04 LTS on every major public cloud make it the fastest and most-efficient OS for cloud computing, especially for storage and compute-intensive tasks like machine learning,” he added.

Ubuntu 18.04 comes as a unified computing platform. Having an identical platform from workstation to edge and cloud accelerates global deployments and operations. Ubuntu 18.04 LTS features a default GNOME desktop. Other desktop environments are KDE, MATE and Budgie.

Diversified Features

The latest technologies under the Ubuntu 18.04 hood are focused on real-time optimizations and an expanded Snapcraft ecosystem to replace traditional software delivery via package management tools.

For instance, the biggest innovations in Ubuntu 18.04 are related to enhancements to cloud computing, Kubernetes integration, and Ubuntu as an IoT control platform. Features that make the new Ubuntu a platform for artificial intelligence and machine learning also are prominent.

The Canonical distribution of Kubernetes (CDK) runs on public clouds, VMware, OpenStack and bare metal. It delivers the latest upstream version, currently Kubernetes 1.10. It also supports upgrades to future versions of Kubernetes, expansion of the Kubernetes cluster on demand, and integration with optional components for storage, networking and monitoring.

As a platform for AI and ML, CDK supports GPU acceleration of workloads using the Nvidia DevicePlugin. Further, complex GPGPU workloads like Kubeflow work on CDK. That performance reflects joint efforts with Google to accelerate ML in the enterprise, providing a portable way to develop and deploy ML applications at scale. Applications built and tested with Kubeflow and CDK are perfectly transportable to Google Cloud, according to Shuttleworth.

Developers can use the new Ubuntu to create applications on their workstations, test them on private bare-metal Kubernetes with CDK, and run them across vast data sets on Google’s GKE, said Stephan Fabel, director of product management at Canonical. The resulting models and inference engines can be delivered to Ubuntu devices at the edge of the network, creating an ideal pipeline for machine learning from the workstation to rack, to cloud and device.

Snappy Improvements

The latest Ubuntu release allows desktop users to receive rapid delivery of the latest applications updates. Besides having access to typical desktop applications, software devs and enterprise IT teams can benefit from the acceleration of snaps, deployed across the desktop to the cloud.

Snaps have become a popular way to get apps on Linux. More than 3,000 snaps have been published, and millions have been installed, including official releases from Spotify, Skype, Slack and Firefox.

Snaps are fully integrated into Ubuntu GNOME 18.04 LTS and KDE Neon. Publishers deliver updates directly, and security is maintained with enhanced kernel isolation and system service mediation.

Snaps work on desktops, devices and cloud virtual machines, as well as bare-metal servers, allowing a consistent delivery mechanism for applications and frameworks.

Workstations, Cloud and IoT

Nvidia GPGPU hardware acceleration is integrated in Ubuntu 18.04 LTS cloud images and Canonical’s OpenStack and Kubernetes distributions for on-premises bare metal operations. Ubuntu 18.04 supports Kubeflow and other ML and AI workflows.

Kubeflow, the Google approach to TensorFlow on Kubernetes, is integrated into Canonical Kubernetes along with a range of CI/CD tools, and aligned with Google GKE for on-premises and on-cloud AI development.

“Having an OS that is tuned for advanced workloads such as AI and ML is critical to a high-velocity team,” said David Aronchick, product manager for Cloud AI at Google. “With the release of Ubuntu 18.04 LTS and Canonical’s collaborations to the Kubeflow project, Canonical has provided both a familiar and highly performant operating system that works everywhere.”

Software engineers and data scientists can use tools they already know, such as Ubuntu, Kubernetes and Kubeflow, and greatly accelerate their ability to deliver value for their customers, whether on-premises or in the cloud, he added.

Multiple Cloud Focus

Canonical has seen a significant adoption of Ubuntu in the cloud, apparently because it offers an alternative, said Canonical’s Fabel.

Typically, customers ask Canonical to deploy OpenStack and Kubernetes together. That is a pattern emerging as a common operational framework, he said. “Our focus is delivering Kubernetes across multiple clouds. We do that in alignment with Microsoft Azure service.”

Better Economics

Economically, Canonical sees Kubernetes as a commodity, so the company built it into Ubuntu’s support package for the enterprise. It is not an extra, according to Fabel.

“That lines up perfectly with the business model we see the public clouds adopting, where Kubernetes is a free service on top of the VM that you are paying for,” he said.

The plan is not to offer overly complex models based on old-school economic models, Fabel added, as that is not what developers really want.

“Our focus is on the most effective delivery of the new commodity infrastructure,” he noted.

Private Cloud Alternative to VMware

Canonical OpenStack delivers private cloud with significant savings over VMware and provides a modern, developer-friendly API, according to Canonical. It also has built-in support for NFV and GPGPUs. The Canonical OpenStack offering has become a reference cloud for digital transformation workloads.

Today, Ubuntu is at the heart of the world’s largest OpenStack clouds, both public and private, in key sectors such as finance, media, retail and telecommunications, Shuttleworth noted.

Other Highlights

Among Ubuntu 18.04’s benefits:

  • Containers for legacy workloads with LXD 3.0 — LXD 3.0 enables “lift-and-shift” of legacy workloads into containers for performance and density, an essential part of the enterprise container strategy.

    LXD provides “machine containers” that behave like virtual machines in that they contain a full and mutable Linux guest operating system, in this case, Ubuntu. Customers using unsupported or end-of-life Linux environments that have not received fixes for critical issues like Meltdown and Spectre can lift and shift those workloads into LXD on Ubuntu 18.04 LTS with all the latest kernel security fixes.

  • Ultrafast Ubuntu on a Windows desktop — New Hyper-V optimized images developed in collaboration with Microsoft enhance the virtual machine experience of Ubuntu in Windows.
  • Minimal desktop install — The new minimal desktop install provides only the core desktop and browser for those looking to save disk space and customize machines with their specific apps or requirements. In corporate environments, the minimal desktop serves as a base for custom desktop images, reducing the security cross-section of the platform.

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






Private Cloud May Be the Best Bet: Report


By Jack M. Germain

Jun 13, 2018 5:00 AM PT

News flash: Private cloud economics can offer more cost efficiency than public cloud pricing structures.

Private (or on-premises) cloud solutions can be more cost-effective than public cloud options, according to “Busting the Myths of Private Cloud Economics,” a report 451 Research and Canonical released Wednesday. That conclusion counters the notion that public cloud platforms traditionally are more cost-efficient than private infrastructures.

Half of the enterprise IT decision-makers who participated in the study identified cost as the No. 1 pain point associated with the public cloud. Forty percent mentioned cost-savings as a key driver of cloud migration.

“We understand that people are looking for more cost-effective infrastructure. This was not necessarily news to us,” said Mark Baker, program director at Canonical.

“It was interesting to see the report point out that operating on-premises infrastructure can be as cost-effective as using public cloud services if done in the right way,” he told LinuxInsider.

Report Parameters

The Cloud Price Index, 451 Group’s tracking of public and private cloud pricing since 2015, supplied the data underpinning the latest report. Companies tracked in the Cloud Price Index include but are not limited to Amazon Web Services, Google, Microsoft, VMware, Rackspace, IBM, Oracle, HPE, NTT and CenturyLink.

The Cloud Price Index is based on quarterly surveys of some 50 providers across the globe that together represent nearly 90 percent of global Infrastructure as a Service revenue, noted Owen Rogers, director of the Digital Economics Unit at 451 Research.

“Most providers give us data in return for complimentary research. Canonical asked us if they could participate as well. Any provider is welcome to submit a quotation and to be eligible for this research,” he told LinuxInsider.

Providers are not compared directly with each other, because each vendor and each enterprise scenario is different. It is not fair to say Provider A is cheaper than Provider B in all circumstances, Rogers explained.

“We just provide benchmarks and pricing distributions for a specific use-case so that enterprises can evaluate if the price they are paying is proportional to the value they are getting from that specific vendor,” he said. “Because we keep individual providers’ pricing confidential, we get more accurate and independent data.”

Private Cloud Trend

The private cloud sector continues to attract enterprise customers looking for a combination of price economy and cloud productivity. That combination is a driving point for Canonical’s cloud service, said Baker.

“We see customers wanting to be able to continue running workloads on-premises as well as on public cloud and wanting to get that public cloud economics within a private cloud. We have been very focused on helping them do that,” he said.

Enterprise customers have multiple reasons for choosing on-premises or public cloud services. These range from workload characteristics, such as highly variable workloads, to different business types, such as retail operations. Public clouds let users vary their capacity.

“You see the rates of innovation delivered by the public cloud because of the new services they are launching,” said Baker, “but there is a need for some to run workloads on-premises as well. That can be for compliance reasons, security reasons, or cases where systems are already in place.”

In some cases, maintaining cloud operations on-premises can be even more cost-effective than running in the public cloud, he pointed out. Cost is only one element, albeit a very important one.

Report Highlights

The public cloud is not always the bargain buyers expect, the report suggests. Cloud computing may not deliver the promised huge cost savings for some enterprises.

Reducing costs was the enterprise’s main reason for moving to the cloud, based on a study conducted last summer. More than half of the decision-makers polled said cost factors were still their top pain point in a follow-up study a few months later.

Once companies start consuming cloud services, they realize the value that on-demand access to IT resources brings in terms of quicker time to market, easier product development, and the ability to scale to meet unexpected opportunities.

As a result, enterprises consume more and more cloud services as they look to grow revenue and increase productivity. With scale, public cloud costs can mount rapidly, without savings from economies of scale being passed on, the latest report concludes.

Private Clouds Can Be Cheaper If…

Enterprises using private or on-premises clouds need the right combination of tools and partnerships. Cost efficiency is only possible when operating in a “Goldilocks zone” of high utilization and high labor efficiency.

Enterprises should use tools, outsourced services and partnerships to optimize their private cloud as much as possible to save money, 451 recommended. That will enhance their ability to profit from value-added private cloud benefits.

Many managed private clouds were priced reasonably compared to public cloud services, the report found, providing enterprises with the best of both worlds — private cloud peace of mind, control and security, yet at a friendlier price.

Managed services can increase labor efficiency by providing access to qualified, experienced engineers. They also can reduce some operational burdens with the outsourcing and automation of day-to-day operations, the report notes.

Convincing Study

While public cloud services can be valuable in many circumstances, they are not necessarily the Utopian IT platform of the future that proponents make them out to be, observed Charles King, principal analyst at Pund-IT.

“As the report suggests, these points are clearly the case where enterprises are involved. However, they are increasingly relevant for many smaller companies, especially those that rely heavily on IT-based service models,” he told LinuxInsider.

An interesting point about the popularity of private cloud services is that their success relates to generational shifts in IT management processes and practices, King noted. Younger admins and other personnel gravitate toward services that offer simplified tools and intuitive graphical user interfaces that are commonplace in public cloud platforms but unusual in enterprise systems.

“Public cloud players deserve kudos for seeing and responding to those issues,” King said. “However, the increasing success of private cloud solutions is due in large part to system vendors adapting to those same generational changes.”

The Canonical Factor

Canonical’s managed private cloud compares favorably to public cloud services, the report found. Canonical last year engaged with 451 Research for the Cloud Price Index, which compared its pricing and services against the industry at large using the CPI’s benchmark averages and market distributions.

Canonical’s managed private cloud was cheaper than 25 of the public cloud providers included in the CPI price distributions, which proves that the benefits of outsourced management and private cloud do not have to come at a premium, according to the report’s authors.

High levels of automation drive down management costs significantly. Canonical is a pioneer in model-driven operations that reduce the amount of fragmentation and customization required for diverse OpenStack architectures and deployments.

That likely is a contributing factor to the report’s finding that Canonical was priced competitively against other hosted private cloud providers. Canonical’s offering is a full-featured open cloud with a wide range of reference architectures and the ability to address the entire range of workload needs at a competitive price.

Dividing Options

It is not so much a divide between private and public cloud usage in enterprise markets today, suggested Pund-IT’s King, as a case of organizations developing a clearer understanding or sophistication about what works best in various cloud scenarios and what does not.

“The Canonical study clarifies how the financial issues driving initial public cloud adoption can and do change over time and often favor returning to privately owned cloud-style IT deployments,” he explained. “But other factors, including privacy and security concerns, also affect which data and workloads companies will entrust to public clouds.”

A valid case exists for using both public and private infrastructure, according to the 451 Research report. Multicloud options are the endgame for most organizations today. This approach avoids vendor lock-in and enables enterprises to leverage the best attributes of each platform, but the economics have to be realistic.

It is worth considering private cloud as an option rather than assuming that public cloud is the only viable route, the report concludes. The economics showcased in the report suggest that a private cloud strategy could be a better solution.


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






Software-Defined Data Centers: VMware Designs


These are best practices and proven practices for how a design for all components in the SDDC might look. This section highlights a possible cluster layout, including a detailed description of what needs to be put where, and why a certain configuration needs to be made.

Typically, every design should have an overview to quickly convey what the solution is going to look like and how the major components are related. In the SDDC, one could start by drawing the vSphere clusters, including their functions.

Logical overview of the SDDC clusters

The following image describes an SDDC that is going to be run on the three-cluster approach:

 

The three clusters are as follows:

  • The management cluster for all SDDC managing services
  • The NSX edge cluster where all the north-south network traffic is flowing through
  • The actual payload cluster where the production VMs get deployed

Tip: Newer best practices from VMware, as described in the VMware validated designs (VVD) version 3.0, also propose a two-cluster approach. In this case, the edge cluster is not needed anymore and all edge VMs are deployed directly onto the payload cluster. This can be a better choice from a cost and scalability perspective. However, it is important to choose the model according to the requirements and constraints found in the design.

The overview should be only as complex as necessary since its purpose is to give a quick impression over the solution and its configuration. Typically, there are a few of these overviews for each section.

This forms a basic SDDC design where the edge and the management cluster are separated. According to the latest VMware best practices, payload and edge VMs can also run on the same cluster. This is basically a decision based on the scale and size of the entire environment. Often it is also a decision based on a limit or a requirement — for example, edge hosts needing to be physically separated from management hosts.

Logical overview of solution components

This is as important as the cluster overview and should describe the basic structure of the SDDC components, including possible connections to third-party integrations such as IPAM.

Also, it should provide a basic understanding of the relationship between the different solutions.

 

It is important to understand these components and how they work together. This will become important during the deployment of the SDDC, since none of these components should be left out or configured incorrectly. This is especially true for the vRealize Log Insight connections.

Note: If not all components are configured to send their logs to vRealize Log Insight, there will be gaps, which can make troubleshooting very difficult or even impossible. A plan that describes these relations can be very helpful during this step of the SDDC configuration.

These connections should also be reflected in a table to show the relationship and confirm that everything has been set up correctly. The better the detail is in the design, the lower the chance that something gets configured wrong or is forgotten during the installation.
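One concrete piece of that configuration is pointing each ESXi host's syslog at Log Insight, which can be done through the advanced option `Syslog.global.logHost`. The following minimal pyVmomi sketch shows the idea; the Log Insight address is a placeholder and `host` is assumed to be a `vim.HostSystem` object from an existing connection.

```python
# Sketch: point an ESXi host's syslog at a vRealize Log Insight instance.
# `host` is assumed to be a pyVmomi vim.HostSystem object; the Log Insight
# address below is a placeholder.
from pyVmomi import vim

def set_syslog_target(host, loghost='udp://loginsight.example.local:514'):
    option_manager = host.configManager.advancedOption
    option_manager.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='Syslog.global.logHost', value=loghost)
    ])
```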

The vRealize Automation design

Based on the use case, there are two setup methods/designs that vRealize Automation 7 supports when being installed.

Small: Small stands for a very dense and easy-to-deploy design. It is not recommended for any enterprise workloads or even for production. But it is ideal for a proof of concept (PoC) environment, or for a small dev/test environment to play around with SDDC principles and functions.

The key to the small deployment is that all the IaaS components can reside on a single Windows VM. Optionally, additional DEMs can be attached, which eases future scaling. However, this setup has one fundamental disadvantage: there is no built-in resilience or HA for the portal or DEM layer. This means that every glitch in one of these components will always affect the entire SDDC.

Enterprise: Although this is a more complex way to install vRealize Automation, this option will be ready for production use cases and is meant to serve big environments. All the components in this design will be distributed across multiple VMs to enable resiliency and high availability.

 

In this design, the vRealize Automation OVA (vApp) runs twice. To enable true resilience, a load balancer needs to be configured. Users access the load balancer and get forwarded to one of the portals. VMware has good documentation on configuring NSX as a load balancer for this purpose, as well as the F5 load balancer. Basically, any load balancer can be used, as long as it supports HTTP(S) protocol health checks.

Note: A DNS alias or MS load balancing should not be used for this, since these methods cannot verify whether the target server is still alive. According to VMware, the load balancer must run checks to understand whether each of the vRA applications is still available. If these checks are not implemented, the user will get an error while trying to access the broken vRA node.
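To illustrate what such a check does, here is a rough Python sketch of an application-level probe; the node names are placeholders, and the health URL path is an assumption that should be replaced with the health-monitor URL given in VMware's vRA load-balancing guide for the version in use.

```python
# Rough sketch of an application-level health probe, the kind of check a load
# balancer must run against each vRA node. Node names are placeholders and the
# URL path is an assumption; use the health-monitor URL from VMware's vRA
# load-balancing guide for the version in use.
import requests

VRA_NODES = ['vra-app-01.example.local', 'vra-app-02.example.local']
HEALTH_PATH = '/vcac/services/api/health'   # assumed/placeholder path

def healthy(node, timeout=5):
    try:
        resp = requests.get(f'https://{node}{HEALTH_PATH}',
                            timeout=timeout, verify=False)
        return resp.status_code == 200
    except requests.RequestException:
        return False

for node in VRA_NODES:
    print(node, 'UP' if healthy(node) else 'DOWN')
```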

In addition to the vRealize Automation portal, there has to be a load balancer for the web server components. These components, too, will be installed on separate Windows VMs. The load balancer for them has the same requirements as the one for the vRealize Automation instances.

The active web server must contain only the first web component of vRA, while the second (passive) web server can contain components 2, 3, and more.

Finally, the DEM workers have to be doubled and put behind a load balancer to ensure that the whole solution is resilient and can survive an outage of any one of the components.

Tip: If this design is used, the VMs for the different solutions need to run on different ESXi hosts in order to guarantee full resiliency and high availability. Therefore, VM anti-affinity rules must be used to ensure that the DEMs, web servers, or vRA appliances never run on the same ESXi host. It is very important to set this rule; otherwise, a single ESXi outage might affect the entire SDDC.
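A minimal pyVmomi sketch of such a DRS anti-affinity rule is shown below; it assumes `cluster` is a `vim.ClusterComputeResource` and `vms` is the list of VMs (for example, the two vRA appliances) that must be kept on different hosts.

```python
# Sketch: create a mandatory DRS anti-affinity rule so that the listed VMs
# (for example, the two vRA appliances) never run on the same ESXi host.
# `cluster` is assumed to be a vim.ClusterComputeResource and `vms` a list of
# vim.VirtualMachine objects from an existing connection.
from pyVmomi import vim

def create_anti_affinity_rule(cluster, vms, rule_name='vra-appliances-separate'):
    rule = vim.cluster.AntiAffinityRuleSpec(vm=vms, enabled=True,
                                            mandatory=True, name=rule_name)
    rule_spec = vim.cluster.RuleSpec(info=rule, operation='add')
    config_spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])
    return cluster.ReconfigureComputeResource_Task(config_spec, modify=True)
```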

This is one of VMware’s suggested reference designs to ensure vRA availability for users requesting services. Although it is only a suggestion, it is highly recommended for a production environment. Despite all the complexity, it offers the highest grade of availability and ensures that the SDDC can stay operational even if the management stack has trouble.

Tip: vSphere HA cannot deliver this grade of availability, since the VM would power off and on again. This can be harmful in an SDDC environment. Also, to bring operations back up, the startup order is important. Since HA can’t really take care of that, it might power the VM back on on a surviving host, but the SDDC might still be unusable due to connection errors (wrong order, stalled communication, and so on).

Once the decision is made for one of these designs, it should be documented as well in the setup section. Also, take care that none of the limits, assumptions, or requirements are violated with that decision.

Another mechanism of resiliency is to ensure that the required vRA SQL database is configured as a SQL cluster. This ensures that no single point of failure can affect this component. Typically, big organizations already have some form of SQL cluster running where the vRA database could be installed. If this isn’t a possibility, it is strongly recommended to set up such a cluster in order to protect the database as well. This fact should be documented in the design as a requirement when it comes to the vRA installation.

This tutorial is a chapter excerpt from “Building VMware Software-Defined Data Centers” by Valentin Hamburger. Use the code ORSCP50 at checkout to save 50% on the recommended retail price until Dec. 15.


