
Software-Defined Data Centers: VMware Designs


These are best practices and proven practices for how a design for all SDDC components might look. The design highlights a possible cluster layout, including a detailed description of what needs to go where, and why certain configuration choices need to be made.

Typically, every design should have an overview so the reader can quickly understand what the solution will look like and how the major components are related. For the SDDC, one could start by drawing the vSphere clusters, including their functions.

Logical overview of the SDDC clusters

The following image describes an SDDC built on the three-cluster approach:


The three clusters are as follows:

  • The management cluster for all SDDC managing services
  • The NSX edge cluster where all the north-south network traffic is flowing through
  • The actual payload cluster where the production VMs get deployed

Tip: Newer best practices from VMware, as described in the VMware validated designs (VVD) version 3.0, also propose a two-cluster approach. In this case, the edge cluster is not needed anymore and all edge VMs are deployed directly onto the payload cluster. This can be a better choice from a cost and scalability perspective. However, it is important to choose the model according to the requirements and constraints found in the design.

The overview should be only as complex as necessary, since its purpose is to give a quick impression of the solution and its configuration. Typically, there are a few of these overviews for each section.

This forms a basic SDDC design in which the edge and management clusters are separated. According to the latest VMware best practices, payload and edge VMs can also run on the same cluster. This is basically a decision based on the scale and size of the entire environment. Often it is also driven by a limit or a requirement, such as edge hosts needing to be physically separated from management hosts.

Logical overview of solution components

This overview is as important as the cluster overview and should describe the basic structure of the SDDC components, including possible connections to third-party integrations such as IPAM. It should also provide a basic understanding of the relationships between the different solutions.


It is important to understand these components and how they work together. This becomes important during the deployment of the SDDC, since none of these components should be left out or configured incorrectly. This is especially true for the vRealize Log Insight connections.

Note: If not all components are configured to send their logs to vRealize Log Insight, there will be gaps, which can make troubleshooting very difficult or even impossible. A diagram that describes these relationships can be very helpful during this step of the SDDC configuration.

These connections should also be captured in a table to show the relationships and confirm that everything has been set up correctly. The more detailed the design, the lower the chance that something gets misconfigured or forgotten during the installation.
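
For example, the ESXi hosts themselves can be pointed at Log Insight from the command line. Here is a minimal sketch, run in an ESXi shell and assuming a hypothetical Log Insight address of loginsight.example.local:

    # forward this host's logs to a Log Insight instance (hostname is hypothetical)
    esxcli system syslog config set --loghost='udp://loginsight.example.local:514'
    esxcli system syslog reload
    # ensure the outbound syslog firewall ruleset is enabled
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true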

The vRealize Automation design

Based on the use case, vRealize Automation 7 supports two setup designs at installation time.

Small: This is a very dense and easy-to-deploy design. It is not recommended for enterprise workloads or even for production, but it is ideal for a proof of concept (PoC) environment, or for a small dev/test environment to experiment with SDDC principles and functions.

The key to the small deployment is that all the IaaS components can reside on one single Windows VM. Optionally, additional Distributed Execution Managers (DEMs) can be attached, which eases future scaling. However, this setup has one fundamental disadvantage: there is no built-in resilience or HA for the portal or DEM layer. This means that every glitch in one of these components will affect the entire SDDC.

Enterprise: Although this is a more complex way to install vRealize Automation, this option is ready for production use cases and is meant to serve large environments. All the components in this design are distributed across multiple VMs to enable resiliency and high availability.


In this design, the vRealize Automation OVA (vApp) runs twice. To enable true resilience, a load balancer needs to be configured. Users access the load balancer and get forwarded to one of the portals. VMware has good documentation on configuring NSX as a load balancer for this purpose, as well as the F5 load balancer. Basically, any load balancer can be used, as long as it supports HTTP health checks.

Note: A DNS alias or Microsoft network load balancing should not be used for this, since these methods cannot verify whether the target server is still alive. According to VMware, the load balancer must perform checks to determine whether each of the vRA appliances is still available. If these checks are not implemented, users will get an error when they are forwarded to a broken vRA instance.
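
To illustrate what such a check looks like, a health monitor typically probes a status URL on each appliance and expects a specific HTTP status code. A minimal sketch using curl, with a hypothetical appliance name vra01.example.local; take the exact URL and expected status for your vRA version from VMware's load-balancing guide:

    # probe a vRA appliance the way a load-balancer health monitor would
    # (hostname and health URL are assumptions; verify against VMware docs)
    curl -sk -o /dev/null -w '%{http_code}\n' \
        https://vra01.example.local/vcac/services/api/health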

In addition to the vRealize Automation portal, there has to be a load balancer for the web server components, which are likewise installed on separate Windows VMs. The load balancer for these components has the same requirements as the one for the vRealize Automation instances.

The active web server must contain only the first vRA web component, while the second (passive) web server can host components 2, 3, and so on.

Finally, the DEM workers have to be doubled and placed behind a load balancer to ensure that the whole solution is resilient and can survive the outage of any one component.

Tip: If this design is used, the VMs for the different solutions need to run on different ESXi hosts in order to guarantee full resiliency and high availability. Therefore, VM anti-affinity rules must be used to ensure that the DEMs, web servers, and vRA appliances never run on the same ESXi host. It is very important to set these rules; otherwise, a single ESXi host outage might affect the entire SDDC.
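
Such rules are configured in vSphere DRS. As one illustration, here is a hedged sketch using the open source govc CLI, with hypothetical cluster and VM names (the same rule can of course be created in the vSphere Web Client):

    # create a DRS anti-affinity rule so the two vRA appliances
    # never land on the same ESXi host (all names are hypothetical)
    govc cluster.rule.create -cluster mgmt-cluster -name vra-anti-affinity \
        -enable -anti-affinity vra-app-01 vra-app-02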

This is one of VMware’s suggested reference designs for ensuring vRA availability for users requesting services. Although it is only a suggestion, it is highly recommended for a production environment. Despite its complexity, it offers the highest grade of availability and ensures that the SDDC can stay operational even if the management stack runs into trouble.

Tip: vSphere HA cannot deliver this grade of availability, since it would power the affected VM off and on again, which can be harmful in an SDDC environment. Startup order also matters when bringing operations back up. Since HA cannot really take care of that, it might power the VM back on on a surviving host, yet the SDDC might still be unusable due to connection errors (wrong order, stalled communication, and so on).

Once the decision is made for one of these designs, it should also be documented in the setup section. Take care that none of the limits, assumptions, or requirements are violated by that decision.

Another resiliency mechanism is to configure the required vRA SQL database as an SQL cluster, which ensures that no single point of failure can affect this component. Typically, big organizations already have some form of SQL cluster running on which the vRA database could be installed. If that is not possible, it is strongly recommended to set up such a cluster in order to protect the database as well. This should be documented in the design as a requirement for the vRA installation.

This tutorial is a chapter excerpt from “Building VMware Software-Defined Data Centers” by Valentin Hamburger. Use the code ORSCP50 at checkout to save 50% on the recommended retail price until Dec. 15.




IT Pros Review Top Vendors


Users cite pros and cons of HPE BladeSystem, Cisco UCS B-series, and Lenovo Flex System

In many enterprise organizations, blade servers reduce the data center footprint by saving space and lowering overall power consumption. IT professionals consider a number of factors when selecting a blade server for their enterprise, including the variety of hardware integrations, easy management, and minimal energy usage.

According to product reviews by IT Central Station users, top blade server vendors in the market include HPE BladeSystem, Cisco UCS B-Series Blade Servers, and Lenovo Flex System Blade Servers.

Here is what our users have to say about working with these products, describing which features they find most valuable and offering insight on where they see room for improvement.

HPE BladeSystem

A senior network administrator at a government agency said he considers HPE BladeSystem’s remote management capabilities to be among its most valuable features:

“Having implemented this solution, it has enabled us to have remote management of equipment problems, to identify the power for reviewing the status of errors without having to be on-site, but remotely from anywhere required. It allows immediate access to the server management and immediate detection of the access logs.”

An enterprise architect at a financial services firm lauded the virtualization capabilities of the product:

“The virtual connect side of networking and the manageability through that is by far the biggest win for us. The blades come and go as racks do, but the virtualization back of it means a lot less hands on and a lot more manageability.”


However, a systems engineer of business technology at a transportation company noted that HPE BladeSystem could improve in terms of scalability:

“I would like to see better scalability. We have been using this solution for five years, and sometimes there are scalability issues with relatively older generations. If planned well in advance, it will make your life easier.”

Cisco UCS B-Series

Matthew M., a data center practice manager, takes a holistic point of view on what makes the Cisco UCS B-Series blade server valuable.

“The UCS environment as a whole. The hardware is easily swappable and, utilizing the boot from SAN option, you can always keep your server intact due to the service profiles. So if your blade has failures and you have a hot spare, you can transfer the service profile to a new blade and be operational in mere minutes. Huge for uptime and perfect for environments like VMware ESXi hosts, which is what I use them for primarily.”

A senior system specialist at a construction company wrote that running Cisco UCS in a Vblock infrastructure is particularly beneficial for his company:

“Running in the VCE Vblock gives us the flexibility to deploy a large virtual workload of servers. We use a mix of mainly Windows servers and a few Linux appliances. I had one blade server fail. The replacement was up and operating quickly after the blade server was swapped over.”

But Brad F., a data center systems engineer, noted areas where the Cisco UCS B-Series could improve:

“The HTML5 interface is a much needed improvement over the old Java interface, but still needs a little work. When customers are first introduced to UCS, the setup is somewhat complex. Yet the learning curve is reasonable.”

Lenovo Flex System Blade Servers

Alejandro D., a System x and p/blade/storage/SAN hardware and software support specialist, cited the Lenovo blade servers’ redundancy as a valuable feature:

“The features of this product that I value most are total redundancy in all its components: power, cooling, communications, fiber, administration and blades, and a data center in 8U; you can accommodate 14 servers in a BladeCenter H chassis.”

Muhammad S., a senior system administrator at a consumer goods company, provided insight into the product’s central management capabilities:

“Central management of all blade servers and performance: It helps us to access blade servers remotely even at boot time, as well, when we can access the BIOS setup remotely. Other than that, we can restart and shut down blade servers from a single console.”

However, Amirreza Y., a design and development engineer at a communications service provider, said the Lenovo falls short on the storage front:

“The storage part of this product needs to be improved. If storage is also attached to this bundle, it would be a good solution for the databases… In the new version of this product, the Flex System, the storage feature is also available with the CPU and memory.”




How to Install Firefox Quantum in Linux


Firefox 57 has finally been officially released for all major operating systems, e.g. Linux (32/64-bit), macOS, Windows, and Android. The binary packages are now available for download for Linux (POSIX) systems; grab the desired one and enjoy browsing with the newly added features.

What’s new in Firefox 57

The major new release comes with the following features:

  • A new design look, thanks to a new theme, a new Firefox logo, and a new ‘New Tab’ page.
  • A multi-core Rendering Engine that’s GPU efficient.
  • New Add-ons designed for the modern web.
  • Faster page load time with less RAM (according to Mozilla developers it should load pages 2 times faster).
  • Efficient memory management.

The new Firefox also adds lots of interesting new features on Android. So don’t wait; just grab the latest Firefox for Android from the Google Play Store and have fun.
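
Pending the full walk-through at Tecmint, a minimal sketch of a manual install on 64-bit Linux might look like the following; the download URL format and the /opt install path are assumptions, so adjust os= and lang= for your system:

    # download and unpack the Firefox 57 tarball for 64-bit Linux
    # (URL format is an assumption; verify on mozilla.org)
    wget -O firefox-57.tar.bz2 \
        "https://download.mozilla.org/?product=firefox-57.0-SSL&os=linux64&lang=en-US"
    sudo tar -xjf firefox-57.tar.bz2 -C /opt
    /opt/firefox/firefox &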

Read more at Tecmint


How to Set Up Easy Remote Desktop Access in Linux


Linux is a remarkably flexible operating system. One of the easiest ways to appreciate that flexibility is to note that, for any given task, there are always multiple paths to success. This is perfectly illustrated when you find the need to display a remote desktop on a local machine. You could go with RDP, VNC, SSH, or even a third-party option. Generally speaking, your desktop will determine the route you take, but some options are far easier than others. Once you understand how streamlined modern desktops have made this task, your remote administration of Linux desktops and servers (with GUIs) becomes much simpler.

As I mentioned, how you do this will depend upon your distribution. In this article, I’ll cover connecting to Fedora 26 from Ubuntu Desktop 18.04 and then from Fedora 26 to Kubuntu 17.10. The big issue you will come across is that some desktops simply don’t work well with this technology. For example, as it stands, Wayland has yet to support VNC, and the same holds true for the Elementary OS desktop. I’ll be using the tools remmina, krfb, and the GNOME built-in tools.

From Ubuntu to Fedora

With the latest release of Fedora 26, using the default GNOME desktop, setting up a remote connection is fairly straightforward (because everything is installed by default). The first thing you must do is enable sharing. If you open up the GNOME Dash and type sharing, you’ll see the Sharing option appear, which allows you to open the tool. When the window opens, click the ON/OFF slider to the ON position and then click Screen Sharing. In the resulting window (Figure 1), click the checkbox for Allow connections to control the screen.

You can also enable the access options New connections must ask for access and Require a password. I highly recommend, at a bare minimum, enabling New connections must ask for access. That way, when someone attempts to gain access to your remote desktop, the connection will not be made until it is approved. Once these options have been taken care of, you can close out of that window.
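
For reference, the Sharing panel in this GNOME release drives the vino VNC server behind the scenes, so the same options can presumably be set from a terminal as well. A hedged sketch, with schema keys assumed from the vino backend:

    # assumed CLI equivalents of the Sharing panel options (vino backend)
    gsettings set org.gnome.Vino prompt-enabled true    # ask before granting access
    gsettings set org.gnome.Vino authentication-methods "['vnc']"    # require a password
    gsettings set org.gnome.Vino vnc-password "$(echo -n 'changeme' | base64)"    # vino stores it base64-encoded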

Out of the box, Fedora does not have the necessary port opened in the firewall, so you must open it for this remote connection to work. Go back to the GNOME Dash and type firewall. When the firewall icon appears, click on it and enter your admin password. In the resulting window, click on Services and scroll down until you see vnc-server (Figure 2).

Click to enable vnc-server and then, when prompted, type your admin password. Access to the VNC port is now enabled.
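
If you prefer the terminal, the same change can be made with firewalld’s CLI; the vnc-server service name matches what the GUI shows:

    # open the VNC ports via firewalld instead of the GUI
    sudo firewall-cmd --permanent --add-service=vnc-server
    sudo firewall-cmd --reload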

Head over to the Ubuntu machine. We need to install the remmina application (which is one of the better remote client applications). Because the version in the standard repository contains a few bugs, we’ll install the most recent version with the following steps:

  1. Add the necessary repository with the command sudo apt-add-repository ppa:remmina-ppa-team/remmina-next

  2. Update the apt sources with the command sudo apt update

  3. Install the software with the command sudo apt-get install remmina remmina-plugin-rdp remmina-plugin-gnome libfreerdp-plugins-standard
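
Taken together, the same three steps as a single copy-and-paste sequence:

    sudo apt-add-repository ppa:remmina-ppa-team/remmina-next
    sudo apt update
    sudo apt-get install remmina remmina-plugin-rdp remmina-plugin-gnome libfreerdp-plugins-standard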

From the desktop menu, type remmina and open the newly installed software. In the address window (Figure 3), select VNC from the drop-down, enter the IP address of the Fedora machine, and hit Enter on the keyboard.

Once you hit Enter on the keyboard, the Fedora desktop notification will pop up. Hover over that notification and click Accept (Figure 4). The connection will be made and whoever is on the Ubuntu machine can control your Fedora desktop.

From Fedora to Kubuntu

Now we’re going to connect from Fedora to Kubuntu. Because we’re going to use the same client (remmina), we need to install it on Fedora. To do this, open up a terminal window and issue the command sudo dnf install remmina.

With that installed, we now have to add the necessary piece of software on the Kubuntu desktop. The application in question is krfb and can be installed with the command sudo apt install krfb. Once that is installed, you can open the KDE menu and type krfb. Click on the resulting entry and then, in the new window, click the checkbox associated with Enable Desktop Sharing (Figure 5).
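
In short, it is one package on each side:

    # on Fedora (the machine you will sit at)
    sudo dnf install remmina
    # on Kubuntu (the desktop to be shared)
    sudo apt install krfb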

The krfb tool also gives you the necessary IP address as well as the password to use in order to gain access from the client. If you don’t like the given password, it can be changed by clicking the associated edit button.

At this point, your KDE desktop is ready to share. Head over to Fedora, open the GNOME Dash, type remmina, and click the icon to open the software. Select VNC from the drop-down, type the IP address of the Kubuntu machine, and hit Enter. You will be prompted for the krfb password. Type it and click OK. Back on the Kubuntu desktop, you’ll be asked to accept the connection. Once accepted, the Kubuntu desktop will appear on the Fedora machine. You’re ready to work.

Simple remote desktop connection

And that’s all there is to it. Yes, there are plenty of other ways to enable these types of connections (and some desktops don’t make the process nearly as easy). Fortunately, modern desktop distributions do include everything necessary to make remote connections incredibly simple. If your particular desktop of choice doesn’t include the tools to make this easy, you’re looking at installing one of the many VNC servers available for Linux (such as vino, TigerVNC, or tightvnc). Going the standard VNC server route might not be as user-friendly as the methods I’ve explained here, but, once set up, it is equally reliable.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

4 Ways to Watch or Monitor Log Files in Real Time


How can I see the content of a log file in real time on Linux? Well, there are a lot of utilities out there that can output the content of a file while the file is changing or continuously updating. One of the best-known and most heavily used utilities for displaying file content in real time on Linux is the tail command.

1. tail Command – Monitor Logs in Real Time

As mentioned, the tail command is the most common solution for displaying a log file in real time. However, the command has two variants, as illustrated in the examples below.

In the first example, the tail command needs the -f argument to follow the content of a file.
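
The excerpt’s examples continue at Tecmint, but a typical follow invocation looks like this (assuming a Debian-style /var/log/syslog; substitute your own log file):

    tail -f /var/log/syslog          # print new lines as they are appended
    tail -n 50 -f /var/log/syslog    # show the last 50 lines first, then follow
    tail -F /var/log/syslog          # like -f, but re-opens the file after log rotation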

Read more at Tecmint
