
What Users Say About Top Vendors


The all-flash array (AFA) has matured to the point where it now powers much of the growth in the enterprise storage business. Advances in the design, performance, and management of solid-state drives (SSDs), coupled with declining costs, make flash storage viable for many workloads. Enterprise storage is relentlessly demanding, though, so potential buyers need to think critically when choosing an AFA.

According to product reviews by IT Central Station users, the top all-flash array vendors on the market are Hewlett Packard Enterprise (HPE) with 3PAR flash storage, NetApp, Tintri, Nimble Storage (now part of HPE), Pure Storage, and IBM.

Based on their experience with AFAs from these vendors, contributors at IT Central Station shared their thoughts, including benefits the products provide and areas where they could improve.

HPE 3PAR

Brent Dunington, systems architect at a university, described his company’s decision-making process for choosing HPE 3PAR flash storage:

“We went through a whole data center refresh cycle and one of the things is that we needed to look at our disk system. Everything was for spinning disks, so we decided to make the leap to an all-SSD data center. We brought in all the competitors, went through an RFP process, and 3PAR came ahead.”

A system administrator at an insurance company shared how HPE 3PAR compares to other storage solutions he has used in the past:

“The speed of the Flash Array is better than what we had with the previous products. We like their blades better than the Cisco blades. It is easier to manage.”

Eric Slabbinck, project manager at a government agency, suggested specific features that could improve HPE 3PAR:

“From a personal point of view, what would interest me is a mechanism that detects file rot, i.e., whether a file or sector has become corrupt, e.g., as a result of copying the sector to other locations from the original location.”

NetApp

A lead storage/system engineer at a financial services firm described how NetApp All Flash has helped his organization:

“We have been looking for a flash solution that scales horizontally along with a proven application integration stack. NetApp has been helpful and stable, and enabled us to buy capacity as needed, as well as help in quickly refreshing UAT/DEV environments as needed.”

An R&D executive supervisor at a media company explained what he values most in All Flash FAS:

“It is very user friendly. Someone in my position needs to be able to bring up the system quickly, efficiently, and shut it down if there’s a power outage quickly and efficiently without having trouble. It also supports VMware, which is what we use; but we use the NetApp as our only filer.”

A computer systems engineer at a government agency wrote about product improvements that he’s looking forward to using once they’re released by NetApp:

“We’re interested or excited in getting to 32 Gb Fibre Channel. With their new models, NetApp will be moving to 32 Gb fibre. That would potentially raise performance and/or lower our port counts, simplifying or minimizing the amount of cables we need to put in places.”

Tintri VMstore

Mike Geller, network administrator at a healthcare company, wrote about the value Tintri has added to his organization:

“Tintri has a great web UI that allows you to view performance of individual VMs, as well as performance of the overall VMstore. Code upgrades are really simple.”

Donald Lopez, IT manager at a tech services company, shared how his organization has benefitted from Tintri:

“Immediately upon installation, we benefited from a 5X speed/performance increase in the overall system for all of our VMs migrated to the unit from an old unreliable Synology storage unit.”

Raymond Handels, system engineer at a university, weighed in on how Tintri could further improve its storage solution:

“Speed of our VDI machines. We have a very high login and logout ratio and machines are being refreshed instantly so we have a constant boot storm on our storage.”

Nimble Storage

Brian Butler, senior network analyst at a financial services firm, explained how deploying Nimble Storage benefitted his organization:

“It has vastly improved the responsiveness of our servers. It adds snapshots to help with our DR. The snapshots are sent across the way into our DR site, so we have DR copies of everything. It’s all around just improved the flow of everything.”

Paul Sabin, senior network and infrastructure manager at a legal firm, noted a shortcoming with Nimble Storage:

“I really would like to see synchronous replication. This is something that when we have multiple arrays in our environment and being able to do something like a zero RPO. Being a law firm, we really want our data to be protected all the time.”

Pure Storage

An information systems analyst at a pharma/biotech company described the value in Pure Storage’s VDI capabilities:

“For VDI, there’s a consistent user experience. Users don’t switch to VDI if it’s not at the same speed as a laptop or desktop, and Pure Storage provides that.”

Andrea Spinazi, chief of information, facility, purchasing and services manager at Roma Metropolitane S.r.l., explained what he finds most beneficial with Pure Storage:

“The most valuable features are extremely low latency, high IOPS with VMware, inline deduplication and compression….We liked the non-disruptive downgrade from FA-420 (POC) to FA-405 in production and the non-disruptive upgrade from FA-405 to M20.”

However, Leonardo Perez, deputy head of IT at a government agency, warned of a Pure Storage drawback:

“Be careful with the type of information you allocate to this storage. The solution is good for virtual machines and databases, but not for images and videos. Compression rates are not good for these types of data.”

IBM FlashSystem

A design engineer at a recruiting/HR firm described the features he values most in IBM FlashSystem:

“The performance is really good. From an operations perspective, definitely the ease of use stands out. Compared to other products and other vendors, it’s much, much easier.”

A senior solutions architect at a tech services company shared how his company has benefitted from IBM FlashSystem:

“The V9000 incorporates both the Spectrum virtualization layer as well as flash technology. It does it in such a unique manner that it provides super-fast response times. There’s low latency for the customers. It’s very simple and easy.”

Joseph King, CTO at CAS Severn, suggested a way IBM FlashSystem could improve:

“We think that IBM has to continue to invest in additional data reduction capabilities, which are on their roadmap. Being able to use flash most efficiently, where the least amount of data is physically being stored on the V9000, is really where IBM needs to make additional investment. They are doing that.”

You can read more all-flash array reviews on IT Central Station.

 

 




Install FreeRADIUS on an Ubuntu 17.04 Server and Manage It with daloRADIUS (a FreeRADIUS Web Management Application)



RADIUS, which stands for “Remote Authentication Dial In User Service”, is a network protocol — a system that defines rules and conventions for communication between network devices — for remote user authentication and accounting. Commonly used by Internet Service Providers (ISPs), cellular network providers, and corporate and educational networks, the RADIUS protocol serves three primary functions:

• Authenticates users or devices before allowing them access to a network

• Authorizes those users or devices for specific network services

• Accounts for and tracks the usage of those services
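
In practice, every switch, access point, or VPN concentrator that authenticates against the server must be registered as a RADIUS client with a shared secret. With FreeRADIUS this is done either in /etc/freeradius/clients.conf or, as configured later in this guide, in the SQL nas table. A minimal clients.conf entry looks roughly like this (the name, address, and secret below are placeholders for your own values):

client office-ap {
        ipaddr = 192.168.1.10
        secret = sharedsecret
        shortname = office-ap
}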

FreeRADIUS Features

• An open and scalable solution

• Broad support by a large vendor base

• Easy modification

• Separation of security and communication processes

• Adaptable to most security systems

• Workable with any communication device that supports RADIUS client protocol

daloRADIUS is an advanced RADIUS web platform aimed at managing hotspots and general-purpose ISP deployments. It features rich user management, graphical reporting, and accounting, and it integrates with Google Maps for geo-location (GIS). daloRADIUS is written in PHP and JavaScript and uses a database abstraction layer, which means it supports many database systems, among them the popular MySQL, PostgreSQL, SQLite, and MS SQL.

It is based on a FreeRADIUS deployment with a database server as the backend. Among other features, it implements ACLs and Google Maps integration for visually locating hotspots and access points. daloRADIUS is essentially a web application for managing a RADIUS server, so in theory it can manage any RADIUS server, but it specifically targets FreeRADIUS and its database structure. Since version 0.9-3, daloRADIUS has included an application-wide database abstraction layer based on PHP's PEAR::DB package, which supports a range of database servers.

Before installing, make sure you have an Ubuntu 17.04 LAMP server installed and ready for FreeRADIUS.

Preparing your system

Open the terminal and run the following command

sudo apt-get install php-common php-gd php-curl php-mail php-mail-mime php-pear php-db php-mysql

Install freeradius using the following command

sudo apt-get install freeradius freeradius-mysql freeradius-utils

Create the FreeRADIUS Database

You can use the following commands to create the FreeRADIUS database:

sudo mysql -u root -p

Enter password:

mysql> create database radius;

mysql> grant all on radius.* to radius@localhost identified by 'password';

Query OK, 0 rows affected (0.00 sec)

Insert the FreeRADIUS database schema using the following commands. The schema and NAS table definitions ship with the freeradius-mysql package; the paths below are the usual locations for the FreeRADIUS 2.x packages this guide assumes, so adjust them if your layout differs.

sudo mysql -u root -p radius < /etc/freeradius/sql/mysql/schema.sql

Enter password:

sudo mysql -u root -p radius < /etc/freeradius/sql/mysql/nas.sql

Enter password:
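
As a quick sanity check (not part of the original walk-through), you can list the tables the import created; you should see entries such as radcheck, radreply, radacct, and nas:

sudo mysql -u root -p radius -e "SHOW TABLES;"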

Create a new test user in the radius database:

sudo mysql -u root -p

mysql> use radius;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Database changed

mysql> INSERT INTO radcheck (UserName, Attribute, Value) VALUES ('sqltest', 'Password', 'testpwd');

Query OK, 1 row affected (0.04 sec)

mysql> exit

Bye

FreeRADIUS Configuration

You need to edit /etc/freeradius/sql.conf file

sudo vi /etc/freeradius/sql.conf

Make sure you have the following details

database = mysql
login = radius
password = password

Uncomment the following

readclients = yes

Save and Exit the file
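
With readclients = yes, FreeRADIUS reads its list of allowed RADIUS clients (your NAS devices) from the nas table in the database instead of from clients.conf. Each access point or router that will talk to the server therefore needs a row there; a minimal sketch, with a placeholder address, name, and secret, would be:

sudo mysql -u root -p radius -e "INSERT INTO nas (nasname, shortname, type, secret, description) VALUES ('192.168.1.10', 'office-ap', 'other', 'sharedsecret', 'Office access point');"

daloRADIUS can also manage these entries from its web interface once it is installed.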

Now you need to edit the /etc/freeradius/sites-enabled/default file

sudo vi /etc/freeradius/sites-enabled/default

Uncomment the sql option in the following sections

accounting

# See “Authorization Queries” in sql.conf

sql

session

# See “Authorization Queries” in sql.conf

sql

Post-Auth-Type

# See “Authorization Queries” in sql.conf

sql

Save and Exit the file

Now edit /etc/freeradius/radiusd.conf file

sudo vi /etc/freeradius/radiusd.conf

Uncomment the following option

$INCLUDE sql.conf

Save and exit the file

Now stop the FreeRADIUS server using the following command

sudo /etc/init.d/freeradius stop

Run freeradius in debugging mode. If there is no error, you are ready to go.

sudo freeradius -X

Start the freeradius using the following command

sudo /etc/init.d/freeradius start

Test the radius server using the following command

sudo radtest sqltest testpwd localhost 18128 testing123

Output is as follows

Sending Access-Request of id 68 to 127.0.0.1 port 1812
User-Name = “sqltest”
User-Password = “testpwd”
NAS-IP-Address = 127.0.1.1
NAS-Port = 18128
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=68, length=20

daloRADIUS Installation

You can download the latest daloRADIUS version from the project's download page.

Once you have downloaded the daloradius-0.9-9.tar.gz file, extract and move it using the following commands

$ tar xvfz daloradius-0.9-9.tar.gz

$ mv daloradius-0.9-9 daloradius

$ mv daloradius /var/www/html

Change Permissions

sudo chown www-data:www-data /var/www/html/daloradius -R

sudo chmod 644 /var/www/html/daloradius/library/daloradius.conf.php

A MySQL database needs to be set up for daloRADIUS. All we need to do is import the daloRADIUS schema into our existing radius database. The schema files live in the contrib/db directory of the extracted archive; the filename below is the FreeRADIUS 2.x schema shipped with 0.9-9, so adjust it if your release differs.

$ cd /var/www/html/daloradius/contrib/db

sudo mysql -u root -p radius < fr2-mysql-daloradius-and-freeradius.sql
Next, configure the following daloRADIUS setting.

sudo vi /var/www/html/daloradius/library/daloradius.conf.php

Change the database password

$configValues['CONFIG_DB_PASS'] = 'password';

Save and exit the file
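
While you are in this file, it is also worth confirming that the rest of the database settings point at the radius database created earlier. The variable names follow the same CONFIG_DB_* pattern as above; double-check them against your copy of daloradius.conf.php, but they typically look like this:

$configValues['CONFIG_DB_HOST'] = 'localhost';
$configValues['CONFIG_DB_USER'] = 'radius';
$configValues['CONFIG_DB_PASS'] = 'password';
$configValues['CONFIG_DB_NAME'] = 'radius';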

Now you need to configure the daloRADIUS website under /etc/apache2/sites-available.

sudo vi /etc/apache2/sites-available/daloradius.conf

add the following lines

Alias /daloradius "/var/www/html/daloradius/"

<Directory /var/www/html/daloradius/>
Options None
Order allow,deny
allow from all
</Directory>

Save and exit the file
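
Note that the Order/allow directives above use Apache 2.2 syntax. Ubuntu 17.04 ships Apache 2.4, which only honours them if the mod_access_compat module is loaded; if you end up with a 403 Forbidden error, a 2.4-style equivalent for the same block would be:

<Directory /var/www/html/daloradius/>
Options None
Require all granted
</Directory>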

Enable daloradius website using the following command

sudo a2ensite daloradius

Enabling site daloradius.

To activate the new configuration, you need to run:

sudo service apache2 reload

daloRADIUS Web GUI

You can access the daloRADIUS GUI at http://server-ip/daloradius, which presents a login screen.


Use the following login details

username: administrator
password: radius

If you are running PHP 7 then you might see the following error

Database connection error
Error Message: DB Error: extension not found

To fix the above error, you need to make the following changes.

Changing file library/daloradius.conf.php

It’s required to update daloRADIUS’s database connection code so that it identifies the MySQL server using the new and improved mysqli driver:

Open the file library/daloradius.conf.php for editing, locate the configuration variable CONFIG_DB_ENGINE, and change its value to mysqli (it is probably set to mysql now; note the extra i). It should end up looking as follows:

$configValues['CONFIG_DB_ENGINE'] = 'mysqli';
Changing file library/opendb.php

Open for editing the file library/opendb.php

At the very end of the file, add this new line of code, which makes MySQL work with less strict SQL syntax:

$dbSocket->query("SET GLOBAL sql_mode = '';");

Once you have logged in, you should see the main daloRADIUS management screen.





Related posts

Easily Update Ubuntu and Debian Systems with uCareSystem | Linux.com


Updates are often ignored for one reason or another. However, if you're not making a daily (or at least weekly) habit of updating your systems, then you are doing yourself, your servers, and your company a disservice.

And, even if you are regularly updating your Ubuntu and Debian systems, you may be doing the bare minimum, thereby leaving out some rather important steps.

As with nearly every aspect of Linux, fortunately, there’s an app that does an outstanding job of taking care of those upgrading tasks. A single command will:

  • Update the list of available packages

  • Download and install all available updates for the system

  • Check for and remove any old Linux kernels (retaining the current running kernel and one previous version)

  • Clear the retrieved packages

  • Uninstall obsolete and orphaned packages

  • Delete package settings from previously uninstalled software

That’s a lot of jobs for one command—but ucaresystem-core handles all this with ease. Considering that one command takes the place of at least eight commands, that’s a big time saver.

In fact, here are the commands ucaresystem-core can take care of:

  • apt update

  • apt upgrade

  • apt autoremove

  • apt clean

  • uname -r (do NOT remove this kernel)

  • dpkg --list | grep linux-image

  • sudo apt-get purge linux-image-X.X.X-X-generic (Where X.X.X-X is the kernel to be removed)

  • sudo update-grub2
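
If you ever want to run the update and cleanup portion of that list by hand, the rough manual equivalent (leaving out the kernel removal, which requires picking the right image package) is a one-liner such as:

sudo apt update && sudo apt -y upgrade && sudo apt -y autoremove && sudo apt clean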

If you love spending time at a terminal window, that’s great. But if you have a lot of systems to update, you’re probably looking out for something to make your job a bit more efficient. That’s where ucaresystem-core comes in.

I’ve been using ucaresystem-core for more than a year now (with Elementary OS and Ubuntu) and have yet to encounter a single problem. In fact, this particular tool has become one of the first I install on all Ubuntu and Debian systems. I trust it…it works.

So, how can you get this incredibly handy tool? Let’s walk through the process of installing ucaresystem-core, how to use it, and how to automate it.

Installation

The first thing you must do is install ucaresystem-core. We’ll be downloading the .deb file (as the Utappia repository seems to no longer contain a release file). Here’s how:

  1. Download the .deb file that matches your operating system release into your ~/Downloads directory

  2. Change into the ~/Downloads directory with the command cd ~/Downloads

  3. Install the deborphan dependency with the command sudo apt install deborphan

  4. Install ucaresystem-core with the command sudo dpkg -i ucaresystem-core*.deb

That’s it for the installation; ucaresystem-core is ready to go.
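
Put together, steps 2 through 4 boil down to a short sequence (the exact .deb filename depends on the release you downloaded):

cd ~/Downloads
sudo apt install deborphan
sudo dpkg -i ucaresystem-core*.deb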

Running ucaresystem-core

You might have guessed by now that running this all-in-one command is very simple, and you would be correct. To fire up ucaresystem-core, go back to your terminal and issue the command:

sudo ucaresystem-core

This will launch the tool, which will immediately warn you that it will kick off in five seconds (Figure 1).

As the command runs, it requires zero user input, so you can walk away and wait for the process to complete (how long it takes will depend upon how much needs to be updated, how much needs to be removed, the speed of your system, and the speed of your Internet connection).

The one caveat to ucaresystem-core is that it does not warn you when you need to reboot your machine (for example, if a newer kernel has been installed). Instead, you have to scroll up to near the beginning of the output to see what has been upgraded (Figure 2).

If you cannot scroll up in your terminal, you can always view the dpkg log found in /var/log/dpkg.log. In this file, you will see everything ucaresystem-core has upgraded (including a handy time-stamp — Figure 3).
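
Because dpkg.log records one action per line, you can also pull out just the upgrade entries rather than reading the whole file; for example:

grep ' upgrade ' /var/log/dpkg.log | tail -n 20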

How much space did we gain?

Since my Elementary OS machine is set up to run ucaresystem-core as a cron job, I installed a fresh instance on an Ubuntu 17.10 desktop to test how much space would be freed after a single run. This instance was a VirtualBox VM, so space was at a premium. Prior to running the ucaresystem-core command, the VM was using 6.8GB out of 12GB. After the run, the VM was using 6.2GB out of 12GB. Although that may not seem like a large amount, when you're dealing with limited space, every bit counts. Plus, if you consider that usage went from 37 percent to 34 percent, the savings look a bit better. On top of that, the system is now clean and running the most recent versions of all software…with the help of a single command.

Automating the task

Because ucaresystem-core doesn’t require user input, it is very easy to automate this, with the help of cron. Let’s say you want to run ucaresystem-core every night at midnight. To do this, open a terminal window and issue the command sudo crontab -e. Once you’re in your crontab editor, add the following to the bottom of the file:

0 0 * * * /usr/bin/ucaresystem-core

Save and close the crontab file. The command will now run every night at midnight. Thanks to the dpkg log file, you can check the results.
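
If you also want a standalone record of each run in addition to dpkg.log, you can redirect the command's output to a log file of your choosing; the path below is only an example:

0 0 * * * /usr/bin/ucaresystem-core >> /var/log/ucaresystem.log 2>&1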

Should you want to run ucaresystem-core at a different time or day, I suggest using Crontab Guru to work out the correct time/date fields for your cron job.

Keep it simple, keep it clean

You will be hard-pressed to find a simpler method of keeping your Ubuntu and Debian systems both updated and clean than ucaresystem-core. I highly recommend this very handy tool for any system that you want to keep updated and free of the cruft that upgrades can leave behind.

Of course, if you prefer to do everything by hand, that is an even more reliable method. However, when you don’t always have time for that, there’s always ucaresystem-core.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Kolab Now Integrates Collabora Online » Linux Magazine


Kolab Systems AG, a Switzerland-based company, and Collabora Productivity, a UK-based company that offers LibreOffice-based solutions, are jointly offering a browser-based online office suite. Kolab Now customers can now run the fully featured Collabora Online to create and edit all their documents.

Kolab offers standalone, fully open source Kolab Groupware solutions that anyone can run on their servers; they also offer Kolab Now, a software-as-a-service (SaaS) platform that is similar to Google Apps for businesses, but with privacy in mind.

In a press release, Kolab said, “With Kolab Now, your data is stored by a Swiss company; using open source, peer-reviewed and audited software; developed by some of the most privacy-conscious engineers in the world; and protected by Switzerland’s strictest privacy laws. We have integrated Kolab Now’s new office apps into a space so safe and private that future Edward Snowdens shall feel safe and secure.”

Because the political landscape is changing, with state-sponsored cyberattacks on the rise and governments becoming hostile toward the privacy of their citizens, it’s becoming increasingly important to protect one’s privacy, especially the many professionals, like political activists, researchers, and investigative journalists, who need tools to protect their sources and communications. This is the market to which Swiss-based Kolab Systems AG means to cater.




Data Center Architecture: Converged, HCI, and Hyperscale


A comparison of three approaches to enterprise infrastructure.

If you are planning an infrastructure refresh or designing a greenfield data center from scratch, the hype around converged infrastructure, hyperconverged infrastructure (HCI) and hyperscale might have you scratching your head. In this blog, I’ll compare and contrast the three approaches and consider scenarios where one infrastructure architecture would be a better fit than the others.

Converged infrastructure

Converged infrastructure (CI) incorporates compute, storage and networking in a pre-packaged, turnkey solution. The primary driver behind convergence was server virtualization: expanding the flexibility of server virtualization to storage and network components. With CI, administrators could use automation and management tools to control the core components of the data center. This allowed for a single admin to provision, de-provision and make any compute, storage or networking changes on the fly.

Converged infrastructure platforms use the same silo-centric infrastructure components of traditional data centers. They’re simply pre-architected and pre-configured by the manufacturers. The glue that unifies the components is specialized management software. One of the earliest and most popular CI examples is Virtual Computing Environment (VCE). This was a joint venture by Cisco Systems, EMC, and VMware that developed and sold various sized converged infrastructure solutions known as Vblock. Today, Vblock systems are sold by the combined Dell-EMC entity, Dell Technologies.

CI solutions are a great choice for infrastructure pros who want an all-in-one solution that's easy to buy and comes pre-packaged direct from the factory. CI is also easier from a support standpoint: if you maintain support contracts on your CI system, the manufacturer will assist in troubleshooting end to end. That said, many vendors are shifting their focus toward hyperconverged infrastructure.

Hyperconverged infrastructure

HCI builds on CI. In addition to combining the three core components of a data center, hyperconverged infrastructure uses software to integrate compute, network, and storage into a single unit rather than relying on separate components. This architecture offers performance advantages and eliminates a great deal of physical cabling compared to silo- and CI-based data centers.

Hyperconverged solutions also provide far more capability in terms of unified management and orchestration. The mobility of applications and data is greatly improved, as is the setup and management of functions like backups, snapshots, and restores. These operational efficiencies make HCI architectures more attractive from a cost-benefit analysis when compared to traditional converged infrastructure solutions.

In the end, a hyperconverged solution is all about simplicity and speed. A great use case for HCI would be a new virtual desktop infrastructure (VDI) deployment. Using the orchestration and automation tools available, you have the ideal platform to easily roll out hundreds or thousands of virtual desktops.

Hyperscale

The key attribute of hyperscale computing is the de-coupling of compute, network and storage software from the hardware. That’s right, while HCI combined everything into a single chassis, hyperscale decouples the components.

This approach, as practiced by hyperscale companies like Facebook and Google, provides more flexibility than hyperconverged solutions, which tend to grow in a linear fashion. For example, if you need more storage on your HCI system, you typically must add a node blade that includes both compute and built-in storage. Some hyperconverged solutions are better than others in this regard, but most fall prey to linear scaling problems if your workloads don’t scale in step.

Another benefit of hyperscale architectures is that you can manage both virtual and bare-metal servers on a single system. This is ideal for databases, which tend to run non-virtualized. Hyperscale is most useful in situations where you need to scale out one resource independently of the others. A good example is IoT, which requires a lot of data storage but not much compute. A hyperscale architecture also helps in situations where it's beneficial to keep running bare-metal compute resources yet manage storage resources in elastic pools.


