Monthly Archives: September 2017

Linux Foundation LFCE Georgi Yadkov Shares His Certification Journey | Linux.com


The Linux Foundation offers many resources for developers, users, and administrators of Linux systems. One of the most important offerings is its Linux Certification Program. The program is designed to give you a way to differentiate yourself in a job market that’s hungry for your skills.

How well does the certification prepare you for the real world? To illustrate that, The Linux Foundation is highlighting some of those who have recently passed the certification examinations. These testimonials should help you decide if either the Linux Foundation Certified System Administrator or the Linux Foundation Certified Engineer certification is right for you. In this article, recently certified engineer Georgi Yadkov shares his experience.

Linux.com: How did you become interested in Linux and open source?

Georgi Yadkov: My first experience with Linux was 15 years ago, when I received a CD with Knoppix, one of the first live CD Linux distributions, as a gift. Back then, I was totally amazed that I could boot from the optical drive and, some minutes later, be browsing the Internet.

Some years later, while attending university, a couple of other students and I proposed to the faculty that we install Moodle, an open source course management system, to improve collaboration and communication between students and teachers. That was the first time I’d ever installed a Linux web server, and I was very pleased with the results. Later, it would become my career.

Linux.com: What Linux Foundation course did you achieve certification in? Why did you select that particular course?

Yadkov: My journey started a year ago. I decided to enroll in the e-learning course for LFCS (which was huge at that time) and, if I managed to pass that exam, to prepare for LFCE. At that point, I had approximately five years of experience with server and desktop Linux machines, but my overall level was very low, and my knowledge amounted to a collection of bookmarked tutorials I used to configure my servers (how to install Apache, etc.). I learned about the course from the free edX course and decided it would be a great opportunity to expand my knowledge in this area.

There were numerous challenges for me during the last 12 months, such as difficulty understanding some of the topics and grasping the concepts, technical failures during the exams, lack of time (my second daughter was born last June), and lack of peer support (we have a strong commitment to Microsoft in the office). The most difficult moment was when I finished the LFCS training materials and was about to enroll for the exam. I checked the competency document and found that I was missing a lot of things and would have to go back to the basics with the edX course. It was quite frustrating and demoralizing, and it took me a considerable amount of time (2-3 months) to overcome it.

In May 2017, I successfully passed LFCS, and in July 2017 the LFCE.

When I was researching the available options for certification, I chose The Linux Foundation because I liked the curriculum, the practice-oriented approach of the exam, and mainly because I was able to prepare and take the exam from home.

Linux.com: What are your career goals? How do you see Linux Foundation certification helping you achieve those goals and benefiting your career?

Yadkov: I see myself in the future designing, implementing, and maintaining complex infrastructure by integrating different technologies. For me, the Linux Foundation certification is a step in that direction. I was able to grasp how some of the fundamental Internet technologies work and how they fit into a Linux server. Definitely the biggest eye-opener was realizing how much I don’t know.

Linux.com: What other hobbies or projects are you involved in? Do you participate in any open source projects at this time?

Yadkov: In my free time, I try to be offline as much as possible. Usually, I spend it with my family and friends.

Linux.com: Do you plan to take future Linux Foundation courses? If so, which ones?

Yadkov: Maybe the OpenStack Administration Fundamentals course, in the near future.

Linux.com: In what ways do you think the certification will help you as a systems administrator in today’s market?

Yadkov: The Linux Foundation courses cover the main foundations of Linux server administration. Passing the exam not only provides a credible certificate but also validates that the candidate can apply that knowledge to solve practical server administration problems under time constraints.

Linux.com: What Linux distribution do you prefer and why?

Yadkov: It depends on the purpose of the installation: Linux Mint for the desktop and CentOS for servers.

Linux.com: Are you currently working as a systems administrator? If so, what role does Linux play?

Yadkov: I’m working as an IT manager in a language school. Maintaining our Moodle installation and the underlying infrastructure is part of my responsibilities. We rely on Linux for almost everything related to Moodle: the web server, DB server, mail server, backup server, etc. We also rely solely on Linux for the software development process; it’s the OS of the developers and testers, the web server for the task and bug tracking system, etc.

Linux.com: Where do you see the Linux job market growing the most in the coming years?

Yadkov: Many organizations will switch to cloud services and infrastructure, and there will be a lot of demand for experts who can design and maintain complex infrastructure.

Linux.com: What advice would you give those considering certification for their preparation?

Yadkov: The preparation for the exams demands persistence and a lot of dedicated time for learning and practice. Looking back, I don’t think that cramming for a couple of days before the exam is a viable option for success.

Having a good plan is a huge step toward certification. I personally prefer to work backwards, imagining every step from the exam date back to today. This exercise helped me list all the required tasks, estimate the time needed, and plan my daily schedule accordingly.

Another key aspect for me was practicing. You should practice, practice, practice, and while practicing you shouldn’t only try to do things right but also examine how the system behaves when something is wrong. I found, at least for myself, that by making mistakes I was able to understand and even memorize things better, and thus be better prepared for the day I’d be working on production servers.
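To make that concrete, here is a minimal sketch of such a break-and-diagnose drill, assuming a throwaway test machine (never production) running Apache under systemd; the httpd service name, config path, and bogus directive are illustrative CentOS-style placeholders, not anything prescribed by the course:

```bash
#!/usr/bin/env bash
# Practice drill: break a service on purpose, then diagnose and recover it.

# 1. Confirm the service is healthy before breaking anything.
systemctl status httpd --no-pager

# 2. Introduce a deliberate fault: an invalid directive in the config.
echo "ThisDirectiveDoesNotExist on" | sudo tee -a /etc/httpd/conf/httpd.conf

# 3. Validate the configuration the way you would before a real restart.
sudo apachectl configtest          # should now report the bad directive

# 4. See how the system reports the failure.
sudo systemctl restart httpd || sudo journalctl -u httpd --since "5 minutes ago" --no-pager

# 5. Revert the fault and confirm recovery.
sudo sed -i '/ThisDirectiveDoesNotExist/d' /etc/httpd/conf/httpd.conf
sudo systemctl restart httpd && echo "service recovered"
```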

The positives from this journey into the Linux world are many and of enormous value to me and the people around me (employer, colleagues, and friends):

  • The exams for the certificate and the fee motivated me to keep going through the course (which I can’t say about the free edX course).

  • I can now explain some of the key Linux concepts.

  • I have a position in the emacs vs vim war — just joking. I know like 10 percent of the functionality, and it is awesome.

  • I gained confidence that when I don’t understand something, I can read the manual and grasp the key points.

  • I can set up and maintain basic server configurations and desktops (doing it faster and better with each instance).

 

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Read more:

Linux Foundation LFCS and LFCE: Pratik Tolia

Linux Foundation Certified Engineer: Gbenga “Christopher” Adigun

Linux Foundation Certified Engineer: Karthikeyan Ramaswamy

Linux Foundation Certified System Administrator: Muneeb Kalathil

Linux Foundation Certified System Administrator: Theary Sorn

Linux Foundation Certified Engineer: Ronni Jensen

On-Prem IT Infrastructure Endures, Talent Needed


Despite steady adoption of public cloud services, organizations continue to invest in their on-premises IT infrastructure and the people who run it, according to a new report from 451 Research.

The firm’s latest “Voice of the Enterprise: Datacenter Transformation” study found that organizations are maintaining healthy capacity in their on-premises data centers and have no plans to cut back on the staff assigned to data center and facility operations. Almost 60% of the nearly 700 IT decision makers surveyed by the firm said they have enough data center floor space and power capacity to last at least five years.

Even though many companies expect the total number of IT staffers to decline over the next year, most expect the number of employees dedicated to data center and facilities operations to stay the same or increase, according to 451 Research.

The reason for the continued data center investment, cited by 63% of those polled, was fairly generic: business growth. Christian Perry, research manager and lead analyst of the report, said analysts dove a little deeper. As it turns out, companies are finding that keeping workloads long term on public cloud services isn’t all that cost effective.

Regardless of the type of workload in the cloud – ERP, communications, or CRM, for example – or the size of the company, when an organization expands a workload by adding new licenses, seats, or functions, the cost over time winds up close to what it would cost to keep the workload on-premises, Perry said. Costs include opex and capex for IT infrastructure – servers, storage, and networking – as well as the facilities that contain it.

“It still is dirt cheap to go to the cloud, but to stay in the cloud, that’s a whole other story,” he told me in a phone interview.

While some companies manage their cloud costs well, unexpected growth, a massive new project or a new division coming online can make cloud costs unwieldy, Perry said.

Another factor that’s playing into the continued data center investment is the “cloudification” of on-premises IT infrastructure. Converged infrastructure has enabled companies to reach greater levels of agility, flexibility, and cost control, Perry said, adding that hyperconverged infrastructure boosts that trend.

Data center skills shortage

While organizations continue to invest in their on-premises IT infrastructure and facilities, they’re running into staffing challenges, 451 Research found. Twenty-nine percent face a skills shortage when trying to find qualified data center and facilities personnel, Perry said.

As companies are shifting away from traditional IT architectures to converged and hyperconverged infrastructure, demand for IT generalists has grown, he said. “Specialists are still critical in on-prem environments, but we’ve definitely seen the rise of the generalist…There’s a lot of training going on internally in organizations to bring their specialists to a generalist level.”

Of the 29% facing staffing challenges, a majority (60%) are focused on training existing staff to fill the gaps. Those attending the training tend to be server and storage administrators, 451 Research found. “There’s a certain sense of fear that they’re going to become siloed and potentially irrelevant,” Perry said. “At the same time, there’s a lot of excitement about these newer architectures and software-defined technologies.”

Companies cited a big skills gap in the areas of virtualization and containers, technologies companies view as transformative to their on-premises infrastructure, he said. They’re also key technologies to facilitate the continued enterprise focus on data center consolidation.

“The jump in cloud has had an impact on IT staffing overall,” Perry said. “A lot of cloud service providers have scooped up a ton of good IT talent. That’s not just Tier 1 cloud providers, but also Tier 2…They’re pulling away skilled IT staff and leaving gaps for on-prem.”

A separate 451 Research report that looked into enterprise server and converged infrastructure trends found that VM administration was the top skill enterprises have trouble finding. A third of organizations reported a networking skills gap.



Solus 3 Brings Maturity and Performance to Budgie | Linux.com


Back in 2016, the Solus developers announced they were switching their operating system over to a rolling release. Solus 3 marks the third iteration since that announcement and, in such a short time, the Solus platform has come a long way. For many, though, Solus 3 will be a first look at this particular take on the Linux operating system. With that in mind, I want to examine what Solus 3 offers that might entice the regular user away from their current operating system. You might be surprised when I say, “There’s plenty.”

This third release of Solus is an actual “release” and not a snapshot. What does that mean? The previous two releases of Solus were snapshots, but the project has since moved away from the regular snapshot model found in rolling releases. With a standard rolling release, a new snapshot is posted at least every few days; from that snapshot, an image can be created, so the difference between a fresh installation and the latest updates is never large. The Solus developers, however, have opted for a hybrid approach to the rolling release. According to the Solus 3 release announcement, this offers “feature rich releases with explicit goals and technology enabling, along with the benefits of a curated rolling release operating system.”

Of course, no average user really cares if an operating system is a rolling release or a hybrid. From that particular perspective, what is more important is how well the platform works, how easy it is to use, and what it offers out of the box.

Let’s take a look at those three points to see just how well Solus 3 could serve even a new-to-Linux user.

What Solus 3 offers out of the box

On many levels, this is the most important point for first-time users. Why? Because many of the Linux distributions available don’t meet a user’s minimum needs out of the box, without tinkering and adding extra packages. This, however, is an area where Solus 3 really shines. Once installed, the average user will have everything they need to get their work done — and then some.

First off, Solus 3 features the Budgie desktop (Figure 1). Anyone who has used a PC desktop since Windows XP will be instantly at home; the standard features abound.

Once users get beyond the desktop interface, they’ll find all the applications necessary to go about their days:

  • Firefox web browser (version 55.0.3)

  • LibreOffice office suite (version 5.4.0.3)

  • Thunderbird email client with Lightning calendar pre-installed (version 52.3.0)

  • Rhythmbox audio player (version 3.4.1)

  • GNOME MPV movie player (version 0.12)

  • GNOME Calendar (version 3.24.3)

  • GNOME Files file manager (version 3.24.2)

Do note, the above version numbers reflect a system update performed immediately after installation.

Solus 3 also includes a fairly straightforward Software Center tool — one that has a nifty trick up its sleeve. Unlike many Linux distributions, the Solus Software Center includes a Third Party section, so the user doesn’t have to install added repositories to get the likes of Android Studio, Google Chrome, Insync, Skype, Spotify, Viber, WPS Office Suite, and more. All you have to do is open the Software Center, click Third Party, and find the third-party software you want to install (Figure 2).

Beyond the desktop and the included software, Solus 3 offers the user a remarkably pain-free experience, right out of the box.

There are also a few small additions that go a long way toward making Solus a special platform. Take, for instance, the Night Light feature, a tool that reduces eye strain by cutting down the display’s blue light. From within the Night Light tool, you can even set a schedule to enable/disable the feature (Figure 3).

The only issue I can find with the included packages is the missing Samba integration in GNOME Files. Normally, it is possible to right-click a folder within the file manager and enable the sharing of that folder via Samba. Although Samba is pre-installed, there is no easy way to enable Samba sharing within the default file manager. For those who really need to share out directories with Samba, you’ll have to do it the old-school way … via the terminal.
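For the curious, here is roughly what that old-school way looks like; the share name, path, and username below are hypothetical placeholders, and the exact Samba service name varies by distribution:

```bash
# Define a share by appending a stanza to the Samba config
# (the [public] name and the path are placeholders; adjust to taste).
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'

[public]
   path = /home/youruser/Public
   read only = no
   browseable = yes
EOF

# Give your user a Samba password (kept separately from the login password).
sudo smbpasswd -a youruser

# Sanity-check the config, then restart Samba (the service is named
# smb on Red Hat-style systems and smbd on Debian-style systems).
testparm -s
sudo systemctl restart smb
```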

Solus 3 does make it fairly easy to connect to other shares on your network (by clicking Other Locations in Files and then browsing your local network).

How easy is it to use?

By now, you’ve probably drawn the conclusion that Solus 3 is a new-user dream come true. That conclusion would be spot on. The developers have done an amazing job of ensuring nothing could possibly trip up a new user. And by “nothing,” I do mean nothing. Solus 3 does exactly what a Linux distribution should do — it gets out of the way, so the user can focus on work or social/entertainment distraction. From installation of the operating system, to installation of software, to daily use … the Solus developers have done everything right. I cannot imagine a single user type stumbling over this take on Linux. Period. This is one Linux distribution with barely a single bump in the learning curve.

How well does Solus 3 work?

Considering how “young” Solus is, it is remarkably stable. During my testing phase, I only encountered one issue with the platform—installing the third-party Spotify client (NOTE: Other third-party software installed fine, so this is, most likely, a Spotify issue). Even with that hiccup, a second attempt at installing the Spotify client succeeded. That should tell you how issue-free Solus is. Outside of that (and the Samba issue), I am happy to report that Solus 3 “just works” and does so with grace and ease. To be honest, Solus 3 feels like a much more mature platform than a “3” release should.

Give Solus 3 a try

If you’re looking for a new Linux distribution that will make the transition from any other platform a no-brainer of a task, you cannot go wrong with Solus 3. This hybrid release distribution will make anyone feel right at home on the desktop, look great doing so, and ease away any headache you might have ever experienced with Linux.

Kudos to the Solus developers for releasing a gem of a distribution.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

5 Disaster Recovery Tips: Learning from Hurricanes


Hurricanes Irma and Harvey highlight the need for DR planning to ensure business continuity.

 

This has been an awful year for natural disasters, and yet we’re not even midway through a hurricane season that’s been particularly devastating. Hurricanes Irma and Harvey, and the flooding that ensued, have resulted in loss of life, extensive property damage, and crippled infrastructure.

Naturally, businesses have also been impacted. When it comes to applications, data, and data centers, this is a wake-up call. At the same time, these are the situations that motivate companies and individuals to introduce much-needed change. With this in mind, I’ll offer five tips any IT organization can use to become more resilient against natural disasters, no matter the characteristics of its systems and data centers. This can lead to better availability of critical data and tools when disaster strikes, continuity in serving customers, and peace of mind knowing that preparations have been made and work can continue as expected.

1. Keep your people safe

When a natural disaster is anticipated (if there is notice), IT staffers need to focus on personal and family safety issues. Having to work late to take one more backup off-site shouldn’t be part of the last-minute process. Simply put, no data is worth putting lives at risk. If the rest of these tips are followed, IT staff won’t have to scramble in the heavy push of preparation to tie up loose ends of what already should be a resilient IT strategy.

2. Follow the 3-2-1 rule

In my role, I’ve long advocated the 3-2-1 rule, and we need to keep reiterating it: keep three copies of important data, on two different media, with one of them off-site. Embrace this rule if you haven’t already. The 3-2-1 rule has two additional key benefits: it doesn’t require any specific technology, and it can address nearly any failure scenario.
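As an illustration only, a bare-bones nightly job covering copies two and three might look like the sketch below; the source path, second disk, and S3 bucket are hypothetical placeholders, and the rule itself doesn’t mandate any particular tools or destinations:

```bash
#!/usr/bin/env bash
# 3-2-1 sketch: the live data is copy 1; a second local medium is copy 2;
# an off-site (here, cloud) copy is copy 3. All names are placeholders.
set -euo pipefail

SRC=/srv/data                     # copy 1: the live data itself
SECOND_DISK=/mnt/backup           # copy 2: a different, locally attached medium
OFFSITE=s3://example-dr-bucket    # copy 3: off-site, via the AWS CLI

# Copy 2: mirror the data to the second disk.
rsync -a --delete "$SRC/" "$SECOND_DISK/data/"

# Copy 3: push the same data off-site (assumes `aws` is installed and configured).
aws s3 sync "$SRC" "$OFFSITE/data" --delete
```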

3. 10 miles may not be enough

My third tip pertains to the off-site recommendation above. Many organizations believe the off-site copy or disaster recovery facility should be at least 10 miles away. This may no longer be sufficient; the path and fallout of a hurricane can be wide-reaching. Moreover, you want to avoid having personnel spend unnecessary time in a car traveling to complete the IT work. Cloud technologies can provide a more efficient and safer solution. This can involve using disaster recovery as a service (DRaaS) from a service provider or simply putting backups in the cloud.

4. Test your DR plan

Ensure that when a disaster plan is created, there is a particular focus on anticipating and eliminating surprises. This should involve regular testing of backups to be certain they are completely recoverable, that the plan will function as expected, and that all data is where it needs to be (off-site, for example). The last thing you want during a disaster is to find that the plan hasn’t been completely implemented or run in months or, worse, to discover there are workloads that are not recoverable.
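One simple way to make that testing routine is a periodic restore drill: pull the latest backup into a scratch directory and prove it matches checksums recorded at backup time. A minimal sketch, with hypothetical paths and the assumption that the backup job writes a checksum manifest alongside the data:

```bash
#!/usr/bin/env bash
# Restore drill: a backup only counts if it can actually be restored.
set -euo pipefail

SCRATCH=$(mktemp -d /tmp/restore-drill.XXXXXX)

# 1. Restore the most recent backup into a scratch area.
rsync -a /mnt/backup/data/ "$SCRATCH/"

# 2. Verify the restored files against the manifest written at backup time
#    (assumed to contain paths relative to the data directory).
if (cd "$SCRATCH" && sha256sum --quiet --check /mnt/backup/data.sha256); then
    echo "restore drill passed"
else
    echo "restore drill FAILED: fix this before a real disaster finds it" >&2
fi

rm -rf "$SCRATCH"
```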

5. Communications planning

My final recommendation is to work backwards through all required systems, and with providers of all types, to ensure you don’t have risks you can’t fix. Pay close attention to geography in relation to your own facilities, as well as to country locations for data sovereignty considerations. This can apply to telecommunications providers, too. A critical component of responding to any disaster is the organization’s ability to communicate. Given what has happened in some locations in the path of Hurricane Irma, even cellular communication can be unreliable. Consider developing a plan to ensure interim communications if key business systems are down.

The recent flood and hurricane damage has been significant. The truth is, when it comes to the data, IT services, and more, there is a significant risk a business may never recover if it’s not adequately prepared. We live in a digitally transformed world and many businesses can’t operate without the availability of systems and data. These simple tips can bring about the resiliency companies need to effectively handle disasters, and prove their reliability to the customers they serve.

Rick Vanover is director of technical product marketing for Veeam Software.




Choosing a Cloud Provider: 8 Storage Considerations


Amazon Web Services, Google, and Azure dominate the cloud service provider space, but for some applications it may make sense to choose a smaller provider specializing in your app class and able to deliver a finer-tuned solution. No matter which cloud provider you choose, it pays to look closely at the wide variety of cloud storage services they offer to make sure they will meet your company’s requirements.

With the big cloud providers, there are two major classes of storage: local instance storage, offered with selected instances, and a selection of network storage options for permanent storage and sharing between instances.

As with any storage, performance is a factor in your decision-making process. There are many shared network storage alternatives, including storage tiers ranging from really hot to freezing cold; within the top tiers, there are further differences depending on the choice of replica count, as well as variations in the price of copying data to other spaces.

The very hot tier is moving to SSDs, and even here there are differences between NVMe and SATA SSDs, which cloud tenants typically see as different IOPS levels. For large instances and GPU-based instances, the faster choice is probably better, though this depends on your use case.

At the other extreme, cold and “freezing” storage, the choice is between disk and tape, which impacts data retrieval times: retrieval from tape can take as much as two hours, compared with just seconds from disk.
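To see how those tiers surface in practice, here is a sketch using the AWS CLI as one concrete example; the bucket and object names are hypothetical, and other providers expose equivalent tiers and retrieval mechanics:

```bash
# Hot tier: default STANDARD storage, readable immediately.
aws s3 cp report.db s3://example-bucket/hot/report.db

# Cooler tier: infrequent-access pricing for rarely read data.
aws s3 cp archive.tar s3://example-bucket/cool/archive.tar \
    --storage-class STANDARD_IA

# Freezing tier: archival storage; objects must be restored before reading,
# and retrieval can take hours rather than seconds.
aws s3 cp old-logs.tar s3://example-bucket/frozen/old-logs.tar \
    --storage-class DEEP_ARCHIVE
aws s3api restore-object --bucket example-bucket --key frozen/old-logs.tar \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'
```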

Data security and vendor reliability are two other key considerations when choosing a cloud provider that will store your enterprise data.



