
Microsoft, OpenAI Shoot for the Stars | Emerging Tech


Microsoft wants to empower its Azure cloud computing service with yet-to-exist artificial general intelligence (AGI) technologies, setting new goals for supercomputing.

Microsoft on Monday announced a US$1 billion investment through a partnership with OpenAI to build new AI technologies. The two companies hope to extend Microsoft Azure’s capabilities in large-scale AI systems.

Microsoft and OpenAI want to accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence. The resulting enhancements to Microsoft’s Azure platform will help developers build the next generation of AI applications.

The partnership was motivated in part by OpenAI’s pursuit of enormous computational power. Based on a recently released analysis, the amount of compute used in the largest AI training runs grew by more than 300,000 times from 2012 to 2018, with a 3.5-month doubling time, far exceeding the pace of Moore’s Law, according to OpenAI cofounder Greg Brockman.
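Those figures are roughly self-consistent, as a quick back-of-the-envelope check (a sketch, not from the article) shows: a 300,000-fold increase requires about 18 doublings, which at 3.5 months apiece spans a bit over five years of that 2012-2018 window.

```python
import math

growth = 300_000                # reported growth in largest-run training compute
doublings = math.log2(growth)   # ~18.2 doublings to reach 300,000x
months = doublings * 3.5        # ~63.7 months at a 3.5-month doubling time
print(round(doublings, 1), round(months / 12, 1))  # ~18.2 doublings over ~5.3 years
```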

“We chose Microsoft as our cloud partner because we’re excited about Azure’s supercomputing roadmap. We believe we can work with Microsoft to develop a hardware and software platform within Microsoft Azure which will scale to AGI,” he told TechNewsWorld.

“The partnership will allow OpenAI to significantly increase the amount of compute it uses for training neural networks,” he noted.

Microsoft and OpenAI also are very aligned in their values, Brockman said. Both firms believe the technology should be used to empower everyone, and be deployed in a trustworthy way that is safe and secure.

“OpenAI believes they can work with Microsoft to develop a hardware and software platform within Microsoft Azure which will scale to AGI,” a Microsoft spokesperson said in comments provided to TechNewsWorld by company rep Joel Gunderson.

What the Deal Delivers

Microsoft and OpenAI will collaborate on new Azure AI supercomputing technologies. OpenAI will port its services to run on Microsoft Azure.

OpenAI will use the Azure platform to create new AI technologies. OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners. It’s hoped that the result will deliver on the promise of artificial general intelligence.

Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies. OpenAI will enter into an exclusivity agreement with Microsoft to extend large-scale AI capabilities.

Both companies will focus on building a computational platform of unprecedented scale on the Azure cloud. They will train and run increasingly advanced AI models and develop hardware technologies that build on Microsoft’s supercomputing work.

The development teams will adhere to the companies’ shared principles concerning ethics and trust. This focus will create the foundation for advancements in AI to be implemented in a safe, secure and trustworthy way, and it is a critical reason the companies chose to partner.

AGI a Work in Progress

Innovative applications of deep neural networks coupled with increasing computational power have led to AI breakthroughs over the past decade. That progress occurred in areas such as vision, speech, language processing, translation, robotic control and even gaming, according to Microsoft.

Modern AI systems work well for the specific problems they have been trained to address. However, building systems that can tackle some of the biggest challenges facing the world today requires generalization and deep mastery of multiple AI technologies.

OpenAI and Microsoft’s vision is for artificial general intelligence to work with people to help solve currently intractable multidisciplinary problems, including global challenges such as climate change, personalized healthcare and education.

“This is truly going to help Microsoft. It has more technology in its marketplace to allow the rapid ascension of tools in the business workplace,” noted Chris Carter, CEO of Approyo.

Combining these two entities to support the growth that is needed is “an absolute game-changer,” he told TechNewsWorld.

Chasing Computing Dragons?

A larger neural network is a more capable neural network, according to Brockman. Making larger systems will allow the two companies to solve more difficult problems going forward.

“We plan to keep doing this until we reach AGI,” he said.

The resulting enhancements to the Azure platform will help developers build the next generation of AI applications.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said Sam Altman, CEO of OpenAI.

It must be deployed “safely and securely with its economic benefits widely distributed,” he added.

“AI is one of the most transformative technologies of our time,” noted Microsoft’s CEO, Satya Nadella, with the “potential to help solve many of our world’s most pressing challenges.”

Grabbing for Powerful Straws

The most likely results of this partnership are that AI technology will grow faster and be utilized in more enterprise and business spaces. This partnership will enable the rapid adoption of AI technologies in the workplace, according to Approyo’s Carter.

“This will allow businesses to flourish. Individual workers will boost their productivity. They will also be able to support themselves on a day-to-day basis with technology rather than to be hindered by it,” he explained.

The partnership could hinder development of cloud AI technologies, though, because Microsoft is prioritizing OpenAI over other emerging AI technologies that might be better, suggested Marty Puranik, CEO of Atlantic.Net.

If the AI technologies are kept proprietary or work best only on Microsoft Azure, it will lead to Azure platform lock-in, he said.

“Many developers may develop services that use this technology, thereby forcing all their customers to use Microsoft. Microsoft historically has a huge advantage when it comes to enterprise development work, so this could be seen as a way they are trying to cement the position they had in enterprise software into the cloud,” he told TechNewsWorld.

It boils down to Microsoft trying to leverage new technologies, like AI, to become a leader in the cloud, Puranik maintained, similar to when Microsoft would make minority investments and take seats on the boards of hot companies years ago.

Ultimately, from Microsoft’s point of view, it would be ideal to have extensions for OpenAI that either would be exclusive or work best on Microsoft’s platform, similar to the “embrace and extend” ideas once applied to APIs, said Puranik.

Win-Win for Both

Microsoft has been all about collaboration and open source since Satya Nadella took the reins. He recognizes that AI is the latest and greatest arms race, observed Rob Enderle, principal analyst at the Enderle Group.

“As a result, they are embracing OpenAI to increase the speed of development for their projects, largely with an IT focus,” he told TechNewsWorld.

Both partners in this deal can learn and benefit from this effort, which is collaborative by design. Participating allows not only earlier access to the result but also a deeper understanding of it, Enderle said.

A Large Promise to Fulfill

In promising to deliver on artificial general intelligence’s potential, the two companies are not dreaming small, noted Arle Lommel, senior analyst for CSA Research, but that dream may be a reach too far.

“They intend to solve something that nobody has solved yet and that we aren’t remotely close to solving today,” he told TechNewsWorld, “but beyond that, accomplishing that will mean ‘solving’ language as well.”

That means having computers really understand language and use it on par with humans. Despite press release claims about getting near-human quality, that goal is as far beyond present capabilities as a moon landing is beyond a Roman chariot, Lommel quipped.

“That said, I suspect they will get much further along with machine vision, categorization, diagnostics, etc.,” he said. “In other words, I expect this could result in improved versions of what AI already does well. But unless there is some fundamentally different secret sauce, I don’t expect that it will ‘solve’ language and human intelligence.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.






Where Linux Went in 2018 – and Where It’s Going | Community


For those who try to keep their finger on the Linux community’s pulse, 2018 was a surprisingly eventful year. Over the past 12 months, we’ve seen various projects in the Linux ecosystem make great strides, as well as suffer their share of stumbles.

All told, the year wrapped up leaving plenty to be optimistic about in the year to come, but there is much more on which we can only speculate. In the interest of offering the clearest lens for a peek into Linux in 2019, here’s a look back at the year gone by for all things Linux.

Ubuntu Sheds Unity but Sees Silver Lining in Cloud

The last ripples from 2017 into 2018 came from Ubuntu’s decision to phase out the Unity desktop and switch its flagship desktop environment to Gnome. Ubuntu first shipped Gnome by default with its October 2017 release, 17.10, but that was something of a trial run. With April’s 18.04, Ubuntu officially unveiled its first Long Term Support (LTS) release to feature Gnome 3.

With an LTS sporting Gnome and holding up to user testing, the countdown clock began on the eventual switch to the Wayland display server, intended to take over for the aging Xorg server. Think of display servers as the skeletal beams that a desktop is bolted to.

Ubuntu 17.10 tested Wayland waters, but although 18.04 shied away from Wayland, the fact that 18.04 seems to have Gnome under control means the Ubuntu flagship desktop developers can turn their attention to Wayland, hopefully catalyzing its evolution.

Many saw the end of Unity not so much as an admission of defeat in cementing Ubuntu’s own desktop vision, but as evidence of a pivot in Canonical’s focus to cloud computing and IoT. After months in the wild and the update to Ubuntu’s incremental patch, 18.04.1, it is clear by this point that the decision to abandon Unity did not so much as jostle the stability of Ubuntu’s release. In fact, 18.04 has proven exceptionally stable, polished and well-received.

Few are the distributions that can put out as robust and distinct a product as Ubuntu, while also maintaining their own desktop. The only one that might lay claim to this is Linux Mint, but its code base has far fewer deviations from Ubuntu than Ubuntu’s has from Debian. Put another way, Mint’s code base is similar enough to Ubuntu’s (Mint’s upstream) that it can afford to dedicate time and resources to in-house desktops.

Without its own desktop, Ubuntu doesn’t seem worse for wear, but as refined and dependable as ever, especially with the introduction of features like a minimal install option and restart-less kernel updates.

It will be hard to tell how the end of Unity ultimately will impact Ubuntu until its next LTS drops in April 2020 — but for now, Ubuntu fans can breathe a sigh of relief as the distribution continues to shine.

Linux Gamers Won’t Be Steamed at Valve Much Longer

Another major development in desktop Linux computing was Valve’s August announcement of Steam Play beta support for running Windows games on Linux. Valve evidently has been playing the long game (no pun intended) in backing work on the Windows compatibility layer Wine, as well as DXVK, which translates DirectX calls to the Vulkan graphics API, over the past couple of years.

This past summer, we saw these efforts coalesce. In a framework called “Proton,” Steam has bundled these two initiatives natively in the Steam Play client. This enables anyone running a Linux installation of Steam Play (who is enrolled in the beta test) to simply download and play a number of Windows games with no further configuration necessary.

A marked lack of access to top-tier games long has been a sticking point for Linux-curious Windows users considering a switch, so Steam’s ambitious embarkation on this project may prove to be the last encouragement this crowd needs to take the penguin plunge.

Steam has been exercising patience, as it has been maintaining a periodically updated list of the number and degree of Linux-compatible Windows games in its library of titles. It hasn’t been afraid to acknowledge that a number of Windows games still need work, another sign of sober expectations on the part of Valve.

Taken together, these steps suggest that Steam is in this for the long haul, rather than throwing together a quick fix to increase revenue from Linux-bound customers. If that weren’t proof enough, Steam even has gone so far as to post the code for Proton on GitHub, which is as good a sign as any that it is invested in the Linux community.

The entire undertaking holds promise to steadily improve the Linux desktop experience as more games reach mature compatibility, and Proton slowly crawls out of beta.

Red Hat Hangs Its Hat on IBM’s Rack

Although the Linux desktop landscape saw modest but undeniable progress, there was much more at play in the enterprise Linux arena.

Perhaps the single biggest Linux headline this year was IBM’s acquisition of Red Hat. IBM and Red Hat have enjoyed a long and fruitful partnership, and IBM’s shrewd tactic in competing with Microsoft more than a decade ago played the leading role in Red Hat’s rise in the first place.

Red Hat popularized, if not pioneered, the practice of selling support and tailored configuration as an open source business model. Fatefully for Red Hat, IBM was the big ticket customer that supercharged its revenue stream and confirmed the profitability of premium support. IBM minted its alliance with Red Hat because it wanted to compete with Microsoft in the server market without having to license an expensive operating system.

In some ways, IBM’s outright purchase of Red Hat may have been inevitable. The two have grown symbiotically for so long that subsuming Red Hat into IBM likely was the only way to squeeze more efficiency and return on investment out of the relationship.

You could even liken it to a couple who’ve been together for years finally announcing their engagement. Whatever else Red Hat’s purchase signifies, it legitimates Linux as an enterprise powerhouse, and lends credence to open source developers who long have touted the profitability of their work.

Amid all the deserved fanfare surrounding this betrothal, little attention has been paid to the reverberations it will send through the bedrock of the entire Linux space. Red Hat spearheads development of systemd, a replacement for the System V Linux init process that already has seen significant adoption among Linux distributions. This is no meager contribution, as the init system is the single most central component of the operating system after the kernel, and it dictates how the OS finishes booting.

Thus, the question on the minds of those who are giving this matter serious consideration is this: How will entrusting a (now) corporate-owned company to build the init process implemented in the vast majority of Linux distributions impact the course of Linux’s development?

Systemd of a Down

This leads perfectly into the next big story from the past year, because it demonstrates both the weight of the responsibility bestowed upon Red Hat in writing an industry standard init system, and the potential for harm, should this responsibility not be approached with proper humility and care.

Recently, a major bug affecting systemd was discovered. It allowed a user with a UID higher than 2147483647 (the maximum value of a signed 32-bit integer) to execute arbitrary “systemctl” commands without authenticating, granting what amounted to full root access to that UID.

The bug in question isn’t in systemd per se: it lies in polkit, a program that systemd implicitly trusts. Because implicit trust is itself an unwise software development practice, to say the least, it amounts to a bug in systemd, in some ways.
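The boundary involved can be illustrated with a minimal sketch (a hypothetical helper, not polkit’s actual code): Linux UIDs are 32-bit unsigned values, so perfectly valid UIDs can exceed what a signed 32-bit integer holds, and that was the mishandled case.

```python
INT32_MAX = 2**31 - 1  # 2147483647, the largest signed 32-bit value


def uid_overflows_int32(uid: int) -> bool:
    """True if a kernel-valid UID exceeds the signed 32-bit range.

    Hypothetical helper for illustration; the reported bug caused
    such UIDs to be mishandled during authorization checks.
    """
    return uid > INT32_MAX


# UIDs up to 4294967295 (unsigned 32-bit) are legal on Linux:
print(uid_overflows_int32(4_000_000_000))  # True: this UID hits the reported case
print(uid_overflows_int32(1000))           # False: ordinary users are unaffected
```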

When systemd first took hold in the Linux biome, there was more than a little griping in the community. The central issue was that systemd contradicted the Unix philosophy by constructing and relying upon such a monolithic program (more so than init intrinsically is).

To give a sense of just how sprawling systemd has become, it has swelled beyond the bounds of init’s reasonable purview to encompass DNS resolution and regular task scheduling, relegating such venerable Unix stalwarts as /etc/resolv.conf and cron to (eventual) obsolescence. It seems that these Unix philosophers may have had a compelling, but ultimately unheeded, point.
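As an example of that expanded purview, a nightly cron entry such as `0 3 * * * /usr/local/bin/backup.sh` becomes a pair of unit files under systemd (a minimal sketch; the unit names and script path are hypothetical):

```ini
# /etc/systemd/system/backup.service -- hypothetical service unit
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer -- companion timer unit
[Unit]
Description=Run backup.service daily at 03:00

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

The timer is activated with `systemctl enable --now backup.timer`, after which systemd, not cron, owns the schedule.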

Microsoft Opens the Open Source Patent Floodgates

IBM was not the only one to stake a claim to Linux: IBM’s perennial foe, Microsoft, made Linux maneuverings of its own in 2018. In October, Microsoft joined the Open Invention Network (OIN), making more than 60,000 of its patents freely available to the open source community.

The OIN is a coalition of partners committed to insulating Linux and Linux-based projects from patent lawsuits. To that end, all members not only are obligated to openly offer patented software for public use, but also are allowed to freely license patents from one another.

Aside from the benefits this obviously confers on Microsoft, especially with companies like Google for fellow members, it puts another power player squarely in Linux’s corner. This may be the final sign of good faith the Linux community needed that Microsoft sincerely has embraced Linux and, moreover, that it has substantial plans for Linux-related projects in its future plans.

Open Source and Open Silicon?

There is one more notable milestone on the desktop Linux front — notable for what it portends for Linux, and computing on the whole. System76, the foremost Linux-focused hardware manufacturer in the U.S. (and maybe the world), has announced a line of high-end Linux desktops featuring open hardware specifications.

The Thelio line boasts an elegant, premium look that is sure to lure more than the privacy-conscious. Open hardware is the hardware analog to open source software; while it has long been an aim of security-conscious and freedom-loving tech denizens, it subsisted as little more than a pipe dream until recently.

The quest for open hardware arguably was accelerated by the Snowden disclosures, and the extent to which they revealed that hardware OEMs may not entirely deserve users’ trust.

Purism was the first consumer-oriented company to take up the charge but, as it will admit, its product is a work in progress, and not as open as the company and its privacy crusader allies envision.

Bringing more open hardware options to consumers, and thereby injecting competition into an otherwise sparse field, is an unalloyed good.

What Next?

While reviews of the year’s events certainly are interesting, if just for a sense of scope, retrospectives aren’t particularly useful unless they are applied. With all of these 2018 milestones in mind, what trajectory do they suggest for 2019?

Last year easily was one of the best years for the Linux desktop sphere since I started using Linux (which admittedly wasn’t very long ago). Alongside big news from Steam and a reassuringly strong LTS release from Ubuntu came piecemeal strides by distros like Elementary and Solus in solidifying their work and their reputations as just-works, mass-appeal desktop systems.

Along with the production of first-class hardware like System76’s Thelio PCs, and even Manjaro’s Bladebook, desktop Linux has never looked better.

I won’t indulge in the cliché and predict that 2019 will be “the year of the Linux desktop,” but I foresee it building on the gains from 2018 to make even sleeker, more modern, and more usable desktops with burgeoning appeal outside the Linux niche. 2018 saw some high-profile publications giving Linux an open mind and a positive reception, so it wouldn’t be a far-fetched scenario for Linux to see an uptick in first-time users.

The enterprise realm is set to be much more tumultuous, as IBM and Microsoft have planted their respective flags in different corners of the Linux world. This could precipitate a wave of innovation in Linux as established corporate powers poise themselves for cloud supremacy.

On the other hand, this cloud computing contest could lead development of Linux and its satellite projects down a path that is increasingly dissonant — not just with Unix philosophy, but with the free software or open source ethos as well.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.


Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.






Q4OS: A Diamond in the Rough Gets Some Polish | Reviews


By Jack M. Germain

Dec 20, 2018 11:19 AM PT


Sometimes working with Linux distros is much like rustling through an old jewelry drawer. Every now and then, you find a diamond hidden among the rhinestones. That is the case with Q4OS.

I took a detailed first look at this new distro in February 2015, primarily to assess the Trinity desktop (TDE). That was a version 1 beta release. Still, Trinity showed some potential.

I have used it on numerous old and new computers, mostly because of its stability and ease of use. Every few upgrades I check out its progress. Key to this is watching the improvements and additional functionality of Trinity.

Q4OS is a lightweight Linux distro that offers some worthwhile alternatives to more established distros. Do not misunderstand what “lightweight” in Linux means, however.

Q4OS is designed with aging computer hardware in mind, but it does not ignore more modern boxes.

Its main claim to fame is the still-developing Trinity desktop project. Trinity was forked in 2008 from the last official release of the K Desktop Environment’s third series (KDE 3), version 3.5.10.


Q4OS simplified KDE 3 design

Q4OS has a simplified KDE 3 design, with useful desktop applets, as an alternative to the Trinity desktop. Other desktop options also are built in.



The Germany-based developers recently issued a significant update to the Q4OS snapshot of the distribution’s Testing branch, code-named “Centaurus.” Q4OS Centaurus 3.4 is based on the current Debian “Buster” and Trinity desktop (TDE) 14.0.6 development branches.

This distro is fast and runs extremely well on low-powered aging computers, and it delivers superb performance on newer machines. Its design pushes classic style with a modern user interface in a new direction. Plus, it is well suited to virtualization and cloud use.

From Rough to Polished

When I first started to monitor the Trinity desktop, I thought it had the potential for becoming a new attention-getter among up-and-coming Linux distros. The primary distro developer that implemented TDE was, and still is, Q4OS. The distro primarily is built around TDE as the default desktop.

It is easy to swap out TDE for other, more popular desktops without losing an easy return path to TDE. Supported desktops include LXQt, LXDE, Xfce, Cinnamon, KDE Plasma, MATE and GNOME. Installing a different desktop does not remove the TDE desktop. Instead, you can select between the alternative you installed and the TDE desktop at the login screen.

To install a different desktop environment, go to the Desktop Profiler tool and click the Desktop environments drop-down in the upper right corner of the window. A new window appears, where you can select your desktop of choice from the drop-down. Once back at the main Profiler Window, select which type of desktop profile you want, and then click Install.

These choices give both business and individual users lots of options. One of the big values in using Q4OS Linux is the add-on commercial support for customizing the distro to meet specific user needs. The names of the developers are not publicized on the website.

However, Q4OS clearly is intended to be more than a community-supported general purpose Linux distro. The website also invites businesses to make use of Q4OS.org’s commercial support and software customization services.

What’s Inside

Q4OS is designed to offer a classic-style user interface (Trinity) or other alternatives with simple accessories. The distro provides stable APIs for complex third-party applications, such as Google Chrome, VirtualBox and development tools. The system also is ideal for virtual cloud environments, due to its very low hardware requirements.

One of the most important changes in this latest release is the switch to the Calamares installer. Calamares offers nice new installation features. For example, it offers optional full encryption of the target system, as well as easy disk drive partitioning.

Another important change is a move to the new Trinity 14.0.6 development version. All dependencies from the current stable Q4OS Scorpion version have been removed, making Centaurus fully independent, with its own repositories and dependencies.

Secure Boot support has been improved too. This is very handy if you install Q4OS on newer hardware hosting Microsoft Windows.

The Calamares installer detects if Secure Boot is active and adjusts the target system accordingly. If Secure Boot is switched off in the firmware, no Secure Boot files are installed.

Q4OS Centaurus offers the bleeding edge of Linux computing. It will be in development until Debian Buster becomes stable. Centaurus will be supported at least five years from the official release date.

The minimal hardware requirements are ideal for older hardware. The Trinity desktop needs at least a 300-MHz CPU with 128 MB RAM and 3 GB hard disk storage. Most of the other alternative desktops are lightweight and run with ease under the minimum resource requirements. The KDE Plasma desktop — and perhaps the Cinnamon desktop — thrive with at least a 1-GHz CPU, plus 1 GB RAM and 5 GB hard disk storage.

All About Trinity

The TDE project began as a continuation of the K Desktop Environment (KDE) version 3 after the Kubuntu developers switched to KDE Plasma 4. The name “Trinity” reflects that heritage. It means “three,” and TDE was a continuation of KDE 3.

The Trinity desktop design presents the simplified look of KDE applications while eliminating the layers of customization associated with KDE’s Activities and virtual desktop navigation. It displays the Bourbon start menu and taskbar.


Q4OS Trinity environment

Q4OS’s Trinity environment has a simplified desktop with bottom bar, classic menu options, and the ability to add/remove application icons on the desktop.



Timothy Pearson founded the TDE project and continues to lead it. He is an experienced software developer who was the KDE 3.x coordinator of previous Kubuntu releases.

TDE is both flexible and highly customizable. It has a pleasant visual appeal. Its desktop effects are compatible with older hardware. Trinity fills the gap left open with the other lightweight desktop options, which offer little in the way of desktop visual effects.

The field of new alternative desktop environments has created a clutter that may have blunted more interest in TDE. For instance, choices such as Pantheon, Enlightenment, Budgie and Awesome offer unique lightweight choices. Still, Q4OS levels that playing field by letting you use your desktop choice without undermining the unique system tools and customization opportunities the distro provides.

You will not find the Trinity desktop shipping as an option with most Linux distros. Those that use Trinity include Devuan, Sparky Linux, Exe GNU/Linux, ALT Linux, PCLinuxOS, Slax and Ubuntu Nightly.

TDE’s growth with Q4OS makes the combination a viable alternative to meet individual and small business computing needs. The TDE 14 series has been in development for more than two years. This extended development period has allowed the creation of a better and more stable feature-rich desktop environment than found in previous TDE releases.

Using It

Whether you adopt Q4OS to replace a Microsoft Windows experience or another Linux distribution, you will not have much of a learning curve. Out of the box, this distro works well with the default configurations.

Its simplified interface is intuitive. Whether you are a holdover from Windows XP or Windows 7, or even a disgruntled Windows 10 refugee, Q4OS offers an inviting look and feel.

The basic collection of software barely gives you enough applications to get started. You will not find any bloat.

Installed titles include Google Chrome, Konqueror, KWrite text editor and a few system tools. From there, what you want to use is easily available through the software center and the Synaptic Package Manager (after you install it).

The Welcome screen makes it very easy to start setting up the desktop with just a few clicks. It is a good starting point. From that panel, you can add packages conveniently and quick start some of the unique features.

The Desktop Profiler lets you select which desktop environment to use. It also lets you select among a full-featured desktop, a basic desktop or a minimal desktop.

Install Applications installs the Synaptic Package Manager. Install Proprietary Codecs installs all the necessary media codecs for playing audio and video.

Turn On Desktop Effects makes it easy to activate more eye candy without having to wade through more detailed Control Panel options.

Switch to Kickoff Start Menu switches from the default Bourbon menu to either the Classic or Kickoff style. It is easy to try each one. Set Autologin lets you bypass the password prompt when booting into your session.


Q4OS desktop

A nice touch is the variety of background images and the right-click menu anywhere on the desktop.



Bottom Line

Q4OS has a focus on security, reliability, long-term stability and conservative integration of verified new features. This operating system is a proven performer for speed and very low hardware requirements. That performance is optimized for both new and very old hardware. For small business owners and high-tech minded home office workers, Q4OS is well suited for virtualization and cloud computing.

One of the hallmarks of this distro is to be a suitable powerhouse platform for legacy hardware. So the developers continue to resist a trend among Linux devs to drop support for old 32-bit computers. The 32-bit versions work with or without the PAE memory extension technology.

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me, and I’ll consider them for a future Linux Picks and Pans column.

And use the Reader Comments feature below to provide your input!








$34B Red Hat Acquisition Is a Bolt Out of Big Blue | Deals


The cloud computing landscape may look much different to enterprise users following the announcement earlier this week of IBM’s agreement to acquire Red Hat.

IBM plans to purchase Red Hat, a major provider of open source cloud software, for US$34 billion. Under the terms of the deal, IBM will acquire all of the issued and outstanding common shares of Red Hat for $190 per share in cash, representing a total enterprise value of approximately $34 billion.

Once the acquisition is finalized, Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, and go-to-market strategy, plus its unique development culture.

Red Hat president and CEO Jim Whitehurst will continue in his leadership role, as will the other members of Red Hat’s current management team. Whitehurst also will join IBM’s senior management team, reporting to CEO Ginni Rometty. IBM intends to maintain Red Hat’s headquarters, facilities, brands and practices.

Following the acquisition, IBM will remain committed to Red Hat’s open governance, open source contributions, and participation in the open source community and development model.

IBM also will foster Red Hat’s widespread developer ecosystem. In addition, both companies will remain committed to the continued freedom of open source via such efforts as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network.

The acquisition was a smart business move for both IBM and Red Hat, said Charles King, principal analyst at Pund-IT.

“It seems possible or likely that other vendors would be interested in purchasing Red Hat,” he told the E-Commerce Times. “By making a deal happen, IBM is bringing in-house a raft of technologies, solutions and assets that are both familiar and highly complementary to its own solutions.”

Partnerships and Financial Oversight

Both IBM and Red Hat will continue to build and enhance Red Hat partnerships. These include the IBM Cloud and other major cloud providers such as Amazon Web Services, Microsoft Azure, Google Cloud and Alibaba. At the same time, Red Hat will benefit from IBM’s hybrid cloud and enterprise IT scale in helping expand its open source technology portfolio to businesses globally.

Partnerships between the two companies span 20 years. IBM served as an early supporter of Linux, collaborating with Red Hat to help develop and grow enterprise-grade Linux and more recently to bring enterprise Kubernetes and hybrid cloud solutions to customers.

These innovations have become core technologies within IBM’s $19 billion hybrid cloud business. Between them, IBM and Red Hat have contributed more to the open source community than any other organization, the companies noted.

“For Red Hat, IBM is an ideal partner to help the company scale its business to the next level. Really, no other vendor comes close to having IBM’s reach into and credibility among global enterprises,” said King.

IBM intends to finance the transaction through a combination of cash and debt, and to close it in the latter half of next year. The acquisition has been approved by the boards of directors of both IBM and Red Hat.

The deal is subject to Red Hat shareholder approval. It also is subject to regulatory approvals and other customary closing conditions.

IBM plans to suspend its share repurchase program in 2020 and 2021. The company expects to accelerate its revenue growth, gross margin and free cash flow within 12 months of closing.

Moving Forward

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said IBM’s Rometty.

Most companies have progressed only 20 percent along their cloud journey, renting compute power to cut costs, she said. The next chapter of cloud usage, the remaining 80 percent, involves unlocking real business value and driving growth.

“It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales,” Rometty pointed out.

Eighty percent of business workloads have yet to move to the cloud, according to IBM. Instead, they are held back by the proprietary nature of today’s cloud market. This prevents portability of data and applications across multiple clouds, data security in a multicloud environment, and consistent cloud management.

IBM and Red Hat plan to position the combined company to address this issue and accelerate hybrid multicloud adoption. The post-acquisition business will focus on helping clients create cloud-native business applications faster.

That will result in driving greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management. IBM and the absorbed Red Hat division will draw on their shared leadership in key technologies, such as Linux, containers, Kubernetes, multicloud management and automation.

Business Imperative

Red Hat/IBM is the second-largest computer software deal ever recorded globally, according to Mergermarket data. In terms of computer software mergers and acquisitions in the U.S. alone, the sector already has hit a record high value of $138.3 billion this year, surpassing all previous full years on record.

IBM/Red Hat accounts for nearly a quarter of total U.S. software deal value in the year to date. Red Hat is IBM’s largest transaction ever.

“IBM has been in need for some time of catching up with other tech giants, such as Amazon and Microsoft, in making a sizable investment like this in the cloud,” noted Elizabeth Lim, senior analyst at Mergermarket.

“It makes sense that IBM would pay such a large amount for a company like Red Hat, to try to outbid any potential competition,” she told the E-Commerce Times.

The deal with Red Hat marks a transition for the company toward hybrid cloud computing, after years of seeking growth with mixed results. For example, IBM made big bets on its artificial intelligence system Watson, but its traditional IT business has shrunk, Lim said.

“It is clear that CEO Ginni Rometty intends, with this deal, to try to propel IBM back into the ranks of the industry’s top players after falling behind in recent years, and that the company also felt the need to acquire outside tech instead of spending years trying to develop it in-house,” she explained.

The question now is how successfully IBM will integrate Red Hat, said Lim.

Smart Business

The acquisition comes as a surprise, but it is a smart move that makes a lot of sense, said Tim Beerman, CTO of Ensono.

IBM has been a big supporter of open source and the Linux operating system, so Red Hat’s open source software portfolio, supported by value-added “paid” solutions, is the perfect investment, he told the E-Commerce Times.

“It is a big win for IBM, Red Hat and their customers. IBM gets to modernize its software services by adopting Red Hat’s technology,” Beerman noted.

“Red Hat gains IBM’s financial backing and the ability to scale its capabilities and offer a hybrid IT approach, and its customers receive the ability to go to market faster with the assurance their providers have the investment they need to excel in a hypercompetitive market,” he explained.

This acquisition reinforces the concept that open source tools are part of the answer to hybrid cloud solutions, added Beerman. IBM’s investment will allow the companies to increase their security profiles in open source systems.

Over the years, IBM's technology portfolio, particularly on the software side, has dried up or been sold off, according to Todd Matters, chief architect at RackWare. IBM needs some of its own technology in its portfolio, so the Red Hat acquisition makes a lot of sense in those terms.

“Red Hat brings a long list of very good software products. Linux — and Red Hat in particular — has been able to purvey to the enterprise very successfully, and that is the sort of thing that IBM needs for its typical customer portfolio,” Matters told the E-Commerce Times.

IBM had little choice but to acquire Red Hat, observed Craig Rosenberg, chief analyst at research and advisory firm Topo.

The deal is a “huge move for IBM and the industry,” he told the E-Commerce Times.

“In the multicloud market where AWS, Google and Microsoft have a clear head start, IBM had to make a move or risk being left behind. By acquiring Red Hat — and more specifically OpenShift — IBM becomes a major player, with a compelling developer-centric, open source offering and business model,” Rosenberg explained.

Deal Ramifications

With the Red Hat acquisition, IBM will get the industry's premier enterprise Linux distro and its most dynamic container platform, along with myriad other valuable assets, noted King. For Red Hat, the acquisition cements an alliance with one of its oldest strategic partners.

“IBM has also been among the industry’s staunchest and most generous supporters of open source projects and initiatives. Frankly, it is hard to think of similar deals that would have been as beneficial for both IBM and Red Hat,” said Pund-IT’s King.

That rosy view is not shared by some other onlookers, however.

IBM has committed to pay a huge price for the agile growth company, but it is far from a sure bet that the deal will transform IBM into a nimbler player, according to Jay Srivatsa, CEO of Future Wealth.

“It paves the way for Amazon, Microsoft and Google to get stronger. IBM is counting on open source to cement the company's credibility as a cloud player, but the train has left the station,” Srivatsa told the E-Commerce Times.

“The risk of Red Hat simply becoming as irrelevant as IBM has in the cloud computing space is greater than the probability of IBM/Red Hat becoming a leading player in this space,” he added.

One big stumbling block, according to Pete Sena, CEO of Digital Surgeons, is the risky business of adequately integrating Red Hat's culture. IBM has not matched Red Hat's stewardship of open source.

“If IBM does not integrate the cultures effectively, Red Hat employees may want to take their money and run,” Sena told the E-Commerce Times.

However, if IBM can preserve Red Hat's proven open source approach, the potential upside is substantial, he noted.

“If you are a salesperson at either company, once this integration is rolled up together, then you have the ability to sell across various business units. The business implications point to IBM and Red Hat now having a ton of connected offerings,” Sena said.

Cloud Competition Impacted

Red Hat’s OpenShift container platform is being used or supported by virtually every major cloud vendor, noted King, and it’s likely those partnerships will persist.

“In fact, IBM emphasized that the deal would not disrupt any Red Hat customers,” he said, “but it is likely that the acquisition could spur interest in other container technologies by cloud companies.”

At the end of the day, though, mass defections are unlikely. It behooves service providers to support the technologies their customers prefer. For hybrid cloud customers, OpenShift is at or near the top of that list, according to King.

Because Red Hat will maintain its independence through the early part of the transition, things likely will remain relatively the same with respect to the e-commerce space, at least in the short term, suggested Jonathan Poston, director of technical SEO at Tombras Group.

“My guess is that IBM’s motive in the first place was less about controlling market supply and raising prices by buying out smaller, more competitive alternatives,” he told the E-Commerce Times, “and mostly about injecting vigor into a product inventory to extend the average life cycle through a classic strategic innovation acquisitions approach. An altruistic perspective, I know — but again, at least for the short-term, I suspect this will be the case.”

Open Source Reactionaries

The unexpected announcement will no doubt produce some objections from the ranks of Red Hat workers. However, open source today is more commercial and institutionalized than it was even five years ago, so major turmoil over the business decision is unlikely.

“Overall, I do not expect the deal to have any significant impact on open source culturally or as a practice,” said King. “IBM is too experienced and invested in open source to allow that to happen.”

However, the deal could spur interest in Red Hat’s competitors, like Suse and Canonical, as well as alternative container solutions, he suggested, and even might lead to other acquisitions in those areas.



Shuttleworth on Ubuntu 18.04: Multicloud Is the New Normal | Software


By Jack M. Germain

Apr 29, 2018 5:00 AM PT

Canonical last week released the Ubuntu 18.04 LTS platform for desktop, server, cloud and Internet of Things use. Its debut followed a two-year development phase that led to innovations in cloud solutions for enterprises, as well as smoother integrations with private and public cloud services, and new tools for container and virtual machine operations.

The latest release drives new efficiencies in computing and focuses on the big surge in artificial intelligence and machine learning, said Canonical CEO Mark Shuttleworth in a global conference call.

Ubuntu has been a platform for innovation over the last decade, he noted. The latest release reflects that innovation and comes on the heels of extraordinary enterprise adoption on the public cloud.

The IT industry has undergone some fundamental shifts since the last Ubuntu upgrade, with digital disruption and containerization changing the way organizations think about next-generation infrastructures. Canonical is at the forefront of this transformation, providing the platform for enabling change across the public and private cloud ecosystem, desktop and containers, Shuttleworth said.

“Multicloud operations are the new normal,” he remarked. “Boot time and performance-optimized images of Ubuntu 18.04 LTS on every major public cloud make it the fastest and most-efficient OS for cloud computing, especially for storage and compute-intensive tasks like machine learning,” he added.

Ubuntu 18.04 comes as a unified computing platform. Having an identical platform from workstation to edge and cloud accelerates global deployments and operations. Ubuntu 18.04 LTS features the GNOME desktop by default; KDE, MATE and Budgie desktop environments also are available.

Diversified Features

The latest technologies under the Ubuntu 18.04 hood are focused on real-time optimizations and an expanded Snapcraft ecosystem to replace traditional software delivery via package management tools.

For instance, the biggest innovations in Ubuntu 18.04 are related to enhancements to cloud computing, Kubernetes integration, and Ubuntu as an IoT control platform. Features that make the new Ubuntu a platform for artificial intelligence and machine learning also are prominent.

The Canonical distribution of Kubernetes (CDK) runs on public clouds, VMware, OpenStack and bare metal. It delivers the latest upstream version, currently Kubernetes 1.10. It also supports upgrades to future versions of Kubernetes, expansion of the Kubernetes cluster on demand, and integration with optional components for storage, networking and monitoring.
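As a sketch of what deploying CDK looks like in practice, the bundle can be stood up with Juju, Canonical's model-driven operations tool. The commands below follow Canonical's documentation of that era; the choice of AWS as the target cloud is illustrative, and the exact bundle and unit names should be treated as assumptions rather than authoritative.

```shell
# Sketch: standing up the Canonical Distribution of Kubernetes with Juju
# (assumes snapd is available and cloud credentials are already configured)
sudo snap install juju --classic

# Bootstrap a Juju controller on a public cloud, e.g. AWS
juju bootstrap aws

# Deploy the CDK bundle; Juju provisions machines and wires up the charms
juju deploy canonical-kubernetes

# Watch the cluster converge, then fetch the kubeconfig for kubectl
juju status
juju scp kubernetes-master/0:config ~/.kube/config
kubectl get nodes
```

The same bundle can target VMware, OpenStack or bare metal (via MAAS) simply by bootstrapping the controller against a different cloud.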

As a platform for AI and ML, CDK supports GPU acceleration of workloads using the Nvidia DevicePlugin. Further, complex GPGPU workloads like Kubeflow work on CDK. That performance reflects joint efforts with Google to accelerate ML in the enterprise, providing a portable way to develop and deploy ML applications at scale. Applications built and tested with Kubeflow and CDK are perfectly transportable to Google Cloud, according to Shuttleworth.

Developers can use the new Ubuntu to create applications on their workstations, test them on private bare-metal Kubernetes with CDK, and run them across vast data sets on Google’s GKE, said Stephan Fabel, director of product management at Canonical. The resulting models and inference engines can be delivered to Ubuntu devices at the edge of the network, creating an ideal pipeline for machine learning from the workstation to rack, to cloud and device.

Snappy Improvements

The latest Ubuntu release allows desktop users to receive rapid delivery of the latest applications updates. Besides having access to typical desktop applications, software devs and enterprise IT teams can benefit from the acceleration of snaps, deployed across the desktop to the cloud.

Snaps have become a popular way to get apps on Linux. More than 3,000 snaps have been published, and millions have been installed, including official releases from Spotify, Skype, Slack and Firefox.

Snaps are fully integrated into Ubuntu GNOME 18.04 LTS and KDE Neon. Publishers deliver updates directly, and security is maintained with enhanced kernel isolation and system service mediation.

Snaps work on desktops, devices and cloud virtual machines, as well as bare-metal servers, allowing a consistent delivery mechanism for applications and frameworks.
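For readers unfamiliar with the workflow, getting and maintaining snaps takes only a handful of commands. A minimal sketch, assuming the `snapd` service is present (it ships by default on Ubuntu 18.04):

```shell
# Install a strictly confined snap from the store
sudo snap install spotify

# Some desktop apps use classic confinement, which must be opted into
sudo snap install slack --classic

# Snaps update automatically in the background, but a refresh can be forced
sudo snap refresh

# List installed snaps, their versions, and their publishers
snap list
```

Because publishers push updates directly to the store, the same commands deliver current releases on any snap-enabled distro, not just Ubuntu.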

Workstations, Cloud and IoT

Nvidia GPGPU hardware acceleration is integrated in Ubuntu 18.04 LTS cloud images and Canonical’s OpenStack and Kubernetes distributions for on-premises bare metal operations. Ubuntu 18.04 supports Kubeflow and other ML and AI workflows.

Kubeflow, the Google approach to TensorFlow on Kubernetes, is integrated into Canonical Kubernetes along with a range of CI/CD tools, and aligned with Google GKE for on-premises and on-cloud AI development.

“Having an OS that is tuned for advanced workloads such as AI and ML is critical to a high-velocity team,” said David Aronchick, product manager for Cloud AI at Google. “With the release of Ubuntu 18.04 LTS and Canonical’s collaborations to the Kubeflow project, Canonical has provided both a familiar and highly performant operating system that works everywhere.”

Software engineers and data scientists can use tools they already know, such as Ubuntu, Kubernetes and Kubeflow, and greatly accelerate their ability to deliver value for their customers, whether on-premises or in the cloud, he added.

Multiple Cloud Focus

Canonical has seen significant adoption of Ubuntu in the cloud, apparently because it offers an alternative, said Canonical's Fabel.

Typically, customers ask Canonical to deploy OpenStack and Kubernetes together. That pattern is emerging as a common operational framework, he said. “Our focus is delivering Kubernetes across multiple clouds. We do that in alignment with the Microsoft Azure service.”

Better Economics

Economically, Canonical sees Kubernetes as a commodity, so the company built it into Ubuntu’s support package for the enterprise. It is not an extra, according to Fabel.

“That lines up perfectly with the business model we see the public clouds adopting, where Kubernetes is a free service on top of the VM that you are paying for,” he said.

The plan is not to offer overly complex pricing based on old-school economic models, Fabel added, as that is not what developers really want.

“Our focus is on the most effective delivery of the new commodity infrastructure,” he noted.

Private Cloud Alternative to VMware

Canonical OpenStack delivers private cloud with significant savings over VMware and provides a modern, developer-friendly API, according to Canonical. It also has built-in support for NFV and GPGPUs. The Canonical OpenStack offering has become a reference cloud for digital transformation workloads.

Today, Ubuntu is at the heart of the world’s largest OpenStack clouds, both public and private, in key sectors such as finance, media, retail and telecommunications, Shuttleworth noted.

Other Highlights

Among Ubuntu 18.04’s benefits:

  • Containers for legacy workloads with LXD 3.0 — LXD 3.0 enables “lift-and-shift” of legacy workloads into containers for performance and density, an essential part of the enterprise container strategy.

    LXD provides “machine containers” that behave like virtual machines in that they contain a full and mutable Linux guest operating system, in this case, Ubuntu. Customers using unsupported or end-of-life Linux environments that have not received fixes for critical issues like Meltdown and Spectre can lift and shift those workloads into LXD on Ubuntu 18.04 LTS with all the latest kernel security fixes.

  • Ultrafast Ubuntu on a Windows desktop — New Hyper-V optimized images developed in collaboration with Microsoft enhance the virtual machine experience of Ubuntu in Windows.
  • Minimal desktop install — The new minimal desktop install provides only the core desktop and browser for those looking to save disk space and customize machines with their specific apps or requirements. In corporate environments, the minimal desktop serves as a base for custom desktop images, reducing the security cross-section of the platform.
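To make the LXD “lift and shift” story concrete, here is a minimal sketch using standard LXD 3.0 commands. The container name `legacy-app` and the `app.tar.gz` archive are hypothetical placeholders for an existing workload:

```shell
# Sketch: moving a legacy workload into an LXD machine container
sudo snap install lxd
sudo lxd init --auto                 # default storage pool and network bridge

# Launch a full, mutable Linux guest (here an end-of-life Ubuntu release)
# as a machine container on a host with current kernel security fixes
lxc launch ubuntu:14.04 legacy-app

# Copy the legacy application into the container and unpack it
lxc file push ./app.tar.gz legacy-app/root/
lxc exec legacy-app -- tar xzf /root/app.tar.gz -C /opt
```

The guest keeps its own userland, so the workload runs unmodified, while the host's Ubuntu 18.04 kernel supplies the Meltdown and Spectre mitigations the old environment never received.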
