Tag Archives: Deployment

Five Considerations for a Wi-Fi 6 Deployment Plan


The saying “proper preparation prevents poor performance” holds true for technology deployments.

Whether or not you’re ready to upgrade your enterprise or venue to the next version of Wi-Fi, a plan for Wi-Fi 6 (aka 802.11ax) is only as good as the effort IT managers put into it. There’s no race to upgrade, despite the standard’s numerous benefits, but there are considerations for those prepping for a deployment.

The greatest benefit of Wi-Fi 6 is that it handles more traffic more efficiently, meaning it can shoulder a heavier load than its widely deployed predecessors. Better still, the new Wi-Fi standard is backward compatible with earlier versions, letting users avoid stranded investments and flash cutovers.

Products emerging

Vendors continue to push products down the Wi-Fi 6 product pipeline. They include access points (APs) and routers, as well as clients. The vendor ranks include Aerohive, Aruba, Asus, Broadcom, Cisco, D-Link, Intel, Netgear, Qualcomm, and Samsung.

Some verticals will embrace the new standard earlier than others. But don’t expect a stampede once certified Wi-Fi 6 products become available late this year or early in 2020. That said, enterprises can plan now for a deployment, using the five key considerations below.

Are you congested? Though it offers large increases in data speed, Wi-Fi 6 is all about efficiency in wireless networks. With the number of attached devices soaring, the new standard was designed to deal with network congestion.

One congestion-combating technology is uplink and downlink orthogonal frequency-division multiple access (OFDMA), which increases efficiency and lowers latency in high-demand environments. Another core feature, multi-user multiple input, multiple output (MU-MIMO), allows more data to be transferred at one time, enabling APs to handle a larger number of devices simultaneously.

Device support. While more Wi-Fi 6 APs and routers are coming through the product pipeline, client devices such as smartphones, tablets, and laptops need to catch up before implementors and end users can enjoy the purported efficiency, lower power consumption, and speed benefits of Wi-Fi 6. Also on this list are IoT devices, which are key to smart home efforts, and units for broader, outdoor applications.

Devices supporting Wi-Fi 5 or earlier standards will still work with the newer infrastructure, however.

Certification timeline. Potential early implementors might want to wait a while: the Wi-Fi Alliance’s certification testing has been pushed back from its original target of the third quarter of this year to year’s end, and it’s not clear how long the certification process will take. As a result, certified products likely won’t be available until 2020.

Non-certified products can be an option, but consider, for example, that WPA3 security is required for Wi-Fi 6 certification and might not be included in uncertified equipment.

The business of technology. What’s the business case for a Wi-Fi 6 deployment? It’s often difficult to justify spending on new IT infrastructure without a solid ROI. But that’s where consumer-first applications can continue to drive wireless technology implementations.

Enter the media, sports, and hospitality verticals. Wi-Fi 6 is already envisioned as delivering higher-speed streaming content services to consumers, at a price and likely with ads and sponsorships. That’s a clear ROI.

Many sports venues have been challenged to keep up with fan demand for more and faster wireless to access and share video clips and more on social media. But forward-thinking venue owners have used data analytics to learn more about and better engage with their customers. The new version’s efficiency and congestion-addressing features should help meet increasing demands for speed, data, and performance.

Wi-Fi in the hospitality industry has evolved from a nice-to-have perk to a must-have service, and now to one that must be challenge-free, or the customer experience slides from dependent to dissatisfied. It can be argued that providing a superior customer experience is one step removed from revenue generation; it can also be argued that a dissatisfied customer is a departing customer. Quality of experience is core to the hospitality industry.

Design considerations. In preparing for Wi-Fi 6, firms are advised to continue the practice of designing high-quality 20 MHz channels. Planners must also remember that the new standard specifies lower power consumption for connecting devices through improved efficiency, especially sleep/wake-time scheduling (Target Wake Time). This benefits mobile phones, laptops, and other IoT devices.

Also, remember to budget power: Wi-Fi 6 APs draw more power than their predecessors, thanks to more powerful processors and additional radio chains, making power budgeting increasingly important as you add 802.11ax APs.
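As a rough illustration, a back-of-the-envelope Power over Ethernet calculation can flag a shortfall before hardware is ordered. Here is a minimal sketch in Python; all the wattage and count figures are hypothetical placeholders, not vendor specifications, so substitute numbers from your own datasheets.

    # Back-of-the-envelope PoE budget check for a Wi-Fi 6 AP rollout.
    # All figures are hypothetical placeholders; confirm worst-case AP draw
    # and per-switch PoE budget against vendor datasheets.
    AP_MAX_DRAW_W = 25.0         # assumed worst-case draw of one 802.11ax AP
    SWITCH_POE_BUDGET_W = 740.0  # assumed PoE budget of one 48-port switch
    AP_COUNT = 36                # APs planned for this switch

    total_draw = AP_COUNT * AP_MAX_DRAW_W
    headroom = SWITCH_POE_BUDGET_W - total_draw

    print(f"Projected draw: {total_draw:.0f} W of {SWITCH_POE_BUDGET_W:.0f} W budget")
    if headroom < 0:
        print(f"Over budget by {-headroom:.0f} W; add switch capacity or trim AP count")
    else:
        print(f"Headroom remaining: {headroom:.0f} W")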

And as an upgrade plan progresses, don’t forget to set aside time for extensive field testing and validation; these steps are crucial to building high-quality, high-performance networks.

With efficiency-focused Wi-Fi 6, proper planning should lead to much more than just a smooth deployment.




Introduction to YAML: Creating a Kubernetes Deployment | Linux.com


There’s an easier and more useful way to use Kubernetes to spin up resources outside of the command line: creating configuration files using YAML. In this article, we’ll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.

YAML Basics

It’s difficult to escape YAML if you’re doing anything related to many software fields — particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language or YAML Ain’t Markup Language (depending on whom you ask), is a human-readable, text-based format for specifying configuration-type information. In this article, we’ll pick apart the YAML definitions for creating first a Pod, and then a Deployment.
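As a preview, here is the shape of a minimal Pod manifest of the kind we’ll dissect; the names and image below are illustrative stand-ins rather than a definitive example.

    # pod.yaml: a minimal Kubernetes Pod definition (names are illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: rss-site
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80

You would create the Pod with kubectl create -f pod.yaml.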

Using YAML for K8s definitions gives you a number of advantages, including:

  • Convenience: You’ll no longer have to add all of your parameters to the command line
  • Maintenance: YAML files can be added to source control, so you can track changes
  • Flexibility: You’ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you’re only ever going to write your own YAML (as opposed to reading other people’s), you’re all set. On the other hand, that’s not very likely, unfortunately. Even if you’re only trying to find examples on the web, they’re most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it’s good to know that it’s available to you.
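To round out the preview, here is a sketch of a Deployment wrapping the same Pod template with replica management; the values are again illustrative. Note the JSON-style flow mapping on the template’s labels line, which is legal YAML precisely because of that superset relationship.

    # deployment.yaml: a minimal Kubernetes Deployment (values are illustrative)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rss-site
    spec:
      replicas: 2                # the Deployment keeps this many Pods running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels: {app: web}     # JSON-style flow mapping, valid in YAML
        spec:
          containers:
            - name: front-end
              image: nginx
              ports:
                - containerPort: 80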

Read more at CNCF


Acumos Project’s 1st Software, Athena, Helps Ease AI Deployment | Software


By Jack M. Germain

Nov 16, 2018 5:00 AM PT

The LF Deep Learning Foundation on Wednesday announced the availability of the first software from the Acumos AI Project. Dubbed “Athena,” it supports open source innovation in artificial intelligence, machine learning and deep learning.

This is the first software release from the Acumos AI Project since its launch earlier this year. The goal is to make critical new technologies available to developers and data scientists everywhere.

Acumos is part of a Linux Foundation umbrella organization, the LF Deep Learning Foundation, which supports and sustains open source innovation in artificial intelligence, machine learning and deep learning.

Acumos AI is a platform and open source framework that makes it easy to build, share and deploy AI apps. Acumos standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment, freeing data scientists and model trainers to focus on their core competencies, and accelerating innovation.

“The Acumos Athena release represents a significant step forward in making AI models more accessible for builders of AI applications and models, along with users and trainers of those models and applications,” said Scott Nicholas, senior director of strategic planning at The Linux Foundation. “This furthers the goal of LF Deep Learning and the Acumos project of accelerating overall AI innovation.”

The challenge with AI is that there are very few apps that use it, noted Jay Srivatsa, CEO of Future Wealth.

“Acumos was launched to create an AI marketplace, and the release of Athena is a first step in that direction,” he told LinuxInsider.

The Acumos AI Platform

Acumos packages toolkits such as TensorFlow and scikit-learn, along with models that share a common API, allowing them to connect seamlessly. The AI platform allows for easy onboarding and training of models and tools.

The platform supports a variety of popular software languages, including Java, Python, and R. The R language is a free software environment for statistical computing and graphics.
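For Python, onboarding is designed to be a matter of wrapping an ordinary model with the Acumos client library. The sketch below follows the general pattern of the acumos package; the endpoint URL and model name are placeholders, and the calls should be verified against the current client documentation.

    # A minimal sketch of onboarding a scikit-learn model with the Acumos
    # Python client ("pip install acumos"); endpoint and names are placeholders.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    from acumos.modeling import Model
    from acumos.session import AcumosSession

    iris = load_iris()
    clf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

    def classify(sepal_len: float, sepal_wid: float,
                 petal_len: float, petal_wid: float) -> int:
        """Predict the iris species index for one flower's measurements."""
        return int(clf.predict([[sepal_len, sepal_wid, petal_len, petal_wid]])[0])

    # The type-annotated function becomes the model's common API.
    model = Model(classify=classify)

    # Placeholder onboarding endpoint for an Acumos instance.
    session = AcumosSession(push_api='https://acumos.example.com/onboarding-app/v2/models')
    session.push(model, 'iris-classifier')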

The Acumos AI Platform leverages modern microservices and containers to package and export production-ready AI applications as Docker files. It includes a federated AI Model Marketplace, which is a catalog of community-distributed AI models that can be shared securely.

LF Deep Learning members contribute to the evolution of the platform to ease the onboarding and the deployment of AI models, according to LF Deep Learning Outreach Committee Chair Jamil Chawki. The Acumos AI Marketplace is open and accessible to anyone who wants to download or contribute models and applications.

“Acumos Athena is a significant release because it enables the interoperability of AI, DL and ML models and prevents the lock-in that usually occurs whenever projects are built using disparate configurations, systems and deployment techniques,” explained Rishi Bhargava, cofounder of Demisto.

It will ease restrictions on AI, DL and ML developers by removing silos and allowing them to build standardized models, chain each other’s models together, and refine them through an out-of-the-box general AI environment, he told LinuxInsider.

“The efficiency of learning models is hugely contingent on the quality and uniqueness of data, the depth and repeatability of feature engineering, and selecting the best model for the task at hand,” Bhargava said. “Athena will free developers of extraneous burdens so they can focus on these core tasks, learn from each other, and eventually deliver better models to businesses and customers.”

Athena Release Highlights

Athena is packed with features that make the software quick and easy to deploy and Acumos AI applications easy to share.

Athena can be deployed with one click using Docker or Kubernetes. The software can also deploy models into a public or private cloud infrastructure, or into a Kubernetes environment on users’ own hardware, including servers and virtual machines.
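Because models are exported as Docker images, running one in your own Kubernetes environment can be as simple as a standard Deployment referencing the exported image. In the minimal sketch below, the image name, port, and replica count are all hypothetical.

    # model-deployment.yaml: running a hypothetical exported Acumos model
    # image in Kubernetes; image, port, and replica count are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: iris-classifier
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: iris-classifier
      template:
        metadata:
          labels:
            app: iris-classifier
        spec:
          containers:
            - name: model-runner
              image: registry.example.com/acumos/iris-classifier:latest
              ports:
                - containerPort: 3330   # assumed model-runner port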

Athena also utilizes a design studio graphical interface that enables chaining together multiple models, data translation tools, filters and output adapters into a full end-to-end solution, plus a security token that allows simple onboarding of models from an external toolkit directly to an Acumos AI repository.

Models can easily be repurposed for different environments and hardware because microservices generation is decoupled from the model onboarding process.

An advanced user portal allows personalization of the marketplace view by theme and shows data on model authorship. It also allows users to share models privately or publicly.

“The LF Deep Learning Foundation is focused on building an ecosystem of AI, deep learning and machine learning projects, and today’s announcement represents a significant milestone toward achieving this vision,” said LF Deep Learning Technical Advisory Council Chair Ofer Hermoni of Amdocs.

Unifying Factor

The Acumos release is significant for the advancement of AI, DL and ML innovation, according to Edgar Radjabli, managing partner of Apis Capital Management.

The AI industry is very fragmented, with virtually no standardization.

“Companies building technology are usually required to write most from scratch or pay for expensive licensed cloud AI solutions,” Radjabli told LinuxInsider. “Acumos can help bring a base (protocol) layer standard to the industry, in the same way that HTTP did for the Internet and Linux itself did for application development.”

LF Deep Learning members are inspired and energized by the progress of the Acumos AI Project, noted Mazin Gilbert, vice president of advanced technology and systems at AT&T and the governing board chair of LF Deep Learning.

“Athena is the next step in harmonizing the AI community, furthering adoption and accelerating innovation,” he said.

Open Source More Suitable

Given the challenges of growing new technologies, open source development models are better suited to the process than those of commercial software firms. Open source base-layer software is ideal because it allows greater adoption and interoperability between diverse projects from established players and startups, said Radjabli.

“I believe that Acumos will be used both by other open source projects building second-layer applications, as well as commercial applications,” he said.

The same situation exists elsewhere in software development today: open source base-layer protocols are used across the industry, both by other open source and nonprofit projects and by commercial operations, he explained.

“Athena clearly is geared to an open source environment, given that it already has about 70 or more contributors,” said Future Wealth’s Srivatsa.

Benefits for Business and Consumers

The benefits to be gained from AI, DL and ML are very significant. Companies across the industry have been making progress in developing unique applications for AI/DL/ML, and Acumos should spur more growth in this space, according to Radjabli.

One example involves a company that uses neural networks for predictive healthcare analytics. This system allows it to diagnose breast cancer with zero percent false negatives simply from patient data correlation analysis. This does not involve any invasive testing or imaging, according to Radjabli.

“The correlation is comprised of over 40 variables, which means it would have never been found through traditional medical research data analysis and was only made possible through the use of convolutional and recurrent neural networks working in combination,” he said.

AI, DL and ML are all geared toward businesses understanding and predicting consumer behavior, added Srivatsa.

“Both will benefit,” he said.

What’s Next for Acumos AI

The developer community for Acumos AI already is working on the next release, which is expected to be available in mid-2019.

The next release will introduce convenient model training, as well as data extraction pipelines to make models more flexible.

Additionally, the next release will include updates to assist closed-source model developers, such as secure and reliable licensing components to provide execution control and performance feedback across the community.


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.




