
Why Low Power WANs Are Emerging as a Strong Option for IoT Apps


Enterprises looking to launch and expand IoT applications need not wait for superfast next-generation wireless. The broadening availability and allure of low power wide-area networks (LPWANs) is drawing increased interest – and use.

LPWAN is an umbrella term that covers a variety of established technologies used to support long-haul IoT applications composed of devices such as sensors. Units (costing roughly $5 to $10 apiece) transmit small amounts of crucial data to a central location and can run on batteries for 10 years.
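That decade-long battery life is plausible because the devices sleep almost all the time and transmit only briefly. As a rough sketch – with every current-draw and capacity figure below an illustrative assumption, not a vendor spec – the arithmetic looks like this:

```python
# Back-of-envelope battery-life estimate for a duty-cycled LPWAN sensor.
# Every figure here is an illustrative assumption, not a vendor spec.

BATTERY_MAH = 2400      # assumed capacity, e.g. a primary lithium cell
SLEEP_UA = 2.0          # assumed sleep current, in microamps
TX_MA = 40.0            # assumed transmit current, in milliamps
TX_SECONDS = 2.0        # assumed airtime per uplink message
MSGS_PER_DAY = 24       # assumed reporting rate: one message per hour

def battery_life_years():
    sleep_mah = (SLEEP_UA / 1000.0) * 24                     # mAh/day asleep
    tx_mah = TX_MA * (TX_SECONDS / 3600.0) * MSGS_PER_DAY    # mAh/day transmitting
    return BATTERY_MAH / (sleep_mah + tx_mah) / 365.0

print(f"{battery_life_years():.1f} years")
```

Under these assumptions the node lasts roughly 11 years; a chattier reporting schedule or a higher-power radio shortens that quickly, which is why message frequency is a key design choice.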

Enabling technologies for these core functions include Narrowband IoT (NB-IoT), LoRa, SigFox, Ingenu’s Random Phase Multiple Access (RPMA), LTE-M and Weightless.

LPWANs are becoming a global phenomenon for supporting IoT applications. IHS Markit estimated that just 150 million LPWAN links were deployed in 2018, a figure that it expects to expand at a 63% compound annual growth rate to hit 1.7 billion links by 2023.

Note: To avoid confusion, LoRa is the underlying radio technology, while LoRaWAN is the wireless communications standard built on top of it for this widely backed LPWAN IoT approach.

While high-speed 4G and 5G wireless services are first focused on consumers, it’s the very low-speed LPWANs that are widely embraced for enabling industrial IoT applications for businesses, municipalities and more.

To date, LPWANs have been used by utilities to collect small amounts of critical data from pipelines and other distribution networks. This helps enterprises monitor and manage conditions from afar. Use is increasing in the transportation vertical for the tracking of assets, such as vehicles – and often their contents.

So, what’s fueling enterprise use of LPWAN options?

Menu Choices: Given the huge number of devices projected to connect to LPWANs in the years ahead, enterprises can expect network operators to offer multiple service options. This should also benefit companies, as there are many types of IoT devices with differing connectivity abilities.

Some technology companies work directly with network operators to deploy LPWAN services, while others offer the services as well as the option of building private nets for enterprises.

Procurement: LPWAN options include a DIY choice and an as-a-service approach (which obviates the need for firms to buy and manage onsite equipment and software). Enterprises can also choose a managed services option, whereby the network operator handles nearly all aspects of LPWAN operation, monitoring, and management.

Spectrum: NB-IoT LPWANs operate in licensed LTE frequency bands, while LoRa, Sigfox, and RPMA networks use unlicensed spectrum. Sigfox and RPMA use spectrum in the Industrial, Scientific, and Medical (ISM) band.

Quality of Service: Since NB-IoT uses licensed spectrum, networks based on it can offer guaranteed quality of service (QoS). Not all applications require it, but the feature can't be matched by LoRa and Sigfox, which use unlicensed spectrum.

Data rates: Enterprise LPWANs are known for not requiring high maximum data rates for the IoT applications they support. Still, top rates vary: Sigfox is rated at 100 bps, LoRa at 50 kbps, and NB-IoT at 200 kbps.
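Those rates put the technologies' trade-offs in perspective. Using the figures above – and assuming, for simplicity, a hypothetical 12-byte sensor payload with no framing overhead – the time on air for one uplink works out as:

```python
# Time on air for a small uplink at each technology's headline maximum rate.
# Rates are from the article; the 12-byte payload and the zero framing
# overhead are simplifying assumptions for illustration.

MAX_RATE_BPS = {"Sigfox": 100, "LoRa": 50_000, "NB-IoT": 200_000}

def airtime_seconds(payload_bytes, rate_bps):
    return payload_bytes * 8 / rate_bps

for tech, rate in MAX_RATE_BPS.items():
    print(f"{tech}: {airtime_seconds(12, rate):.5f} s")
```

Even at Sigfox's 100 bps, a 12-byte reading takes under a second of airtime – more than adequate for a sensor that reports a few times per hour.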

Coverage: Early on, LPWANs were used to support applications featuring connected devices – such as pipeline monitoring – over long distances. More recently, however, IoT applications such as meter reading, which don't require wide geographic coverage, have come into play.

Enterprises will need to factor the coverage of LPWAN technologies into their application creation and deployment plans as they support varying distances in urban and rural environments.

Sigfox can span 10 km in urban and 40 km in rural areas, while LoRa covers 5 km in urban scenarios and 20 km in rural ones. NB-IoT offers the shortest reach of the trio, at 1 km in urban areas and 10 km in rural areas.
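Because coverage grows with the square of range, those differences compound. Treating each quoted range as the radius of a circular cell – a simplifying assumption, since real propagation is irregular – the area one gateway can serve works out as:

```python
import math

# Approximate single-gateway coverage from the quoted ranges, treating each
# range as the radius of a circular cell (a simplifying assumption; real
# radio propagation is far less uniform).

RANGE_KM = {"Sigfox": (10, 40), "LoRa": (5, 20), "NB-IoT": (1, 10)}  # (urban, rural)

def cell_area_km2(radius_km):
    return math.pi * radius_km ** 2

for tech, (urban, rural) in RANGE_KM.items():
    print(f"{tech}: {cell_area_km2(urban):,.0f} km² urban, {cell_area_km2(rural):,.0f} km² rural")
```

Under this idealization, a single rural Sigfox gateway could cover on the order of 5,000 km², while a rural NB-IoT cell covers around 300 km² – one reason technology choice hinges on deployment geography.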

Density: Device density per cell can be an issue when expanding an LPWAN. For example, NB-IoT can support roughly 100,000 devices per cell, while Sigfox and LoRa can both handle about 50,000 connected devices per cell. Enterprise application architects will also need to determine how many base stations are needed to support their undertakings.
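A first-cut base-station estimate can be derived directly from those per-cell figures. In the sketch below, the 70% utilization headroom is an assumption chosen for illustration (architects typically avoid running cells at full capacity):

```python
import math

# Rough base-station count for a target device fleet, using the per-cell
# capacities quoted above. The 70% utilization headroom is an assumption.

CAPACITY_PER_CELL = {"NB-IoT": 100_000, "Sigfox": 50_000, "LoRa": 50_000}

def base_stations_needed(devices, tech, utilization=0.7):
    usable = CAPACITY_PER_CELL[tech] * utilization  # devices per cell in practice
    return math.ceil(devices / usable)

print(base_stations_needed(250_000, "NB-IoT"))  # 4 cells at 70% utilization
```

The same 250,000-device fleet would need roughly twice as many Sigfox or LoRa cells, since their per-cell capacity is half that of NB-IoT – though coverage area, not capacity, may be the binding constraint in sparse rural deployments.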

Security: Always a top consideration in evaluating network options, security can mean many things in the LPWAN space. When it comes to encryption methods, LoRa and RPMA support AES 128-bit encryption, while NB-IoT uses LTE encryption.

Roaming: The ability for a device originally connected to one type of LPWAN to roam to another is currently a work in progress. The capability is seen as initially allowing devices to roam from one public network to another, or from one private network to another.

Use cases for roaming would include international business applications such as the tracking of mobile assets such as freight that crosses country borders. Gateways between operators using different protocols may be required.


Microsoft, OpenAI Shoot for the Stars


Microsoft wants to empower its Azure cloud computing service with yet-to-exist artificial general intelligence (AGI) technologies to create new goals for supercomputing.

Microsoft on Monday announced a US$1 billion investment through a partnership with OpenAI to build new AI technologies. The two companies hope to extend Microsoft Azure's capabilities in large-scale AI systems.

Microsoft and OpenAI want to accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence. The resulting enhancements to Microsoft’s Azure platform will help developers build the next generation of AI applications.

The partnership was motivated in part by OpenAI's pursuit of enormous computational power. A recently released analysis found that the amount of compute used in the largest AI training runs grew by more than 300,000 times from 2012 to 2018, with a 3.5-month doubling time, far exceeding the pace of Moore's Law, noted OpenAI cofounder Greg Brockman.
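A quick sanity check shows the two figures are consistent: compounding a 3.5-month doubling time over the six years from 2012 to 2018 yields growth comfortably beyond the "more than 300,000 times" cited.

```python
# Back-of-envelope check of the quoted compute-growth figures.
# Six years of doublings every 3.5 months:

months = 6 * 12
doubling_time_months = 3.5
growth = 2 ** (months / doubling_time_months)

print(f"{growth:,.0f}x")  # well over the 300,000x cited
```

The compounded figure lands around 1.5 million-fold, so "more than 300,000 times" is, if anything, conservative for the full six-year span.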

“We chose Microsoft as our cloud partner because we’re excited about Azure’s supercomputing roadmap. We believe we can work with Microsoft to develop a hardware and software platform within Microsoft Azure which will scale to AGI,” he told TechNewsWorld.

“The partnership will allow OpenAI to significantly increase the amount of compute it uses for training neural networks,” he noted.

Microsoft and OpenAI also are very aligned in their values, Brockman said. Both firms believe the technology should be used to empower everyone, and be deployed in a trustworthy way that is safe and secure.

“OpenAI believes they can work with Microsoft to develop a hardware and software platform within Microsoft Azure which will scale to AGI,” a Microsoft spokesperson said in comments provided to TechNewsWorld by company rep Joel Gunderson.

What the Deal Delivers

Microsoft and OpenAI will collaborate on new Azure AI supercomputing technologies. OpenAI will port its services to run on Microsoft Azure.

OpenAI will use the Azure platform to create new AI technologies. OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners. It’s hoped that the result will deliver on the promise of artificial general intelligence.

Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies. OpenAI will enter into an exclusivity agreement with Microsoft to extend large-scale AI capabilities.

Both companies will focus on building a computational platform of unprecedented scale on the Azure cloud platform. They will train and run increasingly advanced AI models and develop hardware technologies that build on Microsoft's supercomputing technology.

The development teams will adhere to the companies’ shared principles concerning ethics and trust. This focus will create the foundation for advancements in AI to be implemented in a safe, secure and trustworthy way, and it is a critical reason the companies chose to partner.

AGI a Work in Progress

Innovative applications of deep neural networks coupled with increasing computational power have led to AI breakthroughs over the past decade. That progress occurred in areas such as vision, speech, language processing, translation, robotic control and even gaming, according to Microsoft.

Modern AI systems work well for the specific problems they have been trained to address. However, building systems that can tackle some of the biggest challenges facing the world today requires generalization and deep mastery of multiple AI technologies.

OpenAI and Microsoft’s vision is for artificial general intelligence to work with people to help solve currently intractable multidisciplinary problems, including global challenges such as climate change, personalized healthcare and education.

“This is truly going to help Microsoft. It has more technology in its marketplace to allow the rapid ascension of tools in the business workplace,” noted Chris Carter, CEO of Approyo.

Combining these two entities to support the growth that is needed is “an absolute game-changer,” he told TechNewsWorld.

Chasing Computing Dragons?

A larger neural network is a more capable neural network, according to Brockman. Making larger systems will allow the two companies to solve more difficult problems going forward.

“We plan to keep doing this until we reach AGI,” he said.

The resulting enhancements to the Azure platform will help developers build the next generation of AI applications.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said Sam Altman, CEO of OpenAI.

It must be deployed “safely and securely with its economic benefits widely distributed,” he added.

“AI is one of the most transformative technologies of our time,” noted Microsoft’s CEO, Satya Nadella, with the “potential to help solve many of our world’s most pressing challenges.”

Grabbing for Powerful Straws

The most likely results of this partnership are that AI technology will grow faster and be utilized in more enterprise and business spaces. This partnership will enable the rapid adoption of AI technologies in the workplace, according to Approyo's Carter.

“This will allow businesses to flourish. Individual workers will boost their productivity. They will also be able to support themselves on a day-to-day basis with technology rather than to be hindered by it,” he explained.

The partnership could hinder development of cloud AI technologies, though, because Microsoft is prioritizing OpenAI over other emerging AI technologies that might be better, suggested Marty Puranik, CEO of Atlantic.Net.

If the AI technologies are kept proprietary or work best only on Microsoft Azure, it will lead to Azure platform lock-in, he said.

“Many developers may develop services that use this technology, thereby forcing all their customers to use Microsoft. Microsoft historically has a huge advantage when it comes to enterprise development work, so this could be seen as a way they are trying to cement the position they had in enterprise software into the cloud,” he told TechNewsWorld.

It boils down to Microsoft trying to leverage new technologies, like AI, to be a leader in the cloud, Puranik maintained, similar to when Microsoft would make minority investments and take seats on the boards of hot companies years ago.

Ultimately, from Microsoft’s point of view, it would be ideal to have extensions for OpenAI that either would be exclusive or work best on Microsoft’s platform, similar to the “embrace and extend” ideas once applied to APIs, said Puranik.

Win-Win for Both

Microsoft has been all about collaboration and open source since Satya Nadella took the reins. He recognizes that AI is the latest and greatest arms race, observed Rob Enderle, principal analyst at the Enderle Group.

“As a result, they are embracing OpenAI to increase the speed of development for their projects, largely with an IT focus,” he told TechNewsWorld.

Both partners in this deal can learn and benefit from this effort, which is collaborative by design. Participating allows not only earlier access to the result but also a deeper understanding of it, Enderle said.

A Large Promise to Fulfill

In promising to deliver on artificial general intelligence's potential, the two companies are not dreaming small, noted Arle Lommel, senior analyst for CSA Research, but that dream may be a reach too far.

“They intend to solve something that nobody has solved yet and that we aren’t remotely close to solving today,” he told TechNewsWorld, “but beyond that, accomplishing that will mean ‘solving’ language as well.”

That means having computers really understand language and use it on par with humans. Despite press release claims about getting near-human quality, that goal is as far beyond present capabilities as a moon landing is beyond a Roman chariot, Lommel quipped.

“That said, I suspect they will get much further along with machine vision, categorization, diagnostics, etc.,” he said. “In other words, I expect this could result in improved versions of what AI already does well. But unless there is some fundamentally different secret sauce, I don’t expect that it will ‘solve’ language and human intelligence.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.






Adapting IT Operations to Emerging Trends: 3 Tips


For infrastructure management professionals, keeping up with new trends is a constant challenge. IT must constantly weigh the potential benefits and risks of adopting new technologies, as well as the pros and cons of continuing to maintain their legacy hardware and applications.

Some experts say that right now is a particularly difficult time for enterprise IT given the massive changes that are occurring. When asked about the trends affecting enterprise IT operations today, Keith Townsend, principal at The CTO Advisor, told me, “Obviously the biggest one is the cloud and the need to integrate cloud.”

In its latest market research, IDC predicts that public cloud services and infrastructure spending will grow 24.4% this year, and Gartner forecasts that the public cloud services market will grow 18% in 2017. By either measure, enterprises are going to be running a lot more of their workloads in the cloud, which means IT operations will need to adapt to deal with this new situation.

Townsend, who also is SAP infrastructure architect at AbbVie, said that the growth in hybrid cloud computing and new advancements like serverless computing and containers pose challenges for IT operations, given “the resulting need for automation and orchestration throughout the enterprise IT infrastructure.” He added, “Ultimately, they need to transform their organizations from a people, process and technology perspective.”

For organizations seeking to accomplish that transformation, Townsend offered three key pieces of advice.

Put the strategy first

Townsend said the biggest mistake he sees enterprises making “is investing in tools before they really understand their strategy.” Organizations know that their approach to IT needs to change, but they don’t always clearly define their goals and objectives.

Instead, Townsend said, they often start by “going out to vendors and asking vendors to solve this problem for them in the form of some tool or dashboard or some framework without understanding what the drivers are internally.”

IT operations groups can save themselves a great deal of time, money and aggravation by focusing on their strategy first before they invest in new tools.

Self-fund your transformation

Attaining the level of agility and flexibility that allows organizations to take advantage of the latest advances in cloud computing isn’t easy or cheap. “That requires some investment, but it’s tough to get that investment,” Townsend acknowledged.

Instead of asking for budget increases, he believes the best way to make that investment is through self-funding.

Most IT teams spend about 80% of their budgets on maintaining existing systems, activities that are colloquially called “keeping the lights on.” That leaves only 20% of the budget for new projects and transformation. “That mix needs to be changed,” said Townsend.

He recommends that organizations look for ways to become more efficient. By carefully deploying automation and adopting new processes, teams can accomplish a “series of mini-transformations” that gradually decreases the amount of money that must be spent on maintenance and frees up more funds and staff resources for new projects.
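The self-funding arithmetic behind those mini-transformations is straightforward. As an illustrative sketch – the budget size and the 5%-per-year efficiency gain are assumptions, not Townsend's figures – each maintenance saving rolls directly into the new-project side of the 80/20 mix:

```python
# Illustrative self-funding arithmetic: each maintenance efficiency gain
# shifts budget toward new projects. All figures are assumptions.

BUDGET = 1_000_000
maintenance = 0.80 * BUDGET   # "keeping the lights on"
projects = 0.20 * BUDGET      # new projects and transformation

for year in (1, 2, 3):
    savings = maintenance * 0.05   # assumed 5% efficiency gain per year
    maintenance -= savings
    projects += savings
    print(f"Year {year}: maintenance {maintenance:,.0f}, projects {projects:,.0f}")
```

Three modest 5% gains shift the mix from 80/20 to roughly 69/31 without any budget increase – the compounding effect Townsend's "series of mini-transformations" relies on.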

Focus on agility, not services

In his work, Townsend has seen IT teams make a common mistake when dealing with the business side of the organization: not paying enough attention to what is happening in the business and what it really wants.

When the business comes to IT with a request, IT typically responds with a list of limited options. Townsend said that these limited options are the equivalent of telling the business no. “What they are asking for is agility,” he said.

He told a story about a recent six-month infrastructure project in which the business objectives completely changed between the beginning of the project and the end. An IT organization can only adapt to that sort of constant change by adopting a DevOps approach, he said. If IT wants to remain relevant and help organizations capitalize on the new opportunities that the cloud offers, it has to become much more agile and flexible.

You can see Keith Townsend live and in person at Interop ITX, where he will offer more insight about how enterprise IT needs to transform itself in his session, “Holistic IT Operations in the Application Age.” Register now for Interop ITX, May 15-19, in Las Vegas.


