Tag Archives: Community

Digital Transformation: Trust but Verify | IT Infrastructure Advice, Discussion, Community


Moving digital assets to the public cloud reduces costs and increases productivity, but it poses some new information security challenges. Specifically, many Intrusion Detection and Prevention Systems (IDPS) that were designed for the on-premises network come up short when deployed in the public cloud. For this reason, public cloud providers have built-in security layers to manage information security using their own security monitoring infrastructure. Unfortunately, these built-in monitoring services are one-size-fits-all and may miss crucial customer-specific security requirements or user account compromises. This leaves cloud-based assets more vulnerable to data breaches.

Why public clouds are difficult to secure

Public clouds are great when it comes to providing shared compute resources that can be set up or torn down quickly. The cloud provider offers a basic software interface for provisioning storage, servers, and applications, along with basic security monitoring that runs on top of that interface at the application layer. But the application layer runs on top of the network, and the network is the only place where certain classes of dangerous security breaches can be detected and prevented.

In the cloud, customers can’t conduct network-level traffic analysis because public clouds don’t give customers access to the network layer. Clouds restrict users from inspecting or logging the bits that go over the network wire. Inspecting a public cloud at the application layer can give customers information about what the network endpoints are doing, but that’s only part of the picture. For example, breaches caused by user misbehavior are visible only at the network layer, by observing communication patterns that are inconsistent with company policies. The cloud’s built-in monitoring services would not be aware of such breaches because they do not monitor network behavior on behalf of the enterprise. Importantly, if malware or a rogue application somehow makes it into a cloud instance or remote VM hosted in the cloud, native cloud monitoring services may not detect its malicious behavior at the network level. Because customers don’t have access to the bits being transmitted, they’ll never know the malware is there.
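To make the gap concrete, here is a rough sketch, in Python, of the kind of flow-level check that application-layer monitoring never performs: comparing each network flow against a simple allow-list policy and flagging anything that looks like a policy violation or bulk exfiltration. The record format, policy, and thresholds are invented for illustration and do not reflect any provider’s actual tooling.

    # Hypothetical sketch: flag flow records that violate a simple outbound policy.
    # Record format, policy, and thresholds are illustrative assumptions only.
    import ipaddress
    from collections import namedtuple

    Flow = namedtuple("Flow", "src dst dst_port bytes_out")

    # Assumed policy: workloads may only talk to these networks and ports.
    ALLOWED_DST_NETS = [ipaddress.ip_network("10.0.0.0/8")]
    ALLOWED_DST_PORTS = {443, 5432}
    EXFIL_BYTES_THRESHOLD = 50_000_000  # flag unusually large outbound transfers

    def violations(flows):
        for f in flows:
            dst = ipaddress.ip_address(f.dst)
            allowed_net = any(dst in net for net in ALLOWED_DST_NETS)
            if not allowed_net or f.dst_port not in ALLOWED_DST_PORTS:
                yield f, "destination or port not permitted by policy"
            elif f.bytes_out > EXFIL_BYTES_THRESHOLD:
                yield f, "possible data exfiltration (large outbound transfer)"

    sample = [Flow("10.0.1.5", "10.0.2.9", 5432, 120_000),
              Flow("10.0.1.5", "203.0.113.7", 8081, 900_000_000)]
    for flow, reason in violations(sample):
        print(flow, "->", reason)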

And the network threats are there. Over 540 million Facebook records were exposed on AWS. In 2017, 57 million Uber customer records were compromised after hackers extracted Uber’s AWS credentials from a private code repository used by the company. Public cloud providers offer no tools for monitoring the network data that would have detected and prevented these breaches.

Public cloud operators could see what’s going on if they were to look at the network traffic, but they don’t provide that information to their customers. Most of the time, public cloud operators focus on providing application-level security information from systems like firewalls or endpoint antivirus solutions. Adding next-generation (NG) firewalls from third-party vendors to public cloud deployments adds the ability to customize the inspection of all the bits flying by. But this fails to detect communications within the cloud (for example, between a web server and a database) or lateral communications (for example, a compromised host trying to spread between VMs on the internal cloud network). This leaves blind spots that can allow malware to execute without the user’s knowledge. Lastly, when there is a breach, in most cases cloud customers can’t even precisely quantify the number of records or the amount of data exfiltrated.

As it’s not feasible to deploy hardware on a public cloud provider’s premises, the way to eliminate these blind spots lies with software that can implement a virtual tap and monitor traffic at the network level. The industry is now moving away from dedicated hardware devices and toward multi-function software that will address these needs.




5G Is Coming, but When? | IT Infrastructure Advice, Discussion, Community


5G is coming, and while it is ultimately going to have a massive impact on nearly every aspect of our lives, its introduction will take many years to unfold. It can be hard to understand how and when the new capabilities of 5G will affect a particular area of our lives, and this is something many IT managers are now weighing for their businesses. Planning for 5G will be challenging. The reason: 5G will bring a significant change in the range of business models and services offered by mobile network operators (MNOs) to residential and business consumers. We are therefore entering a period of flux as these new offerings are gradually introduced.

Compared to previous 4G data-based services, 5G explodes the mobile world into a potentially vast array of different service types and business models.

Initial 5G standards, the so-called non-standalone (NSA) variant of the 5G specifications, enable MNOs to start rolling out 5G enhanced mobile broadband (eMBB) services. These services use new 5G radio capabilities but maintain the current 4G core. Essentially, they are souped-up 4G services providing higher-speed data, generally known as “5G phase 1.” MNOs across the globe are in the early stages of rolling out these initial 5G phase 1 services. The services offer higher speeds than previous 4G services, but with underlying service specifications and business models that are mostly the same as 4G.

The next step for 5G is the expansion of work within the standards bodies to deliver the initial standardization of full 5G capabilities in phase 2. Within the 3rd Generation Partnership Project (3GPP), the predominant standards body for mobile standards, this forms the bulk of the Release 16 specification, which is currently underway and due to be completed in March 2020. This will enable system vendors, and ultimately MNOs, to build systems and then networks that support the broader range of 5G services and business models. Standardization is an ongoing process: 5G specifications will continue to evolve for many years as the technology and networking functionality mature, enabling ever more advanced 5G services and business models.

Opening up 5G with Phase 2

The key advance with the second phase of 5G is that mobile networks will open up to a significant broadening of services beyond voice and high-speed data. Massive machine-type communications will broaden the capabilities of mobile networks to address massive IoT applications with up to tens of thousands of connected devices per cell. Ultra-reliable low-latency communication brings a drop in round-trip service latency from 10 milliseconds to just 1 ms, opening up 5G networks to a whole range of new applications. These new capabilities will bring expanded service offerings from MNOs that specifically address business and industrial customers, bringing a raft of extended applications for IT managers to handle.

There is still considerable uncertainty about which new services will be offered and when they will become available. Yet many MNOs have publicized broad timelines, with initial 5G phase 1 services entering the market in 2019-2020 and initial phase 2 services from 2022-2023 onwards.

The Impact of 5G on IT Managers

The initial rollout of 5G phase 1 services mainly impacts the radio aspects of the mobile network, with investments primarily targeting new 5G cell sites and more modest investments in the backhaul transport network and back-office IT systems. IT managers will be able to take advantage of higher-speed mobile data services. However, other than increasing speeds and data usage, interaction with MNOs will largely continue as it did in the pre-5G era. In contrast, the introduction of 5G phase 2 services brings the potential for radical changes in the interaction between MNOs and business customers that IT managers will need to prepare for.

The introduction of lower-latency services and other more advanced networking functionality will shift MNO investments from the radio to the new 5G core, the underlying optical networking-based transport network, and the back-office management and control systems. This will bring significant changes in the relationship between IT managers and their MNO partners. To address challenges such as reducing overall round-trip latency, new capabilities such as multi-access edge compute (MEC) will be introduced. These capabilities place storage and compute functions at new locations between the cell tower and the core that can be used to support services for the business community. Furthermore, to address the varying transport performance requirements of new services, MNOs will introduce network slicing technology to nail up bandwidth and MEC resources within the transport network to support specific service classes.

All of this requires sophisticated software-defined networking (SDN)-based network orchestration and cognitive networking capabilities that will automate many aspects of dynamic network control. By using standard open application programming interfaces (APIs) to enable direct interaction with third-party systems, this new control environment will potentially open up many new opportunities for IT managers to streamline their interaction with MNO partners as they embrace 5G.
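As a rough illustration of what that kind of API-driven interaction might look like, the sketch below requests a low-latency network slice from a hypothetical MNO orchestration endpoint. The URL, payload fields, and token handling are assumptions made purely for illustration; real MNO APIs and slice parameters will differ.

    # Hypothetical sketch: requesting a network slice from an MNO's open API.
    # Endpoint, payload fields, and auth scheme are assumptions for illustration.
    import requests

    ORCHESTRATOR_URL = "https://api.example-mno.com/v1/network-slices"  # placeholder
    API_TOKEN = "REPLACE_ME"

    slice_request = {
        "name": "factory-floor-automation",
        "latency_ms": 1,           # target round-trip latency
        "bandwidth_mbps": 200,     # reserved transport capacity
        "edge_compute": True,      # attach MEC resources near the cell site
    }

    response = requests.post(
        ORCHESTRATOR_URL,
        json=slice_request,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    print("Slice provisioned:", response.json().get("slice_id"))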

In summary, initial 5G services may not drastically impact IT managers, but the advanced services that 5G will ultimately bring will have a huge impact on many organizations. Fully embracing the changes in automation and control that these advanced 5G-based services and business models will bring in the near future will require IT managers to build strong partnerships with their MNOs.




Taking AI to the IoT Edge | IT Infrastructure Advice, Discussion, Community


Two disruptive technologies, artificial intelligence (AI) and edge computing, are joining together to help make yet another disruptive technology, the Internet of Things (IoT), more powerful and versatile.

AI on the IoT edge is increasingly seen as a technology that will be critical to the success of IoT networks covering many different applications. When IoT technology first appeared, many observers thought that most computing tasks would be handled entirely in the cloud. Yet when it comes to IoT deployments in areas such as manufacturing and logistics, and technologies like autonomous vehicles, decisions have to be made as fast as possible. “There’s a huge benefit in getting the analytics capability or the AI capability, to where the action is,” said Kiva Allgood, Ericsson’s head of IoT.

In the years ahead, IoT sensors will collect and stream increasingly large amounts of data, stretching the cloud’s ability to keep pace. “Data growth drives network constraints, as well as the need to analyze and act on this information in near real-time,” observed Steen Graham, general manager of Intel’s IoT ecosystem/channels unit. “Deploying AI at the edge enables you to address network constraints by discarding irrelevant data and compressing essential data for future insights and drive actionable insights in near-real-time with AI.”

S. Hamid Nawab, chief scientist at Yobe, a company that makes AI on the edge software for voice recognition, agreed. “AI on the edge can evaluate the local situation and determine whether or not it’s necessary to send information to the cloud for further processing,” he explained. “It can also provide signal-level pre-processing of the cloud-bound stream so that the cloud-based processing can focus its resources on higher-level issues.”
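As a rough sketch of that edge-side triage, the hypothetical example below applies a lightweight statistical check to local sensor readings and forwards only the anomalous ones to the cloud, keeping a compact summary of the rest at the edge. The reading format, threshold, and forwarding stub are illustrative assumptions, not any vendor’s actual edge framework.

    # Hypothetical sketch: edge-side filtering so only anomalous readings reach the cloud.
    # Threshold, reading format, and the forwarding stub are illustrative assumptions.
    from statistics import mean, pstdev

    Z_THRESHOLD = 2.0  # readings this far from the mean (in std devs) are "anomalous"

    def forward_to_cloud(reading):
        # Stand-in for an MQTT or HTTPS upload in a real deployment.
        print("sending to cloud:", reading)

    def triage(readings):
        mu, sigma = mean(readings), pstdev(readings)
        forwarded = 0
        for r in readings:
            if sigma and abs(r - mu) / sigma > Z_THRESHOLD:
                forward_to_cloud(r)   # only the interesting data leaves the edge
                forwarded += 1
        # Everything else stays local as a compact summary.
        print(f"local summary: n={len(readings)}, mean={mu:.2f}, forwarded={forwarded}")

    triage([21.1, 21.3, 21.2, 21.4, 85.0, 21.2, 21.3])

Even a filter this simple cuts the volume of data sent upstream while preserving the events that matter for near-real-time decisions.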

Use cases

AI on the IoT edge promises to make its biggest impact on organizations that require real-time data analytics for immediate decision-making, such as deciding whether to immediately raise or lower prices based on consumer demand, time, temperature, or inventory level. “Another example is use cases where constant cloud connectivity is simply not available,” observed Tim Sherwood, vice president of IoT and mobility at telecom firm Tata Communications.

Edge AI can also help IoT devices conserve power by limiting communication with the cloud to times when it is strictly necessary to do so, Nawab noted. “There are [also] ‘secure’ use cases where the security risks in sending data streams on the IoT network need to be minimized,” he added.

Industries that can expect to see the most benefits from AI on the IoT edge include healthcare, manufacturing, retailing, and smart cities projects. “The application of IoT in healthcare might bring the most impact on humanity,” Graham stated. “The combination of AI and IoT is streamlining drug discovery and speeding up genomics processing and medical imaging analysis, making the latter more accurate for personalized treatment.”

Security concerns

While AI on the IoT edge promises many benefits, it also has some inherent drawbacks. Chris Carreiro, CTO of Park Place Technologies’ ParkView data center monitoring service, warned that the approach potentially gives data centers slightly less control over collected data. “Business systems would now be pushed down from a central data center out to a local plant or branch,” he explained. “This would decentralize the infrastructure, changing requirements for security, both physical and network.”

Security is, in fact, a top AI on the IoT edge concern. “A person would have to be pretty advanced to hack into some of the [IoT] networks, but it’s essential to be aware that some people want to do that,” Allgood reported.

“When you give endpoints more control over data, they become a target for cyberattacks,” Sherwood observed. He noted that to deal with this vulnerability, Tata is exploring the possibility of updating its SIM cards to improve device authentication and network policy controls, limiting the data sources that can be reached by the device, and providing enhanced security for IoT data in motion.

Getting started

Given that AI on the IoT edge is still an emerging technology with relatively few real-world deployments, it’s important for potential adopters to temper their excitement with pragmatism. “The cost of adopting edge AI may outweigh the benefits of real-time intelligence and decision making in some use cases, so this is the first point to consider,” Sherwood advised. He noted that IT leaders also need to fully understand their needs and goals before reaching a final decision on whether or not to bring AI to the edge of their IoT network. Still, for many organizations, the answer will be affirmative. “If you need your IoT application to analyze data at rapid intervals for immediate decision making, you need edge AI,” Sherwood said.

Graham predicted that the next five to ten years will see a rapid move toward a software-defined, more autonomous world that will pave the way for transformation and innovation across industries. “AI, IoT, and edge computing are at the center of this transformation,” he noted. “To paraphrase [former Intel CEO] Andy Grove, you can be the subject of a strategic inflection point or the cause of one—companies that embrace this transformation will thrive and others will falter.”

 




Turning Silos into Success with DevOps | IT Infrastructure Advice, Discussion, Community


Silos within IT may be inevitable.

Silos are, after all, a natural reflection of the human tendency to specialize. It’s not just that storage is different from networking is different from security; it’s that the technology, tools, and skills required to implement, operate, and manage these disparate systems are different. Expertise in storage does not necessarily translate to expertise in networking.

Because of this basic truth, silos formed in which domain expertise aggregated and became akin to tribal knowledge. Very little is shared outside the domain because honestly, no one else understands it or has time to learn it. They’ve got their own domain to worry about.

This is why the introduction of cloud – and in particular, multiple clouds – can often result in more silos. Each public – and private, to be fair – cloud has its own operational models, APIs, and methods of management. Each operates so differently that experts who are great at operating and managing one cloud are often unable to transfer that knowledge to another.

You see, it isn’t enough to understand how to interact using a REST API – or any API for that matter. The expertise is not in being able to use HTTP constructs and communication methods to exchange information; it’s knowing which methods to call – and when – that makes someone an expert in an API. That extends to cloud, where the general operating model is based on the same principles, but implementation varies so widely as to render knowledge of one nearly useless in another. It’s domain knowledge all over again.

That means that the 87% of organizations operating in multiple clouds are likely to mirror their cloud presence on-premises with specialized teams (silos) within IT. We see this in research, in which nearly half (46%) of organizations operate in “single function teams,” which is really a euphemism for “silos.”

DevOps can help. Not because of its emphasis on agility or speed, but because of its emphasis on collaboration and sharing. It’s not that DevOps practitioners are expected to become experts in other domains; it’s that they’re expected to share goals and collaborate on how best to achieve them – together.

Cross-functional or combined operations teams aren’t meant to replace expertise; they’re designed to bring expertise to the same table and create an environment in which that team can collectively work toward a common goal – that of delivering and deploying a secure application at speed. Such teams often include a cloud expert, an app infra expert – an expert from every domain that’s required.

But it takes more than a seat at a common table to achieve the speed expected in today’s application-driven business. There is a very real danger of creating silos even in a DevOps-driven process through the tools and technology used to automate it.

That’s because different teams bring different tools and technologies to the table. This often leads to more time spent on integration and hand-offs, negating the time savings that should be realized from leveraging technology to automate and orchestrate processes across data center and cloud properties. Integration has always been and continues to be a significant challenge for teams across IT – whether it’s the tools and technologies used to deploy and operate apps and their supporting app services or those that manage the increasingly complex set of devices and services that form the enterprise infrastructure fabric. 

Enterprises have long understood the value of standardization, and when it comes to cloud and DevOps that value holds true. Being able to build a pipeline into which each domain expert can plug in their piece of the delivery and deployment puzzle is critical to speeding up the process and realizing the value of cloud. Cloud-related expertise remains a requirement, but the languages and toolsets that execute on that knowledge should be shared across the team.
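As a conceptual sketch of that kind of shared pipeline, the example below lets each domain expert register a stage with a common runner, so the team shares one toolchain even though the expertise behind each stage differs. The stage names and bodies are purely illustrative and not tied to any specific product.

    # Conceptual sketch: one shared delivery pipeline that each domain expert plugs into.
    # Stage names and bodies are purely illustrative, not a specific product's API.
    from typing import Callable, List, Tuple

    PIPELINE: List[Tuple[str, Callable[[], None]]] = []

    def stage(name: str):
        """Register a pipeline stage under a common, shared convention."""
        def register(func: Callable[[], None]):
            PIPELINE.append((name, func))
            return func
        return register

    @stage("provision network")      # contributed by the networking expert
    def provision_network():
        print("allocating virtual network, subnets, and load balancer")

    @stage("deploy application")     # contributed by the app/dev expert
    def deploy_application():
        print("rolling out containers to the target environment")

    @stage("apply security policy")  # contributed by the security expert
    def apply_security_policy():
        print("applying firewall rules and access policies")

    def run_pipeline():
        for name, func in PIPELINE:
            print(f"--- {name} ---")
            func()

    run_pipeline()

The design choice is that each expert contributes only their own stage, while the conventions for registering and running stages are common to the whole team.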

If you’re using separate toolchains, you’re asking everyone to become an expert in multiple technologies in addition to maintaining their domain expertise. That introduces more opportunities for errors and misconfiguration and ultimately slows down the entire process. By standardizing, organizations can reduce the burden on domain experts and eliminate a source of frustration that too often gets redirected from the technology to the people, causing friction that slows down progress.

It isn’t enough to bring people to the same table if their chairs are facing different directions. By standardizing on tools and technologies, organizations can better enable every member of the team to apply their expertise toward the goal of speeding up delivery and deployment.

By aligning seats at the table with common tools and technologies, organizations can increase the probability of success when adopting cloud and its cousin, DevOps. 

 




IT Careers: How to Get a Job in DevOps | IT Infrastructure Advice, Discussion, Community


If you’ve ever considered working in DevOps, now might be a good time to pursue that option. Over the past few years, demand for IT professionals with experience in DevOps has skyrocketed. At the time of writing, a search for U.S. job postings that include the word “DevOps” turned up 5,733 jobs on Dice, 26,168 on Indeed.com and 65,727 on LinkedIn.

The 2019 Robert Half Technology Salary Guide said that “DevOps Engineer” was one of the hardest IT positions to staff. And that difficulty appears to be driving up wages. The report noted that DevOps engineer salaries range between $90,250 and $178,250, with a median of $110,500.

In addition, the DevOps trend seems unlikely to end anytime soon. The Interop and InformationWeek 2018 State of DevOps report found that 84% of organizations had either already implemented DevOps or planned to do so. That was up from 64% of respondents who said the same thing in 2017, and given the continued buzz around the approach, the 2019 numbers will probably be even higher.

If you already have a job in IT but haven’t yet worked in a DevOps-related position, the transition to a DevOps job shouldn’t be difficult. In fact, if you are already a developer or already work in IT operations, “working in DevOps” might simply mean doing the job you’ve already been doing but at a company that has embraced DevOps principles and practices. However, this change will require you to rethink the way you’ve always done things and adopt a new mindset. On the other hand, you might be interested in becoming a DevOps engineer or DevOps manager, which could require upgrading some of your skills if you don’t have previous DevOps experience.

Read the rest of this article on Information Week.


