
AMD Has Yellow Carp Ready For Linux 5.14, More Smart Shift Updates + Display Fixes



Along with Intel wrapping up their graphics driver feature work for Linux 5.14, AMD has sent in another pull request with more feature code ready for their AMDGPU kernel driver in 5.14. This will likely be their last major feature pull of the cycle.

The AMD Radeon kernel graphics driver code for Linux 5.14 has already seen a number of features and improvements queued via DRM-Next. The exciting bits so far for Linux 5.14 on the red side include more Aldebaran accelerator bring-up work, HMM SVM support, PCI Express ASPM being enabled by default for relevant GPUs, TMZ support for Renoir, Van Gogh APU updates, Beige Goby support, GPU hot-unplug support, AMD Smart Shift support for laptops, 16 bpc support for use by their Vulkan drivers, and a lot of smaller changes.

Within today’s potentially final feature pull request, AMDGPU adds support for Yellow Carp, the newest RDNA2 GPU. AMD published their initial Yellow Carp hardware enablement driver code earlier this month, and it’s ready to be introduced in Linux 5.14, continuing the recent trend of providing launch-day open-source AMD GPU support in the mainline kernel.


AMD’s Linux-focused codenames for early hardware bring-up of their GPUs continue to pair an X11 color with a fish species.

Besides Yellow Carp support, there are SR-IOV fixes, updates to the new Smart Shift support, GPUVM TLB flushing changes, clean-ups for BACO (Bus Active, Chip Off), various DC display code fixes and improvements, and a variety of other internal code clean-ups and changes.

The full list of AMDGPU changes heading to Linux 5.14 by way of DRM-Next can be found in the pull request.


The Real Source of Cloud Overspend? The Shift from CapEx to OpEx


The rise of the cloud has changed the face of Big Data. Whether through lift-and-shift or re-architecting, almost every modern enterprise is now managing a hybrid, and usually multi-cloud, Big Data environment.

The problem? The shift to a hybrid environment has created a cost crisis. When enterprise IT organizations receive their first few cloud bills, many are shocked. Bain & Company asked more than 350 IT decision-makers what aspects of their cloud deployment had been the most disappointing. The top complaint was that the total cost of ownership had either remained the same or increased.

Gartner estimates that “through 2020, 80 percent of organizations will overshoot their cloud IaaS budgets due to a lack of cost optimization approaches.” 80 percent!

The CapEx to OpEx challenge

What happens when large-scale cloud migration begins? In an on-prem data center, there is an inherent and internal limit to compute capacity. An on-prem data center will never double its capacity overnight. Any utilization gains are hard-won, and IT teams can struggle to free up resources to meet business demands.

The cloud is seen as the obvious solution to this problem. With AWS, Azure, or Google Cloud, you face none of the baked-in limitations of an on-prem data center. The technical and internal bottlenecks of the legacy architecture vanish.

However, the legacy, on-prem data center operated within a CapEx model. Though the tech was constrained, so was the budget. But as the infrastructure migrates to the cloud, a CapEx model is exchanged for an OpEx model. And here’s where the trouble starts.

In the CapEx framework, the balance sheet was very clear, and projections were simpler. Traditionally, the CFO would oversee strict cost-control mechanisms. Though this translated into constraints on compute capacity, the trade-off was watertight budgeting.

But in the cloud-based OpEx paradigm, controls on how money is spent suddenly become much looser and harder to define, since there is no hard capacity ceiling. For every internal team, an all-you-can-eat approach to resources sounds like the promised land.

An OpEx spending model plus the effectively infinite resources of the cloud is a recipe for overspending. Suddenly, an engineer can spin up a hundred-node cluster in AWS on a Friday, forget about it, go home, and discover a month later that it racked up thousands in cloud costs.
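To make that math concrete, here is a back-of-the-envelope sketch of the scenario. The node count comes from the example above; the $0.192 hourly rate (roughly an m5.xlarge on-demand list price) and the 24/7 runtime are assumptions for illustration only:

```python
# Back-of-the-envelope cost of a forgotten cluster.
# Assumptions (hypothetical): 100 on-demand nodes at $0.192/hour,
# left running around the clock for a month.

nodes = 100
hourly_rate = 0.192          # USD per node-hour (assumed list price)
hours = 30 * 24              # one month, running 24/7

monthly_cost = nodes * hourly_rate * hours
print(f"Forgotten cluster cost: ${monthly_cost:,.2f}")  # ~ $13,824.00
```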

How to gain control in an OpEx model

Controlling spend in an OpEx model requires one thing: visibility.

Even with the best cloud migration strategy, and even the most dedicated attempts to curb cost, there are inherent features of the cloud landscape that make managing resources – and therefore cost – much more difficult.

In a large system, hundreds of thousands of instances may be supporting thousands of workloads, all running big data computations for a range of internal customer teams. The range of ways to provision resources and configure instances is much larger in the cloud than in a legacy architecture. With so many live instances, the cost implications can be very hard to track.

The answer? Workloads need to be rightsized. And the key to rightsizing lies in visibility. You need to determine usage patterns, understand average peak computing demand, map storage patterns, determine the number of processor cores required, and treat nonproduction and virtualized workloads with care. To stay rightsized post-migration, you need full insight into the CPU, memory, and storage actually required.
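As a rough illustration of what that visibility work can look like, the sketch below uses boto3 to pull two weeks of CPUUtilization data from CloudWatch for each running EC2 instance and flags likely rightsizing candidates. It assumes AWS credentials are configured, and the 40 percent peak threshold is an arbitrary illustration, not a recommendation:

```python
# Minimal visibility sketch: flag running EC2 instances whose peak CPU
# usage over two weeks suggests they may be oversized.

import datetime
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        iid = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": iid}],
            StartTime=start,
            EndTime=end,
            Period=3600,                  # hourly datapoints
            Statistics=["Average", "Maximum"],
        )["Datapoints"]
        if not datapoints:
            continue
        peak = max(p["Maximum"] for p in datapoints)
        avg = sum(p["Average"] for p in datapoints) / len(datapoints)
        if peak < 40.0:                   # never breaks 40% CPU: oversized?
            print(f"{iid}: avg {avg:.1f}%, peak {peak:.1f}% -> rightsizing candidate")
```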

Get the software you need

This sort of visibility is what gives teams an understanding of the cloud costs they are generating. However, the data and insights that IT operations teams need are almost impossible to acquire without the right tooling. Most organizations don't have the staff or hours to dedicate to reducing cloud spend at a granular level, and even someone with the skills would be playing whack-a-mole with workload management.

IT leaders need visibility to determine usage patterns, understand average peak computing demand, map storage patterns, and determine the number of processor cores required. They need software that can take a targeted approach to rightsizing by identifying wasted, excess capacity in big data cluster resources. By monitoring cloud and on-premises infrastructure in real time, and by pairing machine learning with active resource management, such software can automatically re-capture wasted capacity from existing resources and schedule additional tasks onto those servers.
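A toy sketch of that re-capture idea: given measured utilization per node, greedily place pending tasks onto nodes with headroom. The utilization figures and the 85 percent ceiling are made-up illustrations; real tooling would work from live telemetry and predictive models rather than static numbers:

```python
# Greedily pack pending tasks onto the least-loaded nodes with headroom.
# All numbers are illustrative.

nodes = {"node-1": 0.35, "node-2": 0.80, "node-3": 0.20}  # measured CPU utilization
pending_tasks = [0.25, 0.15, 0.30]                         # estimated CPU demand per task
CEILING = 0.85                                             # keep utilization below this

for task in pending_tasks:
    # candidate nodes that can absorb the task without breaching the ceiling
    candidates = [(load, name) for name, load in nodes.items() if load + task <= CEILING]
    if not candidates:
        print(f"task ({task:.2f}) deferred: no node has headroom")
        continue
    load, chosen = min(candidates)       # pick the least-loaded candidate
    nodes[chosen] += task
    print(f"task ({task:.2f}) -> {chosen} (now at {nodes[chosen]:.2f})")
```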


The Serverless Security Shift


Security in the cloud has always followed a shared responsibility model. What the provider manages, the provider secures. What the customer deploys, the customer secures. Generally speaking, if you have no control over it in the cloud, then the onus of securing it is on the provider.

Serverless, which is kind of like a SaaS-hosted PaaS (if that even makes sense), extends that model higher up the stack. That extension leaves the provider with most of the responsibility for security and very little for the customer.

The problem is that the ‘very little left’ actually carries the bulk of risk, especially when we consider Function as a Service (FaaS).

Serverless shrinks the responsibility stack

Serverless seeks to eliminate (abstract away) even more of the application stack, leaving very little for the customer (that’s you) to secure. On the one hand, that seems like a good thing. After all, if you have only the application layer (layer 7) to worry about securing, that should be easier than trying to secure the application layer, the platform (web or app server), and its operating system.

Serverless expands entry points

But here's the thing: serverless may shrink the vertical depth of the stack you need to protect, but it simultaneously broadens the horizontal surface by introducing greater decomposition and distribution of that layer.

What this means is that you need to apply security on a function-by-function basis. To put that in perspective for non-developer types, there can be hundreds (thousands, even) of functions per application. So instead of multiple products to secure, you have multiple points of entry to secure.
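As a minimal sketch of what function-level security can look like, here is per-function input validation in the style of an AWS Lambda handler; the event shape and the allowed fields are hypothetical, and the point is that each function enforces its own narrow contract rather than relying on a single perimeter device:

```python
# Per-function input validation in a Lambda-style handler.
# The schema below is hypothetical and specific to this one function.

import json

ALLOWED_FIELDS = {"user_id", "amount"}  # hypothetical contract for this function

def handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "malformed JSON"}

    # Reject anything outside this function's expected contract.
    if not isinstance(body, dict) or set(body) - ALLOWED_FIELDS:
        return {"statusCode": 400, "body": "unexpected fields"}
    if not isinstance(body.get("user_id"), str):
        return {"statusCode": 400, "body": "invalid user_id"}
    if not isinstance(body.get("amount"), (int, float)):
        return {"statusCode": 400, "body": "invalid amount"}

    # ... business logic for this one function goes here ...
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```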

But wait, there’s more.

Each of those functions may be invoking external services or loading externally sourced components. That's nearly a given, because modern applications leverage open source components for about 80% of an app's total functionality on average. That means your responsibility is not only to secure the code your developers write, but the code other developers write as well. That's no small ask. Studies show (through scans and analysis of those components) an 88% growth rate in app vulnerabilities in such packages over the past two years. (Source: Snyk, 2019 State of Open Source Security)

The increase in dependency on externally sourced components, combined with the broadening of the attack surface caused by decomposition, means that securing serverless apps must focus on the code itself. This requires a shift away from after-delivery, network-deployed services and into the CI/CD pipeline.

It means continuous security scans – static and dynamic. It means code reviews. It means increasing attention to what components are used and from where they are obtained. 

It means employing more security practices earlier in the development cycle. To be cliché, it means shifting security left.
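One concrete way to shift a check left is to gate the build on a dependency scan. The sketch below assumes a Python project with pinned dependencies in requirements.txt and the pip-audit scanner installed; any equivalent scanner would fit the same slot in the pipeline:

```python
# Shift-left gate: fail the build if a dependency scan reports known
# vulnerabilities. Assumes pip-audit is installed and requirements.txt
# pins the project's dependencies.

import subprocess
import sys

def dependency_gate(requirements: str = "requirements.txt") -> None:
    result = subprocess.run(
        ["pip-audit", "-r", requirements],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:  # pip-audit exits non-zero on findings
        print("Known-vulnerable dependencies found; failing the build.", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    dependency_gate()
```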

But that doesn’t mean there aren’t more traditional security options available to secure serverless apps. It turns out that you can use familiar security services with serverless to protect and defend against attacks.

Traditional security and serverless

Web application firewalls are designed to intercept, scan, and evaluate application layer requests. That means HTTP-based messages, which are pretty much the bulk of serverless functions today. By forcing function calls (serverless requests) through a web application firewall, you can provide an extra layer of assurance that the inbound request isn’t carrying malicious data/code.
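As a toy illustration of where that inspection layer sits, the sketch below screens a raw request body before the function is ever invoked. The two regexes are crude placeholders; a real WAF relies on large, maintained rulesets rather than a handful of patterns:

```python
# WAF-style inspection placed in front of a function invocation.
# The patterns are deliberately simplistic placeholders.

import re

SUSPICIOUS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # crude SQL-injection pattern
    re.compile(r"(?i)<script\b"),              # crude XSS pattern
]

def waf_check(raw_request_body: str) -> bool:
    """Return True if the request looks safe enough to forward."""
    return not any(p.search(raw_request_body) for p in SUSPICIOUS)

def invoke_function(handler, request_body: str):
    if not waf_check(request_body):
        return {"statusCode": 403, "body": "blocked by WAF layer"}
    return handler(request_body)
```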

API gateways, too, are expanding beyond management functions to include security. An API gateway or API security service can provide capabilities similar to those of a WAF. There are many cloud-native and traditional API security options you can take advantage of to secure serverless applications.
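Here is a minimal sketch of one such gateway-side step, authenticating a request before it ever reaches a function; the in-memory key store is hypothetical, and a production API gateway would add rate limiting, schema validation, and TLS termination on top of this:

```python
# Gateway-side API-key authentication with a hypothetical key store.

import hmac

VALID_KEYS = {"team-a": "s3cr3t-key-a"}  # hypothetical client -> API key store

def authenticate(headers: dict) -> bool:
    client = headers.get("x-api-client", "")
    presented = headers.get("x-api-key", "")
    expected = VALID_KEYS.get(client, "")
    # compare_digest mitigates timing attacks on the key comparison
    return bool(expected) and hmac.compare_digest(presented, expected)

# Usage: reject at the gateway before the request reaches any function.
assert authenticate({"x-api-client": "team-a", "x-api-key": "s3cr3t-key-a"})
assert not authenticate({"x-api-client": "team-a", "x-api-key": "wrong"})
```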

You’ll note that regardless of what methods you employ to secure serverless apps, you are going to need to be familiar with app layer concepts and technologies. That means fluency in HTTP. That means being able to recognize imports of externally sourced components and questioning their status. It means creating a new checklist for your “go live gate” that focuses almost entirely on the application and virtually ignores anything in the stack below it.

Modern means apps

Most of the modern and emerging architectures and deployment models, like serverless, are heavily tilted toward applications. That’s unsurprising in the era of application capital, where apps can make or break businesses. That focus should be reflected in IT and, in particular, security. That means more attention to securing the application layer whether deployed as functions in a serverless model or in the data center.

Serverless security doesn’t have to be a struggle if you pay attention to the apps and focus on securing them.
