
Experimental RADV Code Allows Vulkan Ray-Tracing On Older AMD GPUs


RADEON --

AMD currently supports Vulkan ray-tracing only with their Radeon RX 6000 series graphics cards, but there is now independent work on Mesa’s unofficial Radeon Vulkan driver (RADV) to bring ray-tracing to older GPU generations like Vega and Polaris.

Joshua Ashton, who is known for his work on VKD3D-Proton, DXVK/D9VK, and related projects while working under contract for Valve, has been experimenting with bringing RADV Vulkan ray-tracing to pre-RDNA2 GPUs.

While RDNA2 GPUs offer hardware acceleration for BVH ray intersection tests, there is little other new silicon dedicated to ray-tracing in these latest consumer GPUs. Those ray intersection tests can also be handled by a SPIR-V shader on any GPU, and that is the approach Ashton has been experimenting with.

With a lot of work, he has some experimental RADV code working. Besides running the branched code, it requires setting a couple of environment variables (RADV_PERFTEST=rt RADV_DEBUG=nocache). He now has some very basic Vulkan ray-tracing demos rendering on Polaris/Vega graphics processors.
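As a rough sketch of what that looks like in practice (the environment variable names come from the article; the demo binary name is just a placeholder):

```shell
# Enable RADV's experimental ray-tracing path and disable the shader
# cache, as the experimental branch requires.
export RADV_PERFTEST=rt
export RADV_DEBUG=nocache

# Then launch whatever Vulkan ray-tracing demo was built against the
# branched Mesa, e.g.:
#   ./my-vk-rt-demo
```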

RADV in general still needs more Vulkan ray-tracing work before it can handle more advanced Vulkan RT demos or games like Quake II RTX. There is also the in-progress VKD3D-Proton support for DirectX Ray-Tracing over Vulkan ray-tracing, which will be another target to experiment with in time.

So there’s more work ahead before this RADV code is really usable or ready for mainlining to entertain Linux gamers on older graphics cards. It also remains to be seen how this shader-based implementation will perform, and whether it will even be good enough for handling any ray-traced games.

In any case, see Joshua’s blog for more details on this ongoing effort for Vulkan ray-tracing on older generations of AMD GPUs.


Intel Launches Core i5-1155G7 + Core i7-1195G7 Tiger Lake Processors


INTEL --

Intel is kicking off Computex Taipei 2021 week by announcing new 11th Gen Intel Core processors as well as the Intel 5G Solution 5000, their first 5G product intended for next-gen PCs.

The new 11th Gen Tiger Lake processors being announced today are the Core i5-1155G7 and Core i7-1195G7 processors. The new flagship Core i7-1195G7 allows for boosting up to 5GHz (single core turbo), a first for their U-series processors. Like the Core i7-1165G7 and i7-1185G7, the i7-1195G7 has Intel Iris Xe Graphics with 96 EUs and is a 4 core / 8 thread processor design with 12MB cache and supporting DDR4-3200 / LPDDR4x-4266. The i7-1195G7 carries a 2.9GHz base frequency and a maximum single core turbo of 5.0GHz and all-core turbo of 4.6GHz. The graphics clock is also slightly higher at 1.4GHz compared to 1.30~1.35GHz on the existing Core i7 Tiger Lake processors.

Intel says there will be more than sixty designs based on these new Tiger Lake processors debuting ahead of the 2021 holiday season.

The Intel 5G Solution 5000 comes via Intel’s partnership with MediaTek and Fibocom. Initial devices using the Intel 5G Solution 5000 are expected this year, with more designs coming in 2022.


What’s the Best Service Mesh Proxy?


The service mesh space is crowded with offerings, each purporting to have a unique value proposition. Despite this, many mesh offerings have the exact same engine under the hood: the general-purpose Envoy proxy.

Of course, there are many reasons to build on top of Envoy. First and foremost is that writing a proxy is hard—just ask the folks at Lyft, NGINX, or Traefik. To write a modern proxy, you need to ensure that it’s both highly performant and highly secure. A proxy sits on the critical path for every application in a service mesh, so if it introduces new security vulnerabilities or significantly degrades performance, it will have a big impact on your applications.

However, the danger of using a general-purpose proxy is that to use your mesh well, your users will also need to learn how to use, configure, and troubleshoot that proxy—in short, they will need to become proxy experts as well as service mesh experts. While that cost may work for some organizations, many find that the tradeoff isn’t worth the additional feature set.

For example, browsing through Istio’s repo on GitHub you can see that, at the time of this writing, there are over 280 open issues referencing Envoy. This suggests that Envoy remains an active source of friction for Istio mesh users.

A better choice: building a service-mesh-specific data plane

Imagine for a moment that you were freed of these constraints and could create the ideal service mesh proxy. What would that look like? For starters, it would need to be small, simple, fast, and secure. Let’s dig into this a bit.

Small – Size matters: Your proxy sits beside every single application in your mesh and intercepts, and potentially transforms, every call to and from your apps. The lighter weight it is and the lower its performance and compute tax, the better off you are. Heavy-weight proxies need to provide heavy-weight benefits to offset the additional cost of running a proxy for every app. If we were building our new mesh today, we’d pick the smallest possible proxy.

Simple – KISS: Or, as my drill sergeant used to say, keep it simple, buddy. (Well, he didn’t actually say “buddy”.) Every feature that your proxy implements carries security, performance, and size costs. An all-purpose proxy will likely do more than your control plane needs, but a feature implemented in the proxy that isn’t used by the control plane is wasted and, even worse, while it helps the mesh developer, it hurts their customers by exposing them to more vulnerabilities and more operational complexity. A perfect mesh has a proxy that implements only the features it needs and nothing more.

Fast – High speed, low drag: Any latency your proxy adds to a transaction is latency added to your application. To be clear, there are many ways a service mesh can make your application faster, including optimizing which endpoints it talks to and changing how inter-app traffic is handled, but the slower the proxy, the slower the mesh. The ideal proxy would be extremely fast. That means it’d be written in a language that compiles to native code and that isn’t garbage collected (GC): native code for execution speed, and no GC because, as useful as it is, it periodically slows the proxy down.

Secure – First, do no harm: Anytime you add a new piece of software to your stack, you add a new avenue for vulnerabilities. A service mesh is a critical portion of your infrastructure, particularly if you rely on it to secure all inter-app communication. The proxies will have access to every piece of PII, PCI, PHI, and any other data your application processes. So, as we think about those super-fast native-code languages, we need to consider their security impact. C and C++ are great for performance but are vulnerable to all sorts of memory management exploits. When writing our own proxy, we’d probably want to write or adopt a proxy written in Rust. Rust gives you the speed of C and C++ with much stronger memory guarantees.
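To illustrate the kind of guarantee we mean, here is a minimal sketch (not proxy code): Rust’s borrow checker rejects at compile time the sort of dangling-reference pattern that in C or C++ would compile happily and become a use-after-free.

```rust
fn main() {
    // Imagine `payload` is request data buffered by a proxy.
    let payload = vec![1u8, 2, 3];

    // A parser holds an immutable borrow of that buffer.
    let view = &payload;

    // drop(payload); // <- uncommenting this is a COMPILE error:
    //                     `payload` cannot be freed while `view`
    //                     still borrows it. The equivalent C code
    //                     would compile and leave `view` dangling.

    println!("bytes seen: {}", view.len());
}
```

The point is that the whole class of bug is ruled out before the binary exists, rather than being caught (or not) at runtime.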

Where Does That Leave Us?

The perfect service mesh implementation wouldn’t use a general-purpose proxy, but would instead use a service-mesh-specific proxy—one that does no more than the mesh needs and that is written in a performant language with strong security guarantees like Rust.

The Linkerd “micro-proxy”

The Linkerd project chose to do exactly that, in the form of the Linkerd “micro-proxy”, designed to keep the data plane’s resource cost and vulnerability surface as small as possible. William Morgan, the author of the “meshifesto” referenced at the beginning, goes into depth about why Linkerd chose to write its own proxy. To keep complexity down, keep compute costs low, and create the fastest mesh on the market while minimizing the proxy’s security impact, Linkerd wrote linkerd2-proxy. This leaves Linkerd as the fastest, lightest weight, and easiest to use service mesh available. Unlike many other mesh offerings, Linkerd only works on Kubernetes, but we—and our adopters—are happy with that trade-off.

Wrapping Up

We’ve covered a lot of ground in this article and its first part—from what a service mesh is to why its proxy matters. Hopefully, you have a sense of why service meshes are compelling, and what choices and tradeoffs service mesh implementations make. Hopefully, you’ve also seen how, while the choice of proxy is important in shaping the mesh itself, the proxy is just that—an implementation detail. The real value of a service mesh is in the capabilities, performance, and security outcomes it provides.

Jason Morgan is Technical Evangelist for Linkerd at Buoyant.

Read part one of this article: Service Mesh and Proxy in Cloud-Native Applications




GCC Rust Front-End Continues Advancing With Plans To Eventually Upstream


GNU --

While the official/reference Rust compiler implementation is LLVM-based, there continues to be an independent effort working on a GCC Rust front-end as an alternative full implementation of the Rust programming language.

The GCC front-end for Rust continues advancing as an alternative compiler for Rust code, though at the moment it isn’t feature complete or close to it for major language features.

Per recent discussions, the GCC Rust front-end developers are working to establish a GCC Git branch that will mirror their GitHub project. By having a formal GCC Git repository branch, they hope to solidify their intention of getting the front-end upstreamed when ready. Similarly, they have already been enforcing copyright assignment to the FSF in preparation for that eventual upstreaming. Additionally, they are working to establish a separate GCC mailing list for this front-end to handle patch submission/review, complementing their GitHub workflow. Their GitHub repository will continue to co-exist.

Those wanting to monitor the status of the GCC Rust front-end can do so via the weekly status reports. Most recently, they reached their generics milestone and are figuring out their traits support before moving on to pattern matching and imports/visibility. There are also two students working to improve GCC Rust this summer as part of Google Summer of Code.


Linux 5.14 To Feature Enhanced Support For MikroTik 10G/25G NIC


LINUX NETWORKING --

The Linux 5.14 kernel this summer will feature improved support for a new MikroTik 10G/25G NIC.

This network card works with the Linux kernel’s existing Atheros atl1c network driver, which for the 5.14 cycle is being extended to better support the capabilities of this MikroTik NIC. Details on the NIC itself are light, as it seemingly has not launched yet.

Two pull requests so far have made it into “net-next” ahead of the Linux 5.14 cycle for improving this MikroTik 10G/25G NIC. First up is the initial support so the MikroTik NIC with the atl1c driver can enjoy a higher link speed, RX checksum offload, improved TX performance, and other improvements.

Also interesting on the performance front is this pull adapting the atl1c driver to support more RX/TX queues on the network card for spreading CPU load. The MikroTik network card when using this driver now allows for four RX/TX queues rather than just two. This doesn’t change the behavior for other hardware supported by this driver that can’t handle the extra queues.
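For context, on any Linux system you can inspect how many queues a NIC driver exposes. A generic sketch follows; the interface name enp3s0 is a placeholder, and whether the updated atl1c reports channels via ethtool is our assumption, not something stated in the pull request.

```shell
# Queue directories the kernel created for the interface: one rx-N/tx-N
# pair per queue the driver registered.
ls /sys/class/net/enp3s0/queues/

# If the driver implements the ethtool channels API, this reports the
# maximum and currently configured queue ("channel") counts.
ethtool -l enp3s0
```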

With the four RX/TX queues, the performance improvements are paying off: “Simultaneous TX + RX performance on AMD Threadripper 3960X with Mikrotik 10/25G NIC improved from 1.6Mpps to 3.2Mpps per port.”

These MikroTik improvements and a whole lot more are queuing up in net-next ahead of the Linux 5.14 cycle opening up in roughly one month. Another notable network change coming is Intel’s IGC driver supporting AF_XDP zero-copy.