What’s a TAM and why might you want to be one?




The technical account manager (TAM) is a key customer service role in the enterprise. Here’s everything you need to know.
Read More at Enable Sysadmin



What’s AWS Conveying With Its Acquisition of Wickr?


Last week, AWS announced the acquisition of encrypted messaging provider Wickr in a company post by CISO Stephen Schmidt. Wickr provides collaborative communication services, such as messaging, content sharing, and video and voice communications, but it’s primarily known for its privacy features.

Encrypted communications and collaboration make for a savvy acquisition in 2021, as the technology will help AWS expand its enterprise and federal offerings.

Enterprise encrypted communications

Encryption and privacy have been getting a lot of attention recently. “Privacy is a basic human right” is the new mantra from tech CEOs. I’ve heard it, or variations of it, from Satya Nadella at Microsoft, Chuck Robbins at Cisco, Tim Cook at Apple, and others. While the statement seems irrefutable, its interpretation and implications vary widely.

It is easy to understand privacy in a personal sense. We all have information that we prefer not to share. It might be age, weight, medications, income, or any number of other concerns. It’s a basic concept that, as individuals, we control our information. Even if we share private information with a doctor or attorney, we have the expectation that they will also keep it private.

But things get more complicated at work. If an employee arrives at work late, that’s relevant information to a manager. Is it any different if a remote employee checks email late? To whom does “privacy is a right” apply when the employee is accessing company data or company systems using company software or services? Employees should assume that corporate communications are subject to monitoring, including highly automated technologies that track usage, sentiment, education level, and policy compliance.

However, surveillance becomes much more difficult when communications are encrypted end-to-end or automatically destroyed. That’s what Wickr offers, and these are rare features in enterprise communications. End-to-end encryption (E2EE) is not a crime. If privacy is a basic right, then everyone deserves the right to encrypt their communications, and there is nothing suspicious about restricting who can see or hear private conversations.
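
To make the end-to-end property concrete, here is a minimal sketch in Python using the well-known third-party cryptography package. It illustrates the principle only, not Wickr’s actual protocol: only holders of the key can read the message, while a relay in the middle sees nothing but ciphertext.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is shared only by the two endpoints, never by the relay.
key = Fernet.generate_key()
alice, bob = Fernet(key), Fernet(key)

ciphertext = alice.encrypt(b"meet at noon")

# A server relaying `ciphertext` sees only opaque bytes; without the key,
# any attempt to decrypt or tamper with the token raises InvalidToken.
print(bob.decrypt(ciphertext))  # b'meet at noon'
```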

Read the rest of this article on NoJitter.




What’s the Best Service Mesh Proxy?


The service mesh space is crowded with offerings, each purporting to have a unique value proposition. Despite this, many mesh offerings have the exact same engine under the hood: the general-purpose Envoy proxy.

Of course, there are many reasons to build on top of Envoy. First and foremost is that writing a proxy is hard—just ask the folks at Lyft, NGINX, or Traefik. A modern proxy needs to be both highly performant and highly secure, because it sits in the critical path for every application in the service mesh. If a proxy introduces new security vulnerabilities or significantly degrades performance, it will have a big impact on your applications.
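
To see why the proxy’s performance and safety matter so much, here is a toy sketch of the forwarding loop at the heart of any TCP proxy, written in Python for readability (real data-plane proxies such as Envoy and linkerd2-proxy are native code, and the addresses below are hypothetical). Every byte your applications exchange crosses a loop like this.

```python
import asyncio

UPSTREAM = ("127.0.0.1", 9090)  # hypothetical app this sidecar fronts

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Copy bytes in one direction until EOF. Every application byte
    # crosses this loop, so any slowness or bug here hits every request.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    try:
        upstream_reader, upstream_writer = await asyncio.open_connection(*UPSTREAM)
    except OSError:
        client_writer.close()
        return
    # Shuttle bytes both ways concurrently: client <-> upstream.
    await asyncio.gather(
        pipe(client_reader, upstream_writer),
        pipe(upstream_reader, client_writer),
        return_exceptions=True,  # a reset on one side just ends the session
    )

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```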

However, the danger of using a general-purpose proxy is that, to use your mesh well, your users will also need to learn how to use, configure, and troubleshoot that proxy—in short, they will need to become proxy experts as well as service mesh experts. While that cost may be acceptable for some organizations, many find that the tradeoff isn’t worth the additional feature set.

For example, browsing Istio’s repo on GitHub, you can see that, at the time of this writing, there are over 280 open issues referencing Envoy. This suggests that Envoy remains an active source of friction for Istio users.

A better choice: building a service-mesh-specific data plane

Imagine for a moment that you were freed of these constraints and could create the ideal service mesh proxy. What would that look like? For starters, it would need to be small, simple, fast, and secure. Let’s dig into this a bit.

Small – Size matters: Your proxy sits beside every single application in your mesh and intercepts, and potentially transforms, every call to and from your apps. The lighter it is, and the lower its performance and compute tax, the better off you are. Heavy-weight proxies need to provide heavy-weight benefits to offset the additional cost of running a proxy for every app. If we were building our new mesh today, we’d pick the smallest possible proxy.

Simple – KISS: Or, as my drill sergeant used to say, keep it simple, buddy. (Well, he didn’t actually say “buddy”.) Every feature your proxy implements is offset by security, performance, and size costs. Adopting an all-purpose proxy may seem attractive because it will likely do more than your control plane needs. Unfortunately, a feature implemented in the proxy that isn’t used by the control plane is wasted and, even worse, while it helps the mesh developer, it hurts their customers by exposing them to more vulnerabilities and more operational complexity. A perfect mesh has a proxy that implements only the features it needs and nothing more.

Fast – High speed, low drag: Any latency your proxy adds to a transaction is latency added to your application. Now, to be clear, there are a lot of ways a service mesh can make your application faster, including optimizing which endpoints it talks to and changing how inter-app traffic is handled, but the slower the proxy, the slower the mesh. The ideal proxy would be extremely fast. That means it would be written in a language that compiles to native code and isn’t garbage collected (GC): native code for raw execution speed, and no GC because, as useful as it is, collection pauses periodically slow the proxy (a toy demonstration of GC pauses appears after these four points).

Secure – First, do no harm: Any time you add a new piece of software to your stack, you add a new avenue for vulnerabilities. A service mesh is a critical part of your infrastructure, particularly if you rely on it to secure all inter-app communication. The proxies will have access to every piece of PII, PCI, PHI, and any other data your application processes. So, as we consider those super-fast native-code languages, we need to weigh their security impact. C and C++ are great for performance, but they are vulnerable to all sorts of memory management exploits. Writing our own proxy, we would want to write or adopt one in Rust, which gives you the speed of C and C++ with much stronger memory guarantees.
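
As the toy demonstration of the GC point promised above, the following Python script times bursts of small allocations with the cycle collector enabled and disabled. Python stands in here for garbage-collected runtimes generally; exact numbers will vary by machine, but the occasional burst that absorbs a collection pause is exactly the tail latency a per-request proxy cannot afford.

```python
import gc
import time

def worst_burst(iterations: int) -> float:
    """Return the slowest of many small allocation bursts, in seconds."""
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        nodes = [[] for _ in range(100)]
        for node in nodes:
            node.append(node)  # reference cycles only the GC can reclaim
        worst = max(worst, time.perf_counter() - start)
    return worst

gc.enable()
with_gc = worst_burst(10_000)

gc.disable()  # no cycle collection: the cycles above now simply leak
without_gc = worst_burst(10_000)
gc.enable()

print(f"worst burst with GC:    {with_gc * 1e6:8.0f} us")
print(f"worst burst without GC: {without_gc * 1e6:8.0f} us")
```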

Where Does That Leave Us?

The perfect service mesh implementation wouldn’t use a general-purpose proxy, but would instead use a service-mesh-specific proxy—one that does no more than the mesh needs and that is written in a performant language with strong security guarantees, like Rust.

The Linkerd “micro-proxy”

The Linkerd project chose to do exactly that, in the form of the Linkerd “micro-proxy”, designed to keep the data plane’s resource cost and vulnerability surface as small as possible. William Morgan, the author of the “meshifesto” referenced at the beginning, goes into depth about why Linkerd chose to write its own proxy. To keep complexity down, keep compute costs low, and build the fastest mesh on the market while minimizing the proxy’s security impact, Linkerd wrote linkerd2-proxy. The result is the fastest, lightest-weight, and easiest-to-use service mesh available. Unlike many other mesh offerings, Linkerd only works on Kubernetes, but we—and our adopters—are happy with that trade-off.

Wrapping Up

We’ve covered a lot of ground in this article and its first part—from what a service mesh is to why its proxy matters. Hopefully, you now have a sense of why service meshes are compelling and what choices and tradeoffs service mesh implementations make. Hopefully, you’ve also seen how, while the choice of proxy is important in shaping the mesh itself, the proxy is just that—an implementation detail. The real value of a service mesh is in the capabilities, performance, and security outcomes it provides.

Jason Morgan is Technical Evangelist for Linkerd at Buoyant.

Read part one of this article: Service Mesh and Proxy in Cloud-Native Applications




A Look At What’s On The Table For Linux 5.3 Features


With the Linux 5.2 kernel due to be released in a few weeks, an event that will open the Linux 5.3 merge window, here is a look at some of the likely features coming in this next version of the Linux kernel.

Based upon our close monitoring of the different “-next” Git branches of the Linux kernel and its mailing lists, here is a look at what you’re likely to see merged for Linux 5.3 in July. Linux 5.3 will then debut as stable in September.

– AMD Radeon RX 5700 “Navi” support is coming!

– Continued HMM work for AMDGPU as well as PowerPlay improvements.

– Adreno 540 support within MSM DRM.

– The Ingenic KMS driver is new in the DRM-Next tree.

– Nouveau Turing TU116 support.

– HDR support for the Intel graphics driver for use with Icelake and Geminilake hardware and newer.

– Better Intel performance with FSGSBASE.

– Icelake NNPI support, the Nervana Neural Network Processor for Inference.

– Official Zhaoxin x86 CPU support.

– Intel UMWAIT support.

– LZ4 in-place decompression for the EROFS read-only file-system.

– Better performance for case-insensitive EXT4 lookups.

– Possibly SMR/zoned device support for Btrfs, which has also seen reworked locking code.

– Other Icelake bits.

– Support for the Wacom MobileStudio Pro and Wacom Intuos Pro Small.

– Possibly the long work-in-progress LOCKDOWN patches.

– /proc/pid/arch_status support, which currently reports AVX-512 usage.

– ACRN guest hypervisor support.

– FEC support for Intel’s ICE network driver.

– Preparations for EFI special purpose memory might be ready.

– 100GbE networking driver improvements.

Of course, there will be a lot more too, so stay tuned for our Linux 5.3 merge window coverage in July. What are you hoping to see or looking forward to most with Linux 5.3?


As Cloud Services Evolve, What’s Next?


It’s no exaggeration to say that cloud computing has become one of the pillars on which modern society is built. Yet while the concept of the cloud has fully entered the popular imagination (most people associate it with digital storage services like Google Drive or Dropbox), in truth, we have only scratched the surface of cloud computing’s potential.

But simply storing documents for simultaneous access is only one facet of the cloud, and arguably not even the most important one. In fact, just as cryptocurrency combined several existing technologies to create a new, profitable whole, so too will cloud computing form the backbone of something new.

What’s next for cloud computing?

It seems clear that the next milestone for the cloud will be mixed reality (MR), virtual reality (VR), and augmented reality (AR). One possibility is virtual conferencing; in contrast to video conferences, where several participants are splashed across a screen, a VR (or AR) meeting allows people to sit together in a virtual conference room. Rather than talking over each other or misreading social cues, attendees can carry on a meeting as if they were physically present in the same room, allowing for more productive (and less tense) gatherings.

Another possibility is a blockchain-based cloud. Combining the two is a logical step: such a system would pair the security of blockchain’s tamper-resistant record with the ease and convenience of cloud computing. In many ways, the two are a perfect match. Like the cloud, blockchain is decentralized: it relies on a network of computers to verify transactions and continually update the record. Dispersing cloud-based blockchain technologies could lead to more secure record-keeping in vital areas such as global finance and manufacturing, where transparency is difficult to come by.
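
To make “tamper-resistant record” concrete, here is a minimal hash-chain sketch in Python, the structure underneath any blockchain. The record strings and helper names are invented for illustration, and a real blockchain adds consensus, signatures, and distributed replication on top.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON form with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    # Each block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain: list) -> bool:
    # Recompute every link; one edited block breaks all later links.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, "shipment 114 left the factory")
append_block(chain, "shipment 114 cleared customs")
print(verify(chain))  # True

chain[0]["record"] = "shipment 114 never existed"
print(verify(chain))  # False: the tampering is immediately evident
```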

Smart cities are also likely to see significant boosts from cloud computing in the near future. Cloud computing would connect with Internet of Things (IoT) devices to allow for improvements like intelligent traffic and parking management, cheaper and better-regulated power and water, and optimization of other automated devices. Smart cities can drive greater scalability of cloud-based computing, which can, in turn, make it easier to create common smart city services that can be reused across other cities.

The edge and the cloud: rivals or friends?

While cloud computing is still considered a relatively new technology, many experts believe it will give way to edge computing, which reduces latency and connectivity costs by keeping relevant data as close to its source as possible. But rather than trumping cloud computing as a whole, edge computing is preferred for systems with specialized needs that require lower latency and faster data analysis, as in fields like finance and manufacturing. Cloud computing, by contrast, works well as part of a general platform or software offering, like Amazon Web Services, Microsoft Azure, and Google Drive.

Ultimately, edge computing will be a tool that works alongside cloud computing in furthering our technological capabilities. Modern cloud computing hasn’t been around for very long and still has much room for growth. Instead of one form of computing replacing the other in handling data and the Internet of Things (IoT), the two work together to optimize computing and processing performance. As we continue to develop new technologies, both cloud and edge computing will be just two of the many ways we optimize and navigate our highly interconnected world.

From its conception as an amorphous database of information accessible from any computer on a given network, to its future incarnations as a medium for mixed reality and blockchain, to new technologies like edge computing that work alongside it, the cloud has come a long way in a short time. It’s easy to see that the future of the cloud is bright, and cloud computing is only going to become more capable as we move forward.



