Tag Archives: Ready

When will my instance be ready? — understanding cloud launch time performance metrics



[Source: Oracle Linux Kernel Development]


AMD Has Yellow Carp Ready For Linux 5.14, More Smart Shift Updates + Display Fixes



Along with Intel having wrapped up its graphics driver feature work for Linux 5.14, AMD has sent in another pull request with more feature code ready for its AMDGPU kernel driver in 5.14, likely the last major feature pull of this cycle.

The AMD Radeon kernel graphics driver code for Linux 5.14 has already seen a number of features and improvements queued in DRM-Next. The exciting bits so far for Linux 5.14 on the red side include more Aldebaran accelerator bring-up work, HMM SVM support, PCI Express ASPM being enabled by default for relevant GPUs, TMZ support for Renoir, Van Gogh APU updates, Beige Goby support, GPU hot unplug support, AMD Smart Shift support for laptops, 16 bpc support for use by their Vulkan drivers, and a lot of smaller changes.

Today’s potentially final feature pull request has Yellow Carp ready as the newest RDNA2 GPU. AMD published its initial Yellow Carp hardware enablement driver code earlier this month, and it is set to be introduced in Linux 5.14, continuing the recent trend of providing launch-day open-source AMD GPU support in the mainline kernel.


AMD’s Linux codenames for early hardware bring-up of its GPUs continue to follow the pattern of an X11 color followed by a fish species.

Besides the Yellow Carp support, there are SR-IOV fixes, updates to the new Smart Shift support, GPUVM TLB flushing changes, cleanups for BACO (Bus Active, Chip Off), various DC display code fixes and improvements, and a variety of other internal code clean-ups/changes.

The full list of AMDGPU changes heading to Linux 5.14 by way of DRM-Next can be found in the pull request itself.


Are You Ready to Support the “Branch Office of One”?


While the pandemic is often credited with creating the current work-from-home movement, the fact is it simply accelerated a process of network expansion that was already underway, creating billions of new network edges. For many years, digital innovation efforts have focused on moving applications and network resources to environments that can be reached by any user or any device from any location. Public, private, and hybrid cloud networks, virtualized data centers, and SaaS applications have enabled the broad distribution of networks, resulting in millions of new network edges across LANs, WANs, data centers, and cloud edges. Hybrid work models and the widespread adoption of permanent remote work have created a “branch office of one.”

While this remote work strategy has enabled organizations to be agile, resilient, and adaptive, it has also created complex networking and security issues that few were ready to address when the pandemic hit. As a result, many responded with temporary fixes without considering long-term implications, from network architecture to real estate planning. For example, many organizations now rely on VPN, a technology that has been around for decades, to provide secure remote access. But temporary fixes are a poor match for permanent changes: many companies accelerated cloud migrations and are now trying to retrofit networking and security after the fact.

The challenge is that this new “branch office of one” has a number of serious liabilities. The biggest is that most remote workers now access corporate resources from a home network that may have little to no actual security in place. That network hosts vulnerable devices such as entertainment systems and IoT gadgets, and it is shared with other users accessing school, work, entertainment, shopping, or just general browsing. Second, there is probably no one at that location capable of implementing or troubleshooting complex security and connectivity solutions. This means whatever solution your organization puts in place needs to be as simple as possible to deploy and manage, yet still deliver enterprise-grade performance. Consider this: if you have 1,000 remote workers, and each worker experiences a connection outage only once a year, your IT team is suddenly troubleshooting nearly three complex system failures a day (1,000 outages spread over 365 days), a workload that didn’t exist before, now added to a team already stretched to the breaking point.

In such a situation, simplicity is critical. As much as possible, organizations should be looking for a converged solution that combines networking with advanced security. This will allow them to do things like segment corporate resources from the home network and create and maintain self-healing connections while providing deep inspection on all data, including streaming video and rich media services.

Note, the home network is only one scenario that needs to be addressed. Devices and users are also highly mobile.  So, converged connectivity and security needs to be extended everywhere: at home, in the car, at the coffee shop, or from a park bench or hotel room.  Today, secure remote access solutions must not only secure connections back to corporate resources, but they also must protect the enterprise edge where data, applications, and decision-making all happen locally, often on a temporary, ad hoc network.

The need for a security-driven network

In these cases, it’s more important than ever to implement a security-driven network that allows networking technology and security to be deployed, managed, and operated as a single, unified system. These solutions can range from new Zero Trust Internet Access (ZIA) solutions that provide a more advanced and more secure VPN connectivity option to SD-WAN and SASE working together to optimize user experience while providing end-to-end protection for applications and workflows. 

Of course, as systems converge, they have to support a wider variety of functions, including connectivity, availability, security, and more. That is why these tools also need to be integrated with artificial intelligence and machine learning. When transactions happen locally and at extreme speeds (think smart cars making split-second safety decisions at freeway speeds), security cannot afford a round trip to some remote location to decide what to do. Lag times in security don’t just affect business outcomes and user experience. In today’s hyperconnected world, they can also affect public safety.

The challenge is that in an era of specialization, it is extremely difficult to find a vendor capable of blending advanced network technology, enterprise-grade security, and best-in-class AI and ML. The better approach is to work with a security vendor that has blended critical networking functionality into an advanced security system that can be easily deployed in the cloud, whether as an infrastructure solution or a service, in the data center, on edge network devices, and even as an advanced endpoint security solution.

In today’s networks, computing, networking, and security must operate as an integrated solution. The security-driven networking approach gives organizations the flexibility to deploy security wherever it is needed, whether on-premises, in the cloud, or delivered as a cloud-based service. However, this flexibility should not be an excuse for bolting security on as an afterthought. If you’re not looking at your increasingly distributed network through a security lens, you’re setting yourself up for a more complex and less secure network.

Of course, since protecting distributed and mobile networks has become a critical issue, there is a growing number of networking vendors trying to stitch these different technologies together. The problem is that most of the time, the resulting solution is highly complex, very expensive, and extremely inefficient to deploy and manage. Indeed, in many instances, when the vendor is a networking company, the security capabilities do not meet even minimal standards for protection. The practical reality is that enterprises need best-in-class security that has a proven track record, backed by independent testing and validation.

Networks based on ad hoc solutions and lacking strategic planning often create so much complexity that it’s virtually impossible to effectively support hundreds or thousands of users and devices. Instead, the key elements of any effective security-driven networking strategy must be broad, integrated, and automated. Security needs to be an organic extension of the network, where all functions can be easily and remotely deployed, configured, managed, and orchestrated through a unified console. With a unified and integrated approach to security and networking, digital innovation can proceed uninterrupted, enabling users to effectively and securely operate from a branch office of one or from just about any location.

Jonathan Nguyen-Duy is vice president, global field CISO team at Fortinet.




Linux is ready for the end of time


At 03:14:08 Greenwich Mean Time (GMT, aka Coordinated Universal Time) on January 19, 2038 (that’s a Tuesday), the world ends. Well, not in the biblical Book of Revelation sense. But what will happen is that the time value in 32-bit Unix-based operating systems, like Linux and older versions of macOS, runs out of numbers and starts counting time with negative values. That’s not good. We can expect 32-bit computers running these operating systems to have fits. Fortunately, Linux’s developers already had a fix ready to go.

The problem starts with how Unix tells time. Unix, and its relations — Linux, macOS, and other POSIX-compatible operating systems — date the beginning of time from the Epoch: 00:00:00 GMT on January 1, 1970. The Unix family measures time by the number of seconds since the Epoch.
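To make that concrete, here is a minimal C sketch (standard library only, not code from any of the articles above) that prints the current count of seconds since the Epoch along with the width of time_t on the machine it runs on; a 4-byte time_t is the vulnerable case.

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* time() returns the number of seconds elapsed since the Epoch:
       00:00:00 UTC on January 1, 1970. */
    time_t now = time(NULL);

    /* Cast to long long so the format string is correct whether
       time_t is 32-bit or 64-bit on this build. */
    printf("Seconds since the Epoch: %lld\n", (long long)now);
    printf("sizeof(time_t): %zu bytes\n", sizeof(time_t));
    return 0;
}

On a modern 64-bit Linux system the second line reports 8 bytes; it is the builds that report 4 bytes that hit the 2038 limit described above.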

[Source: ZDNet]

Linux 5.6 Is The First Kernel For 32-Bit Systems Ready To Run Past Year 2038





On top of all the spectacular work coming with Linux 5.6, here is another big improvement that flew under my radar until today: Linux 5.6 is slated to be the first mainline kernel ready for 32-bit systems to run past the year 2038!

The “Year 2038” problem arrives on 19 January 2038, when the Unix timestamp can no longer fit within a signed 32-bit integer. For years the Linux kernel developers have been working to mitigate this issue, also commonly referred to as the “Y2038” problem, but Linux 5.6 (with the work potentially back-ported to the 5.4/5.5 stable branches) is the first release where 32-bit kernels should be ready to operate past this threshold.
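As a rough illustration of the threshold, here is a small sketch assuming a build with a 64-bit time_t; it is not code from the kernel work itself.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <time.h>

int main(void)
{
    /* 2147483647 is the largest value a signed 32-bit time_t holds. */
    time_t last = (time_t)INT32_MAX;

    /* With a 64-bit time_t this converts cleanly and prints
       2038-01-19 03:14:07 UTC, the last representable second. */
    struct tm *utc = gmtime(&last);
    if (utc != NULL) {
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
        printf("Last 32-bit timestamp: %s\n", buf);
    }

    /* One second later the 32-bit pattern wraps to INT32_MIN, which
       a 32-bit kernel would read as December 13, 1901. (The narrowing
       conversion is implementation-defined in C; on two's-complement
       machines it wraps as shown.) */
    int32_t wrapped = (int32_t)((int64_t)INT32_MAX + 1);
    printf("Wrapped value: %" PRId32 "\n", wrapped);
    return 0;
}

The kernel-side fix boils down to moving internal timekeeping and the affected system call interfaces to 64-bit time values even on 32-bit architectures, which pushes the overflow roughly 292 billion years into the future.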

[Source: Phoronix]