Tag Archives: Enterprise

Radeon Pro Software for Enterprise 19.Q3 for Linux Released



Wednesday marked the release of AMD’s Radeon Pro Software for Enterprise driver package for Windows and Linux.

The Radeon Pro Software for Enterprise 19.Q3 on the Windows side added more optimizations for workstation software, wireless VR visualization, and other bits to improve the AMD Radeon Pro support in the workstation software ecosystem. On the Linux side, the changes are a bit more tame.

Per the release notes for the Radeon Pro Software for Enterprise 19.Q3 Linux driver release, the changes come down to a short list of fixes and known issues:

– Fixed display issues that users may encounter with some video settings in Autodesk MotionBuilder.
– Fixed some display issues that may be encountered while performing Smooth Shade mode with rotations in Houdini.
– Audio end points may not show in control panel options with legacy products.
– System issues may be observed with 1024×768 resolution virtual display creation using EDID emulation.

This 19.Q3 Radeon Pro driver package for Linux systems is officially supported on RHEL/CentOS 7.6 and RHEL/CentOS 6.10, Ubuntu 18.04.2 LTS, and SUSE SLED/SLES 15. Unfortunately, there are no official builds yet for RHEL 8.0 or the newly-updated Ubuntu 18.04.3 LTS with its revised Linux kernel as part of the HWE stack upgrade.

The new Radeon Pro Software for Enterprise 19.Q3 for Linux driver can be downloaded from AMD.com.


Can You Hear Me Now? Staying Connected During a Cybersecurity Incident | Cybersecurity


We all know that communication is important. Anyone who’s ever been married, had a friend, or held a job knows that’s true. While good communication is pretty much universally beneficial, there are times when it’s more so than others. One such time? During a cybersecurity incident.

Incident responders know that communication is paramount. Even a few minutes might mean the difference between closing an issue (thereby minimizing damage) vs. allowing a risky situation to persist longer than it needs to. In fact, communication — both within the team and externally with different groups — is one of the most important tools at the disposal of the response team.

This is obvious within the response team itself. After all, there is a diversity of knowledge, perspective and background on the team, so the more eyes on the data and information you have, the more likely someone will find and highlight pivotal information. It’s also true with external groups.

For example, outside teams can help gather important data to assist in resolution: either technical information about the issue or information about business impacts. Likewise, a clear communication path with decision makers can help “clear the road” when additional budget, access to environments/personnel, or other intervention is required.

What happens when something goes wrong? That is, when communication is impacted during an incident? Things can get hairy very quickly. If you don’t think this is worrisome, consider the past few weeks: two large-scale disruptions impacting Cloudflare (rendering numerous sites inaccessible) and a disruption in Slack just occurred. If your team makes use of either cloud-based correspondence tools dependent on Cloudflare (of which there are a few) or Slack itself, the communication challenges are probably still fresh in your mind.

Now imagine that every communication channel you use for normal operations is unavailable. How effective do you think your communication would be under those circumstances?

Alternate Communication Streams

Keep in mind that the middle of an incident is exactly when communications are needed most — but it also is (not coincidentally) the point when they are most likely to be disrupted. A targeted event might render critical resources like email servers or ticketing applications unavailable. A wide-scale malware event might leave the network itself overburdened with traffic (impacting potentially both VoIP and other networked communications), etc.

The point? If you want to be effective, plan ahead for this. Plan for communication failure during an incident just like you would put time into preparedness for the business itself in response to something like a natural disaster. Think through how your incident response team will communicate with other geographic regions, distributed team members, and key resources if an incident should render normal channels nonviable.

In fact, it’s often a good idea to have a few different options for “alternate communication channels” that will allow team members to communicate with each other depending on what is impacted and to what degree.

The specifics of how and what you’ll do will obviously vary depending on the type of organization, your requirements, cultural factors, etc. However, a good way to approach the planning is to think through each of the mechanisms your team uses and come up with at least one backup plan for each.

If your team uses email to communicate, you might investigate external services that are not reliant on internal resources but maintain a reasonable security baseline. For example, you might consider external cloud-based providers like ProtonMail or Hushmail.

If you use VoIP normally, think through whether it makes sense to issue prepaid cellular or satellite phones to team members (or at least to have a few on hand) in the event that voice communications become impacted. In fact, supplementing voice services with external cellular or satellite links can in some cases also provide an alternate network connectivity path, which could be useful in the event network connectivity is slow or unavailable.

Planning Routes to Resources and Key External Players

The next thing to think through is how responders will gain access to procedures, tools and data in the event of a disruption. For example, if you maintain documented response procedures and put them all on the network where everyone can find them in a pinch, that’s a great start… but what happens if the network is unavailable or the server it’s stored on is down? If it’s in the cloud, what happens if the cloud provider is impacted by the same problem or otherwise can’t be reached?

Just as you thought through and planned alternatives for how responders need to communicate during an event, so too should you think through what they’ll need to communicate and how they’ll get to the important resources they’ll need.

In the case of documents, this might mean maintaining a printed book somewhere that they can physically access — in the case of software tools, it might mean keeping copies stored on physical media (a USB drive, CD, etc.) that they can get to should they need it. The specifics will vary, but think it through systematically and prepare a backup plan.

Extend this to key external resources and personnel your team members may need access to as well. This is particularly important when it comes to three things: access to key decision-makers, external PR, and legal.

In the first case, there are situations where you might need to bring in external resources to help support you (for example, law enforcement or forensic specialists). In doing that, waiting for approval from someone who is unavailable because of the outage or otherwise difficult to reach puts the organization at risk.

The approver either needs to be immediately reachable (potentially via an alternate communication pathway as described above) or, barring that, have provided approval in advance (for example, preapproval to spend money up to a given spending threshold) so that you’re not stuck waiting around during an event.

The same is true for external communications. You don’t want to find your key contact points and liaisons (for example to the press) to be MIA when you need them most. Lastly, it is very important to have access to legal counsel, so make sure that your alternative communication strategy includes a mechanism to access internal or external resources should you require their input.

The upshot is that the natural human tendency is to overlook the fragility of dependencies unless we examine them systematically. Incident responders need to be able to continue to operate effectively and share information even under challenging conditions.

Putting the time into thinking these things through and coming up with workarounds is important to support these folks in doing their job in the midst of a cybersecurity event.


Ed Moyle is general manager and chief content officer at Prelude Institute. He has been an ECT News Network columnist since 2007. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as author, public speaker and analyst.






The Looming Skills Crisis in the Epicenter of Your Enterprise | IT Infrastructure Advice, Discussion, Community


We often hear about skills shortages in “hot” fields like security or cloud or artificial intelligence — the roles that make flashy headlines. But there is another massive skills gap being largely overlooked, that if not addressed, could have extraordinary consequences on the success of businesses. That skills gap lies in the very heart of your enterprise: in the data center.

Every digital transformation effort runs through the data center. Modern enterprises need a modern data center. But despite being the lifeblood of the business, the data center hasn’t evolved at the same pace as the rest of the enterprise. Technology alone won’t modernize the data center though – it takes people.   

According to a report from the Uptime Institute, many data center staff simply don’t have the skills needed to modernize the data center. They lack experience in hybrid environments, software, and automation. Data center staff are also getting older, and businesses are struggling to fill open positions. Meanwhile, the people who do have those “newer” skills aren’t joining data center teams. See above: they’re probably being recruited to security, cloud, or AI teams!

This has left enterprises vulnerable in one of the most important technical functions in the business. To mitigate this skills gap, enterprises need a two-pronged approach: invest in automation and double down on training and retaining data center staff.

Automation is not a four-letter word

Embracing more automation in the enterprise may change jobs and roles, but it won’t replace the need for IT staff. Rather, it will augment and assist humans. Ultimately, automation could be the thing that makes the data center “cool” again, because the job will no longer be about memorizing CLI commands or IP addresses, which feels old and archaic. Instead, automation takes the mundanity out of the equation, making the work about streamlining the provisioning and management of the data center. With automation, data center professionals could potentially run the data center from an app on their phone, or literally use their voice to tell Slack to provision a new server and send an alert when it’s done. Automation also removes the time-consuming bottleneck in the change control process, which occurs when there is a request for a new application or a change to something existing. These requests often turn into laborious processes involving multiple steps, documentation, and approvals, but automation can eliminate the manual work and expedite the time it takes to make the necessary change.

Most importantly, automation empowers data center professionals to be proactive and build skills by focusing on more strategic initiatives. It gives them the tools to transform what’s often seen as a cost center into a powerful asset that drives business outcomes. And beyond the satisfaction and day-to-day output of data center professionals, automation will allow organizations to be more agile and forward-looking.

Prioritize training and broadening skillsets

As much as automation will mitigate the skills gap in the data center, it’s not a silver bullet. The success of digital transformation and data center modernization entirely depends on the strength and intellect of the people within the walls of these enterprises. Which is exactly why organizations, large and small, need to up-level and broaden the range of training in the data center.

Training needs to focus on skills development for existing professionals – they need to learn new tools (e.g., software, automation, performance management, analytics) to help enrich their knowledge and extend their capabilities across functions. Data center professionals don’t need to become programmers (most won’t and don’t want to). But the vertical silos within the data center are shifting to a horizontal focus with greater attention to how all the pieces tie together. Think of it as a college major in networking, with minors in software, servers, security, virtualization, and storage.

In addition to providing more in-depth training to existing staff, organizations should also aim to recruit IT professionals with specialized knowledge of software and automation. Those workers may not automatically consider data center jobs, but if businesses can create additional incentives, those skills could greatly augment current teams.

Solving the skills crisis requires both technology and people

Digital transformation is a blessing and a curse. As many doors as it has opened, it has also created legitimate challenges for organizations bold enough to take these projects on. As of today, one of the greatest factors limiting technology-driven initiatives is skills. A data center managed by teams with traditional skills will remain traditional: a legacy. A data center managed by teams with modern skills will become a more strategic asset, automating and empowering a modern business and providing a critical foundation for enabling an autonomous enterprise.

Through a combination of smart use of automation and a focus on people, organizations can begin to address the skills shortage and drive their businesses into the future.





Automating the Enterprise Network: Why Scripting is No Longer the Answer | IT Infrastructure Advice, Discussion, Community


Numerous open-source scripting approaches to network management are currently available in the market. While promising, they may prove to be a high-risk trap for enterprises looking to automate, remove complexity, and make changes to their networks quickly. While instances of expensive network outages are usually kept under wraps, enterprises must be aware of these hidden issues and look beyond traditional scripting to achieve an automated network foundation that ensures business continuity and innovation.

Reducing complexity and improving agility: Drivers for automation

Enterprises are trying to reduce complexity – including lengthy lab testing and implementation cycles – in their networks to improve agility. The end goal is a platform for competitive business innovation with policy-driven, intent-based principles. In addition, network virtualization, SD-WAN, and other new shifts in networking mean the network-as-a-service is no longer predictable.

These dynamics are beginning to make scripting and home-grown coding obsolete, because both are still locked into a static model of the network rather than maintaining the stability of the core business while evolving the network as new initiatives are added dynamically. It’s the network itself that represents the living, evolving business – not the static-scripted or manually-configured model. Months of learning, customizing, and testing not only can’t keep pace but are actually no longer needed. Rather, enterprises need a dynamic knowledge base of the network that can deliver automated remedies, updates, and alerts for configuration and ongoing maintenance and management. This is why intent-based networking is resonating in the industry; validation of business intent, automated implementation, awareness of network state, assurance, optimization, and remediation are all required for the modern network. The question is how to get there fast and efficiently.

Why scripting isn’t the answer

There are several reasons why writing scripts is not the answer for enterprises looking to automate their networks:

  • While Python scripting is a compelling upgrade to slow, manual processes, scripts (unlike telecom protocols) are not standardized, typically don’t follow best practices, and don’t scale in a multi-vendor network. As business intent evolves from new initiatives or acquisitions, scaling the network becomes critical. Scripts are notoriously difficult to adapt to new vendor systems and may inhibit cost savings.

  • Home-grown scripting, unlike purpose-built software, cannot self-adapt to new environments, be programmed to interact with network state, or operate as a machine-learning platform. At best, home-grown scripts provide a one-off, static network configurator for a fixed point in time. As the network changes, the scripts must be updated and re-tested to manage any underlying knowledge base while polling the changing state of network resources. Even setting aside the training, script testing, bug fixing, and maintenance, the user is left with an approach that is static and must be re-scripted manually and continually. If a user wants to make the network policy-driven, he or she must hire or contract further scarce resources to write, test, and maintain custom software.

  • DIY scripting from generic templates or playbooks is another approach that seems promising. However, it requires customizing integrity tests, introduces the same high-risk maintenance issues and testing delays, is unresponsive to policy change, and still requires trained skills and customization. Unlike open-source web platforms, these templates are not backed by massive communities and have the potential to damage enterprise operations.

  • With scripting, the enterprise user is left to build compliance testing software to minimize enterprise risk. Compliance automation requires ongoing audit and action to validate actual network state, ensuring compliance to policy. Even after updating and re-testing scripts, there is no guarantee that problems have been fixed – or that new problems haven’t been introduced.

  • Scripting can be problematic when there are staffing changes in the enterprise. As staff change, the cost of either repeated training or poorly documented scripts creates a cycle of re-creation. Scripts not well understood by new staff tend to be disposable and are replaced, introducing additional testing and ultimately, more risk.

  • Interpreted scripts are slow and inefficient compared to compiled, optimized code. In large configurations, this can impact availability and maintenance windows as the scripts update networks and are subsequently tested. Enterprises are looking to speed up operations, and dynamic, automated changes may make the concept of large-scale network maintenance almost disappear.
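The contrast the points above draw between static scripts and declarative intent can be sketched in a few lines of Python (the device names and state model here are invented for illustration): a static script replays a fixed command list regardless of what the network looks like, while an intent-based approach declares desired state and computes only the changes needed to reconcile the actual state against it.

```python
# Toy contrast: imperative script vs. declarative intent.
# All device names and the state model are invented for illustration.

# Imperative style: a fixed sequence of commands, blind to current state.
STATIC_SCRIPT = [
    ("router-1", "set vlan 10"),
    ("router-2", "set vlan 10"),
]

# Declarative style: the state we want, regardless of how the network looks now.
DESIRED = {"router-1": {"vlan": 10}, "router-2": {"vlan": 10}}


def reconcile(desired: dict, actual: dict) -> list:
    """Return only the changes needed to move actual state to desired state."""
    changes = []
    for device, want in desired.items():
        have = actual.get(device, {})
        for key, value in want.items():
            if have.get(key) != value:
                changes.append((device, f"set {key} {value}"))
    return changes


# router-1 is already correct, so an intent-based run touches only router-2;
# the static script would have re-applied both commands unconditionally.
actual = {"router-1": {"vlan": 10}, "router-2": {"vlan": 20}}
print(reconcile(DESIRED, actual))
# -> [('router-2', 'set vlan 10')]
```

Real intent-based systems add validation, assurance, and continuous polling of network state on top of this reconciliation loop, but the core design difference is visible even in the toy: the declarative version stays correct as the network drifts, whereas the static script must be manually rewritten and re-tested for every change.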

The existing model of static scripting, monolithic testing, and ongoing training and maintenance does not support the kind of fast-moving, intent-based networking that is becoming the goal of the modern enterprise. To build a foundation that keeps the business evolving and competitive, enterprises need to move away from traditional scripting and toward intent-based automation.




Oracle Releases Linux 4.14 Based “Unbreakable Enterprise Kernel R5 U2”



Oracle today announced the general availability release of their Unbreakable Enterprise Kernel Release 5 Update 2 that pairs with their RHEL-derived Oracle Linux for offering a Linux 4.14 based kernel with various features on top.

The Unbreakable Enterprise Kernel Release 5 Update 2 is based on the upstream Linux 4.14.35 kernel while adding in Pressure Stall Information patches, the KTask framework for helping with parallelizing CPU-intensive kernel work, DTrace support for libpcap packet capture, a variety of file-system driver fixes, various virtualization updates back-ported from Linux 4.19, various hardware driver updates, Arm platform tuning, and NVMe driver updates back-ported from newer versions of the Linux kernel.

Those wanting to learn more about today’s Oracle Unbreakable Enterprise Kernel Release 5 Update 2 can do so via the Oracle Linux blog.