Category Archives: Stiri IT Externe

IT Managers: Your Container Software Needs Hardware


Containers have become the go-to solution for moving code, and this industry trend hardly appears fleeting. Virtual machines seem rigid by comparison. By shedding the virtual machine’s rooted focus on hardware, containers can bring innovation to operating systems like never before, and it’s paying off. Container adoption across industries is growing at an outstanding rate: more than 50 percent of companies are predicted to use containers in 2020, a huge jump from just 20 percent in 2017. Amid this container craze, however, it’s vital that IT teams remember the importance of hardware.

Noisy neighbors

The nature of containers makes them more fluid in provisioning, lifetime and migration, but that same fluidity brings complexity that makes them harder to work with. As the industry sees an overwhelming influx of container-driven operations, data centers are experiencing drastic power pulls that drain energy and dollars, making infrastructure inefficient. Just like noisy neighbors, over-containerization can ruin an environment, and the side effects are vast and varied but almost always detrimental.

There is CI/CD pipeline pressure to push changes into the main branch of code and then out to production fast, a GitOps mentality. On top of that, orchestration projects are still developing, and their evolving functionality can sometimes clash with container strategies. Telemetry data from automated testing and from production applications also needs to be incorporated into CI/CD. All of this involves a lot of package management, orchestration and communication. Finally, once the application is in production, you have to manage both its performance and its resource availability at the bare-metal level.

Ultimately, containers are disconnected from, and unaware of, the hardware beneath their operating system. Data center management tools give IT managers a simple way to bridge the gap between container and data center strategies. These tools monitor power usage, gauge utilization across data centers and present the data needed to create streamlined, efficient operations. Access to this data lets containers do what they were meant to do: provide an elegant, innovative way to move code.

Software needs hardware

Software and hardware aren’t, and can’t be, mutually exclusive. Containers can be selfish when it comes to energy, which is where data center management tools step in to provide granular insight into power consumption, including monitoring options for each server, rack, workload and application. Power monitoring not only lets data center managers identify how much energy containers need to function, but also how best to allocate energy across multiple containers. That information can be the difference between optimized container workloads and ones that drain server energy pointlessly. With it, data center managers can create a centralized management policy that avoids damaging power pulls.
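As a rough illustration of what such a policy can look like in practice, here is a minimal Python sketch that sums hypothetical per-container power readings by rack and flags racks that exceed an assumed budget. The reading format, the server and container names, and the 1,500 W figure are all invented for the example rather than taken from any particular management tool.

```python
# A minimal sketch, assuming a hypothetical list of
# (rack, server, container, watts) power readings.
from collections import defaultdict

RACK_BUDGET_WATTS = 1500  # assumed budget, not a vendor default

readings = [
    ("rack-a", "srv-01", "web-frontend", 310),
    ("rack-a", "srv-02", "batch-worker", 740),
    ("rack-a", "srv-03", "cache", 620),
    ("rack-b", "srv-04", "web-frontend", 280),
]

def rack_power(samples):
    """Sum observed draw per rack so over-budget racks stand out."""
    totals = defaultdict(int)
    for rack, _server, _container, watts in samples:
        totals[rack] += watts
    return totals

for rack, watts in sorted(rack_power(readings).items()):
    status = "OVER BUDGET" if watts > RACK_BUDGET_WATTS else "ok"
    print(f"{rack}: {watts} W ({status})")
```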

When it comes to strategy, understanding hardware can play a key role in container adaptability. Many data center management tools offer power-consumption visualization and predictive modeling to help container deployments keep up with the ever-changing enterprise space. These insights let IT teams allocate power to containers across their environment while saving resources and lowering operational costs over the long term. The result? Containers have the energy to power innovation, and data centers operate at maximum efficiency.

Although power pulls are a major industry concern, this dynamic also presents an opportunity. Because data center management tools provide rare visibility into uptime and cross-platform consumption levels, they make it possible to identify underutilized servers that could support more containers. Containers are far easier to migrate than virtual machines, so such migrations are likely to happen more often, and data center managers will need to make recommendations quickly. With enough insight into long-term utilization trends, teams can consolidate and balance work across devices.
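To make the consolidation idea concrete, here is a hedged sketch of one simple approach: packing hypothetical container CPU demands onto as few servers as possible with a first-fit-decreasing pass. The demand figures and the 16-vCPU server capacity are invented, and real tools would also weigh memory, I/O, power and long-term trends.

```python
# A hedged consolidation sketch using first-fit decreasing bin packing.
def consolidate(demands, server_capacity):
    """Return a list of server 'bins', each a list of (name, cpu) tuples."""
    servers = []
    for name, cpu in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        for srv in servers:
            if sum(c for _, c in srv) + cpu <= server_capacity:
                srv.append((name, cpu))
                break
        else:
            servers.append([(name, cpu)])
    return servers

# Illustrative vCPU demands per container workload.
demands = {"web": 6, "db": 10, "cache": 3, "batch": 7, "logs": 2}
for i, srv in enumerate(consolidate(demands, server_capacity=16), start=1):
    print(f"server {i}: {srv}")
```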

Studies show that the “container craze” isn’t just a craze. It’s a practical new wave of virtualization that more and more companies are implementing to streamline processes. With the right hardware tools and an understanding of data center performance, these two technologies produce efficiencies that will save time, money and a lot of head-scratching.




OpenMandriva Lx 4.1 Aiming To PGO More Packages, Use IWD For WiFi Connections



While OpenMandriva Lx 4.0 was just released last month, we are already looking forward to OpenMandriva 4.1 for a number of improvements and some new features.

OpenMandriva’s developer board provides an interesting look at what’s ahead for OpenMandriva Lx 4.1. Items already completed for this next milestone include migrating to LLVM Clang 9 and using LD.lld and BFD as the default linkers.

Meanwhile, they are currently working on using Profile Guided Optimizations (PGO) for more packages in order to improve the performance of their default binaries. PGO should help the likes of Python, Firefox, OpenSSL, LZ4, MPFR, Ogg, Vorbis, and many of the other packages they are evaluating for PGO’ing.
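For those unfamiliar with PGO, the generic Clang flow is to build an instrumented binary, run it on a representative workload, merge the collected profiles, and rebuild with the profile applied. The Python sketch below only illustrates that generic flow; OpenMandriva actually drives PGO through its package build system, and the source file name, output names and training command here are made up.

```python
# Illustrative only: the generic Clang PGO flow scripted with subprocess.
# "app.c" and the --typical-workload training run are hypothetical.
import glob
import subprocess

SRC = "app.c"

# 1. Build an instrumented binary that writes raw profiles into ./profiles.
subprocess.run(["clang", "-O2", "-fprofile-generate=profiles",
                SRC, "-o", "app-instrumented"], check=True)

# 2. Train: run the instrumented binary on a representative workload.
subprocess.run(["./app-instrumented", "--typical-workload"], check=True)

# 3. Merge the raw profiles and rebuild with the profile applied.
raw_profiles = glob.glob("profiles/*.profraw")
subprocess.run(["llvm-profdata", "merge", "-o", "app.profdata",
                *raw_profiles], check=True)
subprocess.run(["clang", "-O2", "-fprofile-use=app.profdata",
                SRC, "-o", "app"], check=True)
```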

Also notable is switching to Intel IWD as an alternative to WPA_Supplicant for dealing with WiFi connections. They are also eyeing a replacement for Firewalld, other LLVM toolchain changes, moving to a merged /usr layout, updating their Java stack, and other changes.

Those curious what else is coming for OpenMandriva Lx 4.1 can learn more via GitHub.


Problem: Complex Networks Getting Harder to Secure


Public scrutiny of every security breach does a lot for the revenue streams of cyber security companies. It increases public awareness and puts pressure on businesses. But, does it really do anything to address the underlying issue of securing increasingly complex networks?

The expanded attack surface of a modern distributed network, or of a business undergoing digital transformation, creates an environment that’s tailor-made for malicious activity. Before an attack, hackers consider the complexity of a system, because the wider the range of possible targets, the easier it is to find undetected vulnerabilities. It also makes successful penetration more likely and reduces the risk of being caught to almost zero.

What are the specific issues facing networks?

Security issues aren’t just a problem for startups and small businesses, although they are more likely to be targeted due to smaller security budgets and fewer resources. Legacy systems – corporate environments that house multiple databases, geographic locations, and open source components – and enterprises running a mixture of old and new network devices all add to risk.

The fact that networks are increasingly cloud-based and decentralized also widens the attack surface considerably. No longer do companies have one server or ecommerce platform to protect.

Equifax is a case in point. The credit reporting company recently experienced one of the largest data breaches in history. A contributing factor is almost certainly the fact that it has somewhere between 600 and 1,500 separate domains, sub-domains, and perimeters facing the public internet.

Whoever hacked it may well have just thrown a dart at a network map to choose a way in.

Addressing complex security issues frees companies to move forward in an environment that balances risk and innovation with IT strategy.

But, how do we do that?

One solution is to reduce your technology footprint, thus reducing your attack surface. That may be more difficult in an era of global proliferation and IoT, but it isn’t impossible.

5 steps toward improving complex network security

Complexity is unavoidable in the 21st century. The goal isn’t to regress to on-premises, centralized systems. It is to manage complexity in a way that doesn’t inhibit global expansion or increase risk.

James Tarala outlined the problem in his simulcast for the SANS Institute, Implementing and Auditing the Critical Security Controls – In-Depth. Real-world attack surfaces will be found somewhere within the relationship between three components: networks, software, and human beings.

Examples include:

  • Outward-facing servers with open ports (a minimal check is sketched after this list)
  • Service availability within the firewall perimeter
  • Vulnerable code that processes XML, incoming data, email, and office documents
  • Social engineering
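To make the first of those examples concrete, here is a minimal Python check that probes a list of supposedly internet-facing hosts for commonly exposed TCP ports. The address (taken from the documentation range) and the port list are placeholders, and this is a sketch rather than a substitute for a proper scanner.

```python
# A minimal exposed-port check; hosts and ports are illustrative only.
import socket

HOSTS = ["203.0.113.10"]          # documentation-range address, not a real target
COMMON_PORTS = [21, 22, 23, 80, 443, 3306, 3389]

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

for host in HOSTS:
    print(host, "->", open_ports(host, COMMON_PORTS) or "nothing open")
```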

In order to minimize risk, you have to limit opportunities for malicious activity. Here’s how.

1) Eliminate the complexity: This doesn’t involve reducing your network or reach. It simply means getting rid of unnecessary complexity within your system, regardless of its scope. Even the most intelligently designed networks include elements of redundancy or can be managed badly by untrained or inexperienced personnel.

This can lead to:

  • Incomplete or duplicate information
  • Obsolete or invalid rules
  • Overly permissive policies that allow access to those who don’t need it

Performing a security and training audit can limit the possibility of human error that leads to data leaks and breaches. Evaluate current policies periodically to eliminate those that promote network insecurity. A good example would be the growing popularity of a virtual private network (VPN) as a way to guard against DDoS attacks (among other benefits).
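As a hedged sketch of what part of such a policy audit might look for, the snippet below walks a made-up list of firewall rules and flags duplicates and overly permissive “allow” rules on sensitive ports. A real audit would read the actual firewall or cloud security-group configuration rather than a hard-coded list.

```python
# A hedged policy-audit sketch over hypothetical (source_cidr, port, action) rules.
SENSITIVE_PORTS = {22, 3306, 3389}

rules = [
    ("10.0.0.0/8", 443, "allow"),
    ("0.0.0.0/0", 22, "allow"),      # world-open SSH: should be flagged
    ("10.0.0.0/8", 443, "allow"),    # duplicate of the first rule
]

seen, findings = set(), []
for rule in rules:
    if rule in seen:
        findings.append(f"duplicate rule: {rule}")
    seen.add(rule)
    cidr, port, action = rule
    if action == "allow" and cidr == "0.0.0.0/0" and port in SENSITIVE_PORTS:
        findings.append(f"overly permissive rule: {rule}")

print("\n".join(findings) or "no obvious issues")
```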

Companies and their CISOs would be wise to educate employees about the threats associated with not using a VPN or firewall, and then to require employees to use both when they access the internet through the company network. Or the organization could simply install a VPN on the network router, taking the decision out of the employees’ hands. Virtual private networks need not be enterprise-level software; most of the best consumer VPN services today use the same VPN protocols and cryptographic security, and are fine for small to medium-sized businesses.

Once informed about the dangers of unencrypted internet browsing, employees should always connect through the encrypted protection of a VPN, without having to remember to turn it on. Symantec’s 2019 Threat Report shows a trend of SMEs adopting and mandating VPN usage for employees, both in and outside the office. By doing so, they’ve eliminated a bit of complexity and enhanced security. Rinse and repeat.

2) Visually evaluate your vulnerabilities: Threat detection often involves system scanning and monitoring, but what’s often left out of the equation is vulnerability visualization. Without it, it’s easy to overlook how an attack could actually occur.

When you build a real-world model of attack methods as well as possible points of entry, you can connect the two and create a more comprehensive approach to attack prevention.

Best practices recommend these three methodologies:

  • Attack surface modelling that includes network assets, likely targets, potential pathways, and overly permissive policies (a minimal sketch follows this list).
  • Attack simulation that demonstrates potential paths and modes of attack.
  • Patch simulation to determine which fixes will have the greatest impact on security.
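An illustrative take on the first two of these methodologies is to model assets and the connections an attacker could traverse as a simple graph, then enumerate the paths from internet-facing assets to a likely target. Every asset name below is invented, and a real model would carry far more attributes per node.

```python
# A toy attack-path model: enumerate simple paths from "internet" to a target.
from collections import deque

edges = {
    "internet": ["web-server"],
    "web-server": ["app-server"],
    "app-server": ["customer-db", "file-share"],
    "file-share": ["customer-db"],
}

def attack_paths(graph, source, target):
    """Breadth-first search collecting every simple path from source to target."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # keep paths simple (no cycles)
                queue.append(path + [nxt])
    return paths

for path in attack_paths(edges, "internet", "customer-db"):
    print(" -> ".join(path))
```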

3) Maintain control over endpoints: This is a two-part process that involves continual network endpoint monitoring and controlling what endpoints are actually allowed to do. For example, draw a visual perimeter around network endpoints to ensure that communication between network components stays compliant, and institute a protocol that kicks in automatically to curb exposure.
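A minimal sketch of that two-part idea follows, with an entirely hypothetical allow-list, set of observed connections and quarantine hook; in practice the containment step might move a device to an isolated VLAN or revoke its network access.

```python
# Hypothetical endpoint allow-list versus observed connections, with an
# automatic containment hook for violations.
ALLOWED = {
    "pos-terminal-7": {("payments.internal", 443)},
    "hr-laptop-12": {("hr-app.internal", 443), ("mail.internal", 993)},
}

observed = [
    ("pos-terminal-7", "payments.internal", 443),
    ("pos-terminal-7", "203.0.113.50", 8080),   # not on the allow-list
]

def quarantine(endpoint):
    # Placeholder for a real containment action (e.g. isolated VLAN).
    print(f"quarantining {endpoint}")

for endpoint, dest, port in observed:
    if (dest, port) not in ALLOWED.get(endpoint, set()):
        print(f"violation: {endpoint} -> {dest}:{port}")
        quarantine(endpoint)
```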

4) Segment and separate networks: You wouldn’t put all of your investments into the same stock, so why would you keep all of your connections and devices on one network? Segmentation allows you to reduce exploitable assets.

It also drills security controls down to a single machine, partition, or workload, and minimizes the amount of time a hacker is able to spend undetected on your network by slowing his or her progress. Think of it as trapping a burglar in a corridor of locked doors.

5) Prioritize attack surface analytics: The final step is to perform a little network security triage. Testing that analyzes configuration assessments, quantitative risk, and traffic flow will give you insight into risk levels and help you reduce your overall attack surface, regardless of your network’s size or complexity.
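As a rough example of such triage, the sketch below combines per-asset configuration findings, internet exposure and business criticality into a single risk score so remediation can be prioritized. The weights and inputs are invented; real assessments would draw on far richer data such as CVSS scores and observed traffic flow.

```python
# A hedged risk-scoring sketch over invented per-asset data.
assets = {
    "web-server":  {"config_findings": 4, "internet_facing": True,  "criticality": 3},
    "file-share":  {"config_findings": 7, "internet_facing": False, "criticality": 2},
    "customer-db": {"config_findings": 2, "internet_facing": False, "criticality": 5},
}

def risk_score(a):
    exposure = 2.0 if a["internet_facing"] else 1.0   # assumed weighting
    return a["config_findings"] * exposure * a["criticality"]

for name, data in sorted(assets.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: risk {risk_score(data):.1f}")
```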

Final thoughts

Complex networks are the new reality. They’ve erased old perimeters and reinforced the importance of refocusing on network security requirements. The goal of any cyber security professional should be to redefine those perimeters and protect them. Following the steps outlined above should go a long way toward meeting the security challenges of our new, complex networking capabilities.

The use of artificial intelligence and complex, interconnected networks is part of the problem. But, it can also become part of the solution. And AI doesn’t just enhance security; it also contributes to better UI/UX, and improving customer outcomes and satisfaction is the ultimate goal of any enterprise.




FreeBSD 12 Runs Refreshingly Easy On AMD Ryzen 9 3900X – Benchmarks Against Ubuntu 18.04 LTS


While newer Linux distributions have run into problems on the new AMD Zen 2 desktop CPUs (fixed by a systemd patch or fundamentally by a BIOS update) and DragonFlyBSD needed a separate boot fix, FreeBSD 12.0 installed out-of-the-box fine on the AMD Ryzen 9 3900X test system with ASUS ROG CROSSHAIR VIII HERO WiFi motherboard.

I was curious about the FreeBSD support for AMD Zen 2 CPUs and the new X570 motherboards, so this weekend I tried out FreeBSD 12.0. Fortunately, the experience was great! The current FreeBSD 12.0 AMD64 image installed effortlessly: no boot problems, networking worked out-of-the-box with this ASUS X570 motherboard, and there were no other issues, at least as far as core functionality is concerned. So in no time I was off to the races running FreeBSD 12.0 benchmarks on the Ryzen 9 3900X 12-core / 24-thread CPU.

I also attempted to try DragonFlyBSD with its latest daily ISO/IMG following the Zen 2 fix this week by Matthew Dillon. Unfortunately, even with the latest daily ISO I ran into a panic at boot time, so today there are just some FreeBSD 12.0 vs. Ubuntu 18.04 benchmarks for reference. Matthew Dillon did have some interesting comments in our forums about his (great) experiences with these new CPUs, some limitations, and the original DragonFlyBSD issue.

This system test configuration was the Ryzen 9 3900X at stock speeds, 2 x 8GB DDR4-3600 memory, ASUS ROG CROSSHAIR VIII HERO motherboard, and 2TB Corsair Force MP600 PCIe 4.0 NVMe SSD. Ubuntu 18.04 LTS was benchmarked against FreeBSD 12.0 with its default LLVM Clang 6.0 compiler and then again when switching to the GCC 8.3 compiler.

Ubuntu 18.04.2 LTS wins most of the benchmarks, but FreeBSD 12.0 held its ground fairly well in many of them. Switching over to the GCC compiler did help narrow the difference in some of these benchmarks. All of these tests were carried out via the Phoronix Test Suite on both Linux and BSD. Let’s check out some of those interesting numbers.




NVIDIA’s Graphics Driver Will Run Into Problems With Linux 5.3 On IBM POWER



For those using the NVIDIA proprietary graphics driver on an IBM POWER system, it could be a while before seeing Linux 5.3+ kernel support. Upstream has removed code that the NVIDIA binary driver depends upon for supporting the POWER architecture, and, as is usually the case with binary/out-of-tree drivers, upstream developers aren’t concerned that the removal will break NVIDIA driver support.

The POWER changes for Linux 5.3 remove NPU DMA code. In the pull request they do acknowledge this DMA code is “used by the out-of-tree Nvidia driver, as well as some other functions only used by drivers that haven’t (yet?) made it upstream.”

The patch removing the NPU DMA code, written by Linux kernel veteran Christoph Hellwig, does acknowledge that this basically reverts the POWER support for NVIDIA NVLink GPUs. The code is being dropped since it’s no longer used by in-tree kernel code and is thus a burden when it comes to maintaining the upstream DMA code.

IBM developer Alexey Kardashevskiy did warn that this particular code is “heavily” used by NVIDIA’s graphics driver. Hellwig responded, though, “Not by the [driver / code] that actually exists in the kernel tree, so it simply doesn’t matter.”

This isn’t just a function or two being removed; it amounts to 1,280 lines of code stripped out of the kernel that were used by the NVIDIA binary driver on POWER. The NVIDIA POWER support will break on Linux 5.3, but hopefully NVIDIA will be able to come up with a timely solution to fix their driver on 5.3 and newer kernels.