Tag Archives: Key

Wine 4.2 Released With Unicode String Normalization & ECC Crypto Key Support



The second bi-weekly development release following last month’s stable debut of Wine 4.0 is now available for testing.

Wine 4.2 was just released. It adds Unicode string normalization, support for ECC cryptographic keys, the ability to mix 32-bit and 64-bit DLLs in the load path, futex-based implementations of more synchronization primitives, and the usual assortment of bug fixes.
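
For readers unfamiliar with the feature, Unicode string normalization is presumably exposed through Win32 calls such as NormalizeString. The sketch below is a minimal, illustrative use of that API, not code from the Wine release itself; it assumes a Windows build environment (MinGW or winegcc) and links against normaliz.

```c
/* Minimal sketch of the Win32 Unicode normalization API that the new
 * Wine support targets. Build as a Windows program (e.g. with MinGW
 * or winegcc) and link against normaliz; error handling is minimal. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "e" followed by a combining acute accent (decomposed / NFD form). */
    const WCHAR decomposed[] = { L'e', 0x0301, 0 };
    WCHAR composed[8];

    /* NormalizationC should collapse the pair into the single precomposed
     * code point U+00E9. A source length of -1 means the input is
     * null-terminated. */
    int len = NormalizeString(NormalizationC, decomposed, -1,
                              composed, ARRAYSIZE(composed));
    if (len <= 0) {
        fprintf(stderr, "NormalizeString failed: %lu\n", GetLastError());
        return 1;
    }

    printf("normalized length (incl. terminator): %d, first code point: U+%04X\n",
           len, (unsigned)composed[0]);
    return 0;
}
```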

The release includes 60 known bug fixes from the past two weeks, among them a fix for poor performance in Source Engine games, many Valgrind-reported memory leak fixes, EA Sports FIFA fixes, a few Far Cry lock-up fixes, and a variety of other game fixes.

More details on the Wine 4.2 changes are available via WineHQ.org.


Full-Stack Engineer: 3 Key Skills


Until fairly recently, most infrastructure professionals learned one area of the data center extremely well and spent their entire careers refining that specialty. Someone might be a storage professional or a networking professional, but rarely did he or she need to know both. And some were hyper-specialized, perhaps focusing on Cisco routers or Linux servers.

While employers are still posting jobs for these types of positions, many are starting to look for IT staff who have broad rather than deep knowledge. As trends like cloud computing, DevOps and containerization have become more prevalent, organizations need IT workers who understand it all: servers, storage, networking, virtualization, applications, security, and even the basics of how the business functions.

Scott Lowe, engineering architect at VMware, likes to refer to this type of well-rounded IT worker as a “full-stack engineer.” He knows the “full-stack” moniker is often used for developers who work on both front-end and back-end programming, but Lowe said he co-opted the term to describe infrastructure/applications engineers who are being forced to move out of the one area where they’ve worked.

Lowe hosts a popular podcast called The Full-Stack Journey, speaks regularly at Interop ITX, and also writes a blog that covers cloud computing, virtualization, networking and open source tools. Network Computing recently spoke with Lowe about why demand is growing for full-stack engineers.

He traced the origins of the full-stack movement to a number of converging trends.

First, he noted that IT groups are under increasing pressure to define the business value for every project or purchase they undertake. For example, if an organization is going to replace a server, IT often needs to justify that update to the business. That means IT professionals “need to be more aware of what technology is being used for. That’s what’s pulling us up the stack,” explained Lowe. Full-stack engineers need to understand which applications are running on the servers and why they are important to the business.

Second, he said that the trend toward cloud computing had made organizations realize that they have an alternative to in-house infrastructure, which has changed their perspective on IT investments. Also, because many organizations are moving workloads to the public cloud, “IT professionals have to shift their skillsets because the skillset they need to be effective and to thrive when those environments are in play are different than the skillsets they needed in order to thrive and be effective in a private data center,” Lowe said.

In addition, many organizations have “an increasing desire and need to use automation as a way of providing more consistent standardized configurations and to make IT organizations more effective,” said Lowe. That, too, is affecting the skills that IT professionals need to have in order to be successful.

So what skills do infrastructure pros need to have if they want to become full-stack engineers? Lowe said three types of skills are key:

1. Automation

Lowe said that there is no one characteristic that defines a full-stack engineer, “but the thing that comes the closest is fully embracing automation and orchestration in everything that they do.” That encompasses a wide range of tools and technologies, ranging from configuration management to containers to infrastructure as code.

2. Public cloud

With the public cloud becoming more prevalent among enterprises, Lowe also advised IT pros to develop their cloud computing skills. He specifically called out Amazon Web Services (AWS) and Microsoft Azure as two particularly important providers.

3. Continuous learning

The last skill on this list isn’t so much a set of knowledge to acquire as a necessary mindset. “Accept or embrace the idea that learning is going to be an integral part of your career moving forward,” advised Lowe. He said that because this is a dynamic and ever-changing industry, “our skillset also has to be dynamic and ever-changing.”

Scott Lowe will offer more advice about moving up the stack at Interop ITX 2018, where he will present “The Full Stack Journey: A Career Perspective.”


Big Data Storage: 7 Key Factors


Defining big data is actually more of a challenge than you might think. The glib definition talks of masses of unstructured data, but the reality is that it’s a merging of many data sources, both structured and unstructured, to create a pool of stored data that can be analyzed for useful information.

We might ask, “How big is big data?” The answer from storage marketers is usually “Big, really big!” or “Petabytes!”, but again, there are many dimensions to sizing what will be stored. Much big data becomes junk within minutes of being analyzed, while some needs to stay around. This makes data lifecycle management crucial. Add to that globalization, which brings foreign customers to even small US retailers. The requirements for personal data lifecycle management under the European Union General Data Protection Regulation go into effect in May 2018, and the penalties for non-compliance are draconian, even for foreign companies: up to 4% of global annual revenue.

For an IT industry just getting used to the term terabyte, storing petabytes of new data seems expensive and daunting. That would certainly be the case with RAID storage arrays; in the past, an EMC salesman could retire on the commissions from selling the first petabyte of storage. But today’s drives and storage appliances have changed all the rules about the cost of capacity, especially where open source software can be brought into play.

In fact, there was quite a bit of buzz at the Flash Memory Summit in August about appliances holding one petabyte in a single 1U rack unit. With 3D NAND and new form factors like Intel’s “Ruler” drives, we’ll reach the 1 PB goal within a few months. It’s a space, power, and cost game changer for big data storage capacity.

Concentrated capacity requires concentrated networking bandwidth. The first step is to connect those petabyte boxes with NVMe over Fabrics on Ethernet, running today at 100 Gbps, with vendors already in the early stages of 200 Gbps deployment. This is a major leap forward in network capability, but even that isn’t enough to keep up with drives designed with massive internal parallelism.
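
A quick back-of-the-envelope calculation shows both where the 1 PB-per-1U figure comes from and why a single 100 Gbps link falls short. The drive count, per-drive capacity, and per-drive throughput in the sketch below are illustrative assumptions, not figures from the article.

```c
/* Back-of-the-envelope numbers for a dense 1U NVMe appliance versus a
 * 100 Gbps network link. Drive count, capacity, and throughput are
 * illustrative assumptions only. */
#include <stdio.h>

int main(void)
{
    const int    drives_per_1u = 32;    /* e.g. a dense EDSFF/"ruler" chassis */
    const double drive_tb      = 32.0;  /* capacity per drive, TB */
    const double drive_gb_s    = 3.0;   /* sustained sequential read, GB/s */
    const double link_gbit_s   = 100.0; /* network link, Gbit/s */

    double chassis_tb   = drives_per_1u * drive_tb;    /* ~1 PB in 1U */
    double chassis_gb_s = drives_per_1u * drive_gb_s;  /* aggregate drive bandwidth */
    double link_gb_s    = link_gbit_s / 8.0;           /* ~12.5 GB/s ceiling */

    printf("Chassis capacity:        %6.0f TB (~1 PB)\n", chassis_tb);
    printf("Aggregate drive speed:   %6.1f GB/s\n", chassis_gb_s);
    printf("100 Gbps link ceiling:   %6.1f GB/s\n", link_gb_s);
    printf("Drives needed to saturate the link: %.1f\n", link_gb_s / drive_gb_s);
    return 0;
}
```

With these assumptions, roughly four drives are enough to saturate the link, while the full chassis can deliver nearly eight times what the network can carry.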

Data compression helps in many big data storage use cases, from deduplicating repetitive images of the same lobby to collapsing repeated chunks of Word files. New compression methods that use GPUs can handle tremendous data rates, giving those petabyte 1U boxes a way of quickly talking to the world.
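
GPU compressors are beyond the scope of a short snippet, but the underlying principle is easy to demonstrate with plain zlib on the CPU: highly repetitive input, standing in for near-identical camera frames or repeated document chunks, collapses dramatically. The buffer contents and sizes below are purely illustrative.

```c
/* Rough illustration of how well repetitive data compresses. This uses
 * plain zlib on the CPU; GPU-based compressors apply the same idea at
 * much higher throughput. Build with: cc demo.c -lz */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void)
{
    /* Highly repetitive input, standing in for near-identical camera
     * frames or repeated chunks of office documents. */
    enum { SRC_LEN = 1 << 20 };                 /* 1 MiB */
    static unsigned char src[SRC_LEN];
    for (size_t i = 0; i < SRC_LEN; i++)
        src[i] = "LOBBY_FRAME_"[i % 12];

    uLongf dst_len = compressBound(SRC_LEN);
    unsigned char *dst = malloc(dst_len);
    if (!dst)
        return 1;

    if (compress2(dst, &dst_len, src, SRC_LEN, Z_BEST_SPEED) != Z_OK) {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }

    printf("original: %d bytes, compressed: %lu bytes (%.0fx smaller)\n",
           SRC_LEN, (unsigned long)dst_len, (double)SRC_LEN / dst_len);
    free(dst);
    return 0;
}
```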

The exciting part of big data storage is really a software story. Unstructured data is usually stored in a key/value format layered on top of traditional block IO, an inefficient approach that tries to mask several mismatches. Newer designs range from extended metadata tagging of objects to storing data in an open-ended key/value format directly on a drive or storage appliance. These are embryonic approaches, but the value proposition seems clear.
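
To make that mismatch concrete, here is a toy sketch, not a real storage engine, of the indirection a key/value layer has to maintain when it sits on fixed-size blocks: every value needs its own index entry and gets rounded up to whole blocks, wasting space on small objects. All names and sizes are illustrative.

```c
/* Toy sketch of a key/value layer on top of fixed-size blocks: the
 * layer keeps its own index mapping each key to a (block, length)
 * location and pads every value out to a block boundary. */
#include <stdio.h>

#define BLOCK_SIZE 4096
#define MAX_KEYS   64

struct extent {                 /* where one value lives on "disk" */
    char     key[32];
    unsigned first_block;
    unsigned length;            /* real value length in bytes */
};

static struct extent index_table[MAX_KEYS];
static unsigned next_free_block;
static unsigned n_keys;

/* "Write" a value: round it up to whole blocks and record the mapping. */
static void put(const char *key, unsigned value_len)
{
    if (n_keys >= MAX_KEYS)
        return;
    unsigned blocks = (value_len + BLOCK_SIZE - 1) / BLOCK_SIZE;
    struct extent *e = &index_table[n_keys++];
    snprintf(e->key, sizeof(e->key), "%s", key);
    e->first_block = next_free_block;
    e->length = value_len;
    next_free_block += blocks;
    printf("%-12s -> block %u, %u blocks for %u bytes (%u wasted)\n",
           key, e->first_block, blocks, value_len,
           blocks * BLOCK_SIZE - value_len);
}

int main(void)
{
    put("session:42", 137);      /* tiny value still costs one 4 KiB block */
    put("frame:0001", 9000);     /* spills into a third block */
    return 0;
}
```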

Finally, the public cloud offers a home for big data that is elastic and scalable to huge sizes. This has the obvious value of being always right-sized to enterprise needs, and AWS, Azure, and Google have all added strong portfolios of big data services to match. With huge instances and GPU support, cloud virtual machines can emulate an in-house server farm effectively and make a compelling case for a hybrid or public cloud-based solution.

Suffice it to say, enterprises have a lot to consider when they map out a plan for big data storage. Let’s look at some of these factors in more detail.



