
Combatting DNS Hijacking Requires Improved DNS Security


Global DNS hijacking is becoming an increasingly troublesome security threat for the entire Internet. Calls for secure domain authentication using the DNSSEC specifications have been ongoing for years. But while added security is a step in the right direction, we all must understand that a huge portion of our Internet security lies at the feet of a single, private entity: the Internet Corporation for Assigned Names and Numbers (ICANN).

The latest cry for improved domain name system (DNS) security was sent out in late February — and it came directly from ICANN. Those of us in the field of IT security fully understand the security concerns surrounding DNS. Like most early networking mechanisms, the first iterations of DNS contained no security safeguards. Instead, DNS was simply built as a hierarchical, distributed database that matches a hostname (such as networkcomputing.com) to the IP address that computers use to communicate. The concern is that without the necessary protections in place, DNS answers can be intentionally or unintentionally altered to send people to the wrong destination. And if done well, a session can be hijacked without the end user ever knowing it.
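To make that exposure concrete, here is a minimal sketch of an ordinary, unauthenticated lookup using nothing but Python's standard library. The hostname is simply the example used above, and nothing in this exchange proves the answer came from the legitimate zone operator — which is exactly what hijacking exploits.

```python
# A minimal sketch of an ordinary (unauthenticated) DNS lookup using only
# Python's standard library. The hostname is illustrative.
import socket

hostname = "networkcomputing.com"  # example domain from the article

# getaddrinfo asks the configured resolver to map the name to IP addresses.
# Nothing in this exchange proves the answer came from the legitimate zone
# operator; a poisoned or hijacked resolver could return any address it likes.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        hostname, 443, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```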

Moves to enforce DNSSEC are a great way to secure the many DNS servers on the Internet managed by governments, corporations and service providers. DNSSEC authentication helps solidify the integrity of the lower branches of the DNS hierarchy tree. In other words, it helps verify that a compromised DNS server won’t send you to a hijacked server when you point a browser to a specific domain name. That said, this security only goes so far up the tree — and it ends at the very top, where ICANN resides. ICANN controls all the top-level domains (TLDs) we’re familiar with, including .com, .net and .org. It also controls TLDs for governments and countries, including .gov, .eu and .cn. Any changes at this level, and any security enforced, are made at the organization’s sole discretion.
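For readers who want to see where DNSSEC shows up in practice, here is a hedged sketch of asking a validating resolver whether it could authenticate an answer. It assumes the third-party dnspython package (2.x) and a resolver that actually performs DNSSEC validation; the domain is illustrative.

```python
# A minimal sketch of checking whether a validating resolver vouches for a
# DNSSEC-signed answer. Assumes the third-party dnspython (>= 2.x) package and
# a resolver that performs DNSSEC validation; the domain is illustrative.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 1232)   # set the DNSSEC OK (DO) bit

answer = resolver.resolve("networkcomputing.com", "A")

# A validating resolver sets the AD (Authenticated Data) flag only when the
# RRSIG chain of trust checks out up toward the signed root.
if answer.response.flags & dns.flags.AD:
    print("answer validated by the resolver (AD flag set)")
else:
    print("no DNSSEC validation asserted for this answer")
```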

We’re talking about a massive amount of responsibility resting with a private non-profit organization. So, how did it get this way?

ICANN from the beginning

In 1983, a man named Jon Postel established the Internet Assigned Numbers Authority (IANA) at the University of Southern California, which at the time was under contract with the Defense Advanced Research Projects Agency (DARPA). Until 1998, IANA and all TLD control were managed within the U.S. government itself. As the popularity of the Internet exploded in the mid-1990s from a consumer and commercial perspective, the IANA merged with several other Internet-governance groups to form ICANN. The new non-profit was then contracted to manage TLDs for the U.S. National Telecommunications and Information Administration (NTIA) from the time it formed until October 2016, at which point the U.S. government relinquished control to ICANN. With the United States government out of the picture, ICANN now considers itself a global community that supports what it calls a vision of “one world, one Internet.”

Now that the Internet is indeed a global network, some conclude that removing U.S. control over TLDs was the correct decision. Others feel that a compromised ICANN could quickly become a national security threat. Either way, as users of the free and global Internet, we must ensure that the necessary checks and balances are in place so that ICANN never becomes corrupted by any group or government. In other words, we need protocols and transparency in place so we can all “watch the watchers.”




Ransomware Attacks Drop Sharply, but Cryptojacking Rises


Ransomware remains a serious concern, and reasonably so, yet if the most recent trends indicate anything, it is that the threat is not quite as dominant as it once was.

In fact, what would you do if we told you that ransomware is no longer the biggest threat to cyber security? You probably would be skeptical, right?

When the IBM X-Force Threat Intelligence Index was released a few weeks ago, it highlighted a plethora of cyber security threats, with the most jaw-dropping revelation being that hackers no longer use ransomware as their primary money-making attack vector.

The report, which was based on data observed by IBM as they monitored over 70 billion security events a day, found a significant decline in ransomware compared to the past few years. In fact, ransomware attacks were down 45 percent in one quarter of 2018, indicating a massive drop.

For those who still need a brush-up on ransomware, it is a type of malicious software that threatens to publish a victim’s data or block access to a device until a ransom is paid.

Of course, that sounds alarming, and removing ransomware from a PC after the device has been taken hostage is even more complicated. To fully recover from the attack, one must eliminate the hostage-taker completely from the PC, which is not always straightforward or easy.

Why are ransomware attacks dropping sharply?

The 2019 IBM X-Force Threat Intelligence Index took many experts in the industry by surprise, but the steep decline in ransomware attacks is worth celebrating. A really good thing.

What is more surprising is that ransomware has traditionally been one of the more sophisticated and effective threats for hackers to deploy against a victim’s PC, so the sudden pullback in attacks comes across as peculiar.

However, as cyber-criminals have shown in the past, it rarely does them any good to dwell on one attack method for too long before antivirus software and other defenses catch up. Hackers are always looking to stay ahead of the cyber police, and vice versa. It’s a constant tug of war.

The recent trends do not indicate that ransomware is completely dead, but rather that hackers are embracing new types of threats that security systems have not yet found the best way to combat.

What is the most recent threat to be worried about?

One word: cryptojacking.

While ransomware witnessed a sharp decrease in the volume of attacks, cryptojacking was the complete opposite. It is very much on the rise.

In fact, the same IBM index reported that cryptojacking attacks were up an incredible 450% over last year, clearly bringing the threat to the forefront of cyber security as systems prepare for new attacks in 2019.

Cryptojacking is unauthorized cryptocurrency mining activity. An attacker quietly plants mining code on a device, via a malicious program or a compromised web page, and then siphons off the device’s processing power to mine cryptocurrency. From the perspective of hackers, it has proven to be more effective than ransomware.

Security teams and antivirus vendors must therefore stay on top of this growing issue as cyber-criminals develop new ways to run crypto-mining tools without being detected by web browsers, as was the case in January, when security researchers found that nearly 25% of all free VPNs for Android contained some form of malware. To see if you’ve been cryptojacked, try the “Cryptojacking Test,” and use endpoint protection in your antivirus software that can detect crypto miners.
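As a rough illustration of why miners are detectable at all, the sketch below flags processes that keep a CPU core busy across several consecutive samples, the most visible symptom of a coin miner. It assumes the third-party psutil package, the threshold and sample count are arbitrary example values, and it is in no way a substitute for real endpoint protection.

```python
# A rough, illustrative heuristic (not a real endpoint protection product):
# flag processes that keep a CPU core busy for several consecutive samples.
# Assumes the third-party psutil package; thresholds are arbitrary examples.
import time
import psutil

CPU_THRESHOLD = 80.0   # percent of one core; arbitrary example value
SAMPLES = 5            # consecutive samples a process must exceed it

# Prime per-process CPU counters; the first cpu_percent() call is a baseline.
procs = list(psutil.process_iter(["pid", "name"]))
for proc in procs:
    try:
        proc.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

hits = {}
for _ in range(SAMPLES):
    time.sleep(1)
    for proc in procs:
        try:
            if proc.cpu_percent(interval=None) >= CPU_THRESHOLD:
                hits[proc.pid] = hits.get(proc.pid, 0) + 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

for proc in procs:
    if hits.get(proc.pid, 0) == SAMPLES:
        print(f"sustained high CPU: pid={proc.pid} name={proc.info['name']}")
```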

What other attacks can we anticipate for 2019?

Ransomware is down and cryptojacking is up; according to IBM’s findings, there is no debate about it. But there are other threats you need to keep an eye on.

Have you heard about Business Email Compromise, or BEC? If not, now is the time to get familiar with the latest cyber threat. BEC seeks to trick users into paying a fraudulent invoice for services they supposedly owe, typically by impersonating an executive, vendor or business partner over email.

According to IBM, this type of cyber attack is becoming increasingly popular because it has proven to be very lucrative. Last year, BEC scams accounted for 45% of phishing attacks, which makes them hard to ignore.
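To make the mechanics a little more tangible, here is a toy sketch of two signals often associated with BEC-style invoice fraud: a familiar display name paired with a look-alike sending domain, plus urgent payment language. Every name, domain and keyword below is invented, and a real mail filter would weigh far more evidence than this.

```python
# A toy illustration (not a production filter) of two BEC-style signals:
# a trusted display name coming from a look-alike external domain, and a body
# pushing an urgent payment. All names, domains, and keywords are invented.
from email.utils import parseaddr

INTERNAL_DOMAIN = "example-corp.com"                  # assumed company domain
KNOWN_SENDERS = {"jane doe", "accounts payable"}      # assumed trusted display names
PAYMENT_KEYWORDS = ("invoice", "wire transfer", "payment overdue")

def looks_like_bec(from_header: str, body: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    impersonation = display_name.lower() in KNOWN_SENDERS and domain != INTERNAL_DOMAIN
    urgent_payment = any(kw in body.lower() for kw in PAYMENT_KEYWORDS)
    return impersonation and urgent_payment

# A spoofed "Jane Doe" writing from a look-alike domain about an overdue invoice.
print(looks_like_bec(
    "Jane Doe <jane.doe@examp1e-corp.co>",
    "Hi, this payment overdue notice needs a wire transfer today."))   # True
```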

Overall, are cyber threats up or down?

Given our increased reliance on technology, the optimistic hope is that we will one day have a firm grasp on discouraging cyber-criminal activity of all kinds, leaving no serious risks for online users. That is obviously an idealized world.

The problem is that vulnerabilities are actually on the rise, not the opposite. For example, 96% of firms experienced at least one severe exploit last year.

The IBM X-Force also notes that it has tracked roughly 140,000 known vulnerabilities, and about 42,000 of them, nearly a third, were reported in just the past three years. That is a significant share of new threats surfacing online.

What is even more alarming is that IBM estimates one-third of those vulnerabilities do not currently have patches. The attack surface is increasing, not decreasing.

How does the United States rank in security?

Since the IBM X-Force conducted its studies throughout the world, one may wonder whether the threats are as dangerous in the United States as they are in other, perhaps less well-informed parts of the world.

The answer is that the United States ranks number one when it comes to malware command and control (C&C) infrastructure, hosting the largest number of C&C servers observed in the report. Canada also figures prominently as a source of reliable hosting.

The bottom line

The latest findings from the IBM X-Force team show that traditional malware and ransomware attacks are being displaced by newer forms of attack such as cryptojacking and BEC.

While the two remain active, the majority of breaches found in the report from last year (57% in all) did not involve the use of malicious files. It is time to reexamine our security priorities.




Delivering “5 Nines Availability” to Improve Business Outcomes


The classic definition of “5 nines” refers to an uptime of greater than 99.999%, or just over five minutes of downtime per year. By contrast, an uptime of 99.9%, or “3 nines” is about 8.7 hours of downtime per year – more than a full business day. Downtime like that can do some real financial damage.
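For reference, the arithmetic behind those figures is simple enough to check in a few lines of Python (a 365-day year is assumed):

```python
# The arithmetic behind the figures above: downtime per year for a given
# availability percentage, assuming a 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in (("3 nines", 99.9), ("4 nines", 99.99), ("5 nines", 99.999)):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{label} ({availability}%): {downtime_minutes:.1f} minutes/year "
          f"(~{downtime_minutes / 60:.2f} hours)")

# 3 nines -> ~525.6 minutes (~8.76 hours); 5 nines -> ~5.3 minutes per year
```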

Whether you provide managed services or hosted solutions for customers, or you’re an enterprise performing your own IT operations support, maintaining a high level of service availability is critical and directly affects business outcomes for your organization.

Combining people, processes and tools

Getting this done is not easy. Today’s contact center and unified communications technologies are incredibly powerful – but they are also very complicated. There are a huge number of components: applications, SIP proxy servers, directories, voice gateways, multiservice switches, recording servers and more.

The sheer complexity is overwhelming when it comes to understanding how these components work individually and, more importantly, how they interact with each other. Providing complex services such as unified communications and contact center reliably comes down to the people, processes and tools that make up your delivery model.

Finding a delicate balance between the three is critical. People are the key, but of course, to err is human. Unforced mistakes, forgetfulness and imprecise execution can each cause significant issues leading to downtime. Processes help to alleviate these kinds of shortcomings and maximize the potential of your people by making them more effective and consistent.

Tools, on the other hand, are force-multipliers. They magnify people’s efforts, allowing them to be more efficient, and they can supplement effort by automating certain jobs. You can argue that tools are the key to making people efficient and effective at executing the processes that support them.

Your end goal is always to reduce downtime. The triad of people, process and tools needs to be well tuned and complementary to best meet this goal. It’s important to fix problems in seconds, but it’s even more important to get ahead of problems in a predictive and proactive manner if you can.

Essential IT Ops tools

Let’s focus for the moment on the tools. The IT operations management platform and the suite of tools underpinning it must work cohesively and provide certain capabilities. Here are two of the most important ones:

Automated Root Cause Analysis (RCA) tools: With automated RCA, you don’t waste valuable time manually tracking down the root cause of service issues; the tooling does it for you. Using built-in intelligence, it analyzes huge volumes of incoming events, detects patterns and relationships, and then performs additional analysis from multiple viewpoints, usually based on topological context, that points to the real problem. It then leverages those findings to rapidly pinpoint the root cause of contact center and unified communications service issues. Being able to quickly and accurately pinpoint a root cause is key to maintaining whatever uptime requirements your business may have. (A simplified sketch of the topology idea appears after the next item.)

Artificial Intelligence for IT Operations (AIOps) tools: Originally known as Algorithmic IT Operations, AIOps has come of age in the last decade. Improvements in computing power and storage, the scalability of cloud-based infrastructure, the availability of truly massive data sets and the increasing sophistication of algorithms have all been key factors in facilitating this evolution in IT operations management.
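Returning to the automated RCA item above, the sketch below shows the topological-context idea in miniature: given a dependency map and a burst of alerting components, keep only the alerts that no alerting upstream dependency can explain. The topology and component names are invented for the example, and real RCA engines do far more than this.

```python
# An illustrative (greatly simplified) take on topology-aware root cause
# analysis. The dependency map and component names are invented.
from typing import Dict, List, Set

# component -> components it depends on (downstream -> upstream)
DEPENDS_ON: Dict[str, List[str]] = {
    "agent-desktop": ["sip-proxy", "directory"],
    "sip-proxy": ["voice-gateway"],
    "directory": [],
    "voice-gateway": [],
    "recording-server": ["voice-gateway"],
}

def root_cause_candidates(alerting: Set[str]) -> Set[str]:
    """Return alerting components none of whose (transitive) upstream
    dependencies are also alerting."""
    def upstream(node: str, seen: Set[str]) -> Set[str]:
        for dep in DEPENDS_ON.get(node, []):
            if dep not in seen:
                seen.add(dep)
                upstream(dep, seen)
        return seen

    return {n for n in alerting if not (upstream(n, set()) & alerting - {n})}

# Three components alert at once, but the voice gateway explains the others.
print(root_cause_candidates({"agent-desktop", "sip-proxy", "voice-gateway"}))
# -> {'voice-gateway'}
```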

AI-based tools are beginning to transform how infrastructure is managed. They have the ability to recognize critical issues with superior accuracy, resulting in faster remediation of problems. They bring about efficiencies by employing machine learning on the data collected and leveraging automations that drive real-time feedback loops and workflows, and they can even provide self-healing capabilities when customers dare to let the machine make decisions.

Intelligent systems that use trend- and threshold-based techniques can predict when resources are getting maxed out, providing a truly proactive approach to infrastructure management.
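As a small illustration of the trend-and-threshold idea, the sketch below fits a straight line to recent utilization samples and estimates when the resource will cross a capacity limit. The sample data and threshold are made up, and production systems use far more robust models than a simple linear fit.

```python
# A small sketch of trend-plus-threshold prediction: fit a straight line to
# recent utilization samples and estimate when a capacity threshold is hit.
# The sample data and threshold are invented for the example.
from statistics import mean

samples = [62.0, 63.5, 64.2, 66.0, 67.1, 68.9, 70.2]  # e.g. % disk used, one per day
THRESHOLD = 90.0

xs = list(range(len(samples)))
x_bar, y_bar = mean(xs), mean(samples)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
        sum((x - x_bar) ** 2 for x in xs)

if slope <= 0:
    print("no upward trend; nothing predicted")
else:
    days_left = (THRESHOLD - samples[-1]) / slope
    print(f"utilization rising ~{slope:.2f}%/day; "
          f"threshold of {THRESHOLD}% reached in about {days_left:.0f} days")
```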

How “5 nines” availability improves business outcomes

Those are just two of the many factors that make it possible to achieve 5 nines availability. But how does this relate to business outcomes?

Minimize downtime and business disruption: Service outages and degradations have a huge business impact – contact center downtime can lead to well over $100,000 an hour in lost revenues. And that’s just the tip of the iceberg – the long-term impact is equally painful. Automated RCA and AIOps capabilities dramatically accelerate service restoration. You spend your time actually fixing the problem rather than investigating it.

Raise customer satisfaction and retention: Reduced service downtime translates directly into increased customer satisfaction. By delivering a positive customer experience, you increase customer retention and build trust. Not only does this protect your revenue streams, but for a managed service provider (MSP), it makes it much easier to upsell additional services to your existing customer base. It also reduces pricing pressures – customers are less likely to look for lower-cost alternatives when they are highly satisfied with the services you deliver.

Increase win rates: Better service quality is a key competitive differentiator, especially for companies that rely on their mission-critical contact center and unified communications services. By using automated RCA and AIOps techniques, you can deliver superior service and achieve 5 nines availability. Even better, if you’re an MSP, you can back this up by offering more aggressive SLAs during the sales process. This translates into higher win rates, allowing you to command a price premium. And, of course, you can also upsell enhanced SLAs to your existing customer base.

Reduce service delivery costs: Manually diagnosing service issues is expensive. It takes skilled and experienced contact center and unified communications experts – and consumes vast amounts of their time. By optimizing remediation, you can dramatically lower the cost of managing contact center and unified communications infrastructures.

Make sure your contact center, collaboration and unified communications vendor is able to provide proof that they can offer these benefits and deliver “5 nines” (or greater) of service.




HTTP/2’s Role in Solving Implementation Gaps and Improving UX


In the beginning of web browsing as we recognize it today, complete with embedded graphics and blink tags, there was NCSA Mosaic. The web was a simpler place. It was indexed mostly by hand for search, traffic was low and pages were straightforward. Most importantly, the number of objects on a page–meaning the number of components that must be fetched in order to display it–was low. At the same time, the 14.4K baud modem was in common use, and connections would be dropped when a parent or roommate picked up the phone at the wrong time. That is to say, the web ran at a slower pace, Mosaic had an easy time fetching and displaying the needed objects, and user expectations were low.

Following Mosaic, HTTP/1.x was in use for many years and did not make the browser’s job any easier. Most notably, it could only send one request at a time on a connection and then had to wait for the response before sending another. Browsers got around this problem by opening many connections to a server so they could do some work in parallel, but each individual connection still suffered from the same head-of-line blocking. The criticality of an object helped the browser decide whether it should be requested early or later.

Times (and expectations) have changed

Today, according to the HTTP Archive, mobile websites (not desktop) have a median 70 requests per page for a total of 1.7 MB. On top of that, more than 75 percent of those requests are over HTTPS (i.e. encrypted and authenticated) which means even more (very worthwhile) work for the little phone that could. Mobile browsers today have a tough job of providing users with the instantaneous experiences they expect. 

A significant part of browser development goes into the decision process that determines which objects on a page are critical. Critical objects, in short, are the objects needed to start rendering the page. Poor prioritization of these critical objects can lead to a jarring experience: a late-arriving stylesheet can change how the page flows, or even leave users staring at a blank page wondering what is going on in the background. To further complicate things, it is not simply a case of “get the stylesheets first, then the JavaScript”; the objects can form a tree of complicated dependencies that only emerges as they load.

HTTP/2 addresses gaps with multiplexing and prioritization

HTTP/2 was developed in part to address the bottlenecking issues that HTTP/1.1 was unable to solve. Its core differentiator is that it provides a number of important tools for improving performance, such as multiplexing, prioritization, header compression and server push. Multiplexing and prioritization, specifically, are the most critical to addressing the rendering complications discussed above.

Multiplexing allows the browser to send many requests at the same time, and the server will respond in any order, even overlapping the returned data. Among other things, this makes the request side more efficient and gets the browser out from under the head-of-line blocking that plagued HTTP/1.x. By not having to wait for the back and forth of requests and responses, the server can get more data down to the browser faster.
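As a client-side illustration of multiplexing, the sketch below issues several requests that can share a single HTTP/2 connection, so responses may arrive in any order instead of queueing behind one another. It assumes the third-party httpx package installed with its HTTP/2 extra (httpx[http2]), and the URLs are placeholders.

```python
# A minimal sketch of HTTP/2 multiplexing from the client side. Assumes the
# third-party httpx package with its HTTP/2 extra (httpx[http2]); the URLs
# are illustrative placeholders.
import asyncio
import httpx

URLS = [
    "https://www.example.com/style.css",
    "https://www.example.com/app.js",
    "https://www.example.com/hero.jpg",
]

async def fetch_all() -> None:
    # All requests to the same origin can share one HTTP/2 connection.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(u) for u in URLS))
        for resp in responses:
            print(resp.url, resp.http_version, resp.status_code)

asyncio.run(fetch_all())
```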

The necessary partner for multiplexing is prioritization. Prioritization provides the browser a method for telling the server which objects are more important than others and what the dependencies of those objects are. Without this, though the information might get from the server to the browser faster overall, the jumbled mess of critical and non-critical objects could result in the browser taking longer to start painting the page on the screen. Getting this to work relies on solid implementations in the browser and the server. Getting it wrong can lead to a degraded experience for users.

A great example of this can be seen in the work of Pat Meenan, creator of WebPageTest, and his HTTP/2 Priority test. The test sets up a page that causes the browser to discover high-priority resources after it has already requested lower-priority ones. The proper behavior for the browser is to request those high-priority resources immediately, with the correct dependencies indicated. The server, in turn, should make certain those new requests are handled right away. The results of the tests? Well, not good overall.

Andy Davies maintains a GitHub repository that tracks the results from various servers, proxies and content delivery networks (CDNs). The bottom line is that very few of them perform well in this fairly common scenario. What does that mean for the user? It means they are getting suboptimal experiences today over HTTP/2. The protocol that was supposed to speed us up is, in some cases, slowing us down because of poor implementations. The good news is that this will get fixed over time now that it has been identified.

Looking ahead to HTTP/3

Protocols are developed to solve the network problems of today. The best are able to peer into a crystal ball and make solid predictions about the trends to come. Eventually, the infrastructure and our usage patterns change enough that they cannot keep up, and revising and re-inventing becomes necessary. Even the venerable TCP, without which none of us would be where we are today, is about to be supplanted by QUIC (with HTTP-over-QUIC now renamed HTTP/3) to better address the variability of today’s Internet. HTTP/2 is a great step forward on this path of progress; looking closely at it provides great insight into just how far we have come and some thoughts on where we still need to go.

 




Linux 5.1 Might Pick Up Support For Using Persistent Memory As System RAM



While we are expecting to see more Intel Optane NVDIMMs this year that offer persistent memory using 3D XPoint media on the DDR4 bus for persistent storage, the Linux 5.1 kernel might pick up support for treating this persistent memory as traditional RAM if so desired.

Intel Optane DC Persistent Memory is expected to begin appearing in more servers this year, offering application-level persistent memory for use cases like database servers, HPC, and other enterprise computing possibilities. If you are buying such NVDIMMs in the first place, chances are you are planning to use the persistent memory for those purposes, but otherwise there are patches pending for Linux 5.1 that allow this PMEM to function as traditional system RAM.

Device-DAX code updates queued for Linux 5.1 allow for persistent memory and other reserved/differentiated memory to be assigned to the core memory management code as system memory. This will treat the NVDIMMs as volatile RAM and support all of the traditional Linux memory management interfaces.

More details on this pending work are available via this patch series. While the pull request is aimed at Linux 5.1, Linus Torvalds has requested clarification from Intel about some of the behavior and is awaiting that before he considers merging the code.