
Troubleshooting Network Performance in Cloud Architectures


Troubleshooting within public or hybrid clouds can be a challenge when end users begin complaining of network and application performance problems. The loss of visibility of the underlying cloud network renders some traditional troubleshooting methods and tools ineffective. Thus, we must come up with alternative ways to regain that visibility. Let’s look at five tips on how to better troubleshoot application performance in public cloud or hybrid cloud environments.

Tip 1: Verify the application and all services are operational from end to end

The first step in the troubleshooting process should be to verify that the cloud provider is not having an issue on their end. Depending on whether your service uses a SaaS, PaaS or IaaS model, the verification process will change. For example, the Salesforce SaaS platform has a status page where you can see whether any incidents, outages, or maintenance windows may be impacting your users.
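As a quick illustration, a few lines of scripting can check a provider's public status feed before you burn time on local troubleshooting. The sketch below assumes a Statuspage-style JSON feed; the URL is a placeholder, so substitute whatever status API your provider actually documents.

```python
import requests

# Placeholder URL -- substitute the status endpoint your provider documents.
STATUS_URL = "https://status.example-provider.com/api/v2/status.json"

def provider_ok() -> bool:
    """Return True if the provider reports no active incidents."""
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    # Statuspage-style feeds report an 'indicator' of none/minor/major/critical.
    indicator = resp.json().get("status", {}).get("indicator", "unknown")
    print(f"Provider status indicator: {indicator}")
    return indicator == "none"

if __name__ == "__main__":
    if not provider_ok():
        print("Check the provider's incident page before troubleshooting locally.")
```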

Also, don’t forget to check other dependent services that can impact access to cloud services or their performance. Services such as DHCP and internal/external DNS are common dependencies that can cause problems — making it look like there is something wrong with the network. Depending on where the end user connects from in relation to the cloud application they are trying to access, the DHCP and DNS servers used will vary greatly. Verifying that end users are receiving proper IP addresses and can resolve domains properly can save a great deal of time and headaches.
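As a starting point, a small script run from the affected users' network segment can confirm that the critical hostnames resolve at all. This is only a sketch; the hostnames are examples, and the point is to exercise the same DNS resolvers the users receive via DHCP.

```python
import socket

# Example hostnames -- replace with the services your users actually depend on.
CRITICAL_HOSTS = ["login.salesforce.com", "outlook.office365.com"]

def check_resolution(hosts):
    """Verify each hostname resolves, printing the addresses returned."""
    for host in hosts:
        try:
            addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
            print(f"OK   {host} -> {', '.join(addrs)}")
        except socket.gaierror as err:
            print(f"FAIL {host}: {err}")

if __name__ == "__main__":
    check_resolution(CRITICAL_HOSTS)
```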

Tip 2: Review recent network configuration changes

If a performance problem with a cloud app seemingly crops up out of nowhere, it’s likely a recent network change is to blame. On the corporate LAN, verify that any firewall, NAT or VLAN adds/changes didn’t inadvertently cause an outage for a portion of your users. These same types of network changes should be verified within IaaS clouds as well.
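If you keep dated exports of device configurations (most config-management tools, such as RANCID or Oxidized, can produce them), even a trivial diff narrows the search. A minimal sketch, assuming two plain-text snapshots of your firewall rules; the filenames are placeholders.

```python
import difflib
from pathlib import Path

def diff_configs(before: str, after: str) -> str:
    """Return a unified diff between two saved config snapshots."""
    old = Path(before).read_text().splitlines(keepends=True)
    new = Path(after).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(old, new, fromfile=before, tofile=after))

if __name__ == "__main__":
    # Filenames are illustrative placeholders.
    print(diff_configs("fw-rules-before.txt", "fw-rules-after.txt") or "No changes")
```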

QoS or other traffic shaping changes can also accidentally degrade performance between the corporate LAN and remote cloud services. Automated tools can be used to verify that applications are being properly marked — and that those markings are being adhered to hop by hop, from the end user as far out toward the cloud application or service as possible.
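Where a full automated tool isn't available, you can spot-check markings yourself. The sketch below uses the scapy packet library to sample traffic and compare each packet's DSCP value against the expected marking; the BPF filter and the EF (46) value are assumptions to adjust for your own application.

```python
# Requires scapy (pip install scapy) and privileges to sniff the interface.
from scapy.all import IP, sniff

EXPECTED_DSCP = 46  # EF -- a common marking for real-time voice/video

def report_dscp(pkt):
    """Print the DSCP value of each sniffed IP packet and flag mismatches."""
    if pkt.haslayer(IP):
        dscp = pkt[IP].tos >> 2  # DSCP is the upper six bits of the ToS byte
        status = "ok" if dscp == EXPECTED_DSCP else "REMARKED?"
        print(f"{pkt[IP].src} -> {pkt[IP].dst} dscp={dscp} {status}")

if __name__ == "__main__":
    # Adjust the filter to match your application's traffic.
    sniff(filter="udp and port 3478", prn=report_dscp, count=25)
```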

Tip 3: Use traditional network monitoring and troubleshooting tools

Depending on the cloud architecture model you’re using, traditional network troubleshooting tools can be more or less effective when troubleshooting performance degradation. For instance, if you use IaaS such as AWS EC2 or Microsoft Azure, you have enough visibility to use most network troubleshooting and support tools, such as ping, traceroute, and SNMP. You can even get NetFlow/IPFIX data streamed to a collector — or run packet captures in a limited fashion. However, when troubleshooting PaaS or SaaS cloud models, these tools become far less useful. Thus, you end up having to trust your service provider that everything is operating as it should on their end.
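For IaaS workloads, even the humble ping can be scripted into a quick latency baseline. A minimal sketch; the hostnames are placeholders, and on Windows the flag is -n rather than -c.

```python
import re
import subprocess

def avg_ping_ms(host: str, count: int = 5) -> float:
    """Average round-trip time to a host using the system ping utility."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    times = [float(m) for m in re.findall(r"time[=<]([\d.]+)", out)]
    return sum(times) / len(times)

if __name__ == "__main__":
    # Placeholder targets -- point this at your own cloud instances.
    for host in ["10.0.1.25", "ec2-203-0-113-10.compute-1.amazonaws.com"]:
        print(f"{host}: {avg_ping_ms(host):.1f} ms average")
```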

Tip 4: Use built-in application diagnostics and assessment tools

Many enterprise applications have built-in or supplemental diagnostic tools that IT departments can use for troubleshooting purposes. These tools often provide detailed information that helps you determine whether performance is an application-related issue — or a problem with the network or infrastructure. For example, if you’re having issues with Microsoft Teams through Office 365, you can test and verify sufficient end-to-end network performance using Microsoft’s Skype for Business Network Assessment Tool. Although this tool is most commonly used pre-deployment to verify whether Teams is a viable option, it can also be used post-deployment for troubleshooting purposes.

Tip 5: Consider SD-WAN built-in analytics or pure-play network analytics tools

Network analytics tools and platforms are the latest way for administrators to troubleshoot network and application performance problems. Network analytics platforms collect streaming telemetry and network health information using several methods and protocols. All data is then combined and analyzed using artificial intelligence (AI). The results of the analysis help pinpoint areas on the corporate network or cloud where network performance problems are occurring.
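The statistical machinery in these platforms is far richer than anything you would script by hand, but the underlying idea can be shown in a few lines. As a deliberately crude sketch, here is one way to flag latency samples that sit well outside the recent norm; the telemetry values are made up.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Flag samples more than `threshold` standard deviations above the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [(i, s) for i, s in enumerate(samples)
            if sigma and (s - mu) / sigma > threshold]

if __name__ == "__main__":
    latencies_ms = [22, 24, 21, 23, 25, 22, 96, 23, 22, 24]  # made-up telemetry
    for index, value in flag_anomalies(latencies_ms):
        print(f"sample {index}: {value} ms looks anomalous")
```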

If you have extended your SD-WAN architecture to the public cloud, you can leverage the myriad analytics components that are commonly included in these platforms. Alternatively, there are a growing number of pure-play vendors that sell multi-vendor network analytics tools that can be deployed across entire corporate LANs and into public clouds. While these two methods can be expensive and more complicated to deploy initially, they have been shown to speed up performance troubleshooting and root cause analysis dramatically.




As Cloud Services Evolve, What’s Next?


Since its inception, it’s no exaggeration to say that cloud computing has become one of the pillars on which modern society is built. Yet while the concept of the cloud has fully entered the popular imagination (most people associate it with digital storage services like Google Drive or Dropbox), in truth, we have only scratched the surface of cloud computing’s potential.

But simply storing documents for simultaneous access is only one facet of the cloud, and arguably not even the most important one. In fact, just as cryptocurrency combined several existing technologies to create a new, profitable whole, so too will cloud computing form the backbone of something new.

What’s next for cloud computing?

It seems clear that the next milestone for the cloud will be mixed reality (MR), virtual reality (VR), and augmented reality (AR). One possibility is virtual conferencing; in contrast to video conferences, where several participants are splashed across a screen, a VR (or AR) meeting allows people to sit together in a virtual conference room. Rather than talking over each other or misreading social cues, attendees can carry on a meeting as if they were physically present in the same room, allowing for more productive (and less tense) gatherings.

Another possibility is a blockchain-based cloud. Combining the two is a logical step: the system would feature the security of blockchain’s tamper-resistant record, as well as the ease and convenience of cloud computing. In many ways, the two are a perfect match. Like the cloud, blockchain is decentralized, as it relies on a network of computers to verify transactions and continually update the record. Dispersing cloud-based blockchain technologies could lead to more secure record-keeping in such vital areas as global finance and manufacturing, where transparency is difficult to come by.

Smart cities are also likely to see significant boosts from cloud computing in the near future. Cloud computing would connect with Internet of Things (IoT) devices to allow for improvements like intelligent traffic and parking management, lower-cost and better-regulated power and water, and optimization of other automated devices. Smart cities can lead to greater scalability of cloud-based computing, which can, in turn, make it easier to create common smart city services that can be reused and implemented across other cities.

The edge and the cloud: rivals or friends?

While cloud computing is still considered a relatively new technology, many experts also believe that it will give way to edge computing, which looks to reduce latency and connectivity costs by keeping relevant data as close to its source as possible. While this might make it seem as though edge computing trumps cloud computing as a whole, edge computing is preferred for systems with specialized needs that require lower latency and faster data analysis, such as in fields like finance and manufacturing. Cloud computing, by contrast, works well as part of a general platform or software, like Amazon Web Services, Microsoft Azure, and Google Drive.

Ultimately, we will see edge computing as a tool that works alongside cloud computing in furthering our technological capabilities. Modern cloud computing hasn’t been around for very long and still has much room for growth. Instead of one form of computing replacing the other in order to handle data and the Internet of Things (IoT), the two work together to optimize computing and processing performance. As we continue to develop new technologies, both cloud and edge computing will become just two of the many ways we will be able to optimize and effectively navigate our highly interconnected world.

From its conception as an amorphous database of information accessible from any computer on a certain network, to its future incarnations as a medium for mixed reality and blockchain, to the addition of new technologies like edge computing that work with the cloud, the cloud has certainly come a long way in a short time. It’s easy to see that the future of the cloud is bright, and cloud computing is only going to become more capable as we move forward.





The Missing Piece in Cloud App Security


As the economy improves, the workforce becomes more mobile. It has become quite common for employees to take more than their potted plants with them when they leave. They take confidential company data, too – and the majority see nothing wrong with it, even though it is a criminal offense. Failing to properly secure this data leaves companies open to the loss of customers and competitive advantage.

With better visibility into insider threats, organizations can drive bad actors out, improve their overall security posture, and increase trust. Below are the top five events that organizations monitor cloud applications for, and how each can help promote good security hygiene within a company.

1. Exported data

Users can run reports on nearly anything within Salesforce, from contacts and leads to customers, and those reports can be exported for easy reference and analysis. By exporting reports, employees can extract large amounts of sensitive data from Salesforce and other cloud applications.

This is a helpful feature for loyal employees, but in the hands of others, such data extractions can make a company vulnerable to data theft and breaches. Departing employees may choose to export a report of customers, using the list to join or start a competitive business.

Companies are not helpless, though. Organizations can monitor for exports (see the sketch after this list) to:

— Protect sensitive customer, partner and prospect information, increasing trust with your customers and meeting key regulations and security frameworks (e.g., PCI-DSS).

— Easily detect team members who may be stealing data for personal or financial gain and stop the exfiltration of data before more damage occurs.

— More quickly spot and remediate the activity, reducing the cost of a data breach.

— Spot possible instances of compromised credentials and deactivate compromised users.
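How you query for exports depends on the application. For Salesforce specifically, report-export activity surfaces through the EventLogFile object if your org licenses the Event Monitoring add-on. A minimal sketch using the simple-salesforce library, with placeholder credentials:

```python
# pip install simple-salesforce; assumes the Event Monitoring add-on.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",   # placeholder credentials
    password="password",
    security_token="token",
)

# Pull the most recent report-export log files for review.
result = sf.query_all(
    "SELECT Id, LogDate, LogFileLength "
    "FROM EventLogFile "
    "WHERE EventType = 'ReportExport' "
    "ORDER BY LogDate DESC LIMIT 7"
)

for record in result["records"]:
    print(record["LogDate"], record["LogFileLength"], "bytes of export activity")
```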

2. Who is running reports

While organizations focus most of their attention on which reports are being exported, simply running a report could create a potential security issue. The principle of least privilege dictates that people be given only the minimum permissions necessary to complete their job – and that applies to the data they can view. But many companies grant broad access across the organization, even to those whose job does not depend on viewing specific sensitive information.

By paying attention to top report runners, report volume and which reports have been run, you can track instances where users might be running reports to access information that’s beyond their job scope. Users may also be running – but not necessarily exporting – larger reports than they normally do or than their peers do.

In addition, you can monitor for personal and unsaved reports, which can help close any security vulnerability created by users attempting to exfiltrate data without leaving a trail. Whether it’s a user attempting to steal data, a user with higher access levels than necessary, or a user who accidentally ran a report, monitoring for report access will help you spot additional security gaps or training opportunities.
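Once report-run events are in hand, the peer comparison itself is simple arithmetic. A sketch with made-up event records, flagging users whose run count sits far above the median:

```python
from collections import Counter

# Illustrative records -- in practice, pull these from your audit logs.
report_runs = [
    ("alice", "Pipeline Summary"), ("bob", "All Contacts"),
    ("bob", "All Contacts"), ("bob", "All Leads"),
    ("bob", "All Customers"), ("carol", "Pipeline Summary"),
]

runs_per_user = Counter(user for user, _ in report_runs)
median = sorted(runs_per_user.values())[len(runs_per_user) // 2]

for user, count in runs_per_user.most_common():
    if count > 2 * median:  # crude peer-comparison threshold
        print(f"{user} ran {count} reports (median is {median}) -- worth a look")
```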

3. Location and identity of logins

You can find some hidden gems of application interaction by looking at login activity. Terminated users who have not been properly deprovisioned may be able to gain access to sensitive data after their employment ends or after a contract with a third party concludes. Login activity can also tell you a user’s location, hours, devices and more – all of which can uncover potential security incidents, breaches or training opportunities.

By monitoring for inactive users logging in, then, companies can protect data from theft by a former employee or contractor. Login activity can also tell you whether employees are logging in after hours or from a remote location. This may be an indicator of an employee working overtime — but it may also be a red flag for a departing employee logging in after hours to steal data, or for compromised credentials.
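Both checks reduce to a couple of comparisons once you have login records. A minimal sketch, where the inactive-user list and the business-hours window are placeholder assumptions:

```python
from datetime import datetime

INACTIVE_USERS = {"jsmith"}      # deprovisioned accounts (placeholder)
BUSINESS_HOURS = range(7, 20)    # 07:00-19:59 local time (adjust as needed)

def review_login(user: str, timestamp: str):
    """Flag logins by inactive users, or logins outside business hours."""
    when = datetime.fromisoformat(timestamp)
    if user in INACTIVE_USERS:
        print(f"ALERT: inactive user {user} logged in at {when}")
    elif when.hour not in BUSINESS_HOURS:
        print(f"REVIEW: {user} logged in after hours at {when}")

if __name__ == "__main__":
    review_login("jsmith", "2019-04-02T02:14:00")  # deactivated account
    review_login("alee", "2019-04-02T23:30:00")    # after-hours login
```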

4. Changes to profiles and permissions

There are profiles and permissions within cloud applications that regulate what a user can and cannot do. For example, in Salesforce, every user has one profile but can have multiple permission sets. The two are usually combined by using profiles to grant the minimum permissions and access settings for a specific group of users, then permission sets to grant additional permissions to individual users as needed. Profiles control object, field, app and user permissions; tab settings; Apex class and Visualforce page access; page layouts; record types; and login hours and IP ranges.

Permissions for each application vary at each organization. In some companies, all users enjoy advanced permissions; others use a conservative approach, granting only the permissions that are necessary for that user’s specific job roles and responsibilities. But with over 170 permissions in Salesforce, for instance – and hundreds or thousands of users – it can be difficult to grasp the full scope of what your users can do in that application.
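Set arithmetic is enough to surface the gap between what a role should have and what a user actually holds. A sketch with invented role baselines and permission names; derive the real ones from your application's own permission model (profiles and permission sets, in the Salesforce case):

```python
# Placeholder baselines -- build yours from profiles/permission sets.
ROLE_BASELINE = {
    "sales_rep": {"read_contacts", "edit_own_opportunities"},
    "sales_ops": {"read_contacts", "run_reports", "export_reports"},
}

def excess_permissions(role: str, granted: set) -> set:
    """Return permissions a user holds beyond their role's baseline."""
    return granted - ROLE_BASELINE.get(role, set())

if __name__ == "__main__":
    extra = excess_permissions("sales_rep", {"read_contacts", "export_reports"})
    if extra:
        print(f"Least-privilege violation; review these grants: {sorted(extra)}")
```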

5. Creating or deactivating users

Managing users includes being able to create and deactivate their accounts. Organizations can monitor for deactivation – which, if not done properly after an employee leaves the organization, may result in an inactive user retaining access to sensitive data or an external attacker getting hold of still-active credentials. In Salesforce and other cloud applications, a security issue may also arise when an individual with administrative permissions creates a “shell,” or fake user, under which they can steal data. After the fact, they can deactivate the user to cover their tracks.

Monitoring for user creation is another way that security teams watch for any potential insider threats. And by keeping track of when users are deactivated, you can run a report of deactivated users within a specific time frame and correlate them with your former employees (or contractors) to ensure proper deprovisioning. Monitoring for creation and/or deactivation of users is also required by regulations like SOX and frameworks like ISO 27001.
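The correlation itself is a set difference: anyone who has departed but was never deactivated is a deprovisioning gap. A sketch with placeholder data:

```python
# Placeholder data: pull deactivated accounts from the application's audit
# log and departures from your HR system for the same time frame.
deactivated_accounts = {"jsmith", "bchan"}
departed_employees = {"jsmith", "bchan", "rpatel"}

for user in sorted(departed_employees - deactivated_accounts):
    print(f"GAP: {user} has left the company but the account is still active")
```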

Monitor for greater insight

You can’t defend against what you can’t see. With the widespread adoption of cloud applications, businesses are seeing an enormous uptick in user activity that is simultaneously harder to keep track of. Consequently, many organizations are looking for ways to increase visibility into how users are using these applications and the data within them. Monitoring the specific activities detailed above will help organizations increase visibility and keep data safe and secure.





Every Cloud Has a Silver Lining


Back in 1634, the optimist’s favorite saying was born out of a quote in John Milton’s Comus. His eloquent phrasing has become known to most of us as “every cloud has a silver lining.”

The proverbial optimism expressed in this idiom is almost ironic in today’s digital world, considering the role the cloud plays with respect to data privacy and integrity.

Consider how easy cloud has made it to collect, process, and store large amounts of data. Capacity and processing power alone have made cloud the de facto choice for applications targeting consumer interactions. This has been great for business, but terrible for privacy because “the business” extends from management to developers and then stops.

Unfortunately, cloud deployments have often lacked the traditional network, system, and security operations that would have fought for the architectures and controls capable of preventing every cloud breach our team of researchers at F5 Labs examined. How, you wonder? Because systems deployed in the cloud are being breached through the most basic failures. My favorite is the absence of operational security controls, otherwise known as “open access”: no credentials are required to access an operational console; anyone can play if they know where the system lives.

Another favorite is the deliberate elimination of security controls on cloud-native storage systems. Typically, these controls are removed early on to facilitate faster development and testing. Sadly, the controls are never returned to a secure state, leaving buckets of data wide open for anyone with the ability to find them.
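This failure mode is auditable in a few lines. As a sketch against AWS S3 using boto3 (credentials assumed to be configured already), checking whether each bucket has its public access block in place:

```python
# pip install boto3; assumes AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"REVIEW: {name} has public-access protections partly disabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"ALERT: {name} has no public access block configured at all")
        else:
            raise
```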

So, where’s the ‘silver lining’ in all this? On the consumer side, we are being given great visibility into the massive amounts of data being collected about each of us, who it’s used by, and for what purpose it’s used. If it wasn’t for the cloud and the often-poor security practices that go along with it, we might never have known about middlemen like validators.

If you haven’t received a notification about the verifications.io breach, you might be new to the Internet. Over 750 million (and they think there’s more) unique email addresses were exposed in February 2019 by the email address validation service. You probably didn’t realize they had access to your data, because they operate behind the scenes on behalf of other businesses. But every time you get an email to ‘verify your email address’ upon signing up for a service, it’s likely verifications.io sent it. And apparently, they collected it – and data used to verify it – on their own systems. 

As consumers, we can shout and write letters and demand this situation be addressed. Aside from living off-grid, there isn’t much more we can do about it.

But businesses can and should do more about it. Not just to protect our privacy, but to ensure data integrity.

See, if the data is accessible by anyone, that doesn’t just imply read access. It implies potential write access. Most folks out there are scooping up our data to turn a quick buck, but eventually someone is going to turn that around and dirty up your data – or just delete it. That risk is real, and because of the growing dependence of business on data to make decisions, the risk has increasingly damaging repercussions.

In the near future the majority of businesses will be data-driven. Their business and operational decisions will increasingly be made automatically by machines based on the zettabytes of data they hoard like dragons. Imagine losing it all in one simple command, executed by an unknown actor who had access because security practices were ignored or forgotten in the rush to release to market.

Operational and security ‘gates’ (checkpoints) exist to protect data from infiltration, infection, and exfiltration. Skipping them to gain speed is dangerous not only to your customers but to the business. At a minimum, you need to enforce two simple steps:

Lock the door: This is real-life translated to the digital world. Leaving a door unlocked in some neighborhoods is an invitation to come inside. In the cloud, that’s just as true. Make sure that every web, app, database, middleware, container orchestration, and storage system or service requires credentials to access administrative consoles.

Hide the key: You might hide a spare key somewhere outside just in case you lose your own keys. But you don’t leave it on top of the doormat or hanging in plain sight next to the door. So don’t hardcode credentials and other secrets (like keys and certs) or store them publicly. If you use a repository, remember it’s not a key management store. Put into place best practices with respect to managing credentials and keys, lest you end up on a list with Uber.
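In practice, hiding the key means the application fetches secrets at runtime rather than carrying them in the codebase. A minimal sketch, assuming either an environment variable injected at deployment or a managed secrets store (AWS Secrets Manager shown; Vault, Azure Key Vault and others work the same way, and the secret name is a placeholder):

```python
import os
import boto3

def get_db_password() -> str:
    """Fetch a secret at runtime instead of hardcoding it in the codebase."""
    # Option 1: injected by the deployment environment, never committed.
    if "DB_PASSWORD" in os.environ:
        return os.environ["DB_PASSWORD"]
    # Option 2: a dedicated secrets manager (secret name is a placeholder).
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId="prod/app/db-password")["SecretString"]
```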

Every cloud does have a silver lining. In the case of cloud-deployed systems that have exposed our data, that silver lining is that we know more about where and how these breaches occur. It’s an opportunity for the business to stand back and re-evaluate not just its own security practices, but also those of its partners and suppliers of digital services.

But above all, make sure your cloud security practices exist and put them into place if they don’t.  





Five Steps to Address Cloud Security Challenges


Today’s interconnected world relies on data accessibility from anywhere, at any time, on any device. The speed and agility that come with hosting services and applications in the cloud are central to modern interconnected success. As such, these inherent benefits have compelled organizations to migrate some or all of their applications or infrastructures to the cloud. In fact, some industry experts estimate that up to 83 percent of enterprise workloads will be in the cloud by 2020.

While the cloud may offer significant benefits, organizations need to be aware of the security challenges when planning a cloud-first strategy. Some of those challenges involve not only protection and compliance but also operational considerations, such as the ability to integrate security solutions for on-premise and cloud workloads, to enforce consistent security policies across the hybrid cloud, and to automate virtual machine (VM) discovery to ensure visibility and control over dynamic infrastructure.

1: Balance protection and compliance

Striking a balance between protection and compliance is a huge challenge. Sometimes, it’s all about discouraging threat actors by making them invest more time, energy, and resources than they first estimated into breaching the organization. Making attackers go through several layers of defenses means they could slip up at some point and trigger an alert before reaching the organization’s crown jewels.

Recent data breaches should push leaders into thinking beyond compliance. Besides risking more fines, they risk their reputation as well. Compliance regulations tend to be addressed as base-minimum security options. However, thorough protection involves deploying multiple security layers designed to both help IT and security teams streamline operations, as well as increase visibility and accelerate detection of threats before a full-blown breach occurs.

2: Integrate security solutions for on-premise and cloud workloads

Finding the right security solution to seamlessly integrate with both on-premise and cloud workloads without impacting consolidation ratios, affecting performance or creating manageability issues is also a challenge. Traditional security solutions can, at best, offer separate solutions for on-premise and cloud workloads, which still runs the risk of creating visibility and management issues. At worst, the same traditional security solution is deployed on all workloads – cloud and local – creating serious performance issues for the latter. It’s important for organizations to integrate a security solution built to automatically mold its security agent to the job at hand, based on whether the workload is on-premise or in the cloud, without impacting performance or compromising on security capabilities.

3: Deploy consistent security policies across the hybrid cloud

To address this challenge, organizations need to find security solutions that can adapt security agents to the type of environment they are deployed in. For cloud environments, solutions must be agile enough to leverage all the benefits of the cloud without sacrificing security; for traditional on-premise environments, they must be versatile enough to enable productivity and mobility. Organizations must understand that deploying security policies across hybrid infrastructures can be troublesome, especially without a centralized security console that can seamlessly relay those policies across all endpoints and workloads. It’s important to automatically apply group security policies to newly spawned virtual machines, based on their role within the infrastructure. For instance, newly spawned virtual servers should immediately adhere to group-specific policies, newly spawned VDIs should do the same, and so on. Otherwise, the consequences could be disastrous: those machines would be left unprotected against threats and attackers for as long as they’re operational.

4: Automate VM discovery

Automated VM discovery is the whole point of an integrated security platform, as security policies can automatically be applied based on the type of machine.

Organizations should consider adopting security solutions that can automate VM discovery and apply security policies accordingly, without forcing IT and security teams to push policies to newly instanced workloads manually.
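What that automation looks like depends on the platform, but the shape of it is simple: enumerate running machines, read the role metadata your provisioning pipeline applies, and map each role to a policy. A sketch against AWS EC2 with boto3; the tag name and policy mapping are assumptions, and in a real deployment the final call would go to your security platform's API rather than a print statement.

```python
import boto3

# Placeholder mapping from a machine's role tag to a security policy group.
POLICY_BY_ROLE = {"web": "dmz-policy", "db": "restricted-policy", "vdi": "vdi-policy"}

ec2 = boto3.client("ec2")
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        policy = POLICY_BY_ROLE.get(tags.get("role"), "default-quarantine-policy")
        print(f"{instance['InstanceId']}: apply {policy}")
```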

Considering the hybrid cloud’s flexibility in terms of endpoints (physical and virtual) and infrastructure (on-premise and in the cloud), it’s important that the security solution embraces the same elasticity and enables organizations to fully embrace the benefits of these infrastructures without sacrificing performance, usability or security.

5: Maintain visibility and control over dynamic infrastructure

In the context of adopting a mobility- and cloud-first approach, it has become increasingly difficult for IT and security teams to view an organization’s security posture, especially since traditional security solutions don’t offer single-pane-of-glass visibility across all endpoints.

Integrating a complete security platform can help IT and security teams save time while offering security automation features that help speed up the ability to identify signs of a data breach accurately.

Addressing cloud security challenges is constant, ongoing work that requires IT and security teams to be vigilant while at the same time adopting the right security and automation tools to help take some of the operational burden off their shoulders. Working together to find the right solutions ensures both teams get what they need. The collaboration of these two focused teams ensures the entire infrastructure is protected, regardless of on-premise or cloud workloads.


