
Cloud Storage and Policies: How Can You Find Your Way?


Cloud storage is one of the hottest topics today, and rightfully so: new services seem to be added daily. Storage is one of the most attractive categories of cloud service, so it is only natural for organizations to look for the business problems it can solve.

The reality is that storage in the cloud is a whole new discipline. Completely different. Forget everything you know and start from the beginning. Both Amazon Web Services and Microsoft Azure offer many different storage services. Some resemble what we have used on-premises, such as Azure File Storage and AWS Elastic Block Store, which map to traditional file shares and block storage, yet how they are used can make a very big difference in your experience in the cloud. Other cloud storage services (such as object storage, gateways and more) are different from what has traditionally been used on-premises, and that is where it gets interesting.

Let’s first identify why organizations want to leverage the cloud for storage. This may seem a needless step, but it matters more than ever: the fundamental reason should be that the cloud is the right platform for the storage need. Supporting reasons also include cloud benefits such as these:

No upfront purchase: This is different from the on-premises practice of purchasing storage for future capacity needs (a practice where best guesses, overspending and badly missed targets are common!).

Effectively unlimited capacity: Ask any mathematician and they will quickly point out that the cloud is not truly unlimited, but from most customers’ perspective it provides effectively unlimited storage.

Predictable pricing: While not exactly linear, consumption-based pricing for cloud storage is easy to forecast.

These are some of the good reasons to embrace cloud storage, but beyond the reasons to go to the cloud, the strong advice is to look at storage policies and usage so there are no surprises in the future. Part of this is looking at the economics across the complete scope of use; too often pricing is seen simply as consumption per month. Take AWS S3, for example: S3 Standard storage is priced at $0.023 per GB for the first 50 TB per month (pricing as of March 2019, US East (Ohio) region). But other aspects of using the storage should absolutely be considered, including the following (a rough cost sketch follows this list):

Getting data into the cloud is often overlooked, but it has a cost as well, which makes how data is written to the cloud important. Is data sent in many small increments (more write operations, or PUT requests) or in relatively few larger ones? This changes the cost profile.

Egress is when data is read out of a cloud storage location, and that has a cost too. One practical way to control it is to leverage solutions that retrieve only the pieces of data that are needed, rather than entire datasets.

Deleting data is interesting to think about, not for cost per se, but because data in the cloud will live as long as you pay for it. Give thought to ensuring no dead data is left living (and billing) in the cloud.
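To make the economics concrete, here is a minimal sketch of a monthly cost estimate that folds storage, write requests, read requests and egress into one figure. Only the storage rate comes from the article; the request and egress rates are assumed placeholders, so substitute the figures from your provider's current price list.

```python
# Rough monthly cost model for object storage (illustrative rates, not current AWS pricing).
STORAGE_PER_GB = 0.023    # S3 Standard, first 50 TB, per GB-month (March 2019, US East Ohio)
PUT_PER_1000 = 0.005      # per 1,000 write requests (assumed placeholder)
GET_PER_1000 = 0.0004     # per 1,000 read requests (assumed placeholder)
EGRESS_PER_GB = 0.09      # data transfer out to the internet, per GB (assumed placeholder)

def monthly_cost(stored_gb, put_requests, get_requests, egress_gb):
    """Return an estimated monthly bill broken down by line item."""
    items = {
        "storage": stored_gb * STORAGE_PER_GB,
        "writes": put_requests / 1000 * PUT_PER_1000,
        "reads": get_requests / 1000 * GET_PER_1000,
        "egress": egress_gb * EGRESS_PER_GB,
    }
    items["total"] = sum(items.values())
    return items

if __name__ == "__main__":
    # Example: 10 TB stored, 5M small writes, 20M reads, 2 TB read back out.
    for line, cost in monthly_cost(10_000, 5_000_000, 20_000_000, 2_000).items():
        print(f"{line:>8}: ${cost:,.2f}")
```

Running the same workload through the model with fewer, larger writes (or with less egress) shows quickly how the usage pattern, not just the stored volume, drives the bill.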

But what can organizations do to manage cloud storage from a policy perspective? In a way, some of the same practices as before still apply, but organizations should also leverage the frameworks the cloud platforms provide to manage usage and consumption. AWS Organizations is a good example, providing policy-based management of multiple AWS accounts; it streamlines account management, billing and control of cloud services. Similar capabilities exist in Azure with subscription and service management along with Azure RBAC. A sketch of an AWS Organizations service control policy follows.
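As one illustration of that kind of policy-based control, the sketch below uses boto3 to create and attach a service control policy in AWS Organizations that denies S3 bucket deletion outside an approved region. The policy content, names and organizational unit ID are hypothetical examples rather than anything prescribed by the article.

```python
import json
import boto3

# Assumes credentials for the Organizations management account are already configured.
org = boto3.client("organizations")

# Hypothetical service control policy: deny S3 bucket deletion outside us-east-2.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["s3:DeleteBucket"],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-2"}},
    }],
}

policy = org.create_policy(
    Name="restrict-s3-bucket-deletion",          # illustrative name
    Description="Only allow S3 bucket deletion in the approved region",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the policy to an organizational unit (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)
```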

By taking a responsible look at new cloud services in light of what we have learned in the past, and by using the frameworks now available in the cloud, organizations can confidently embrace cloud storage services: not only answering the right-platform question, but also managing the storage in a way that lets CIOs and decision makers sleep at night.




Troubleshooting Network Performance in Cloud Architectures


Troubleshooting within public or hybrid clouds can be a challenge when end users begin complaining of network and application performance problems. The loss of visibility into the underlying cloud network renders some traditional troubleshooting methods and tools ineffective, so we must come up with alternative ways to regain that visibility. Let’s look at five tips on how to better troubleshoot application performance in public cloud or hybrid cloud environments.

Tip 1: Verify the application and all services are operational from end to end

The first step in the troubleshooting process should be to verify that the cloud provider is not having an issue on their end. Depending on whether your service uses a SaaS, PaaS or IaaS model, the verification process will change. For example, the Salesforce SaaS platform has a status page where you can see whether any incidents, outages or maintenance windows may be impacting your users.

Also, don’t forget to check other dependent services that can impact access to cloud services or their performance. Services such as DHCP and internal/external DNS are common dependencies that can cause problems, making it look like there is something wrong with the network. Depending on where the end user connects from in relation to the cloud application they are trying to access, the DHCP and DNS servers used will vary greatly. Verifying that end users are receiving proper IP addresses and can resolve domains correctly can save a great deal of time and headaches. A quick sketch of such a check follows.
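As a small example of that kind of dependency check, the sketch below resolves a hostname and then times a TCP connection to it, which is often enough to separate a DNS problem from a network path problem. The hostname is a placeholder for whatever cloud service your users actually reach.

```python
import socket
import time

HOST = "app.example.com"   # placeholder for the cloud service your users access
PORT = 443

# Step 1: can we resolve the name at all, and what does it resolve to?
try:
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(HOST, PORT)})
    print(f"{HOST} resolves to {addresses}")
except socket.gaierror as exc:
    raise SystemExit(f"DNS resolution failed for {HOST}: {exc}")

# Step 2: how long does a TCP handshake to the service take?
start = time.monotonic()
with socket.create_connection((HOST, PORT), timeout=5):
    elapsed_ms = (time.monotonic() - start) * 1000
print(f"TCP connect to {HOST}:{PORT} completed in {elapsed_ms:.1f} ms")
```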

Tip 2: Review recent network configuration changes

If a performance problem with a cloud app seemingly crops up out of nowhere, it’s likely that a recent network change is to blame. On the corporate LAN, review recent firewall, NAT or VLAN adds and changes to confirm they didn’t inadvertently cause an outage for a portion of your users. The same types of network changes should also be verified within IaaS clouds.

QoS or other traffic shaping changes can also accidentally degrade performance between the corporate LAN and remote cloud services. Automated tools can be used to verify that application traffic is being marked correctly and that those markings are honored hop by hop, from the end user as far out toward the cloud application or service as possible. A small capture-based spot check is sketched below.
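One way to spot-check markings from a monitoring host is to capture a handful of packets heading toward the application and read their DSCP values. The sketch below assumes Scapy is available and that capture privileges are in place; the destination address is a placeholder.

```python
from scapy.all import sniff, IP  # requires scapy and capture privileges (e.g., root)

APP_HOST = "203.0.113.10"   # placeholder address of the cloud application

def show_dscp(pkt):
    """Print the DSCP value carried by each captured packet toward the app."""
    if IP in pkt:
        dscp = pkt[IP].tos >> 2   # upper six bits of the TOS byte are the DSCP
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  DSCP={dscp}")

# Capture 20 packets destined for the application and report their markings.
sniff(filter=f"dst host {APP_HOST}", prn=show_dscp, count=20)
```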

Tip 3: Use traditional network monitoring and troubleshooting tools

Depending on the cloud architecture model you’re using, traditional network troubleshooting tools can be more or less effective when troubleshooting performance degradation. For instance, if you use IaaS such as AWS EC2 or Microsoft Azure, you have enough visibility to use most network troubleshooting and support tools such as ping, traceroute, and SNMP. You can even stream NetFlow/IPFIX data to a collector, or run packet captures in a limited fashion; a trivial latency probe you can run from an IaaS instance is sketched below. However, when troubleshooting PaaS or SaaS cloud models, these tools become far less useful, and you end up having to trust your service provider that everything is operating as it should on their end.
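For example, a lightweight probe running on an IaaS instance can trend latency toward a dependency over time. The sketch below shells out to the system ping command (assuming a Linux-style ping is installed and permitted in your image) against a placeholder target.

```python
import re
import subprocess
import time

TARGET = "10.0.2.15"   # placeholder: a dependency reachable from the IaaS instance

def ping_once(host):
    """Return the round-trip time in ms for a single ping, or None on failure."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],   # Linux-style flags assumed
        capture_output=True, text=True,
    )
    match = re.search(r"time=([\d.]+) ms", result.stdout)
    return float(match.group(1)) if match else None

# Take a handful of samples; feed readings like these into your existing monitoring.
for _ in range(6):
    rtt = ping_once(TARGET)
    print(f"{time.strftime('%H:%M:%S')}  rtt={rtt if rtt is not None else 'timeout'} ms")
    time.sleep(10)
```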

Tip 4: Use built-in application diagnostics and assessment tools

Many enterprise applications have built-in or supplemental diagnostic tools that IT departments can use for troubleshooting purposes. These tools often provide detailed information that helps you determine whether performance is an application-related issue or a problem with the network or infrastructure. For example, if you’re having issues with Microsoft Teams through Office 365, you can test and verify end-to-end network performance using the Skype for Business Network Assessment Tool. Although this tool is most commonly used to verify whether Teams is a viable option before deployment, it can also be used post-deployment for troubleshooting purposes.

Tip 5: Consider SD-WAN built-in analytics or pure-play network analytics tools

Network analytics tools and platforms are the latest way for administrators to troubleshoot network and application performance problems. Network analytics platforms collect streaming telemetry and network health information using several methods and protocols. All data is then combined and analyzed using artificial intelligence (AI). The results of the analysis help pinpoint areas on the corporate network or cloud where network performance problems are occurring.

If you have extended your SD-WAN architecture to the public cloud, you can leverage the myriad analytics components that are commonly included in these platforms. Alternatively, there are a growing number of pure-play vendors that sell multi-vendor network analytics tools that can be deployed across entire corporate LANs and into public clouds. While these two methods can be expensive and more complicated to deploy initially, they have been shown to speed up performance troubleshooting and root cause analysis dramatically.




As Cloud Services Evolve, What’s Next?


It’s no exaggeration to say that, since its inception, cloud computing has become one of the pillars on which modern society is built. Yet while the concept of the cloud has fully entered the popular imagination (most people associate it with digital storage services like Google Drive or Dropbox), in truth, we have only scratched the surface of cloud computing’s potential.

But simply storing documents for simultaneous access is only one facet of the cloud, and arguably not even the most important one. In fact, just as cryptocurrency combined several existing technologies to create a new, profitable whole, so too will cloud computing form the backbone of something new.

What’s next for cloud computing?

It seems clear that the next milestone for cloud will be mixed realities (MR), virtual reality (VR), and augmented reality (AR). One possibility includes virtual conferencing; in contrast to video conferences, where several participants are splashed across a screen, a VR (or AR) meeting allows people to sit together in a conference room. Rather than talking over each other or misreading social cues, attendees can carry on a meeting as if they were physically present in the same room, allowing for more productive (and less tense) gatherings.

Another possibility is a Blockchain-based cloud. Combining the two is a logical step: the system would feature the security of blockchain’s tamper-resistant record, as well as the ease and convenience of cloud computing. In many ways, the two are a perfect match for each other. Like the cloud, blockchain is decentralized, as it relies on a network of computers to verify transactions and continually update the record. Dispersing cloud-based blockchain technologies could lead to more secure record-keeping in such vital areas as global finance and manufacturing, where transparency is difficult to come by.

Smart cities are also likely to see significant boosts from cloud computing in the near future. Cloud computing would connect with Internet of Things (IoT) devices to allow for improvements like intelligent traffic and parking management, lower-cost regulation of power and water, and optimization of other automated devices. Smart cities can lead to greater scalability of cloud-based computing, which can, in turn, make it easier to create common smart city services that can be reused and implemented across other cities.

The edge and the cloud: rivals or friends?

While cloud computing is still considered a relatively new technology, many experts also believe that it will give way to edge computing, which looks to reduce latency and connectivity costs by keeping relevant data as close to its source as possible. While this might make it seem as though the newer technology will trump cloud computing as a whole, edge computing is preferred for systems with specialized needs that require lower latency and faster data analysis, such as in finance and manufacturing. Cloud computing, by contrast, works well as part of a general platform or service, like Amazon Web Services, Microsoft Azure, and Google Drive.

Ultimately, we will see edge computing as a tool to work alongside cloud computing in furthering our technological capabilities. Modern cloud computing hasn’t been around for very long and still has much room for growth. Instead of one form of computing replacing another in order to handle data and the Internet of Things (IoT), they work together to optimize computing and processing performance. As we continue to develop new technologies, both cloud and edge computing will become just two of the many ways we will be able to optimize and effectively navigate our highly interconnected world.

From its conception as an amorphous database of information accessible from any computer on a certain network, to its future incarnations as mediums for mixed realities and blockchain, to the addition of new technologies that work with the cloud like edge computing, the cloud has certainly come a long way in a short time. It’s easy to see that the future of the cloud is bright, and cloud computing is only going to become more capable as we move forward.

 




The Missing Piece in Cloud App Security


As the economy improves, the workforce becomes more mobile. It has become quite common for employees to take more than their potted plants with them when they leave. They take confidential company data, too – and the majority see nothing wrong with it, even though it is a criminal offense. Failing to properly secure this data leaves companies open to the loss of customers and competitive advantage.

Organizations can increase trust by driving bad actors out and improving their overall security posture if they have better visibility into insider threats. Below are the top five events that organizations monitor cloud applications for, and how monitoring them promotes good security hygiene within a company.

1. Exported data

Users can run reports on nearly anything within Salesforce, from contacts and leads to customers, and those reports can be exported for easy reference and analysis. That means employees can extract large amounts of sensitive data from Salesforce and other cloud applications simply by exporting reports.

This is a helpful feature for loyal employees, but in the hands of others, such data extractions can make a company vulnerable to data theft and breaches. Departing employees may choose to export a report of customers, using the list to join or start a competitive business.

Companies are not helpless, though. Organizations can monitor for exports (a monitoring sketch follows this list) to:

— Protect sensitive customer, partner and prospect information, increasing trust with your customers and meeting key regulations and security frameworks (e.g., PCI-DSS).

— Easily detect team members who may be stealing data for personal or financial gain and stop the exfiltration of data before more damage occurs.

— More quickly spot and remediate the activity, reducing the cost of a data breach.

— Spot possible instances of compromised credentials and deactivate compromised users.
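As a sketch of what export monitoring can look like in practice, the snippet below uses the simple_salesforce library (an assumption, as is having Event Monitoring enabled in your org) to pull yesterday's report export log records from the standard EventLogFile object. Credentials are placeholders and belong in a secrets store, not in code.

```python
from simple_salesforce import Salesforce

# Credentials are placeholders; in practice pull them from a secrets store.
sf = Salesforce(username="admin@example.com", password="...", security_token="...")

# EventLogFile records describe daily logs; ReportExport events capture report exports.
soql = (
    "SELECT Id, EventType, LogDate, LogFileLength "
    "FROM EventLogFile "
    "WHERE EventType = 'ReportExport' AND LogDate = YESTERDAY"
)

for record in sf.query(soql)["records"]:
    print(f"{record['LogDate']}  export log {record['Id']}  ({record['LogFileLength']} bytes)")
```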

2. Who is running reports

While organizations focus most of their attention on which reports are being exported, simply running a report can create a potential security issue. The principle of least privilege dictates that people be given only the minimum permissions necessary to do their job, and that applies to the data they can view. But many companies grant broad access across the organization, even to those whose job does not depend on viewing specific sensitive information.

By paying attention to top report runners, report volume and which reports have been run, you can track instances where users might be running reports to access information that’s beyond their job scope. Users may also be running – but not necessarily exporting – larger reports than they normally do or than their peers do.

In addition, you can monitor for personal and unsaved reports, which can help close any security vulnerability created by users attempting to exfiltrate data without leaving a trail. Whether it’s a user who is attempting to steal the data, a user who has higher access levels than necessary, or a user who has accidentally run the report, monitoring for report access will help you spot any additional security gaps or training opportunities.
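To illustrate, the sketch below downloads yesterday's Report event log (again assuming simple_salesforce and Event Monitoring) and counts report executions per user; the USER_ID column name follows the standard Report event log layout.

```python
import csv
import io
from collections import Counter

import requests
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="...", security_token="...")

logs = sf.query(
    "SELECT Id, LogFile FROM EventLogFile "
    "WHERE EventType = 'Report' AND LogDate = YESTERDAY"
)["records"]

runs_per_user = Counter()
for log in logs:
    # LogFile holds the REST path to the raw CSV body of the event log.
    url = f"https://{sf.sf_instance}{log['LogFile']}"
    body = requests.get(url, headers={"Authorization": f"Bearer {sf.session_id}"}).text
    for row in csv.DictReader(io.StringIO(body)):
        runs_per_user[row["USER_ID"]] += 1

for user_id, count in runs_per_user.most_common(10):
    print(f"{user_id}: {count} report runs")
```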

3. Location and identity of logins

You can find some hidden gems of application interaction by looking at login activity. Terminated users who have not been properly deprovisioned may still be able to access sensitive data after their employment ends or after a contract with a third party concludes. Login activity can also tell you a user’s location, hours, devices and more, all of which can uncover potential security incidents, breaches or training opportunities.

By monitoring for inactive users logging in, then, companies can protect data from theft by a former employee or contractor. Login activity can also tell you whether employees are logging in after hours or from a remote location. This may simply indicate an employee working overtime, but it may also be a red flag for a departing employee stealing data after hours, or for compromised credentials. A sketch of such a check follows.
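A sketch of that kind of check, using Salesforce's standard LoginHistory and User objects via simple_salesforce (credentials and the seven-day lookback are placeholders):

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="...", security_token="...")

# Flag successful logins in the last 7 days by users who are marked inactive.
inactive_ids = {
    r["Id"] for r in sf.query("SELECT Id FROM User WHERE IsActive = false")["records"]
}

logins = sf.query(
    "SELECT UserId, LoginTime, SourceIp, Status "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:7"
)["records"]

for login in logins:
    if login["UserId"] in inactive_ids and login["Status"] == "Success":
        print(f"Inactive user {login['UserId']} logged in at {login['LoginTime']} "
              f"from {login['SourceIp']}")
```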

4. Changes to profiles and permissions

There are profiles and permissions within cloud applications that regulate what a user can and cannot do. For example, in Salesforce, every user has one profile but can have multiple permission sets. The two are usually combined by using profiles to grant the minimum permissions and access settings for a specific group of users, then permission sets to grant more permissions to individual users as needed. Profiles control object, field, app and user permissions; tab settings; Apex class and Visualforce page access; page layouts; record types; and login hours and IP ranges.

Permissions for each application vary at each organization. In some companies, all users enjoy advanced permissions; others use a conservative approach, granting only the permissions that are necessary for that user’s specific job roles and responsibilities. But with over 170 permissions in Salesforce, for instance – and hundreds or thousands of users – it can be difficult to grasp the full scope of what your users can do in that application.
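One way to get a handle on that scope is to enumerate who holds the broadest permissions. The sketch below, again assuming simple_salesforce and placeholder credentials, lists the users assigned any permission set that grants Modify All Data, using the standard PermissionSet and PermissionSetAssignment objects.

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="...", security_token="...")

# Permission sets (including profile-owned ones) that grant the broad Modify All Data right.
risky = sf.query(
    "SELECT Id, Label FROM PermissionSet WHERE PermissionsModifyAllData = true"
)["records"]

for ps in risky:
    assignments = sf.query(
        "SELECT Assignee.Username FROM PermissionSetAssignment "
        f"WHERE PermissionSetId = '{ps['Id']}'"
    )["records"]
    users = sorted(a["Assignee"]["Username"] for a in assignments)
    print(f"{ps['Label']}: {len(users)} user(s) -> {users}")
```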

5. Creating or deactivating users

Managing users includes being able to create and deactivate their accounts. Organizations can monitor for deactivation, which, if not done properly after an employee leaves the organization, may result in an inactive user retaining access to sensitive data or an external attacker getting hold of their still-active credentials. For this and other cloud applications, a security issue may also arise when an individual with administrative permissions creates a “shell,” or fake, user under which they can steal data; after the fact, they can deactivate that user to cover their tracks.

Monitoring for user creation is another way that security teams watch for any potential insider threats. And by keeping track of when users are deactivated, you can run a report of deactivated users within a specific time frame and correlate them with your former employees (or contractors) to ensure proper deprovisioning. Monitoring for creation and/or deactivation of users is also required by regulations like SOX and frameworks like ISO 27001.
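A sketch of one such check, using the standard User object via simple_salesforce to surface recently created users and recently deactivated ones for correlation with your offboarding records (credentials and the 30-day window are placeholders):

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="...", security_token="...")

# Users created in the last 30 days: review who created them and why.
created = sf.query(
    "SELECT Username, CreatedDate, CreatedBy.Username FROM User "
    "WHERE CreatedDate = LAST_N_DAYS:30"
)["records"]

# Users recently modified and now inactive: correlate with your HR offboarding list.
deactivated = sf.query(
    "SELECT Username, LastModifiedDate FROM User "
    "WHERE IsActive = false AND LastModifiedDate = LAST_N_DAYS:30"
)["records"]

print("Recently created users:")
for u in created:
    print(f"  {u['Username']} (by {u['CreatedBy']['Username']} on {u['CreatedDate']})")

print("Recently deactivated users:")
for u in deactivated:
    print(f"  {u['Username']} (last modified {u['LastModifiedDate']})")
```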

Monitor for greater insight

You can’t defend against what you can’t see. With the widespread adoption of cloud applications, businesses are seeing an enormous uptick in user activity that is simultaneously harder to keep track of. Consequently, many organizations are looking for ways to increase visibility into how users are using these applications and the data within them. Monitoring the specific activities detailed above will help organizations increase visibility and keep data safe and secure.

 




Every Cloud Has a Silver Lining


Back in 1634 the optimist’s favorite saying was born out of a quote in John Milton’s Comus. His eloquent phrasing has become known to most of us as “every cloud has a silver lining.”

The proverbial optimism expressed in this idiom is almost ironic in today’s digital world, considering the role the cloud now plays with respect to data privacy and integrity.

Consider how easy cloud has made it to collect, process, and store large amounts of data. Capacity and processing power alone have made cloud the de facto choice for applications targeting consumer interactions. This has been great for business, but terrible for privacy because “the business” extends from management to developers and then stops.

Unfortunately, cloud deployments have often lacked the traditional network, system and security operations teams that would have fought for the architectures and controls that could have prevented every cloud breach our team of researchers at F5 Labs examined. How, you wonder? Because systems deployed in the cloud are being breached through the most basic failures. My favorite is the absence of operational security controls, otherwise known as “open access”: no credentials are required to reach an operational console, and anyone can play if they know where the system lives.

Another favorite is the deliberate elimination of security controls on cloud-native storage systems. Typically, these controls are removed early on to facilitate faster development and testing. Sadly, the controls are never restored to a secure state, leaving buckets of data wide open for anyone with the ability to find them. A quick audit sketch follows.
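A quick way to check for that failure in AWS, as one example, is to walk your S3 buckets and flag any whose ACLs grant access to everyone. The sketch below uses boto3 and looks only at bucket ACLs; a thorough review would also cover bucket policies and the account-level public access block settings.

```python
import boto3

s3 = boto3.client("s3")

# Grantee URIs that mean "anyone on the internet" or "any authenticated AWS user".
PUBLIC_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    open_grants = [
        g["Permission"]
        for g in acl["Grants"]
        if g["Grantee"].get("URI") in PUBLIC_URIS
    ]
    if open_grants:
        print(f"WIDE OPEN: {name} grants {sorted(set(open_grants))} to everyone")
```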

So, where’s the ‘silver lining’ in all this? On the consumer side, we are being given great visibility into the massive amounts of data about each of us being collected, who uses it, and for what purpose. If it weren’t for the cloud and the often-poor security practices that go along with it, we might never have known about middlemen like validators.

If you haven’t received a notification about the verifications.io breach, you might be new to the Internet. Over 750 million (and they think there’s more) unique email addresses were exposed in February 2019 by the email address validation service. You probably didn’t realize they had access to your data, because they operate behind the scenes on behalf of other businesses. But every time you get an email to ‘verify your email address’ upon signing up for a service, it’s likely verifications.io sent it. And apparently, they collected it – and data used to verify it – on their own systems. 

As consumers, we can shout and write letters and demand this situation be addressed. Aside from living off-grid, there isn’t much more we can do about it.

But businesses can and should do more about it. Not just to protect our privacy, but to ensure data integrity.

See, if the data is accessible by anyone, that doesn’t just imply read access; it implies potential write access. Most folks out there are scooping up our data to turn a quick buck, but eventually someone is going to turn that around and dirty up your data, or just delete it. That risk is real, and because of the growing dependence of business on data to make decisions, it has increasingly damaging repercussions.

In the near future the majority of businesses will be data-driven. Their business and operational decisions will increasingly be made automatically by machines based on the zettabytes of data they hoard like dragons. Imagine losing it all in one simple command, executed by an unknown actor who had access because security practices were ignored or forgotten in the rush to release to market.

Operational and security ‘gates’ (checkpoints) exist to protect data from infiltration, infection, and exfiltration. Skipping them to gain speed is dangerous not only to your customers but to the business. At a minimum, you need to enforce two simple steps:

Lock the door: This is real-life translated to the digital world. Leaving a door unlocked in some neighborhoods is an invitation to come inside. In the cloud, that’s just as true. Make sure that every web, app, database, middleware, container orchestration, and storage system or service requires credentials to access administrative consoles.

Hide the key: You might hide a spare key somewhere outside just in case you lose your own keys, but you don’t leave it on top of the doormat or hanging in plain sight next to the door. So don’t hardcode credentials and other secrets (like keys and certs), and don’t store them publicly. If you use a repository, remember it’s not a key management store. Put best practices for managing credentials and keys into place, lest you end up on a list with Uber. A small sketch of pulling secrets from a managed store follows.
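As one illustration of keeping secrets out of code and repositories, the sketch below pulls a database credential from AWS Secrets Manager at runtime. The secret name is a placeholder and Secrets Manager is just one option; any managed vault, or platform-injected environment variables, follows the same principle.

```python
import json
import boto3

def get_db_credentials(secret_name="prod/app/db"):   # placeholder secret name
    """Fetch a credential from AWS Secrets Manager instead of hardcoding it."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])      # e.g. {"username": ..., "password": ...}

creds = get_db_credentials()
# Hand the values straight to your database driver; never write them to logs or source control.
print(f"Connecting as {creds['username']} (password withheld from output)")
```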

Every cloud does have a silver lining. In the case of cloud-deployed systems that have exposed our data, that silver lining is that we know more about where and how these breaches occur. It’s an opportunity for the business to stand back and re-evaluate not just its own security practices, but those of its partners and suppliers of digital services.

But above all, make sure your cloud security practices exist, and put them into place if they don’t.

 


