
Rethinking IT: Tech Investments that Drive Business Growth


Everybody knows they need to change their old technology stack and processes. But is it enough to upgrade your current systems and maybe add an analytics or AI center of excellence to handle the new initiatives? Probably not, according to a new report from consulting firm Accenture.

To get the full value out of technology investments, organizations need to focus on investing in what Accenture calls “Future Systems.” Generally speaking, these future systems are a rethinking of the architecture and technology a company needs to move quickly and take advantage of tomorrow’s opportunities. They use technologies like cloud, along with end-to-end data pipelines that are flexible enough for tomorrow’s AI and other projects.

“Today’s C-suite is investing staggering amounts of money in new technology, but not every company is realizing the benefits of innovation as a result of those investments,” said Bhaskar Ghosh, group chief executive, Accenture Technology Services, in a statement. “Competing in today’s data-driven, post-digital economy means organizations need to have a carefully calibrated strategy toward technology adoption and a clear vision for what their companies’ future systems should look like.”

Do it right and you will grow at double the rate of your less progressive peers.

Read the rest of this article on InformationWeek.

Related Network Computing articles:

Balancing Risk and Innovation with IT Strategy

How to Build Secure Networks that are Both Agile and Customizable




How Software-Defined Storage Can Empower Developers to Increase Business Value


Software developers are now among the most strategic assets of any organization. In today’s fast-paced world, the speed at which one can develop new applications and microservices can dictate whether a company gets to market first or can respond effectively to a sudden competitive move or market shift. In other words, developers are having an unprecedented and direct impact on companies’ – and industries’ – fortunes.

This reality is supported by a 2018 Stripe and Harris Poll study, which predicts that software developers’ skillsets alone could add $3 trillion to global GDP over the next decade. Accordingly, 61 percent of C-suite respondents to that study believe that a lack of access to developer talent is a threat to the success of their business.

Freeing developers to work faster and be more productive

Not surprisingly, organizations aren’t just trying to keep developers focused on what they do best: creating, solving problems, and innovating – they’re also trying to increase their productivity.

Yet, despite the evolving appreciation for developers’ talents, the same study found that many companies are misusing their most important resource. A significant proportion of developers’ time is spent maintaining aging legacy systems and patching bad software – to the tune of approximately $300 billion per year, with nearly $85 billion spent addressing bad code alone.

As such, the role of the application architect has emerged in this new world of hybrid platforms to ensure developers’ code runs smoothly, interacts with other services, and makes efficient use of data, regardless of where it is created or consumed.

Meanwhile, development teams are gaining more authority from line-of-business managers who realize that their organizations need to harness the immense amount of data they’re collecting and use it for competitive advantage. They want to give developers the ability to provision and deprovision resources as they need them, and to develop applications faster than ever before. These managers are prepared to invest in tools that can enable their teams’ success.
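
As a rough sketch of what that self-service model looks like, the Python example below uses the AWS boto3 SDK to provision and later deprovision a compute instance. The AMI ID, instance type, and tags are placeholders, and any cloud or on-premises provisioning API could play the same role.

```python
# Hypothetical self-service provisioning sketch using boto3 (the AWS SDK for
# Python). The AMI ID, instance type, and tags are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_dev_instance(owner: str) -> str:
    """Launch a short-lived development instance tagged with its owner."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "owner", "Value": owner}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

def deprovision_dev_instance(instance_id: str) -> None:
    """Tear the instance down as soon as the work is finished."""
    ec2.terminate_instances(InstanceIds=[instance_id])
```

Tagging each instance to its owner is one simple way managers can grant this freedom while still tracking who is consuming what.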

The strategic role of storage in agile development

The reality is that developers don’t have time to wait for traditional IT anymore. They need tools and technologies that allow them to work at speed, in an agile manner – supporting, for example, rapid experimentation or the deployment of artificial intelligence (AI), machine learning (ML), and deep learning within their applications.

New methods of accelerating value through application development have emerged in the past few years. While pure public cloud strategies can be quick to deploy, they often cannot meet the performance or governance requirements of more specialized deployments. Hybrid cloud strategies, which architect applications to make the best use of resources across multicloud, on-premises, remote-site, and even device-edge environments, are enabling organizations to act on data streams at every point in the workflow, greatly shortening time to value.

Cloud-native application development has grown from largely stateless apps to more stateful applications within distributed systems, requiring the ability to rebalance data, auto-scale, and perform seamless upgrades, all of which become far easier with persistent, reliable storage.
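
To make that concrete, here is a minimal sketch, assuming a Kubernetes cluster and the official kubernetes Python client, of how a stateful application might request persistent storage through a PersistentVolumeClaim instead of binding to a specific array. The claim name, namespace, size, and storage class are illustrative only.

```python
# Minimal sketch: requesting persistent storage for a stateful app on
# Kubernetes. Assumes a reachable cluster; all names and sizes are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="sds-replicated",  # hypothetical software-defined class
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

Because the claim names a storage class rather than a device, the underlying software-defined layer is free to rebalance or replicate the data without the application changing.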

Exploiting data for competitive advantage

In addition to the flexibility it offers, software-defined storage can help organizations to better harness the value of data, including the continual stream of information and insights gleaned through their applications. Developers and data scientists need to be able to constantly extract, analyze, and react to data to maintain agility, and they can do that more easily with software-defined storage.

Whereas the siloed nature of traditional storage arrays and appliances can inhibit access to data, containerized, open source storage environments facilitate access regardless of whether data is stored on-premises, at a remote site, at the edge, or in a public cloud or multicloud environment.
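
One common pattern, sketched below under the assumption of S3-compatible gateways such as those Ceph or MinIO expose, is that the same client code reads the same bucket wherever it lives; only the endpoint URL changes. The endpoints and credentials are placeholders.

```python
# One client, many locations: read a bucket through any S3-compatible endpoint
# (an on-premises Ceph/MinIO gateway, an edge site, or a public cloud).
# Endpoint URLs and credentials are placeholders.
import boto3

def open_store(endpoint_url: str):
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # the only thing that changes per location
        aws_access_key_id="PLACEHOLDER",
        aws_secret_access_key="PLACEHOLDER",
    )

for endpoint in ("https://s3.onprem.example.local",
                 "https://s3.edge-site.example.local"):
    s3 = open_store(endpoint)
    for obj in s3.list_objects_v2(Bucket="telemetry").get("Contents", []):
        print(endpoint, obj["Key"], obj["Size"])
```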

Choosing an IT environment conducive to innovation

This raises a related but important point: many organizations believe the silver bullet for enterprise agility lies in the public cloud. In some cases this is true, but the public cloud can pose a series of challenges of its own, and the sum of the “fixes” for those challenges can be costly.

It’s no coincidence that there has been an upsurge in open source container-orchestration systems for application deployment, scaling, and management. Embracing hybrid cloud architecture enables organizations to create flexible infrastructure that suits their diverse business and governance requirements – helping them control costs without sacrificing agility.

Developers must differentiate themselves to stay competitive

Today’s developers are being given unfettered access to the tools and technologies they need to drive innovation and are visibly pushing their organizations and industries forward.

Attracted by growing career opportunities in software and application development, newcomers are flocking to the field – further increasing the pressure on the developer community.

Survival in this highly competitive environment is no small feat. Learning how to differentiate oneself and drive industry disruption consistently takes a high level of skill and determination. Equally, a successful developer needs infrastructure, services, and storage-native solutions that can match the speed of development.



Can You Hear Me Now? Staying Connected During a Cybersecurity Incident


We all know that communication is important. Anyone who’s ever been married, had a friend, or held a job knows that’s true. While good communication is pretty much universally beneficial, there are times when it’s more so than others. One such time? During a cybersecurity incident.

Incident responders know that communication is paramount. Even a few minutes might mean the difference between closing an issue (thereby minimizing damage) and allowing a risky situation to persist longer than it needs to. In fact, communication, both within the team and externally with different groups, is one of the most important tools at the response team’s disposal.

This is obvious within the response team itself. After all, there is a diversity of knowledge, perspective, and background on the team, so the more eyes you have on the data and information, the more likely someone is to find and highlight pivotal information. It’s also true with external groups.

For example, outside teams can help gather important data to assist in resolution: either technical information about the issue or information about business impacts. Likewise, a clear communication path with decision makers can help “clear the road” when additional budget, access to environments/personnel, or other intervention is required.

What happens when something goes wrong? That is, when communication is impacted during an incident? Things can get hairy very quickly. If you don’t think this is worrisome, consider the past few weeks, which brought two large-scale disruptions impacting Cloudflare (rendering numerous sites inaccessible) and a disruption in Slack. If your team relies on either cloud-based correspondence tools dependent on Cloudflare (of which there are a few) or Slack itself, the communication challenges are probably still fresh in your mind.

Now imagine that every communication channel you use for normal operations is unavailable. How effective do you think your communication would be under those circumstances?

Alternate Communication Streams

Keep in mind that the middle of an incident is exactly when communications are needed most, but it is also (not coincidentally) the point when they are most likely to be disrupted. A targeted event might render critical resources like email servers or ticketing applications unavailable; a wide-scale malware event might leave the network itself overburdened with traffic, potentially impacting both VoIP and other networked communications.

The point? If you want to be effective, plan ahead for this. Plan for communication failure during an incident just as you would prepare the business itself for something like a natural disaster. Think through how your incident response team will communicate with other geographic regions, distributed team members, and key resources should an incident render normal channels nonviable.

In fact, it’s often a good idea to have a few different options for “alternate communication channels” that will allow team members to communicate with each other depending on what is impacted and to what degree.

The specifics of how and what you’ll do will obviously vary depending on the type of organization, your requirements, cultural factors, etc. However, a good way to approach the planning is to think through each of the mechanisms your team uses and come up with at least one backup plan for each.

If your team uses email to communicate, you might investigate external services that are not reliant on internal resources but maintain a reasonable security baseline. For example, you might consider external cloud-based providers like ProtonMail or Hushmail.

If you normally use VoIP, think through whether it makes sense to issue prepaid cellular or satellite phones to team members (or at least have a few on hand) in the event that voice communications are impacted. In fact, supplementing voice services with external cellular or satellite links can also provide an alternate network connectivity path, which could be useful if network connectivity is slow or unavailable.
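
One way to make the backup-plan-per-channel idea operational is an ordered fallback chain that the team exercises during drills. The sketch below is illustrative only; the webhook URL, mail relay, and addresses are hypothetical stand-ins for whatever primary and alternate channels your plan actually names.

```python
# Illustrative notification fallback chain: try the primary channel first,
# then each alternate in order. All endpoints below are hypothetical.
import smtplib
from email.message import EmailMessage

import requests

def notify_chat(text: str) -> None:
    # Primary: a chat webhook (e.g., a Slack-style incoming webhook).
    resp = requests.post("https://chat.example.com/hooks/ir-team",
                         json={"text": text}, timeout=5)
    resp.raise_for_status()

def notify_email(text: str) -> None:
    # Alternate: an external mail relay that doesn't depend on internal systems.
    msg = EmailMessage()
    msg["Subject"] = "IR notification"
    msg["From"] = "ir-bot@example.com"
    msg["To"] = "ir-team@example.com"
    msg.set_content(text)
    with smtplib.SMTP("mail.external-relay.example.com", 587) as smtp:
        smtp.starttls()
        smtp.send_message(msg)

def notify(text: str) -> str:
    """Return the name of the first channel that succeeded."""
    for name, channel in (("chat", notify_chat), ("email", notify_email)):
        try:
            channel(text)
            return name
        except Exception:
            continue  # fall through to the next channel
    raise RuntimeError("all channels failed; fall back to out-of-band phones")
```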

Planning Routes to Resources and Key External Players

The next thing to think through is how responders will gain access to procedures, tools, and data in the event of a disruption. For example, if you maintain documented response procedures and put them all on the network where everyone can find them in a pinch, that’s a great start… but what happens if the network is unavailable or the server it’s stored on is down? If it’s in the cloud, what happens if the cloud provider is impacted by the same problem or otherwise can’t be reached?

Just as you thought through and planned alternatives for how responders will communicate during an event, think through what they’ll need to communicate and how they’ll get to the important resources they’ll need.

In the case of documents, this might mean maintaining a printed book somewhere that responders can physically access; in the case of software tools, it might mean keeping copies stored on physical media (a USB drive, CD, etc.) that they can get to should they need it. The specifics will vary, but think it through systematically and prepare a backup plan.

Extend this to key external resources and personnel your team members may need access to as well. This is particularly important when it comes to three things: access to key decision-makers, external PR, and legal.

In the first case, there are situations where you might need to bring in external resources to help support you (for example, law enforcement or forensic specialists). In doing so, waiting for approval from someone who is unavailable because of the outage or otherwise difficult to reach puts the organization at risk.

The approver either needs to be immediately reachable (potentially via an alternate communication pathway as described above) or, barring that, have provided approval in advance (for example, preapproval to spend money up to a given spending threshold) so that you’re not stuck waiting around during an event.

The same is true for external communications. You don’t want to find that your key contact points and liaisons (for example, to the press) are MIA when you need them most. Lastly, it is very important to have access to legal counsel, so make sure your alternate communication strategy includes a mechanism to reach internal or external counsel should you require their input.

The upshot is that the natural human tendency is to overlook the fragility of dependencies unless we examine them systematically. Incident responders need to be able to continue operating effectively and sharing information even under challenging conditions.

Putting the time into thinking these things through and coming up with workarounds is important to support these folks in doing their job in the midst of a cybersecurity event.


Ed Moyle is general manager and chief content officer at Prelude Institute. He has been an ECT News Network columnist since 2007. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as author, public speaker and analyst.






The Building − and Business − Behind 5G Services


The allure of 5G is undeniable, but operators have a long way to go to justify wide-scale 5G service deployments in the U.S.

The initial deployment and trial plans for 5G services in the U.S. announced late last year and at the Mobile World Congress conference earlier this year grabbed headlines. But an aggressive business and technology plan will be required for operators to truly move the needle in 2019.

What factors go into the rollout of 5G for service providers? And what blanks need to be filled in?

Infrastructure

Spectrum: Though U.S. carriers have already spent big on spectrum to support super-fast 5G services, some need more for broader deployment. Verizon and AT&T claim they’re all set, while Sprint and T-Mobile would improve their situation by merging.

Antennas: 5G services are being built using small cell technology because the antennas can support many hundreds of devices – and at far higher speeds. Smaller cells are required because 5G uses far higher radio frequencies that cover shorter distances than 4G systems. As a result, expect to see far more – and much smaller – 5G antennas. They can be located as close as 500 feet apart.

A small cell architecture will be used to create centralized radio access networks, provide fiber-to-the-antenna connections, and handle heavy backhaul loads.

Also expect macro-network densification, which requires new antennas to help carriers move from 4G to 5G systems.
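
As a back-of-the-envelope illustration of what 500-foot spacing implies, the arithmetic below estimates how many small-cell sites a square mile of dense coverage might require under an idealized square-grid assumption; real counts depend on terrain, demand, and siting constraints.

```python
# Back-of-the-envelope small-cell density under an idealized square grid.
# Real deployments vary with terrain, demand, and siting rules.
FEET_PER_MILE = 5280
SPACING_FT = 500  # sites as close as 500 feet apart, per the text

sites_per_side = FEET_PER_MILE // SPACING_FT + 1  # grid points along one mile
sites_per_square_mile = sites_per_side ** 2

print(sites_per_side)         # 11
print(sites_per_square_mile)  # 121 -> on the order of 100+ sites per square mile
```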

Wired Network Spending

Wired network infrastructure will need to be ratcheted up significantly to support mobile cellular 5G services.

Carriers need to invest heavily in their wireline infrastructure, which means expansion – and increased density – of their fiber broadband networks. But will they be able to justify spending on broadband broadly throughout the U.S.? This raises economic and social issues from the past.

Haves and Have Nots

Network infrastructure spending has always been a far easier sell in densely populated urban areas. But what of outlying areas and sparsely populated rural regions?

It seems certain that some form of incentive will be required in any effort to bridge the Digital Divide.

The glass-half-full view is that wireless carriers will spend to enhance and expand slower-speed service in rural regions in tandem with the rollout of 5G services in big cities and other urban areas.

What’s the Plan?

With average revenue per user (ARPU) for current wireless services in continued decline, how can carriers justify big spending on 5G service deployment? The answer could be higher-priced consumer plans combined with pricey 5G phones. Both seem certainties, especially when you consider that while ARPU is decreasing, data usage is climbing fast.

Phone makers have not yet released pricing for their 5G phones. Verizon did, however, announce prices for its 5G Home offering, which is a wireless broadband Internet service, not a mobile 5G service.

Verizon 5G Home has launched in four U.S. cities. The pre-standard offering costs $50 a month (after three months free) for those with a qualifying Verizon plan; non-Verizon customers pay $70 a month. The carrier quotes typical download speeds of 300 megabits per second, with peaks approaching 940 Mbps.
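
For a rough sense of the consumer math, the sketch below computes first-year costs under two stated assumptions: the three free months apply only to qualifying Verizon customers, and non-Verizon customers pay the full rate for all twelve months.

```python
# First-year cost of Verizon 5G Home under two stated assumptions:
# qualifying customers get three free months; non-customers pay every month.
QUALIFYING_RATE = 50      # dollars per month with a qualifying plan
NON_QUALIFYING_RATE = 70  # dollars per month otherwise
FREE_MONTHS = 3

qualifying_first_year = QUALIFYING_RATE * (12 - FREE_MONTHS)  # $450
non_qualifying_first_year = NON_QUALIFYING_RATE * 12          # $840

print(qualifying_first_year, non_qualifying_first_year)
```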

The Global Challenge

The success of these services will be heavily reliant on national governments and regulators. Most notably, the speed, reach, and quality of 5G services will depend on governments and regulators supporting timely access to the right amount and type of spectrum, under the right conditions.

5G needs “a significant amount of new harmonized mobile spectrum. Regulators should aim to make available 80-100 MHz of contiguous spectrum per operator in prime 5G mid-bands (e.g. 3.5 GHz) and around 1 GHz per operator in millimeter wave bands (i.e. above 24 GHz),” a GSMA report claims.

5G also needs spectrum “within three key frequency ranges to deliver widespread coverage and support all use cases,” added the GSMA report. The three ranges are sub-1 GHz, 1-6 GHz and above 6 GHz.

Outside the U.S., making a business case for 5G rollouts is difficult. In fact, a survey of 45 telco CTOs released by McKinsey & Co., “Cutting through the 5G hype: Survey shows telcos’ nuanced views,” reveals that fewer than 20% have a commercial strategy.

Going Mobile

It appears the data traffic demand needed to help justify mobile 5G deployments exists.

Cisco’s latest VNI Global Mobile Data Traffic Forecast projects large increases in mobile data traffic:

“Although mobile data traffic had historically been a small percentage of overall global IP traffic, mobile data traffic is expected to grow at a 46 percent Compound Annual Growth Rate from 2017 to 2022, two times faster than the growth of global IP fixed traffic during the same period.” By 2022, the Cisco forecast adds, “mobile data traffic will represent 20 percent of global IP traffic.”
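
To unpack that forecast, the quick calculation below shows what a 46 percent compound annual growth rate implies over the five years from 2017 to 2022: roughly a 6.6x increase in mobile data traffic over the period.

```python
# What a 46% CAGR from 2017 to 2022 implies for cumulative traffic growth.
cagr = 0.46
years = 5  # 2017 -> 2022

growth_factor = (1 + cagr) ** years
print(round(growth_factor, 1))  # ~6.6x over the five-year period
```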
