When considering the merits of moving application workloads to an edge computing deployment, it is necessary to examine not only the technical considerations but also the drivers and overall business benefits, in particular how these apply to a given organization, application, and user base.

At its simplest level, the business decision about where to place application workloads comes down to whether you can cost-effectively deliver the application experience your users expect. Is the edge the right place for this, for your business?


Two key factors for Edge

For most organizations, the pandemic kickstarted digital transformation. Businesses have shifted more of their interactions and engagement with users to application workloads, and users now expect more from the applications they use: more features, more responsiveness, more availability.

These expectations are only getting higher, thanks to the ubiquity of mobile and SaaS apps in everyday life, and these expectations will likely never be reset. Organizations that have not embraced this new reality will inevitably fall behind.

Meeting these user experience requirements comes down to two key factors: responsiveness and application availability.

Responsiveness

Among other factors, responsiveness is a function of latency: how long it takes for data to travel from one point in a network to another. According to a recent Quadrant Strategies survey, 86% of C-suite executives and senior IT decision makers agree that low-latency applications help differentiate their organizations from the competition.

At the same time, 80% of respondents are concerned that high latency will impact the quality of their applications. More than 60% of respondents further defined low latency for mission-critical applications as 10 milliseconds or less.

Considering your user base and the latency they experience is therefore one of the main factors when deciding whether to deploy an application at the edge. Once every other factor has been optimized, the only remaining way to reduce latency is to physically move workloads and data closer to users; in other words, to the edge.

The more geographically dispersed your user base is, the more important this becomes. For a global user base, for example, centralized cloud deployments quickly become untenable as the workload scales; the only answer is edge deployment.
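The physics behind this can be made concrete. Signals in optical fiber propagate at roughly 200 km per millisecond, so the 10 ms threshold cited in the survey puts a hard geographic cap on where a server can sit. A back-of-the-envelope sketch (the propagation speed and overhead figures are assumptions, not measurements):

```python
# Back-of-the-envelope: how far away can a server be and still meet
# a given round-trip latency budget over fiber?
# Assumption: signals travel in fiber at ~200 km per millisecond (~2/3 of c).
FIBER_KM_PER_MS = 200.0

def max_one_way_distance_km(rtt_budget_ms: float, overhead_ms: float = 0.0) -> float:
    """Maximum one-way fiber distance that fits inside the round-trip
    budget, after subtracting fixed overhead (processing, queuing, last mile)."""
    propagation_ms = max(rtt_budget_ms - overhead_ms, 0.0)
    return (propagation_ms / 2.0) * FIBER_KM_PER_MS

# The 10 ms "low latency" threshold from the survey, with zero overhead:
print(max_one_way_distance_km(10))                  # 1000.0 km at best
# With an assumed 4 ms of fixed overhead, the radius shrinks quickly:
print(max_one_way_distance_km(10, overhead_ms=4))   # 600.0 km
```

Even in the best case, a 10 ms budget confines users to a radius of about 1,000 km around the server, which is why a single centralized region cannot serve a dispersed user base at that threshold.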

Availability

Availability is the flip side of the experience coin. Unfortunately, any given network will inevitably go down at some point, as the steady stream of headlines about major cloud outages and frustrated users attests.

The solution is to build redundancy and resiliency into application workloads. Centralized cloud deployments have finite resilience because they depend on a single cloud provider: when that provider's network goes down, so do the applications.

Edge deployments, on the other hand, can avoid this problem, provided the deployment is not tied to a single network operator. Workloads should be widely distributed across heterogeneous vendors so that if, or when, one fails, traffic can be routed around the failure to ensure continued application availability.
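The multi-vendor failover described above can be sketched in a few lines of client logic. The endpoint URLs are hypothetical, and `fetch` stands in for a real HTTP call with a short timeout; this is an illustrative sketch, not a real provider API:

```python
# Sketch of client-side failover across independent edge providers.
# The endpoint URLs below are hypothetical examples.
ENDPOINTS = [
    "https://edge-a.example.com/api",
    "https://edge-b.example.com/api",
    "https://edge-c.example.com/api",
]

def fetch_with_failover(endpoints, fetch):
    """Try each provider in turn with `fetch(url)` and return the first
    successful result; raise only if every provider fails."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except Exception as err:
            last_error = err  # this provider is down; move on to the next
    raise RuntimeError("all edge providers failed") from last_error
```

In production, `fetch` would wrap an HTTP client call with a short timeout; ordering the endpoint list by measured latency gives you failover and proximity routing in a single step.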


Edge Computing Cost Concerns

Determining whether the expected experience can be delivered profitably quickly turns into technical considerations around workload scalability, compute resource allocation, network operations, workload isolation, and data compliance.

There are inevitable pros and cons to different deployment strategies that need to be considered. However, all things being equal, the distributed edge beats the centralized cloud every time.

Overall, there is a strong case for a managed edge deployment service: you get edge deployment for your application workloads without the added cost of managing or operating your own distributed network.

If you decide to go in this direction, be sure to determine whether the edge vendor requires new CI/CD (continuous integration and continuous delivery) workflows and proprietary tools to support deployments. While a managed approach removes the responsibility of operating your own network, adopting new workflows, tools, and processes can disrupt your existing DevOps practices and create friction of its own.

The key consideration for Edge

Many organizations are already rushing to modernize applications with multi-cluster Kubernetes deployments, and the edge is a natural extension of this strategy, delivering significant benefits in performance, scalability, resiliency, isolation, and more.

The modern edge delivers a significantly better app experience, but only if it is simple and affordable to adopt. In other words, the key is for organizations to eliminate the difficulty of deploying applications to the distributed edge while keeping their existing containerized environments and familiar Kubernetes tooling.

Additionally, organizations that can find ways to orchestrate and scale workloads to meet real-time traffic demand and deliver cost-effective low-latency responsiveness to users – no matter how many or where they are – are most likely to reap the rewards of the distributed edge.
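One simple building block for the kind of proximity-aware orchestration described above is routing each user to the geographically closest edge region. The region names and coordinates below are hypothetical, and real routing would also weigh load and health, but a minimal sketch looks like this:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge regions (name, latitude, longitude); illustrative only.
EDGE_REGIONS = [
    ("us-east", 39.0, -77.5),
    ("eu-west", 53.3, -6.3),
    ("ap-south", 19.1, 72.9),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_region(user_lat, user_lon, regions=EDGE_REGIONS):
    """Route a user to the geographically closest edge region."""
    return min(regions, key=lambda r: haversine_km(user_lat, user_lon, r[1], r[2]))[0]

# A user near London lands on the closest (Dublin-area) region:
print(nearest_region(51.5, -0.1))  # eu-west
```

In practice this selection usually happens in DNS or anycast routing rather than application code, but the principle is the same: the more regions in the list, the shorter the worst-case distance to any user.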

In summary, the business case for edge computing is that the edge offers tremendous benefits, as long as it can be deployed relatively simply and efficiently.


About the Author:

Stewart McGrath, CEO of Section.

