Author: Credera Team
How do you know if adopting edge technology is right for your business? Edge technology is often associated with rugged environments isolated from a stable internet connection, such as offshore mining rigs or factory floors, but almost every industry can benefit from it.
In a recent episode of the Technology Tangents podcast “Back to the Edge? The Rise of Edge Computing,” Credera’s Chief Technology Officer Jason Goth and Chief Data Scientist Vincent Yates discuss why edge computing has become an increasing priority, the primary benefits and drawbacks, and how organizations should (or should not) be using it. A summary of their podcast conversation is below.
In an age where slow response times and network outages can lead to a direct loss in revenue, the edge fills a crucial role in allowing businesses to reach customers quickly. Edge technology allows leaders to optimize workload location while simplifying deployment and management, all using familiar cloud native tools and techniques.
Adding edge to an architecture combines the flexibility of cloud solutions with the benefits previously only associated with on-premises solutions, such as enhanced security and reduced latency. Continue reading to explore more about edge technology, edge taxonomy, the edge benefits for customers and businesses, and how your organization can start its edge journey.
Edge computing refers to the broad practice of moving high-performance computing physically closer to the customer (or the “edge of the network”) to reduce latency, process data quicker, and react faster to events. Housing data and computing together on location also helps with compliance with region-specific data and privacy policies.
Edge architecture resembles the on-premises server architecture the cloud sought to eliminate but differs in management and location. Where applicable, edge resources are still managed as if they were part of the cloud. This means developers can build and deploy on-premises solutions with the same tools used in the cloud, making traditional embedded tools unnecessary. Administrators can configure on-premises resources like they configure cloud resources and monitor them remotely.
The location of physical computing resources determines how connected to the cloud they need to be. This location depends on a business’s goals. Edge computing can be on-premises, regional, or anywhere in between; it does not have to be physically on site. Each “tier” of the network where a user could locate their workloads has its own set of opportunities and tradeoffs.
To help categorize these different “tiers,” the Linux Foundation and its subsidiary LF Edge created a taxonomy in their whitepaper Sharpening the Edge: Overview of the LF Edge Taxonomy and Framework. This taxonomy breaks down the different areas of the network where edge computing can reside and the benefits and tradeoffs associated with each.
The two main tiers of this taxonomy are the user edge and the service provider edge. In general, the user edge optimizes areas such as latency, bandwidth, and privacy, and the service provider edge optimizes areas such as scale and resiliency.
The service provider edge refers to smaller outposts containing cloud servers placed strategically close to highly populated areas. This allows users to specify which “region” or “zone” they want their computing to run in to reduce the physical distance between them and their workloads. Users can also configure workloads to run in multiple regions. The service provider edge can be further broken down into the access edge and regional edge to better categorize the physical location and scale of these regions.
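As an illustration of the region-selection idea above, the sketch below simply picks the service provider edge region with the lowest measured round-trip latency. The region names and latency figures are hypothetical placeholders, not tied to any particular cloud provider.

```python
# Illustrative only: choose the "nearest" service provider edge region
# by measured round-trip latency. Region names and numbers are made up.

def pick_region(latency_ms: dict[str, float]) -> str:
    """Return the region with the lowest measured round-trip latency."""
    return min(latency_ms, key=latency_ms.get)

# Hypothetical latency measurements from the client to each region.
measurements = {"us-east-1": 42.0, "us-west-2": 18.5, "eu-west-1": 110.3}
print(pick_region(measurements))  # prints "us-west-2"
```

In practice, providers expose this choice as a region or zone setting on each resource; the measurement-driven selection here just makes the tradeoff explicit.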
The user edge refers to hardware housed directly on site. Instead of workloads running in a nearby or centralized data center, they run on hardware located on site. Administrators then manage this hardware the same way they would manage cloud resources. The user edge can be further broken down into the constrained device edge, smart device edge, or on-premises data center edge to better categorize the computing power associated with an edge computing device; these devices can range from multiple edge servers configured in a cluster, to Wi-Fi-enabled factory-floor robots, to micro-controller-based Internet of Things (IoT) devices and sensors.
Let’s walk through a few edge computing examples to see the benefits customers and businesses receive from the user and service provider tiers.
Since cloud computing resides on servers housed in off-site facilities, end users will always experience some network latency when accessing them. This hurts customers in ways such as unresponsive websites, slow self-checkout kiosks, or complete regional outages of services like online ordering. These performance and availability risks can make critical or time-sensitive applications a poor fit for the cloud alone.
However, since the service provider edge focuses on placing computing resources close to highly populated areas, businesses can configure their computing to run in nearby geographical regions, reducing the latency customers experience when accessing their services. They can also configure services to run in multiple regions to reduce the business impact of a single regional failure. Autoscaling features offered by centralized cloud data centers are still available on the service provider edge, but at a smaller capacity.
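The multi-region resiliency described above can be sketched as a simple failover loop: try each region in a preferred order and fall back to the next on failure. The `request` function and region names below are hypothetical stand-ins for a real provider call.

```python
# Illustrative multi-region failover: try an ordered list of regions and
# fall back to the next one if a request fails. The request callable is
# injected so the pattern stays provider-agnostic.

def call_with_failover(regions, request):
    """Try each region in order; return the first successful response."""
    last_error = None
    for region in regions:
        try:
            return request(region)
        except ConnectionError as err:
            last_error = err  # region unavailable, try the next one
    raise RuntimeError("all regions failed") from last_error

# Hypothetical request that succeeds only outside the primary region.
def request(region):
    if region == "us-east-1":
        raise ConnectionError("regional outage")
    return f"served from {region}"

print(call_with_failover(["us-east-1", "us-west-2"], request))
# prints "served from us-west-2"
```

Real deployments usually push this logic into DNS- or load-balancer-level routing rather than application code, but the ordering-plus-fallback idea is the same.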
The user edge can eliminate network latency altogether by running edge computing architecture completely on-premises. Since the hardware is managed the same way as cloud resources, developers still receive the experience and benefits of cloud computing.
Staying on-premises also allows companies to run edge applications while disconnected from the cloud in a hybrid approach. Compute-intensive applications can run in the cloud and send their results down to on-premises instances once finished. The inverse is also possible: data aggregation functions can run locally to reduce the amount of real-time data transferred to the cloud. A critical edge application can run entirely locally, only reaching out to the cloud periodically, whether once a regional outage is resolved, when enough bandwidth becomes available on an inconsistent network connection, or after business hours to avoid extra network requests during normal operation. This prevents outages and latency issues from affecting the customer experience, and the services you provide using the edge appear always available and feel more responsive.
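A minimal sketch of the local-aggregation pattern described above, assuming a hypothetical `upload` hook in place of a real cloud API: readings accumulate on the device, and only a compact summary is pushed when connectivity is available.

```python
# Sketch of a hybrid edge pattern: aggregate sensor readings locally and
# only push a compact summary to the cloud when a connection exists.
# `upload` stands in for a real cloud API call (a hypothetical hook).

class EdgeAggregator:
    def __init__(self, upload):
        self.upload = upload      # callable invoked when syncing
        self.readings = []        # raw data stays on the local device

    def record(self, value: float) -> None:
        self.readings.append(value)

    def sync(self, connected: bool) -> bool:
        """Push a summary to the cloud if a connection is available."""
        if not connected or not self.readings:
            return False
        summary = {
            "count": len(self.readings),
            "mean": sum(self.readings) / len(self.readings),
        }
        self.upload(summary)      # only the aggregate leaves the edge
        self.readings.clear()
        return True

uploaded = []
agg = EdgeAggregator(uploaded.append)
for v in (21.0, 22.0, 23.0):
    agg.record(v)
agg.sync(connected=False)   # offline: data stays local
agg.sync(connected=True)    # online: summary is uploaded
print(uploaded)             # [{'count': 3, 'mean': 22.0}]
```

The key design choice is that raw readings never leave the device: the cloud sees only the aggregate, which saves bandwidth and keeps the application working through outages.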
Privacy and compliance issues are a challenge businesses face with cloud storage, especially global businesses operating in countries with different privacy laws. Through the service provider edge, an organization can guarantee the country their data is stored in by specifying what region should host those storage resources. Through the user edge, an organization can control the physical servers storing customer data and therefore directly guarantee its location as well as physical and virtual security.
On-premises solutions are traditionally expensive and difficult to maintain. Because the edge provides the same management tools as the cloud, maintenance can be performed remotely and easily: monitoring tools can track the health of an entire region’s local instances, and upkeep can be handled without on-site technicians. Data can also be replicated to the cloud for backup and long-term storage. These capabilities, paired with the decreasing cost of hardware, make the edge an affordable option for businesses in any industry.
How can your business begin using the edge? Like the cloud, there are two key ways to take advantage of edge processing:
Build an edge network in house.
Buy a pre-configured edge computing solution.
We’ll walk through the details of both approaches below.
1. Build an edge network in house
On one extreme, you could build an edge network using in-house developers leveraging open-source solutions. This allows you to tailor your solution to your business case but is costly and complex: You must provision your own hardware, coordinate third-party solutions to meet your needs, and train developers to use this custom software. Options include open-source projects such as OpenStack for management software, offerings from Red Hat and EdgeX Foundry, projects from the broader LF Edge ecosystem, and edge networks such as Cloudflare for pre-existing infrastructure.
2. Buy a pre-configured edge computing solution
On the other extreme, you could buy a pre-configured edge computing solution from a popular cloud provider such as AWS, Google, Microsoft, or IBM. This approach reduces the upfront cost of resources and kickstarts the development of management solutions by providing out-of-the-box software, but it is the most limiting in terms of customization: You generally must use their solutions and can only extend your edge network to the areas where they offer services.
These solutions, however, usually integrate very well with their existing cloud native offerings. This not only allows your developers to use the same tools they have used in the past for cloud development, but also allows for advanced features such as virtual testing of local solutions in the cloud, monitoring a network of edge deployments from a single-pane-of-glass admin console, or remote management of edge locations without needing on-site technicians.
Either way, another thing to consider is the location of your computing. Placing workloads close to your customers decreases the latency they experience but increases the cost of hardware and moves those workloads further from the cloud. This can make management more difficult but allows independent operation from the cloud.
There are a lot of factors to consider when deciding if moving to the edge is right for you. At Credera, we specialize in partnering with our clients to provide a holistic evaluation of your business as well as the resources and expertise to implement custom solutions to completion. If you’re interested in starting a conversation with one of our technology leaders, reach out to us at [email protected].