Edge computing: 4 common misconceptions, explained

The term “edge computing” covers a lot of ground, which leads to confusion about edge vs. cloud, edge vs. IoT, and more. Let’s clear up four misconceptions that come up repeatedly.

Edge computing is a hot topic. But that doesn’t mean it’s always well understood.


Part of the issue is that the term “edge computing” is forced to cover a lot of territory. Early on, it was perhaps most associated with the Internet of Things (IoT). Add in more traditional architectures that existed well before the edge computing term did: Think content delivery networks and traditional branch office applications. Edge has also expanded to cover new areas including telco network functions.

Edge computing can apply to anything that involves placing service provisioning, data, and intelligence closer to users and devices.

Given that context, several misconceptions in particular seem to come up over and over with respect to edge computing. Let’s clear them up.

[ Why does edge computing matter to IT leaders – and what’s next? Learn more about Red Hat's point of view. ]

Misconception 1: Edge computing replaces public clouds

One of the most persistent cycles in IT has been the tendency to alternate between centralization (e.g., the original mainframes) and decentralization (e.g., the PC). Viewed through this lens, edge computing can be seen as a pullback from the centralization of computing at public cloud providers and other large data centers.

There’s a kernel of truth in such a view. Some people did argue early on that a public cloud compute utility would subsume other forms of computing, much as centralized electric utilities displaced on-site power generation. (It’s something of an irony that since those early cloud discussions in the 2000s, distributed renewable generation has replaced some of those centralized plants.)

However, a centralized utility for IT was never a realistic expectation. Even public clouds themselves have evolved to offer provider-specific differentiated services, rather than competing as commoditized utilities. But more generally, edge computing is a recognition that enterprise computing is heterogeneous and doesn’t lend itself to limited and simplistic patterns.

Edge computing only replaces public clouds in a world where public clouds were going to otherwise capture all workloads. And that world isn’t this one.

Misconception 2: The edge is about endpoints

At around the same time that public cloud discussions were heating up, another much-debated notion was ubiquitous devices such as “smartdust.” Device miniaturization and wireless networking had reached the point where distributed networks of sensors were starting to look practical. This fed some of the early excitement around IPv6, whose vastly larger address space could be used to uniquely identify such sensors.

Ubiquitous computing – a term coined by Mark Weiser of Xerox PARC in 1988 – came to be associated with endpoint devices. And this association has carried over to edge computing more broadly. The fact that consumer devices mostly interact directly with cloud services at the back-end probably reinforces this mindset.

However, the physical architecture to support sensors and other endpoints at scale was always going to be more complex than a cloud tier plus a sensor tier. In fact, when IBM came up with the “pervasive computing” moniker in the late 1990s, it referred to using all the computing resources in an organization, from cell phones to midrange systems to mainframes, in concert with each other.

Thus while “the edge” and edge computing sometimes seem to highlight the endpoint devices, just about any distributed computing architecture is going to have multiple tiers.

[ Get a shareable primer: How to explain edge computing in plain English. ]

Misconception 3: Edge is a new name for IoT

So edge computing goes beyond simple two-tier architectures. But does that mean it’s really just another word for enterprise IoT deployments – which usually have three tiers or more, with messaging, business rules, and other software to tie the whole thing together?

It can be. IoT is an important edge computing use case. For example, in a three-tier IoT architecture, sensor data often feeds into some sort of local gateway. The gateway may use that data to take an action that needs to happen quickly, such as stopping a vehicle. It can also filter and aggregate the data before sending it back to a data center for analysis and tracking purposes, thereby saving network bandwidth.
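
To make that concrete, here is a minimal Python sketch of the kind of gateway logic described above. The sensor readings, threshold, and upstream call are all simulated placeholders rather than any real device or service API; the point is simply that urgent decisions happen locally while only aggregates cross the network.

```python
# Hypothetical gateway sketch: act locally on urgent readings, aggregate the
# rest, and forward only summaries upstream. Sensor values are simulated; in a
# real deployment they would come from field devices, and forward_upstream()
# would call your data center or cloud API.
import random
import statistics
import time

BRAKE_THRESHOLD = 90.0   # example: a reading that demands immediate local action
BATCH_SIZE = 10          # readings to aggregate before forwarding upstream

def read_sensor() -> float:
    """Stand-in for a real sensor read (e.g., proximity or temperature)."""
    return random.uniform(0.0, 100.0)

def act_locally(reading: float) -> None:
    """Time-critical action taken at the edge, with no round trip to the cloud."""
    print(f"URGENT: reading {reading:.1f} exceeded threshold; stopping vehicle")

def forward_upstream(summary: dict) -> None:
    """Stand-in for sending an aggregate back to the central data center."""
    print(f"Forwarding summary upstream: {summary}")

def gateway_loop(cycles: int = 30) -> None:
    batch = []
    for _ in range(cycles):
        reading = read_sensor()
        if reading > BRAKE_THRESHOLD:
            act_locally(reading)          # low-latency decision stays at the edge
        batch.append(reading)
        if len(batch) >= BATCH_SIZE:
            # Only the aggregate crosses the network, saving bandwidth.
            forward_upstream({
                "count": len(batch),
                "mean": round(statistics.mean(batch), 2),
                "max": round(max(batch), 2),
            })
            batch.clear()
        time.sleep(0.01)

if __name__ == "__main__":
    gateway_loop()
```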

Edge computing isn’t just about IoT, though. An increasing number of other application areas, in telco and elsewhere, work best when services are pushed out closer to the humans and machines interacting with them.

Misconception 4: The edge should be operated like a data center

Edge operations shouldn’t necessarily just cut and paste from the data center playbook.

For example, edge clusters may be installed in locations that don’t have an IT staff and may even be in places with no permanent human presence at all. That may mean you need to think differently about security. Or it might lead you to a different strategy for dealing with hardware failures than you would follow in a data center with physical security and 24-hour IT staff coverage.

More fundamentally, you’re dealing with potentially unreliable and throughput-constrained networks.

In a data center, you can mostly take network connectivity – especially within the data center – as a given. Not so in an edge architecture. What do you do if an edge cluster loses its connection? Do you want a way to continue operating even if in degraded mode?

Thinking through such backup operational plans is important for distributed systems broadly but is especially important in many edge architectures.
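
As one illustration of what a degraded-mode fallback can look like, here is a small store-and-forward sketch in Python. The connectivity check and upstream send are placeholders, not a real protocol; the idea is that local processing continues when the link drops, unsent data is queued, and the backlog drains when connectivity returns.

```python
# Minimal store-and-forward sketch: keep operating locally during an outage,
# queue what can't be sent, and drain the backlog once the link comes back.
# upstream_available() and send() are placeholders for real health checks and
# transport (HTTPS, MQTT, etc.).
import collections
import random
import time

class StoreAndForward:
    def __init__(self, max_backlog: int = 1000):
        # Bounded queue: during a long outage, the oldest items are dropped
        # rather than exhausting the edge node's limited storage.
        self.backlog = collections.deque(maxlen=max_backlog)

    def upstream_available(self) -> bool:
        """Placeholder for a real connectivity/health check."""
        return random.random() > 0.3   # simulate an unreliable link

    def send(self, item) -> None:
        """Placeholder for the real upstream send."""
        print(f"sent: {item}")

    def handle(self, item) -> None:
        # Local processing always happens, connected or not (degraded mode).
        processed = {"item": item, "ts": time.time()}
        if self.upstream_available():
            self.drain()                # catch up on anything queued earlier
            self.send(processed)
        else:
            self.backlog.append(processed)
            print(f"link down; queued ({len(self.backlog)} pending)")

    def drain(self) -> None:
        while self.backlog:
            self.send(self.backlog.popleft())

if __name__ == "__main__":
    node = StoreAndForward()
    for i in range(10):
        node.handle(f"reading-{i}")
```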

Treat edge as part of hybrid cloud architecture

At the same time, edge computing inherits and expands on many patterns we’ve already been using: open source, containerization, automation, DevSecOps. Some edge architectures have even more in common with how data centers are wired together and operated. For example, OpenStack is popular among telcos for creating private clouds at the edge, just as it is for creating private clouds in a more traditional on-prem environment.

Practices like the heavy use of automation are also at least as important for maintaining the health of a large distributed system as they are for servers in a data center. In general, edge computing needs to focus on making operations simpler through automated provisioning, management, and orchestration. This is especially so at edge computing sites with limited IT staffing.
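
One way to picture that kind of automation is a simple reconciliation loop: compare the desired state of an edge site with what is actually observed and remediate drift automatically instead of dispatching a person. The sketch below is illustrative only; the service names and the observe/remediate functions are invented, and real deployments would rely on purpose-built provisioning and management tooling.

```python
# Illustrative reconciliation loop for an unstaffed edge site: detect drift
# between desired and observed state and fix it automatically. All names and
# states here are invented examples, not a real tool's API.
DESIRED_STATE = {
    "monitoring-agent": "running",
    "local-cache": "running",
    "log-forwarder": "running",
}

def observe_site() -> dict:
    """Placeholder for polling what is actually running at the edge site."""
    return {
        "monitoring-agent": "running",
        "local-cache": "stopped",      # drift: needs remediation
        "log-forwarder": "running",
    }

def remediate(service: str, desired: str) -> None:
    """Placeholder for the automated fix (restart a service, reapply config, ...)."""
    print(f"remediating {service}: setting state to {desired}")

def reconcile() -> None:
    observed = observe_site()
    for service, desired in DESIRED_STATE.items():
        if observed.get(service) != desired:
            remediate(service, desired)

if __name__ == "__main__":
    reconcile()
```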

Ultimately, you can’t simply treat the edge and the data center the same; each comes with its own design and operational challenges. What you can do, however, is treat them both as part of an overall hybrid cloud architecture: centralize where you can, distribute where you have to, and reduce your overall business risk.

[ Want to learn more about implementing edge computing? Read the blog: How to implement edge infrastructure in a maintainable and scalable way. ]

Gordon Haff is Technology Evangelist at Red Hat where he works on product strategy, writes about trends and technologies, and is a frequent speaker at customer and industry events on topics including DevOps, IoT, cloud computing, containers, and next-generation application architectures.