Feb 4, 2019
We are just now starting to appreciate the revolutionary implications of Edge Computing.
As we kick off 2019, let’s take a look at trends that continue to gain momentum. Try to see if you can figure out what they all have in common:
All of these stories involve oceans of data—about our habits, our location and our personal lives—streaming all around us. This leads to a host of problems, both technical and ethical.
Enter Edge Computing, the latest tech buzzword that you’ll be hearing more of this year. Edge Computing (as in “edge of the network”, i.e. your device) seeks to resolve some of these issues by having a significant portion of the computation done securely on the devices themselves, rather than uploading data to the cloud for remote processing. Of course, there would still be some cloud communication, but ideally the device wouldn’t need to send the data being processed back home, just the network traffic required for the system to function. Not only are these “smarter” devices more responsive due to less network traffic, they are also far more secure when it comes to the protection of your personal data. Edge Computing allows you to do much more with your voice-assistant-enabled devices without a network connection, for example. As The Verge’s Paul Miller put it, “It doesn’t mean the cloud is going to disappear, it means the cloud is coming to you.”
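The edge-first pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual implementation; every function name here is hypothetical, and the local “model” is a placeholder heuristic.

```python
def local_speech_to_text(audio: bytes) -> str:
    # Placeholder: a real device would run an on-device speech model here.
    return audio.decode("utf-8")

def local_model_can_answer(text: str) -> bool:
    # Placeholder heuristic: pretend the on-device model handles timers only.
    return text.startswith("set a timer")

def local_model_answer(text: str) -> str:
    # Handled entirely on the device; no data leaves it.
    return "Timer set (handled on-device)."

def cloud_answer(text: str) -> str:
    # Placeholder: in reality this would be a network round-trip to a cloud API.
    return "Cloud answered: " + text

def answer_query(audio: bytes) -> str:
    """Edge-first: answer locally when possible, fall back to the cloud."""
    text = local_speech_to_text(audio)  # transcription stays on-device
    if local_model_can_answer(text):
        return local_model_answer(text)  # no raw audio leaves the device
    # Only the already-transcribed query goes to the cloud, not the recording.
    return cloud_answer(text)
```

The design point is the fallback: the cloud is still there, but it is the exception rather than the default path for your data.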
We all know how cloud computing works. You might ask your smartphone’s voice assistant how to pan fry the perfect steak, only to be made to wait… and wait. Your phone isn’t just sending your voice recording to a server; it then has to wait for the remote systems to process that recording before the answer comes back. Google, Amazon and Apple’s big server farms already handle millions of queries a day, and the load keeps growing as more people upgrade their phones and more voice-enabled devices infiltrate the consumer market.
Let’s go over some of the problems with the existing cloud computing model:
1 – Speed
Many iPhone users have given up on Siri because of the (lack of) speed. Combine the network transmission with the relatively weak AI processing power on wearable devices like earbuds and fitness trackers and you have a lot of consumers doing a lot of waiting. If the core computations were done on-device, response times would be reduced significantly.
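The speed argument above comes down to simple arithmetic: the cloud path pays for a network round trip on top of inference, while the on-device path pays only for (slower) local inference. The numbers below are illustrative assumptions, not measurements.

```python
# Back-of-envelope latency comparison (all figures are made-up illustrations).
network_rtt_ms = 120         # assumed mobile round trip to a data center
server_inference_ms = 80     # assumed cloud-side model inference time
cloud_total_ms = network_rtt_ms + server_inference_ms  # 200 ms end to end

on_device_inference_ms = 90  # assumed slower local chip, but zero network hops

print(f"cloud: {cloud_total_ms} ms, on-device: {on_device_inference_ms} ms")
```

Even with a weaker local processor, cutting out the round trip can win, which is the bet Edge Computing makes.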
2 – Battery Life
It’s easy to forget that the various antennae on your smartphone/smartwatch are draining the battery all day long. The question is whether an on-device AI processor drains less battery power than the power it takes to send and receive data in “always-on” cloud AI schemes; the promise of Edge Computing is that it will.
3 – Bandwidth Limitations/Costs
We’ve already mentioned that voice-enabled IoT devices are massively gaining in popularity. This means more devices in your home are running on the network, 24/7, and bandwidth is being consumed around the clock. Again, having certain AI-heavy computations done on-device frees up bandwidth, effectively saving you money.
4 – Security
Finally, the elephant in the room: data privacy. My Fitbit is great; it tracks my body weight, fitness and sleep activity, which I’m interested in. An Apple Watch may track the intimate details of your respiratory system, body fat/glucose levels, and other health information that you only feel comfortable sharing with your doctor. Do you trust this data being in the hands of a for-profit corporation? Do you trust that your data will not be compromised once it has left your device? Do you trust Amazon and Google having smart speakers sending voice recordings to their clouds 24/7 for processing and storage? Increasingly, the answer to these questions is no. AI-enabled on-device chips go a long way toward eliminating the need to transmit private data at all.
All of this is why Amazon, Apple, Microsoft and Google are taking chip design very seriously.
Amazon – not normally known for low-level hardware development – is bringing their Amazon Web Services (AWS) chops to chip design with its AI-enabled AWS Inferentia chip, due for the market this year. These will likely be used to add Edge Computing elements to their Alexa devices.
Apple has been doing this for some time now, beginning in 2017 with the introduction of the iPhone X. The A11 Bionic chip in the iPhone X came with a Neural Engine for on-device processing (partly to enable the iPhone X’s complex facial recognition functionality).
Microsoft, a growing player in the high-end consumer hardware space, is developing Azure Sphere, which has hardware, software and cloud components working in concert to ensure that internet-enabled smart devices are using your personal data securely and responsibly, while enabling a new generation of products that were never before possible.
Google has been in the chip game since 2016 with their Tensor Processing Unit (TPU), and in 2018 announced the Edge TPU, which aims to steadily lower the cost of developing AI-enabled applications through the proliferation of inexpensive AI-focused processors.
The vision for Edge Computing and the AI devices/services that it enables is exciting. One thing we can learn from Facebook’s disastrous 2018 is that tech companies have accumulated tremendous power through their collection of vast amounts of our personal data, and that this places an enormous responsibility on them. You can bet that Apple, Amazon, Google and Microsoft are going to do everything they can to avoid a similar fate. Hopefully this means increased transparency regarding the transmission and storage of personal data. If the cloud has shown us what near-unlimited processing and storage can do, Edge Computing aims to resolve some of its blind spots.