Engaging with customers daily can be fascinating. With so many new ideas and innovations in the tech world every year, there is great eagerness to share them. It can be humbling, astonishing even, when you visit a customer to talk about a topic like Machine Learning, only to find that they have not just embraced your technology but are already pushing its boundaries in ways you haven't seen before. And when the person giving you a detailed account of what they're doing is barely out of college, it fills you with excitement and optimism.
The gist of the above is that terms like Machine Learning and AI have certainly entered the mainstream lexicon. The cloud has enabled us both to speed up development in the AI space (note how this field has progressed post-cloud) and to bring AI technologies to a wider audience through the cloud's consumption-based model. A big advantage of the cloud is gaining access to hardware platforms that you perhaps wouldn't have invested in previously in the on-premises world. Azure, for example, gives you access to GPU-based VMs on which you can build HPC and AI solutions, scaling quickly to thousands of GPUs.
In August 2017, Microsoft revealed Project Brainwave, a platform built on FPGAs (Field Programmable Gate Arrays) in partnership with Intel. This is a leap over simply chaining GPUs together; the entire architecture is optimized for Deep Learning. At the unveiling of Brainwave, the Intel Stratix 10 FPGA, built on a 14nm process, demonstrated sustained performance of 39.5 teraflops.
These developments in the cloud are expected to continue at a breakneck pace; a performance of 39.5 teraflops may not be so impressive in a year's time. This is an example of what is possible with the cloud.
To complete the picture, however, we need to move from the cloud to the edge…
The Intelligent Edge
In the world of IoT a new concept is becoming prevalent – the Intelligent Edge. Edge computing refers to placing data processing power at the edge of a network instead of holding all of that processing power in the cloud.
Now this may seem odd to some of you; however, the importance of the cloud does not diminish in this scenario. The Intelligent Edge recognizes that to deliver what businesses require, data processing and intelligence need to be applied at the edge before data is synced into the cloud.
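To make the idea concrete, here is a minimal sketch of edge pre-processing: instead of streaming every raw sensor reading to the cloud, the edge device summarizes a batch locally and syncs only the compact summary. The function and field names are purely illustrative, not part of any particular IoT SDK.

```python
import statistics

# Illustrative edge pre-processing: reduce a batch of raw sensor
# readings to a small summary payload before syncing to the cloud.
def summarize_batch(readings):
    """Reduce raw readings to a compact summary for cloud sync."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

raw = [21.3, 21.4, 21.2, 25.9, 21.3]  # e.g. temperature samples
payload = summarize_batch(raw)        # this summary is what gets synced
print(payload)
```

The cloud still matters: it receives the summaries, aggregates them across many devices, and trains the models that eventually run back on the edge.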
I’ve spoken previously about the concept of having tremendous processing power on you, enough to power intelligence, as I’ve always been fascinated by artificial intelligence. In 2010, before the current wave, I wrote on my blog about the concept of a “Local AI” – tremendous computing and analytical power in your pocket (and on your wrist) that changes your daily life. Looking back now, the term seems clumsy, so I am renaming the concept Personal AI. The Intelligent Edge will extend to the device you carry, not just to industrial sensors, and deliver Personal AI to you.
In the present, to build massive neural networks you need the scale of the cloud. To train your models you need vast amounts of data, as well as massive compute for tasks like backpropagation. You’re not going to do that on a mobile device. However, with the right custom chips, you could run pre-trained models on your mobile device. You thought chips don’t matter anymore?
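As a sketch of what “running a pre-trained model on the device” means in practice: the expensive part (training, backpropagation) happens in the cloud, and only the learned weights ship to the device, where inference is just a cheap forward pass. The weights below are made-up stand-ins for a cloud-trained model.

```python
import numpy as np

# Hypothetical weights standing in for a model trained in the cloud
# and shipped to the device. On-device inference needs no training
# machinery (no backpropagation) -- only the forward pass below.
W1 = np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]])  # 3 inputs -> 2 hidden
b1 = np.array([0.1, -0.1])
W2 = np.array([[1.0], [-1.0]])                          # 2 hidden -> 1 output
b2 = np.array([0.05])

def predict(x):
    """Forward pass only: the on-device half of the ML workload."""
    h = np.maximum(0, x @ W1 + b1)       # ReLU hidden layer
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid probability

p = predict(np.array([0.2, 0.7, 0.1]))
print(p)  # a probability between 0 and 1
```

A few matrix multiplications like these are exactly what neural engines on modern mobile SoCs accelerate in hardware.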
In 2017, we are starting to see Personal AI take form.
For many years it was felt that CPUs were becoming a commodity and that the real innovation lay elsewhere. As an Engineering graduate who loved chips, I was dismayed by that. I still get excited when the latest desktop CPUs are announced, and I was pleased to see the CPU wars stronger than ever in 2017, with AMD’s Ryzen launching and Intel fighting back.
One company decided a few years back that central to its differentiating strategy would be a return to designing its own custom chips to gain a massive advantage – which went against what the market was saying. Can you guess which company?
It was Apple.
Their “Ax” series of custom system-on-a-chip (SoC) designs are, according to Apple, key to the smooth and fast experience on their devices. In September 2017, Apple announced the A11 Bionic, their latest custom SoC, which according to them includes a dedicated neural engine. More information here.
The Chinese have already responded: Huawei’s Kirin 970 SoC, announced in September 2017, includes a dedicated neural processing unit. More information here.
Personal AI is happening, just as I thought it would.
This is the first mainstream example of pushing Machine Learning to the Edge, and already we can see why. Apart from the personal assistant utilizing this capability, there are other use cases – the new iPhone uses facial recognition to unlock the device, and that action needs to work even when you’re offline.
Intel, however, will not readily give up any advantage in an exciting space like this. Also in September 2017, Intel announced Loihi, a neuromorphic chip that simulates a neural network in silicon. Expect this chip to find its way into smart devices that can process more data on the Edge before sending it to the cloud. More information here.
Neuromorphic Computing will Deliver Personal A.I.
Lastly, there is the PCIe card by BrainChip that plugs into a PC. Why would you need a neural network running there? One application is processing simultaneous video feeds and doing facial recognition on the fly. More information here.
Back to Personal AI, there are so many uses for increased intelligence on the Edge that it will soon be taken for granted. By 2020, I see most pocket computers (what you still call mobile phones) having advanced ML capability as standard to perform a whole host of tasks. The entry point into the capability will be the personal assistant, which will rapidly become much smarter. Speech recognition will improve as it will be done on the device. Instead of the personal assistant being just a front to a search engine in the cloud (as it is mostly today), the Personal AI will have a real capability to process and understand, using the search engine to reference data.
I see advanced scenarios such as the following:
1) Realtime Health Analysis – Currently your smartwatch monitors your heart rate and steps and sends the data to your phone, which sends it to the cloud. In the near future, your Personal AI will read this data and analyze it in real time, with the ability to alert you as early as possible should you be at risk of a heart attack or stroke, for example. The number of complex sensors built into your smartwatch will increase to enable this. Advancement in sensor technology will be key to delivering on the capability of Artificial Intelligence – we need more capable sensors, all small enough to fit on a wristwatch.
2) Environmental Analysis – Another scenario that’s easy to predict: using the capability of your pocket computer to analyze the environment around you. Air quality is a problem in many parts of the world – imagine taking out your device, having it take some basic readings, and then machine learning kicking in to advise you whether the air is safe to breathe in that location and what the risks are. This is especially useful for travelers. Once again, the development of better sensors is critical. Imagine a sensor that could analyze water in a glass and tell you if it is safe to drink – very useful in certain countries.
3) Realtime Language Translation – This is already happening with products like Skype, but it could be augmented with the power available on the Edge device. I would imagine a future version of Skype taking advantage of Personal AI to improve real-time translation, with the translation performed on the device and the already translated speech sent through the cloud to the other end.
4) Custom Apps – Once you have the processing power (and the sensors) at the Edge, you will see all kinds of custom apps built to take advantage of it. Environmental sensors could be used to create an app for workers in dangerous environments – mines, for example. Beyond the raw sensor readings, it is the Personal AI engine that would add real value, delivering insight in near real time. It would also help send more relevant data to a bigger engine in the cloud, with the knock-on effect of helping train more accurate models.
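To illustrate scenario 1 above, here is a toy on-device monitor: it keeps a rolling window of heart-rate readings and raises an alert when the latest reading deviates sharply from the recent baseline. The class name and thresholds are made up for illustration – a real Personal AI would use a trained model, and this is in no way medical advice.

```python
from collections import deque

# Toy sketch of on-device health analysis: alert when a heart-rate
# reading jumps far from the rolling baseline. All thresholds are
# illustrative, not medically meaningful.
class HeartRateMonitor:
    def __init__(self, window=10, max_jump=30):
        self.readings = deque(maxlen=window)  # rolling window of readings
        self.max_jump = max_jump              # allowed deviation, in bpm

    def add_reading(self, bpm):
        """Record a reading; return True if it should trigger an alert."""
        alert = False
        if self.readings:
            baseline = sum(self.readings) / len(self.readings)
            alert = abs(bpm - baseline) > self.max_jump
        self.readings.append(bpm)
        return alert

monitor = HeartRateMonitor()
for bpm in [72, 75, 71, 74, 73]:
    monitor.add_reading(bpm)       # normal resting readings, no alert
print(monitor.add_reading(130))    # sudden spike vs. baseline -> True
```

The point is that this check runs entirely on the device, in real time; only the alert (or a summary) needs to reach the cloud.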
The development of Personal AI is an area of technological advancement that I believe will have a particularly personal effect on your life. Just as the smartphone and social media did a decade back, the ability both to interact with a smarter personal assistant and to access life-changing services like those listed above will change our daily lives. Perhaps a more natural interaction with technology will even stop us all from staring at a screen all day – we can only hope.
Thavash is a Data and AI Solution Specialist (SSP) at Microsoft. He looks after the Data and A.I business for the Financial Services industry in South Africa. Thavash studied Electronic Engineering at the Durban University of Technology, majoring in Computer Systems Engineering and also completed a Bachelor of Commerce majoring in Informatics and Quantitative Analysis from the University of South Africa.