While artificial intelligence (AI) applications are becoming increasingly capable of solving complex problems that require human-like cognition, the black box of artificial intelligence makes it difficult for us to understand how these systems actually arrive at their solutions.
Although humans have always known about the existence of fire, it was only when we learned to control or “tame” fire that we really kickstarted the journey of rapid technological progress we currently find ourselves in. Now, over a million years later, we find ourselves at a similar juncture—albeit faced with an entity that we created instead of a natural phenomenon. The creation of artificial intelligence is, undoubtedly, a step into an era of unprecedented growth unlike any we’ve seen before. But a true leap can only be achieved once we “tame” the technology, as it were, by illuminating the black box of artificial intelligence so that we can better control the outcomes it produces.
Our brain, the apotheosis of human evolution and the most vital of our organs, also happens to be one of the most complex and enigmatic things known to us. Similarly, artificial intelligence, which represents the pinnacle of human technological development, is equally perplexing—despite the fact that it has been created by us. To be fair to ourselves, there is a lot we do know about our brain, and we can predict with some certainty how it reacts to different stimuli. Likewise, we can know with some certainty what outputs an AI algorithm is likely to yield for specific inputs. However, the mystery that scientists and researchers find hard to fathom is not what the output will be, but how it is produced. Not knowing the inner workings of this black box of artificial intelligence is a major hurdle in AI development, and it has been receiving much attention from the global tech community in recent months. Read on to learn about the growing influence of AI on our lives, the effects of the black box problem, why it is significant, and how the attempts to solve it are faring.
The era of AI has already begun, as we can see from its numerous applications. For instance, AI drives the image recognition systems deployed by major corporations, such as the image-captioning AI used by Facebook to determine whether a given image contains a person, an animal, or an object. It not only identifies the people in an image but can also tell whether a person is smiling, wearing accessories, or standing or seated, and count the number of people in a group picture. Thus, there is little doubt regarding the efficacy of AI systems. The data analysis and pattern recognition capabilities driven by deep learning are enabling AI to help doctors diagnose cancer in its early stages with greater accuracy than human doctors. With applications of deep learning poised to not only change but also save lives, understanding how it works is key to both improving its efficacy and expanding its applicability to other problems. Just like fire, if allowed to grow unsupervised and unrestrained, AI may eventually prove to be more harmful than useful and cause irreparable damage, especially since it is increasingly being used in situations involving human health and safety. To prevent that, we first need to know what AI really is and how it works.
Artificial intelligence is an umbrella term that encapsulates multiple subsets of technologies, such as machine learning and deep learning. Machine learning is an AI agent’s ability to learn on its own by iteratively analyzing data. It allows AI to apply analytical algorithms to large arrays of data to identify patterns and make decisions. Deep learning can be considered an augmented, more complex form of machine learning. Deep learning uses multiple layers of algorithms to emulate the neural networks in our brain, for the purpose of achieving human-like cognitive abilities. These compound algorithms form what are known as artificial neural networks. Artificial neural networks, or deep neural networks, have proven to be surprisingly adept at identifying and classifying sensory data such as visual signals (e.g., images and text) and auditory signals (e.g., speech and music) to derive meaningful information and insights.
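To make the idea of “multiple layers of algorithms” concrete, here is a minimal sketch of a feed-forward deep neural network written in plain NumPy. The layer sizes, random weights, and choice of ReLU activation are all arbitrary choices made for illustration only; they are not taken from any specific system mentioned in this article.

```python
# Minimal sketch of a feed-forward ("deep") neural network.
# Layer sizes and random weights are illustrative only.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)  # each layer transforms the previous layer's output
    return x

rng = np.random.default_rng(0)
# Three stacked layers: 8 -> 16 -> 16 -> 4 (sizes chosen for illustration)
sizes = [8, 16, 16, 4]
layers = [(rng.normal(size=(n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

output = forward(rng.normal(size=8), layers)
print(output.shape)  # (4,) — the activations of the final layer
```

Each tuple in `layers` plays the role of one layer of “neurons”; stacking several of them is what puts the “deep” in deep learning.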
Deep neural networks use multiple layers of algorithms that analyze data and classify or cluster items. Since these neural networks are designed to function in ways similar to our brains, they also gain the ability to classify objects the way we do—through experience. A human baby learns to differentiate between objects as it grows, by observing them and learning to label them based on what it is taught by its parents. Similarly, a deep learning algorithm learns from the training data that is fed to it, so that the algorithm gains “experience”. But this isn’t where the similarity between the human brain and a deep learning algorithm ends. Have you ever noticed that we can identify different breeds of cats and dogs as “cats” and “dogs” even when we’ve never seen some breeds before? We don’t need to see every breed of dog there is to be able to identify one when we see it for the first time. This phenomenon is called ‘generalization’: we identify and memorize certain distinctive features of dogs (or any other entity) and use those features to recognize a dog as a dog, even when we see a totally new breed with radically different features. A similar ability to generalize is seen in AI agents, which can use what they learn from one dataset to handle new sets of similar input.
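The training-then-generalization loop described above can be sketched in a few lines. This example uses scikit-learn purely for brevity; the dataset (small handwritten-digit images) and the model choice are illustrative assumptions, not the systems discussed in the article. The model “gains experience” only from the training split and is then scored on images it has never seen.

```python
# Sketch of learning from "experience" (training data) and generalizing
# to unseen inputs. Dataset and model choices are illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # small handwritten-digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # "experience": the training data

# Generalization: accuracy on digits the model never saw during training
print("held-out accuracy:", model.score(X_test, y_test))
```

The held-out accuracy is exactly the kind of evidence that tells us *what* the network can do, without telling us *how* it does it—which leads to the problem discussed next.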
Although the capabilities of AI to perform such feats and more have been documented, the inner mechanisms of how they actually work are far from accurately known. This is because neural networks comprise a large number of neurons, or nodes, in each layer, and multiple layers of such nodes. These nodes are interconnected in a complicated network (somewhat like our brain). Different inputs activate different combinations of nodes in each layer. However, as of now, it is extremely difficult to determine why specific images activate specific nodes in the neural network. This leads to the problem that is the black box of artificial intelligence. Knowing how artificial neural networks function is key to furthering research in the field and developing more advanced and reliable AI. Once the black box of AI is demystified, these systems can be used for interpreting even more complex information with far greater confidence. This will pave the way for the use of the technology not only in broader business applications but also in applications involving human life, such as AI-enabled medical diagnosis.
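The black-box problem can be shown in miniature by extending the earlier NumPy sketch: we can read off the activation of every node for a given input, yet the raw numbers do not explain *why* the network produced its output. Again, the layer sizes and random weights below are purely illustrative.

```python
# Sketch of the black-box issue: node activations are fully visible,
# but their meaning is not. Layer sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]
layers = [(rng.normal(size=(n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=8)  # one input
for i, (weights, bias) in enumerate(layers, start=1):
    x = np.maximum(0.0, weights @ x + bias)  # ReLU activation
    # We can see exactly which nodes "fired" for this input —
    # but not what any of them mean.
    print(f"layer {i}: {int((x > 0).sum())} of {x.size} nodes active")
```

In a real network with millions of nodes and weights, this visibility-without-interpretability is precisely the black box.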
In order to ensure that AI can be made fit for mainstream adoption even in high-stakes situations, it is crucial to develop what is known as transparent AI. As the term suggests, these programs are not subject to the ‘black box’ limitation, since their functioning is explainable. In transparent AI, the system itself is able to demonstrate how it arrived at a conclusion. Such systems will truly represent a major breakthrough in the field of artificial intelligence. The development of transparent AI can usher in an era of explosive growth in AI adoption in virtually every industry, from banking and finance to healthcare and defense.
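As a simple illustration of the “transparent AI” idea, an inherently interpretable model can show the rules it used to reach a decision. The sketch below uses a small decision tree from scikit-learn on the well-known iris dataset; it is meant only to illustrate the concept of a self-explaining model, not as a drop-in replacement for deep learning.

```python
# Sketch of a self-explaining ("transparent") model: a shallow decision
# tree whose decision rules can be printed and audited directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The model can "explain itself": every prediction follows a readable rule path.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Much of explainable-AI research aims to give deep neural networks a comparable ability to account for their conclusions without sacrificing their accuracy.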
Opening and deciphering the contents of the black box of artificial intelligence is undoubtedly the next area of focus for those engaged in the field. The push for explainable, transparent AI goes hand in hand with the drive for AI safety, making it a high priority for AI researchers. While efforts to understand AI’s “thought” process are poised to continue for the foreseeable future, we may still require a few more developments in the field to get there. Until then, it only makes sense to use AI in areas where the stakes are not very high, such as most of its existing applications.
Naveen is the Founder and CEO of Allerin, a software solutions provider that delivers innovative and agile solutions designed to automate, inspire and impress. He is a seasoned professional with more than 20 years of experience, including extensive work in customizing open source products for cost optimization of large-scale IT deployments. He is currently working on Internet of Things solutions with Big Data Analytics. Naveen completed his programming qualifications at various Indian institutes.