Artificial intelligence (AI) governance and regulation has become a foremost priority in the pandemic era.
In the distant past, the mighty Republic of Rome had a simple defensive strategy to protect the state from incoming enemy attacks. According to certain versions of history, the people of the ancient state would 'choose' a dictator to protect and govern them during military emergencies. The chosen public servant would be granted maximum executive power and authority for the duration of the crisis. Usually, these powers were held for a period of six months, during which the dictator had to deal with the enemies nearing the state's borders. The logic behind such an appointment was simple: democratic bickering within the state would be disastrous for a besieged region that needed strong, undivided, and vocal leadership to emerge unscathed. That leadership would be provided by exceptional individuals with clear ideas and the will to move mountains. In short, the people of the state would put their faith in someone they might barely know.
If you think about it, some parallels can be drawn between the above-mentioned concept and the choice of gradually increasing AI's influence on your organization's operations and automation systems. Despite everything we know about the technology, the actual capabilities of future AI models are so deep and varied that they remain, for now, beyond our full comprehension. Therefore, organizations must consider a few points for effective AI governance and regulation. AI-powered systems must be configured to follow certain principles, such as:
Fairness: We may assume that AI-powered systems cannot discriminate on the basis of race or gender. However, time and again, such systems have produced biased results. The intelligent algorithms and artificial neural networks themselves are not responsible for the discrimination; the individuals handling the machine learning and computer vision processes are. The main cause of biased AI is the narrow, unrepresentative datasets used to train the algorithms in the initial machine learning phases. To resolve this, data experts in organizations will have to train their AI models on larger, more diverse datasets. In short, AI can be trained to be fairer and more inclusive towards everybody.
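One common way to put the fairness principle into practice is to audit a model's predictions across demographic groups. The sketch below, using hypothetical loan-approval data, compares positive-prediction rates per group and computes a disparate-impact ratio; the function names and the 0.8 "four-fifths rule" threshold mentioned in the comment are illustrative conventions, not part of any specific product:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    A value well below 1.0 (e.g. under 0.8) suggests one group
    is being favored over another."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval predictions (1 = approved)
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)   # {'A': 0.8, 'B': 0.4}
ratio = disparate_impact(rates)          # 0.5 -> group B approved far less often
```

An audit like this does not fix bias by itself, but it turns "the system seems unfair" into a measurable quantity that governance reviews can track over time.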
Transparency: One of the main priorities of any AI system must be the need to produce solutions and results that can be easily explained. Quite rightly, people would have greater confidence in AI's forecasts and suggestions if they knew how those were formed in the first place. Transparent AI will allow organizations to remove hidden biases from their digital operations and enable their systems to make better decisions. AI's black box problem is a concern for its overall usability, but newer digital solutions, such as Deloitte's Glassbox, may raise the technology's transparency levels by several notches in the future.
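A minimal illustration of "glass box" explainability is breaking a linear model's score into per-feature contributions, so a reviewer can see exactly what drove a decision. The feature names and weights below are made up for the example; real explainability tooling (e.g. for non-linear models) is considerably more involved:

```python
def explain_linear(weights, features, names):
    """Decompose a linear model's score into per-feature contributions,
    ranked by how strongly each feature pushed the decision."""
    contributions = {n: w * x for n, w, x in zip(names, weights, features)}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical credit-scoring model (names and weights are illustrative)
names    = ["income", "debt_ratio", "late_payments"]
weights  = [0.5, -1.2, -0.8]
features = [2.0, 0.5, 1.0]

score, ranked = explain_linear(weights, features, names)
# 'ranked' lists the features in order of influence on this decision,
# giving a human reviewer a concrete answer to "why this score?"
```

Even this toy decomposition shows the governance value: a rejected applicant can be told which factors mattered most, instead of being handed an opaque number.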
Human-centeredness: Whether used for automation or predictive analyses, AI systems are more likely to yield the desired financial returns and wider acceptance if they include human requirements, psychological factors, and behavioral patterns in their decision-making processes. In the future, smarter design and more in-depth research can improve the output of AI systems to make them centered around human needs and goals. Remember, AI works for the humans in your enterprise and not the other way around.
Data security and privacy: While AI-powered systems become more efficient and intuitive over time, measures must be taken to safeguard your organization's data and your clients' personal details. Deep learning typically requires large quantities of data for high-speed analysis and greater output. However, the need for more data also increases the risk of data and privacy breaches at the hands of cybercriminals. Cybersecurity measures must be in place to stop such breaches from taking place.
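One widely used safeguard is to pseudonymize direct identifiers before records ever reach a training pipeline. The sketch below, with hypothetical field names, replaces PII with salted hashes so records can still be joined on a stable token without exposing raw values (the salt would live in a secrets vault, not in source code):

```python
import hashlib

SALT = b"store-this-in-a-secrets-vault"  # illustrative; never hard-code in practice

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted hash token. Records can
    still be linked on the token, but the raw value is not exposed."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def scrub_record(record: dict, pii_fields=("name", "email", "phone")) -> dict:
    """Return a copy of the record with PII fields pseudonymized;
    non-PII fields pass through untouched for analysis."""
    return {k: pseudonymize(v) if k in pii_fields else v
            for k, v in record.items()}

raw = {"name": "Jane Doe", "email": "jane@example.com", "spend": 412.50}
clean = scrub_record(raw)
# 'name' and 'email' become opaque tokens; 'spend' stays usable for modeling
```

Pseudonymization is not full anonymization, since the salt holder can re-link records, but it sharply reduces the blast radius of a training-data leak.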
Once these principles are deeply embedded in the AI systems within your enterprise, here are a few requirements that should be considered while drafting your AI governance plan:
It is easy to go overboard while regulating the internal workings of an AI-powered system that enable it to participate in organizational operations. The wrong response in such instances is an AI governance strategy that completely restricts the technology from getting meaningful work done. This 'suppression' of AI's components and processes would defeat the purpose of installing the technology in the first place. Organizations must remember that AI governance does not mean blocking the operational and decision-making capabilities of AI systems. AI is introduced to boost innovation and productivity in every aspect of an organization, from sales forecasting to human resource management and beyond.
While the AI system must comply with the above-mentioned principles, organizations can include a large number of use cases in which machine learning, natural language processing, computer vision, and other components of the technology are applied constructively. Data managers must understand the clear differences between industrialized data products, self-service data provisions, and proofs-of-concept before identifying and addressing the level of governance each of them needs. Most importantly, AI systems must be given wiggle room to come up with newer, more exciting results and forecasts. While data analysts will keep a close eye on an AI system's workings, the technology should be used to generate new ideas, including ones that humans probably could not come up with on their own.
Organizations must ensure that employees at all levels and hierarchies are part of an AI governance program. There are two sides to this. Firstly, as we know, every major policy or action in an organization goes through senior leadership first. An AI governance program likewise needs their approval before it trickles down to the other levels of the organization. AI governance strategies are generally highly complex and need to be continually revised for quality and practicality. Accomplished and experienced leaders can suggest improvements to an existing governance model that employees at lower levels of the organization are less likely to identify on their own.
Secondly, individual operational units and teams are collectively responsible for the data they collect, manage, and use for daily operations. As a result, employees at the lower levels (project managers, operational workers) must strive to continuously improve the quality of suggestions and recommendations put forth by the AI-powered systems. In any organization that implements AI, there must be relentless improvement of, and accountability for, common data issues. Through clear communication, lower-level employees will need to work in sync with their superiors to make real improvements in the working and ethical performance of AI systems.
The proper management of AI models is necessary for organizations to keep their intelligent systems performing at a consistently high level. As a result, model management is gaining prominence in industries where deep learning and other AI concepts are used daily, and it is becoming a major part of most organizations' AI governance strategies. More importantly, changing financial markets and consumer preference trends cause datasets to evolve continuously. Due to such changes in input data, AI models may lose their potency and degrade. The performance of existing AI models is maintained through algorithm updates, continuous monitoring, and relentless testing. Managing the performance and efficiency of AI models can help organizations meet their needs despite the evolving nature of data in the market.
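The monitoring half of model management can be made concrete with a drift metric. The sketch below implements the Population Stability Index (PSI), a standard check comparing a feature's distribution at training time against recent production data; the bin count and the rule-of-thumb thresholds in the comments are common conventions, not universal standards:

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a baseline ('expected') sample
    and a recent ('actual') sample of a numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 major drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values above the baseline maximum

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if x < edges[i + 1]:  # values below lo fall into the first bin
                    counts[i] += 1
                    break
        n = len(sample)
        return [max(c / n, 1e-4) for c in counts]  # floor avoids log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = list(range(100))            # feature values seen at training time
recent   = [x + 50 for x in baseline]  # the same feature, drifted upward
drift = psi(baseline, recent)          # well above the 0.2 drift threshold
```

A scheduled job computing PSI per feature, with retraining triggered past a threshold, is one simple way to operationalize the "continuous monitoring" this section calls for.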
In Christopher Nolan’s 2008 blockbuster The Dark Knight, one of the characters muses about how Rome’s main defense strategy ultimately proved to be its undoing. Similarly, letting AI take over your organization’s operations completely may not be a great idea if you do not have a sound regulation and governance strategy in place. The governance guidelines may change from one organization to another, but a few concepts (ethical AI principles) are applicable to all of them.
As we know, smart machines are central to the future of nearly every industry. Organizations using AI in their daily operations is hardly news anymore and, in fact, the trend will only grow stronger with time. To make the implementation of AI in your organization as smooth as possible, creating a sound governance plan is a good place to start.
Naveen is the Founder and CEO of Allerin, a software solutions provider that delivers innovative and agile solutions designed to automate, inspire, and impress. He is a seasoned professional with more than 20 years of experience, including extensive work customizing open-source products for cost optimization in large-scale IT deployments. He is currently working on Internet of Things solutions with Big Data analytics. Naveen completed his programming qualifications at various Indian institutes.