AI and AIOps are transforming the future of work and IT operations respectively, and this change is only set to accelerate given the pace of digital transformation and the catalyst of COVID-19.
So it was a pleasure to take part in a recent panel hosted by IBM UK and BEE Executive Events on this critical subject, with the session now available on demand. Here are my takeaways and reflections on shaping the future of IT, covering the adoption of AIOps; fears around AI and automation; reducing downtime and improving productivity, collaboration and experience; and achieving workforce balance across technology, culture and skills. How can companies more easily plan, adopt and ultimately deploy AIOps, and what are the primary benefits this will bring? Hint – it is people and tech in partnership.
Most companies are in the early stages of AIOps adoption – that is, the application of analytics and machine learning to automate and improve IT operations – and its relevance and potential impact are increasing. COVID-19 has proved a significant catalyst, accelerating digital transformation at scale and speed, resulting in changes that, to varying degrees, feel set to last. These include a hybridisation of personal and professional life with combined working/home environments; evolving customer behaviours, such as comfort in using alternative payment methods across wider demographics; and new expectations driving consumers and employees alike, including around conscious consumerism, sustainability and inclusion. As well as creating opportunities and positive change, this also creates vulnerabilities and complexity.
A challenge that has gained particular prominence is security, with cyberattacks taking place a staggering every 39 seconds – on average 2,244 times a day – and a 300% increase in reported cybersecurity crime in March alone (FBI, Eaves 2020). The threat surface is expanding due to a number of factors, including COVID-linked phishing, the convergence of IT and OT, and the increasing use of the same devices for personal and professional purposes, which can lead to human error and security lapses, for example through ‘work-arounds’. This is only set to grow, with the number of always-on, sensing and connected IoT devices forecast to increase by some 50% by 2025.

Another challenge is the deluge of data – the 4 V's of Volume, Velocity, Veracity and Volatility. Data is only as good as its availability and quality, and the capacity to filter, interpret and apply it for relevant insights and informed decision-making. Artificial intelligence and machine learning need quality, clean data, which in turn requires a good information architecture (IA) – which also provides confidence. Getting data right is the imperative starting point for change – simply put, garbage in creates garbage out.
In response, attention is moving to extending automation, or AI-enhanced automation, to help with some of these challenges. A recent report by IDC predicts that by the end of next year, 70% of CIOs will aggressively apply AIOps to cut costs, improve IT agility and accelerate innovation. Specific benefits include a faster mean time to resolution, a move from proactive to predictive management, and modernisation across IT operations and IT operations teams. More information on the IBM AIOps approach is available here.
Alongside the technological change is attention to human factors such as culture, skills investment and leadership evolution, especially in the C-suite. I personally see the CIO role shifting significantly towards change agency and strategic leadership, requiring greater alignment with the CISO and CFO roles, and reflecting the need for both individual and organisational ambidexterity and a holistic, proactive approach to change.
It is time to change the narrative on what AI and automation really mean. Headlines have so often presented an AI-powered future – especially in the language used – as one that is scary and leaves us vulnerable, for example highlighting job ‘destruction’ and ‘elimination’, highly emotive terms. We need to reset and rebalance the narrative to negate the fear, showcasing what this technology can enable, including in our everyday lives, and how it can support us as a tech-human partnership. It is about building people’s confidence and trust, alongside guiding them through the change and providing the skills ‘toolbox’ to continually adapt and feel comfortable doing so.
Embedding trust necessitates getting accurate information into the mainstream to build awareness, understanding and balanced expectations of the future. This includes transparency – for example, acknowledging that some jobs will be displaced, requiring reskilling or upskilling – but equally, if that can be identified early, we can put the right pathways in place now to address it. Recent research by Edelman highlights the importance of trust and its reciprocal relationship with knowledge, and also that whilst trust levels have been at a global low across sectors, the collective COVID experience has brought out an ethos of cooperation over competition – I believe this can be meaningfully applied in other areas such as AI development. Further, new developments in no-code and low-code tooling can help democratise access to automation, building both confidence and understanding.
Explainable AI is also imperative to trust – being able to explain, quickly and in an accessible, non-complex way, how an AI algorithm makes a decision. A related concept is Responsible AI, which raises considerations of safety, ethics, fairness and respect for privacy. As a reference, the model-building guidelines developed by the European Union High-Level Expert Group on Artificial Intelligence propose that, in general, models should be:
Lawful - respecting all applicable laws and regulations
Ethical - respecting ethical principles and values
Robust - from a technical perspective whilst taking into consideration its social environment
‘The future is already here, so let us approach it in a very responsible way and think about the implications, which are the unintended consequences - and they may be unintended, but for the person on the receiving end of the unintended consequences, it may feel intended. We need to fully think through these solutions’ - Nigel Willson, Awaken AI
As discussed in the panel, it is often asked whether legislation is the best solution to the challenge. I believe it is part of it, but we need to do more, starting with developing and embedding ethics and norms around decision-making into everyday organisational culture and practice, whilst advancing practices around self-governance too. From an IBM perspective, it is excellent to see these areas clearly centre stage:
‘Fairness, value alignment, robustness, explainability, trust and accountability. The rule of thumb is to start with these focus areas in the development of AI (at IBM). There is an awareness in the industry that this needs to be done… to remove uncertainty and fear. The industry needs to give the reassurance and from that will come the governance and compliance’ – Angus Jamieson, IBM
Making AI- and automation-empowered change a reality starts with an audit of your current state. The key is the pace of change – what are you trying to achieve, and which systems are critical? – and then moving from a reactive to a proactive positioning. It is about knowing where you are, what your goals are, and progressing through them. Additionally, seeking ‘quick wins’ first can help ascertain the value and build organisational buy-in. We always see shifts, for example from batch to real-time and streaming, and COVID has made businesses re-evaluate where they are and where they could be. The IBM Garage process can help support this, focussing on people, processes and architecture before even getting to the tech.
The acceleration of digital transformation in 2020 has resulted in years’ worth of change occurring in months, with calls on Zoom, Webex and Teams now an everyday ‘norm’. This has advanced understanding of the need to transform and shown the ‘art of the possible’ – the next stage is raising experience levels through personalisation and immersion to enhance engagement, and to help ensure we can feel emotionally connected when physically disconnected. I think the rise of emotive technology and Emotion AI will be vital here.
Another example is the incorporation of a chat feature into the IBM AIOps solution, so that teams can come together digitally over a collaboration platform such as Zoom or Slack, working off the same data. This is an evolution in how IT operations teams will work, and an opportunity for organisations to look at the convergence of different support teams, coming together through a single application to remove traditional silos. There is also a critical movement towards a service lens: AIOps can help here by improving systemic connectivity and reducing silos to become more service-centric. And it is not just about enhancing customer experience, but the employee experience too.
Indeed, digital transformation is key to AIOps and navigating the rapid rate of change. Traditional methods of solving problems with big data and keyword searches are simply no longer enough. With machine learning and AIOps, we can now solve entrenched problems where the root cause was previously unidentifiable. Better still, we can see problems in real time and become proactive – identifying issues before there is any degradation of service, and with a full audit trail. This is the evolution of service management, but it is not a people-replacement capability. On the contrary:
‘AI Ops is a massive opportunity for people to expand their skills and knowledge and use their skills in other areas. Data scientists and analysts - how their skills can be brought into the IT operations space and vice versa. There is opportunity for someone working in one area to see what a career path in another area would bring. It is an opportunity to bring different skills together and form new career paths’ – David Metcalfe, IBM
Holistic, agile skills are key to the future of work, alongside being comfortable with, and proactive towards, continual and often ambiguous change – lifelong learning, alongside knowing how to learn best through metacognition, has probably never mattered more. While the focus on STEM has been critical, it can also have unintended consequences in what we learn and prioritise as having most value. With the advance of AI, AIOps and automation, the skills which make us human are ever more vital, for example creativity and emotional intelligence, which I believe necessitates a STEAM learning approach where the arts take an equal stage.
‘We need to embed creative confidence, curiosity and excitement on an equal basis to technology skills, from schools to organisations. I think of this as the rise of the deep generalist - having a specialism in a particular area, but having that toolbox, and broader confidence to engage with anyone, whether a person or a bot. We need to invest in people to continually adapt and take away the fear. We also need to target particular skills gaps, bringing in a diversity of experience and improving metacognition. We need to be smart learners as well as have smart technologies’ – Sally Eaves, Aspirational Futures
Finally, diversity remains a key issue – not only around gender or race, but beyond this to embracing a diversity of experiences, from philosophers to historians to creatives, within the teams building AI and contributing to the technology industry more broadly. The challenge we have with algorithms, to a large degree, is the lack of diversity in the people writing them and the risk of bias, often implicit, that results. Putting this into context, alongside the potential for baked-in data biases, we have a staggering 180 identified human biases too.
If we can change our approach, then we can change some of the systemic problems and fears around AI, AIOps and automation. To make this a reality, moving on the narrative and investing in education and awareness is imperative. Technology careers are not all about computer programming – indeed, low-code and no-code are democratising access to automation. Sharing this message, underpinning technology change with continual learning, and supporting people to be effective whilst drawing out the things that make them unique – this is our critical next step for sustainable change and human-tech partnership.
Concluding, the benefits of AI and AI-enhanced automation such as AIOps are clear, from reducing downtime and improving incident management, to moving from reactive to proactive operations to benefit user experience and service orientation alike. To actualise these benefits, it is equally clear that we must invest in people, culture, processes and skills, embedding confidence and trust. The time is now to change the narrative on what an AI-empowered future of work really looks like, and to encourage, inspire and support more people to embrace the change.
Dr. Sally Eaves is a highly experienced Chief Technology Officer, Professor in Advanced Technologies and a Global Strategic Advisor on Digital Transformation, specialising in the application of emergent technologies – notably AI, FinTech, Blockchain and 5G – for business transformation and social impact at scale. An international keynote speaker and author, Sally was an inaugural recipient of the Frontier Technology and Social Impact award, presented at the United Nations in 2018, and has been described as the ‘torchbearer for ethical tech’, founding Aspirational Futures to enhance inclusion, diversity and belonging in the technology space and beyond.