Generative AI Will Reshape Tomorrow’s Tech Landscape

Helen Yu 08/04/2024

Generative AI is poised to reshape the future of the tech industry.

Before the tech boom of our time, being responsible often came down to someone’s word, a handshake or even a paper napkin, as with soccer star Leo Messi’s first contract when he joined FC Barcelona in 2000.

The world has most certainly changed.

Over the past decade, the tech landscape has exploded with new innovations, from traditional systems to cloud services and ransomware protection. Tech teams worldwide are shifting focus to streamline multi-cloud operations, aiming to innovate without inflating costs. Swift adoption of generative artificial intelligence (GenAI) tools is now crucial for IT to stay competitive. 

But who will take responsibility for AI’s behavior? How do we ensure responsible AI development and ethical innovation? How might we maximize technology’s full capabilities without compromising security, compliance, data sovereignty or the moral obligation to respect people’s privacy?

What Are Large Enterprise Companies Doing?

GenAI is a clear catalyst for innovation, and responsibility is a topic front and center for the biggest tech pioneers of our time. 

During this year’s VMware Explore, on a Las Vegas stage bathed in cerulean blue and purple, responsible GenAI was one of three key topics explored by VMware President Sumit Dhawan and Rajeev Khanna, Chief Technology Officer at Aon. The global professional services firm, with about 50,000 employees across 120 countries, is an avid user of VMware solutions. According to Khanna, GenAI “opens a whole new set of opportunities.” He says that Aon is, at the end of the day, in the risk consulting business, meaning it’s prudent to balance excitement about what’s next with a steady, cautious approach.

I agree. It is easy to get enthusiastic about the next shiny object in tech, but there are many hurdles to overcome before scaling innovation and gaining company-wide adoption. Khanna stresses building and maintaining a culture of responsible AI use and governance – and never losing sight of how foundational human oversight is to the ethical and responsible use of AI. 

Bringing GenAI to All Businesses


At VMware Explore 2023, significant advancements were unveiled. A collaboration with Nvidia resulted in VMware Private AI Foundation, integrating Nvidia AI Enterprise into a versatile platform. It allows IT to run and manage large language models with privacy, security and performance across a variety of AI/ML workloads.

Additions to the VMware Tanzu portfolio simplify container-based app management and enhance security, while the Edge Cloud Orchestrator enables rapid edge site provisioning. Broader enhancements include cloud control planes, stronger ransomware protection and performance boosts for VMware Cloud Foundation, culminating in a potent platform for traditional, modern and AI/ML workloads across clouds and the edge.

We are in a transformative phase, allowing organizations to optimize operations, reduce waste and foster innovation. At VMware Explore, the company emerges once again as a key player, equipping tech divisions to amplify productivity, accelerate innovation and drive sustainable success.

Are We “Just Ken” When It Comes to GenAI?

In the movie Barbie, Ryan Gosling’s character sings “I’m Just Ken” while musing on the uncertain role he plays with and without Barbie by his side. The VMware Explore panel discussion “Responsible AI: What Role Should Humans Play?” underscores how we too are unsure what role humans are to play in the dynamic convergence of GenAI and multi-cloud technologies. Hosted by Office of the CTO leader Richard Munro, the panel did a great job of exploring the ethical principles guiding AI system development and human involvement. 

To start, Meredith Broussard, data journalist and associate professor at the Arthur L. Carter Journalism Institute of New York University, defined AI as “just complicated, beautiful math.” She said many people think of the Terminator, Star Trek or Star Wars when talking about AI, but we need to distinguish what is real from what is imaginary. AI, she said, is “pattern reproduction,” describing it like this: data is fed into a computer, the computer builds a model, and the model uses the mathematical patterns it has found to make decisions; generate new text, images or audio; and predict outcomes.
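To make “pattern reproduction” concrete, here is a minimal, hypothetical sketch of my own (not something shown on the panel, and the numbers are made up): a few data points go in, a simple model captures the mathematical pattern, and that learned pattern is then used to make a prediction.

```python
# A minimal sketch of "pattern reproduction": feed data in, fit a model,
# and let the learned pattern drive a prediction. Illustrative only.
import numpy as np

# Made-up historical data: hours of sunshine vs. ice cream sales.
sunshine_hours = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
sales = np.array([40.0, 70.0, 105.0, 135.0, 170.0])

# "Make a model": find the straight line that best reproduces the pattern.
slope, intercept = np.polyfit(sunshine_hours, sales, deg=1)

# "Use the pattern": predict sales for a day with 7 hours of sunshine.
predicted = slope * 7.0 + intercept
print(f"Learned pattern: sales = {slope:.1f} * hours + {intercept:.1f}")
print(f"Predicted sales for 7 sunny hours: {predicted:.0f}")
```

Generative models are vastly larger and more complex, but the underlying idea is the same: learn patterns from data, then reproduce them.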

CEO and founder of The Cantellus Group Karen Silverman talked about fixing the “plumbing,” and how board composition matters. The people at the top, she said, should have a full understanding about AI’s implications. 

What resonated with me most, however, was the discussion around how AI will change culture. Broussard shared the importance of confronting bias and misconceptions in AI systems, and urged us to assume that social problems will manifest from AI bias.

Lastly, Chris Wolf, vice president for AI Labs at VMware, shared that the AI Labs team is 35% women and 50% minorities. The company’s AI council has three pillars: Smarter Businesses to help customers, Smarter Products and Services, and Smarter VMware. VMware AI Labs also has three workstreams: Research, Incubation and Advanced Development. Two of the three workstreams are led by accomplished women. Could this diversity be the new face of AI leadership? I think so.

The panel further explored Private AI. Private AI means using smaller models that are easier to train. Fewer resources translate into a lower carbon footprint and greater accuracy. Private AI allows an organization to iterate through the cycle faster without a huge impact on the environment. It’s not just about AI, but about cloud, customer, content and context.

One thing is certain: AI is a long game. To quote Chris Wolf: “Avoid the temptation of having a quick win to satisfy the mandate for early success of AI. Be mindful, have choices built in as a capability of AI.” Look for my upcoming episode of CXO Spice featuring Chris as we dive deeper into the topic of responsible AI.

Paving the Way for an Ethical AI Future

The panel emphasized that people should feel comfortable asking questions to guide AI to generate intended outcomes. Our responsibility is to define what AI should be doing and not be doing, and to elevate AI literacy so people understand how AI could amplify existing bias and disinformation. 

Taking great inspiration from the panel and my own experience with AI, here is my view using the acronym RESPONSIBLE.

Reliable: AI’s reliability hinges on high-quality data and the mitigation of biases within the model.  Remember when Apple came out with the Health app in 2014 and it didn’t include menstruation tracking? Reducing bias and building in full representation within the model drives reliability and accountability.

Ethical: The purpose of AI deployment must align with the betterment of society, adhering to regulations.  Embedding ethical guidelines in AI models ensures responsible use. 

Secure: Safeguarding AI’s learning model is crucial as it could fall into unintended hands.  Protecting sensitive data, employee information, and customer data is imperative, and knowing whether your AI model is open-source or private is vital for security. 

Privacy: The nature of data determines its privacy requirements. Identifying whether data is highly sensitive, mission-critical, or subject to regulations is essential to determine what should or shouldn’t be included in the AI model.

Open: Open and transparent communication about AI’s role with employees, customers, and supply chain partners fosters trust and ensures everyone is informed about its purpose and potential impact.

Non-disguised: Addressing the challenge of AI opacity, it’s essential to understand the inner workings of algorithms, how they drive outcomes, and the cascading effects of altering variables within the model, all to enhance transparency.

Standard: Implementing guardrails is a crucial aspect of ensuring responsible and ethical AI development.  Guardrails help set boundaries and guidelines to prevent AI systems from causing harm or making unethical decisions. 

-Ible: Above all else, AI must be human-centered. GenAI’s true potential lies in its accessibility to people of all ages and professions, making it a tool for everyone who can ask questions. My core philosophy is that growth thrives at the intersection of technology and humanity. Technology serves people.

What Does Responsible AI Mean to You? 


Who is responsible for AI? The short answer: all of us. Chris Wolf made the point that there is a lot we don’t know and there are no industry standards. We can all take a page from VMware’s book: the company asks its customers questions about AI as if they were “peers.” This humble, team-oriented approach is profound.

Collective responsibility for shaping AI’s trajectory is shared by organizations, experts and policymakers. As we look to GenAI to make decisions and provide insight, solutions like the VMware platform allow us to confidently pivot and adapt. We live amid fast-changing scenarios and shifting economics, and the models we use must be elastic and dynamic. GenAI in the smart cloud allows for flexibility. Engage in discussions that promote ethical AI development and deployment. Build the infrastructure for AI first and then expand. And most of all, as Wolf suggested, keep asking questions and stay curious.

VMware CEO Raghu Raghuram gives us a glimpse of what’s on the horizon: “For any meaningful enterprise, their data lives in all types of locations. Distributed computing and multi-cloud will be the foundation of AI. There is no way to separate the two. Generative AI will enable us to understand a global problem in a much deeper fashion and create solutions we can hardly imagine today.”

Just imagine.  Imagine the possibilities of GenAI in business…if done thoughtfully and responsibly.


Helen Yu

Innovation Expert

Helen Yu is a Global Top 20 thought leader in 10 categories, including digital transformation, artificial intelligence, cloud computing, cybersecurity, internet of things and marketing. She is a Board Director, Fortune 500 Advisor, WSJ best-selling and award-winning author, keynote speaker, Top 50 Women in Tech honoree and IBM Top 10 Global Thought Leader in Digital Transformation. She is also the Founder & CEO of Tigon Advisory, a CXO-as-a-Service growth accelerator that multiplies growth opportunities from startups to large enterprises. Helen has collaborated with prestigious organizations including Intel, VMware, Salesforce, Cisco, Qualcomm, AT&T, IBM, Microsoft and Vodafone. She is the author of Ascend Your Start-Up.

   