The Democratization of Artificial Intelligence

Kurt Cagle 06/11/2023

The democratization of artificial intelligence (AI) refers to the process of making AI tools, technologies, and knowledge more accessible and available to a broader range of individuals and organizations.

It aims to break down barriers to entry and empower people with varying levels of expertise to harness the potential of AI.

Here are key aspects of the democratization of AI:

1. Improved Accessibility: Democratization involves making AI tools and platforms more user-friendly, affordable, and widely available. This includes cloud-based AI services, open-source software, and low-cost AI hardware.

2. Simplified Interfaces: Designing AI interfaces that are intuitive and require minimal coding or technical skills, enabling non-experts to use AI effectively.

3. Better Education and Training: Providing training resources and educational materials to help individuals and businesses build AI competency. This includes online courses, tutorials, and certification programs.

4. Community and Collaboration: Encouraging knowledge sharing and collaboration among AI enthusiasts, professionals, and researchers through forums, open-source projects, and conferences.

5. Diverse Applications: Expanding AI applications across various sectors, from healthcare and finance to agriculture and education, making AI accessible for a wide range of industries and purposes.

6. Customization: Allowing users to tailor AI models and solutions to their specific needs, promoting adaptability and customization.

7. Ethical Considerations: Promoting ethical AI practices and raising awareness of potential biases and risks associated with AI to ensure responsible and fair AI development.

8. Promoting Startups and Innovation: Supporting AI startups and entrepreneurial initiatives, fostering innovation and competition in the AI industry.

9. Establishing Government and Regulatory Frameworks: Implementing policies and regulations that promote responsible AI development and address potential ethical concerns.

10. Improved Data Accessibility: Ensuring data availability and open data initiatives to fuel AI development and research.

Democratizing AI has the potential to broaden innovation, improve decision-making, and drive economic growth. It enables a wider range of individuals, organizations, and communities to benefit from AI's capabilities, fostering a more inclusive and equitable future for technology and its applications. However, it also raises challenges related to ethics, privacy, and security that must be addressed as AI becomes more accessible.

Marc Andreessen, of Netscape and VC fame, has been receiving a fair amount of negative press after releasing a manifesto about Techno-Optimism, even from Wired magazine, which is admittedly almost the poster child for techno-optimism. For those unfamiliar with the term (as I was until he surfaced it), techno-optimism is the belief that technology, especially computer technology, is inherently good and desirable and should not be held back by Luddites and government regulators. Technology, in this view, moves beyond invention and instead becomes a secular religion, one that will ultimately prove to be for the betterment of all mankind - assuming that by mankind you mean those whose net worth can be measured in the billions or high millions.

I don't know Andreessen (though I have worked sporadically with Netscape and Mozilla over the years). I was in the math department at the University of Illinois at Urbana-Champaign a few years before Andreessen and his cohorts met there to work out the inner workings of Mosaic, and I laugh (while trying to sob) because it was just an accident of timing that I didn't end up becoming a multi-millionaire from that association. At the time, the web was still in the future, and I had been told by one guidance counselor that there was no real future in computers and that I'd be better off going into actuarial science (true story).

Years before Andreessen and Jim Clark made history with Netscape, back in the late 1980s, I saw the effects of unbridled techno-optimism firsthand. I had gone to work for a small typesetting company in Jacksonville, FL. At the start of the year, the company made $15 million in revenue from businesses nationwide. By the time it closed its doors a year later, its income had dropped to about $500,000. The reason? A new program running on the Macintosh called Aldus PageMaker, with which companies could dispense with the whole typesetting ordeal and its attendant costs.

By 2035, the Office as we know it will not exist.

The Virtualization of Work

The lesson I learned that year was simple - you were only as good as your tech, and you were vulnerable if you didn't stay up to date. The lesson was reinforced repeatedly. I remember sitting in on a meeting with a large clothing retailer's accounting department, trying to argue that our consulting team wouldn't put them out of a job by implementing automation, but I (and they) knew better. In the end, the decision was made (wisely, from the company's standpoint) to automate, knowing full well that quite a few of the people working there had only months before they would have to find new jobs elsewhere.

At the time (the mid-90s) a lot of jobs went away, but so many new ones were being created that it didn't really matter. That, of course, changed in 2000. The stock market, which had climbed to record highs, collapsed in the tech sector, and many paper millionaires (meaning they held stock options) found themselves sleeping in their families' spare bedrooms or under bridges. After increasing dramatically year over year, the tech sector shrank appreciably during that period, and even now tech makes up a smaller part of the economy than it did then.

What automation touches, it transforms, typically by virtualizing it. In the case of AI, it was only a matter of time before we went from providing more efficient processes and assistance to replacing the people who wrote the programs and the words, who drew the pictures and filmed the videos, usually in the name of efficiency and productivity. Productivity, translated, means the amount of money a person generates versus the cost of hiring that person in the first place. There's a brutal calculus there, sketched below: at some point, the cost of the automation falls below the cost of employing a person, at which point they are let go to find yet another job, while the employer pockets the difference.
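
To make that calculus concrete, here is a toy break-even sketch in Python. Every figure in it is an invented placeholder for illustration, not a number from any real company:

```python
# Toy break-even calculation: automation "wins" once its yearly cost
# drops below the fully loaded cost of the employee it replaces.
# All numbers below are hypothetical.

employee_cost = 90_000       # salary + benefits + overhead, USD/year
automation_cost = 25_000     # licenses, compute, maintenance, USD/year
revenue_from_role = 120_000  # revenue attributed to the role, USD/year

human_margin = revenue_from_role - employee_cost        # employer's net with a person
automated_margin = revenue_from_role - automation_cost  # employer's net with automation

if automation_cost < employee_cost:
    pocketed = employee_cost - automation_cost
    print(f"Automation undercuts the employee by ${pocketed:,}/year.")
    print(f"Margin rises from ${human_margin:,} to ${automated_margin:,}.")
```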

We've normalized this process, and all that pocketed money goes into the forces that promulgate the normalization. When business demand slows, employers start firing employees who are too close to receiving pensions, who have expensive skill sets, or who don't fit the company culture (usually as defined by management). Publicly, this same management and their corporate boards offer platitudes about how reluctant they are to do this. Privately, though, their mission has been a success: siphon up the expertise of those they employ so that they can turn it into AIs, capture their knowledge, and, better yet, prevent those employees from taking their ideas to competitors.

This does not mean that every CEO of every tech company is engaged in some vast conspiracy. Most take their companies and their missions seriously. However, it has become increasingly common for some, often in positions of high visibility and responsibility, to express dismissive and disdainful opinions of their employees, customers, and even their peers. This Tech Bro attitude is especially pervasive in Silicon Valley, an abrasive "I've got mine" belief that would not have been out of place during the rail baron era of the 1890s.

The moat keeping competition out is shrinking.

AI Democratization and the Diminishing Moat

Ironically, it is likely that this time around the hubris may be short-lived, primarily because the AI revolution is spawning a revolution of a different sort. In the winter of 2023, the big AI models seemed to leap out of nowhere, with an application that looked likely to completely upend the established software industry. OpenAI became a household name, an epidemic of AI-generated school essays swamped the educational system, and anyone involved in any creative or professional field felt a chill as the specter of career death walked over their graves - especially since these companies had essentially used the public Internet (and billions of pages of content and imagery) to train their models, and by extension to reproduce content and images that borrowed heavily from this source. The assumption, likely, was that by doing it fast enough, this would be a fait accompli.

Unfortunately, it didn't quite work out that way. The code escaped the lab, quickly becoming something akin to open source. For a while, everyone wanted their own large language model, until it became evident that such models were, in fact, truly large and monolithic. Then programmers with lots of time on their hands, after being let go as surplus labor, began reverse-engineering what they saw, generally making generative AI more compact, more efficient, and easier to integrate with the rest of the world.

This has breathed new life into the hoary realm of semantic graphs, as developers in the machine learning space have begun to comprehend what the semantics people have been saying for a while: you can encode logical, inferential data into graphs, then use those graphs to ground and generate the associated large language models (LLMs), which resolves several of the bigger headaches that working with LLMs incurs.

Once this happens, AI becomes a commodity, not an expensive, metered service. Every company can build configurations that pull together different data sources, regardless of whether those sources are LLMs, knowledge graphs, PDFs, or any other data.
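
As a concrete illustration of the graph-plus-LLM pattern described above, here is a minimal Python sketch using the rdflib library. The graph file, the predicate, and the final model call are hypothetical stand-ins - treat this as a shape, not a recipe:

```python
# Minimal sketch: pull declarative facts from a knowledge graph and use
# them to ground an LLM prompt. File names and predicates are invented.
from rdflib import Graph

g = Graph()
g.parse("company_kg.ttl", format="turtle")  # hypothetical Turtle file of business facts

# SPARQL query against the graph; the predicate is illustrative only.
rows = g.query("""
    SELECT ?product ?policy WHERE {
        ?product <http://example.org/hasReturnPolicy> ?policy .
    }
""")
facts = "\n".join(f"{product} -> return policy: {policy}" for product, policy in rows)

# The graph supplies the ground truth; the LLM supplies the language.
prompt = (
    "Using only the facts below, answer the question.\n"
    f"Facts:\n{facts}\n"
    "Question: What is the return policy for product X?"
)
# `prompt` can now be handed to whatever local or hosted model you run;
# the point is that the business knowledge lives in the graph, not the model.
```

Because the facts live in the graph rather than in the model's weights, swapping in a different LLM, or a different data source, doesn't change the logic.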

This presents a dilemma for the Data Barons. The intent of several of them was to be the keeper of the only truth, as provided by the single master model, available only by subscription at twenty cents per person per hour. For the individual, this amount was trivial, but for companies, it was a direct hit to the bottom line. So, of course, the expectation was that a B2B rate would be worked out while still giving these data barons their tithe.

With AI now in the wild, that equation changes ... dramatically. Companies can now create their own LLMs for far less than the original systems took to build, models that better reflect specialized content back to their users. While DALL-E 3 and Midjourney have become the go-to platforms for everyday image generation, Stable Diffusion continues to establish itself as the community of choice for image experimentation - and is becoming better about policing itself. Similar things are happening with code generation, video generation, music generation, and related toolsets.
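
To give a sense of how low the barrier has become, here is a minimal sketch of running Stable Diffusion locally via Hugging Face's diffusers library. The checkpoint name and prompt are just examples, and the exact setup will vary with your hardware:

```python
# Minimal local image generation with Stable Diffusion via the
# Hugging Face diffusers library (pip install diffusers transformers torch).
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # use float32 on CPU-only machines
)
pipe = pipe.to("cuda")  # or "cpu" if no GPU is available

# Generate and save an image - no per-call fee to a hosted service.
image = pipe("a watercolor of a medieval castle with a moat").images[0]
image.save("castle.png")
```

The point is not this particular checkpoint, but that commodity hardware now suffices for what was a gated, metered service only a short time earlier.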

Medieval castles were often designed with a moat, a deliberate sunken trench surrounding the castle. Most of the time, the moat was not filled with water, primarily because stagnant water encouraged mosquitoes and other nasties that the local inhabitants would have had to live with; its real purpose was to make it difficult for attacking forces to scale the walls. Moats play a huge role in modern capitalism as well, typically by forcing competing companies to raise additional capital, hire from a limited workforce, and deal with the patents and licensing fees of the first mover. The deeper your moat, the fewer competitors you'd face.

The democratization of AI is destroying the moats that companies use to entrench themselves in a market. The reality is that the development processes involved in creating a start-up require a relatively limited amount of capital. The expensive part comes when a company is forced to scale rapidly in order to achieve net returns as quickly as possible. However, as AI is increasingly disseminated and democratized, that model is changing in favor of one where data - declarative data - subsumes business logic, and where that explosive growth phase instead shifts into a more manageable (and sustainable) climb. It also lays the groundwork for a more comprehensive and equitable sharing of data, rather than the distinctly asymmetric relationship that exists today.

The data barons have benefitted from a system where money speaks louder than talent, creativity, innovation, and hard work. Investment has been a necessary part of tech, but it is time to re-evaluate if it still is.

Limiting the Data Barons

The rub with all of this is that AI democratization is generally anti-monopolistic. Capitalism requires fair markets to remain viable, and when those markets become monopolistic, capitalism degenerates. To put it in slightly different terms: healthy capitalism is quasi-stable. When capital becomes too concentrated, it becomes monopolistic; when capital becomes too dilute, value cannot be established and everything requires consensus. What you want instead is a system where some benefit accrues from investing, but where value is reflected in the transaction price.

Give everyone the tools to build data-centric AI, and you only need specialized services at the edges. This shouldn't be a new concept - it is, in essence, what has been happening with open source for the last twenty years. We don't need to recreate the stack every time. What we do need to do, however, is figure out how to compensate those who contribute to that stack, and get the middlemen (the financiers) largely out of it.

No doubt this may be seen as heresy, but we don't need the level of VC funding that now exists in the software industry. Yes, people want to get paid for the code they write, the documentation they create, and the images that make their way into specialized models for sale. They need to eat, clothe themselves, pay the rent, put their kids through school, take an occasional trip, or go out to the movies. They want to be compensated for their efforts, and they don't want to scramble to survive in case of a healthcare crisis or family emergency. These are not unreasonable expectations.

It's time to re-evaluate the VC model. The cost to create a piece of software has been dropping dramatically over the last several years, to the extent that getting a viable product ready for market represents perhaps 20% of the overall costs associated with that product, typically requiring a small creative development team for about four to six months - and this doesn't factor in any of the NoCode solutions that have emerged as part of contemporary AI efforts.

What this means in practice is that the cost of developing and deploying business solutions is dropping below the point where it makes sense to invest many millions, if not billions, of dollars into companies. Without those investments, much less money goes to VC firms, investment banks, and large investors, and more of it remains in the hands of the founders and creators. Moving to a model where developers also participate in points of final profit - an ownership stake akin to partnership - would go a long way toward making the space more equitable.

There are ways of self-funding these projects, but right now most are blocked, because the only access to funding is through VCs who want an unhealthy return on their investments in perpetuity, just as access to these projects is gated by recruiting agencies. After all, those same VCs want to arbitrage labor rates across different economies, reducing wages (and hence standards of living) in some countries while importing inflation into others, causing extreme disruptions and wealth imbalances. If that arbitrage is not reined in, Work From Home must become the new normal: workers can arbitrage wages and job opportunities too, if labor is not limited to a geographically constrained region.

The other problem is that solutions generally transcend business sectors, and AI is no exception. We see the verticals - healthcare, media, transit, finance, agriculture, etc. - and believe each domain has unique problem sets. However, once you break the problem into data and delivery, the delivery is remarkably consistent across sectors, because the business logic is, and should be, in the data. This is what a graph solution, working alongside an LLM, provides. Yes, you need to identify and articulate what that business logic is. Still, any good semanticist fully understands that a knowledge graph is a perfect way to build applications, because rules are fundamentally declarative. Express those rules as metadata, as sketched below, and it doesn't matter what vertical you're in.
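
Here is a minimal sketch of what "rules as metadata" can look like in practice, again in Python with rdflib. The namespace, rule names, and condition strings are all invented for illustration:

```python
# Business logic stored as data: the rule lives in the graph as triples,
# and one generic engine reads it. Every identifier here is hypothetical.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# A declarative refund rule, expressed as metadata rather than code.
g.add((EX.refundRule, EX.appliesTo, EX.Order))
g.add((EX.refundRule, EX.condition, Literal("daysSincePurchase <= 30")))
g.add((EX.refundRule, EX.action, Literal("approveRefund")))

# A generic delivery layer just walks the rules. Swapping healthcare rules
# for finance rules means swapping data, not rewriting the application.
for rule, _, condition in g.triples((None, EX.condition, None)):
    action = g.value(rule, EX.action)
    print(f"{rule} fires when '{condition}' and then performs '{action}'")
```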

Conclusion

This is a long post and admittedly covers a great deal of territory. The upshot, however, is that the current evolution of data technologies, which looked to favor a few large corporations, is increasingly being diverted to smaller companies and individuals as monolithic AI solutions give way to decentralized, distributed ones.

This, in turn, is raising questions about whether the multibillion-dollar investments being made in AI companies are doing more harm than good, even as these companies shed the very workers who are developing the technology in the first place. At the same time, those doing the investing are endangering potentially millions of people who will be displaced by this technology - not because the technology is truly that magical, but because it is increasingly used to justify draconian actions that enrich those who have already gamed the system heavily in their favor.

Democratization of AI may very well be one solution to this. Getting the generative components of AI into the hands of individuals and small organizations will open up opportunities both for tool builders and for subject-matter experts and creators, primarily due to the much lower barrier to entry for independents compared with the established publishing giants.

 

Disclaimer: This is a tl;dr post. It's an articulation of some frustrations of mine about the tech field in general, about the people who often perceive themselves to be the solution but may actually be the problem, and about the troubling economics of an AI economy.


Kurt Cagle

Tech Expert

Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large scale information and data governance strategies for Fortune 500 companies in the health care/insurance sector, media and entertainment, publishing, financial services and logistics arenas, as well as for government agencies in the defense and insurance sector (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign. 

   