Will Generative AI Spell The End Of Work?

Kurt Cagle 08/12/2023
Generative AI will change the nature of work by automating routine tasks, but it is more likely to complement human work than to replace it entirely.

Will AI end jobs? The question is asked in any number of forums and posts, yet ultimately I don't think there's really any doubt. AI is eliminating jobs, has been for a long time, and will ultimately reach an equilibrium where it is simply not cost-effective to eliminate any more.

I think the bigger question is whether or not Generative AI is eliminating jobs that should be eliminated anyway.

Of Niches, Enhancements and Replacements

Before digging deeper into "well, what happens then?" it's worth discussing a few basic concepts that make it a little easier to understand what exactly is happening. The first term, job niche, is the set of all jobs with roughly the same skill sets and requirements. As a general principle, AI does not so much eliminate jobs, which are actions and tasks for a specific purpose, as job niches, which represent a much broader set of such employment.

For instance, an AI that successfully converts a prompt into the corresponding SQL query could eliminate the job niche of SQL query writer. A similar AI that automates the creation of SQL schemas could eliminate the job niche of SQL schema writer. A SQL developer might survive the loss of the schema work alone, but with both schema and query writing automated, the need for the SQL developer largely disappears. Such developers will not be let go immediately, but as they move on (or when the opportunity presents itself to reduce headcount), the likelihood is strong that SQL developers become vulnerable.
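To make the niche concrete, here is a minimal sketch of such a prompt-to-SQL assistant, assuming the pre-1.0 openai Python client with OPENAI_API_KEY set in the environment; the schema and example question are invented for illustration:

```python
# Minimal prompt-to-SQL sketch using the (pre-1.0) openai client.
# The schema below is an illustrative assumption, not a real database.
import openai  # assumes openai<1.0 and OPENAI_API_KEY in the environment

SCHEMA = """
CREATE TABLE customers (id INT, name TEXT, region TEXT);
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, placed_on DATE);
"""

def prompt_to_sql(question: str) -> str:
    """Translate a natural-language question into a SQL query."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Translate the user's question into SQL for this schema:\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(prompt_to_sql("What were total sales by region in 2023?"))
```

Note that the person running this still needs to know whether the generated SQL is correct - which is precisely why the niche transfers rather than simply vanishing.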

A niche is considered transferable if the new AI-created niche can be filled relatively easily with skills of the same complexity. A person using prompts to create SQL likely still needs to understand what SQL and schemas are, but also needs to understand prompt engineering. As this isn't that big of a leap, the niches involved are transferable - it's unlikely that this will have a huge impact on the industry, either for older developers learning new skills or for newer developers looking to specialize.

Transferability usually occurs when you're seeing a shift between two virtualized niches (SQL to query prompting). Prompt engineering still requires an understanding of database structures, just as "no-code" usually still requires some basic understanding of algorithmic programming. That understanding can take a while to assimilate fully, which all but guarantees that the typical users of copilots will be programmers seeking to spend less time writing "tedious" code rather than non-programmers seeking to become programmers.

Indeed, the latter camp may struggle when dealing with the 80/20 Pareto characteristics of modern coding because they lack the foundations necessary to write code in the first place. As intellisense-type interfaces have been around for more than twenty years, the belief that programming is going away may be both specious and premature.

This holds especially true in the Generative AI realm. Even before ChatGPT first surfaced, spell checkers were evolving into grammar checkers, often making stylistic recommendations based upon transformer algorithms. We've been moving towards understanding broad language structures in various scenarios, from novels to screenplays to marketing copy. Marketing copy is generally highly stylized and formulaic, and as such, generating marketing copy, a largely thankless task, is a legitimate use of artificial intelligence. When thinking about careers, almost no one aspires to become a marketing copywriter.

This, however, becomes one of the key points of contention about virtualization. When you introduce AI in a generative capacity, there's an interesting curve that I suspect is related to the Uncanny Valley problem in animation. When a generative process handles tasks without specific tweaking, it usually performs remarkably well. This makes sense: what is being retrieved is a solution on a previously calculated optimization surface.

To explain what that means, think of any machine learning solution as a very complex curve plotted in a great number of dimensions. When you create a prompt, the prompt serves to identify a set of parameters along this curve, locating the most likely clusters of meaning. You can also post-process the results by applying patches to the curves, but those patches make the solution curves less accurate, though still reasonable in most cases. This is one source of hallucinations. By taking the new information into account in subsequent builds, a newly created model will jump in accuracy, though of course at the cost of having to rebuild the model.
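As a loose illustration of that idea (this is a toy curve fit, not how an LLM works internally), consider how a fitted curve behaves on and off the region it was trained on:

```python
# Toy analogy: a model is a fitted curve. Prompts landing near the
# training data get good answers; prompts off the surface get confident
# nonsense - the numerical cousin of a hallucination.
import numpy as np

rng = np.random.default_rng(42)
x_train = rng.uniform(0, 6, 200)                      # region the model "knows"
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)  # noisy observations

model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

print(f"error near training data: {abs(model(3.0) - np.sin(3.0)):.4f}")  # tiny
print(f"error far outside it:     {abs(model(9.0) - np.sin(9.0)):.1f}")  # huge
```

Retraining on data that covers the new region fixes the errors, but only at the cost of rebuilding the model.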

Note that this is intrinsic to the medium. More traditional data stores face a similar problem because they rely upon indexing strategies to create index keys, though there performance is the property taking the biggest hit, while LLMs face both performance and accuracy issues. No algorithm is going to significantly reduce this - few database optimization strategies come for free.

How is this relevant to virtualization? Among other things, it serves to illustrate that there is a fairly hard upper limit to what the current technology brings with it. LLMs in particular must be fed in order to stay relevant. The newly emergent AI companies made an end run by presenting a genie-like product that seemed like a game changer on first impression (and for all the negativity, AI will certainly have an impact). However, extraordinary claims demand extraordinary proof, and one of the key aspects of the technology is that there will be a trade-off between the availability of resources and the scope of the LLMs. Someone will need to be paid to produce those resources, and the more work involved in creating them (the more creative the work), the more expensive this will be.

Now, you can use the output of LLMs and other AI as source material. This is done quite commonly with images and GANs. However, even there, entropy takes its due. GANs - and diffusion models such as Stable Diffusion, which work differently but share the principle - are again high-dimensional optimized surfaces. When you create an image in Stable Diffusion, what in essence is happening is that an approximation is made of the curves in the general proximity of the desired image. The more generations an image is removed from its base material, the poorer the quality of the resulting models.
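The generational decay is easy to demonstrate by analogy. The sketch below is not a GAN or a diffusion model - it just re-encodes an image through a lossy codec, generation after generation - but it shows the same entropic pattern: each generation approximates the previous approximation, and the drift from the original never comes back:

```python
# Analogy only: generational loss via repeated lossy re-encoding.
# Requires Pillow and numpy.
import io
import numpy as np
from PIL import Image

original = Image.radial_gradient("L")   # built-in 256x256 test image
current = original

for generation in range(1, 6):
    buffer = io.BytesIO()
    current.save(buffer, format="JPEG", quality=25)   # lossy approximation
    buffer.seek(0)
    current = Image.open(buffer).convert("L")
    drift = np.mean(np.abs(np.asarray(current, float)
                           - np.asarray(original, float)))
    print(f"generation {generation}: mean drift from original = {drift:.2f}")
```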

The mechanisms tend to be different for other forms of Generative AI. Still, the general principle remains the same: by moving to AI models, we're replacing indexing costs with (potentially much higher) model rebuilding costs, and we increasingly have to add to this mix the very real costs of licensing new material or face the perils of entropy. I will also make the point that I believe there are semantic and symbolic logic approaches that can ameliorate these costs, but they come at the expense of "playing nice" with technology stacks outside of the machine learning realm - a tough pill to swallow for the many who believe that neural network approaches by themselves are sufficient to solve the increasingly thorny issues of neural modeling.

[Image: Generative AI won't replace work]

Generative AI is Still Augmentative

What keeps emerging in discussions of generative AI is that, as a technology, it works well when used to augment existing skills, though currently this translates primarily into improvements in quality rather than significant changes in throughput for workers. Put another way, AI may be helping workers, but that help does not seem to translate directly into financial productivity gains for companies. It has also emphasized that expertise, creativity, and innovation can't necessarily be easily (or cheaply) captured and automated - a lesson consistent with what we have learned over the last few decades wherever expertise has been coupled with automation.

In the long term, this provides a good indication of how "dangerous" various forms of AI are. Areas where GPUs are heavily used (such as media production) are quite safe right now, because those same GPUs are usually a couple of generations ahead of consumer-level GPUs, with the attendant need for specialists. The same holds for biomedical specialization, economics, weather, and similar complex non-linear system modeling.

Ironically, the jobs that are most vulnerable to AI are also jobs that are facing labor shortages because they don't pay enough to be economically viable (s**tjobs). This might have a surprising impact. As these jobs increasingly get filled by ChatGPT or similar interfaces, the long tail gets truncated, pushing prices up even as fewer people pursue them. This makes it possible for the writers and creatives who remain to become more bespoke in their offerings, with people paying these authors for their writings specifically - regardless of whether or not AI may have been used in the production.

A similar phenomenon occurred in the early 2000s with blogging. The web was flooded with lots of bad writing as people could, for the first time, be seen outside the confines of editorial control. However, within three years, people discovered that even with the advent of at least moderately useful tools, blogging was hard, required perseverance, and often did not provide much feedback unless you were willing to make it work for the long haul (and sometimes not even then). This trimmed the number of potential bloggers dramatically, and while it took a while, enough people made a living in blogging to make it a viable career. Jobs weren't so much lost as they reached an equilibrium. The same thing is happening today.

ChatGPT is not free. Beyond the service costs, you do need to set up effective prompts to deliver what's needed, and this becomes part of the overall expense. It is likely that many GAN-based tools will also become embedded in existing toolsets (such as Adobe Photoshop), which may also carry per-image or subscription fees. The cost per individual query or image is comparatively small, but when such costs become an integral part of a paid service, they add up quickly - a fact already being reflected in diminishing revenues for ChatGPT and other services despite a still-growing number of them. As the market prices the costs of Generative AI into the equation, the real scope of replacement will become evident (hint - it may not prove as financially viable as you'd think).
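A back-of-the-envelope sketch makes the point. Every number below is an illustrative assumption (per-token rates change constantly), but the shape of the arithmetic does not:

```python
# Back-of-the-envelope cost of embedding an LLM in a paid service.
# All figures are assumptions for illustration, not quoted prices.
PRICE_PER_1K_TOKENS = 0.002        # assumed blended rate, USD
TOKENS_PER_REQUEST = 1_500         # prompt + completion, assumed
REQUESTS_PER_USER_PER_DAY = 40
USERS = 50_000

daily = (USERS * REQUESTS_PER_USER_PER_DAY
         * TOKENS_PER_REQUEST / 1_000 * PRICE_PER_1K_TOKENS)
print(f"daily LLM cost:  ${daily:,.0f}")         # $6,000
print(f"annual LLM cost: ${daily * 365:,.0f}")   # about $2.2 million
```

A fraction of a cent per query becomes millions of dollars a year the moment query volume looks like a real product.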


Perfect Jobs for Generative AI

Several years ago, I worked on a project for the Library of Congress. At issue was the challenge faced by the Congressional Research Center, which was responsible for curating and annotating the Congressional Record to identify key individuals, events, locations, and similar information. This work was done by a team of six very dedicated librarians, all of whom were in their fifties and sixties, all of whom would retire within the decade, and all of whom were swamped with the work.

Curation work is critical - it is the first step in building out metadata that helps identify key entities in various and sundry works and as such becomes a big part of search and process tracking. It's necessary, but it is also overwhelming for humans to do it at anything close to real time.

There's a natural synergy here between LLMs, Knowledge Graphs (KGs), and this kind of indexable curation. A knowledge graph is a compact representation of information, contains URIs that can uniquely identify a given entity, and can readily associate text annotations with entities at varying degrees of specificity that can be retrieved quickly. In other words, a knowledge graph is a consistent index, and this is critical in any information system, especially one where the cost of language production is comparatively high.
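A minimal sketch of that "consistent index" idea, using the rdflib Python library - the namespace and entities below are invented for illustration:

```python
# A knowledge graph as a consistent index: the same URI always denotes
# the same entity, and annotations hang off it. Entities are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/congress/")
g = Graph()

speech = EX["record/1994-03-12/item-42"]          # hypothetical record entry
g.add((speech, RDF.type, EX.FloorSpeech))
g.add((speech, EX.mentions, EX["person/senator-0042"]))
g.add((speech, RDFS.comment,
       Literal("Remarks on appropriations for the Pacific fleet.")))

# Retrieval is cheap and deterministic - the same query on the same
# graph always returns the same annotations.
for annotation in g.objects(speech, RDFS.comment):
    print(annotation)
```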

However, most KGs do not intrinsically handle the creation of abstracts well, and while patterns can be used to detect specific shapes of data, querying a knowledge graph consistently requires an intrinsic understanding of the ontology that the KG uses. An LLM, by contrast, generates abstracts readily. What it does not do is generate the same abstract for the same query. An LLM can identify relationships, but in general cannot go very deep, and the relationships themselves are very broad unless they have been prequalified through something like a KG. Finally, while LLMs can create contextual environments, such environments do not hold across sessions.

Together, the generative LLM and the KG complement one another, but they also change the nature of the curational role. Instead of producing and annotating content, the curator will increasingly fall into an editorial role, approving content or choosing between multiple potential summaries. Once in the knowledge graph, relationships can be added or amended easily. Publishing into the knowledge graph provides verification and overview, while periodically republishing that curated content into the LLM updates the model with stronger, more verifiable information than it could derive from uncurated content alone.
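Here is a sketch of the editorial loop this implies - the LLM call is stubbed out, and the wiring is my own assumption about how such a pipeline might look, not a description of any shipping system:

```python
# Human-in-the-loop curation: the LLM proposes candidate summaries,
# the curator approves one, and only then is it published into the KG.
from dataclasses import dataclass

@dataclass
class Candidate:
    entity_uri: str
    summary: str

def draft_summaries(document: str, n: int = 3) -> list[Candidate]:
    """Stub: in practice this would call an LLM n times, since the same
    prompt yields a different abstract on every run."""
    return [Candidate("http://example.org/doc/1", f"Draft summary #{i}")
            for i in range(1, n + 1)]

def curate(candidates: list[Candidate]) -> Candidate:
    """The human step: choose between drafts rather than author from scratch."""
    for i, c in enumerate(candidates, 1):
        print(f"[{i}] {c.summary}")
    return candidates[int(input("Approve which draft? ")) - 1]

def publish_to_kg(approved: Candidate) -> None:
    """Stub: would write the approved summary into the knowledge graph."""
    print(f"Published {approved.entity_uri}: {approved.summary!r}")

publish_to_kg(curate(draft_summaries("...document text...")))
```

The curator's approved output is also exactly the kind of verified material worth feeding into the next model build.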

This should provide an indication of how a number of such "busy work" jobs will end up changing. The human stays in the loop primarily as both quality checker and data steward, rather than as author.

Content authors, on the other hand, will likely be writing for multiple audiences - the existing content management systems that evolved from the blogging/publishing platforms of the last thirty years, and the emerging community Language Model/Knowledge Graph systems. A Knowledge Graph is a perfect platform for traditional digital asset management systems, and it is only because of inertia that you don't see more KG-based publishing systems in play.

You need this content because things change. Content grows stale, new information needs to be added, news needs to be updated. That is the essence of publishing, and everyone has become so caught up in the AI aspect of LLMs that they don't recognize the LLM as simply another publishing medium, albeit a very sophisticated one. That, however, is the topic of another article.

[Image: Future of Work in the Long Term]

Long Term

Focusing first on GAN-based systems that affect video, audio, and imagery: my expectation is that existing copyright and ownership laws will likely hold sway here, with clarification coming around what can be considered collage art. This will mean that when a model is created, it is the responsibility of the model creator to provide a model key or keys that can be used to produce a manifest of all artists in the collection. The model creator (or their proxy, if imagery comes from a common image or video stock collection) would then negotiate compensation with the artists in that collection.

It is not feasible to identify, with GANs, what specific images contributed to a given final result, and I don't see the US copyright office going there. I can see artists creating their own models, however, which would have different license agreements. LoRAs and similar model-adaptation tools would be sold as software, as their role is transformative rather than foundational. Since such augmentative software is increasingly replacing raw models in the GANs space, this dichotomy will likely only become stronger over time.

AI-based voices have existed for a while now, and they seem to be following the model used with fonts, with the understanding that a person owns their own voice. Novelty and character voices would also almost certainly carry some form of ultrasonic carrier wave that identifies a voice as being AI-generated, with removal of this carrier wave being considered a crime.

Video AI, for the most part, will be treated as a pipeline of filters on "sequences" of resources - images, video clips, audio clips, text clips, and filters - consistent with video copyright law. Because videos are generally considered narrative structures, the primary copyright will be at the narrative level, while component licensing will likely be handled by smart contracts.

As to LLMs and their derivatives - I will make a distinction here between fiction and non-fiction, because the law currently treats them differently. My anticipation is that documentation, technical papers, business analyses, marketing material, and similar content will ultimately be written chiefly by AI, with humans acting primarily in a setup and curational capacity, as described in the previous section. This may be a major boon for data scientists building comprehensive data pipelines, including the final dashboard analysis, annotated with their own thoughts.

Fiction, on the other hand, involves narrative structures, and here I do not actually see significant inroads being made by AI in producing content. It's not so much that I don't believe AI could do so, but that there will be social pressure to keep it from happening. What will happen instead is that AI will permeate from the bottom up - increasingly sophisticated AI suggesting style, plot, characters, and so forth as the writer develops their story. What will ultimately make the most significant difference is how much less time an AI-augmented author needs to write a novel compared to one who is not, as the writing marketplace has shifted toward smaller, more frequently produced "books" rather than large tomes that take up much space on a bookseller's shelf but take three to four times as long to write.

What won't happen is AI being kept out of the writing process. Corporate and marketing environments don't care where content comes from so long as it meets their objectives and stays monetarily competitive. Journalists need to get the story out and will use the tools available to them to do so before others do. Academics would be far better served laying out the basic contentions of their research and letting an AI make the results intelligible than they would be in writing indecipherable papers that no one can read. Given these incentives, the notion that organizations will actively discourage the use of AI seems specious at best (not that I expect the attempt won't be made) ... so long as the curational aspect remains.

[Image: Future of AI - Wizard]

Generative AI is a Magic Wand

There is a tendency, whenever a new technology emerges, to want to put it into an existing category. Databases are a big category, of course, because databases typically sell not only lots of licenses but also lots of specialized hardware. Looking at the hype around generative AI, there are a lot of people who really, really wanted GenAI to be the next major database technology.

And yet, when you get right down to it, ChatGPT is a lousy database. It's not always good at coming back with the "right" answer, and sometimes it struggles even to recognize that there is a right answer. It is highly dependent upon initial conditions and corpora, and oftentimes (especially in the realm of media) generative AI is more useful for finding novelty than for finding accuracy.

In other words, generative AI is transformative in nature. This is good news. Transformational technologies create things. They illustrate, suggest, abstract, and explain - all things that humans do because they have to, even if they would prefer not to.

Generative AI is a magic wand. The thing about magic wands is that in general, you do not want one working with intent. You do not want it to decide things, because that way lies butterflies with steakwings, astronauts riding on unicorns, and laughing skulls presaging Armageddon.

Kurt Cagle

Tech Expert

Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large scale information and data governance strategies for Fortune 500 companies in the health care/insurance sector, media and entertainment, publishing, financial services and logistics arenas, as well as for government agencies in the defense and insurance sector (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign. 
