Education in the Age of Artificial Intelligence

Kurt Cagle, 23/01/2024
Artificial intelligence (AI) is set to revolutionize education in the coming years.

Let’s do a thought experiment. Assume, for the moment, that AI is not going away, that ChatGPT and its ilk will not only survive but will evolve into a form where computer/human chats become complex collaborative multi-user threads, the next major evolution in social media. Preposterous? I suspect it will happen within the next year, and when it does, it will be a tectonic shift like no other.


The Indomitable “Why?”

The broad shape of education has changed comparatively little in the last five hundred years. There have been profound changes in our understanding of how the brain learns in both children and adults, which has influenced what we teach, and certainly the scope of information available to both students and teachers has jumped dramatically. Yet, for the most part, the Socratic method of lecturing - the teacher imparting wisdom, then asking questions of the students, who must show they have understood that wisdom - has remained relatively unchanged.

Young children are full of questions.

“Why is the sky blue?”,

“Why do I have five fingers?”,

“Do fish fart?”,

“What happens to me if I die?”

When they reach this inquisitive age, we take them from their parents and send them to school, where they sit in their chairs (or on the floor) and listen to the teacher tell stories and drill them on the basics of mathematics and how to recognize the ABCs. Eventually, we start teaching them the stories that we call history, and only occasionally do we let them draw or paint or play with clay, because these are messy things and we don’t want to ruin all of their school clothes if they do it too often.

These children are taught to sit still and listen to an authority tell them about the world, and their questions, if too awkward or too rude, or just not appropriate for the moment, are shunted aside in the name of education. Eventually, they learn that asking questions of authority is a bad thing, that they must always have the right answer, and that no one likes the kid who wants to know everything.

A few educational software packages are a little better, but most are egregiously worse because they are built on the principle that the software authors know more than the children - and many of the teachers - do. Mostly, such software is meant to accompany a book, animating diagrams that were static on the page, but because animating graphics is fairly expensive, most titles do not animate anything in a meaningful way.

The Pandemic exposed how problematic this approach truly is. In a classroom, there is usually nothing a student can do but sit in their chair and listen, because everyone else is doing the same thing, and students have learned that standing up and walking around while the teacher is talking is a guaranteed way to get into trouble. With the Pandemic, the teacher suddenly found that she was dealing with 30 monitor screens in a Zoom meeting, some on, some off, while simultaneously teaching her lessons across a PowerPoint presentation she had never needed before, all while attempting to use hopelessly inadequate educational software.

Is it any surprise that so many students had so much trouble when they had to return to a school system after having unlearned how to sit still? Nor is it surprising that so many of those same students (and their older brothers and sisters) turned to YouTube videos (and, to a certain extent, TikTok and Instagram) to actually learn anything. They learned that if they had a question about the world, someone had likely produced a video about it somewhere, and that video could answer the question they had at that moment, rather than making them wait another two years (or forever) until the question fell into some school board’s predefined curriculum.

When Wikipedia first came out, educators moaned about how awful it was, how it wasn’t a good way of learning anything, and how it should never be used as a citation. “People should learn how to research things in books,” was a common refrain, and teachers (and, more loudly, school boards and educational experts) bemoaned that the educational system would collapse because of it.

Curiously enough, Wikipedia is still there, and many people still learn things from it autodidactically - without the need for a teacher or other authority telling them not only what they need to learn but when. If the material was too advanced for a given reader, that incentivised the reader to become more proficient, and search provided the tools to do so.


Artificial Intelligence Can Do More Than Generate Essays

When ChatGPT first emerged, one of the biggest issues was that students were figuring out how to get the chat assistant to write their term papers. Yet what was not said (much) in public forums was that students were also beginning to ask questions that they couldn’t ask in school, not because such questions were salacious or forbidden, but because they simply didn’t fit into the current curriculum. Sometimes the earliest versions of ChatGPT were sufficiently unreliable that querents (those who query or ask questions) would get misinformation, and even as the quality of the answers improved, this fear of hallucinations was used to keep people from relying upon it for even basic information.

This is not to say that there shouldn’t be reservations about a single monolithic source of truth providing these answers - that is an ongoing and very heated discussion in AI ethics and social policy, and there should be reservations - but at the same time, one of the biggest sources of knowledge this nascent information service pulled from was none other than Wikipedia.

I will adopt a convention here and refer to all LLMs, SLMs, and WTFLLMs as AI Assistants, because I don’t want to qualify everything every time I write a sentence.

What do AI Assistants do? They answer questions. Until recently, most people asked questions in Google Search or Bing, which would bring up lists of resources that contained the answer. About eight years ago, these same organizations began deploying knowledge graphs, which increasingly focused on directly answering the questions people asked, not just retrieving items. The AI Assistant that has emerged from that bypasses the search altogether (or at least hides it very well) and just gives the answer.

What’s more, increasingly, the questions being asked are directed towards creating new artefacts - stories, images, videos, sounds, things that didn’t really “exist” before the user started asking about them. This is a profound and radical change, and it raises genuinely interesting questions such as “What is reality, anyway?” These questions will ultimately tear apart our existing society, because much of that society is based upon notions of absolute truth, even when such truth is neither real nor necessarily truthful.

One of the first places this capability is already having an impact is education. Put a speech-to-text-to-speech filter designed to understand a five-year-old on an AI Assistant, and that five-year-old can ask WHY all day long and get answers specifically scoped to them. They don’t even need to know how to read. Of course, if the chatbot also wrote out what it was saying (and what Suzy, our hypothetical five-year-old, was saying) as it said it, chances are pretty good that her brain, the little pattern-matching genius that it is, would pick up reading at a surprisingly sophisticated level, far above her pre-Pandemic, pre-AI peers. That same AI Assistant could drop pictures, sounds or video (even games) into the stream-of-consciousness discussions it was having with these early learners.
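For readers who like to see the plumbing, here is a minimal sketch of that loop, assuming a speech-to-text front end, a chat-style AI Assistant, and a text-to-speech back end. The three service calls (listen, ask_assistant, speak_and_show) are hypothetical stand-ins - plain input/print and a canned reply - rather than any particular vendor’s API.

```python
# A minimal sketch of the ask-"why"-all-day loop described above.
# listen(), ask_assistant(), and speak_and_show() are hypothetical stand-ins;
# wire in real speech-to-text, chat-completion, and text-to-speech services.

SYSTEM_PROMPT = (
    "You are a patient tutor for a five-year-old. "
    "Answer in short, simple sentences and end by inviting another question."
)

def listen() -> str:
    """Stand-in for speech-to-text: here we simply read typed text."""
    return input("Child asks: ")

def ask_assistant(history: list) -> str:
    """Stand-in for a chat-completion call that receives the running history."""
    return "Great question! (A real AI Assistant would answer here.)"

def speak_and_show(text: str) -> None:
    """Stand-in for text-to-speech plus on-screen captions, so the child
    hears the answer and sees the written words at the same time."""
    print("Assistant says:", text)

def tutoring_loop() -> None:
    # Keep the whole conversation so follow-up "why?"s stay in context.
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        question = listen()
        if question.strip().lower() in {"stop", "bye"}:
            break                              # the child says stop; we stop
        history.append({"role": "user", "content": question})
        answer = ask_assistant(history)
        history.append({"role": "assistant", "content": answer})
        speak_and_show(answer)                 # spoken aloud and written out

if __name__ == "__main__":
    tutoring_loop()
```

The point of the sketch is the shape of the loop - listen, answer, caption the answer aloud and in writing, repeat - not the specific services behind it.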

What’s more, and this is critical, Suzy could tell the Assistant to stop, and the Assistant would stop, only to carry on the conversation again when Suzy is ready. Time for lunch? Suzy can go get food and come back, or maybe she can carry on the conversation on her phone while she eats (many, many five-year-olds now have phones. Think about that).

Does that mean that the conversation is all one-sided? No, of course not. That same AI could (in the not-too-distant future) analyse its interactions with Suzy, decide that there are holes or gaps in the kinds of questions being asked, and craft responses intended to solicit other questions. “What’s a prism? Let’s talk about light, and how it gets refracted. Let me show you a movie that’s relevant, or even create one as needed.”
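As a toy illustration of that gap-spotting, here is a short Python sketch that tracks which broad topic areas a child’s questions have touched and nudges the assistant toward the least-visited one. The topic list and keyword matching are purely illustrative assumptions; a real assistant would presumably use the language model itself to classify questions.

```python
# A toy sketch of the gap-spotting idea above: track which broad topic areas
# a child's questions have touched, and nudge toward the least-visited one.
# The topics and keyword matching are illustrative assumptions only.

from collections import Counter
from typing import Optional

TOPIC_KEYWORDS = {
    "light and colour": ["sky", "blue", "rainbow", "prism", "light"],
    "animals":          ["fish", "dog", "bird", "fart"],
    "the body":         ["fingers", "bones", "heart"],
    "big questions":    ["die", "space", "time"],
}

def classify(question: str) -> Optional[str]:
    """Return the first topic whose keywords appear in the question, if any."""
    q = question.lower()
    for topic, words in TOPIC_KEYWORDS.items():
        if any(w in q for w in words):
            return topic
    return None

def suggest_next_topic(questions: list) -> str:
    """Return the least-visited topic so the assistant can lead toward it."""
    counts = Counter({topic: 0 for topic in TOPIC_KEYWORDS})
    for q in questions:
        topic = classify(q)
        if topic is not None:
            counts[topic] += 1
    return min(counts, key=counts.get)   # the biggest gap

asked = ["Why is the sky blue?", "Do fish fart?", "Why do I have five fingers?"]
print(suggest_next_topic(asked))          # -> "big questions"
```

Run on the three sample questions, the sketch points the assistant toward the so-far-untouched “big questions” bucket, which is exactly the kind of gentle steering described above.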

Adults have been doing this for years - there’s an entire industry built around training videos, explainers, podcasts, and so forth - and it will likely explode, since creating completely customized content in real time is likely to remain impractical for at least another decade, probably longer. Kids have been doing it as well, only this is invisible to the existing educational system.

The Case For (and Against) Generative AI In School

Jean Piaget was instrumental in showing that kids learn certain skills at certain times as their brains develop, and that missing those windows often means they “fall behind” their peers because the brain is no longer as optimized for the task at hand. A bad teacher, an illness at the wrong time, a move, financial stress at home - all of these factors have meant that millions of kids have missed these critical milestones and thus lack the foundation on which to build their knowledge base. Not surprisingly, race and social class enter the mix as well, as people lower down the socioeconomic pyramid are far more likely to face ALL of these factors than their more affluent peers. This only exacerbates already endemic social inequality.

On the positive side, what if all kids had access to such AI assistants from an early age? I’ll get into the downsides of what could go wrong, but there are many good things about such assistants, especially if they are paired with human assistants (aka teachers). For starters, if it reaches the student early enough, such an AI could, with the right foundational base, provide subtle prompts to keep the student engaged on their intrinsic timetable, not on the timetable set by a school board or university curriculum panel. Students could work with their peers as well, even if those peers are on the other side of the planet, and those peer groups would likely form around the same developmental age, not the same physical age. These “classrooms” would be asynchronous, interactive, and to a certain extent self-building.

This also allows for diversification of interests. Some people are not interested in advanced mathematics, music theory, or home economics. This kind of teaching can be introduced subtly, but it can also be done one-on-one, as appropriate to the needs and interests of the student, and customized to work around a developing psychological profile. It also does away with the whole notion of grades, replacing them with competencies, and ultimately becomes a lifelong process, not something that stops at high school graduation.

What are the roles of teachers in this? I anticipate that teachers will go from imparters of wisdom to one of several other roles. One key role is as the manager of the educational development of a portfolio of students: working as a guidance counsellor, providing a human touch and human tutoring as need be, advocating for their students, and ultimately acting as the steward of at least a few years of each student’s life. Others will become curators, ensuring that the material that goes into the assistant’s base is timely, accurate and appropriate, while yet another role is curriculum content development. Many people are essentially providing ad-hoc training on the web in exchange for minimal advertising revenues; they could be paid a meaningful wage doing the same thing for an educational assistant base (or several).

Now, the downside to all of this. The existing status quo will fight tooth and nail to prevent it. I don’t think the opposition will come from the teachers themselves - most of them know the system is broken: they went into teaching because they wanted to teach, and have instead become trained parrots with very clipped wings. You see this in the haemorrhaging of teachers at all levels as wages have failed to keep up with inflation or the cost of their educational certifications, leaving schools increasingly struggling just to find enough teachers to get through the school year.

The opposition will instead come from several sources - parents forced to work in offices who must now figure out how to keep their kids safe and supervised during the day, school boards being infiltrated by special interest groups promoting specific cultural or religious agendas, publishers whose profits come from sales of both books and educational software, academic researchers who see schools as their laboratories, religious groups who want to control the education of their flock’s children, politicians and civic leaders who use education as a rallying point for their constituencies, and many others. That’s one reason I see this transition as a decades-long process and why I expect population density to play a big role in the adoption of an AI-assisted educational system.

In addition, there are concerns about what goes into that knowledge base, the potential for indoctrination along any particular ideology, not to mention the sharding that could occur as different regions or interests set up competing educational systems. Indeed, what I’m fairly certain will happen is that you’ll see an overlay of multiple networks emerge - public nets, private nets, commercial/corporate nets, university nets, religious nets, even political nets - each likely establishing standards first within their own networks and then between networks. I also think you’ll see open-source nets that reflect that philosophy of information sharing and that may, over time, act as the glue connecting the other networks.

AI will play a big part in this, and I anticipate that those who try to inject too much ideology into the mix run the risk of damaging their own children’s competitive potential. However, there may be another factor in play - such networks are no longer solely geographically based, though there will still be some geographic bias (for instance, the Northeast vs. the South vs. the inland Midwest and Plains vs. the West Coast). In other countries, you’ll see similar macro realignments, with different networks having many outliers within other networks’ territories.

What I don’t see happening is the status quo holding. Demographics alone mean that school districts are going to continue to shrink in terms of the number of students; the rise of AI Assistants is going to challenge the districts even more, as is the decentralization and network redistribution occurring at all levels (Work From Home is part of that, as is the decentralization of investment and research). Indeed, I’d argue that one of the biggest groups opposed to Return to the Office directives is working parents (especially women) who increasingly need to be at home to manage their kids (and ageing parents) even as they struggle with a full-time job to make ends meet.

To Sum Up

Once you open up the possibility of AI Assistants being a part (possibly a big part) of the educational system, it will likely start a coalescent process that takes between five and ten years to take hold, primarily in urban centres first. I don’t see formal schools disappearing overnight, but once the process reaches a sufficient critical mass, it will likely transform existing educational systems beyond recognition, and it will likely do so worldwide.

Whether this is desirable or not is another question. Educational studies have strongly suggested that we retain less when we interact with screens of any sort than with traditional pedagogy, that what we do learn is at a shallower level, and that we may be sacrificing critical thinking and the development of memory for the ease of retrieving information from (and, for that matter, storing it to) outside sources. This is part of a broader trend in which we seem to be exchanging offline literacy for online literacy and “outsourcing” our knowledge to automated providers.

I’m curious to hear what you, my readers, think about the role that AI will play in education. This has been a subject I’ve been thinking about for a while, and I’d be curious as to what other takes people have on the topic.


Kurt Cagle

Tech Expert

Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large scale information and data governance strategies for Fortune 500 companies in the health care/insurance sector, media and entertainment, publishing, financial services and logistics arenas, as well as for government agencies in the defense and insurance sector (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign. 

   