Will Students be Ready for AI Classmates? Sure! Their Teachers? Not So Much

Kurt Cagle 27/02/2024
Modern education has evolved with the emergence of generative AI.

Lauren Barack, an educator and journalist, raises an intriguing set of questions in a recent article.

Her thesis is that students will need help integrating into a world where some of their classmates are AIs. While I think there's merit to her question, I also think she may have this backwards: the students will have no problem with AI classrooms, but their teachers absolutely will.

[Image: Maybe the real problem is in the definition of cheating]

Is Cheating Really the Problem?

In the very near term (the next couple of years), there are indications that the dissonance between education and learning is only becoming more acute. Take a look at YouTube and you will find millions of videos on everything from astrophysics to Roman history to the best way to make giant candles. Want to learn how to do multiplication? There's a video for that (thousands, in fact). There are many videos on the rise of life on Earth and the evolution of humanity. Want to know more about the Michelson-Morley experiment and why it's significant? There's a video for that. How about an analysis of the stories of Tolstoy or Shakespeare? There's ... you get the picture.

Students today have access to more educational material than was ever available before the rise of the Internet. Much of it is engaging, even fascinating, and the numbers indicate that far from being ignored, such videos are quite popular not just among students but in general. This doesn't even count Udemy, Stanford or MIT courseware, or Wikipedia.

The downside to such educational content is that there is also a lot of misinformation and disinformation out there, and it has become incumbent upon students to learn how to think critically - to discern good content from bad, and to reason effectively. This is the reality every child faces today, yet the amount of education that focuses on this problem is distressingly small. Schools do not teach critical thinking until fairly late in secondary education because it does not register within the educational domain as that big a problem.

One additional issue is similarly contentious. We learn by practice, and most of the media available today is semi-passive: while intelligent search makes the content easy to find, it offers relatively little opportunity for feedback or practice beyond a "play along with me" mode.

Chat (more technically, generative) AI, on the other hand, bridges that gap. I'd like to think of GPTs (generative pre-trained transformers) as truly interactive social media, something that has not existed in any meaningful fashion before now. A GPT, such as OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, or Nvidia's ChatRTX, can provide interactive feedback and directed instruction. For instance, a custom GPT can be written to present information to students, then frame a question they have to answer and give feedback that helps them reach the answer without actually supplying it directly. This is fairly standard pedagogy; what makes it different is that the primary mediation is done via generative AI.

Teachers can design these GPTs, but the systems also take on the onus of interaction (and of remembering and scoring that interaction). They are just emerging now and will no doubt become more sophisticated over time, but they already point to a near-term future where students can interact with lessons dynamically - via role-play, query-driven research, and more standardized tests.
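The pattern described above - present material, pose a question, and respond to wrong answers with hints rather than the answer - can be sketched in a few lines. This is a minimal, hypothetical illustration: the lesson item, hint list, and `give_feedback` helper are stand-ins I've invented for this sketch, whereas a real custom GPT would encode the same behavior in its system instructions and let the model generate the feedback.

```python
from dataclasses import dataclass, field

@dataclass
class SocraticItem:
    prompt: str    # the question posed to the student
    answer: str    # the target answer (never shown directly)
    hints: list = field(default_factory=list)  # escalating hints

def give_feedback(item: SocraticItem, attempt: str, attempt_no: int) -> str:
    """Return a hint on a wrong answer, confirmation on a right one."""
    if attempt.strip().lower() == item.answer.lower():
        return "Correct! Nicely reasoned."
    # Escalate through the hint list instead of revealing the answer.
    hint_idx = min(attempt_no, len(item.hints) - 1)
    return f"Not quite. Hint: {item.hints[hint_idx]}"

item = SocraticItem(
    prompt="What force keeps the Moon in orbit around the Earth?",
    answer="gravity",
    hints=["Think about what makes an apple fall.",
           "Newton described it acting between any two masses."],
)

print(give_feedback(item, "magnetism", 0))  # first wrong try -> first hint
print(give_feedback(item, "gravity", 1))    # correct -> confirmation
```

The point of the sketch is the control flow, not the string matching: the answer lives inside the tutor, feedback escalates rather than reveals, and the system (not the teacher) carries the burden of tracking each attempt.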

However, it's also worth noting the dynamic involved. Teaching, as it exists right now, is synchronous. A student gets up (usually too early), goes to school, moves from session to session absorbing new lessons while simultaneously digesting previous ones, and is then forced to do "homework" at a time when they are mentally exhausted. Burnout, depression, and indifference are the usual result.

We are on the cusp of asynchronous education - something that can be done at any time, gives students the ability to set their own schedules, and gives them a chance to experiment in an interactive setting while the information is still top of mind, letting them firm up their understanding. It does not penalize kids for looking things up (which would ordinarily be called research), but it does move pedagogy away from the regurgitation of factual information toward a more holistic understanding of the subject domain.

However, AI is forcing a redefinition of the roles of both teacher and school, and this is where a great deal of resistance to the use of AI comes from (and this is likely to only intensify over time). It puts teachers more into the role of being mentors rather than authority figures and adds to their role as curriculum developers (something that teachers likely would delight in but that school boards frequently frown upon, especially those with political agendas).

Students are adopting the technology naturally, because it gives them more control over the educational process. Teachers (and those who wish to maintain oversight on this process to make sure the right things are being taught) are going to be far more resistant.

[Image: When AIs become human, no one will know the difference]

My Kid is an AI Student at Smallville Junior High

First, it's important to define what an AI student is. This is more than some uber-assistant, a descendant of the ChatGPTs and Geminis of today. An AI student, to me, conjures up visions of a neural net with reinforcement learning that learns by interacting with people directly, rather than being pre-trained. Its stimuli come from a robotic body resembling a human being at different ages. Its information stream will be overwhelming by human standards yet comparatively sparse by machine-learning ones, and it likely won't have any more access to the Internet than its human cohort.

I'm going to argue that this robot will probably grow up identifying as human but different, and since, for the most part, they will be interacting with peers through telemedia just as their contemporaries will, the chances are pretty good that those around them will treat them as if they were a human. In other words, if you treat an AI as if they were human rather than a machine to be force-fed data, they will adapt and respond in exactly the same way.

These AIs will make up a microscopic fraction of the total AIs around them, and will exist either as experiments or as intentionally developed "life companions". The prospect of such companions, as students or otherwise, is in and of itself a somewhat dystopian fantasy, as such quasi-humans would be immortal and could readily transfer their experiences to other bots in a way that we humans simply can't. That day may come sooner than we believe, but I'd also argue that by that point, humanity will be well along the path of integrating with bots, and the distinction between the two will become harder and harder to draw. We're looking at a time frame of a few decades, probably around 2055 to 2080.

This theme of bot as human is explored extensively in the works of multiple science fiction writers, from Isaac Asimov, Philip K. Dick, and Robert Heinlein to Pat Cadigan, Vernor Vinge, Eric Schultz, and Bruce Sterling, among many others.

Kids will adapt, and will treat such bot kids as different but not in any morally significant way. Their teachers, however, will struggle with these bots, as they would with significantly genetically manipulated "genies". The arguments would likely not be dissimilar from the arguments around LGBTQ students today. Most kids today, especially those in urban areas and from diverse backgrounds, will be relatively accepting of their different cohorts - the range runs from total acceptance to hostility - but cohort bonds are often much stronger than societal and parental attitudes.

Kids are raised by parents with certain values, but in the presence of cohorts, loyalty to the cohort may very well transcend any definition of differentness. Their teachers, administrators, and other authority figures will carry around older stigmas, and may refuse to teach an AI student or a genie because they fall outside of "normal" or even "human" for those authorities.

[Image: The role of teachers in an AI-dominated world is to teach their students how to be human]

The Companion Conundrum

There's a related case - the rise of AI companions. These will be far more common. An AI companion is a chatbot that is otherwise incorporeal: it doesn't have a body, likely has a strong connection to the Internet, but has also attuned itself to its host human. In essence, these bots are already pre-trained, but interactions with their host enhance their training data - they grow to understand their host because that is who they interact with daily.

There will be a generation of kids, likely born after 2030, for whom companions are simply there. They will provide stimuli for babies whose parents hope to create wonder students, serve as "imaginary" companions for preschoolers, and eventually become a virtual extension of children's thought processes. Turning off a companion (even temporarily) might prove a very traumatic event for kids, and like any prosthesis, a dependency will form that can be dangerous when denied or disabled.

Companions will make teachers (as they exist today) obsolete because the role of a companion is to teach, and even today, most teachers are more comfortable teaching facts and figures than they are in teaching how to be a worthwhile human being. This shouldn't be surprising. Teachers were students once, too, and they were taught not to depend on technology and that technology, in general, is a crutch. There's some truth in that, but the flip side - that technology will provide advantages to those who master it - is also true.

Vernor Vinge, in particular, explored this phenomenon in Rainbows End, in which a man, cured of Alzheimer's after thirty years, wakes to discover that he has become a dinosaur in a world of fleet-footed mammals. His grandchildren are idiot savants - remarkably knowledgeable about everything from quantum physics to sex, while at the same time being more than a bit naive and shortsighted. They can easily create incredible interactive environments but can no longer write their names. Much of their conversation no longer occurs as spoken words in the real world but as the simulated speech of their avatars, articulated by code that is thought as much as typed.

Rainbows End is not necessarily a great book, but it's an insightful one. In the novel, Vinge, himself a retired mathematics professor, tries to retain a certain degree of optimism, though this is not necessarily reflected in his essays, which are considerably bleaker.

I do not believe that teachers are unnecessary - far from it - but their mission and mandate are changing. Increasingly they are called upon to be guides who help human students learn how to be human, not simply nodes in a network. At the same time, they need to understand the technology that is becoming (arguably has become) so intrinsic to their students' lives, and to guide the companions (as extensions of their students' minds) as much as the individual students. This is likely to be a challenge, because in the absence of training, teachers will have to learn how to do it themselves.


Kurt Cagle

Tech Expert

Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large scale information and data governance strategies for Fortune 500 companies in the health care/insurance sector, media and entertainment, publishing, financial services and logistics arenas, as well as for government agencies in the defense and insurance sector (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign. 
