The ‘Scientific Simulacra’: When AI And Hyperreality Collide

John Nosta 11/06/2023

Deepfakes have entered the world of science.

The advancement of artificial intelligence (AI) technology has given rise to a new era of ‘scientific simulacra’, epitomizing Jean Baudrillard’s concept of hyperreality, where reality and representation become indistinguishable. AI language models, like ChatGPT, are now capable of creating convincingly realistic, yet wholly artificial scientific articles. This phenomenon pushes the boundary of hyperreality from visual and symbolic domains into cognitive realms, raising complex philosophical questions about truth and the role of technology in our perception of reality. While AI could potentially enhance scientific writing and editing processes, it also presents the risk of creating fraudulent scientific papers. As we delve deeper into this era of hyperreality, discerning reality from simulation becomes increasingly difficult. Staying vigilant and connected to underlying truths therefore becomes all the more critical if we are to navigate this new landscape wisely.

In 1981, Jean Baudrillard’s influential work ‘Simulacra and Simulation’ introduced us to the concept of hyperreality, a state in which the distinction between reality and representation is increasingly blurred. Fast forward to the present, and we’re witnessing this concept taken to unprecedented levels in the digital age, courtesy of advancements in artificial intelligence technology.

Recently, a proof-of-concept study took the application of AI a step further by using ChatGPT, powered by the GPT-3 language model, to generate a wholly fabricated scientific article on neurosurgery. This experiment aimed to create a paper, complete with an abstract, introduction, materials and methods, discussion, references, charts, and more, that closely resembled a legitimate scientific paper.

In less than an hour, the AI language model generated an article consisting of 1992 words and 17 citations. At first glance, it was indistinguishable from a genuine scientific paper, with comparable word usage, sentence structure, and overall composition. However, upon closer inspection by experts in neurosurgery, psychiatry, and statistics, specific inaccuracies and errors, particularly in the references, were identified, hinting at its AI origin. The authors concluded:

The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although the papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection.

This study suggests that we have ventured into a new era of ‘scientific simulacra,’ where AI language models can produce seemingly authentic yet entirely artificial scientific articles. In Baudrillardian terms, these AI-generated papers are classic examples of simulacra, representations without real-world originals. They mark the evolution of simulacra from visual and symbolic domains to cognitive realms, illustrating the growing influence of AI on our relationship with reality.

We can draw parallels with other everyday examples of hyperreality:

  • Theme parks and Virtual Reality: These simulated realities offer immersive experiences detached from the real world.

  • Advertising and Celebrity Culture: Here, images and personas are crafted, often lacking a connection to the underlying truth or reality.

  • AI-generated images and deepfakes: These technologies blur the boundaries between real and artificial, raising concerns about authenticity and truth.

  • AR/VR contrived realities: Exemplified by the launch of Apple’s Vision Pro, reality is increasingly designed and constructed for us by technology.

The advent of AI-generated scientific articles presents both a boon and a potential pitfall. The potential benefits, such as enhancing manuscript preparation and language editing, are undeniable. Yet, the risk of misuse, as demonstrated by the ability to create fraudulent scientific papers, is significant.

As we plunge deeper into the era of hyperreality and scientific simulacra, the task of discerning reality from simulation becomes increasingly complex. We find ourselves revisiting fundamental questions concerning truth, humanity, and the role of technology in shaping our perception of reality.

In this landscape, vigilance is key. But vigilance is increasingly tested by the growing sophistication of large language models (LLMs). We must ensure that our reliance on models and representations doesn’t sever our connection to underlying reality.

The emergence of these “scientific simulacra” compels us to wrestle with those questions anew. In this age of AI-enabled hyperreality, it’s imperative that we stay connected to the underlying truths of our world, ensuring that the map doesn’t replace the territory.

This is the world Baudrillard warned us about. Now, it’s upon us to navigate it wisely.
