Artificial intelligence (AI) hasn't truly reached its full potential yet.
With visions of machines capable of mimicking human intelligence capturing the imagination of researchers, businesses, and the public alike, the history of AI has been marked by cycles of optimism and disillusionment, commonly referred to as "AI Winters."
The term "AI Winter" describes periods when interest and funding in AI research significantly wane due to unmet expectations, setbacks, or failures to deliver on ambitious goals. Despite remarkable advancements in recent years, it is crucial to approach AI development with pragmatism to avoid another winter.
One aspect contributing to the periodic onset of AI Winters is the overhyping of technology. When expectations surpass the current capabilities of AI systems, the inevitable reality check can lead to a loss of confidence and support. It is vital to temper enthusiasm with a realistic understanding of AI's current limitations.
Pragmatism in AI development involves acknowledging incremental progress and celebrating small victories rather than exclusively focusing on grand, speculative goals. This approach helps maintain a steady momentum in research and development without succumbing to the boom-and-bust cycles of the past.
Ethical considerations are also paramount in cultivating a pragmatic approach to AI. As AI systems become increasingly integrated into various aspects of society, addressing concerns related to bias, transparency, and accountability becomes imperative. A pragmatic stance involves actively addressing ethical challenges, fostering responsible AI development, and ensuring that the benefits of AI are equitably distributed.
Furthermore, collaboration and interdisciplinary efforts play a key role in the pragmatic evolution of AI. Bringing together experts from diverse fields such as computer science, ethics, sociology, and policy-making enables a comprehensive understanding of the multifaceted challenges posed by AI. By fostering collaboration, the AI community can navigate complexities more effectively and develop solutions that align with broader societal needs.
I am not a techno-optimist, nor am I a techno-pessimist. I am, at best, a pragmatist - I recognize that technology can solve problems, but I am also aware that technology can create them, and that solutions in search of problems can all too often find them even when they did not otherwise exist. I try, as much as it is possible to do so, to remain fairly objective about technology - reporting on it when it looks interesting but not becoming enamoured with it - partially because technology does not happen in a vacuum. It impacts society profoundly, but society also impacts technology.
Self-driving cars and AI-driven businesses may sound cool in theory, but having a person at the helm can solve problems that can't be resolved by complex, often opaque algorithms or computer models. AI can replace humans for simple tasks but generally augments them for more complex ones - and few tasks are, in fact, all that simple, because they require judgment, which in turn means the ability to adapt to novelty.
I'm honestly not that worried about an AGI Winter, but I think that society has to digest "common" AI first before any discussion about something that is beyond the current state can take off.
I also think that the techno-paternalism that currently infects the upper echelons of the AI world is being foiled, to a certain extent, by the difficulty of controlling the open source movement's endeavours to reverse engineer the technology and make it available to anyone with access to the Internet. I don't think this was "supposed" to have happened, though in retrospect it seems a foolish business strategy not to have considered that possibility.
Pragmatism - it gets a bum rap from idealists, but it's surprising how often it turns out to be the path that gets taken.
Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large-scale information and data governance strategies for Fortune 500 companies in the health care/insurance, media and entertainment, publishing, financial services, and logistics arenas, as well as for government agencies in the defense and insurance sectors (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign.