This week, over 1,750 academics, engineers and notable names in the tech space signed an open letter calling on all Artificial Intelligence (AI) labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. We discuss the issue from a few perspectives.
Early this week, tech news platforms were abuzz about an open letter petitioning for a pause on “Giant AI Experiments”. It had been signed by engineers from big tech companies, such as Amazon, Microsoft, Google and Meta, by well-known tech leaders including Elon Musk and Steve Wozniak, and by more than 1,000 other experts. The letter, which was published by the think tank the Future of Life Institute, expresses concern that although there is currently an intense race to develop ever more powerful Artificial Intelligence (AI) systems “…that no one – not even their creators – can understand, predict, or reliably control…”, the work of exploring the risks and developing the attendant guidelines, protocols and systems to manage those risks is considerably under-developed.
In response to the perceived situation, the letter’s authors are advocating for a pause in AI development for at least six months:
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities…
(Source: Future of Life Institute)
The debate on the open letter continues, and it remains to be seen what impact it will ultimately have. However, the matter has given us occasion to revisit the recent growth and visibility of AI through the lens of the requested pause in development.
Putting the genie back in the bottle in uncharted waters
Over the past several weeks, we have all engaged more with AI-supported chatbots and large language model platforms. And although we might be enthralled by them, a broad range of issues has begun to emerge. For example, matters related to ownership and intellectual property rights, and the extent to which the models can invent facts, perpetuate biases and facilitate criminal activity, among other issues, are currently being debated and have been identified as areas of concern.
Unfortunately, and to a considerable degree, these issues are only now coming to the fore after the horse has already bolted from the stable. Moreover, in the research and development space, the emphasis is often on pushing the envelope – to see what the technology can do – and not necessarily on considering the implications of pursuing certain types of activities. It is thus not unreasonable to suggest that, since we are in uncharted waters, some guardrails, so to speak, be established.
One of the challenges with which the open letter seems to be grappling is that we do not know exactly where the work on AI will lead. Essentially, how AI models learn is a black box: how they synthesise the data to which they have access, and the consequences that follow, cannot be accurately predicted. However, we increasingly rely on AI in both our personal lives and the workplace, and we may be less inclined to interrogate the AI-generated responses we are given. We may thus find ourselves believing AI-generated falsehoods, for example, or perpetuating biases and prejudices with which we might not otherwise agree.
In other words, a pause on AI development would allow us to:
- take stock of the current situation and where AI seems to be going
- identify and articulate the concerns and risks of which we should be aware, and
- identify possible safeguards or guiding principles that could be followed to address the key concerns and risks.
Levelling an uneven playing field?
On the other hand, it may not be lost on most of us that, to a considerable degree, ‘big tech’ has been driving the aggressive competition to capitalise on AI. Arguably half-baked products are being released in a bid to capture market share and leverage first-mover advantage. Hence, could a moratorium on AI development allow those who have been lagging behind to catch up?
Although the open letter proposes that the pause be public and verifiable, the cynics among us are likely to doubt whether those conditions would be honoured by those who believe they have the most to gain from getting ahead in AI. For big tech firms, and publicly traded companies in particular, the emphasis is on profits and growth. Many of the segments that were once cash cows are drying up as technology evolves, or are becoming more saturated and competitive. AI offers a new frontier, and for those out in front, the gains could be huge.
Although the figures tend to vary depending on the source, according to Statista, the global AI market is projected to grow from USD 142 billion in 2022 to USD 1.8 trillion by 2030, with a compound annual growth rate of between 20% and 40% (Sources: Precedence Research and Fortune Business Insights). However, it is instructive to note that as much as the public’s attention has been on ChatGPT, for example, AI has made considerable inroads into the healthcare, automotive, manufacturing, transportation, logistics, and banking and finance verticals, among others, which are driving the growth that has already been experienced. Further, adoption in areas such as automation and the Artificial Intelligence of Things (AIoT) is expected to increase in the coming years, thus strengthening the outlook for the AI market.
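As a quick sanity check on those headline numbers (our own back-of-the-envelope calculation, not one drawn from the cited sources), the growth rate implied by the two Statista figures can be computed directly, and it lands within the 20%–40% range quoted:

```python
# Back-of-the-envelope check of the compound annual growth rate (CAGR)
# implied by the Statista figures cited above.
start_value = 142    # global AI market size in 2022, USD billions
end_value = 1_800    # projected market size in 2030, USD billions
years = 2030 - 2022  # an 8-year horizon

# Standard CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 37.4%
```

That roughly 37% figure sits towards the upper end of the cited range, which is consistent with different research houses using different baselines and horizons.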
Hence, in this highly competitive space that is still filled with opportunity, how would a moratorium on AI development affect those who are currently ahead?
Parting thoughts
In summary, a unanimous pause in AI development seems unlikely in the coming weeks or months, as the initiative can only succeed if all of the companies and institutes leading AI development agree to it. However, what may be more important is that a voice continually highlights the fact that we are not giving enough attention to how AI may irrevocably change our societies and how we, humans, operate in the world.
Although it could be argued that the underlying thrust for a pause is some form of fear-mongering, we ought also to be aware that big tech tends to act in its own best interest. In other words, the public good is not always top of mind. Hence, policymakers, and even we as consumers, need to be proactive to ensure that our best interests are known and are being served.
Image credit: Tara Winstead (Pexels)