{"id":168153,"date":"2023-03-31T06:00:00","date_gmt":"2023-03-31T11:00:00","guid":{"rendered":"https:\/\/www.ict-pulse.com\/?p=168153"},"modified":"2023-03-30T20:57:36","modified_gmt":"2023-03-31T01:57:36","slug":"is-pausing-ai-development-the-right-thing-to-do","status":"publish","type":"post","link":"https:\/\/ict-pulse.com\/2023\/03\/is-pausing-ai-development-the-right-thing-to-do\/","title":{"rendered":"Is pausing AI development the right thing to do?"},"content":{"rendered":"\n
This week, over 1,750 academics, engineers and some notable names in the tech space signed an open letter asking all Artificial Intelligence (AI) labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". We discuss the issue from a few perspectives.

Early this week, tech news platforms were abuzz about an open letter petitioning for a pause in "Giant AI Experiments". It had been signed by engineers from big tech companies, such as Amazon, Microsoft, Google and Meta, by well-known tech leaders including Elon Musk and Steve Wozniak, and by more than 1,000 other experts. In the letter, which was authored by the Future of Life Institute, a think tank, concern was expressed that although there is currently an intense race to develop ever more powerful AI systems "…that no one – not even their creators – can understand, predict, or reliably control…", the work of exploring the risks and developing the attendant guidelines, protocols and systems to manage those risks is considerably under-developed.

In response to the perceived situation, the letter's authors are advocating for a pause in AI development of at least six months:

"Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities…"
(Source: Future of Life Institute)

Currently, the debate on the open letter continues, and it remains to be seen what impact it will ultimately have. However, the matter has allowed us to revisit the recent growth and visibility of AI through the lens of the requested pause in development.

Putting the genie back in the bottle in uncharted waters

Over the past several weeks, we have all engaged more with AI-supported chatbots and large language model platforms. And although we might be enthralled by them, a broad range of issues has begun to emerge. For example, matters related to ownership and intellectual property rights, and the extent to which the models can invent facts, perpetuate biases and facilitate criminal activity, among other issues, are currently being debated and have been identified as areas of concern.

Unfortunately, and to a considerable degree, these issues are only now coming to the fore, after the horse has already bolted from the stable. Moreover, in the research and development space, the emphasis is often on pushing the envelope, to see what the technology can do, rather than on considering the implications of pursuing certain types of activities.
It is thus not unreasonable for someone to suggest that, since we are in uncharted waters, some guardrails, so to speak, be established.

One of the challenges with which the open letter seems to be grappling is the fact that we do not know exactly where the work on AI will lead. Essentially, how AI models learn is a black box, and so how they synthesise the data to which they have access, and the resulting consequences, cannot be accurately predicted. However, we are increasingly relying on AI in both our personal lives and the workplace, and we may be less inclined to interrogate the AI-generated responses we are given. We may thus be put in a position where, for example, we believe AI-generated falsehoods, or we perpetuate biases and prejudices with which we might not otherwise agree.

In other words, a pause on AI development would allow us to, among other things, explore the risks of these systems more fully, develop the attendant guidelines, protocols and systems to manage those risks, and consider how much we ought to rely on AI-generated output.

Levelling an uneven playing field?

On the other hand, it may not be lost on most of us that, to a considerable degree, 'big tech' has been the one engaging in the aggressive competition to capitalise on AI. Arguably half-baked products are being released in a bid to capture market share and secure first-mover advantage. Hence, could a moratorium on AI development allow some who have been lagging behind to catch up?

Although the open letter proposes that the pause be public and verifiable, the cynics among us are likely to doubt whether those conditions would be honoured by those who believe they have the most to gain from getting ahead in AI. For big tech companies, and publicly traded companies in particular, the emphasis is on profits and growth. Many of the segments that had been cash cows are drying up as technology has evolved, or are becoming more saturated and competitive. AI offers a new frontier, and for those out in front, the gains could be huge.

Although the figures tend to vary depending on the source, according to Statista, the size of the global AI market is projected to grow from USD 142 billion in 2022 to USD 1.8 trillion by 2030, with a compound annual growth rate of between 20% and 40% (Sources: Precedence Research and Fortune Business Insights). However, it is instructive to note that as much as the public's attention has been on ChatGPT, for example, AI has already made considerable inroads into the healthcare, automotive, manufacturing, transportation, logistics, and banking and finance verticals, among others, which are driving the growth that has been experienced to date. Further, adoption in areas such as automation and the Artificial Intelligence of Things (AIoT) is expected to increase in the coming years, thus strengthening the outlook for the AI market.
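As a quick sanity check of those figures, the growth rate implied by moving from USD 142 billion in 2022 to USD 1.8 trillion in 2030 can be computed directly; it works out to roughly 37% per annum, at the upper end of the 20% to 40% range cited. The short sketch below is our own illustration, not a calculation taken from any of the cited sources, and the `cagr` helper is a name we have introduced for the purpose.

```python
# Illustrative check of the implied compound annual growth rate (CAGR)
# behind the cited figures: USD 142 billion in 2022 growing to a
# projected USD 1.8 trillion by 2030.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

implied = cagr(start_value=142e9, end_value=1.8e12, years=2030 - 2022)
print(f"Implied CAGR, 2022-2030: {implied:.1%}")  # prints roughly 37.4%
```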
Hence, in this highly competitive space that is still filled with opportunity, what could be the impact of a moratorium on AI development on those who are currently ahead?

Parting thoughts

In summary, a unanimous pause in AI development appears unlikely in the coming weeks or months, as the initiative will only succeed if all of the companies and institutes leading AI development agree to it. However, what may be more important is that a voice continually highlights the fact that we are not giving enough attention to how AI may irrevocably change our societies and how we, as humans, operate in the world.

Although it could be argued that the underlying thrust for a pause is some form of fear-mongering, we also ought to be aware that big tech tends to act in its own best interest. In other words, the public good is not always top of mind. Hence, policymakers, and even we as consumers, need to be proactive to ensure that our best interests are known and are being served.