{"id":168153,"date":"2023-03-31T06:00:00","date_gmt":"2023-03-31T11:00:00","guid":{"rendered":"https:\/\/www.ict-pulse.com\/?p=168153"},"modified":"2023-03-30T20:57:36","modified_gmt":"2023-03-31T01:57:36","slug":"is-pausing-ai-development-the-right-thing-to-do","status":"publish","type":"post","link":"https:\/\/ict-pulse.com\/2023\/03\/is-pausing-ai-development-the-right-thing-to-do\/","title":{"rendered":"Is pausing AI development the right thing to do?"},"content":{"rendered":"\n

This week, over 1,750 academics, engineers and some notable names in the tech space signed an open letter calling on all Artificial Intelligence (AI) labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". We discuss the issue from a few perspectives.


Early this week, tech news platforms were abuzz about an open letter that had been signed by engineers from big tech companies, such as Amazon, Microsoft, Google and Meta, and by well-known tech leaders including Elon Musk and Steve Wozniak, along with more than 1,000 experts, petitioning for a pause in "Giant AI Experiments". In the letter, which was authored by the think tank the Future of Life Institute, concern was expressed that although there is currently an intense race to develop ever more powerful Artificial Intelligence (AI) systems "…that no one – not even their creators – can understand, predict, or reliably control…", the efforts to explore the risks, and to develop the attendant guidelines, protocols and systems to manage those risks, are considerably under-developed.

In response to the perceived situation, the letter's authors are advocating for a pause in AI development for at least six months:

"Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities…"

(Source: Future of Life Institute)

Currently, the debate on the open letter continues, and it remains to be seen what impact it will ultimately have. However, the matter has allowed us to revisit the recent growth and visibility of AI through the lens of the requested pause in development.


Putting the genie back in the bottle in uncharted waters

Over the past several weeks, we have all engaged more with AI-supported chatbots and large language model platforms. And although we might be enthralled by them, a broad range of issues has begun to emerge. For example, matters related to ownership and intellectual property rights, and the extent to which the models can invent facts, perpetuate biases and facilitate criminal activity, among other issues, are currently being debated and have been identified as areas of concern.

Unfortunately, and to a considerable degree, these issues are only now coming to the fore after the horse has already bolted from the stable. Moreover, in the research and development space, the emphasis is often on pushing the envelope – to see what the technology can do – and not necessarily on considering the implications of pursuing certain types of activities. It is thus not unreasonable for someone to suggest that since we are in uncharted waters, some guardrails, so to speak, be established.

One of the challenges with which the open letter seems to be grappling is the fact that we do not know where exactly the work on AI will lead. Essentially, how AI models learn is a black box, and so how they synthesise the data to which they have access, and the resulting consequences, cannot be accurately predicted. However, we are increasingly relying on AI in both our personal lives and the workplace, and we may be less inclined to interrogate the AI-generated responses that we are given. We may thus be put in a position where, for example, we believe AI-generated falsehoods, or we perpetuate certain biases and prejudices that we might not otherwise agree with.

In other words, a pause on AI development would allow us to: