{"id":167958,"date":"2023-02-24T06:00:00","date_gmt":"2023-02-24T11:00:00","guid":{"rendered":"https:\/\/www.ict-pulse.com\/?p=167958"},"modified":"2023-02-23T18:47:25","modified_gmt":"2023-02-23T23:47:25","slug":"ai-disinformation-misinformation-and-meltdowns","status":"publish","type":"post","link":"https:\/\/ict-pulse.com\/2023\/02\/ai-disinformation-misinformation-and-meltdowns\/","title":{"rendered":"AI: Disinformation, misinformation and meltdowns"},"content":{"rendered":"\n

*Artificial Intelligence (AI) has taken over our imagination in recent months, with Large Language Models, in particular, being touted as a ground-breaking development. However, as we move beyond the hype, the true capabilities and limitations of the technology are beginning to emerge.*

 <\/p>\n\n\n\n

It is hard to deny that recent developments in Artificial Intelligence (AI) are beginning to highlight to the masses the potential of AI in both our personal and professional lives. Over the past several weeks, people have revelled in the capabilities of chatbots such as ChatGPT, even declaring that it could replace certain roles, such as those associated with customer care, media (advertising, content creation and technical writing) and research.

However, over time, as more questions have been thrown at these chatbots and people have actively tested their limits, several cracks have begun to emerge, a few of which we outline below. As a result, people are beginning to ask more questions about the deficiencies and limitations of AI, especially Large Language Models (LLMs) such as ChatGPT, which, to some degree, were being marketed as “*the best thing since sliced bread*” and able to transform life as we know it.

 <\/p>\n\n\n\n

## The issue of bias and ethics

One of the issues of greatest concern regarding AI is bias. Although AI is inanimate and inherently unintelligent, it learns from the data it is fed. The humans managing the AI tend to have their own predispositions and perceptions, which, to an appreciable degree, have been shaped by life experience and environment.

For example, much of the AI developed in developed countries is built by scientists who are white, middle-class or upper-middle-class men who graduated from top universities. Who they are, their life experience and the environment in which they operate are all likely to shape how they view the world, which in turn is likely to influence the datasets they use to train AI. In many instances, the result is not overt prejudice, but bias that emerges from the limitations of their experience.
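To make that mechanism concrete, the sketch below (purely illustrative; the two-feature synthetic dataset, the group labels and the class balance are all invented for the example) trains a standard scikit-learn classifier on data in which one group is heavily under-represented, then measures accuracy per group. No line of the code is prejudiced, yet the model serves the under-represented group noticeably worse, simply because it has seen too few examples of it.

```python
# Illustrative sketch: bias inherited from an unbalanced training set.
# All data here is synthetic; no real demographic data is involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature samples for one group; `shift` moves its distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)   # well-represented group
Xb, yb = make_group(100,  shift=1.5)   # under-represented group
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
# Typically prints markedly lower accuracy for group B: the model has
# learnt a decision boundary fitted to the group it saw most of.
```

The same pattern, at far larger scale, is what audits of commercial facial recognition systems have surfaced.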

The issue of bias in AI is regularly highlighted, with one of the most vocal voices being Joy Buolamwini, who was featured extensively in the documentary film ***Coded Bias***. As a black woman, and so representing two groups that routinely experience bias, she has highlighted the inaccuracies in facial recognition technology and automated assessment software sold by big tech (and very well-resourced!) companies, which perpetuate racial and gender bias.

The issue of bias quickly leads to a discussion of the ethics of AI because, as organisations increasingly rely on AI to make decisions, the impact on consumers and the public at large of improperly trained models becomes even more damning. Once again, the ethics of these tools, and the extent to which they avoid unfair discrimination against individuals or groups and provide equitable access and treatment, is a reflection of those who build them. So far, the models tend to reflect unfavourably on their creators.

 <\/p>\n\n\n\n

## Organising and distilling information

To train AI models well, they need to be exposed to large volumes of data. To that end, and depending on the functions the AI is expected to perform, scientists may opt to scrape data from the internet, in the hope that with such large volumes, high accuracy will be realised. In the case of ChatGPT, for example, it was trained on around 570 GB of data drawn from books, web texts, Wikipedia, articles, etc., up to around 2021 (Source: Science Focus).
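As a hedged illustration of what “scraping data” involves at the very smallest scale, the sketch below fetches pages from a placeholder URL list, crudely strips the HTML and discards exact duplicates. The URLs and the cleaning rules are assumptions for the example; real pre-training pipelines like the one behind ChatGPT operate at vastly larger scale, with far more elaborate filtering, deduplication and quality controls.

```python
# Minimal sketch of a scrape-clean-dedupe text pipeline (illustrative only).
import hashlib
import re
from urllib.request import urlopen

urls = [
    "https://example.com/",  # placeholder URL for the sketch
]

def fetch_text(url):
    """Download a page and crudely strip tags and extra whitespace."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html)       # drop HTML tags
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

seen, corpus = set(), []
for url in urls:
    text = fetch_text(url)
    digest = hashlib.sha256(text.encode()).hexdigest()
    if digest not in seen:                     # exact-duplicate filter
        seen.add(digest)
        corpus.append(text)

print(f"{len(corpus)} documents, {sum(len(t) for t in corpus)} characters")
```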

However, though the model has access to a very large dataset, ChatGPT can generate incorrect information that is often framed as authoritative, and so can misinform users. This tendency to ‘fill in the gaps’ suggests that the data it has been trained on has not been organised comprehensively enough for the model to process it accurately and respond correctly to the questions posed.
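That ‘filling in the gaps’ behaviour falls out of how language models work: they predict the continuation that is statistically plausible given the context, not the one that is factually true. The toy bigram model below is a drastic simplification (the three invented training sentences and the greedy decoding are assumptions for the example, nothing like ChatGPT's actual training), but it shows the mechanism: the model confidently completes a prompt with the most frequent answer in its data, even when that answer is wrong.

```python
# Toy bigram "language model": always picks the most frequent next word.
# The training sentences are invented; the point is that the model
# reproduces what is *common* in its data, not what is *true*.
from collections import Counter, defaultdict

training = [
    "the capital of australia is sydney",    # frequent but wrong
    "the capital of australia is sydney",
    "the capital of australia is canberra",  # correct but rarer here
]

counts = defaultdict(Counter)
for sentence in training:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def complete(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        options = counts.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # greedy next-word choice
    return " ".join(words)

print(complete("the capital of australia is"))
# -> "... sydney": fluent, confident and wrong, because that answer
#    dominated the training data.
```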

Further, in a widely reported situation in which a New York Times technology columnist was testing the chat feature on Microsoft Bing’s AI search engine, the interaction took an unexpected turn when the chatbot introduced itself as Sydney and told the reporter that he was not happily married and did not love his wife.
