As much as we enjoy the efficiencies arising from using generative AI, we continually have to be aware of the flaws inherent in the technology, the most glaring of which is bias that can result in a broad range of harms. We discuss the impact of the bias on society and outline steps Caribbean countries can take to begin to regulate AI.
Generative Artificial Intelligence (AI) has rapidly transitioned from a niche research area to a pervasive force, capable of creating compelling text, realistic images, and even sophisticated code. However, beneath the veneer of its impressive capabilities lies a significant and pressing concern: the potential for bias and discrimination.
From as early as 2021, we at ICT Pulse have been discussing AI bias and ethics, especially through our podcast. Some of our first conversations were with Matthew Cowen, of dgtlfutures.com, and Gratiana Fu, a Data Scientist for DAI’s Center for Digital Acceleration. However, as generative AI becomes increasingly integrated into daily life, it can be easy to overlook some of its inherent weaknesses and biases, even as we use such tools to make important and life-changing decisions. It is thus imperative for Caribbean countries to acknowledge, understand, and proactively address this crucial issue.
Why generative AI discriminates
The biases observed in generative AI are not inherent to the technology itself, but rather a reflection of the data it is trained on and the design choices made during its development. Data bias, stemming from the large datasets scraped from the internet, tends to be the biggest culprit. The internet reflects the biases and inequalities present in human society, and AI models trained on that data will perpetuate those patterns.
It is also important to highlight that even with seemingly unbiased data, the algorithms themselves can introduce bias. The way algorithms are designed to process information, prioritise certain features, or make decisions can inadvertently lead to discriminatory outcomes. The situation is compounded by the fact that it can be difficult, if not near impossible, to identify the exact code or parts of an algorithm that are fostering certain biases. These effects can be subtle and hard to detect, which is often referred to as the ‘black box’ problem of AI.
Finally, it would be remiss not to mention that human biases can infiltrate the AI lifecycle at various stages, from data labelling and curation to model selection and evaluation. Further, in the generative AI and Large Language Models (LLMs) space, AI development teams can lack diversity, resulting in their limited perspectives and blind spots being inadvertently encoded into the models. Hence, subjective decisions made by developers and researchers can inadvertently reinforce existing stereotypes and lead to outputs that fail to account for the diverse experiences and needs of a global user base.
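To make the data-bias mechanism concrete, here is a minimal, hypothetical sketch in Python. The group names and approval figures are invented purely for illustration: a naive model that simply learns historical approval rates per group will faithfully reproduce whatever skew exists in its training data.

```python
from collections import defaultdict

# Hypothetical historical lending decisions, skewed by group.
# Each record is (group, approved?); the imbalance is invented for illustration.
training_data = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +
    [("group_b", True)] * 30 + [("group_b", False)] * 70
)

def train(data):
    """'Learn' the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in data:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(model, group):
    """Approve whenever the learned group rate exceeds 50%."""
    return model[group] > 0.5

model = train(training_data)
# The model simply echoes the historical skew: group_a applicants are
# approved and group_b applicants are rejected, regardless of individual merit.
print(predict(model, "group_a"))  # True
print(predict(model, "group_b"))  # False
```

Real generative models are vastly more complex, but the underlying dynamic is the same: patterns in the training data, including discriminatory ones, become patterns in the outputs.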
The impact of AI bias and discrimination in society
The ramifications of biased and discriminatory generative AI extend far beyond mere inconvenience. In our multiethnic and multicultural Caribbean societies, the datasets used to train popular AI models are not reflective of our countries and citizens, which could lead to, among other things:
- unfair treatment in critical sectors like hiring, lending, healthcare, and law enforcement
- harmful stereotypes being amplified and content that misrepresents or marginalises certain groups being generated
- important characteristics or prevalent conditions in the society, such as health-related predispositions, not being properly recorded or represented in critical outputs that guide policy and other decisions
- disinformation campaigns, propaganda, or even autonomous weapons systems being developed that pose serious threats to national security and regional stability
- widespread distrust of AI and resistance to its adoption if it is considered unfair or discriminatory, thereby hindering beneficial advancements
Ultimately, such biases can exacerbate existing societal inequalities, erode trust in technology, and undermine fundamental human rights. Further, and perhaps even more importantly, on a personal or individual level, continual exposure to such biases can erode our own self-worth and self-perception, which can be particularly harmful among children and youth who would be among our most vulnerable.
Gaps and challenges to regulating generative AI in the Caribbean
Over the past several months, there have been more conversations in the Caribbean region on AI ethics as part of the wider discussion on AI governance. However, in most instances, we have not moved from talk to action. In the latest edition of Cancion magazine published by CANTO, Senior Manager of Regulatory & Policy Affairs at TSTT, Christa Leith, highlighted some of the gaps and challenges in the legal frameworks across the region in relation to generative AI regulation. Of particular note were:
- the region’s lack of AI governance structures
- that existing legal frameworks were designed to regulate various forms of traditional human and business interactions
- that existing data protection laws do not contain provisions addressing generative AI data
- the uncertainty regarding liability in AI systems, and
- the absence of provisions in existing legal frameworks addressing algorithmic bias
Properly addressing these challenges will require not only a major revision of our existing legislative framework but also an extensive change in the associated mindset and reasoning. As currently contemplated, the law addresses human-to-human and human-to-organisation interactions, for example, which permits blame and liability to be readily determined, provided adequate proof can be presented. In the case of AI, however, the architects of a model (the developers, data scientists, etc.) may not be as closely tethered to the product, especially since the model has been designed to learn within a seeming black box. It is therefore likely to be difficult to decisively establish liability under those circumstances.
Further, generative AI systems, which tend to be built on Large Language Models (LLMs), are expensive to build, train and maintain. Hence, they are likely to be owned, managed and housed outside the Caribbean region, which means that Caribbean-based citizens or organisations would have limited access to the inner workings of such platforms, and little or no control over how such models operate or the results they produce. We thus ought to think carefully about the extent to which we rely on generative AI, and whether we are prepared to bear the liability that could arise from relying on the outputs these models produce.
Regulating generative AI bias
Recognising that regulating generative AI will change existing legal and governance paradigms, along with business and societal attitudes and perceptions, particularly regarding rights, offences and liability, it is likely that a multi-faceted approach would need to be employed. Much has already been written on this issue by other sources, but in summary, Caribbean countries ought to be addressing AI regulation along the following fronts:
- Establishing ethical AI guidelines. Although many countries and international bodies have already developed ethical AI principles, no English-speaking Caribbean countries have formally established any ethical AI guidelines. Such a framework could be considered a first step, as it would address the country’s position on basic principles, such as fairness, human-centricity, accountability, transparency, safety, and privacy. These principles should be embedded into national AI strategies and regulatory frameworks.
- Developing comprehensive AI legislation. Current best practice points to the European Union’s AI Act, which takes a risk-based approach to regulation. Legislation should also mandate that AI systems be transparent about their data sources, algorithms, and decision-making processes, with clear lines of accountability for their development, deployment, and use. Further, to address potential bias, the law should require the use of diverse and representative datasets for training AI models.
- Promoting technical solutions and best practices. Complementing legislation, good technical practices ought to be encouraged, such as investing in bias detection and mitigation tools, conducting algorithmic audits, adopting techniques that protect individual privacy while allowing for collaborative AI model training, and developing and standardising metrics for evaluating AI fairness.
- Fostering collaboration and education. Given the global nature of AI development, international collaboration is essential to harmonise regulations, share best practices, and prevent regulatory arbitrage. It would also be prudent to encourage public-private partnerships between governments, industry, academia, and civil society to address AI bias and develop responsible AI practices. A concerted and sustained effort regarding AI literacy and education is needed to increase the public’s understanding of AI and facilitate their empowerment.
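To illustrate what a fairness metric in an algorithmic audit might look like in practice, here is a minimal Python sketch of one widely used check, the disparate impact ratio and the associated ‘four-fifths’ rule of thumb. The audit sample below is invented for illustration; real audits would use actual system outputs and legally defined protected groups.

```python
def selection_rates(decisions):
    """Compute the positive-outcome rate for each group.

    decisions: list of (group, selected) pairs, e.g. drawn from an
    AI system's hiring or lending outputs.
    """
    totals, positives = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 fail the common 'four-fifths' rule of thumb
    and flag the system for closer scrutiny.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Invented audit sample: group_a is selected 50% of the time, group_b 25%.
sample = (
    [("group_a", True)] * 5 + [("group_a", False)] * 5 +
    [("group_b", True)] * 2 + [("group_b", False)] * 6
)
print(round(disparate_impact_ratio(sample), 2))  # 0.5, fails the 0.8 threshold
```

A check this simple obviously cannot capture every dimension of fairness, but standardised metrics of this kind give regulators and auditors a concrete, repeatable test to apply to AI systems.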
Without a doubt, the bias and discrimination embedded within generative AI are not theoretical concerns; they are real and have the potential to inflict significant societal harm. The rise of these systems presents immense opportunities, but also profound challenges for Caribbean countries, which must be urgently addressed. The time for action is now, to sculpt a future where AI is not only intelligent but also equitable and just.
Image credit: Freepik