The use of AI in law enforcement is a rapidly evolving area, and Jamaica will soon be deploying “Constable Smart”. Although this AI model promises to revolutionise policing in Jamaica thanks to its significant potential benefits, it also has considerable ethical and practical drawbacks, both of which we discuss below.

 

Earlier this week, a news headline caught my attention: Jamaica introduces AI to police force with ‘Constable Smart’. According to the article, the Jamaica Constabulary Force (JCF) has introduced “Constable Smart,” an advanced artificial intelligence (AI) agent designed to revolutionise policing in Jamaica. This AI is intended to assist the JCF with critical tasks such as:

  • Handling emergency calls, by assisting with their processing and response, potentially triaging them and providing initial guidance;
  • Information dissemination, by increasing the speed and accuracy of information shared with the public and within the police force; and
  • Reporting, by facilitating the creation of official police statements.

“Constable Smart” was described as fully operational and ready to enhance law enforcement and public safety efforts. This AI agent is being sponsored by the global tech company and social enterprise, Amber Group, and is expected to come on stream in the coming months.

 

Crime and policing in the Caribbean

As artificial intelligence continues to permeate all aspects of our lives and societies, it should not be surprising that law enforcement would want to capitalise on the benefits of integrating more technology into its operations. Across the Caribbean region, many countries, including Belize, Haiti, Jamaica, Saint Lucia, Saint Vincent and the Grenadines, and Trinidad and Tobago, have been grappling with an upsurge in violent crimes, especially homicides, in recent years, and have been struggling to implement adequate interventions. Support from AI is therefore attractive, particularly since countries have struggled to recruit enough police officers whilst also experiencing high attrition rates. Consequently, to varying degrees, Caribbean law enforcement appears to be becoming more amenable to increased technology integration to support the execution of its mandate.

However, we are also all aware of unfortunate situations that have occurred when technology has been integrated into law enforcement. For example, AI has been used to support racial profiling, and individuals have been subjected to mistaken identity based on the outputs of AI models used by the police. Hence, there are concerns regarding the use of AI models in law enforcement and the reliance on their outputs by the police and citizens.

 

Potential benefits and disadvantages of AI in law enforcement

To be fair, the use of AI by law enforcement is not all doom and gloom. There are potential benefits, especially from the use of large language models (LLMs), some of which are outlined below and echo some of the stated functionality of “Constable Smart”.

First, thanks to the advanced natural language processing capabilities LLMs possess, they can understand and generate human-like text, making them invaluable for tasks involving unstructured textual data. Regarding “Constable Smart”, it should also be able to understand and communicate with citizens in almost any language, including Jamaican Patois.

Second, LLMs tend to excel in research, analysis and report generation. They can quickly create reports, including incident summaries, police reports and statements, and can also conduct legal research to draft initial legal briefs from raw notes, audio transcripts, or fragmented information. Further, they can quickly sift through vast legal databases, summarise relevant case law, identify precedents, and highlight key legal arguments, thus reducing the legwork lawyers, legal clerks, and the police may need to do to fulfil those tasks. Additionally, regarding research activities, LLMs can extract specific entities (names, dates, locations), relationships, and sentiments from witness statements, social media, or intelligence reports, accelerating investigations and potentially reducing the human effort needed for such tasks.
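To make the entity-extraction idea concrete, here is a minimal Python sketch of how such a task might be structured around an LLM. Everything here is illustrative: `call_llm` is a hypothetical stand-in for whatever model endpoint a force actually deploys, and the JSON schema and sample text are assumptions, not a description of how “Constable Smart” works.

```python
import json

def build_entity_prompt(statement: str) -> str:
    """Compose a prompt asking an LLM to extract entities from a witness
    statement and return them as JSON. The schema here is illustrative."""
    return (
        "Extract all person names, dates, and locations from the statement "
        "below. Respond only with JSON of the form "
        '{"names": [...], "dates": [...], "locations": [...]}.\n\n'
        f"Statement: {statement}"
    )

def extract_entities(statement: str, call_llm) -> dict:
    """`call_llm` is a hypothetical stand-in for a real model endpoint;
    it takes a prompt string and returns the model's text response."""
    raw = call_llm(build_entity_prompt(statement))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # LLM output is not guaranteed to be valid JSON -- a real pipeline
        # would flag this for human review rather than guessing.
        return {"names": [], "dates": [], "locations": [], "parse_error": raw}

# Toy usage, with a canned response standing in for a real model:
if __name__ == "__main__":
    fake = lambda prompt: (
        '{"names": ["J. Brown"], "dates": ["2024-05-01"], '
        '"locations": ["Half-Way Tree"]}'
    )
    print(extract_entities("On 1 May 2024, J. Brown was seen near Half-Way Tree.", fake))
```

Note how the failure path is explicit: because model output cannot be trusted to follow the schema, anything unparseable is surfaced for a human rather than silently discarded, which matters in an evidentiary setting.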

Finally, similar to the trend to use LLMs in public relations, marketing and social media, LLMs in the police force can power advanced communication tools, such as assisting in drafting public statements, creating modest public education campaigns, or even generating customised training materials.

On the other hand, users worldwide have had a rude awakening over the years as the negatives of using AI, and more so LLMs, became more evident. Key cons include the following.

First, LLMs can produce hallucinations and factual inaccuracies. In other words, they confidently generate plausible-sounding but entirely false information. This is a critical risk in law enforcement, where factual accuracy is paramount and errors can have severe consequences, such as wrongful arrests and flawed cases.

Second, although LLMs are said to be good at language, they lack true “understanding” or common sense. As a result, they might miss subtle nuances, sarcasm, or critical context in human interactions or documents, leading to misinterpretations, which, once again, can lead to inaccuracies in an environment where “factual accuracy is paramount”.

Third, LLMs are particularly susceptible to perpetuating biases present in their training data, which has been a longstanding concern. These biases could manifest as skewed risk assessments, discriminatory language in reports, or distorted interpretations of facts concerning certain demographic groups.

The fourth con is related to data security and confidentiality. Inputting sensitive case details, witness statements, or intelligence into LLMs (especially public ones) poses immense data security and confidentiality risks. Secure, private LLM deployments are essential, but add complexity and cost. In the case of Jamaica and “Constable Smart”, that matter was not addressed in the news report, but the infrastructure and deployment arrangements ought to be made clear.

Finally, there ought to be concern about over-reliance on generated text. For example, if police officers become too reliant on LLM-generated reports or analyses without critical human review, it could lead to a decline in the investigative rigour and independent judgment expected of them, which in turn could threaten the integrity of law enforcement and citizens’ confidence in the rule of law.

 

Ensuring law enforcement remains smarter than “Constable Smart”

Minimising the cons of using AI and LLMs in law enforcement requires a multi-faceted and proactive approach that prioritises ethics, transparency, accountability, and continuous evaluation. Here are key steps police forces, such as the JCF with “Constable Smart,” can take.

First, a robust governance and ethical framework should be established. Clear policies and guidelines should be developed that formalise specific rules on, among other things, what AI or LLMs will and will not be used for, how officers are to incorporate AI outputs, and how to handle disagreements between AI suggestions and human judgment. Clear usage policies for sensitive data should also be included, with ethical principles incorporated into all AI deployments.

Second, structures ought to be established to address bias and fairness, which include regular bias audits and bias mitigation techniques. Continuous, independent audits of AI models for bias, both pre-deployment and throughout their operational life, ought to be conducted, with technical strategies to reduce bias being implemented.
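As a rough illustration of one building block of such an audit, the Python sketch below computes per-group flag rates and a disparate-impact ratio from a hypothetical decision log. The data, the group labels, and the 0.8 “four-fifths” threshold mentioned in the comments are assumptions for illustration, not part of any JCF process.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, flagged) pairs, where `flagged` is True
    when the AI tool marked the case as high risk. Returns rate per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        if was_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group flag rate. Values well below
    1.0 (e.g. under the common 0.8 'four-fifths' threshold) warrant review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (demographic group, flagged as high risk)
log = [("A", True), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", False)]
rates = selection_rates(log)
print(rates, disparate_impact(rates))  # e.g. {'A': 0.33, 'B': 0.67} -> 0.5
```

A single ratio like this is, of course, only a starting point; a serious audit would examine many metrics, over time, with independent reviewers.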

Third, the accuracy and hallucinations of AI also ought to be addressed. Ideally, addressing this issue starts with how the AI model is built. For example, trusted, verified internal databases and knowledge bases should be used, allowing the LLM to retrieve and ground its responses in factual, police-specific information and significantly reducing hallucinations. However, in Jamaica, as may be the case in other Caribbean countries, many of the systems in police stations are still analogue, so local datasets may be limited and may need to be supplemented from other sources, increasing the potential for inaccuracies and hallucinations.
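For readers curious what “retrieving and grounding” might look like in practice, here is a minimal, dependency-free Python sketch of the idea. The knowledge base, document names, and naive keyword-overlap retrieval are all assumptions for illustration; a production system would use proper embeddings and a vetted document store.

```python
def retrieve(query: str, documents: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank verified internal documents by naive keyword overlap with the
    query. Real deployments would use embeddings; word overlap keeps this
    sketch dependency-free."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def grounded_prompt(query: str, documents: dict[str, str]) -> str:
    """Build a prompt instructing the model to answer only from retrieved,
    verified context -- the core idea behind grounding."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical verified knowledge base:
kb = {
    "report-procedure": "Official statements must be reviewed and signed by the reporting officer.",
    "emergency-triage": "Category 1 emergency calls are dispatched within five minutes.",
}
print(grounded_prompt("How quickly are Category 1 emergency calls dispatched?", kb))
```

The key design point is the instruction to answer only from the supplied context: if the verified documents do not contain the answer, the model is told to say so rather than improvise, which is exactly the behaviour a police deployment needs.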

Finally, cognisant that the role of AI in law enforcement should be advisory, not deterministic, robust human oversight and accountability ought to be implemented by ensuring, at the very least, that human officers always retain final decision-making authority, especially for high-stakes decisions like arrests, charges, or sentencing recommendations. Further, comprehensive and continual training for police officers should be executed to ensure that they understand, among other things, the capabilities and limitations of AI tools, how to critically evaluate AI outputs for accuracy, bias, and relevance, and the ethical implications of AI in policing.

 

In summary, although the integration of AI into law enforcement is inevitable, it ought to be carefully managed. Clear and comprehensive frameworks must not only be established but also be adequately resourced so that they can be implemented successfully. Police officers will also need to be trained to appreciate that AI is not a magic bullet, but rather a tool to help them do their job better. They are still accountable, and so must oversee and interrogate the outputs produced.

Further, as AI becomes more integrated into law enforcement, the public may become more distrustful of the police. Hence, comprehensively and proactively managing the use of AI and LLMs to minimise their inherent risks, uphold ethical policing standards, and foster public trust will be crucial considerations going forward.

 

 

Image credit: Scott Rodgerson (Unsplash)