Artificial intelligence (AI) is no longer a futuristic concept but a practical tool transforming organisations across all industries and sectors. As businesses increasingly integrate AI into their operations, they would be well advised to develop an AI policy. Here, we outline four reasons why organisations need one.

 

Although generative artificial intelligence (AI) platforms became publicly accessible only about 18 months ago, their use is fast becoming the norm. Individuals and employees may not think twice about using DALL-E, Midjourney, Gemini or ChatGPT to assist them with certain tasks. However, the AI space is still developing, and numerous issues are emerging and still to be resolved, especially regarding the implications of AI use.

Accordingly, organisations have been advised to establish an AI policy to guide their use of the technology. Doing so means that time and resources must be allocated to consider the subject, draft a policy, shepherd it through the established review process, and facilitate its official adoption. Although that may seem like a lot of work, here are four reasons why it is crucial for organisations to have an AI policy.

 

1. Establishes clear guidelines

An AI policy sets clear expectations for how employees should interact with and use AI tools. These expectations should be aligned with the organisation’s core principles and values, thereby reinforcing the culture being fostered.

Further, an AI policy offers an opportunity to provide greater transparency to customers and other stakeholders. In client-facing organisations, especially those that collect and use client data, there is a growing expectation that organisations be more vigilant in safeguarding individuals’ personal data. An AI policy that provides firm guidance on this important issue would therefore be beneficial.

 

2. Mitigates risks

AI can introduce new risks, such as bias in decision-making, data security breaches or unintended consequences. A well-defined AI policy can help identify and address these risks, protecting the organisation from potential harm and liability.

An instructive example of the consequences of using AI is the Air Canada case, in which the airline’s chatbot fabricated a bereavement-fare policy and thus gave a traveller inaccurate information. In the dispute that arose, Air Canada argued that its chatbot was “responsible for its own actions”; however, the tribunal directed Air Canada to honour the policy the chatbot had described. The airline was also found liable for negligent misrepresentation and ordered to pay damages to the affected traveller (Source: Canadian Underwriter).

 

3. Helps meet compliance obligations

AI development and use are becoming increasingly regulated. As discussed in previous ICT Pulse articles, numerous concerns have been raised about AI: for example, there have been calls for a pause in AI development, and more countries are establishing AI laws to provide guardrails and oversight.

Further, as noted earlier, the growing emphasis on protecting personal data means that organisations need to be more aware of, and proactive about, how the systems and technologies they use process data, to ensure that they comply with existing laws and regulations. Moreover, in regulated industries, such as healthcare, banking and financial services, the use of AI may increasingly come under scrutiny, necessitating the creation of appropriate policies.

 

4. Supports reputation management

Ethical lapses or controversies related to AI can damage an organisation’s reputation and erode customer trust. The Air Canada case highlighted above could be seen as fuelling consumer distrust in airlines, which are often viewed unfavourably to begin with: air travel has become increasingly expensive, and airlines have been cutting back on flights, routes, and even in-flight facilities and services. Further, the use of AI chatbots is often a cost-cutting measure, and when they are not managed properly, as occurred with Air Canada, an airline can become the subject of ridicule and a cautionary tale of what happens when AI goes rogue.

A clear and transparent AI policy can demonstrate a commitment to responsible AI practices, which not only enhances an organisation’s brand reputation but can also differentiate it in the market.

 

These are just a few reasons why organisations ought to have an AI policy; others may be more industry-specific and will depend on the individual organisation. Nevertheless, there are numerous ethical, legal, and operational complexities associated with AI use. An AI policy is essential in establishing a coherent framework to navigate those issues whilst also upholding an organisation’s core principles, managing its risks, and maintaining the trust of its stakeholders.

 

Image credit: rawpixel.com (Freepik)