{"id":170573,"date":"2024-05-10T05:30:00","date_gmt":"2024-05-10T10:30:00","guid":{"rendered":"https:\/\/ict-pulse.com\/?p=170573"},"modified":"2024-05-10T05:30:29","modified_gmt":"2024-05-10T10:30:29","slug":"still-on-the-fence-about-having-an-ai-policy-for-your-organisation-here-are-4-key-reasons-why-you-should","status":"publish","type":"post","link":"https:\/\/ict-pulse.com\/2024\/05\/still-on-the-fence-about-having-an-ai-policy-for-your-organisation-here-are-4-key-reasons-why-you-should\/","title":{"rendered":"Still on the fence about having an AI policy for your organisation? Here are 4 key reasons why you should"},"content":{"rendered":"\n
Artificial intelligence (AI) is no longer a futuristic concept but a practical tool transforming organisations across all industries and sectors. As businesses increasingly integrate AI into their operations, developing an AI policy is strongly recommended. We outline four reasons why organisations need an AI policy.

Although generative artificial intelligence (AI) platforms became publicly accessible only about 18 months ago, their use is fast becoming the norm. Individuals and employees may not think twice about using Dall-E, Midjourney, Gemini or ChatGPT to assist them with certain tasks. However, the AI space is still developing, and numerous issues are emerging that have yet to be resolved, especially regarding the implications of AI use.

To that end, organisations have been advised to establish an AI policy to guide their use of the technology. This means that time and resources must be allocated to consider the subject, draft a policy, shepherd it through the established review process, and facilitate its official adoption. Though it could be argued that this is a lot of work, here are four reasons why it is crucial for organisations to have an AI policy.

1. Establishes clear guidelines

An AI policy sets clear expectations for how employees should interact with and use AI tools. These expectations should be aligned with the organisation's core principles and values, thus reinforcing the culture the organisation is trying to foster.

Further, having an AI policy offers an opportunity to provide some transparency to customers and other stakeholders. In client-facing organisations, especially those that collect and use client data, there is a growing expectation that organisations will be more vigilant about how they handle individuals' personal data. It would therefore be beneficial to establish an AI policy that provides firm guidance on this important issue.

2. Mitigates risks

AI can introduce new risks, such as bias in decision-making, data security breaches or unintended consequences. A well-defined AI policy can help identify and address these risks, protecting the organisation from potential harm and liability.

An instructive example of the consequences of using AI is the case of Air Canada, whose chatbot fabricated a policy and gave a traveller inaccurate information. In the dispute that followed, Air Canada argued that its chatbot was "responsible for its own actions"; however, the tribunal directed Air Canada to honour the policy. The airline was also found liable for negligent misrepresentation and ordered to pay damages to the affected traveller (Source: Canadian Underwriter).

3. Fosters compliance

AI development and use are becoming increasingly regulated. As discussed in previous ICT Pulse articles, numerous concerns have been raised about AI. For example, there have been calls for a pause in AI development, and more countries are establishing AI laws to provide some guardrails and oversight.

Further, as noted earlier, the growing emphasis on data protection and safeguarding personal data means that organisations need to be more aware of, and proactive about, how the systems and technologies they use process data, to ensure that they comply with existing laws and regulations.
Moreover, in regulated industries such as healthcare, banking and financial services, the use of AI may increasingly come under scrutiny, necessitating the creation of appropriate policies.

4. Supports reputation management

Ethical lapses or controversies related to AI can damage an organisation's reputation and erode customer trust. The Air Canada case highlighted earlier could be seen as fuelling consumer distrust in airlines, which often are not viewed in a favourable light. Air travel has become increasingly expensive, and airlines have been cutting back on the number of flights, the routes served, and even inflight facilities and services. Further, the use of AI chatbots is often a cost-cutting measure; but when they are not managed properly, as occurred with Air Canada, an organisation can become the subject of ridicule and a cautionary tale of what happens when AI goes rogue.

A clear and transparent AI policy can demonstrate a commitment to responsible AI practices, which not only enhances an organisation's brand reputation but can also differentiate it in the market.

These are just a few reasons why organisations ought to have an AI policy. Others may be more industry-specific and will depend on the individual organisation. Nevertheless, there are numerous ethical, legal and operational complexities associated with AI use. An AI policy is essential in establishing a coherent framework to navigate those issues whilst also upholding the organisation's core principles, managing risks, and maintaining the trust of its stakeholders.