Within organisations, the odds are that employees are using Artificial Intelligence (AI) to assist them in certain tasks and activities. However, the use of AI in organisations could have far-reaching consequences, and it may be prudent to have a policy in place that guides its use.
Depending on whom you ask, Generative Artificial Intelligence (AI) and Large Language Models (LLM) are the best things since sliced bread. They are being used everywhere and by everyone: from helping students with their homework to assisting physicians with life-threatening diagnoses for their patients.
However, while the world seems to be barrelling forward in its embrace of AI, scientists and other luminaries are expressing concern. They have been urging countries to slow the adoption and integration of AI into our lives, livelihoods and societies, and to introduce regulations to act as guardrails while we continue to understand the full impact and implications of AI. There are also others, such as the Chief Executive Officer of OpenAI, which created ChatGPT, who have reportedly been lobbying various governments to water down proposed legislation in order to reduce the regulatory burden on their companies (Source: TIME).
However, as AI continues to become more mainstream, business leaders and key decision-makers ought to be considering its impact and implications for their organisations, and the guardrails that may be needed. Below, we outline six questions that should be answered to determine whether your organisation needs an AI policy.
1. How is AI being used in our organisation?
First, it is important to have some idea of the ways in which AI is being used by team members and, consequently, how it is being used to advance the work of the organisation. Some uses might be seemingly benign, such as conducting research or creating inspirational images. In other organisations, AI may have a more substantial role, such as assisting in processing and analysing applications and forms, which could affect not only the organisation's clients but also the lives and livelihoods of its employees and customers.
It is thus crucial to understand how AI is being used in the organisation in order to assess, in the first instance, its importance and value, and thereafter the risks that an AI policy should seek to address.
2. To what degree is AI involved in important decisions of the organisation?
Following on from the previous question, it is crucial to understand not only the extent to which AI is integrated into the organisation, but also the role it plays in decision-making processes.
With regard to decisions, it is likely that they are being made in the different functional areas of the organisation, such as customer care, administration and HR, and not solely at the senior management or executive levels. Thus, the ‘importance’ of a decision ought to be construed based on the principles and values of the organisation. For example, for a customer-centric or sales-driven business, how customers are treated may be a critical consideration. So, the decisions that chatbots, for example, are allowed to make, such as which queries get escalated to humans, should not be overlooked when answering this question.
Also, it is important to recognise that decisions made at lower levels of an organisation can affect those made by senior management. Hence, even if AI is being used only by front-line staff, junior team members or non-core teams, it could still have an impact on important decisions of the organisation.
3. Could the use of AI make our processes less transparent?
One of the biggest concerns about AI generally is that the reasoning behind its outputs is not always clear. As a result, there have been reports of AI platforms, such as Google’s Bard and OpenAI’s ChatGPT, drawing on questionable sources and providing outputs not based on fact or sound reasoning.
Hence, matters such as the transparency and accountability of the organisation may be affected by how AI is being utilised. Once again, if AI is being used to support decision-making, it may be necessary to ask questions including the following:
- What data sets is the AI platform using?
- Is the AI platform drawing information from other sources?
- Do we clearly understand how the AI platform is generating the outputs?
- Can we justify the integration of AI into our processes?
- To what degree could it be argued that the use of AI makes our processes less transparent?
- How does AI affect accountability?
4. Could the use of AI introduce unwanted bias?
The challenges with bias in AI are well-documented. To a considerable degree, that bias has been attributed to how AI models have been trained, which may not accurately or fairly reflect the environments in which they are used. As a result, that bias could influence the outputs an AI platform generates, and upon which we in turn rely.
Once again, to answer this question, it is vital that the underlying programming of the AI platform is understood and, thereafter, carefully monitored.
5. Could our use of AI affect our legal and regulatory compliance?
Cognisant of the continuing shift toward data protection and privacy, the impact of the use of AI on compliance with those rules ought to be carefully considered. Further, depending on the industry, there could also be additional rules, guidelines and standards that ought to be followed.
Once again, understanding how AI uses data will be critical to ensuring that the organisation is able to remain compliant with existing rules, and to determine what protections can be established to mitigate risk.
6. How equipped are we to critically review the AI-generated outputs?
Finally, one of the perceived benefits of generative AI and LLMs has been that they produce (relatively) straightforward outputs that can be easily consumed by users. However, it has also meant that individuals may not be as inclined to interrogate the AI-generated outputs.
That said, verifying those outputs requires users to possess some knowledge of the topic in order to discern whether the outputs are accurate and logical, or at the very least, make sense.
Within organisations, there ought to be continual monitoring and evaluation of the AI platforms and their outputs, which should be reflected in the governing policy.
In summary, due to the pervasiveness of AI, most organisations should have an AI policy in place. At first, the policy may not be very detailed, but its purpose would be to establish a framework through which the use of AI can be rigorously considered.
Moreover, due to the speed at which AI technology and its uses are evolving, the policy will likely need to be reviewed and updated regularly. Though having a policy may be seen as a hassle, it demonstrates intention and proactiveness in managing an organisation’s operations and processes, rather than just getting caught up in the hype.
Image credit: ThisIsEngineering (Pexels)