With the EU establishing an Artificial Intelligence Act, the pressure is now on other countries to develop frameworks for AI development in their own jurisdictions. However, the AI and business communities are likely to lobby hard against stringent rules, even though a broad range of issues must be considered to protect the future of the world and of mankind.
In June, the European Parliament adopted amendments to the AIA, which in some ways tightened the regulation. For example, the law:
On the other hand, while the EU was proposing and deliberating on those amendments, leaders of OpenAI, the creator of ChatGPT, DALL-E and Codex, were actively lobbying the EU to relax the law, and consequently, the regulatory burden on the organisation. OpenAI even published a White Paper in which it sought to make its case for a watering down of the legislation, highlighting its own efforts and the ways in which some of the provisions as drafted could increase the obligations and responsibilities of the company.
First, in reviewing OpenAI's White Paper, it is easy to think that generative AI, and the platforms the company has created, are all there is to AI. Though generative AI might be the most popular branch, and the one that has captured the public's imagination, there are other arms of AI that the law must also consider.
Thus, in fielding stakeholder input, individual contributions are likely to focus on each contributor's own interests and perspectives. But a more comprehensive framework is necessary, one that cannot be narrowly construed or constrained, and that would need to address a broad range of issues.
For those who typically follow the rules, it is easy to feel that proposed laws and regulations are overzealous or unduly restrictive when they seek to address issues or establish boundaries. However, laws do not exist solely for the people who will inherently follow them or stay within the boundaries. Often, in addition to providing a framework, they also highlight the areas where people are likely to go outside those boundaries, and the consequences thereof.
In its White Paper, OpenAI outlined some of the measures and safeguards it has in place, with a view to persuading the EU to reconsider some of its amendments and allow AI companies to do what is "reasonable".
However, "reasonable" and "reasonableness" are highly subjective concepts that have long been the source of wide debate in law. Moreover, what is considered "reasonable" differs from country to country, and is likely to be a source of considerable conflict, especially for multinational businesses or for digital platforms that are used globally. For example, in Europe, where there are strict data protection and privacy rules, what may be reasonable in that context would differ from the United States, where the rules are neither as stringent nor as cohesive across the entire country.
Currently, with the EU establishing rules for AI but other countries not yet having similar frameworks, there are likely to be vastly different interpretations of what is reasonable. Hence, to the extent possible, it may be prudent to limit the use of subjective terms and concepts, which could inherently undermine the objectives of the framework and fail to provide adequate clarity.
Finally, how stringent rules should be in the first instance is a point of continual debate. Are they too strict? Are they too relaxed? What is the optimal level at which they should be set? However, we all know that it is easier to relax rules that were initially restrictive than to start with a relaxed regime and some time later try to tighten the rules. In other words, it is difficult to put the genie back in the bottle once it has been released.
Without a doubt, AI is still in its nascent stages, and we do not yet know how it is going to develop, and more importantly, what its impact on the world and on mankind will be. As was discussed in Is pausing AI development the right thing to do?, scientists worldwide are concerned that we, humans, are already losing our grip on AI, and so have proposed a pause in AI development so that collectively, we can wrap our heads around the issue and introduce some guardrails.
Image credit: Pavel Danilyuk (Pexels)