With the EU establishing an Artificial Intelligence Act, the pressure is now on other countries to develop frameworks for AI development in their jurisdictions. However, the AI and business communities are likely to lobby hard against stringent rules, even though a broad range of issues must be considered to protect the future of the world and of mankind.

 

The Artificial Intelligence Act (AIA) of the European Union (EU), first proposed in 2021, is among the first laws of its kind worldwide to establish a legal framework for Artificial Intelligence (AI) development. Among other things, the law seeks to classify AI systems by risk, whilst also ensuring that matters related to human oversight, transparency, accountability and data quality are adequately addressed.

In June, the European Parliament adopted amendments to the AIA, which in several respects tightened the regulation. For example, the amended law:

  • Bans the use of AI technology in biometric surveillance;
  • Expands the classification of high-risk systems to include those that could harm people’s health, safety, fundamental rights or the environment;
  • Requires providers of foundation models to guarantee robust protection of fundamental rights, health and safety and the environment; and
  • Increases the transparency requirements on generative AI platforms.

On the other hand, while the EU was proposing and deliberating on these amendments, leaders of OpenAI, the creator of ChatGPT, DALL-E and Codex, were actively lobbying for the EU to relax the law and, consequently, the regulatory burden on the organisation. OpenAI even published a White Paper in which it sought to lay out its case for a watering down of the legislation, highlighting its own efforts and the ways in which some of the provisions as drafted could increase the obligations and responsibilities of the company.

Though legislative processes, such as that adopted by the EU, tend to encourage stakeholder participation and consultation, it ought to be appreciated that diverse views are likely to be received, all of which merit consideration. However, some of those views will inevitably be self-serving and will need to be counterbalanced by others, whilst also taking into consideration the voices and interests of those who are frequently underserved or under-represented.

With the trillions of dollars that can be realised as AI continues to develop, lobbying by AI companies to maintain as much freedom and flexibility as possible is likely to intensify. However, lawmakers and regulators have an obligation not to be unduly swayed by the interests of a very few.

 

1. AI is complex and comprises multiple areas

First, in reviewing OpenAI’s White Paper, it is easy to think that generative AI, and the platforms the company created, are all there is to AI. Though it might be the most popular branch, and the one that has captured the public’s imagination, there are other arms of AI that the law must also consider.

Thus, in fielding stakeholder input, regulators should expect individual contributors to focus on their own unique interests and perspectives. But a more comprehensive framework is necessary, one that cannot be narrowly construed or constrained, and that addresses a broad range of issues.

 

2. Laws and rules are not established only for those who follow them

For those who tend to follow the rules, it is easy to feel that proposed laws and regulations are overzealous or unduly restrictive when they seek to address issues or establish boundaries. However, laws do not exist solely for the people who will inherently follow them or stay within the boundaries. Often, in addition to providing a framework, they also highlight the areas where people are likely to step outside of those boundaries and the consequences thereof.

 

3. What is reasonable is highly subjective

In its White Paper, OpenAI outlined some of the measures and safeguards it had in place, with a view to persuading the EU to reconsider some of its amendments and allow AI companies to do what is “reasonable”.

However, “reasonable” and “reasonableness” are highly subjective concepts that have long been the source of wide debate in law. Moreover, what is considered “reasonable” differs from country to country, and is likely to be a source of considerable conflict, especially for multinational businesses or for digital platforms that are used globally. For example, in Europe, where there are strict data protection and privacy rules, what may be reasonable in that context would differ from the United States, where the rules are not as stringent or as cohesive across the entire country.

Currently, with the EU establishing rules for AI but other countries not yet having similar frameworks, there are likely to be vastly different interpretations of what is reasonable. Hence, to the extent possible, it may be prudent to limit the use of subjective terms and concepts, which could inherently undermine the objectives of the framework and fail to provide adequate clarity.

 

4. It’s easier to relax a restrictive measure than the other way around

Finally, there is continual debate about how stringent rules should be in the first instance. Are they too strict? Are they too relaxed? What is the optimal level at which they should be set? However, it is easier to relax rules that were initially restrictive than to start with a relaxed regime and later try to tighten the rules. In other words, it is difficult to put the genie back in the bottle once it has been released.

Without a doubt, AI is still in its nascent stages, and we do not yet know how it is going to develop and, more importantly, what its impact on the world and on mankind will be. As was discussed in Is pausing AI development the right thing to do?, scientists worldwide are concerned that we, humans, are already losing our grip on AI, and so have proposed a pause in AI development so that, collectively, we could wrap our heads around the issue and introduce some guardrails.

At this juncture, it does not appear that the moratorium will occur. AI companies will continue to develop the technology at exponential rates, which makes the rules and frameworks countries introduce even more critical. Accordingly, it may be prudent to adopt a conservative approach to AI, as we continue to learn and become more comfortable with how it is unfolding.

 

 

Image credit: Pavel Danilyuk (Pexels)