On June 14, the European Parliament will finally vote on its "common position" on the artificial intelligence (AI) Act. The AI Act is the first general law on AI from a major regulator; it will assign AI applications to risk-based regulatory categories and may require future high-risk AI systems to be tested before their deployment.
The "Brussels effect" of the world's largest consumer market means its regulations will affect Australian deployment of AI, as global companies adjust to the new reality of the AI Act.
That does not mean the Australian legislature needs to model its AI laws on the European law, but it does mean that many companies operating in both Australia and Europe will adjust as they see fit to the new law.
The law has been debated and amended for more than two years, since the European Commission first proposed it in Brussels. The vote is controversial because the minority "left" parties in the Parliament will attempt to amend the law to better protect the human right to privacy against live facial-recognition surveillance, especially at external borders.
Both the right-wing parties in the Parliament and the governments in the Council of Ministers will oppose such amendments, which are very likely to fail, either in the Wednesday vote or in the three-month negotiation between Parliament, Council and Commission (the trilogue) that follows.
The facial-recognition controversy obscures the more important outcome of the AI Act: it will deregulate the AI industry.
Despite the loud lobbying, mainly from US AI companies, over the past month, the Act is no anti-innovation regulatory weapon that users can wield against abuse by giant corporations.
Here are the three main reasons why it is such a modest law, designed to protect AI companies from regulation of their potentially harmful products and services.
First, it was proposed as a modest product safety law, and appears ever more modest as AI develops rapidly, if not nearly as rapidly as the merchants of AI hype would have you believe.
The proposed AI Liability Act will address consumer rights more fully in 2024-25.
Second, it was designed to ensure a European level playing field for the "digital single market", stopping any pro-regulation government from implementing a stricter law on aspects of AI. The obvious candidates here may be Belgium, Germany and France, which often take violations of privacy and anti-discrimination law more seriously than does the economically focused European Commission.
Now those nations must follow the AI Act's more meagre regulation.
Finally, most of the measures in the AI Act will not be enforced for three years, so companies are unlikely to feel real effects until 2027. Its enforcement may not prove quite as ineffective and localised as that of the much misunderstood General Data Protection Regulation (GDPR). GDPR came into force in May 2018, yet only now, five years later, is the Irish regulator threatening large fines. The Irish data protection commissioner has been the regulator of choice for the giant US companies that trade trillions of bits of personal data globally for advertisers.
That choice was made because Ireland is so "business-friendly", not because it threatened to disrupt any innovative or harmful data-led business models.
The AI Act does promise to create a European Artificial Intelligence Office to advise national governments on enforcement, a similar scheme to that under GDPR.
Companies should be celebrating such a major victory: a veto on national laws, a thin product safety regulation, and enforcement from Ireland. Why are they not? First, because they will always press for the weakest possible European regulation, as they have throughout the history of Internet regulation before this AI Act.
Second, the mechanisms for US tech astroturfing in Brussels are so well established that companies have whipped up a Euro-phobic, Chicken Little sky-is-falling panic among the less technically literate parts of the media, despite the paucity of this regulation. Finally, many executives in Silicon Valley either archly or genuinely misunderstand the European legislative process and believe the European right-wing hype that this is a real regulatory mechanism with teeth.
That hype has been created to confuse human rights advocates and the Parliamentary left into agreeing to such a modest measure. It may even be working, with less well-informed Parliamentarians believing there is a serious attempt to regulate.
Is there any real prospect of this AI Act successfully regulating the worst excesses of AI? Very little.
But there is one prospect that could mean more centralised Brussels regulation, if the Irish continue to fail to enforce meaningfully. The AI Office could become an agency: "In case the establishment of the AI Office proves not to be sufficient to ensure a fully consistent application of this regulation at union level as well as efficient cross-border enforcement measures, the creation of an AI agency should be considered". That prospect is a decade away.
Australia can observe this "Brussels effect" regulation calmly, noting that it was never intended to have teeth. Industry Minister Ed Husic stated in October his ambition: "I want Australia to become the world leader in responsible AI. This includes setting reasonable regulations and standards." This European AI Act will set a very low bar for the Australian government to match or exceed.
- Chris Marsden is Professor of AI, Technology and the Law at Monash University's Faculty of Law.