New EU Act aims to keep AI safe
21st of November 2024

The world’s first major Act to regulate artificial intelligence passed into law across the European Union in August. It aims to ensure the technology is safe and respects the bloc’s “fundamental rights and values”. Hartley Milner explores how the legislation will impact businesses.
“AS THE MEMORY of past misfortunes pressed upon me, I began to reflect upon their cause … the monster whom I had created, the miserable daemon whom I had sent abroad into the world.”
Could Victor Frankenstein’s remorseful words turn out to be frighteningly prescient as we lurch towards an uncertain future under artificial intelligence? Will science fiction morph inexorably into science fact? Has humanity unleashed a monster that will eventually turn on its creator?
The meteoric rise of AI has sparked countless dystopian scenarios amid deep concerns that the race to develop ever more advanced systems is out of control. Even many of the biggest names in tech remain fearful. In an open letter, the likes of billionaire mogul Elon Musk and Apple co-founder Steve Wozniak called for a pause in the development of the most advanced AI models to allow time to make sure they are safe.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the 1,000-plus signatories warned. “Powerful AI systems should be developed only once we are confident their effects will be positive and their risks will be manageable.” The letter was prompted by the release of GPT-4 from Microsoft-backed OpenAI, whose co-founder, Sam Altman, suggested himself that “at some point it may be important to get an independent review before starting to train future systems”.
Business unease
The potential dangers of AI are causing unease more widely across the business community as well, and are inhibiting its uptake. A poll of Irish business leaders this summer found that 91 per cent believed GenAI would increase security risks in the year ahead, with 53 per cent fearing catastrophic cyber attacks. Almost three quarters thought the generative technology would “fail to enhance” their company’s ability to build trust with shareholders over the same period.
Other concerns included legal liability and reputational risks, the spread of misinformation, bias towards specific groups of customers or employees and challenges assessing AI investment returns.
Most critically, fewer than three in 10 respondents were confident their organisation’s current measures to control GenAI lent themselves to “safe and secure outcomes”.
However, three quarters reported having plans to put GenAI governance structures in place, up from 56 per cent last year. And 84 per cent welcomed the EU Act and other regulations, acknowledging legislation was necessary to counter the negative impacts of AI.
“Good governance grounded in an organisation’s risk appetite provides clarity and a safe environment for a business to innovate and explore AI uses,” said Martin Duffy, head of GenAI at accounting firm PwC Ireland, which commissioned the survey. “The business can then focus on faster adoption of AI without exposing itself to unnecessary or unforeseen risks.”
The European Union’s trailblazing Act carries sweeping implications for companies operating both inside and outside the EU that design, develop, deploy or use intelligent technologies, or plan to do so in the future. The legislation takes a risk-based approach, targeting AI systems according to their potential to cause harm to society: the higher the risk, the stricter the regulation. The risk categories with implications for businesses are:
Unacceptable risk – AI systems considered a threat to the fundamental rights of people. Banned outright are models used to manipulate human behaviour to circumvent free will or exploit vulnerabilities. These include ‘social scoring’ – classifying individuals based on their social behaviours or personal characteristics – by governments, police agencies, the judiciary and businesses, as well as some uses of biometrics, such as emotion recognition models deployed in workplaces to pigeonhole people and identification applications like facial recognition.
High risk – systems used in critical sectors such as healthcare, education, law enforcement, the courts or public administration. Other areas include recruitment, employee rating and credit scoring, automated insurance claim processing and the setting of risk premiums for customers. These models will have to comply with strict standards around data quality, transparency, human oversight, accuracy, robustness and security. So-called regulatory ‘sandboxes’ – safe spaces for businesses to test new innovations – will promote the development of fully compliant systems, including those used for hiring, for assessing whether somebody is entitled to a loan, or for operating autonomous robots.
Innovation boost
In addition to sandboxes, startups and small to medium-sized tech firms will be able to draw on €4bn in EU funding set aside to boost innovation and the development and training of compliant AI models. Businesses of all sizes making the transition to technologies such as AI can get expert support via a network of digital innovation hubs being rolled out across member states, as well as Schengen area countries Iceland, Liechtenstein and Norway. These ‘one-stop shops’ offer personalised training programmes, workshops and mentorship, as well as access to technical expertise and cutting-edge digital tools that can be trialled before being fully implemented.
Limited risk – those arising from a lack of transparency in AI usage. The EU Act contains targeted measures to reassure users and promote trust. For instance, people must be made aware they are interacting with systems like chatbots so they can make an informed decision whether to continue. Tech providers also have to ensure AI-generated content is identifiable and legal, and that text published to inform on matters of public interest is clearly flagged as artificially created. This also applies to images, audio and video content that can be created or manipulated to cause harm (‘deepfakes’).
Minimal risk – AI applications posing little or no threat to citizens’ rights or safety. Applications such as spam filters and video games can be deployed without strict regulatory requirements, easing their access to markets. The vast majority of AI systems currently used in the European Union fall into this category.
The bulk of the AI legislation will be enforceable from 2 August 2026. However, the ban on systems that present an unacceptable risk will apply six months from when the Act entered into force (1 August 2024), while the rules for so-called general-purpose AI models will apply after 12 months. To bridge the period before full implementation, the European Commission has launched the AI Pact, an initiative inviting AI developers to adopt key obligations ahead of the legally binding deadlines. The Commission is now working on guidelines for how the Act should be implemented, along with a set of standards and codes of practice.
The onus will be on AI system providers and deployers to have ongoing monitoring regimes in place, and to report serious incidents or malfunctions with the technology. EU member states have until 2 August next year to appoint watchdogs to enforce compliance with the legislation and carry out market surveillance activities. Breaches of the rules can result in hefty fines of up to seven per cent of a company’s global annual turnover.
“AI has the potential to change the way we work and live and promises enormous benefits for citizens, our society and the European economy,” said Margrethe Vestager, the European Commission vice-president responsible for adapting Europe to the digital age.
“The European approach to technology puts people first and ensures everyone’s rights are preserved. With the AI Act, the EU has taken an important step to ensuring AI technology uptake respects EU rules in Europe.”
Dragos Tudorache, a Romanian lawmaker who oversaw EU negotiations to agree a legal framework for the Act, welcomed the deal, but told ECJ that the biggest hurdle remains its implementation.
Humans in control
“The AI Act has pushed the development of AI in a direction where humans are in control of the technology and where the technology will help us leverage new discoveries for economic growth, societal progress and unlock human potential,” Tudorache said. “But the AI Act is not the end of the journey, rather the starting point for a new model of governance built around technology. We must now focus our political energy on turning it from the law in the books to reality on the ground.”
Others feel more conflicted about the legislation. “This decision has a bitter-sweet taste,” said Marianne Tordeux Bitker, director of public affairs at startup and investor association France Digitale.
“While the AI Act responds to a major challenge in terms of transparency and ethics, it nonetheless creates significant obligations for all companies that use or develop artificial intelligence, despite a few adjustments planned for start-ups and SMEs, notably through regulatory sandboxes.
“We fear the text will simply create additional regulatory barriers that will benefit American and Chinese competition and reduce our opportunities to develop European champions in AI.”
While giving the EU credit for its global leadership on mitigating AI risks, Max von Thun, Europe director of the non-profit Open Markets Institute, said: “The AI Act is incapable of addressing the number one threat AI currently poses: its role in increasing and entrenching the extreme power a few dominant tech firms already have in our personal lives, our economies and democracies.”