European Union agrees to regulate potentially harmful effects of artificial intelligence

European Union lawmakers struck a deal Friday agreeing to one of the world’s first major comprehensive artificial intelligence laws.

The landmark legislation, called the AI Act, sets up a regulatory framework to promote the development of AI while addressing the risks associated with the rapidly evolving technology. The legislation bans harmful AI practices “considered to be a clear threat to people’s safety, livelihoods and rights.”

The law comes amid growing fears about the disruptive capabilities of artificial intelligence.

In a news conference, Roberta Metsola, the president of the European Parliament, called the law “a balanced and human-centered approach” that will “no doubt be setting the global standard for years to come.”

The regulatory framework, which classifies AI uses by risk and increases regulation on higher risk levels, was first proposed in 2021.

The riskiest uses of AI are banned outright. According to the law, those include systems that exploit specific vulnerable groups, real-time biometric identification systems used by law enforcement in publicly accessible spaces, and artificial intelligence that deploys manipulative “subliminal techniques.”

Limited-risk systems, such as chatbots like OpenAI’s ChatGPT and technology that generates image, audio or video content, are subject to new transparency obligations under the law.

“The #AIAct is much more than a rulebook – it’s a launchpad for EU startups and researchers to lead the global AI race,” Thierry Breton, the EU Commissioner for Internal Market, wrote on social media. “The best is yet to come.”

Artificial intelligence broke into the mainstream with the launch of OpenAI’s ChatGPT chatbot in November 2022. Seemingly overnight, generative AI technology exploded in popularity and spurred an AI arms race.

But AI’s disruption reaches far beyond the world of big tech: Educators have struggled with generative AI’s ability to complete schoolwork assignments; artists and musicians have grappled with the potential for AI-fueled imitation; and even the media industry has seen its controversies.

Some of the companies behind the technology have experienced growing pains, as well.

OpenAI’s CEO, Sam Altman, was briefly ousted and then reinstated over the course of a few drama-filled days in November, with the exact reasons for the leadership changes still unclear weeks later.
