- The EU says the law will protect citizens from AI’s dangers while harnessing the technology’s potential in Europe
RIYADH: EU states on Tuesday gave their final backing to landmark rules on artificial intelligence that will govern powerful systems like OpenAI’s ChatGPT.
The European Parliament had already approved the law in March and it will now enter into force after being published in the official EU journal in the coming days.
First proposed in 2021, the rules took on greater urgency after ChatGPT arrived in 2022, showing generative AI’s human-like ability to produce eloquent text within seconds.
Other examples of generative AI include Dall-E and Midjourney, which can produce images in nearly any style from a simple prompt in everyday language.
The law, known as the “AI Act,” takes a risk-based approach: the higher the risk a system poses, the tougher the obligations a company must fulfill to protect citizens’ rights.
There are strict bans on using AI for predictive policing and on systems that use biometric information to infer an individual’s race, religion or sexual orientation. Companies will have to comply by 2026, but rules covering AI models like ChatGPT will apply 12 months after the law enters into force.
Pledge
The world’s leading companies pledged at the start of a mini summit on AI to develop the technology safely, including pulling the plug if they can’t rein in the most extreme risks.
World leaders are expected to hammer out further agreements on artificial intelligence as they gather virtually to discuss AI’s potential risks, as well as ways to promote its benefits and innovation.
The AI Seoul Summit is a low-key follow-up to November’s high-profile AI Safety Summit at Bletchley Park in the UK, where participating countries agreed to work together to contain the potentially “catastrophic” risks posed by breakneck advances in AI.
The two-day meeting — co-hosted by South Korea and the UK — also comes as major tech companies like Meta, OpenAI and Google roll out the latest versions of their AI models.
They’re among 16 AI companies that made voluntary commitments to AI safety as the talks got underway, according to a British government announcement.
The companies, which also include Amazon, Microsoft, France’s Mistral AI, China’s Zhipu.ai, and G42 of the UAE, vowed to ensure the safety of their most cutting-edge AI models with promises of accountable governance and public transparency.
The pledge includes publishing safety frameworks setting out how they will measure risks of these models.
Source: Arab News