Rapid technological developments such as ChatGPT’s brand of generative artificial intelligence (AI) are complicating efforts by EU lawmakers to agree on landmark AI laws, sources with direct knowledge of the matter told Reuters.
The European Commission proposed the draft rules nearly two years ago to protect citizens from the risks of an emerging technology that has seen a boom in investment and consumer popularity in recent months.
The draft must be thrashed out between EU countries and EU lawmakers, in a process known as the trilogue, before the rules can become law.
Several lawmakers had expected to reach a consensus on the 108-page bill last month at a meeting in Strasbourg, France, and to move to trilogue in the coming months.
But a five-hour meeting on February 13 ended without a resolution, with lawmakers at odds over various aspects of the act, according to three sources familiar with the discussions.
While the industry expects an agreement by the end of the year, there are concerns that the complexity and lack of progress could delay the legislation to next year, when European elections could usher in MEPs with an entirely different set of priorities.
“The pace at which new systems are being released makes regulation a real challenge,” said Daniel Leufer, a senior policy analyst at rights group Access Now. “It’s a fast-moving target, but there are measures that remain relevant despite the speed of development: transparency, quality control and measures to assert their fundamental rights.”
Rapid developments
Lawmakers are working through more than 3,000 tabled amendments, covering everything from the creation of a new AI office to the scope of the act’s rules.
“The debates can be very long. You have to talk to about 20 MEPs at a time,” said Brando Benifei, an Italian MEP and one of the lawmakers leading negotiations on the bloc’s long-awaited artificial intelligence act.
Lawmakers have sought to strike a balance between encouraging innovation and protecting the basic rights of citizens.
This has led to different AI tools being classified according to their perceived risk level: from minimal to limited, high and unacceptable. High-risk tools will not be banned, but their use will require companies to be highly transparent in their operations.
But these debates have left little room for addressing the explosive expansion of generative AI technologies like ChatGPT and Stable Diffusion, which have taken the world by storm, sparking both user admiration and controversy.
By February, ChatGPT, made by Microsoft-backed OpenAI, had set a record for the fastest-growing user base of any consumer application in history.
Most of the big tech players have stakes in the sector, including Microsoft, Alphabet and Meta.
Big tech, big problems
The EU discussions have raised concerns among companies, from small startups to Big Tech, about how the regulations might affect their business and whether they would be at a competitive disadvantage against rivals from other continents.
Behind the scenes, big tech companies, which have invested billions of dollars in the new technology, have lobbied hard to keep their innovations out of the scope of the high-risk classification, which would mean more compliance, higher costs and more accountability around their products, the sources said.
A recent survey by industry body appliedAI showed that 51% of respondents expect a slowdown in their AI development activities as a result of the AI Act.
To address tools like ChatGPT, which seem to have an endless range of applications, lawmakers introduced another category, “General Purpose AI Systems” (GPAIS), to describe tools that can be adapted to perform a number of tasks. It remains unclear whether all GPAIS will be deemed high-risk.
Representatives from tech companies have pushed back against such moves, insisting that their own in-house guidelines are robust enough to ensure the technology is deployed safely, and even suggesting the law should include an opt-in clause under which companies could decide for themselves whether the regulations apply.
Double-edged sword?
DeepMind, the Google-owned AI company currently testing its Sparrow chatbot, told Reuters that the regulation of multipurpose systems is complex.
“We believe the creation of a governance framework around GPAIS needs to be an inclusive process, which means all affected communities and civil society should be involved,” said Alexandra Belias, the company’s head of international public policy.
She added: “The question here is: how do we make sure the risk-management framework we create today will still be relevant tomorrow?”
Daniel Ek, CEO of audio-streaming platform Spotify, which recently launched its own “AI DJ” capable of curating personalised playlists, told Reuters the technology was “a double-edged sword”.
“There are lots of things that we have to take into consideration,” he said. “Our team is working very actively with regulators, trying to make sure that this technology benefits as many as possible and is as safe as possible.”
MEPs say the act will be subject to regular reviews, allowing for updates as and when new problems with AI emerge.
But, with European elections on the horizon in 2024, they are under pressure to deliver something substantial the first time around.
“Discussions must not be rushed, and compromises must not be made just so the file can be closed before the end of the year,” Leufer said. “People’s rights are at stake.”