Elon Musk and a group of artificial intelligence (AI) experts and industry executives are calling for a six-month pause in the development of systems more powerful than OpenAI’s newly released GPT-4, warning in an open letter of potential risks to society and humanity.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The letter detailed potential risks to society and civilization from human-competitive AI systems in the form of economic and political disruption, and called on developers to work with policymakers on governance and regulatory authorities.
Co-signatories included Emad Mostaque, CEO of Stability AI, researchers at Alphabet-owned DeepMind, as well as Yoshua Bengio and Stuart Russell.
According to the European Union’s Transparency Register, the Future of Life Institute is funded primarily by the Musk Foundation, as well as the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.
The concerns come as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Meanwhile, the UK government unveiled proposals for an “adaptable” regulatory framework around AI.
The government’s approach, outlined in a policy paper published on Wednesday, would split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
Musk, whose carmaker Tesla uses AI for its Autopilot system, has been vocal about his concerns about artificial intelligence.
Since its release last year, OpenAI’s Microsoft-backed ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products.
Sam Altman, chief executive of OpenAI, has not signed the letter, a spokesperson for Future of Life told Reuters. OpenAI did not immediately respond to requests for comment.
“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” said Gary Marcus, a professor at New York University who signed the letter. “They can cause serious harm … the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”