A group of current and former employees of artificial intelligence (AI) companies, including Microsoft-backed OpenAI and Alphabet's Google DeepMind, raised concerns on Tuesday about the risks posed by the emerging technology.
An open letter by a group of 11 current and former OpenAI employees and one current and one former Google DeepMind employee stated that the financial motives of artificial intelligence companies hamper effective oversight.
"We do not believe that specially designed corporate governance structures will be enough to change this," the group added in the letter.
It further warns of the dangers of unregulated AI, ranging from the spread of disinformation to the loss of control over autonomous AI systems and the deepening of existing inequalities, which could result in "human extinction."
Researchers found examples of image generators from companies such as OpenAI and Microsoft producing images containing voting misinformation, despite policies prohibiting such content.
The letter said artificial intelligence companies have only "weak obligations" to share information with governments about the capabilities and limitations of their systems, adding that the companies cannot be expected to share this information voluntarily.
The open letter is the latest to raise concerns about the safety of generative artificial intelligence technology, which can quickly and cheaply create human-like text, images and sound.
The group urged AI companies to make it easier for current and former employees to raise concerns about risks and not to enforce non-disclosure agreements that prohibit criticism.
Separately, the company led by Sam Altman said on Thursday that it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" on the internet.