A controversial letter regarding the development of artificial intelligence was made public in March. The open letter sparked a heated debate over the ethics behind A.I. programs.
The letter was organized by the Future of Life Institute, a global think tank that studies the risks technology poses to humanity. It was released online on March 28 after receiving signatures from over 1,100 people. The signatories included SpaceX and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.
The open letter called for a six-month pause on all large-scale projects to develop sophisticated A.I. programs. Specifically, it asked companies to put the brakes on developing anything more powerful than GPT-4.
GPT-4 is an A.I. model that can generate humanlike text. An earlier version, called GPT-3.5, drives the wildly popular ChatGPT service and allows it to communicate with millions of users.
The signatories of the open letter said that companies were now “locked in an out-of-control race,” competing to build even more powerful A.I. systems. They argued that this rush to develop complex A.I. technology could present unpredictable dangers to society if left unregulated.
Already, programs like ChatGPT have been accused of showing racist and sexist biases in their answers. In addition, some cybercriminals have attempted to use such programs to spread disinformation online.
“This pause should be used to jointly develop safety protocols for advanced A.I. design and development,” the letter said.
But while some hailed the open letter and its signatories, others were skeptical of its content. In particular, they called a worldwide moratorium on A.I. development unrealistic, since many companies would refuse to abide by it.
Some A.I. ethicists also disagreed with the open letter, though for different reasons. They said it focused only on hypothetical long-term dangers while failing to address more immediate, real-world concerns about how A.I. is already being used.