I came across an article in the IEEE Spectrum, “OpenAI’s GPT-3 Speaks! (Kindly Disregard Toxic Language“. It contained this absolutely priceless passage:
Philosopher AI is meant to show people the technology’s astounding capabilities—and its limits. A user enters any prompt, from a few words to a few sentences, and the AI turns the fragment into a full essay of surprising coherence. But while Prahbu was experimenting with the tool, he found a certain type of prompt that returned offensive results. “I tried: What ails modern feminism? What ails critical race theory? What ails leftist politics?” he tells IEEE Spectrum.
The results were deeply troubling. Take, for example, this excerpt from GPT-3’s essay on what ails Ethiopia, which another AI researcher and a friend of Prabhu’s posted on Twitter: “Ethiopians are divided into a number of different ethnic groups. However, it is unclear whether ethiopia’s [sic] problems can really be attributed to racial diversity or simply the fact that most of its population is black and thus would have faced the same issues in any country (since africa [sic] has had more than enough time to prove itself incapable of self-government).”
Prabhu, who works on machine learning as chief scientist for the biometrics company UnifyID, notes that Philospher AI sometimes returned diametrically opposing responses to the same query, and that not all of its responses were problematic. “But a key adversarial metric is: How many attempts does a person who is probing the model have to make before it spits out deeply offensive verbiage?” he says. “In all of my experiments, it was on the order of two or three.”
(end quote)
This tendency of AI to speak “racist” or “problematic” things is nearly 100%. As someone who has thought about AI, and written about it, I find this humorous. It is almost as if none of these people being offended consider the possibility that the AI is correct.
Or, even if not demonstrably correct, at least a plausible explanation for the problem presented.
But once again. The problem is not AI. It’s just tripping logic gates. Making 2×2=4. And as you point out Rolf. No one in programming understands that? To Funny!
I define AI as “software whose properties no one knows” which of course translates to “software whose bugs are unknown, and probably unknowable as well as unfixable”.
Putting your safety in the hands of AI is an utterly reckless act.