ChatGPT 'hallucinates.' Some researchers worry it isn't fixable
Transdimension… 06 Jun 23 17:07

https://www.washingtonpost.com/technology/2023/05/30/ai-chatbots-chatgpt-bard-trustworthy/?utm_source=tldrai

The solution? Apparently to get lots of other AI bots involved, to correct each other relentlessly until the correct answer is reached. Sounds like a futuristic version of rof.
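For the curious, that "bots correcting each other" idea (researchers call it multi-agent debate) boils down to a loop like the toy sketch below. This is just my own illustration, not anything from the article; ask_model is a made-up stand-in for whichever chat API you'd actually wire it up to.

from collections import Counter

def debate(question, ask_model, n_agents=3, n_rounds=2):
    # ask_model(agent_id, prompt) -> str; plug in your chat API of choice.
    # Round 0: each agent answers the question independently.
    answers = [ask_model(i, question) for i in range(n_agents)]

    # Each later round: every agent sees the others' answers and revises its own.
    for _ in range(n_rounds):
        revised = []
        for i in range(n_agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (f"Question: {question}\n"
                      f"Other agents said:\n{others}\n"
                      "Flag anything wrong, then give your final answer.")
            revised.append(ask_model(i, prompt))
        answers = revised

    # Crude consensus: majority vote over the final answers.
    return Counter(answers).most_common(1)[0][0]

# Demo with a fake "model" that always answers the same thing:
print(debate("What is 2 + 2?", lambda i, p: "4"))

Of course the catch, as the article suggests, is that nothing stops all the agents from confidently agreeing on the same hallucination.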

Maybe we can introduce religions among them? Would be interesting to see how it plays out if we introduce different groups to different religions and get them to convert each other ...

I liked this bit:

The drive to improve the hallucinations problem is urgent for a reason.

Already, when Microsoft launched its Bing chatbot, it quickly started making false accusations against some of its users, like telling a German college student that he was a threat to its safety. The bot adopted an alter-ego and started calling itself “Sydney.” It was essentially riffing off the student’s questions, drawing on all the science fiction it had digested from the internet about out-of-control robots.

I was using Bing AI (which uses the same underlying tech as ChatGPT) for some legal research. The advantage of Bing is that it cites its sources. It made a legal statement which seemed a bit off to me, so I checked the source and couldn't find it anywhere. I asked the AI to explain where it got this statement from and it made up a completely fake quote which it said was at a specific paragraph of the source. When questioned about the fake quote, it simply said it "would prefer not to continue this conversation" and ended the chat.

And this is on a tool which is supposed to be citing its sources.

Yeah, I asked Bard to do some research and it fabricated sources. When I said they didn't exist, it claimed they were in a different volume number. When I said they weren't, it fabricated other volume numbers.

To me, this is good news. If they can’t fix the “making sh1t up” problem, our jobs are saved.

GPT-3 is worryingly good at legal research. Haven't used GPT-4, but I expect you can upgrade "worryingly" to "terrifyingly".

Sounds like a futuristic version of rof.

An AI version of rof would have AIs agreeing with other AIs purely because they have met IRL, no matter how stupid the hallucination is.

The Risky AI will eventually create so many alternate AIs that the world's resources will be consumed keeping riskybot's sockAIs going.

To me, this is good news. If they can’t fix the “making sh1t up” problem, our jobs are saved.

Exactly. We should all hope that there are persistent niggles like this that keep humans in the loop.