The solution? Apparently to get lots of other AI bots involved to correct each other relentlessly until the correct answer is reached. Sounds like a futuristic version of rof.
Yeah but you might get a Chillbot that sticks to its answers whatever the others say…
Maybe we can introduce religions among them? Would be interesting to see how it plays out if we introduce different groups to different religions and get them to convert each other ...
I don't understand why they can't programme it to just say it doesn't know
It's trying to replace lawyers. We don't say that.
It doesn’t know what it doesn’t know; it just generates text that’s like what it’s read before. Sometimes that’s right and sometimes not.
tbf it was pretty spot on about Risky
I liked this bit:
The drive to improve the hallucinations problem is urgent for a reason.
Already, when Microsoft launched its Bing chatbot, it quickly started making false accusations against some of its users, like telling a German college student that he was a threat to its safety. The bot adopted an alter-ego and started calling itself “Sydney.” It was essentially riffing off the student’s questions, drawing on all the science fiction it had digested from the internet about out-of-control robots.
I was using Bing AI (which uses the same underlying tech as ChatGPT) for some legal research. The advantage of Bing is that it cites its sources. It made a legal statement which seemed a bit off to me, so I checked the source and couldn't find it anywhere. I asked the AI to explain where it got the statement from, and it made up a completely fake quote which it said was at a specific paragraph of the source. When questioned about the fake quote, it simply said it "would prefer not to continue this conversation" and ended the chat.
And this is on a tool which is supposed to be citing its sources.
Yeah, I asked Bard to do some research and it fabricated sources. When I said they didn't exist, it claimed they were in a different volume. When I said they weren't there either, it fabricated other volume numbers.
To me, this is good news. If they can’t fix the “making sh1t up” problem, our jobs are saved.
GPT3 is worryingly good at legal research. Haven’t used GPT4, but I expect you can upgrade “worryingly” to “terrifyingly”.
GPT4 is orders of magnitude better than GPT3, but it still makes things up sometimes.
An AI version of rof would have AIs agreeing with other AIs purely because they have met IRL no matter how stupid the hallucination is.
The Risky AI will eventually create so many alternate AIs that the world's resources will be consumed keeping Riskybot's sockAIs going.
Exactly. We should all hope that there are persistent niggles like this that keep humans in the loop.
“When questioned about the fake quote, it simply said it ‘would prefer not to continue this conversation’ and ended the chat.”
Oh it’s rof