
AI BS stories

Started by rcjordan, November 21, 2025, 01:47:43 AM


rcjordan

DAVE:
I was troubleshooting the cheap Chinese diesel heater and asked if it mattered that I'd replaced the rubber hose with stiffer tubing with a slightly bigger I.D. It went into this long explanation about the calibration of the pump to the stretch of the OEM tubing. It was all bullshit. It was just a bad pump; the new pump doesn't care what kind of hose.

RC:
I had a hilarious confirmation of the BS factor just this week.  I've been working on the photo-tagging project and found that a minimalist family tree would be helpful with dates. Using the massive Mormon genealogy online database, I fleshed it out (8 generations) in 4 days. The resulting tree is good, solid.

Day before yesterday, my SIL contacted LPJ about a not-too-distant ancestor that the 'article' said was a super-wealthy landowner, hero Civil War officer, and --among his many other accomplishments-- the founder of Elm City NC. When LPJ read it to me, I just couldn't remember anybody like this in the tree, and the description of the landholdings & wealth just didn't jibe with what I knew about his son (a modestly well-off mule trader in Elm City). I asked if the SIL had used AI --yep, Deepseek. I went to my family tree. Nope, no such person. I went to the Mormons. Nope, my tree matched their database.

Deepseek had made it all up. 100% fairy tale.  I'd read that LLMs will sometimes make up stuff to please the user, but -damn- this one was wild.

ergophobe

I thought they had gotten better about wholesale hallucination like that, but my understanding is that the hallucination problem is super hard to solve because LLMs are supposed to appear creative. That means not simply choosing the word that is statistically most likely to follow the previous words, but adding some "salt" in there so that the model doesn't just keep generating the same stuff over and over.

In other words, there is a bit of a coin-flipping thing going on. Generally, if you flip a coin 100 times, you will get 5x10^1 heads to one significant digit, but sometimes you'll get 8x10^1.

It looks like that's what you got. The difference between a coin and an LLM though is that an LLM is affected by the previous "tosses" so once it's decided that it's a "heads day" it might just keep going.
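The "salt" ergophobe describes is usually implemented as temperature sampling: instead of always taking the highest-scoring next token, the model samples from a probability distribution, with a temperature knob controlling how much randomness gets mixed in. Here's a minimal, self-contained sketch of the idea (toy scores, not a real model):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an index from raw model scores (logits).

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it (more "salt", more surprising picks).
    """
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy "next token" scores: option 0 is most likely but not guaranteed.
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
picks = [sample_with_temperature(logits, temperature=1.0, rng=rng)
         for _ in range(1000)]
# Option 0 dominates, but options 1 and 2 still show up regularly --
# that occasional "tails" pick is the coin-flip element in the analogy.
```

And since sampling is autoregressive (each pick becomes context for the next one), an early unlucky pick can steer everything downstream, which is the "heads day" effect above.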

ergophobe

And now catching up on the feeds after a week

https://www.futurity.org/artificial-intelligence-misinformation-3305302/

Quote: The new method, called Calibrating LLM Confidence by Probing Perturbed Representation Stability, or CCPS, applies tiny nudges to an LLM's internal state while it's forming an answer. These nudges "poke" at the foundation of the answer to see if the answer is strong and stable or weak and unreliable.
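The intuition behind that "poking" can be shown with a toy analogue (this is NOT the actual CCPS implementation, which perturbs internal hidden representations; this just illustrates the stability-under-perturbation idea on a list of answer scores):

```python
import random

def perturbed_stability(logits, noise_scale=0.1, trials=200, seed=0):
    """Toy analogue of perturbation-based confidence probing:
    nudge the answer scores with small Gaussian noise and measure
    how often the top answer survives.  A strongly supported answer
    keeps its argmax under perturbation; a shaky one flips easily."""
    rng = random.Random(seed)
    base = max(range(len(logits)), key=lambda i: logits[i])
    survived = 0
    for _ in range(trials):
        noisy = [l + rng.gauss(0, noise_scale) for l in logits]
        if max(range(len(noisy)), key=lambda i: noisy[i]) == base:
            survived += 1
    return survived / trials

confident = perturbed_stability([5.0, 1.0, 0.5])   # big margin: stable
shaky = perturbed_stability([1.01, 1.0, 0.99])     # near-tie: flips often
```

A high survival rate suggests the answer is well-founded; a low one suggests the model is essentially coin-flipping between alternatives, which is exactly the failure mode in the Deepseek story above.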

Rupert

10 heads in a row... It can be done...

Derren Brown - 10 Heads in a Row (YouTube, ThinkSceptically)



Spoiler...













He spends hours with a camera until it happens for real.
... Make sure you live before you die.

rcjordan