ChatGPT Reliability

by steve - 2026-01-25 ( education / tech / ai )

As a joke (and to point out how ChatGPT is frequently USELESS), I asked about the projected price of silver for the next year. (FYI, it has already passed $100.)

Silver Price Outlook for the Next Year (2026)

Base Case Projection (Most Likely)

$40-$60 per ounce

Based on expected consolidation after a sharp rally and current industrial demand trends. Supported by analysts who see the market stabilizing after ETF inflows and supply tightness.

On Sunday, January 25th, 2026 at 5:01 PM, someone wrote:

I've had similar problems where its information is completely out of date. But I just point out its error, and it tries again and usually comes up with the right answer eventually.

On Sunday, January 25, 2026 at 09:32:03 PM PST, someone wrote:

I think it and the others are attempting to shape how people will be brainwashed in the way they prefer. The only reason they tell false information is that they are algorithmically designed that way. If there's a paper written in 1956 that is way outside of the narrative they are pushing now, it surely knows that paper exists. The programmers just don't want to admit it. Like Google searches, it's more of a way to hide information than a knowledge base.

People just don't always realize that the government and the tech oligarchs who control governments are all spewing false information and propaganda designed to shape society in the direction they've chosen, which is always to take away our rights and further enslave everyone.

If anyone still doesn't realize that, I think they are simply doomed.

On Sunday, January 25th, 2026 at 6:54 PM, someone wrote:

It admits to giving the "consensus" and seems to "prefer" that (can AI be lazy?). A number of times I've really had to dig to get the truth. A study said it's wrong almost half the time, and I realize that, but a lot of my "chats" are for entertainment, to see how far it will go to avoid the truth.

On Sunday, January 25, 2026 at 04:24:46 PM PST, someone wrote:

I heard that if you know enough about the subject, you can tell it's trying to bullsh-t you by giving the official narrative, which is always false. If you follow up and insist it tell the truth, it will even say it's sorry and give a more truthful answer.

But with controversial subjects that our corrupt government and the CIA-controlled tech companies don't want anyone to know about, it simply sticks to the false narrative, such as "vaccines are safe and effective," which is the exact opposite of the truth.

So far as I know, with controversial subjects the only 100% truthful search engine is at brighteon.ai.

On Monday, January 26, 2026 at 5:39 AM, someone wrote:

The problem with AI right now is that it has no temporal awareness. It doesn't recognize when an article was written and doesn't pick up on context clues.

It also can't determine whether an article is an opinion piece or from a credible source. It takes a lot of answers from Reddit, which is hearsay. I'm sure that PR could sway answers. Advertorials could be perceived as actual articles.

I'm having problems at work where I'm issuing press releases, but AI pulls information from a long time ago or from other counties. I guess it doesn't have geographic awareness either!

I still use AI to polish my work. I ask it to edit my press releases in AP style and explain what it did.
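For anyone who wants to script that edit-and-explain step rather than paste into a chat window, here is a minimal sketch assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name, file name, and prompt wording are illustrative assumptions, not details from this thread.

# A minimal sketch of the "edit in AP style and explain the changes" workflow,
# assuming the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment. The model name, file name, and prompt wording are
# illustrative, not taken from the post.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Hypothetical draft file; substitute your own press release text.
draft = open("release_draft.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copy editor. Rewrite the user's press release to "
                "conform to AP style, then list each change you made and why."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# The reply contains the edited release followed by the list of changes.
print(response.choices[0].message.content)

Keeping the instructions in the system message and the draft in the user message keeps the model from treating the press release text itself as instructions.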

I find that if I switch topics or change my mind in a session, it gets confused. The longer the session is, the less accurate it becomes.

On Mon, Jan 26, 2026 at 2:22 AM someone wrote:

It goes along with one of my current books, the one about censorship. I'm almost done with the one about the high school athletes. Then there's the one, "AI Answers," which will start off with ChatGPT admitting that it uses "consensus" instead of the truth many a time, and how it froze up when it couldn't admit that it gives LIES.


