Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour - Slashdot
A New York Times
analysis
found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that
roughly 1 in 10 answers is wrong
. "[F]or Google, that means hundreds of thousands of lies going out every minute of the day," reports Ars Technica. From the report:
The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI.
Oumi began running its test last year when Gemini 2.5 was still the company's best model. At the time, the benchmark showed an 85 percent accuracy rate. When the test was rerun following the Gemini 3 update, AI Overviews answered 91 percent of the questions correctly. If you extrapolate this miss rate out to all Google searches, AI Overviews is generating tens of millions of incorrect answers per day.
The report includes several examples of where AI Overviews went wrong. When asked for the date on which Bob Marley's former home became a museum, AI Overviews cited three pages, two of which didn't discuss the date at all. The final one, Wikipedia, listed two contradictory years, and AI Overviews confidently chose the wrong one. The benchmark also prompts models to produce the date on which Yo-Yo Ma was inducted into the Classical Music Hall of Fame. While AI Overviews cited the organization's website that listed Ma's induction, it claimed there's no such thing as the Classical Music Hall of Fame.
"This study has serious holes," said Google spokesperson Ned Adriance. "It doesn't reflect what people are actually searching on Google." The search giant likes to use a test called
SimpleQA Verified
, which uses a smaller set of questions that have been more thoroughly vetted.
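The extrapolation in the excerpt is simple arithmetic and easy to sketch. Note the inputs below are assumed placeholders for illustration: neither the total daily search volume nor the fraction of searches that actually show an AI Overview comes from the article.

```python
# Back-of-the-envelope version of the extrapolation above. The accuracy
# figures are from the Oumi test; the volume figures are assumptions.

def wrong_answers_per_day(accuracy: float, searches_per_day: float,
                          overview_fraction: float) -> float:
    """Incorrect AI Overview answers implied by a given accuracy rate."""
    return (1.0 - accuracy) * searches_per_day * overview_fraction

SEARCHES_PER_DAY = 14e9    # assumed ballpark, not from the article
OVERVIEW_FRACTION = 0.10   # assumed share of queries showing an Overview

old = wrong_answers_per_day(0.85, SEARCHES_PER_DAY, OVERVIEW_FRACTION)
new = wrong_answers_per_day(0.91, SEARCHES_PER_DAY, OVERVIEW_FRACTION)

print(f"Gemini 2.5 era: ~{old:,.0f} wrong answers/day")
print(f"Gemini 3 era:   ~{new:,.0f} wrong answers/day")
```

Even with generous assumptions, the headline numbers depend almost entirely on the two volume parameters, which is why different write-ups land on anything from "tens of millions" to "hundreds of thousands per minute."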
Great even the pol
(Score: Insightful)
by
DarkOx
( 621550 )
writes:
on Tuesday April 07, 2026 @03:05PM (
#66081846
Well shoot, even the politicians jobs are not safe then!
Re:
Score:
by
Powercntrl
( 458442 )
writes:
Actually, I think someone did try to run for elected office on the premise that they'd let an AI make all their decisions for them. I don't think it worked out for them.
Re: Great even the pol
Score:
by
OrangeTide
( 124937 )
writes:
Too many old people (my age) still vote for that sort of campaign to work.
I don't believe it
(Score: Funny)
by
TwistedGreen
( 80055 )
writes:
on Tuesday April 07, 2026 @03:08PM (
#66081858
Alice laughed. "There's no use trying," she said. "One can't believe impossible things."
"I daresay you haven't had much practice," said Google. "Why, sometimes I've believed as many as six impossible things before breakfast."
Re:
Score:
by
martin-boundary
( 547041 )
writes:
That's not a breakfast! That's a space station!
Balderdash
Score:
by
SlashbotAgent
( 6477336 )
writes:
on Tuesday April 07, 2026 @03:10PM (
#66081868
The crommulence of AI responses is infallible and unimpeachable. This article is complete balderdash.
Re:
Score:
by
Locke2005
( 849178 )
writes:
I didn't think cromulent was a word... turns out The Simpsons writers invented it. Cromulence just means acceptability, and... you spelled it wrong.
Re:
Score:
by
SlashbotAgent
( 6477336 )
writes:
AI said I am correct and that you're teh gey[sic].
So... there.
Re:
Score:
by
Locke2005
( 849178 )
writes:
Google's Gemini says, "Yo mama's so slow, she makes SlashbotAgent look like a high-frequency trading algorithm." You know, I don't think Gemini is really clear on the concept of yo mama jokes. Maybe it gets it confused with Yo Yo Ma jokes.
Re:
Score:
by
Culture20
( 968837 )
writes:
Maybe it gets it confused with Yo Yo Ma jokes.
I hear he's like the cellist dude. Never gets angry when people call his mother fat.
AI lies
(Score: Informative)
by
gary s
( 5206985 )
writes:
And non-AI search results are pretty much all lies. Look at this one... oh wait, it's an ad link...
Google's AI is so bad...
Score:
by
ebunga
( 95613 )
writes:
I would rather use Grok.
Re:
Score:
by
Tailhook
( 98486 )
writes:
the LLM model they're using for "AI Overview" is terrible. Obviously, they're doing that because it's a small model that runs fast, so it can handle the load of millions of queries a minute. I find that if you then click "Dive Deeper", the model improves to something usable, often completely contradicting the "Overview" slop.
It's not a good look. But I suppose they have to put "AI" out front, even when it's crap.
Re:
Score:
by
Powercntrl
( 458442 )
writes:
It's not a good look.
Yeah, it makes an extremely bad first impression. Anecdotally, everybody I know sees it as the slop on top of the search results that you just skip over.
Re: Google's AI is so bad...
Score:
by
OrangeTide
( 124937 )
writes:
It's a case where doing nothing would have been better than this.
A strange game. The only winning move is not to play.
... How about a nice game of chess?
I use gemini
(Score: Interesting)
by
MpVpRb
( 1423381 )
writes:
on Tuesday April 07, 2026 @03:22PM (
#66081884
It often gives excellent answers, but when it doesn't, the results are strange.
I asked for help writing code for an obscure hobby CNC control system.
It totally invented function calls and invented plausible documentation to explain how they worked and how to call them.
It totally missed the easy answer that involved calling an existing simple function and writing no new code.
If the answer doesn't exist on the internet, it appears to just make one up
Re:
Score:
by
kellin
( 28417 )
writes:
Yep. I've read that generative AI doesn't say "I don't know the answer," but will just make something up instead.
I wanted to see how helpful Gen AI would be for an edge case: sorting through a collection of heroes I have in a game I play. Right off the bat, I learned Gemini is the "most accurate." Anthropic was beyond worthless, OpenAI was maybe 50/50. Even so, I learned to double-check Gemini's data before accepting its results. It definitely did not do as well as I originally thought, so I make sure it knows the stat
Re:I use gemini
Score:
by
Locke2005
( 849178 )
writes:
on Tuesday April 07, 2026 @05:41PM (
#66082138
Yep. I've read that generative AI doesn't say "I don't know the answer," but will just make something up instead.
I've worked with people like that.
Re:
Score:
by
kbrannen
( 581293 )
writes:
Yep. I've read that generative AI doesn't say "I don't know the answer," but will just make something up instead.
Of course it will because "I don't know" isn't in the training data. If an LLM can't find good word associations, where a lot of the weights are very high, it can only work with the lower weight associations (unlikely to be right), and at worst will take the lowest weight association, which is probably guaranteed to be wrong. It would be nice if the models had a built-in rule such that if the weights fall below a certain threshold that the model would return "I don't know" or "I can't do that", but that's n
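The thresholding idea in the comment above can be sketched in a few lines. Everything here is a toy illustration with made-up tokens and probabilities; many real inference APIs do expose per-token log-probs, but the cutoff value is an assumption, and calibrating one in practice is much harder than this suggests (high-probability wrong answers exist too).

```python
# Toy sketch: abstain with "I don't know" when any generated token
# fell below a confidence threshold. Purely illustrative numbers.

ABSTAIN_THRESHOLD = 0.35  # assumed cutoff, not from any real system

def answer_or_abstain(tokens_with_probs, threshold=ABSTAIN_THRESHOLD):
    """Return the decoded answer, or abstain if any token was a
    low-confidence guess. tokens_with_probs: [(token, probability), ...]"""
    if any(prob < threshold for _, prob in tokens_with_probs):
        return "I don't know"
    return "".join(token for token, _ in tokens_with_probs)

# A confident answer passes through; a low-probability guess abstains.
print(answer_or_abstain([("Paris", 0.92)]))             # Paris
print(answer_or_abstain([("19", 0.60), ("66", 0.22)]))  # I don't know
```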
Re:
Score:
by
EvilSS
( 557649 )
writes:
You can't code rules into models themselves. Best you can do is try to train the behavior you want but that's never going to be 100% reliable. You
can
do it by watching the logits from the inference engine and trying to redirect the model back on track or force a hard stop. Some are doing this today. The problem is that low next-word probabilities are not always the source of this problem. You also run into high-probability wrong results, so it's a bit more complicated. The other issue is not all of the APIs ex
Re: I use gemini
Score:
by
fluffernutter
( 1411889 )
writes:
It's an algorithm. It doesn't choose to lie. It just doesn't resolve well due to lack of information and gives you the best it can do.
Re:
Score:
by
Locke2005
( 849178 )
writes:
So Trump isn't really a liar, he's just an extremely low information person?
Re:
Score:
by
MachineShedFred
( 621896 )
writes:
He's both.
He is an extremely low information person which has turned disengagement into an art form. And he still chooses to provably lie every chance he gets.
Re:
Score:
by
martin-boundary
( 547041 )
writes:
Were they? Or are they just going along with TACO Tuesday pretending they negotiated?
Re:
Score:
by
martin-boundary
( 547041 )
writes:
Seems like the "ceasefire" is already broken.
it's been a very bad algorithm
Score:
by
OrangeTide
( 124937 )
writes:
It's an algorithm that does not view confidently feeding the user false information as a type of failure.
Re:
Score:
by
fluffernutter
( 1411889 )
writes:
Has Google failed if the first hit didn't apply to your search but the second one did? Now realize that each word in an LLM response is like a Google page hit on its own. What they should be able to do is give you a confidence rating, but an LLM has little knowledge of how accurate its answer is, any more than Google knows how relevant its hits are.
Re:
Score:
by
OrangeTide
( 124937 )
writes:
Has Google failed if the first hit didn't apply to your search but the second one did?
from a KPI point of view, yes obviously.
What they should be able to do is give you a confidence rating
Most of these models don't have a reliable way to extract confidence. There's a lot of false positives unfortunately. So we've all been hiding any sort of explicit confidence feedback to the user instead of giving them a random number generator.
Re:
Score:
by
jd
( 1658 )
writes:
Gemini is exceptionally bad, as LLMs go. I really have no idea why it is so dreadful, even compared to other LLMs. It isn't the context window, and it doesn't seem to be the training material either.
Re: I use gemini
Score:
by
drinkypoo
( 153816 )
writes:
It almost always gives shit answers. Any time I search for details of things I know about it jumps in to tell me some shit I know is wrong. Every. Fucking. Time.
Re:
Score:
by
MachineShedFred
( 621896 )
writes:
You used to be able to turn off the AI crap by swearing in your search query. It looks like they fixed that one.
Re:
Score:
by
drinkypoo
( 153816 )
writes:
I hid it with the "Hide Google AI Overviews" extension, but I still see their crap at work.
Google's Response
Score:
by
logjon
( 1411219 )
writes:
"That's not true if you only ask it the questions we want you to ask!"
Re:
Score:
by
kellin
( 28417 )
writes:
Basically this. And that's an idiotic statement to make. Gen AI needs to be good at everything for it to be useful. I realize that's a hard thing to do in the beginning, and it will probably get better over time, but we all need to help it along in some way by feeding it correct data.
Google: "you are all freaks"
(Score: Informative)
by
Morromist
( 1207276 )
writes:
on Tuesday April 07, 2026 @03:24PM (
#66081894
Google: "Why can't you search for normal things like everybody else? Our ai is great at answering questions like 'where to buy a tv?' and 'who is Leonardo DiCaprio dating?" and "weather". If those things don't satisfy your every need I don't know what to say. Just because we're a search engine doesn't mean you're supposed to use it to search for difficult to find things. Search for normal things like a normal person, assholes."
Re:Google: "you are all freaks"
(Score: Funny)
by
Locke2005
( 849178 )
writes:
on Tuesday April 07, 2026 @03:31PM (
#66081916
"Here I am, brain the size of a planet, and they ask me to take you up to the bridge. Call that job satisfaction? 'Cause I don't."
I don't know about that
(Score: Funny)
by
rsilvergun
( 571051 )
writes:
on Tuesday April 07, 2026 @03:25PM (
#66081898
I mean based on the president of the United states? Those are rookie numbers. Come on Google you can do better!
Re:
Score:
by
Tony Isaac
( 1301187 )
writes:
At least Gemini is *trying* to tell the truth!
Isn't it???
So what you're saying is...
Score:
by
Locke2005
( 849178 )
writes:
Google has implemented Trump Mode in their AI? Gemini has been forced onto my Android Auto against my will.
Re:So what you're saying is...
(Score: Informative)
by
ranton
( 36917 )
writes:
on Tuesday April 07, 2026 @04:10PM (
#66081988
Google has implemented Trump Mode in their AI?
No, they said Google tells the truth 90% of the time, not 10%.
Re:
Score:
by
Powercntrl
( 458442 )
writes:
Google has implemented Trump Mode in their AI?
Well, it hasn't bombed Iran and developed a craving for McDonald's hamberders yet, so Google's still got some work to do.
Better than humans nonetheless
Score:
by
Zero__Kelvin
( 151819 )
writes:
on Tuesday April 07, 2026 @03:35PM (
#66081922
If you ask the average human to use a non-AI search engine to find out the answer to 100 non-trivial questions I can assure you that you will get many more than 10 incorrect answers.
Re:Better than humans nonetheless
(Score: Interesting)
by
ceoyoyo
( 59147 )
writes:
on Tuesday April 07, 2026 @03:58PM (
#66081968
It would be interesting to compare the AI summary accuracy to
1) Hitting "I feel lucky"
2) A selection of average humans given no-AI Google search
3) A selection of average humans given AI+Google search
4) A selection of average humans
And people believe AI...
(Score: Informative)
by
mspohr
( 589790 )
writes:
on Tuesday April 07, 2026 @03:37PM (
#66081926
According to an article here a few days ago, 70% of people just accept whatever AI tells them without thinking.
Re:
Score:
by
jd
( 1658 )
writes:
But was that figure provided by AI?
Even if not, we all know that 793% of all statistics are invented.
Lies, bigger lies and statistics.
Score:
by
devslash0
( 4203435 )
writes:
That's AI models for you in a nutshell.
The New York Times, you say ?
(Score: Redundant)
by
greytree
( 7124971 )
writes:
The New York Times ?
Sooo
... is one of those lies that NATO stands for "North Atlantic Treaty Organization" ?
Asking for a friend who remembers when the NYT wasn't full of biased shit.
Re: So when can it replace Trump?
Score:
by
bussdriver
( 620565 )
writes:
I'd rather have a digital lying machine than the sub-human one we have right now. At least people will be more willing to ignore criminal orders because they are not in an AI cult. AIs really do like to start nuclear wars, but nobody would follow those orders... Then again, given how much AI-produced slop has already come out of the White House, we might just end up with a nuclear war... like we did with the tariffs against penguin island.
What a headline!
Score:
by
dskoll
( 99328 )
writes:
At this rate, reality is going to put The Onion out of business by 2029.
It gives great car repair advice, too
Score:
by
Powercntrl
( 458442 )
writes:
AI Overview
Removing the serpentine belt on a 2018 Chevy Bolt involves releasing tension from the automatic tensioner, which is best accessed from the passenger-side wheel well. Use a 15mm socket on a long breaker bar to rotate the tensioner clockwise, allowing you to slip the belt off the pulleys.
(in case anyone didn't get the joke, this is a real AI result Google just gave me, but the catch is that the Chevy Bolt is an EV and does not have a serpentine belt - or an engine, for that matter)
Unit conversion?
Score:
by
Locke2005
( 849178 )
writes:
on Tuesday April 07, 2026 @03:56PM (
#66081964
How many Trumps is that?
Re: Unit conversion?
(Score: Interesting)
by
Mr. Dollar Ton
( 5495648 )
writes:
on Tuesday April 07, 2026 @04:49PM (
#66082058
Here, from the horse's mouth:
Summary
Assuming Google AI search gives 1 lie in 10 answers (10%), it is roughly one-eighth of a Trump (0.125).
In other words, you would need 8 AI lies to equal the concentration of misinformation found in a single normalized Trump output.
Would you like to apply this "Trump" unit to other historical figures or tech benchmarks to see how they stack up?
Re: Unit conversion?
Score:
by
Mr. Dollar Ton
( 5495648 )
writes:
Here's the summary for some names you may know:
Elon Musk ~1.25 Trumps
Vladimir Putin ~1.20 Trumps
Muammar Gaddafi ~1.10 Trumps
Peter Thiel ~0.4 to 0.5 Trumps
Ursula von der Leyen ~0.15 Trumps
Google AI (Hypothetical) ~0.125 Trumps
Re:
Score:
by
Locke2005
( 849178 )
writes:
"Those are rookie numbers, you've gotta pump those numbers up!"
AI doesn't lie.
Score:
by
dfghjk
( 711126 )
writes:
on Tuesday April 07, 2026 @03:57PM (
#66081966
A lie is bad information provided intentionally. AI does not have intent.
Re: AI doesn't lie.
Score:
by
Mr. Dollar Ton
( 5495648 )
writes:
Says who?
The AI's intent is defined by the way it is trained, and Gemini is trained to emphasize what the google executives want emphasized.
Re:
Score:
by
swillden
( 191260 )
writes:
Says who?
The AI's intent is defined by the way it is trained, and Gemini is trained to emphasize what the google executives want emphasized.
Mmmm.... if anything it's "what the Google engineers want emphasized". Executives at Google have surprisingly little control over technical decisions. For nearly all of Google's existence it's been an almost completely bottom-up driven company and while in the last few years management has been trying to exert more control it's a very, very slow process.
It's actually the engineering-driven culture that produces Google's infamous tendency to abandon products. Stuff gets built because some engineers think
Re:
Score:
by
Mr. Dollar Ton
( 5495648 )
writes:
Executives at Google have surprisingly little control over technical decisions.
The executives at google define the policy, the technical crew implement it. The policy is "descriptive neutrality", which is roughly equal to the "fair and balanced" approach of Fox News, with a slight push for normalizing the "official position".
So, while technical decision (how to implement a policy) are not a concern of the executives, setting the policy (what to implement) most definitely is.
The point being that the "descriptive neutrality" with a preference for the "official side" is a thing, which yo
Re:
Score:
by
swillden
( 191260 )
writes:
Executives at Google have surprisingly little control over technical decisions.
The executives at google define the policy, the technical crew implement it.
Not as much as you might think. Definitely not as much as at most companies.
Re:
Score:
by
jd
( 1658 )
writes:
We learned back in the 80s that trying to get a neural net to emphasise what you want is actually very difficult. What it will tend to emphasise are the assumptions that underlie the test data, and that's usually a completely different sort of fiction.
Google's AI does not impress.
Score:
by
jd
( 1658 )
writes:
When I test the different AI systems, Google's AI system loses track of complex problems incredibly quickly. It's great on simple stuff, but for complex stuff, it's useless.
Unfortunately... advice, overviews, etc., are very, very complex problems indeed, which means you're hitting the weak spot of their system.
Yeah!
Score:
by
PPH
( 736903 )
writes:
on Tuesday April 07, 2026 @04:04PM (
#66081978
That's the ticket!
Doctorow is right
Score:
by
RobinH
( 124750 )
writes:
Tinkering with a word guessing machine to see if you can make it smart is like breeding horses to be faster and faster expecting one to give birth to a locomotive. Word guessing machines are cool but they're never going to actually "understand" what they're talking about.
This is what stochastic parrots do
(Score: Informative)
by
Arrogant-Bastard
( 141720 )
writes:
on Tuesday April 07, 2026 @04:14PM (
#66081996
(Reference:
On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
[acm.org])
The people/companies behind these models will keep trying to "fix" them by throwing ever-increasing amounts of computing power at them (with all the lovely real-world effects on everyone and everything) and by using ever-more-complex models. And yes, they'll perform better. But they're still just large exercises in statistics and linear algebra, they're still just stochastic parrots, and thus there's an upper bound that they may approach asymptotically -- but can't surpass.
That's not because they're broken -- which is why I put "fix" in quotes in the previous paragraph. It's because that's how they work: it's an intrinsic property of all such models and no amount of computing power and/or model tweaking can change that: all it can do is obfuscate it. And obfuscated problems are far worse than obvious problems.
Re:
Score:
by
swillden
( 191260 )
writes:
That's not because they're broken -- which is why I put "fix" in quotes in the previous paragraph. It's because that's how they work: it's an intrinsic property of all such models and no amount of computing power and/or model tweaking can change that: all it can do is obfuscate it. And obfuscated problems are far worse than obvious problems.
That's a strong statement. Can you explain why that isn't also true of human brains? What's the intrinsic difference?
Re: This is what stochastic parrots do
Score:
by
toutankh
( 1544253 )
writes:
A human is able to tell if an LLM is wrong. The opposite isn't true.
Re:
Score:
by
swillden
( 191260 )
writes:
A human is able to tell if an LLM is wrong. The opposite isn't true.
Nonsense. LLMs point out my mistakes all the time. And I point out theirs. At this point there's more of the latter than the former, but both absolutely happen all the time.
Re:
Score:
by
swillden
( 191260 )
writes:
A human is able to tell if an LLM is wrong. The opposite isn't true.
Also, even if this fallacious claim were true, it wouldn't actually support Arrogant-Bastard's claim, which wasn't about the state of AI now, but a claim about "intrinsic properties", meaning it would be true forever.
Give it to us in Lies per GigaWatt!
Score:
by
Fly Swatter
( 30498 )
writes:
on Tuesday April 07, 2026 @04:16PM (
#66082002
We need numbers we can understand, saying 10 percent is too simplistic.
Ironically, this Slashdot summary title is a lie
Score:
by
Zero__Kelvin
( 151819 )
writes:
on Tuesday April 07, 2026 @04:24PM (
#66082016
It's ironic that the human(s) reporting this couldn't do so without (apparently) lying, in the title no less. The article talks about accuracy, and an inaccuracy is not a lie unless it is intentional. Of course, whoever wrote the title is likely seeking to impose their own anti-AI bias on the story, and so chose to lie about what the study actually says.
Re: Ironically, this Slashdot summary title is a l
Score:
by
Zero__Kelvin
( 151819 )
writes:
So AI can't be intelligent, but can be stupid? It seems AI is a lot like the typical person who posts as AC on Slashdot.
Re:
Score:
by
Powercntrl
( 458442 )
writes:
Actually, it's a perfectly cromulent use of the word "lie" to mean a falsehood with or without the intent of deception.
At least according to the dictionary.
[merriam-webster.com]
Re: Ironically, this Slashdot summary title is a l
Score:
by
Zero__Kelvin
( 151819 )
writes:
I can't tell if you are lying or mistaken.
Re:
Score:
by
sabbede
( 2678435 )
writes:
So, I read all the definitions and find I must disagree. Even when the intent to deceive or mislead is not explicit in the definition, it is in the example.
i.e. - "an untrue or inaccurate statement that may or may not be believed true by the speaker or writer"
"the lies we tell ourselves to feel better" - explicit intent.
Re:
Score:
by
jd
( 1658 )
writes:
If something is inaccurately presented as being the truth, then it is a lie of omission because it is dishonest about the fact that the information isn't actually known.
Re:
Score:
by
Zero__Kelvin
( 151819 )
writes:
A lie of omission is when pertinent information is withheld. I'm not even going to try to parse the rest of your nonsensical sentence.
Re:
Score:
by
jd
( 1658 )
writes:
Since pertinent information was withheld (that it didn't know), then by your own post you acknowledge it was a lie of omission.
The stupidity of people these days is truly beyond belief. And, yes, get the f off my lawn.
Re:
Score:
by
Zero__Kelvin
( 151819 )
writes:
It was only a lie of omission if the pertinent information was intentionally withheld dipshit.
Re:
Score:
by
jd
( 1658 )
writes:
Which it was.
Re:
Score:
by
sabbede
( 2678435 )
writes:
I see your point, but I don't think it quite works unless one knows it is inaccurate. Otherwise, it's just being wrong. That is, there is a subset of cases for which your statement would be correct, "I don't know the answer, so I'll guess and not tell you I'm just guessing", but you've worded it so broadly that it would also include sincere errors.
One example of a non-lie your statement would have included would have been how, for over a thousand years, people believed flies had 4 legs because Aristotle
Depends heavily on the subject matter.
Score:
by
Narcocide
( 102829 )
writes:
I have noticed that asking it questions about the video game "No Man's Sky" elicits perfect or at least nearly perfect answers every time. Asking it any technical questions about Linux though... usable accuracy drops to something like 50%.
Compared to?
Score:
by
bill_mcgonigle
( 4333 )
writes:
To be fair I just wasted a week tracking down a radio telemetry problem because of a forum post that many people said worked great but it definitely pulled a pin high that was supposed to be low, which shut off an antenna.
Only diving into the spec sheet and some sample embedded code convinced me that the forum post was exactly wrong and after making a simple change to do the opposite did all the telemetry devices mesh up and start reporting correctly.
So
... how does 90% compare to human content?
A wrinkle is
Really?
Score:
by
jenningsthecat
( 1525947 )
writes:
The search giant likes to use a test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted.
Gee - that sounds rather like asking only questions which are known to be correctly answerable by the AI.
But Google would
never
cheat to hide flaws and make their shit look better - right?
MLPH? Mega Lies Per Hour?
Score:
by
ameline
( 771895 )
writes:
We will probably get to GLPH pretty soon. Or even PLPH or YLPH
:-)
(TLPH is tera, not Trump
:-) although I can understand the confusion in this context)
Wrong != lying
Score:
by
sabbede
( 2678435 )
writes:
A lie is an intentional deception. Being wrong is... being wrong.
pot and kettle
Score:
by
groobly
( 6155920 )
writes:
NYT tells the truth about 10% of the time. I wonder if it's the same 10% that google gets "wrong."