mesterha - Slashdot User
181418660
comment
I have some experience with SQUIDs, and while you can do some cool things with them, the idea that you could isolate a human heartbeat beyond a few yards or meters is nonsense.
It's probably what they told Trump. Classic disinformation.
181408888
comment
They have multiple documented patched zero-days and provided SHA-3 verifiable hashes for ones that will be released in the next 135 days.
But I'm sure they trained on this code. It's just repeating its training data. There is no intelligence.
And yes, I'm kidding, since otherwise someone will take me seriously.
180859512
comment
I don't trust Trump's uncle. According to Trump, he knew who the Unabomber was well before his family identified him. Of course, that means Trump also knew. That guy is incredible, in the literal sense of the word.
180852032
comment
Did you play a bit with the sampler demo?
I must have missed that.
If we have a smart enough LLM, I think there is no reason why it shouldn't produce a uniform distribution while generating letters for the password. Maybe an LLM could be trained to produce a mostly uniform top-20 token distribution when the current sequence is part of a password being generated. So I think you have a good point.
In principle, but the current models are smart enough to just run a bash command to generate a password. That password probably uses a better random number generator. The one an LLM algorithm uses to pick tokens is probably fast, but a cryptographer would never use it.
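As a sketch of what such a tool call amounts to, here is a short Python snippet using the standard-library `secrets` module, which draws from a cryptographically secure source rather than an ordinary fast PRNG (the function name and alphabet choice are just illustrative):

```python
import secrets
import string

# Alphabet of allowed password characters: letters, digits, punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_password(length=16):
    # secrets.choice draws from the OS CSPRNG, so each character is
    # picked uniformly and independently from the alphabet.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())
```

With 94 printable ASCII characters, each position contributes log2(94) ≈ 6.6 bits, so a 16-character password generated this way carries roughly 105 bits of entropy.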
They, for example, prompt an image model with "Create a unique video game character of a plumber" or similar, and wonder why they get Mario when they said "unique".
I tried with Gemini. I guess the result is good. It actually generated a video, which isn't what I expected. It created a futuristic Asian cyborg plumber working in a boiler room. That also might fit with what I've seen with unique Gemini queries. Things are moving fast, and the uniqueness issue might have been addressed as a random combination of "things" given the constraints of the query. However, I can't test many videos, as I can only generate 2 a day.
Do that and you can write a paper about it. Even if it only works partially, a good analysis would be an interesting result.
I've kind of given up writing papers. So many papers get submitted that they all get dumped onto grad students who generally don't know what they are doing. I guess now it's LLMs that review the papers. With the current state of the art, they would do a better job.
180842960
comment
Again I appreciate the effort. It's clear you're thinking about
the problem in a way that helps understanding.
First: passwords. If you ask for a good password, the likely answer is a good password. In isolation it probably is a good password, even though you risk that it has the properties of example passwords in the training set.
It probably looks like a typical password, but a good password doesn't exist in isolation. What you really want is a set of potential passwords with a way to pick uniformly over that set.
A good LLM may now determine that it should, for example, not sample the same token again and again, as it learned that that is an easy-to-crack password.
Maybe yes, maybe no. That is not a good way to generate passwords, and it might "understand" documentation that explains that.
You already see that the password entropy is now just 3^length.
If you sampled uniformly from ALL tokens, you would get a password generator, but all other parts of the output would also look like passwords.
Entropy is normally measured in bits, so you need to take the lg of that, but it's probably more intuitive to talk about the number of possible outputs, so OK.
What you are saying is that it's not a good password generator; sure. This is consistent with the article, and with what I said. The LLM can use randomness to generate about 1/4 of the entropy you would get from uniform sampling over the set of all character strings of length 16. In terms of counting, that's maybe 2^25, which is about 33 million passwords, which can be brute-forced in certain situations. That's a big difference from 2^100, which is a quadrillion quadrillions.
So I agree it's a bad password generator, but it still has access to randomness, so it can generate passwords; they are just not optimal for their length. (The more troubling part is that it sometimes fails to do the right thing and might generate some passwords with much higher probability.)
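To put those counts in perspective, here is a tiny sketch comparing exhaustive-search times; the rate of one billion guesses per second is purely an illustrative assumption, not a claim about any specific attack:

```python
# Compare exhaustive-search times for 2^25 vs 2^100 possibilities,
# at an assumed (illustrative) rate of 1e9 guesses per second.
GUESSES_PER_SECOND = 1e9

for bits in (25, 100):
    seconds = 2**bits / GUESSES_PER_SECOND
    years = seconds / (365.25 * 24 * 3600)
    # 2^25 finishes in a fraction of a second; 2^100 takes on the
    # order of 10^13 years.
    print(f"2^{bits:>3}: {seconds:.3g} seconds (~{years:.3g} years)")
```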
Unique is a similar problem. People complained, for example, that image generators do not understand "Generate a unique SUBJECT".
As I said, it's not very well defined. Dictionaries have multiple definitions for a reason. This alone is enough to cause issues, but it gets more complex when speculating about what the LLM is doing. However, it is an interesting question.
In image generation it may be an image that was tagged as "So very unique", but if all generations get close to the concepts in that image, none of them is unique anymore. The LLM test before associated fantasy-style names with "unique".
Mathematically, if they are close, they are still unique. In this context, what a human probably wants is something semantically meaningful that is sufficiently far, under some metric, from all other images that have ever been generated. Of course, without knowledge of what all the other generators are doing, unique, in this sense, is impossible. So at best you're back to a probabilistic solution. You need to define a space of images and have the algorithm uniformly pick from that space. Just playing with an LLM, it seems to do a simple form of this. It combines a bunch of semantic ideas to create a "unique" image. On Gemini, a clock owl on a frozen lightning branch in a cosmic storm.
These things work by "associations" and when they are primed on
something they often follow predictable patterns.
Yes, there was a blog that talked about how if you played bad chess against an LLM it would also play badly, but if you played a strong game against it, it would improve.
Just read a bit of the complaints in the creative writing communities. Try to Google for some stories about Elara and you'll see how LLMs even fail at creating unique names.
Interesting, and it still shows up in current models. It's probably a
bit of manifest destiny since this issue goes back to 2022. However,
when I ask for a unique name for my sci-fi lead, the model claims to
do something similar to what I described with images and comes up with
some weird names: Vyrith-Esh, Xylanthe-Vane, Kyzant-Nu, Zylpha-Kore.
180839414
comment
I appreciate the effort in your answer, but I still disagree. First, it's important to establish what the accepted definition of random is, which is not trivial. As I said, I'm using pseudorandom (otherwise one needs specialized hardware, or an argument based on the randomness of inputs used to generate an entropy pool), which means it's technically deterministic, but it satisfies various randomness tests.
When doing such a test, you repeatedly call the generator, which they don't do. Instead, they make a new call to the LLM, which seems similar to calling the pseudorandom generator with the same random seed, which would always give the same answer; but they do get some variety, probably because the generator the LLM uses on the server is in a different state on each call.
Instead, what they are doing is a use-case test where different people get passwords based on a cold query to an LLM. And I guess it fails, but the article's "claims" are a bit inconsistent, so who knows what the actual research claims. It's also interesting that if I ask Claude Code for a password, it runs a Linux command to generate one, which is the right answer. Just like a human, it should use the proper tool.
So what about your claim that because an LLM has no memory, it can't generate something "unique"? I'm not sure why you focus on unique and names when the topic is passwords. You go from something well defined to something less defined, but whatever.
In principle, I could train (with a suitable training set) an LLM that, when asked to generate passwords, has uniform probability over the tokens that consist of single allowed password characters. Of course, training an LLM to do that doesn't make sense, since the right answer is to give it a tool, but here we're talking about what an LLM can do given that it has no memory. The specific algorithms used to make the choice in commercial models are unknown, but as you explained, it will pick from the distribution using a pseudorandom number generator. Let's assume it just picks based on the conditional probability of the top 3 choices, which is just a uniform distribution over those 3. This gives an entropy of about 1.58 bits per token, which works out to 25 bits for a 16-character string, and that matches well with the "claimed" results. So maybe it is doing something like that, except when it hallucinates and just does the wrong thing.
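A quick check of that arithmetic, assuming a uniform pick over the top 3 single-character tokens at each of 16 positions (the 94-character alphabet below is just a stand-in for "all printable password characters"):

```python
import math

bits_per_token = math.log2(3)      # uniform over top-3 tokens: ~1.58 bits
total_bits = 16 * bits_per_token   # ~25.4 bits for a 16-character password

# Ideal case: uniform over all 94 printable ASCII characters.
ideal_bits = 16 * math.log2(94)    # ~104.9 bits

print(f"{bits_per_token:.2f} bits/token -> {total_bits:.1f} bits total")
print(f"ideal uniform sampling: {ideal_bits:.1f} bits")
```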
180834326
comment
As a LLM has no memory, asking it for something "unique" can't work.
Wrong. An LLM uses randomness to generate its answers. In practice it's most likely a pseudorandom number generator, but that's generally considered good enough. Also, the article claims they get 27 bits of entropy out of 16 characters. That means they are claiming it is random. And that's probably better than humans who pick their own passwords.
180820482
comment
Those high end cables are a waste of money.
Probably true, but peer-reviewed audio research does show that they sound consistently different. See
As explained in the research, quickly switching between sources doesn't allow people to properly hear the subtle differences in these high-end systems. When blind listening sessions were extended to at least 15 minutes, untrained participants could start hearing differences between cables with strong statistical confidence.
While it's not understood why this is the case, there are several theories. One is that it's difficult to forget the details of what you just heard when rapidly switching. Unlike vision, we can't just reinspect something we hear by looking again. The brain has something called echoic memory, which stores 3 to 4 seconds of auditory information and could be causing issues when rapidly switching. There are also theories that a large part of the brain is designed to predict the future. Again, this could obscure differences between two signals that are so similar; the brain's predictive abilities would just fill in any missing details. Last, and somewhat contradictory to the others, is that when quickly comparing two sources you're forcing your brain to use working memory to hold the differences. Long-term testing lets you use more effective types of memory for making complex distinctions.
Basically, it kind of means that the double-blind tests that have been used for most audio testing are flawed. It's kind of like using an optical illusion to measure distance.
180792634
comment
It would be much more useful if you specified which AI tools you are using. They are not all equal.
180786822
comment
Left-leaning policies that contribute to these problems include
textbook standards, which currently require textbook publishers to
focus on the mechanics of sex and stay away from discussions of
personal responsibility.
While I agree that left-leaning state governments push for a different type of sex education, you actually have to have evidence that it's causing this type of harm, and that it's better than the alternatives. The right often gives simple, plausible arguments that support their narrative, not because they are correct but because they achieve another objective, which is often just to motivate their base. One actually has to do studies/science to see if it's actually true.
180784448
comment
Likewise, the underlying concern about nurturing homes where children are cared for, is about the ability of our children and their descendants to be able to thrive. And in that vein, the left's "solutions" only lead to a disintegrating and less stable environment for children growing up, making things worse for our children.
I'm sympathetic to your concern, but I still think you're confusing culture with politics. Even if there is a correlation between people on the left and lifestyles you think are detrimental, that doesn't mean it's a political problem. What policies on the left are causing these problems? And don't compare against perfection. What alternative policies does the right have that would improve the outcome?
The world is always changing, and the government needs policy to adapt and improve people's lives. I would support further studies to see how to improve education for these types of social issues, but I suspect good research has already been done in the more progressive countries in Europe.
180784308
comment
You asked me how society might go about ensuring stronger families,
and that was my answer.
I asked what your/their policy is to fix it. You were the one claiming the left/right comparison. Perhaps I confused you by adding the word "your". Are you not on the right? I guess you agree they aren't doing anything constructive.
The right plays the same games when it comes to global warming. They
want the "fun" (or whatever goal) of doing things that pollute,
without recognizing their responsibility to manage well the
environment they live in.
Perhaps it's not your intent, but it does sound like a false equivalence: comparing a moral/lifestyle choice with a clear scientific problem that has expensive but viable solutions. And the equivalence is based on a problem where the right's solutions are most likely empirically worse by most metrics.
You're comparing some vague cultural picture of the left with actual policy (or lack thereof) on the right. I don't think you're making a bad-faith argument, but I do think it falls into the culture-war rhetoric of the right. If you want to talk left and right, then you are talking politics, which means it should be grounded in policy.
180783172
comment
So how do we fix it? Well for starters, through a favorite technique espoused by liberals: education. Stop focusing sex education on the mechanics of sex, and focus more on the responsibilities of child rearing.
So you're OK with sex education, but you think they're giving the wrong instruction. Are Republicans proposing any pilot programs to test these ideas? This sounds like your opinion, but I'm not sure what that has to do with the policy the right pushes. Conservatives are more likely to ban sex education and end up with even more unintended pregnancies.
Stop glorifying promiscuity in schools and in the movies. Educate children more about responsibility in general, rather than on pleasure.
Again, you need viable policy ideas on how to do this. You're just throwing out culture issues with some vague idea that education can solve them. Now, I'm not saying that you have to solve them from your armchair, but your argument is that the right is trying to legitimately solve them, and I'm not seeing it. Instead, the right makes up non-issues like critical race theory being part of elementary school education.
180782546
comment
Shall I go on? I mean, Democrats don't have a monopoly on telling lies, but they don't not tell lies.
True. Politicians will lie if they can get away with it. The important difference is the left has a media ecosystem that is much more likely to call them out on those lies and half truths.
180782502
comment
So how does one define a family? Biology answers this unequivocally. It takes a father and a mother to produce a child.
I agree the research gives solid reasons that having two parents is beneficial, but I don't think we should enforce it to be a man and woman. One has to be careful in enforcing a society based on conforming to current culture instead of trying to set up a society that gives people reasonable freedoms.
And this brings up the larger issue. Even if you think single-parent households are bad for kids, what is your/their policy to fix it? For global warming, the left proposed clear policy that can help. For families, what has the right proposed that the left has rejected? It seems like they just complain with no clear policy. Pointing out problems without proposed solutions is not really showing an equivalence.
`chris.mesterharm' `at' `gmail.com'