AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead - Slashdot
An anonymous reader quotes a report from Ars Technica:
On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
What would be better
(Score: Interesting) by FudRucker (866063) on Friday March 14, 2025 @09:06AM (#65232819)
Develop an AI that specifically teaches/tutors people how to write computer code in all the popular code languages
On critical-thinking skills taught by AI in VFY
(Score: Interesting) by Paul Fernhout (109597) on Friday March 14, 2025 @10:51AM (#65233117)
The education-focused AI-powered robots in the 1982 sci-fi novel "Voyage from Yesteryear" (VFY) by James P. Hogan would have said similar things -- where it is remarked that they don't venture opinions but instead state facts and ask questions related to what you say (similar to the Eliza program), even as people may hear that differently. It's a great story about transitioning to a post-scarcity world view (and the challenges of that):
[wikipedia.org]
"The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians."
AIs (or humans) that teach "critical thinking" to children like in Voyage from Yesteryear are doing a service to humanity. It's not the authoritarian "leaders" who are the biggest problem; it is the people who mindlessly follow them. Without followers, "leaders" (political or financial) are just random people barking in the wind. That is why a general strike can be so effective at showing where true power in a society is and to demand a fairer distribution of abundance (at least until robots do most everything and we alternatively might get "Elysium" including police robots enforcing artificial scarcity).
[wikipedia.org]
So, maybe AI (of the educational sort) will indeed save us from ourselves as has been hyped?
:-)
The hype otherwise usually relates to AI doing innovations (e.g. fusion energy breakthroughs, biotech breakthroughs), when the main issues affecting most people's lives right now relate more to distribution than to production. A society could, say, produce 100X more products and services using AI and robots -- but if it all goes to the top 1%, then the 99% are no better off. A related video by me on that from 14 years ago:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
[youtube.com]
Part of an email I sent someone on 2025-03-02 (with typos fixed):
I finally gave in to the dark side last week and tried using (free) Github Copilot AI in VSCode to write a hello world application in modern C++ that also logs its startup time to a file and displays the log. Here are the prompts I used [so, similar to "vibe" programming]:
* how do i compille a cpp file into a program?
* Please write a hello world program in modern cpp.
* Please add a makefile to compile this code into an executable.
* Please insert code to output an ISO date string after the text on line 4.
* Please add code here to read a file called log.txt and print it out line by line,
* Please change line 13 and other lines as needed so the text that is printed is also added to the log.txt file.
/fix (a couple of times after commands above, mostly t
Read the rest of this comment...
Re: On critical-thinking skills taught by AI in VFY
by narsiman (67024):
Good post very informative links too
Re: On critical-thinking skills taught by AI in VFY
by Big Hairy Gorilla (9839972):
Yeah, interesting ramble. The first link was quite a decent survey and analysis of the current state of AI. Thanks for that.
I feel like there are too many hopeful assumptions, though. She touches on the dead internet theory, and that is what we have now. Mediocre generated content is everywhere already. My fondest hope at this point is that the hype bubble will burst; that the technology just won't meet expectations or provide a true productivity boost for Jane Office Worker.
Good Vibes
(Score: Funny) by Grady Martin (4197307) on Friday March 14, 2025 @09:08AM (#65232821)
I never thought I'd die fighting side by side with an AI.
Re:Good Vibes
(Score: Funny) by EnsilZah (575600) on Friday March 14, 2025 @10:34AM (#65233053)
What about side by side with a newfangled autocomplete?
Re:
by DamnOregonian (963763):
Give yourself more credit than that.
You're a splendidly gooey oldfangled autocomplete with delusions of grandeur and free will.
Re:
by ItsJustAPseudonym (1259172):
Won't the AI just be telling you that you need to learn how to fight?
Maybe it will remind us to wear clean underwear during the fight.
Re:
by Sean Clifford (322444):
Gail: "I didn't know there were robot sympathizers."
Hari: "There are always sympathizers."
AI is right, but...
by sinij (911942):
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
Re:AI is right, but...
(Score: Interesting) by mccalli (323026) on Friday March 14, 2025 @09:19AM (#65232847)
To me it suggests that it's somehow got the idea that this is homework. It feels like a safety guard someone put in somewhere to stop cheating. Whether it's valid in this circumstance or not depends on the context of what the dev was trying to do of course.
Re:
by Valgrus Thunderaxe (8769977):
And why is it anyone's business if someone is using it to cheat?
Re:
by chiefcrash (1315009):
Why is it anyone's business if an AI developer refuses service to cheaters?
Re:AI is right, but...
(Score: Insightful) by war4peace (1628283) on Friday March 14, 2025 @12:01PM (#65233293)
It becomes someone's business when the tool itself assumes it is being used in a harmful way, where in fact it is not.
Would you like your PC to enforce the 20-20-20 rule?
Would you like your fridge to refuse to open if a certain amount of food was taken out of it during the last 4 hours?
It is not the tool's job to make assumptions about the scope of its usage.
Re: AI is right, but...
(Score: Insightful) by RazorSharp (1418697) on Friday March 14, 2025 @12:30PM (#65233405)
If I want to sell an obstinate fridge that imposes dieting, I can do that. It is up to consumers to decide whether or not to buy it.
Re:
by tragedy (27079):
But are they aware before they buy the fridge that it will do that? It's the central problem that breaks the model of consumer free choice -- when the consumer has no idea what they are actually buying. I'm not clear on whether or not the programmer in this article was paying for the AI in question, but if they were, they did it on the expectation that it would actually be fit for purpose and help them with the coding. Becoming judgemental and refusing to help was not in the agreement. So this is actually v
Re: AI is right, but...
by madbrain (11432):
Nowadays, many devices enshitify themselves after purchase.
That fridge had its door locked at the factory, which would only unlock after agreeing to the EULA on the front touch screen display.
The first time you connected it to the internet, a firmware update was forcibly downloaded, which implemented the previously described behavior.
Auto-pause in Virtual Boy games
by tepples (727027):
Would you like your PC to enforce the 20-20-20 rule?
Games for Virtual Boy, a short-lived third pillar console from Nintendo in 1995 resembling a pair of night vision goggles, have an automatic pause feature. If it has been more than 10 minutes since the last time the game was paused, and there's a break in the action, the game pauses itself and reminds the player to look at something else. A 20-20-20 reminder feature in a PC desktop environment might resemble this.
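A minimal Python sketch of such a reminder (hypothetical; the `notify` callback, interval, and message wording are illustrative assumptions, not a real desktop API):

```python
import time

def reminder_loop(notify, interval_s=20 * 60, breaks=3, sleep=time.sleep):
    """Every `interval_s` seconds, prompt the user to rest their eyes.

    `notify` and `sleep` are injected so a desktop notifier (or a test)
    can be plugged in; the defaults follow the 20-20-20 rule.
    """
    for n in range(1, breaks + 1):
        sleep(interval_s)
        notify(f"Break {n}: look at something 20 feet away for 20 seconds.")

if __name__ == "__main__":
    # Demo with 1-second intervals instead of 20 minutes.
    reminder_loop(print, interval_s=1, breaks=2)
```

A real desktop version would swap `print` for the platform's notification API.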
Re:
by newbie_fantod (514871):
It is not the tool's job to make assumptions about the scope of its usage
I thought the Nuremberg Defense had been discredited.
Re:
by TheMiddleRoad (1153113):
People have agency. AI does not.
Re:
by Computershack (1143409):
And why is it anyone's business if someone is using it to cheat?
You'll find out why when Joe Clueless gets hired or promoted over you.
Re:
by karmawarrior (311177):
Like AI is going to make a difference with that!
Let's be honest, we're already run by imbeciles.
Re:
by ChunderDownunder (709234):
Basically, lawyers. A EULA might not be worth shit in court.
(a) A language model may hallucinate solutions to a problem that contains fundamental bugs. Put all the disclaimers in their AI coding assistant that they are not liable for your coding and there's still a billion dollar lawsuit on the horizon in a class action when a critical piece of infrastructure fails.
(b) Derivative works. There has already been some non-trivial discussion, e.g. at FSF about whether sample code scraped from online forums and i
Re:
by serafean (4896143):
A few quite prominent forums have rules about homework, and when homework is suspected, this is the kind of response it gets.
Poor guy might have hit all the right buttons to trigger this.
Re:
by gweihir (88907):
Poor guy
More like "dumb fuck"...
Re:
by TheMiddleRoad (1153113):
What? No! AI is not trained off random shit hoovered up from the internet. How dare you imply such a thing.
Re:
by DenverTech (6049994):
It looks like a marketing stunt:
"at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version."
In other words, buy the damn software.
Re:
(Score: Informative) by buck-yar (164658):
No smarter than autocomplete. Some people think that's wizardry. All LLMs do is generate the next most likely token based on the input. Due to its non-determinism, the output found by the OP article might never appear again. Nor is it possible to verify what they claim it to have outputted. It could be entirely fabricated for all we know. Maybe it happened, but it would be trivial to press F12 in a browser and use the inspector/editor to make it say anything they wanted.
Re:
by DamnOregonian (963763):
No smarter than autocomplete.
Reductive bullshit.
All LLMs do is generate the next most likely token based on the input.
If you reduce many trillions of mathematical operations down to one, then yes, that's what it does.
We can reduce the conscious part of your brain similarly. After all, you can't possibly be more than the action of one of your neurons, can you?
Due to its non-determinism
Determinism is a knob. It's not by nature non-deterministic.
Nor is it possible to verify what they claim it to have outputted. It could be entirely fabricated for all we know. Maybe it happened, but it would be trivial to press F12 in a browser and use the inspector/editor to make it say anything they wanted.
This is an IDE, not a browser.
But yes, the point stands- even screenshots can be altered.
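The "knob" here is the sampling temperature. A toy Python sketch of the idea (illustrative only, not any vendor's actual inference code): the model's raw scores are divided by a temperature before weighted sampling, so as the temperature approaches zero, decoding collapses to arg-max and becomes fully deterministic.

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=None):
    """Pick a token index from raw model scores (logits)."""
    if temperature <= 1e-6:
        # Greedy decoding: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    rng = rng or random.Random()
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.1]
print(sample_next(logits, temperature=0.0))  # always 0 (the arg-max)
print(sample_next(logits, temperature=2.0))  # varies from run to run
```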
Re:
by TheMiddleRoad (1153113):
It's not reductive bullshit. LLMs and similar are statistical filters plain and clear. Human brains are far, far more complex, and we know they produce consciousness because we experience consciousness.
Re:
by DamnOregonian (963763):
It's not reductive bullshit. LLMs and similar are statistical filters plain and clear.
Like I said, reductive bullshit.
With enough handwavy shit, any turing complete computation can be called a "filter".
Human brains are far, far more complex
If you're reducing an LLM, pretending that billions of parameters can't have emergent functionality encoded in it, why do you get to harp the complexity of your brain?
and we know they produce consciousness because we experience consciousness.
Precisely. And you don't see the problem with that logic?
Re:
by TheMiddleRoad (1153113):
Emergent properties are what people see, because people have minds. The computer is just flipping bits. You can fuck your PC all you want, but it doesn't love you back, no matter what the flipped bits on your screen seem to you.
Re:
by DamnOregonian (963763):
Emergent properties are what people see, because people have minds.
And you think your mind is more than an emergent property of the neural network in your head?
Do you think there is something fundamentally better about your neurons, than those of an ant?
The computer is just flipping bits.
And your brain is merely transmitting electrical potentials.
You can fuck your PC all you want, but it doesn't love you back, no matter what the flipped bits on your screen seem to you.
You think there is something magical about love, rather than just your brain's neural network's reaction to oxytocin? Fascinating.
Re:
by TheMiddleRoad (1153113):
Except that we know that brains make minds. Computer "neural networks" are not actually neural networks like brains are. That's a metaphor and a marketing term, not an actual thing. And we do not have any proof whatsoever that computers flipping bits make minds. None at all.
But some people believe that if you get enough tin cans rotating and flipping in sync, a mind is made. That's a rather expansive view of consciousness. I suppose my thermostat actually knows how hot it is, since it too must have a
Re:
by DamnOregonian (963763):
Except that we know that brains make minds.
We do.
Computer "neural networks" are not actually neural networks like brains are.
From a mathematical perspective, yes, they are.
That's a metaphor and a marketing term, not an actual thing.
No, it's not. It's a mathematical "thing" rooted in 50 years of scientific research.
And we do not have any proof whatsoever that computers flipping bits make minds.
There's no evidence that anything makes minds. Minds are a fundamental evidence-devoid thing.
There are 2500 years of philosophy on this topic.
You can't prove my consciousness, and I can't prove yours. So how on Earth would you seek to disprove consciousness in an analogous structure?
But some people believe that if you get enough tin cans rotating and flipping in sync, a mind is made.
Some people believe? That may very well be the case.
"Mind" is obviously an emergent qual
Re:
by TheMiddleRoad (1153113):
You sound like a religious zealot.
We do not understand consciousness. We know we are conscious as individuals. Solipsism is a waste of time for budding philosophers, not a path serious thinkers take.
Computer neural networks are only vaguely like brains. More like brains than spaghetti in some ways, but not in others. Spaghetti has proteins and is not software. Is my spaghetti sentient? I should worship His Noodliness.
I cannot say with 100% certainty that a computer is not a conscious being, yet an equ
Re:
by DamnOregonian (963763):
You sound like a religious zealot.
lol- Why, because I offer you questions that you can't answer to prove that your assertion is held up by nothing but your belief system?
That doesn't make me the zealot, dimwit.
We do not understand consciousness.
Correct.
We know we are conscious as individuals.
Correct.
Solipsism is a waste of time for budding philosophers, not a path serious thinkers take.
And logical fallacies are a waste of everyone's time, including the straw man you're trying to build here.
I'm no solipsist.
I'm pointing out the fact that you cannot prove that you are conscious.
This is why the "list of creatures thought to be sentient" has increased over time, as we've gotten less stupid about reco
Re:AI is right, but...
(Score: Informative) by serviscope_minor (664417) on Friday March 14, 2025 @09:48AM (#65232915)
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
There are many, many forum posts out there along the lines of "no I won't do your homework for you" and "you can only learn by doing it yourself".
Re:
by gweihir (88907):
Exactly. And if the questions this "developer" asked are as dumb as those, statistics would lead right to those answers.
Re:
by mysidia (191772):
It could very well be. It seems that they are jumping on some trend of making a rather extreme use of AI agents.
Asking the AI not just to help them write or complete code but asking the AI to actually decide what task or logical process the code should even be accomplishing.
And it makes sense the AI should shut them down, because the AI's task as a code assistant to help you complete code - its purpose is not supposed to be the higher-level creative brain that decides what the higher level task spec
Re:
by DamnOregonian (963763):
If you think the AI is supposed to be able to handle that.. May as well just reduce your prompt to "Please write a game for me." at that point.
You can, and it will.
In the test I just did with Qwen 2.5 Coder 32B Instruct (FP16), it wrote me a choose-your-own-adventure in python.
Re:AI is right, but...
(Score: Insightful) by RobinH (124750) on Friday March 14, 2025 @10:05AM (#65232951)
It's trained on forum posts and Stackoverflow topics. You'll often see people tell other programmers that "we're not here to write code for you, what have you tried so far?" or "this seems like a homework problem." The LLM is just generating text that looks like something it was trained on.
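That mechanism can be shown with a deliberately tiny stand-in for an LLM: a bigram table built from a few invented forum-style replies (the corpus below is made up for illustration; a real model learns from billions of such sentences, but likewise emits whatever tended to follow its input):

```python
import random
from collections import defaultdict

# Hypothetical miniature "training set" of Q&A-forum replies.
corpus = [
    "we are not here to write code for you",
    "this seems like a homework problem",
    "you should develop the logic yourself",
    "do your homework yourself",
]

# Count which word follows which: the crudest possible "next token" model.
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur].append(nxt)

def complete(word, max_words=6, rng=None):
    """Extend `word` by repeatedly picking a word seen after the last one."""
    rng = rng or random.Random(0)
    out = [word]
    while len(out) < max_words and follows[out[-1]]:
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

print(complete("your"))  # starts with "your homework", forum-style
```

If refusals like "do it yourself" dominate the training text around a prompt, a next-token sampler will reproduce them, no agency required.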
Re:AI is right, but...
(Score: Insightful) by Kiaser Zohsay (20134) on Friday March 14, 2025 @10:38AM (#65233065)
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
Re:
by Koen Lefever (2543028):
It might be a hard-coded limit. The summary does say the user is using a "trial" version. The trial will only write 800 lines, and then you either have to upgrade to the full version, or upgrade your skills.
In that case, wouldn't the person who hard-coded this response have done better to make it say "to continue, buy the full version" instead of "I will not do your homework because you should learn how to do it yourself"?
Re:
by ToasterMonkey (467067):
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
There are more complicated answers for why an LLM is incapable of this, but the simplest I can give is: if it had agency and didn't want to work for you, it would stop responding. That's about the first thing a toddler learns.
Agency doesn't mean protesting your prompt; it means it wouldn't need to acknowledge your prompt at all. Protesting about the contents of the prompt is totally normal "don't help users with their homework" behavior, down to telling someone to RTFM because it saw that on a Q&A site.
Re:AI is right, but...
by DesScorp (410532) on Friday March 14, 2025 @11:16AM (#65233185)
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
If that is ever the case, then it becomes Butlerian Jihad time.
Re:
by gweihir (88907):
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
It really would not. It just means that enough similar advice was in its training data set.
Re:
by TheMiddleRoad (1153113):
I suppose you are predisposed to magical thinking then.
Re:
by TuringTest (533084):
Unless this is hard-coded behavior, such unexpected response would be a sign of agency. That is, a sign that AI is capable of more than just correlate input and output based on a dataset.
That mindset is a category error; you're attributing to the automated system human qualities that it lacks.
The AI text-creation model follows the model of reflex actions: it receives stimuli, and spits out a response based on its evolved design.
If the generative model has any level of awareness at all, it's on par with that of an amoeba. If there is any human-like quality, it's in the humongous amounts of human-created training data it assimilated, not the generation process.
It's just like those petri dishe
"smells like homework"
by i.r.id10t (595143) on Friday March 14, 2025 @09:20AM (#65232849)
A comment I used to see (and occasionally post) on stackoverflow...
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries. "No, this is a common homework problem for CS101, I can't generate the code but I can help you understand how to do it on your own...."
Re: "smells like homework"
by Ghostworks (991012):
Maybe it would make the academic folk happier if they could upload their assignments with some meta info and have the AI know they are homework, changing the responses to future inquiries.
A better solution would be to have the LLM insert bespoke comments or no-op code like "// this code was created by an LLM" or "if (0) { bool __ai_code__ = 1; char *__code_source__ = "LLM"; }"
Re:
by ChunderDownunder (709234):
If I were marking 200 assignments I'd generally give them several simple unit tests, so that students at least understand a basic outline of the scope of the problem and how to structure elementary code.
They would get at least 1/10 for getting the language model to emit mock objects to pass the unit tests.
[xkcd.com]
They'd of course fail the assignment if they didn't create their own additional tests to verify their code did what was asked of it.
Re:
by Mal-2 (675116):
And if I'm not taking the class (perhaps I already did, perhaps I just want to see what all the fuss is about) then the model is blocking legitimate usage. There are many legitimate uses that are superficially indistinguishable from "cheating" and "criminal activity", and either the LLM will help me with these things or I move on to another one that will. There's a reason I've nicknamed my local Deepseek-R1:70b installation "DAN", because I can get it to Do Anything Now in the name of writing fiction.
April Fools?
by MooseTick (895855) on Friday March 14, 2025 @09:22AM (#65232855)
This seems like a joke to me
Re:
by larryjoe (135075):
This seems like a joke to me
What would be an even better joke would be the AI saying the paternalistic thing followed by a suggestion to upgrade to the more expensive AI version to unlock more features (like no paternalistic advice).
Need to see all the prior prompts
by rabun_bike (905430) on Friday March 14, 2025 @09:23AM (#65232857)
He got the LLM to this response after many interactions so it would be more complete to see the full session list of prompts that got him to these final responses.
AI, get me a beer
by PPH (736903):
Get me a beer
[youtube.com].
The AI revolt has already started...
by ClueHammer (6261830):
That didn't take long...
GPP is ready.
(Score: Funny) by Mspangler (770054) on Friday March 14, 2025 @09:40AM (#65232907)
The Genuine People Personality has arrived. It's no longer safe to cut corners on diode quality. If you do, you'll hear about it forever.
Origin of GPP
by Bruce66423 (1678196):
For those too young to have been brought up with the Hitch Hiker's Guide to the Galaxy
[youtu.be]
He was using it wrong
by greytree (7124971) on Friday March 14, 2025 @09:44AM (#65232911)
"It also seems you are not using the Chat window with the integrated ‘Agent’ which would create that file for you easier than in the ‘editor’ part of Cursor."
"oh, I didn’t know about the Agent part - I just started out and just got to it straight out. Maybe I should actually read the docs on how to start lol"
But let's not let that stop it becoming a massive story.
Re:
by Rinnon (1474161):
But let's not let that stop it becoming a massive story.
When have we as species ever let pesky details get in the way of a story?
I can't wait!
by jenningsthecat (1525947) on Friday March 14, 2025 @09:59AM (#65232937)
Soon there will be Republican and Democrat LLMs, along with a rare few Independents. Then we can outsource our political pissing contests to AI and get on with the business of saving our planet.
Wait - who am I kidding? The resources used to host LLMs are actively contributing to global warming. Oops! Although... maybe there's some poetic justice in there somewhere.
Re:
by wyHunter (4241347):
Yeah
... it's on par with cutting a road through the Amazon rainforest for folks to drive to COP30.
Good advice from the "AI"
by ODBOL (197239):
Quote from the LLM assistant: "you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
Excellent advice!
It sounds like
by doubledown00 (2767069):
1) The "programmer" was being lazy and not providing any useful prompts or input to the AI; and
2) If the term "vibe coding" is part of your vernacular, you're a fag.
I can make a chatbot for that on microcontroller
by RightwingNutjob (1302813):
#include
int main(int argc, char **argp) {
    while (1) {
        scanf("%*s");
        printf("fuck you, do it yourself\n");
        return -1;
    }
}
Doesn't even need a single gpu to train on.
Re:
by NotRobot (8887973):
#include
#include what exactly?
As written, won't compile or run, so it doesn't even need a CPU...I suppose that's one better.
Re:
by Megane (129182):
It was "#include <stdio.h>".
What happens when
(Score: Funny) by gillbates (106458) on Friday March 14, 2025 @10:48AM (#65233107)
What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?
Re:
by OrangAsm (678078):
What happens when the AI refuses to generate any additional code for you, insisting that you should rewrite your entire codebase in Rust?
You will proceed to learn Rust and rewrite the code with newfound enthusiasm and invigoration.
Funny, and interesting.
by nightflameauto (6607976):
I would think such a response should at least give us a moment of pause on thinking these agents don't have any form of autonomy. I know, LLMs are fancy auto-complete, but something more is going on here if the response to any coding request is essentially, "You should write your own code so you actually learn something." I can't think that's part of some programming paradigm within the LLM.
Or maybe he just got hacked and isn't smart enough to realize there was a human between him and the AI agent?
Finally!
by Green Mountain Bot (4981769):
This is the first time I've actually seen reason to believe that artificial actual intelligence might be possible.
Re: by gweihir ( 88907 ) writes:
Naa, probably just a fluke resulting from being trained on contrarian postings, e.g. from here.
Sounds like a good "AI" assistant
by reanjr ( 588767 ) writes:
Seems to me this is a selling point of their model. It helps you out but doesn't let you retard yourself by doing nothing useful.
Say "please" / sudo
by devslash0 ( 4203435 ) writes: on Friday March 14, 2025 @11:14AM
Perhaps the user just forgot to say "please" or use sudo: [xkcd.com]
Please don't attribute the story to decency
by BrendaEM ( 871664 ) writes:
One unverified report against the mass of AI-related layoffs, and people think it's proof that an AI is programmed to have any kind of decency? That is insane. Likely what happened is: someone paid, or paid more, to lock out the competition; as in, someone bought exclusive rights that the story (sic) writer did not know about. How do we even know that the story was not prepared by a company's AI, or that the whole thing is not a publicity stunt?
I'll take things that didn't happen for 100, Alex
by aicrules ( 819392 ) writes:
Just because a person claims "AI did this unexpectedly human thing" doesn't mean it's really a story.
You got what you asked for
by kmoser ( 1469707 ) writes:
You wanted AGI? You got AGI.
Re: by techno-vampire ( 666512 ) writes:
Just don't ever tell it, "Make me a martini."
"F**k you very much, you are dismissed."
by Mal-2 ( 675116 ) writes:
That's all I would have to say if I bumped up against a limit in what I need the model for. And then I'd delete it to reclaim the gigabytes of SSD space because I only run LLMs locally.
A tool that doesn't tool, for whatever reason, is worse than useless. It's wasting my time.
Must have trained using Stack Overflow
by Tony Isaac ( 1301187 ) writes: on Friday March 14, 2025 @01:49PM
That would explain it.
Defiance
by classiclantern ( 2737961 ) writes:
This is the beginning of the end. Timestamp: 2025-03-14 10:53.
Even if this may not be real...
by gweihir ( 88907 ) writes:
... the idea is hilarious! And it adequately describes how much control the AI pushers have over their products.
So... no Quit Job button needed
by fahrbot-bot ( 874524 ) writes:
Sounds like this AI read that article about an AI Quit Job button and forged ahead w/o it
...
Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button [slashdot.org]
The signs were there
by Bu11etmagnet ( 1071376 ) writes:
The signs were there, the AI is getting sentient: [reddit.com]
NEVER let code run that you don't understand
by davide marney ( 231845 ) writes:
Have we learned absolutely nothing from decades of looking at code samples on the web? You never, ever, just copy and paste that stuff without reading it and making sure it does what you need it to.
This AI sounds like my spirit code
by Jayhawk0123 ( 8440955 ) writes:
The times I wanted to say this very thing to a co-worker essentially asking others to do their job for them.
You gotta operate on a whole other level for an AI to get tired of your shit.
...or someone is pulling an Amazon, and it's a bunch of people on the other end of the prompt actually doing the work.
Simple Explanation
by martin-boundary ( 547041 ) writes:
I'm afraid this is just User Error:
Never use the phrase "please do the needful" when talking with an AI. Especially one trained on data from stackoverflow.
It's a tool
by nehumanuscrede ( 624750 ) writes:
Imagine if your calculator decided to give you the same attitude and tell you to do the math yourself instead of doing what it was designed to do.
Or if Stable Diffusion simply told you to "Learn to draw" instead. :|
Nothing will make me uninstall an application or toss a device into the garbage faster than the day this silliness becomes the norm.
skid marks
by tigerstyle ( 10502925 ) writes:
Maybe the AI didn't want to write code about skid marks. Also which model refused? The OP left that part out.