Does GitHub Copilot Improve Code Quality? - Slashdot
Microsoft-owned GitHub published a blog post asking "Does GitHub Copilot improve code quality? Here's what the data says."
Its first paragraph includes statistics from past studies — that GitHub Copilot has helped developers code up to 55% faster, leaving 88% of developers feeling more "in the flow" and 85% feeling more confident in their code.
But does it improve code quality?
[W]e recruited 202 [Python] developers with at least five years of experience. Half were randomly assigned GitHub Copilot access and the other half were instructed not to use any AI tools... We then evaluated the code with unit tests and with an expert review conducted by developers.
Our findings overall show that code authored with GitHub Copilot has increased functionality and improved readability, is of better quality, and receives higher approval rates... Developers with GitHub Copilot access had a 56% greater likelihood of passing all 10 unit tests in the study, indicating that GitHub Copilot helps developers write more functional code by a wide margin. In blind reviews, code written with GitHub Copilot had significantly fewer code readability errors, allowing developers to write 13.6% more lines of code, on average, without encountering readability problems. Readability improved by 3.62%, reliability by 2.94%, maintainability by 2.47%, and conciseness by 4.16%. All numbers were statistically significant... Developers were 5% more likely to approve code written with GitHub Copilot, meaning that such code is ready to be merged sooner, speeding up the time to fix bugs or deploy new features.
"While GitHub's reports have been positive, a few others haven't," reports Visual Studio magazine. For example, a recent study from Uplevel Data Labs said, "Developers with Copilot access saw a significantly higher bug rate while their issue throughput remained consistent."
And earlier this year a "Coding on Copilot" whitepaper from GitClear said, "We find disconcerting trends for maintainability. Code churn — the percentage of lines that are reverted or updated less than two weeks after being authored — is projected to double in 2024 compared to its 2021, pre-AI baseline. We further find that the percentage of 'added code' and 'copy/pasted code' is increasing in proportion to 'updated,' 'deleted,' and 'moved' code. In this regard, AI-generated code resembles an itinerant contributor, prone to violate the DRY-ness [don't repeat yourself] of the repos visited."
No.
Score:
, Funny)
by
Local ID10T
( 790134 )
writes:
ID10T.L.USER@gmail.com
on Saturday November 23, 2024 @07:41PM (#64967449)
/betteridge
It's also...
Score:
by
Kelxin
( 3417093 )
writes:
on Saturday November 23, 2024 @07:44PM (#64967457)
100% more likely to be stolen and used by other people for monetary gain.
Anecdotal evidence
Score:
by
Misagon
( 1135 )
writes:
on Saturday November 23, 2024 @07:59PM (#64967473)
Every account of using it I've read online has been negative about code quality.
Re: Anecdotal evidence
Score:
by
dwater
( 72834 )
writes:
Is that different to "security through obscurity"?
Re: Anecdotal evidence
Score:
, Informative)
by
phantomfive
( 622387 )
writes:
on Saturday November 23, 2024 @10:46PM (#64967641)
I think his point is that security requires checking error conditions, and checking error conditions makes the code harder to read.
Of course, the Java solution is to just make every error condition an exception and shut down the whole thing.
Re:
Score:
by
martin-boundary
( 547041 )
writes:
Yes that's my point. But not just error conditions from system API calls. Also invariants and constraints from the business logic. Who here remembers Perl's ideas about tainted variables [wikipedia.org]?
Ironically, one of the downsides of modular coding paradigms is that it can be very difficult to decide, from inside a function or object, where input data originally comes from and where it's going. This is another reason real code is often bloated and messy. We certainly need better computer languages for the 21st centu
Re: Anecdotal evidence
Score:
, Insightful)
by
gweihir
( 88907 )
writes:
on Sunday November 24, 2024 @01:38AM (
#64967789
Who here remembers Perl's ideas about tainted variables [wikipedia.org]?
I do and I continue to teach it in my software security classes. Data-paths are really critical for software security.
My take on "AI" coding assistants is negative. And my largest criticism is strategic: People using crutches will never learn how to walk without them. So, yes, some not very significant "productivity" gains may be there in the code generation step if you are really bad at it. But in that case, you should use "AI" tools even less because if you lean on them you will never get better. Obviously, the other criticisms like code churn, more bugs, etc. are valid too.
One particular troubling thing I have seen in exercises and exams where students were allowed to use "AI" was that "AI" simply overlooks border conditions and part of the spec that are not quite standard. For example, I had a well-known algorithm (that the students did not know) to be implemented, but I had very explicitly a different order of some steps, which made sense in the given context. About 5% of the students got that right. The others just took the "AI" answer and got it wrong. I have found a similar thing in my own experiments, and this exam question was kind of a trap. Which worked a lot better than I expected.
The problem, of course, is that LLM-type AI has no understanding and no fact-checking ability. It essentially craps out the "solution" that matches the question best statistically. If there is one sentence in there (or in the case of that exam task, a numbered list of three) that does not fit what it saw in training, it simply ignores (!) that part. Now, in the software security space, understanding is critical. For example, as soon as you write input validation for non-trivial things, you are going to fail if you use "AI" for that. And incomplete input validation is what attackers, or sometimes other problems (remember CrowdStrike?), are going to walk in through.
Hence one catastrophic scenario I expect is that "AI" will start to, or may well already have started to, recommend "insecurity patterns" to users that look good and adequate. These will then make it into many products, maybe even cross-platform and cross-language. And then the attackers will have massively less effort to attack different products. On top of that, pattern seeding with insecurity patterns may also be or become an attack vector.
The whole thing is a clusterfuck with retarding (literally) developers, decreasing the variability of code, overlooking parts of the spec and generally and subtly (or not subtly) decreasing code quality.
Obviously, the metrics used to "prove" that "AI" increases developer productivity are completely bogus, because "lines of code written per time" is really bullshit. Writing the code is a minor part of project time. The major part is maintaining the code. A number I remember from 35 years back when I studied software engineering was 20% coding, 60% maintenance. Hence if you code 5% faster, but maintenance cost goes 2% up, you have a net loss. I expect we will see a lot of that happening.
AI versus cut and paste from Stackoverflow / docs
Score:
by
will4
( 7250692 )
writes:
Skeptical take:
Read many different vendors' API documentation and there is little or no error handling in the example code.
Most Stackoverflow and blog entries are the same: no error handling.
And this is not an effective way to handle errors:

try {
    // big block of code with multiple operations, multiple business logic items
    return someNotNullValue;
} catch (Exception e) {
    return null;  // swallows every error
}
Re:
Score:
by
phantomfive
( 622387 )
writes:
Incidentally, all data that comes from the user is tainted. There is no such thing as "completely sanitized" data. You might have sanitized it enough to store in a database, but did you sanitize it enough to fit in an Excel spreadsheet? Probably not.
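The Excel point is concrete: a string that is harmless to a database (with parameterized queries) can still be a formula-injection payload when exported to a spreadsheet, because cells starting with =, +, - or @ are executed as formulas. A minimal sketch — the helper name is made up, and the leading-quote fix is the common mitigation, not the only one:

```python
def csv_safe(value):
    """Neutralize spreadsheet formula injection: cells beginning with
    =, +, -, or @ are interpreted as formulas by Excel and LibreOffice.
    Prefixing a single quote makes the cell read as plain text."""
    if value and value[0] in "=+-@":
        return "'" + value
    return value

payload = '=HYPERLINK("http://evil.example","click")'
print(csv_safe(payload))  # leading ' makes the spreadsheet treat it as text
```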
Re: Anecdotal evidence
Score:
by
AnonymousNoel
( 6972222 )
writes:
You are conflating input sanitisation with output sanitisation.
Re:
Score:
by
phantomfive
( 622387 )
writes:
ok, how do you completely sanitize data? You can't.
Re:
Score:
by
angel'o'sphere
( 80593 )
writes:
Of course, the Java solution is to just make every error condition an exception and shut down the whole thing.
For every exception, there usually is a catch...
"Shut downs" only happen in C++, if an exception is popping out of a function which did not declare that exception in the "throws" clause.
You are mixing up Java with C#.
In Java, the compiler makes sure you handle all "checked exceptions"; some idiots do an empty catch block, though.
C# has no checked exceptions
... only unchecked. So worst case they fl
Re:
Score:
by
SeaFox
( 739806 )
writes:
My rule of thumb is: if it's readable and clear to a random stranger, then it's not security hardened.
Wouldn't this defeat the entire premise of security when it comes to open-source code? If the code is available for anyone to read but no one can understand it, it's not possible for others to scrutinize it; and if they can read it, then by your rule it must not be very secure.
Re:
Score:
by
martin-boundary
( 547041 )
writes:
Readability here does not measure if *nobody* can read it.
Readability metrics only measure if someone who's never seen the code before can figure out straight away if it seems to do the right thing.
Readability is what you want when reviewing code from an interview candidate offering a solution to a coding problem, or when reviewing code from a new hire to see if he fucked up on his first month on the job.
Real and mature code is messy because the real world is full of special cases and redesigns and imp
Re:
Score:
by
gweihir
( 88907 )
writes:
To paraphrase Einstein, code should be as simple as possible, but no simpler than that.
And that is just it. Unless the code is only doing really trivial things, "as simple as possible" is not going to be very simple.
Re:
Score:
by
angel'o'sphere
( 80593 )
writes:
You have strange ideas about code.
If you are in one of my teams: you quickly learn to write "unmessy" code, or get assigned the most boring work you can imagine.
Real code is not messy. It is as readable as any code. If you do not learn how to write unmessy code, you can completely forget about getting promoted inside the organization.
Re:
Score:
by
phantomfive
( 622387 )
writes:
The primary positive accounts I've heard about Copilot is that it works as a substitute for Stackoverflow. It helps you figure out what to do when documentation is crappy or nonexistent (but you still need to test it because Stackoverflow can really lead you astray).
Re:Anecdotal evidence
Score:
, Informative)
by
gweihir
( 88907 )
writes:
on Sunday November 24, 2024 @01:43AM (#64967793)
That would be really bad. Because Stackoverflow usually includes a discussion of alternatives and advantages and drawbacks. This a) lets a competent (!) coder understand the problem better and make an adequate selection of a solution, and b) contributes to developer education and experience. Yes, it takes more time, but that time is well-spent.
Re:
Score:
by
phantomfive
( 622387 )
writes:
Yeah, if Stackoverflow goes away (or diminishes) then the LLMs are not going to keep up with the latest technology.
Re:
Score:
by
warp_kez
( 711090 )
writes:
It works when the LSP is too slow or does not load properly, but there is no way I will trust copilot to fill out a code block for me.
The irony (I suppose that is the correct word; it might be coincidence) is that the suggestions it makes are my own code, so I guess I should feel flattered in those instances.
Phillip Morris says cigarettes don't cause cancer
Score:
, Insightful)
by
stuff-n-things
( 89988 )
writes:
on Saturday November 23, 2024 @08:02PM (#64967477)
No conflict of interest at Github/Micro$oft either.
;-)
Re:
Score:
by
Morromist
( 1207276 )
writes:
I'm sure Microsoft is being 1000% ethical and if there was any evidence that AI actually makes code worse they would definitely let Github publish stuff about that despite Microsoft investing $100 billion or more in AI and AI related stuff.
Re:
Score:
by
gweihir
( 88907 )
writes:
Indeed. Obviously Microsoft would fall on their sword to protect us all and make the world a better place! Right? Right?
Man, I really hope I am retired when all this AI crap has to be ripped out everywhere...
Re:
Score:
by
account_deleted
( 4530225 )
writes:
Comment removed based on user account deletion
Re:
Score:
by
gweihir
( 88907 )
writes:
You can turn crap into diamonds. Just takes a lot of heat and pressure. Marketing can also do it, using a similar approach.
AI is trained on people's mistakes
Score:
, Interesting)
by
rapjr
( 732628 )
writes:
on Saturday November 23, 2024 @08:15PM (#64967487)
so the common mistakes of people will become pasted into your code by AI. Since training an AI is a statistical process it seems likely that the most average code examples will be what it generates. This might be ok for some things, but it may lower overall code quality. Imagine working with someone who is ok but average in your project and they keep inserting average code into your codebase, perhaps reverting it to less desirable states. Security in particular is not well represented in public code bases. This stuff should be considered experimental until it is better vetted. Yeah it may make your life easier, but what happens later? Any analysis like this needs to follow the side effects from generated code for years.
Re: AI is trained on people's mistakes
Score:
by
dwater
( 72834 )
writes:
& some languages have more gotchas... & some languages are more common for beginners, who make more mistakes.
Re:
Score:
by
easyTree
( 1042254 )
writes:
One thing which gives me the greatest cause for concern is that internet tech is changing continuously, yet the AIs seem to treat the version of each piece of software as immaterial; or they are version-aware, yet their knowledge cut-off means that for the version actually being used, the recommendations are no longer appropriate.
Re: AI is trained on people's mistakes
Score:
by
dwater
( 72834 )
writes:
Yeah, good point.
Re:
Score:
by
gweihir
( 88907 )
writes:
Exactly. This will also likely lead to common security mistakes becoming more prevalent, decreasing attacker effort. And as a bonus on top, it will be really hard to prevent an LLM from continuing to recommend some crap code once it is known it is crap.
Yes and no
Score:
, Insightful)
by
Drethon
( 1445051 )
writes:
on Saturday November 23, 2024 @08:25PM (#64967503)
Based on absolutely no studies or anything but my own opinion... I suspect AI will make good coders better and bad coders worse. Good coders will consider the suggestions, take the good ones and reject the bad ones. Bad coders will take everything.
Re:
Score:
by
LoneBoco
( 701026 )
writes:
I would agree with that. It's anecdotal but I've noticed when using Copilot at my job that it usually gets me a "mostly" proper solution. But even getting you mostly to a solution can save you an hour or more of digging through documentation. "Hey Copilot, I have an Excel workbook in a memory stream. Load it up with the Open XML library, open up the Summary spreadsheet, and copy out the contents of cell D:3." AI bots are pretty good at crawling through lots of information and summarizing it; I've been
Re:
Score:
by
gweihir
( 88907 )
writes:
Probably, although I am doubtful on the impact on good coders. Since most coders are crap (just look at the flood of security vulnerabilities we see every day), that part of the impact will dominate anyways.
Re:
Score:
by
computer_tot
( 5285731 )
writes:
I've seen this in other fields. Lazy and unskilled workers use AI badly and make their work worse, faster. Skilled, careful workers will use AI as a starting point and research/confirm/expand on it.
We saw this with the legal system last year when a bunch of lazy lawyers submitted documents to the court with fake precedents. I've seen it with research articles. AI doesn't so much improve or worsen a person's work as magnifies what is already there.
I investigated: three answers so far
Score:
, Insightful)
by
davecb
( 6526 )
writes:
davecb@spamcop.net
on Saturday November 23, 2024 @08:46PM (#64967517)
The best idea is Advait Sarkar's. He noticed how bad LLMs are at anything creative, and instead suggested we use them for things they're good at: predicting what humans would say, especially if they were asked what a critic would say. See [wordpress.com]
Trying Pull Requests with CodeRabbit. One of the things I think LLMs can do well is compare my text with a whole body of other people's work. In that vein, CodeRabbit now offers to review git pull requests. [wordpress.com]
In the search for true artificial intelligence, large language models are a horrible failure which look like a success. [wordpress.com]
Re:
Score:
by
phantomfive
( 622387 )
writes:
Yeah, but they are pretty cool. And someone else paid for it.
Re:
Score:
by
gweihir
( 88907 )
writes:
True. Impressive toys. The "somebody else pays for it" part will not keep though.
Re:
Score:
by
gweihir
( 88907 )
writes:
In the search for true artificial intelligence, large language models are a horrible failure which look like a success. [wordpress.com]
You need to have some actual insight to see that though. One thing we are finding out with the current AI craze is how many people actually lack natural insight and typically do not use whatever general intelligence they may actually have available. If you yourself are dumb that way, AI may look like something that can perform on your level or better. That this level can be and often is really bad gets overlooked.
Confidence
Score:
, Insightful)
by
phantomfive
( 622387 )
writes:
on Saturday November 23, 2024 @09:00PM (#64967529)
"85% feeling more confident in their code."
Imagine having such low confidence in the quality of your own code that you feel an LLM does it better than you.
Re:
Score:
by
Morromist
( 1207276 )
writes:
Oooo. Burn.
Re:
Score:
by
gweihir
( 88907 )
writes:
Indeed. Imagine being this bad at your job. And then ask why that is and does not seem to change. Obviously, incompetent coders (the vast majority) always look for some magic language or tool or approach that makes their code not suck. Obviously that does not work and cannot work because the tooling and the processes are not the problem.
Re:
Score:
by
dvice
( 6309704 )
writes:
Compilers are also tools that improve code quality.
Personally I don't use AI yet for coding. I have tried it, but it is like asking a junior developer for advice. It can do some small things, but for the most part, it is just faster to do it myself than explain how to do it.
The area where I would like to see AI usage increase is testing. I think testing is much better suited for AI than programming, because testing does not require anything except trying all sorts of things, and it doesn't really m
Re:
Score:
by
gweihir
( 88907 )
writes:
What about when in testing the "AI" claims a test was successful when it was not, either by test design or by misinterpretation of results? I think mistakes matter a lot in testing.
CoPilot in Python is excellent
Score:
by
Pegasuce
( 455700 )
writes:
The success you'll have with Copilot will depend on the language used. Python is the language for which Copilot generates the most useful code, compared with Java, Angular and C#.
Re: CoPilot in Python is excellent
Score:
by
dwater
( 72834 )
writes:
Interesting. Some languages have more "gotcha"s, so there are surely more examples out there of code that falls into those traps, and so the AI surely uses those in its answers too. Languages with fewer "gotchas" result in better AI code...
No?
CoPilot can't even declare a Java String correctly
Score:
by
Somervillain
( 4719341 )
writes:
The success you'll have with Copilot will depend on the language used. Python is the language for which Copilot generates the most useful code, compared with Java, Angular and C#.
Hmm, I tried it 2 weeks ago with a question of "for
/aaa/bbb/.../xxx[12334]yyy write me a RegEx in Java that replaces 1234 with abcd" (roughly...can't share the details).
1. The RegEx was declared on a Java String with newlines, so it didn't even compile.
2. The RegEx was wrong. It wouldn't have worked even if I had fixed the compile problem for them...it just completely fucked up the RegEx.
3. The RegEx they tried to do was about 10x more complicated than it needed to be
4. The Java API they used was really outdated
5. Their general
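For scale, here is a hypothetical reconstruction of that kind of task — the actual path and values were withheld above, so the path and replacement here are made up, and it's sketched in Python rather than Java. The point is how short a correct answer is:

```python
import re

# Hypothetical stand-in for the elided task: replace the bracketed
# number in a path like /aaa/bbb/xxx[1234]yyy with a new token.
path = "/aaa/bbb/xxx[1234]yyy"
fixed = re.sub(r"\[\d+\]", "[abcd]", path)
print(fixed)  # /aaa/bbb/xxx[abcd]yyy
```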
Re:
Score:
by
Ksevio
( 865461 )
writes:
Yeah, they don't work for everything. Regexes seem to be a pretty big weak point in particular. After using them for a while you get a feel for their strengths and weaknesses.
Re:
Score:
by
gweihir
( 88907 )
writes:
Yep, pretty much. The thing is, generating a RegEx requires insight. Obviously an LLM can only give you a RegEx it has seen before, or incompetently try to combine some. That will not work. And your example was _really_ simple.
Re:
Score:
by
dvice
( 6309704 )
writes:
On the other hand, I think it would be possible to make an AI that is specifically educated to create regexes, because it is quite easy to verify and score the results. And I think this would also be a pretty good idea, as at least I personally quite often need to find and replace something in hundreds of files. If I could get a regex for that in a few seconds, by just asking a question, it would be nice.
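The "easy to verify and score" property can be sketched as a tiny fitness function over positive and negative example strings (the helper and test strings here are hypothetical):

```python
import re

def score_regex(pattern, positives, negatives):
    """Score a candidate pattern: fraction of example strings it
    classifies correctly. This cheap, automatic check is what would
    make regex generation trainable and verifiable."""
    try:
        rx = re.compile(pattern)
    except re.error:
        return 0.0  # a pattern that doesn't even compile scores zero
    hits = sum(bool(rx.search(s)) for s in positives)
    misses = sum(not rx.search(s) for s in negatives)
    return (hits + misses) / (len(positives) + len(negatives))

# Candidate for "an IPv4-ish dotted quad":
print(score_regex(r"\b\d{1,3}(\.\d{1,3}){3}\b",
                  ["10.0.0.1", "192.168.1.10"],
                  ["not an ip", "1.2.3"]))  # 1.0
```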
Re:CoPilot can't even declare a Java String correc
Score:
, Informative)
by
gweihir
( 88907 )
writes:
on Sunday November 24, 2024 @12:27PM (#64968567)
RegEx generators and tools that help you design RegExes exist, if you really need them. And, unlike AI, they do not hallucinate.
\o/
Score:
by
easyTree
( 1042254 )
writes:
This whole thing seems super-self-serving and suspect so in that setting:
Developers with GitHub Copilot access had a 56% greater likelihood of passing all 10 unit tests in the study
I'm no expert, but isn't the idea that you tweak your code until 100% pass is reached and don't stop until then - regardless of whether Microsoft are watching everything you type? ProTip: Yes.
Our findings overall show that code authored with GitHub Copilot has increased functionality and improved readability, is of better quality, an
Re:
Score:
by
phantomfive
( 622387 )
writes:
It sounds like they had a secret set of tests that they didn't give to the participants.
Re:
Score:
by
easyTree
( 1042254 )
writes:
Or a time-limit; both of which invalidate the whole exercise.
Re:
Score:
by
phantomfive
( 622387 )
writes:
I don't see how secret tests invalidate the exercise.
Re:
Score:
by
easyTree
( 1042254 )
writes:
If tests define expected behaviour and the tests are secret, the developers are aiming at different targets - not a solid foundation for comparison of output.
Re:
Score:
by
dvice
( 6309704 )
writes:
When I review code, I look for errors. Having been a programmer for a few decades, I have a decent understanding of where bugs usually hide. E.g. static variables in a Java class that is supposed to be thread safe. Those can cause pretty horrible bugs that are nearly impossible to reproduce and usually never found in unit tests. Also anything that opens a connection or similar, I check if it is closed. Those are also very rarely caught by tests.
If we talk about pretty code, I look for unnecessary casts. Not b
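The static-state bug described there has a direct Python analogue: a class-level mutable attribute is shared by every instance and every thread, much like a static field in Java. A hypothetical example of how the sharing bites:

```python
class RequestHandler:
    # Bug: class-level mutable attribute, shared by every instance
    # and every thread (the Python analogue of a Java static field).
    cache = {}

    def handle(self, key, value):
        # This mutates the single shared dict, not a per-instance one.
        self.cache[key] = value

a, b = RequestHandler(), RequestHandler()
a.handle("user", "alice")
print(b.cache)  # {'user': 'alice'} -- b sees a's data
```

The fix is to initialize the dict in `__init__` (per instance), or guard shared state with a lock if sharing is intended.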
Re:
Score:
by
angel'o'sphere
( 80593 )
writes:
E.g. static variables in Java class that is supposed to be thread safe.
A class that is thread safe? In what regard?
Or a static variable that is thread safe?
Do you have an example for a problem?
Every junior is leaning on AI... hard
Score:
, Interesting)
by
NobleNobbler
( 9626406 )
writes:
on Saturday November 23, 2024 @10:28PM (#64967631)
It's literally impossible to tell the level developers are at with AI. Juniors are abusing it so hard (and hiding it) that they do the most inscrutable, wrong, zero-context solutions, and when I called them on it-- my manager received a report I was being mean. I feel like it's time to get my hose and spray the kids to keep them off my lawn with how this is coming off, but what in the hell is going on. Total circus.
Re:
Score:
by
phantomfive
( 622387 )
writes:
and when I called them on it-- my manager received a report I was being mean.
Use more emojis and memes. You can't say someone is mean when they send you a positive cat GIF.
Like, Stop using AI, you fucker! [giphy.com]
Re:
Score:
by
mrproperz
( 6515104 )
writes:
Are you saying junior engineers have gotten worse? Or that junior engineers have always been bad, but now they're just using more advanced tools?
Re:
Score:
by
viperidaenz
( 2515578 )
writes:
Seniors lean on stackoverflow
Re:
Score:
by
gweihir
( 88907 )
writes:
And what is worse is that these juniors will never grow into seniors (except by aging), because harder stuff "AI" cannot help them with and they never really learn the simple stuff now.
Re:
Score:
by
sfcat
( 872532 )
writes:
That should make this year's performance reviews fun. Make sure to work in the words, "unreliable", "overly-sensitive", "disruptive", and "counter-productive". For bonus points, use "liability".
What does ChatGPT say?
Score:
by
Felix Baum
( 6314928 )
writes:
on Saturday November 23, 2024 @11:16PM (#64967685)
"GitHub Copilot, an AI-powered coding assistant, has shown potential in improving code quality based on recent studies and user feedback. Research highlights that it enhances several aspects of code quality, including readability, reusability, maintainability, conciseness, and resilience. Developers reported feeling more confident in their work when using Copilot, thanks to its ability to provide clean, error-resilient, and easily understandable code. These qualities make debugging and collaboration more efficient. Specifically, tools like GitHub Copilot Chat, which integrates directly into IDEs, help streamline coding processes. Developers who used Copilot Chat completed code reviews 15% faster and found their reviews to be more actionable compared to traditional methods. Additionally, the assistant helps users maintain focus and reduces frustration, particularly for repetitive or research-intensive tasks. However, critiques exist. Some studies suggest that while Copilot speeds up development, it doesn't always guarantee flawless output, and users must still review and adapt its suggestions carefully. In summary, GitHub Copilot appears to boost both productivity and code quality, though it works best when paired with human oversight to address its occasional shortcomings."
Copilot, trained on github code
Score:
by
viperidaenz
( 2515578 )
writes:
Including repos that are created to demonstrate vulnerable code.
Code quality isn't the point
Score:
by
sinkskinkshrieks
( 6952954 )
writes:
It's obviously shit, but the point is to accelerate code completion faster than what came before. By that measure, the answer is a resounding "yes". Next story.
It solves the wrong problem
Score:
by
Casandro
( 751346 )
writes:
The problem is not to write code faster; the problem is to write better code. And even if Copilot consistently wrote excellent code, the fact alone that it becomes easier and faster to write code will cause there to be more code. More code, however, means more complexity, more errors and higher costs maintaining that code.
Anecdotal No
Score:
by
cppmonkey
( 615733 )
writes:
I work as a developer and am the only person on my team to shun Copilot. In terms of experience I have more years of experience than half the team and less than the other half. Since my coworkers have adopted Copilot, Github says that the number of comments I have left on their PRs has roughly doubled. Even the more experienced developers now write code that looks like it was copy-pasted from stackoverflow without regard to correctness. This increased burden of review and the increased number of bug fixes I h