AI programming tools are ruining code quality!

in Programming & Dev · 8 months ago

Since Copilot went live, a lot of people have begun using it to help them write code.
But the consequence has been that code quality is dropping. I know my own code quality is nothing special, and I don't suppose most people are producing better code than I am, which means most code out there is of fairly average quality at best. Copilot and similar tools are essentially built on frequencies of which tokens tend to follow which others; whether those tokens are whole symbols or individual characters is not so important.

Since it is a statistical model, garbage in means garbage out; there is no magic here.
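To make that concrete, here is a toy sketch of my own, nothing like Copilot's real architecture: a bigram counter that just records which token follows which in its training text. Whatever patterns dominate the input, good or bad, are exactly what come back out.

```python
from collections import Counter, defaultdict
import random

# Toy "training data": the model only ever sees these token frequencies.
training_code = "x = x + 1 ; y = y + 1 ; z = z + 1"
tokens = training_code.split()

# Count how often each token follows each other token.
follow_counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follow_counts[current][nxt] += 1

def next_token(current):
    """Pick the next token, weighted by how often it followed `current` in training."""
    counts = follow_counts[current]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a few tokens: the output can only echo the statistics it was fed.
token = "x"
generated = [token]
for _ in range(6):
    token = next_token(token)
    generated.append(token)
print(" ".join(generated))
```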

I do not trust these AI models very much. How well has the code they studied actually been vetted? For Microsoft there are a few problems. For one, the training data was open to tampering for a long time, so anybody could have poisoned it, and the quantities involved are so vast that any tampering would almost certainly go undetected. The models themselves are also black boxes: there is currently no practical way to see what goes on inside them. Running a model amounts to a long series of choices based on weights, and any single weight is easy enough to understand; what makes real investigation virtually impossible is the sheer number of points where such a weight gets applied.
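To illustrate what I mean about the weights: any single weighted decision is trivial to inspect; it is only the scale that makes the whole thing opaque. A quick sketch (the parameter count is GPT-3's published figure, used purely for scale):

```python
# A single artificial "neuron" is just a weighted sum you can check by hand.
weights = [0.2, -0.7, 1.3]
inputs = [1.0, 0.5, 2.0]
activation = sum(w * x for w, x in zip(weights, inputs))
print(activation)  # 0.2 - 0.35 + 2.6 = 2.45, perfectly understandable on its own

# But a model like GPT-3 applies on the order of 175 billion such weights,
# so tracing why a particular suggestion came out is practically hopeless.
parameter_count = 175_000_000_000
print(f"{parameter_count:,} weights to reason about")
```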

Another apparent problem with these models is that they regurgitate potential credentials for various services. People do commit credentials and push them to GitHub, but that is almost always a mistake; very rarely is it intentional. The fact that these models then reproduce that information in their suggestions does not seem good at all.
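For what it's worth, a lot of these leaks are easy to spot mechanically. Here is a rough sketch of the kind of check I have in mind, run over files before committing them or over a generated snippet before accepting it; the patterns are my own illustrative guesses, not an official or exhaustive list:

```python
import re
import sys

# Illustrative patterns only -- real secret scanners use much larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                              # looks like an AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),   # generic api_key = "..."
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),          # embedded private key
]

def looks_leaky(path):
    """Return True if the file contains anything resembling a credential."""
    text = open(path, errors="ignore").read()
    return any(pattern.search(text) for pattern in SECRET_PATTERNS)

if __name__ == "__main__":
    flagged = [path for path in sys.argv[1:] if looks_leaky(path)]
    for path in flagged:
        print(f"possible credential in {path}")
    sys.exit(1 if flagged else 0)  # a non-zero exit can block the commit in a pre-commit hook
```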

There is also the problem that all these models occasionally hallucinate, that is, they invent something out of thin air, and they cannot tell you that it is a hallucination. For coding, this means they might confidently call an API that simply does not exist.
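A contrived example of what such a hallucination can look like in practice: the suggestion reads plausibly, but `json.parse` does not exist in Python's standard json module (the real function is `json.loads`), so it blows up the moment it runs.

```python
import json

def load_config(text):
    try:
        return json.parse(text)   # hallucinated: this attribute does not exist in Python's json module
    except AttributeError:
        return json.loads(text)   # the real API

print(load_config('{"debug": true}'))  # prints {'debug': True}
```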

So how do they ruin code quality?

Well, for one, that is the conclusion of at least one study. I have a hypothesis as to why that might happen (really just one idea).
I would argue that the code a model generates cannot, in general, be better than the average quality of its training data, and it may well be worse. When somebody then uses Copilot or a similar tool, they add code of no better than average quality to the growing pool of training data. The tools learn from that code, so the quality drops ever so slightly, and each further round of training lowers the quality of the next model a little more.
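To show why I think this loop drifts downward, here is a toy numeric sketch; the numbers are entirely made up for illustration, not measurements of any real model:

```python
# Each generation, the "model" produces code slightly below the current average
# quality of its training data, and that output is mixed back into the data.
average_quality = 1.0        # arbitrary starting quality of human-written code
generation_penalty = 0.95    # assumption: generated code lands a bit below the average
generated_share = 0.3        # assumption: 30% of new training data is generated code

for generation in range(1, 6):
    generated_quality = average_quality * generation_penalty
    average_quality = (1 - generated_share) * average_quality + generated_share * generated_quality
    print(f"generation {generation}: average training quality ~ {average_quality:.3f}")
```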

Another potential reason could be that, as the training data grows, the models struggle with the sheer amount of data. This was a problem before; the transformer architecture (introduced by researchers at Google, and used by OpenAI and others) was meant to let models leverage much more data, but perhaps even transformers cannot scale indefinitely, and at some point the models will choke on the volume regardless. Whether that is what is actually happening, I have no idea.

Thanks for listening to my rant.
I would have put links to some of the resources I was referring to, but unfortunately it seems I have lost them.
So I don't have sources for my claims; while I am fairly sure they are true, you should probably do your own research.


AI programming tools like Copilot are meant to assist devs with implementation. It's still up to the dev to decide what to do with the code the AI generates.

I use ChatGPT sometimes to help me with coding, but I don't rely on it 100%. I still have to understand the code it provides and sometimes tweak it to do what I actually want.