Are we ready for AI-generated code?

Are we ready for AI-generated code?

Over the past few months, we’ve marveled at the quality of computer-generated faces, cat images, videos, essays, and even art. Artificial intelligence (AI) and machine learning (ML) have also quietly crept into software development, with tools such as GitHub Copilot, Tabnine, Polycode and others taking the next logical step of putting the existing code autocomplete functionality on AI steroids. Unlike cat pictures, however, the origin, quality, and security of application code can have far-reaching implications — and at least for security, research shows the risk is real.

Previous academic research has already shown that GitHub Copilot often generates code with security vulnerabilities. More recently, hands-on analysis by Invicti security engineer Kadir Arslan showed that insecure code suggestions are still the rule rather than the exception with Copilot. Arslan discovered that suggestions for many common tasks only included the absolute bare bones, often taking the most basic and least secure route, and that accepting them without modification could result in working but vulnerable applications.

A tool like Copilot is (by design) auto-completion kicked up a notch, trained on open source code to suggest snippets that might be relevant in a similar context. This makes the quality and safety of the suggestions closely related to the quality and safety of the training set. So the big questions aren’t about Copilot or any other specific tool, but AI-generated software code in general.

It is reasonable to assume that Copilot is only the tip of the spear and that similar generators will become commonplace in the years to come. This means that we in the tech industry need to start thinking about how such code is generated, how it is used, and who will take responsibility if something goes wrong.

Satellite navigation syndrome

The traditional code autocompletion that looks up function definitions to complete function names and reminds you of the arguments you need is a huge time saver. Since these suggestions are just a shortcut to finding the documentation on your own, we’ve learned to implicitly trust anything the IDE suggests. Once an AI-powered tool arrives, its suggestions are no longer guaranteed to be correct – but they still feel friendly and trustworthy, so they’re more likely to be accepted.

Especially for less experienced developers, the convenience of getting a free block of code encourages a mindset shift from “Is this code close enough to what I would write” to “How can I modify this code to make it work for me”.

GitHub makes it very clear that Copilot’s suggestions should always be carefully analyzed, reviewed, and tested, but human nature dictates that even low-quality code will sometimes be pushed into production. It’s a bit like driving looking more at your GPS than the road.

Supply chain security issues

The Log4j security crisis has brought software supply chain security and, in particular, open source security into the limelight, with a recent White House memo on secure software development and a new project Open Source Security Improvement Act. With these and other initiatives, having open source code in your applications may soon need to be written in a software bill of materials (SBOM), which is only possible if you knowingly include a specific dependency. Software Composition Analysis (SCA) tools also leverage this knowledge to detect and flag outdated or vulnerable open source components.

But what if your app includes AI-generated code that ultimately comes from an open-source training set? Theoretically, if even a substantial suggestion is identical to existing code and accepted as is, you could have open source code in your software but not in your SBOM. This could lead to compliance issues, not to mention the risk of liability if the code turns out to be insecure and results in a breach – and SCA won’t help you, because it can only find vulnerable dependencies, not vulnerabilities in your own. coded. .

Licensing and attribution pitfalls

Continuing on this path, to use the open source code, you must comply with its license terms. Depending on the specific open source license, you will at least need to provide attribution or sometimes release your own open source code. Some licenses completely prohibit commercial use. Regardless of the license, you need to know where the code came from and how it is licensed.

Again, what if you have AI-generated code in your application that happens to be identical to existing open-source code? If you had an audit, would you find that you are using code without the required attribution? Or maybe you need to open up some of your commercial code to stay compliant? This may not yet be a realistic risk with current tools, but these are the kinds of questions we should all be asking ourselves today, not 10 years from now. (And to be clear, GitHub Copilot has an optional filter to block suggestions that match existing code to minimize supply chain risk.)

Deeper Security Implications

Coming back to security, an AI/ML model is only as good (and as bad) as its training set. We’ve seen this in the past for example, in cases of facial recognition algorithms showing racial bias due to the data they were trained on. So if we have research showing that a code generator frequently produces suggestions without consideration for security, we can deduce that this is what its training set looked like (i.e. the code accessible to the public). And what if the insecure AI-generated code is then fed back into that code base? Can suggestions be secure?

The security issues don’t stop there. If AI-based code generators gain popularity and start accounting for a significant proportion of new code, it’s likely someone will try to attack them. It is already possible to trick AI image recognition by poisoning its training set. Sooner or later, malicious actors will try to place unique vulnerable code in public repositories in hopes that it will show up in suggestions and eventually end up in a production application, opening it up to easy attack.

And what about monoculture? If multiple applications end up using the same highly vulnerable suggestion, regardless of its origin, we could face vulnerability outbreaks or perhaps even AI-specific vulnerabilities.

Keep an eye on the AI

Some of these scenarios may seem far-fetched today, but these are all topics that we in the tech industry need to discuss. Again, GitHub Copilot is in the spotlight purely because it’s currently leading the way, and GitHub provides clear warnings about AI-generated suggestion caveats. Like the autocomplete on your phone or the route suggestions in your satnav, these are just hints to make our lives easier, and it’s up to us to take them or leave them.

With their potential for exponentially improving development efficiency, AI-based code generators are likely to become a permanent part of the software world. In terms of application security, however, this is another source of potentially vulnerable code that must pass rigorous security testing before being allowed into production. We’re looking for a whole new way to slip vulnerabilities (and potentially unchecked dependencies) right into your proprietary code, so it makes sense to treat AI-augmented codebases as untrusted until they’re out. tested – and that means testing everything as often as you can.

Even relatively transparent ML solutions like Copilot already raise legal and ethical issues, not to mention security issues. But imagine that one day a new tool starts generating code that works perfectly and passes security tests, except for one small detail: nobody knows how it works. That’s when it’s time to panic.

#ready #AIgenerated #code

Leave a Comment

Your email address will not be published. Required fields are marked *