How organizations can secure their AI code

While Reworkd was open about its error, many similar incidents never become public; CISOs often learn about them only behind closed doors. Financial institutions, healthcare systems, and e-commerce platforms have all run into trouble when code completion tools introduced vulnerabilities, disrupted operations, or compromised data integrity. Many of these risks stem from the AI-generated code itself, from hallucinated library names, or from third-party dependencies that enter a codebase untracked and unverified.
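Hallucinated dependency names are one of the few risks here that can be checked mechanically: a name an AI tool invented either does not exist on the public registry, or has since been registered by someone else and deserves scrutiny before it is ever installed. The sketch below (Python, assuming a plain requirements.txt; the parsing and the use of PyPI's public JSON endpoint are illustrative, not a vetted tool) flags declared packages that PyPI does not know about.

```python
"""Minimal sketch: flag dependency names that do not exist on PyPI.

Hallucinated package names are a known failure mode of AI-generated
code, and attackers can register those names later, so any unknown
name should be reviewed before installation. Assumes a plain
requirements.txt; adapt the parsing for lockfiles or pyproject.toml.
"""
import re
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if PyPI knows the package, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are infrastructure problems, not answers

def check_requirements(path: str = "requirements.txt") -> list[str]:
    """Return declared package names that PyPI has never heard of."""
    unknown = []
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop comments
            if not line:
                continue
            # keep only the distribution name, dropping version specifiers
            name = re.split(r"[<>=!~\[ ;]", line, maxsplit=1)[0]
            if name and not package_exists(name):
                unknown.append(name)
    return unknown

if __name__ == "__main__":
    for name in check_requirements():
        print(f"WARNING: '{name}' is not on PyPI -- possible hallucination")
```

A check like this only proves a package exists, not that it is safe; it is a first filter ahead of the dependency review the rest of this section argues for.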

“We’re facing a perfect storm: increasing reliance on AI-generated code, rapid growth in open-source libraries, and the inherent complexity of these systems,” says Jens Wessling, chief technology officer at Veracode. “It’s only natural that security risks will escalate.”

Code completion tools such as ChatGPT, GitHub Copilot, and Amazon CodeWhisperer are often used covertly. A Snyk survey found that roughly 80% of developers ignore security policies when incorporating AI-generated code. The practice creates blind spots for organizations, which then struggle to mitigate the security and legal issues that follow.
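One way organizations make that blind spot visible is in CI: any dependency a developer (or their AI assistant) introduces must appear on a human-reviewed allowlist before the build passes. What follows is a minimal sketch under assumed conventions; the file names (approved-packages.txt, requirements.txt) and the exit-code wiring are hypothetical, not any specific product's workflow.

```python
"""Minimal CI-gate sketch: fail the build when a dependency is unvetted.

Assumptions (for illustration only): dependencies live in a plain
requirements.txt, and the security team maintains a reviewed allowlist
in approved-packages.txt. A nonzero exit code fails the CI job.
"""
import re
import sys

APPROVED_FILE = "approved-packages.txt"  # hypothetical, team-maintained
REQUIREMENTS_FILE = "requirements.txt"

def load_names(path: str) -> set[str]:
    """Read bare, lowercased package names, dropping comments and versions."""
    names = set()
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()
            if line:
                names.add(re.split(r"[<>=!~\[ ;]", line, maxsplit=1)[0].lower())
    return names

def main() -> int:
    unvetted = sorted(load_names(REQUIREMENTS_FILE) - load_names(APPROVED_FILE))
    for name in unvetted:
        print(f"BLOCKED: '{name}' has not been security-reviewed")
    return 1 if unvetted else 0  # nonzero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```

The design choice matters more than the code: a gate that fails loudly in CI turns covert AI-assisted additions back into reviewable events, rather than relying on developers to self-report.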
