“AI components — e.g., LLM, RAG — are embedded in the software supply chain, making them a new frontier for sophisticated attacks,” Garraghan told CSO. “As OWASP LLM03:2025 points out, LLMs frequently integrate with external APIs and data sources, introducing significant risks through these dependencies.”
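One practical way to blunt that dependency risk is to pin and verify third-party model artifacts before they ever reach a build or serving pipeline. The following Python sketch is illustrative only, assuming a hypothetical model file path and a SHA-256 digest recorded when the artifact was first vetted:

```python
import hashlib
from pathlib import Path

# Hypothetical values: in practice the pinned digest comes from your own
# vetting process or the vendor's published checksums.
MODEL_PATH = Path("models/llm-7b.safetensors")
PINNED_SHA256 = "0" * 64  # placeholder digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != PINNED_SHA256:
    # Fail closed: a mismatched or tampered artifact never enters the pipeline.
    raise RuntimeError(f"Integrity check failed for {MODEL_PATH}")
```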
Simply encouraging secure coding practices, however, is not enough.
“CISOs must adopt a proactive security posture that includes continuous AI application testing, software bill of materials transparency, and automated threat detection across the AI development lifecycle,” Garraghan advised.
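That bill-of-materials transparency can extend to the models and external data services themselves. One widely used SBOM format, CycloneDX, added machine-learning component support in version 1.5; the fragment below is a minimal sketch in plain Python rather than dedicated SBOM tooling, and the component names, versions, and services are hypothetical:

```python
import json

# Illustrative only: a minimal CycloneDX-style document recording an LLM and
# the external embedding API it calls at runtime. Real SBOMs are normally
# generated by build tooling and carry richer metadata (hashes, licenses,
# provenance).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",  # ML-BOM component type in CycloneDX 1.5
            "name": "acme-chat-llm",           # hypothetical model name
            "version": "2025.04",
        },
    ],
    "services": [
        {
            "name": "vector-embeddings-api",   # hypothetical third-party API dependency
            "version": "v2",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

Keeping such an inventory current is what makes the continuous testing and automated threat detection Garraghan describes actionable: you cannot monitor AI dependencies you have not enumerated.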