
CISA issues AI cybersecurity advisory
The Cybersecurity and Infrastructure Security Agency (CISA) has issued an advisory to remind organizations building artificial intelligence (AI) models that they are just as susceptible to vulnerabilities as any other piece of software.
Specifically, CISA is calling for builders of AI systems to treat the security of customers as a core business requirement, not just a technical feature, and to prioritize security throughout the entire lifecycle of the product. AI models must be secure by design right out of the box, with little to no additional configuration or cost, notes the CISA advisory.
As it turns out, AI models are affected by the same supply chain issues as existing software. The components used to build AI models are typically downloaded from repositories by data science teams that usually have even less awareness of cybersecurity issues than the average application developer. As a result, many of those components are never scanned for exploitable vulnerabilities or for malware that might be embedded within them.
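One practical way to reduce that supply chain exposure is to refuse to load any downloaded model artifact or component that does not match a pinned, published checksum. The sketch below is a minimal illustration in Python; the file path and digest are hypothetical placeholders, and in practice the expected digest would come from a signed manifest or an internal artifact registry rather than a hard-coded string.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest published by the component's maintainers.
# In a real pipeline this would come from a signed manifest or registry.
EXPECTED_SHA256 = "9f2c...replace-with-published-digest"

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the downloaded file matches the pinned digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256

# Illustrative path; refuse to load anything that fails the integrity check.
if not verify_artifact("models/classifier.bin", EXPECTED_SHA256):
    raise RuntimeError("Model artifact failed integrity check; refusing to load it.")
```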
CISA notes that adversarial inputs that force a misclassification can, for example, cause cars to misbehave on the road or hide objects from security camera software. In fact, the PoisonGPT attack technique was recently demonstrated, showing just how vulnerable the large language models (LLMs) used to build generative AI applications can be.
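To make the misclassification risk concrete, here is a purely illustrative sketch of the idea behind gradient-sign (FGSM-style) adversarial inputs, using a toy linear classifier in NumPy rather than any real vision model; the "stop sign" labels and all numbers are assumptions chosen only to show how a small, deliberate perturbation can flip a prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a vision model: a positive score
# means the input is recognized as a stop sign. Purely illustrative.
w = rng.normal(size=100)              # fixed "trained" weights
x = 0.05 * w / np.linalg.norm(w)      # a legitimate input near the decision boundary

def predict(sample: np.ndarray) -> str:
    return "stop sign" if float(w @ sample) > 0 else "not a stop sign"

# FGSM-style perturbation: shift every feature a tiny amount in the direction
# that most reduces the correct class's score. For a linear model, the gradient
# of the score with respect to the input is simply w.
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # "stop sign"
print(predict(x_adv))  # "not a stop sign" -- a barely visible change flips the label
```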
Cybersecurity professionals will then find themselves once again in the unenviable position of removing malware and identifying vulnerabilities that need to be patched, hopefully before any of these AI models are breached.
How to improve security of AI models
Naturally, the only way to minimize threats to AI models is to make sure cybersecurity best practices are embedded in the machine learning operations (MLOps) processes used to build them. In much the same way DevSecOps practices have improved the state of application security by encouraging developers to embrace cybersecurity best practices, the same conversation now needs to be had with data scientists. The starting point is an emphasis on building AI models from curated, hardened images of software components that have already been tested for known vulnerabilities, as in the pipeline sketch below.
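As one example of what embedding those practices in an MLOps pipeline might look like, the following sketch gates a model build on a dependency audit. It assumes the open-source pip-audit tool is installed in the build environment and that the pipeline's Python dependencies are pinned in a requirements.txt file; both are illustrative assumptions rather than a prescribed toolchain.

```python
import subprocess
import sys

# Hypothetical CI gate for an MLOps pipeline: audit the pinned Python
# dependencies used to build the model before the training job runs.
result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    # Fail the pipeline instead of training on components with known CVEs.
    print("Dependency audit failed; blocking the model build.", file=sys.stderr)
    sys.exit(1)
```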
Remediating vulnerabilities in an AI model after it has already been deployed in a production environment is much riskier. Of course, no matter how hardened the images used are, there is no guarantee that a patch will never be needed, but the number of instances where one is required should be substantially reduced. The issue, as always, is going to be narrowing the blast radius of a breach involving an AI model as much as possible, because the potential risk to the business is much higher given the nature of the processes likely to be involved.
Hopefully, cybersecurity teams will be able to get ahead of this issue before any major breaches occur, but if history is any guide, securing AI models may not get the attention it deserves until there is a major breach. In the meantime, cybersecurity teams might want to start studying how to respond to a security incident involving an AI model now, rather than once again having to learn how to respond to a cyberattack the hard way.
