
AI models emerge as next major cybersecurity challenge
The Cybersecurity and Infrastructure Security Agency (CISA) in the United States and the National Cyber Security Centre (NCSC) in the United Kingdom have jointly published a set of guidelines for securing artificial intelligence (AI) system development, at a time when it is becoming increasingly apparent just how vulnerable AI applications are.
The most immediate threats are phishing attacks that compromise the credentials of anyone involved in training an AI model, from the data scientists who build it to the contractors hired to reinforce that training. These attacks aim to poison the AI model by exposing it to inaccurate data that increases its tendency to hallucinate, that is, to extrapolate guesses that might be anywhere from slightly off to flat-out absurd.
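Defenses against this kind of tampering start with treating training data as a controlled artifact. Below is a minimal sketch, not any agency's prescribed method, of verifying a training set against a manifest of SHA-256 digests recorded when the data was approved; the data/ directory and manifest.json names are assumptions for illustration:

```python
import hashlib
import json
from pathlib import Path

DATA_DIR = Path("data")           # hypothetical training data directory
MANIFEST = Path("manifest.json")  # hypothetical manifest: {"file.csv": "<sha256>", ...}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data() -> list[str]:
    """Return a list of files that are missing, unexpected, or tampered with."""
    expected = json.loads(MANIFEST.read_text())
    actual = {p.name: sha256_of(p) for p in DATA_DIR.iterdir() if p.is_file()}
    problems = []
    for name, digest in expected.items():
        if name not in actual:
            problems.append(f"missing: {name}")
        elif actual[name] != digest:
            problems.append(f"tampered: {name}")
    problems.extend(f"unexpected: {name}" for name in actual.keys() - expected.keys())
    return problems

if __name__ == "__main__":
    issues = verify_training_data()
    if issues:
        raise SystemExit("Refusing to train:\n" + "\n".join(issues))
```

A check like this does not detect subtly mislabeled data, but it does ensure that the files a model trains on are exactly the files someone signed off on.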
The opportunity to poison an AI model might arise before it is deployed or afterward, when, for example, an organization starts using vector databases to extend the capabilities of the large language model (LLM) at the core of a generative AI platform by exposing it to additional data. Access must be secured whether an LLM is being extended, customized, or built from the ground up.
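To make that retrieval path concrete, here is a minimal sketch of an access check in front of a vector store, assuming an in-memory index built with numpy and a hypothetical role tag on each document; a production system would enforce this inside a managed vector database rather than in application code:

```python
import numpy as np

# Each entry pairs an embedding with the document text and the roles allowed to read it.
# Embeddings here are toy 3-dimensional vectors; a real system would use a model's output.
INDEX = [
    (np.array([0.9, 0.1, 0.0]), "Q3 sales playbook", {"sales", "exec"}),
    (np.array([0.1, 0.9, 0.0]), "HR salary bands",   {"hr"}),
]

def retrieve(query_vec: np.ndarray, user_roles: set[str], top_k: int = 1) -> list[str]:
    """Rank only the documents this user may see, then return the top matches."""
    allowed = [(vec, text) for vec, text, roles in INDEX if user_roles & roles]
    scored = sorted(
        allowed,
        key=lambda item: float(
            np.dot(item[0], query_vec)
            / (np.linalg.norm(item[0]) * np.linalg.norm(query_vec))
        ),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

# A sales user cannot retrieve HR documents, no matter how similar the query is.
print(retrieve(np.array([0.1, 0.9, 0.0]), user_roles={"sales"}))
```

The design point is that filtering happens before similarity ranking, so documents a user is not entitled to see never even enter the candidate set handed to the LLM.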
Cybersecurity teams, working with their compliance colleagues, will also be expected to ensure that no sensitive data is inadvertently shared with a generative AI platform, where it might then be used at some future date to train an LLM.
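One pragmatic control is to redact obviously sensitive patterns from prompts before they leave the organization. The sketch below uses two illustrative regexes (email addresses and US Social Security numbers); a real deployment would lean on a dedicated data loss prevention tool rather than a hand-rolled filter:

```python
import re

# Illustrative patterns only; real DLP coverage is far broader than these two.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with placeholder tags before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."))
# -> Summarize the ticket from [EMAIL REDACTED], SSN [SSN REDACTED].
```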
Additionally, it’s just as important to remember that AI models are, at their core, another type of software artifact, subject to the same vulnerability issues that plague other applications. The malware that cybercriminals inject into software repositories can quickly find its way into an AI model. The difference is that once a compromise is discovered, rebuilding an AI model costs orders of magnitude more than patching an application.
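Treating the model's software supply chain like any other means scanning its dependencies as a matter of routine. A minimal sketch, assuming a hypothetical hard-coded advisory list; real pipelines would query an advisory database such as OSV, for example via a scanner like pip-audit:

```python
import importlib.metadata

# Hypothetical advisory data: package name -> versions with known vulnerabilities.
# A real scan would pull this from an advisory database instead of hard-coding it.
KNOWN_VULNERABLE = {
    "examplelib": {"1.2.0", "1.2.1"},
}

def scan_environment() -> list[str]:
    """Flag installed packages whose versions appear in the advisory list."""
    findings = []
    for dist in importlib.metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in KNOWN_VULNERABLE.get(name, set()):
            findings.append(f"{name}=={dist.version} has a known vulnerability")
    return findings

for finding in scan_environment():
    print(finding)
```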
Unfortunately, the data scientists who build AI models tend to have even less cybersecurity expertise than the average application developer. So it’s now more a question of when, and to what degree, a cybersecurity issue will involve the software components used to construct an AI model.
Naturally, cybersecurity teams will be called upon to address incidents that, over time, could involve hundreds of AI models. Rather than relying solely on LLMs made available via cloud services, many organizations will deploy LLMs trained on a narrow data set to automate processes unique to specific domain knowledge areas.
The good news is that cybersecurity professionals who understand what’s required to secure AI models are already in high demand. The salaries they can command will, as a result, be significantly higher.
The challenge, as always, will be getting everyone involved in these projects to appreciate the importance of security. There is always a tendency to rush innovations to market, but haste doesn’t just make waste. With AI applications automating processes at unprecedented scale, a cybersecurity issue has more potential than ever to be catastrophic.
