AI's dark side - mitigating security risks
Fortunately, for GFT clients, in 2019 we formed our Data Science team, known as the Cambridge Lab, and has been assisting customers to deliver AI solutions to solve complex challenges ever since. For instance, some of the solutions we have delivered include: automatically identifying anomalies in communication channels to detect fraudulent transactions, combatting money laundering, and undertaking credit risk assessments.
In 2021 Gartner agreed that there would be a rapid growth in AI-powered applications, predicting that “by 2025, 70% of organizations will have operationalized AI architectures due to the rapid maturity of AI orchestration initiatives.” (Source: Gartner).
In all the excitement, some businesses are forgetting that, although AI-powered applications have the potential to create significant opportunities for their businesses, they also present the same opportunities for malicious actors. Are businesses blinded by the innovation opportunity of this ‘shiny new toy’ and are they considering the potential security risks that it introduces?
Avivah Litan, Gartner Distinguished VP Analyst, suggests “Organizations that do not consistently manage AI risks are exponentially more inclined to experience adverse outcomes, such as project failures and breaches.” (Source: Gartner)
The abundance of freely available AI/ML tools
As is often the case, malicious actors are forging ahead with innovating AI/ML-based threats, whilst there are currently few solutions to protect businesses. These threats have been recognised by government bodies, but there are currently very few AI regulations enacted to guide businesses. The EU AI Act is one exception, which has started to be enforced across all 27 EU Member States from 1 August 2024. This legislation focuses on ensuring organisations take a risk-based approach to the entire lifecycle of an AI system. Canada has also introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. This too, focuses on ensuring organisations are taking safe and responsible steps for the development and deployment of AI technologies. Whilst legislation and frameworks are being delivered by countries and trading blocks, a quick internet search displays dozens of news reports which reveal the extent of the problem caused by malicious actors, that legislators are trying to mitigate against.
-
Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models source: The Hacker News
-
Prompt hacking is an Achilles' heel for AI source: Fierce Networks
-
How AI can be hacked with prompt injection: NIST report source: SecurityIntelligence
-
How to weaponize LLMs to auto-hijack websites source: The Register
Although the threat is very real, many employers are allowing employees access to AI/ML tools to develop innovative solutions that have the potential to bring significant benefits to their business, but with varying degrees of control, introducing AI adoption risk. For instance, employees could be using models with a corrupted or malicious payload that can results in data exposure, LLM compromise and model generated threats.
The abundance of freely available AI/ML tools is making the use of AI/ML within businesses commonplace. An Oliver Wyman survey found that 55% of employees use public generative AI at least weekly. (Source Oliver Wyman).
The velocity at which the AI models are developing has left developers with a plethora of free resources to integrate into their applications. One of the most popular community-focused platforms is Hugging Face, which has >900k models available, with most of them being free. However, the easy availability of these models can lead to ‘AI sprawl’, potentially resulting in excessive cloud costs, as well as increasing the attack surface if the infrastructure is not managed responsibly. On the downside, recently, Jfrog found >100 malicious AI/ML models were freely available on Hugging Face (Source: The Hacker News). This may seem like a small number in comparison to the vast number of resources available on the platform, but it proves there is an issue, and this will almost certainly become more of an issue in the future.
Although there are still very few AI attack vectors, there are a few that have been identified:
-
LLMJacking uses harvested cloud credentials to gain access to cloud hosted LLMs, allowing malicious actors to use the LLM freely, which can increase costs significantly, and allow harvesting of the training data (Source: gbhackers.om).
-
Training data poisoning introduces corrupted or biased data into the training dataset used to develop an AI model, in order to compromise the accuracy and reliability of that model. If as little as 1% of the data is ‘poisoned’ in can have a significantly adverse effect on the model performance (Source: Cornell University).
-
Prompt injection can lead to models to ignore guardrails and leak sensitive information (Source: Cornell University).
'Shadow IT'
Due to lack of controls, these vulnerabilities are often introduced to a business through ‘Shadow IT’ where an absence of AI inventory and guardrails can increase AI sprawl, introduce risk and increase costs. Businesses have historically been slow to develop AI/ML governance policies, which can lead to misconfigurations, overly permissive access, prompt injection, and model denial of service. The lack of visibility of AI pipelines can lead to data exposure and exfiltration, training data poisoning and unintended bias.
Businesses must manage the use of AI/ML resources to protect against these types of vulnerabilities. GFT has built several AI solutions for our banking clients that incorporate strict AI security controls. This is done in order to satisfy the strict security policies enforced by our banking clients (to protect their customers’ and the banks’ assets), and to comply with industry regulation. GFT has used this knowledge to develop blueprints to accelerate solution implementations for clients, to ensure that their AI-powered applications remain as safe as possible from these sorts of attacks. In addition to these GFT-developed assets, GFT has partnered with a specialist security ISV that can achieve the ideal scenario, which is to have an automated platform to determine which of the AI/ML resources are employed by developers within their cloud services provider, providing a greater level of governance.
AI-powered applications are vulnerable to the same risks as any other cloud application during the development lifecycle. Vulnerabilities can be introduced during build, deployment and runtime.
How can businesses protect themselves against this emerging risk? Most businesses already have a Cloud Security Posture Management (CSPM) approach that focuses on the security of their cloud infrastructure. The CSPM scans and identifies vulnerabilities in the cloud environment, such as misconfigurations, weak access and controls, but offers little to protect against AI-borne vulnerabilities. Therefore, a new tool is needed. AI Security Posture Management (AI-SPM) is the new tool designed specifically to address this threat.
The response
There are a handful of security companies that have developed and launched an AI-SPM service. Palo Alto Networks is one of them and is a strategic partner for GFT. Palo Alto Networks launched their AI-SPM tool in the summer of 2024, as part of their Prisma Cloud suite of cloud security tools. Businesses serious about cloud security should consider adding Prisma Cloud to their arsenal to defend against their ever-increasing attack surface. The other Prisma Cloud tools are covered very briefly at the end of this article.
Prisma Cloud AI-SPM has been specifically developed to protect AI-powered applications from code to cloud. The goal is to protect AI-powered applications all the way from coding, through the software supply chain, and into runtime, where the AI-powered applications service customers.
Prisma Cloud AI-SPM aims to discover and catalogue all AI/ML assets deployed across different cloud environments within a business to gain end-to-end visibility of entire AI pipelines. With this information, the AI-SPM can show the lineage of all the components used to build the AI-powered applications, which includes all the models used and the associated resources, to show the relationship between these assets and the underlying data used to train models.
Prisma Cloud AI-SPM performs risk analysis on all the catalogued AI/ML assets to assess the security posture of the AI-powered applications across the whole development lifecycle. The assessment will find vulnerabilities such as overly permissive access and misconfigured resources that can lead to LLMJacking, model theft and misuse.
AI models are continuously monitored by Prisma Cloud AI-SPM for unusual or unexpected behaviour patterns to detect abnormalities that could indicate a prompt injection attack, or other misuse.
Training and reference data is monitored and classified by Data Security Posture Manage (DSPM) and Prisma Cloud AI-SPM to identify sensitive data to help protect against data exposure, training data positioning, privacy and compliance violations, and security breaches.
Once the AI-powered is live, Prisma Cloud AI Runtime Security will protect the AI-powered applications in order to stop attacks in real time.
We can help
Any business serious about protecting their applications from the ever-increasing and evolving cloud security risk, improving compliance, improving ROI, and protecting their reputation should consider Prisma Cloud as their centralised cloud security platform. If interested, please feel free to contact us to discuss how GFT and Palo Alto Networks can help, or check out the links below for more information:
-
Cloud Security Posture Management (CSPM): assess and monitor the cloud infrastructure for vulnerabilities, misconfigurations, and identify non-compliance with regulations.
-
Cloud Infrastructure Entitlement Management (CIEM): minimise the risk of unauthorised access by gaining visibility into the origin of privileges to manage and control access to multicloud resources.
-
Code Security: provide automated security for cloud native infrastructure and applications, integrated with developer tools.
-
Data Security Posture Manage (DSPM): discover, classify and protect data in cloud environments to prevent exfiltration and compliance violations.
-
Cloud Workload Protection (CWP): secure hosts, containers and serverless deployments across the application lifecycle.
-
Web Application and API Security (WAAS): protect web applications and APIs across any cloud native architecture including public or private cloud.
-
Cloud Discovery and Exposure Management (CDEM): highlight unmanaged cloud assets and provide native workflows to convert them into managed assets.
-
Cloud Network Security (CNS): protect cloud-native applications from every network attack path.