From 4db24ad62899d2c546d3355f5c2853797f339f5c Mon Sep 17 00:00:00 2001
From: Ads Dawson <104169244+GangGreenTemperTatum@users.noreply.github.com>
Date: Thu, 7 Mar 2024 15:56:58 -0800
Subject: [PATCH] Ads/llm10 typo fix ##275 (#276)

* feat: kickoff v2 0 dir and files

* fix: typo
---
 2_0_vulns/LLM10_ModelTheft.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2_0_vulns/LLM10_ModelTheft.md b/2_0_vulns/LLM10_ModelTheft.md
index cec25338..cd0c5798 100644
--- a/2_0_vulns/LLM10_ModelTheft.md
+++ b/2_0_vulns/LLM10_ModelTheft.md
@@ -23,7 +23,7 @@ Use of a stolen model, as a shadow model, can be used to stage adversarial attac
 
 ### Prevention and Mitigation Strategies
 1. Implement strong access controls (E.G., RBAC and rule of least privilege) and strong authentication mechanisms to limit unauthorized access to LLM model repositories and training environments.
-  1. This is particularly true for the first three common examples, which could cause this vulnerability due to insider threats, misconfiguration, and/or weak security controls about the infrastructure that houses LLM models, weights and architecture in which a malicious actor could infiltrate from insider or outside the environment.
+  1. This is particularly true for the first three common examples, which could cause this vulnerability due to insider threats, misconfiguration, and/or weak security controls about the infrastructure that houses LLM models, weights and architecture in which a malicious actor could infiltrate from inside or outside the environment.
   2. Supplier management tracking, verification and dependency vulnerabilities are important focus topics to prevent exploits of supply-chain attacks.
 2. Restrict the LLM's access to network resources, internal services, and APIs.
   1. This is particularly true for all common examples as it covers insider risk and threats, but also ultimately controls what the LLM application "_has access to_" and thus could be a mechanism or prevention step to prevent side-channel attacks.