
Commit 50ed2a3

Merge branch 'master' into update_Pre-auth_SQL_Injection_to_RCE_in_Fortinet_FortiWeb_20250711_182725
2 parents 44cd43b + 08b4f2c

51 files changed: +445 -103 lines


searchindex.js

Lines changed: 1 addition & 1 deletion
Large diffs are not rendered by default.

src/AI/AI-llm-architecture/0.-basic-llm-concepts.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 0. Basic LLM Concepts

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## Pretraining

@@ -300,4 +300,4 @@ During the backward pass:
 - **Accuracy:** Provides exact derivatives up to machine precision.
 - **Ease of Use:** Eliminates manual computation of derivatives.

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
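Every Markdown change in this commit is the same one-line fix: the HackTricks banner include switches from a root-anchored path to a path relative to the including file, which is how mdBook (the book format these `{{#include}}` directives appear to belong to) resolves includes. A page two levels below `src/` (e.g. `src/AI/AI-llm-architecture/`) therefore needs `../../banners/hacktricks-training.md` to reach `src/banners/hacktricks-training.md`. Below is a minimal sketch of a script that could apply this fix tree-wide; it is hypothetical, not the tooling actually used for this commit, and `src/` as the book root is an assumption:

```python
# Hypothetical helper, not part of this commit: rewrite root-anchored
# banner includes to file-relative paths, matching the repeated hunk
# in the diffs on this page.
from pathlib import Path

SRC = Path("src")  # assumed book root, with banners/ directly under it
OLD = "{{#include /banners/hacktricks-training.md}}"

for page in SRC.rglob("*.md"):
    text = page.read_text(encoding="utf-8")
    if OLD not in text:
        continue
    # Directories between the page and src/ determine how many "../"
    # segments are needed to climb back up to banners/.
    depth = len(page.relative_to(SRC).parts) - 1
    rel = "../" * depth + "banners/hacktricks-training.md"
    page.write_text(text.replace(OLD, "{{#include " + rel + "}}"),
                    encoding="utf-8")
```

For the files in this commit, `depth` is 2, which yields exactly the `../../banners/hacktricks-training.md` replacement seen in each hunk.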

src/AI/AI-llm-architecture/1.-tokenizing.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 1. Tokenizing

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## Tokenizing

@@ -99,4 +99,4 @@ print(token_ids[:50])
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)


-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/2.-data-sampling.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 2. Data Sampling

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## **Data Sampling**

@@ -241,4 +241,4 @@ tensor([[ 367, 2885, 1464, 1807],
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)


-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/3.-token-embeddings.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 3. Token Embeddings

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## Token Embeddings

@@ -219,4 +219,4 @@ print(input_embeddings.shape) # torch.Size([8, 4, 256])
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)


-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/4.-attention-mechanisms.md

Lines changed: 2 additions & 3 deletions

@@ -1,6 +1,6 @@
 # 4. Attention Mechanisms

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## Attention Mechanisms and Self-Attention in Neural Networks

@@ -430,5 +430,4 @@ For another compact and efficient implementation you could use the [`torch.nn.Mu
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)


-{{#include /banners/hacktricks-training.md}}
-
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/5.-llm-architecture.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 5. LLM Architecture

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## LLM Architecture

@@ -702,4 +702,4 @@ print("Output length:", len(out[0]))
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)


-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/6.-pre-training-and-loading-models.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 6. Pre-training & Loading models

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## Text Generation

@@ -971,4 +971,4 @@ There 2 quick scripts to load the GPT2 weights locally. For both you can clone t
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)


-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/7.0.-lora-improvements-in-fine-tuning.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 7.0. LoRA Improvements in fine-tuning

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## LoRA Improvements

@@ -64,4 +64,4 @@ def replace_linear_with_lora(model, rank, alpha):

 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/7.1.-fine-tuning-for-classification.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # 7.1. Fine-Tuning for Classification

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

 ## What is

@@ -117,4 +117,4 @@ You can find all the code to fine-tune GPT2 to be a spam classifier in [https://

 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

-{{#include /banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
