Commit a47f0dd

Merge pull request #1109 from HackTricks-wiki/research_update_src_todo_radio-hacking_low-power-wide-area-network_20250712_104905
Research Update Enhanced src/todo/radio-hacking/low-power-wi...
2 parents a5c8d13 + ccc9529 commit a47f0dd

47 files changed (+165, -99 lines)


src/AI/AI-llm-architecture/0.-basic-llm-concepts.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 0. Basic LLM Concepts
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## Pretraining
 
@@ -300,4 +300,4 @@ During the backward pass:
 - **Accuracy:** Provides exact derivatives up to machine precision.
 - **Ease of Use:** Eliminates manual computation of derivatives.
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
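
Every file below gets the same one-line fix: the `{{#include}}` directive (mdBook-style syntax) swaps an absolute `/src/...` path for one relative to the including file. A minimal sketch of how the new relative path resolves, assuming plain POSIX-style path normalization; the directory and file names are taken straight from this diff:

```python
import os

# Directory of the chapter file that contains the include directive.
including_dir = "src/AI/AI-llm-architecture"

# The relative include path introduced by this commit.
include_target = "../../banners/hacktricks-training.md"

# Joining and normalizing shows which file the directive now points at:
# the two "../" segments climb back out to src/, so the banner include
# works regardless of where the book root is mounted.
resolved = os.path.normpath(os.path.join(including_dir, include_target))
print(resolved)  # src/banners/hacktricks-training.md
```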

src/AI/AI-llm-architecture/1.-tokenizing.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 1. Tokenizing
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## Tokenizing
 
@@ -99,4 +99,4 @@ print(token_ids[:50])
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/2.-data-sampling.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 2. Data Sampling
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## **Data Sampling**
 
@@ -241,4 +241,4 @@ tensor([[ 367, 2885, 1464, 1807],
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/3.-token-embeddings.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 3. Token Embeddings
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## Token Embeddings
 
@@ -219,4 +219,4 @@ print(input_embeddings.shape) # torch.Size([8, 4, 256])
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/4.-attention-mechanisms.md

Lines changed: 2 additions & 3 deletions
@@ -1,6 +1,6 @@
 # 4. Attention Mechanisms
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## Attention Mechanisms and Self-Attention in Neural Networks
 
@@ -430,5 +430,4 @@ For another compact and efficient implementation you could use the [`torch.nn.Mu
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
 
-{{#include /src/banners/hacktricks-training.md}}
-
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/5.-llm-architecture.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 5. LLM Architecture
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## LLM Architecture
 
@@ -702,4 +702,4 @@ print("Output length:", len(out[0]))
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/6.-pre-training-and-loading-models.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 6. Pre-training & Loading models
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## Text Generation
 
@@ -971,4 +971,4 @@ There 2 quick scripts to load the GPT2 weights locally. For both you can clone t
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/7.0.-lora-improvements-in-fine-tuning.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 7.0. LoRA Improvements in fine-tuning
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## LoRA Improvements
 
@@ -64,4 +64,4 @@ def replace_linear_with_lora(model, rank, alpha):
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/7.1.-fine-tuning-for-classification.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 7.1. Fine-Tuning for Classification
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 ## What is
 
@@ -117,4 +117,4 @@ You can find all the code to fine-tune GPT2 to be a spam classifier in [https://
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}

src/AI/AI-llm-architecture/7.2.-fine-tuning-to-follow-instructions.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # 7.2. Fine-Tuning to follow instructions
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
 
 > [!TIP]
 > The goal of this section is to show how to **fine-tune an already pre-trained model to follow instructions** rather than just generating text, for example, responding to tasks as a chat bot.
@@ -107,4 +107,4 @@ You can find an example of the code to perform this fine tuning in [https://gith
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-{{#include /src/banners/hacktricks-training.md}}
+{{#include ../../banners/hacktricks-training.md}}
