add: ai notes
48 .obsidian/workspace.json (vendored)
@@ -13,15 +13,44 @@
         "state": {
           "type": "markdown",
           "state": {
-            "file": "skubelb.md",
+            "file": "README.md",
             "mode": "source",
             "source": false
           },
           "icon": "lucide-file",
-          "title": "skubelb"
+          "title": "README"
+        }
+      },
+      {
+        "id": "4b681292f637f6c6",
+        "type": "leaf",
+        "state": {
+          "type": "markdown",
+          "state": {
+            "file": "learning ai.md",
+            "mode": "source",
+            "source": false
+          },
+          "icon": "lucide-file",
+          "title": "learning ai"
+        }
+      },
+      {
+        "id": "a0cc81e9a0ac6335",
+        "type": "leaf",
+        "state": {
+          "type": "markdown",
+          "state": {
+            "file": "README.md",
+            "mode": "source",
+            "source": false
+          },
+          "icon": "lucide-file",
+          "title": "README"
         }
       }
-    ]
+    ],
+    "currentTab": 2
   }
 ],
 "direction": "vertical"
@@ -113,12 +142,12 @@
       "state": {
         "type": "outgoing-link",
         "state": {
-          "file": "skubelb.md",
+          "file": "README.md",
           "linksCollapsed": false,
           "unlinkedCollapsed": true
         },
         "icon": "links-going-out",
-        "title": "Outgoing links from skubelb"
+        "title": "Outgoing links from README"
       }
     },
     {
@@ -183,13 +212,16 @@
       "bases:Create new base": false
     }
   },
-  "active": "7d03f81db58db910",
+  "active": "a0cc81e9a0ac6335",
   "lastOpenFiles": [
-    "valheim.md",
+    "thoughts.md",
+    "Games.md",
     "infra.md",
+    "learning ai.md",
     "README.md",
     "rikidown.md",
     "skubelb.md",
-    "thoughts.md"
+    "valheim.md",
+    "Learning AI.md"
   ]
 }
9 Games.md (new file)
@@ -0,0 +1,9 @@
+# Cyberpunk
+## Gamescope:
+
+When playing on my large monitor, these settings work well:
+```
+gamescope --force-grab-cursor --sdr-gamut-wideness 1 --mangoapp --hdr-enabled -f -W 3440 -H 1440 -r 240 -- %command% --launcher-skip
+```
+
+When streaming, I switch to different settings that make the stream easier to manage.
@@ -6,4 +6,5 @@ This is powered by rikidown (see below); some previous text alluded a different
 
 [rikidown.md](rikidown.md) describes the wiki software that was written to support this wobsite. In essence, this is the most basic version of a git-based wiki that uses Markdown to render its content that I could make.
 
 [[skubelb]] is a simple kubernetes load balancer/proxy tool; the intended use case is to provide ingress from a free-tier GCP VM to hosts that live at dynamic IPs. Originally, this was used to expose a GKE instance hosted on spot VMs to the internet and deal with the constantly changing IPs.
+
13 learning ai.md (new file)
@@ -0,0 +1,13 @@
+AI has been a huge word lately; let me try and figure out what it is.
+
+If you see anything wrong (not incomplete, but actually wrong), let me know :).
+## Large language model (LLM)
+LLMs are tensor networks that get 'activated' with an input matrix, resulting in an output matrix.
+
+Most models contain multiple layers of these matrices.
+
+The "open models" available online are still largely closed-source; the matrices are basically binary blobs that describe the weights given to each tensor.
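The layer-by-layer "activation" described above can be sketched as a toy forward pass in plain Python. This is a hedged illustration, not a real LLM: the helper names (`matvec`, `relu`, `forward`) and the weight values are made up, and real models add attention, biases, and much larger matrices.

```python
# Toy sketch of pushing an input vector through stacked weight matrices.
# Each layer multiplies by its weights, then applies a nonlinearity.

def matvec(matrix, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def relu(vec):
    """Simple nonlinearity applied between layers."""
    return [max(0.0, x) for x in vec]

def forward(layers, inputs):
    """'Activate' the network: feed the input through each layer in turn."""
    activation = inputs
    for weights in layers:
        activation = relu(matvec(weights, activation))
    return activation

# Two tiny 2x2 "layers" of made-up weights.
layers = [
    [[1.0, 0.0], [0.0, 1.0]],   # identity layer
    [[0.5, 0.5], [1.0, -1.0]],  # mixing layer
]
print(forward(layers, [2.0, 4.0]))  # -> [3.0, 0.0]
```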
+## Retrieval-augmented generation (RAG)
+Basically, before sending the prompt to the LLM, the client does a search to find additional context. There are lots of tools for doing this, but the most popular seem to come from the AI community and work by converting the user input to an embedding 'vector', using a specialized 'vector database' to find other 'chunks' of related text, then adding those to the message before sending it to the LLM.
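The embed-search-prepend flow above can be sketched in a few lines. This is a toy under loud assumptions: the "embedding" is a hypothetical bag-of-words count over a tiny vocabulary, and the chunk list stands in for a real vector database with learned embeddings.

```python
# Toy RAG retrieval: embed the query, find the most similar stored chunk,
# and prepend it to the prompt before it would be sent to an LLM.

def embed(text, vocab):
    """Hypothetical embedding: count each vocabulary word in the text."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query, chunks, vocab):
    """Return the stored chunk whose vector is most similar to the query."""
    q = embed(query, vocab)
    return max(chunks, key=lambda c: dot(q, embed(c, vocab)))

vocab = ["kubernetes", "load", "balancer", "wiki", "markdown"]
chunks = [
    "skubelb is a kubernetes load balancer",
    "rikidown is a markdown wiki",
]
query = "how does the kubernetes balancer work"
context = retrieve(query, chunks, vocab)
prompt = f"Context: {context}\n\nQuestion: {query}"
print(prompt.splitlines()[0])  # -> Context: skubelb is a kubernetes load balancer
```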
+## Tool calling
+A super powerful capability. From what I can tell, it is generally implemented by telling the LLM how to structure its output to make tool calls, then attempting to parse the LLM's output to detect tool calls, run the tools, and append the results to the message going into the LLM.
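The parse-run-append loop described above can be sketched as follows. The model call is stubbed out (`fake_llm` is a stand-in, not a real API), and the JSON-per-line tool-call format is one plausible convention, not a standard:

```python
# Toy tool-calling loop: ask the "model" for output, parse it as a JSON
# tool call if possible, run the tool, append the result, and repeat
# until the model replies with plain text.
import json

def fake_llm(messages):
    """Stand-in for a model instructed to emit tool calls as JSON."""
    if any(m["role"] == "tool" for m in messages):
        return "The answer is 7."
    return '{"tool": "add", "args": [3, 4]}'

TOOLS = {"add": lambda a, b: a + b}

def run(messages):
    while True:
        output = fake_llm(messages)
        try:
            call = json.loads(output)      # did the model request a tool?
        except json.JSONDecodeError:
            return output                  # plain text: final answer
        result = TOOLS[call["tool"]](*call["args"])
        messages.append({"role": "tool", "content": str(result)})

print(run([{"role": "user", "content": "what is 3 + 4?"}]))  # -> The answer is 7.
```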