diff --git a/docs/content/ProblemStatementAndGoals/ProblemStatement.md b/docs/content/ProblemStatementAndGoals/ProblemStatement.md
index edd583c..2563589 100644
--- a/docs/content/ProblemStatementAndGoals/ProblemStatement.md
+++ b/docs/content/ProblemStatementAndGoals/ProblemStatement.md
@@ -23,7 +23,7 @@ learning, creation, and self-expression, that could adapt to users' increasing s
 As we progress into the $21^{\text{st}}$ century, software has become comoditised, serving as the engine of
 transformations that transcend every corner of our life. Simultaneously, we've seen exponential growth in machine learning (ML)
 systems' capabilities, mainly through the general push of large language models (LLMs) into the mainstream.
 As these systems exihibit emergent properties of intelligence, how should we craft interfaces that promote
-users' [[glossary#agency]] and encourage a sense of personalisation through interactions, rather than providing a tool for automation?
+[[glossary#agency|agency]] and encourage a sense of personalisation through interactions, rather than providing a tool for automation?
 
 Imagine you are an engineer who pursues creative writing as a hobby. You often curate topics and ideas from discussion on social media, then categorise them into themes for your arguments. There are plethora of tools
diff --git a/docs/content/Scratch.md b/docs/content/Scratch.md
index 38f8e3c..c3c090d 100644
--- a/docs/content/Scratch.md
+++ b/docs/content/Scratch.md
@@ -86,6 +86,8 @@ async function createFolder() {
 
 Possible UI component library: [shadcn/ui](https://ui.shadcn.com/)
 
+https://x.com/CherrilynnZ/status/1836881535154409629
+
 ## training [[glossary#sparse autoencoders|SAEs]]
 
 see also: [Goodfire](https://goodfire.ai/blog/research-preview/) preview releases
diff --git a/docs/content/glossary.md b/docs/content/glossary.md
index dac50ef..c623285 100644
--- a/docs/content/glossary.md
+++ b/docs/content/glossary.md
@@ -47,8 +47,7 @@ Auto-regressive models are often considered a more correct terminology when desc
 
 ## transformers
 
-A multi-layer perception (MLP) archiecture built on top of a multi-head
-attention mechanism [@vaswani2023attentionneed, 2] to signal high entropy tokens to be amplified and less important tokens to be diminished.
+A multi-layer perceptron (MLP) architecture built on top of a multi-head attention mechanism [@vaswani2023attentionneed] to signal high-entropy tokens to be amplified and less important tokens to be diminished.
 
 ## low-rank adapters
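The glossary edit above describes attention as amplifying important tokens and diminishing less important ones; that behaviour comes from the softmax weighting in scaled dot-product attention [@vaswani2023attentionneed]. As an illustrative aside (a minimal pure-Python sketch, not code from this repo; `attention` and `softmax` are hypothetical helper names):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are lists of vectors (lists of floats). Queries that align
    strongly with a key give that key's value a large softmax weight
    (amplified); weakly matching keys get weights near zero (diminished).
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)  # weights sum to 1 per query
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out
```

Each output row is a convex combination of the value vectors, so "amplify vs. diminish" is literally the size of each softmax weight.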