diff --git a/content/_index.md b/content/_index.md
index db2a880d694ab..c9a97943a63ed 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -3,9 +3,9 @@ title: Home
enableToc: false
---
-Welcome to our collaborative second brain. We're a research and development company working at the intersection of human and machine learning.
+Welcome to our collaborative second brain. Here you'll find our blog posts, "Extrusions" newsletter, evergreen notes, and public research. And if you like, [you can engage with the ideas directly](https://github.com/plastic-labs/blog) on GitHub.
-Our first product was [Bloom](https://bloombot.ai) -- a *subversive learning companion*. On this journey, we realized AI tools need a framework for securely and privately handling the intimate data required to unlock deeply personalized, autonomous agents.
+Plastic Labs is a research-driven company working at the intersection of human and machine learning. Our current project is [Honcho](https://github.com/plastic-labs/honcho), a user context management solution for AI-powered applications. We believe that by re-centering LLM app development around the user we can unlock a rich landscape of deeply personalized, autonomous agents.
It’s our mission to realize this future.
@@ -15,6 +15,15 @@ It’s our mission to realize this future.
[[blog/Theory-of-Mind Is All You Need]]
[[blog/Open-Sourcing Tutor-GPT]]
+
+## Extrusions
+
+[[extrusions/Extrusion 01.24|Extrusion 01.24]]
+
+## Notes
+
+[[Honcho name lore]]
+[[Metacognition in LLMs is inference about inference]]
+[[The machine learning industry is too focused on general task performance]]
+
## Research
[Violation of Expectation Reduces Theory-of-Mind Prediction Error in Large Language Models](https://arxiv.org/pdf/2310.06983.pdf)
diff --git a/content/assets/honcho logo and text.png b/content/assets/honcho logo and text.png
new file mode 100644
index 0000000000000..40d1a7c1a3ccf
Binary files /dev/null and b/content/assets/honcho logo and text.png differ
diff --git a/content/assets/honcho thumbnail.png b/content/assets/honcho thumbnail.png
new file mode 100644
index 0000000000000..6e7e630335e40
Binary files /dev/null and b/content/assets/honcho thumbnail.png differ
diff --git a/content/extrusions/Extrusion 01.24.md b/content/extrusions/Extrusion 01.24.md
new file mode 100644
index 0000000000000..e3cb0c87d3923
--- /dev/null
+++ b/content/extrusions/Extrusion 01.24.md
@@ -0,0 +1,47 @@
+Welcome to the inaugural edition of Plastic Labs' "Extrusions," a monthly prose-form synthesis of what we've been chewing on.
+
+This first one will be a standard new year recap/roadmap to get everyone up to speed, but after that, we'll try to eschew traditional formats.
+
+No one needs another newsletter, so we'll work to make these worthwhile. Expect them to be densely linked glimpses into the thought-space of our organization. And if you like, [you can engage with the ideas directly](https://github.com/plastic-labs/blog) on GitHub.
+
+## 2023 Recap
+
+Last year was wild. We started as an edtech company and ended as anything but. There's a deep dive on some of the conceptual lore in last week's "[[Honcho; User Context Management for LLM Apps|Honcho: User Context Management for LLM Apps]]":
+
+>[Plastic Labs](https://plasticlabs.ai) was conceived as a research group exploring the intersection of education and emerging technology...with the advent of ChatGPT...we shifted our focus to large language models...we set out to build a non-skeuomorphic, AI-native tutor that put users first...our [[Open-Sourcing Tutor-GPT|experimental tutor]], Bloom, [[Theory-of-Mind Is All You Need|was remarkably effective]]--for thousands of users during the 9 months we hosted it for free...
+
+Building a production-grade, user-centric AI application, then giving it nascent [theory of mind](https://arxiv.org/pdf/2304.11490.pdf) and [[Metacognition in LLMs is inference about inference|metacognition]], made it glaringly obvious to us that social cognition in LLMs was both under-explored and under-leveraged.
+
+We pivoted to address this hole in the stack and build the user context management solution agent developers need to truly give their users superpowers. Plastic applied and was accepted to [Betaworks](https://www.betaworks.com/)' [AI Camp: Augment](https://techcrunch.com/2023/08/30/betaworks-goes-all-in-on-augmentative-ai-in-latest-camp-cohort-were-rabidly-interested/?guccounter=1).
+
+We spent camp in a research cycle, then [published a pre-print](https://arxiv.org/abs/2310.06983) showing it's possible to enhance LLM theory of mind ability with [predictive coding-inspired](https://js.langchain.com/docs/use_cases/agent_simulations/violation_of_expectations_chain) [metaprompting](https://arxiv.org/abs/2102.07350).
+
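+The core loop in that chain is easy to sketch: predict what the user will say next, compare the prediction to what they actually say, and treat the mismatch as signal about who they are. Below is a heavily simplified illustration, assuming a generic `llm(prompt)` helper and placeholder prompts rather than the actual LangChain implementation:
+
+```python
+# Heavily simplified violation-of-expectation (VoE) sketch.
+# `llm` is a placeholder for any prompt -> completion callable.
+def voe_step(history: str, actual_user_message: str, user_facts: list, llm) -> list:
+    # 1. Predict the user's next message from the conversation and known facts
+    prediction = llm(
+        f"Facts about the user: {user_facts}\n"
+        f"Conversation so far: {history}\n"
+        "Predict the user's next message."
+    )
+    # 2. Compare the prediction to what the user actually said
+    new_facts = llm(
+        f"Predicted: {prediction}\n"
+        f"Actual: {actual_user_message}\n"
+        "What does the gap between these reveal about the user? List new facts."
+    )
+    # 3. Fold the surprise back in; it conditions the next prediction
+    user_facts.append(new_facts)
+    return user_facts
+```
+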
+Then it was back to building.
+
+## 2024 Roadmap
+
+This is the year of Honcho.
+
+![[honcho logo and text.png]]
+
+Last week [[Honcho; User Context Management for LLM Apps|we released]] the...
+
+ >...first iteration of [[Honcho name lore|Honcho]], our project to re-define LLM application development through user context management. At this nascent stage, you can think of it as an open-source version of the OpenAI Assistants API. Honcho is a REST API that defines a storage schema to seamlessly manage your application's data on a per-user basis. It ships with a Python SDK which [you can read more about how to use here](https://github.com/plastic-labs/honcho/blob/main/README.md).
+
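+To make the "storage schema...on a per-user basis" idea concrete: roughly, an app holds users, users hold sessions, and sessions hold messages. The toy Python below is only an illustration of that hierarchy, not the actual Honcho SDK; the real client and endpoints are documented in the README linked in the quote above.
+
+```python
+# Illustrative only: a toy version of the user -> session -> message
+# hierarchy a per-user storage schema manages. Not the real Honcho SDK.
+from dataclasses import dataclass, field
+from typing import List
+
+@dataclass
+class Message:
+    is_user: bool
+    content: str
+
+@dataclass
+class Session:
+    messages: List[Message] = field(default_factory=list)
+
+    def add(self, is_user: bool, content: str) -> None:
+        self.messages.append(Message(is_user, content))
+
+@dataclass
+class User:
+    user_id: str
+    sessions: List[Session] = field(default_factory=list)
+
+    def new_session(self) -> Session:
+        session = Session()
+        self.sessions.append(session)
+        return session
+
+# Each end user gets their own sessions and message history,
+# rather than one undifferentiated conversation log per app.
+alice = User("alice")
+chat = alice.new_session()
+chat.add(is_user=True, content="Hey Bloom, help me study derivatives")
+chat.add(is_user=False, content="Sure! Where do you want to start?")
+```
+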
+And coming up, you can expect a lot more:
+
+- Next we'll drop a fresh paradigm for constructing agent cognitive architectures with users at the center, replete with cookbooks, integrations, and examples
+
+- After that, we've got some dev viz tooling in the works to allow quick grokking of all the inferences and context at play in a conversation, plus visualization and manipulation of entire agent architectures--as well as the ability to swap and compare the performance of custom cognition across the landscape of models
+
+- Finally, we'll bundle the most useful of all this into an opinionated offering of managed, hosted services
+
+## Keep in Touch
+
+Thanks for reading.
+
+You can find us on [X/Twitter](https://twitter.com/plastic_labs), but we'd really like to see you in our [Discord](https://discord.gg/plasticlabs) 🫡.
\ No newline at end of file
diff --git a/content/notes/The machine learning industry is too focused on general task performance.md b/content/notes/The machine learning industry is too focused on general task performance.md
index 6b00d111d1072..8965fa30889be 100644
--- a/content/notes/The machine learning industry is too focused on general task performance.md
+++ b/content/notes/The machine learning industry is too focused on general task performance.md
@@ -4,4 +4,4 @@ However, general capability doesn't necessarily translate to completing tasks as
Take summarization. It’s a popular machine learning task at which models have become quite proficient, at least from a benchmark perspective. However, when models summarize for users with a pulse, they fall short. The reason is simple: the models don’t know this individual. The key takeaways for a specific user differ dramatically from the takeaways _any possible_ internet user _would probably_ note.
-So a shift in focus toward user-specific task performance would provide a much more dynamic & realistic approach. Catering to individual needs & paving she way for more personalized & effective ML applications.
+So a shift in focus toward user-specific task performance would provide a much more dynamic & realistic approach. Catering to individual needs & paving the way for more personalized & effective ML applications.
diff --git a/package-lock.json b/package-lock.json
index f1d8657b102ae..7c12e7146da5d 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -1,20 +1,12 @@
{
"name": "@jackyzha0/quartz",
-<<<<<<< HEAD
"version": "4.1.2",
-=======
- "version": "4.1.0",
->>>>>>> f8d1298d (fix: missing field in config)
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@jackyzha0/quartz",
-<<<<<<< HEAD
"version": "4.1.2",
-=======
- "version": "4.1.0",
->>>>>>> f8d1298d (fix: missing field in config)
"license": "MIT",
"dependencies": {
"@clack/prompts": "^0.6.3",
diff --git a/quartz/styles/custom.scss b/quartz/styles/custom.scss
index c642fb892cb45..983310eb0fed9 100644
--- a/quartz/styles/custom.scss
+++ b/quartz/styles/custom.scss
@@ -20,3 +20,9 @@ img {
display: flex;
justify-content: center;
}
+
+iframe {
+ display: block;
+ margin-left: auto;
+ margin-right: auto;
+}