diff --git a/public/assets/images/og/building-cortex.jpg b/public/assets/images/og/building-cortex.jpg
new file mode 100644
index 0000000..aaca714
Binary files /dev/null and b/public/assets/images/og/building-cortex.jpg differ
diff --git a/public/assets/images/og/car-engine-analogy.png b/public/assets/images/og/car-engine-analogy.png
new file mode 100644
index 0000000..d222da1
Binary files /dev/null and b/public/assets/images/og/car-engine-analogy.png differ
diff --git a/src/pages/blog/llama-learns-to-talk.mdx b/src/pages/blog/llama-learns-to-talk.mdx
index dd6d7ab..5a102b6 100644
--- a/src/pages/blog/llama-learns-to-talk.mdx
+++ b/src/pages/blog/llama-learns-to-talk.mdx
@@ -15,19 +15,31 @@ import ResearchCTABlog from '@/components/Blog/ResearchCTABlog'
# 🍓 Ichigo: Llama Learns to Talk
-Homebrew's early-fusion speech model has evolved. Meet 🍓 Ichigo - the latest llama3-s checkpoint.
+Homebrew's early-fusion speech model has evolved. Meet 🍓 Ichigo - the latest llama3-s checkpoint, with which we're experimenting to teach Llama3 to talk and to recognize when it can't understand speech.
Inspired by the [Chameleon](https://arxiv.org/pdf/2405.09818) and [Llama Herd](https://arxiv.org/pdf/2407.21783) papers, llama3-s (Ichigo) is an early-fusion, audio and text, multimodal model. We're conducting this research entirely in the open, with an open-source [codebase](https://github.com/homebrewltd/ichigo), [open data](https://huggingface.co/datasets/homebrewltd/instruction-speech-v1.5) and [open weights](https://huggingface.co/homebrewltd/llama3-s-2024-07-19).
![Llama learns to talk](./_assets/ichigov0.3/ichigo.png)
*Image generated by ChatGPT*
## Demo
diff --git a/src/pages/blog/why-we-are-building-cortex.mdx b/src/pages/blog/why-we-are-building-cortex.mdx
new file mode 100644
index 0000000..ac6a973
--- /dev/null
+++ b/src/pages/blog/why-we-are-building-cortex.mdx
@@ -0,0 +1,68 @@
+---
+title: Why we're building 🤖 Cortex
+authorURL: https://twitter.com/homebrewltd
+description: "Cortex might initially appear to be just another LLM distribution package but our plan is more than building a local AI alternative."
+categories: product
+ogImage: assets/images/og/building-cortex.jpg
+date: 2024-10-30
+---
+
+import { Callout } from 'nextra/components'
+import BlogBackButton from '@/components/Blog/BackButton'
+import BlogAuthors from '@/components/Blog/Authors'
+import ResearchCTABlog from '@/components/Blog/ResearchCTABlog'
+
+
+
+# Why we're building Cortex
+
+
+
+We launched [🤖 Cortex](https://github.com/janhq/cortex.cpp), a local AI API Platform that allows people to run and customize models locally. It might initially appear to be just another LLM distribution package, but our plan goes beyond building a local AI alternative to the OpenAI API Platform.
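+
+To make "a local AI alternative to the OpenAI API Platform" concrete, here is a minimal sketch of calling a locally served model from TypeScript. The port (39281) and the model id are assumptions used for illustration - check the Cortex docs for the defaults on your install. The point is that the request and response shapes mirror the OpenAI API, so existing clients keep working while nothing leaves your machine.
+
+```typescript
+// Minimal sketch: chat with a model served by a local, OpenAI-compatible API.
+// Port and model id below are assumptions, not confirmed Cortex defaults.
+async function chatLocally(prompt: string): Promise<string> {
+  const res = await fetch("http://127.0.0.1:39281/v1/chat/completions", {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({
+      model: "llama3.1:8b-gguf", // whichever model you pulled locally
+      messages: [{ role: "user", content: prompt }],
+    }),
+  });
+  if (!res.ok) throw new Error(`Local server returned ${res.status}`);
+  const data = await res.json();
+  // Same response shape as the OpenAI API.
+  return data.choices[0].message.content;
+}
+
+chatLocally("Explain early-fusion multimodal models in one sentence.")
+  .then(console.log)
+  .catch(console.error);
+```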
+## Cortex and related links
+
+
+- Give it a try: https://cortex.so/
+- Source Code: https://github.com/janhq/cortex.cpp
+- API Reference: https://cortex.so/api-reference/
+- Community: https://discord.gg/Exe46xPMbK
+
+## Fundamental Walls
+
+The interesting part is what it represents. When we were building [Jan](https://jan.ai/) (which was also our previous company name), we kept hitting walls. Not software walls, but fundamental ones - hardware constraints, model limitations, integration challenges. That's when we realized we needed to become a full-stack AI company. Hence, Homebrew.
+
+Cortex is our experiment to build a brain. It's a standalone product and powers [Jan](https://github.com/janhq/jan), making models run faster and more stably. But that's just the start. We want Cortex to be the brain you can put anywhere - from robots to coffee makers. No cloud dependencies. No remote servers. Just AI running directly on hardware.
+
+### A glue for AI's future
+The reason this matters is that whoever controls AI infrastructure controls AI's future. If we want AI to stay open and accessible, we need strong open-source alternatives. Not just in software, but all the way down to how models run on hardware. This may turn out to be wrong. But it feels like the right kind of wrong - the kind worth pursuing.
+
+The hardest problems in AI aren't just about models anymore. They're about integration. You can run a text model and a voice model with separate libraries, but multi-modality requires making them work together in a stable way - there's a sketch of such a pipeline after the figure below. It's the difference between having the parts of a car and having a car that actually drives.
+
+![Car and Engine](/assets/images/og/car-engine-analogy.png)
+*This is a really, really good analogy. [Source](https://www.reddit.com/r/LocalLLaMA/comments/1gd6df6/comment/lu2mjbb/)*
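+
+As a sketch of what "work together in a stable way" means in practice, here is a hypothetical two-model pipeline in TypeScript: a speech model feeding a text model through one local API. The endpoint paths, port, and model names are assumptions borrowed from the OpenAI API shapes, not a confirmed Cortex surface - the point is that multi-modality is an integration problem before it is a modeling problem.
+
+```typescript
+// Hypothetical pipeline: speech model -> text model, both behind one local
+// server. Paths, port, and model names are illustrative assumptions.
+const BASE = "http://127.0.0.1:39281/v1";
+
+async function transcribe(audio: Blob): Promise<string> {
+  const form = new FormData();
+  form.append("file", audio, "clip.wav");
+  form.append("model", "whisper-small"); // hypothetical local speech model
+  const res = await fetch(`${BASE}/audio/transcriptions`, { method: "POST", body: form });
+  const { text } = await res.json();
+  return text;
+}
+
+async function reply(transcript: string): Promise<string> {
+  const res = await fetch(`${BASE}/chat/completions`, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({
+      model: "llama3.1:8b-gguf", // hypothetical local text model
+      messages: [{ role: "user", content: transcript }],
+    }),
+  });
+  const data = await res.json();
+  return data.choices[0].message.content;
+}
+
+// Voice in, text out. The two models only cooperate because the layer
+// underneath them is one stable runtime, not five libraries glued together.
+export async function voiceAssistant(audio: Blob): Promise<string> {
+  return reply(await transcribe(audio));
+}
+```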
+
+### C++ over Go
+This integration problem shows up everywhere in AI, but most clearly in the mess of libraries needed to make anything work. Want to combine vision and language models? Get ready to juggle five different libraries with conflicting dependencies. The industry treats this as a minor inconvenience, but it's actually a fundamental barrier to progress.
+
+This is why we chose C++ over Go for Cortex. It seems like a small technical decision, but it reflects a deeper truth: being "full-stack" in AI means something different than it did in web development. In web dev, you can get away with different languages and frameworks at different layers.
+
+In AI, the stack needs to be more cohesive. The overhead of crossing language boundaries becomes a real problem when you're processing gigabytes of data in real time.
+
+### Universal file formats
+You see this pattern again in how AI systems handle model files. Most platforms use simple checksums like SHA256 to identify models. It's the obvious solution - the kind that seems fine until it isn't. But it breaks down as soon as you need to do anything sophisticated with model composition or versioning. It's like trying to build git with just file hashes.
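+
+As an illustration of the alternative, consider identifying a model by a manifest of per-component hashes instead of one checksum over a single file. The format below is hypothetical - a sketch of how composition and versioning become tractable once each part of a model is addressable on its own.
+
+```typescript
+// Hypothetical manifest format: hash each component of a model separately,
+// then derive the model's id from its parts (the "git tree" idea).
+import { createHash } from "node:crypto";
+
+interface ModelComponent {
+  kind: "weights" | "adapter" | "tokenizer" | "config";
+  path: string;
+  sha256: string; // content hash of this component only
+}
+
+interface ModelManifest {
+  name: string;
+  components: ModelComponent[];
+}
+
+// Two models that share a base checkpoint but swap one adapter differ in
+// exactly one entry - something a flat SHA256 over the whole file can't show.
+function manifestId(m: ModelManifest): string {
+  const h = createHash("sha256");
+  for (const c of m.components) h.update(`${c.kind}:${c.sha256}\n`);
+  return h.digest("hex");
+}
+```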
+
+So, our mission is to build a local alternative to the cloud AI platforms. We plan for Cortex to serve as the brain of that ecosystem.
+
+
+Your feedback is the most important input for making this better. Join our community and share your critiques with us: https://discord.gg/2mWSNnRd
+
+
+---
+
+