Blog Categories
ai (2)
ai-agents (10)
android (1)
benchmarks (2)
computer-history (1)
data-formats (1)
efficiency (2)
explainers (29)
fundamentals (1)
hardware (1)
homelab (1)
interpretability (1)
introduction (1)
llm (45)
machine-learning (51)
mcp (1)
meta (1)
mobile (1)
multi-agent (1)
programming-history (3)
projects (2)
research (14)
rust (10)
tbt (5)
tools (4)
vibe-coding (10)
visual-programming (1)
webassembly (1)
ai-agents
- How AI Learns Part 7: Designing a Continuous Learning Agent
- How AI Learns Part 6: Toward Continuous Learning
- How AI Learns Part 5: Context Engineering & Recursive Reasoning
- How AI Learns Part 4: Memory-Based Learning
- How AI Learns Part 3: Weight-Based Learning
- How AI Learns Part 2: Catastrophic Forgetting vs Context Rot
- How AI Learns Part 1: The Many Meanings of Learning
- music-pipe-rs: Unix Pipelines for MIDI Composition
- midi-cli-rs: Extending with Custom Mood Packs
- midi-cli-rs: Music Generation for AI Coding Agents
explainers
- Five ML Concepts - #29
- Five ML Concepts - #28
- Five ML Concepts - #27
- Five ML Concepts - #26
- Five ML Concepts - #25
- Five ML Concepts - #24
- Five ML Concepts - #23
- Five ML Concepts - #22
- Five ML Concepts - #21
- Five ML Concepts - #20
- Five ML Concepts - #19
- Five ML Concepts - #18
- Five ML Concepts - #17
- Five ML Concepts - #16
- Five ML Concepts - #15
- Five ML Concepts - #14
- Five ML Concepts - #13
- Five ML Concepts - #12
- Five ML Concepts - #11
- Five ML Concepts - #10
- Five ML Concepts - #9
- Five ML Concepts - #8
- Five ML Concepts - #7
- Five ML Concepts - #6
- Five ML Concepts - #5
- Five ML Concepts - #4
- Five ML Concepts - #3
- Five ML Concepts - #2
- Five ML Concepts - #1
llm
- Five ML Concepts - #29
- Five ML Concepts - #28
- Five ML Concepts - #27
- Five ML Concepts - #26
- Five ML Concepts - #25
- Five ML Concepts - #24
- Five ML Concepts - #23
- Five ML Concepts - #22
- Five ML Concepts - #21
- Five ML Concepts - #20
- In-Context Learning Revisited: From Mystery to Engineering
- Five ML Concepts - #19
- Five ML Concepts - #18
- Five ML Concepts - #17
- Five ML Concepts - #16
- Multi-Hop Reasoning (2/2): The Distribution Trap
- Towards Continuous LLM Learning (2): Routing Prevents Forgetting
- Five ML Concepts - #15
- Five ML Concepts - #14
- Five ML Concepts - #13
- Five ML Concepts - #12
- Five ML Concepts - #11
- RLM: Recursive Language Models for Massive Context
- Five ML Concepts - #10
- Towards Continuous LLM Learning (1): Sleepy Coder - When Fine-Tuning Fails
- Five ML Concepts - #9
- Five ML Concepts - #8
- Deepseek Papers (3/3): Engram Revisited - From Emulation to Implementation
- Five ML Concepts - #7
- Five ML Concepts - #6
- Five ML Concepts - #5
- Five ML Concepts - #4
- Five ML Concepts - #3
- Small Models (6/6): Which Small AI Fits YOUR Laptop?
- Five ML Concepts - #2
- Small Models (5/6): Max AI Per Watt
- Five ML Concepts - #1
- Small Models (4/6): This AI Has a Visible Brain
- Solving Sparse Rewards with Many Eyes
- Small Models (3/6): Planner + Doer = Genius
- Deepseek Papers (2/3): Engram - Conditional Memory for Transformers
- Multi-Hop Reasoning (1/2): Training Wheels for Small LLMs
- Small Models (2/6): AI in Your Pocket
- Deepseek Papers (1/3): mHC - Training Stability at Any Depth
- Small Models (1/6): 976 Parameters Beat Billions
machine-learning
- Five ML Concepts - #29
- Five ML Concepts - #28
- How AI Learns Part 7: Designing a Continuous Learning Agent
- Five ML Concepts - #27
- How AI Learns Part 6: Toward Continuous Learning
- Five ML Concepts - #26
- How AI Learns Part 5: Context Engineering & Recursive Reasoning
- Five ML Concepts - #25
- How AI Learns Part 4: Memory-Based Learning
- Five ML Concepts - #24
- How AI Learns Part 3: Weight-Based Learning
- Five ML Concepts - #23
- How AI Learns Part 2: Catastrophic Forgetting vs Context Rot
- Five ML Concepts - #22
- Many-Eyes Learning: Intrinsic Rewards and Diversity
- How AI Learns Part 1: The Many Meanings of Learning
- Five ML Concepts - #21
- Five ML Concepts - #20
- In-Context Learning Revisited: From Mystery to Engineering
- Five ML Concepts - #19
- Five ML Concepts - #18
- Five ML Concepts - #17
- Five ML Concepts - #16
- Multi-Hop Reasoning (2/2): The Distribution Trap
- Towards Continuous LLM Learning (2): Routing Prevents Forgetting
- Five ML Concepts - #15
- Five ML Concepts - #14
- Five ML Concepts - #13
- Five ML Concepts - #12
- Neural-Net-RS: An Educational Neural Network Platform
- Cat Finder: Personal Software via Vibe Coding
- Five ML Concepts - #11
- Five ML Concepts - #10
- Towards Continuous LLM Learning (1): Sleepy Coder - When Fine-Tuning Fails
- Five ML Concepts - #9
- Five ML Concepts - #8
- Deepseek Papers (3/3): Engram Revisited - From Emulation to Implementation
- Five ML Concepts - #7
- Five ML Concepts - #6
- Five ML Concepts - #5
- Five ML Concepts - #4
- Five ML Concepts - #3
- Five ML Concepts - #2
- Five ML Concepts - #1
- Small Models (4/6): This AI Has a Visible Brain
- Solving Sparse Rewards with Many Eyes
- Small Models (3/6): Planner + Doer = Genius
- Deepseek Papers (2/3): Engram - Conditional Memory for Transformers
- Multi-Hop Reasoning (1/2): Training Wheels for Small LLMs
- Deepseek Papers (1/3): mHC - Training Stability at Any Depth
- Small Models (1/6): 976 Parameters Beat Billions
programming-history
- TBT (3/?): Vector Graphics Games
- TBT (2/?): Pipelines on OS/390
- TBT (1/?): My First Program Was a Horse Race
research
- Many-Eyes Learning: Intrinsic Rewards and Diversity
- In-Context Learning Revisited: From Mystery to Engineering
- Multi-Hop Reasoning (2/2): The Distribution Trap
- Towards Continuous LLM Learning (2): Routing Prevents Forgetting
- RLM: Recursive Language Models for Massive Context
- DyTopo: Dynamic Topology for Multi-Agent AI
- Towards Continuous LLM Learning (1): Sleepy Coder - When Fine-Tuning Fails
- Deepseek Papers (3/3): Engram Revisited - From Emulation to Implementation
- Solving Sparse Rewards with Many Eyes
- Small Models (3/6): Planner + Doer = Genius
- Deepseek Papers (2/3): Engram - Conditional Memory for Transformers
- Multi-Hop Reasoning (1/2): Training Wheels for Small LLMs
- Deepseek Papers (1/3): mHC - Training Stability at Any Depth
- Small Models (1/6): 976 Parameters Beat Billions
rust
- music-pipe-rs: Web Demo and Multi-Instrument Arrangements
- TBT (5/?): IBM 1130 System Emulator - Experience 1960s Computing
- music-pipe-rs: Unix Pipelines for MIDI Composition
- midi-cli-rs: Extending with Custom Mood Packs
- midi-cli-rs: Music Generation for AI Coding Agents
- Neural-Net-RS: An Educational Neural Network Platform
- Cat Finder: Personal Software via Vibe Coding
- RLM: Recursive Language Models for Massive Context
- DyTopo: Dynamic Topology for Multi-Agent AI
- MCP: Teaching Claude to Play (and Trash Talk)
tbt
- TBT (5/?): IBM 1130 System Emulator - Experience 1960s Computing
- TBT (4/?): ToonTalk - Teaching Robots to Program
- TBT (3/?): Vector Graphics Games
- TBT (2/?): Pipelines on OS/390
- TBT (1/?): My First Program Was a Horse Race
tools
- music-pipe-rs: Web Demo and Multi-Instrument Arrangements
- music-pipe-rs: Unix Pipelines for MIDI Composition
- midi-cli-rs: Extending with Custom Mood Packs
- midi-cli-rs: Music Generation for AI Coding Agents
vibe-coding
- music-pipe-rs: Web Demo and Multi-Instrument Arrangements
- TBT (5/?): IBM 1130 System Emulator - Experience 1960s Computing
- Many-Eyes Learning: Intrinsic Rewards and Diversity
- music-pipe-rs: Unix Pipelines for MIDI Composition
- midi-cli-rs: Extending with Custom Mood Packs
- midi-cli-rs: Music Generation for AI Coding Agents
- TBT (4/?): ToonTalk - Teaching Robots to Program
- Neural-Net-RS: An Educational Neural Network Platform
- Cat Finder: Personal Software via Vibe Coding
- TBT (3/?): Vector Graphics Games