Five ML Concepts - #29

Mar 04, 2026
Five ML Concepts - Part 29

Five ML concepts in under 30 seconds each: Neural Collapse (late-stage geometric convergence of class representations), Grokking (sudden generalization after prolonged memorization), SAM (optimizing for flat loss regions under perturbations), Mechanistic Interpretability (analyzing internal circuits of neural networks), Self-Training Instability (feedback loops that amplify errors in self-generated data).

five-ml-concepts neural-collapse grokking sharpness-aware-minimization mechanistic-interpretability

Five ML Concepts - #28

Mar 03, 2026
Five ML Concepts - Part 28

Five ML concepts in under 30 seconds each: Lottery Ticket Hypothesis (small winning subnetworks within large models), Sparse Activation (using only part of a model per input), Conditional Computation (dynamically routing inputs for efficiency), Inference Parallelism (distributing inference across devices), Compute Optimality (balancing model size, data, and compute).

five-ml-concepts lottery-ticket-hypothesis sparse-activation conditional-computation inference-parallelism

Five ML Concepts - #27

Mar 02, 2026
Five ML Concepts - Part 27

Five ML concepts in under 30 seconds each: Elastic Weight Consolidation (protecting important parameters during new task learning), Replay Buffers (mixing past examples to prevent forgetting), Parameter Routing (activating task-specific parameter subsets), Memory-Augmented Networks (external memory modules for neural networks), Model Editing (targeted weight updates without full retraining).

five-ml-concepts continual-learning elastic-weight-consolidation replay-buffers parameter-routing
How AI Learns - Part 7

A robust architecture: core model (rarely updated) + adapters (modular skills) + external memory (facts) + context manager (RLM-style) + logging and evaluation loop. Errors feed into memory first. Only recurring, validated improvements reach adapters.

llm agent architecture continuous-learning safety

Five ML Concepts - #26

Mar 01, 2026
Five ML Concepts - Part 26

Five ML concepts in under 30 seconds each: Data Augmentation (expanding training data with transformations), Caching Strategies (reducing latency by reusing computation), Constitutional AI (training models to follow explicit principles), Goodhart's Law (optimizing metrics distorts objectives), Manifold Hypothesis (data lies on lower-dimensional structures).

five-ml-concepts data-augmentation caching-strategies constitutional-ai goodharts-law
How AI Learns - Part 6

Continuous learning aims to absorb new information and skills over time without losing old capabilities. The key: learn often in memory, consolidate carefully in weights. Periodic consolidation, not constant updates.

llm continuous-learning replay ella share

Five ML Concepts - #25

Feb 28, 2026
Five ML Concepts - Part 25

Five ML concepts in under 30 seconds each: Label Smoothing (softening targets to reduce overconfidence), Miscalibration (confidence not matching accuracy), Representation Learning (automatically learning useful features), Adversarial Examples (inputs crafted to cause errors), Double Descent (test error decreasing twice with model size).

five-ml-concepts label-smoothing miscalibration representation-learning adversarial-examples
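Label smoothing from the #25 list fits in a few lines. A minimal sketch, assuming the variant that gives the true class `1 - eps` and splits `eps` over the other classes (the class count and `eps = 0.1` are illustrative):

```python
# Label smoothing: replace a one-hot target with a softened distribution,
# so the model is never pushed toward 100% confidence.

def smooth_labels(num_classes: int, true_class: int, eps: float = 0.1) -> list[float]:
    """True class gets 1 - eps; the remaining eps is shared by the others."""
    off = eps / (num_classes - 1)
    return [1.0 - eps if i == true_class else off for i in range(num_classes)]

target = smooth_labels(4, true_class=2, eps=0.1)
# The softened target still sums to 1 but keeps a floor of probability
# on the wrong classes, which discourages overconfident logits.
```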
How AI Learns - Part 5

Large context windows are not a complete solution. As context grows, attention dilutes and instructions drift. Recursive Language Models treat context as a dynamic environment, rebuilding focus each step instead of dragging everything forward.

llm context rlm recursive-language-models in-context-learning

Expanding my home AI cluster from 10% to 20% brain power with a new X99 motherboard and RTX 3090. Adding VoxCPM voice cloning, FLUX text-to-image, and Wan 2.2 text-to-video capabilities.

lucy ai-cluster rtx-3090 voice-cloning text-to-image
Personal Software - Part 5

Continuing the music-pipe-rs story: a web demo with Bach and Baroque arrangements, the seq command for explicit note sequences, and GarageBand integration. Plus the generative music resources that inspired this project.

rust midi music cli unix-pipes

Five ML Concepts - #24

Feb 27, 2026
Five ML Concepts - Part 24

Five ML concepts in under 30 seconds each: Warmup (gradually increasing learning rate at start), Data Leakage (training on unavailable deployment info), Mode Collapse (limited generative output variety), Blue/Green Deployment (switching between parallel production environments), Reward Hacking (exploiting reward function flaws).

five-ml-concepts warmup data-leakage mode-collapse blue-green-deployment
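Warmup from the #24 list can be sketched as a schedule function. A minimal example assuming linear warmup to a constant peak rate (the step count and peak value are illustrative, not from any particular recipe):

```python
# Learning-rate warmup: ramp the rate up linearly over the first steps,
# then hold it at the peak. Avoids large, destabilizing early updates.

def warmup_lr(step: int, warmup_steps: int = 100, peak_lr: float = 1e-3) -> float:
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps  # linear ramp from ~0
    return peak_lr                                  # constant after warmup

early = warmup_lr(0)     # tiny first step
peak = warmup_lr(500)    # full rate after warmup
```

In practice the constant tail is usually replaced by a decay schedule (cosine, linear), but the ramp is the part that "warmup" names.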
How AI Learns - Part 4

Modern AI systems increasingly rely on external memory. RAG, CAG, and Engram-style modules shift 'learning' away from weights. The brain stays stable. The notebook grows.

llm rag cag engram vector-database
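The "notebook" idea above — store facts outside the weights and look them up at query time — can be sketched in miniature. A toy example using word overlap in place of real embedding similarity; the `memory` strings are invented for illustration:

```python
# External-memory retrieval: the model stays fixed, the note store grows.
memory = [
    "the cluster has one RTX 3090",
    "engram modules cache exact lookups",
    "replay buffers mix in past examples",
]

def retrieve(query: str, notes: list[str]) -> str:
    """Return the note sharing the most words with the query.
    Real RAG systems score with dense embeddings instead of word overlap."""
    q = set(query.lower().split())
    return max(notes, key=lambda n: len(q & set(n.lower().split())))

best = retrieve("what do engram modules do", memory)
```

The retrieved note is then prepended to the prompt, so "learning" a new fact is just appending a string — no weight update required.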

Five ML Concepts - #23

Feb 26, 2026
Five ML Concepts - Part 23

Five ML concepts in under 30 seconds each: Emergent Behavior (capabilities appearing at scale), Tool Use (AI calling external tools), Loss Surface Sharpness (flatter minima generalize better), Learning Rate Schedules (adjusting learning rate during training), Canary Deployment (gradually rolling out new models safely).

five-ml-concepts emergent-behavior tool-use loss-surface learning-rate
How AI Learns - Part 3

Weight-based learning modifies the neural network itself. Pretraining, fine-tuning, LoRA, alignment methods, distillation---each changes the brain permanently. Slow to change, but forms the stable core.

llm pretraining fine-tuning lora rlhf
Throwback Thursday - Part 5

A browser-based IBM 1130 system emulator with authentic console panel indicator lights, keypunch, printer, and assembly game. Experience the full 1965 minicomputer ecosystem through interactive simulations. Work in progress.

tbt ibm-1130 emulator rust wasm

Five ML Concepts - #22

Feb 25, 2026
Five ML Concepts - Part 22

Five ML concepts in under 30 seconds each: RSFT (rejection sampling fine-tuning with filtered outputs), Model Steerability (adjusting behavior at inference time), LSTM (long short-term memory for sequences), Why More Data Beats Better Models (data scale trumps architecture tweaks), System Reliability vs Model Quality (balancing accuracy with uptime).

five-ml-concepts rsft rejection-sampling steerability lstm
How AI Learns - Part 2

Two fundamentally different failure modes plague AI systems. Catastrophic forgetting destroys old knowledge when learning new skills. Context rot loses early instructions in long conversations. Different problems, different solutions.

llm catastrophic-forgetting context-rot continuous-learning attention
Machine Learning - Part 6

Expanding many-eyes learning with intrinsic rewards and a new web visualization. CuriousScout uses count-based novelty, OptimisticScout uses optimistic initialization. The key trade-off: diversity helps during exploration, but once Q-values converge, all scouts should follow the same optimal policy. Strategy quality matters more than diversity in simple environments.

reinforcement-learning exploration sparse-rewards scouts intrinsic-rewards
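The count-based novelty reward that CuriousScout uses can be sketched directly. A minimal example assuming the common `1/sqrt(visits)` bonus form (the state encoding is illustrative):

```python
# Count-based intrinsic reward: states pay out in inverse proportion
# to how often they have been visited, so novelty itself is rewarded.
import math
from collections import Counter

visits: Counter = Counter()

def intrinsic_reward(state) -> float:
    visits[state] += 1
    return 1.0 / math.sqrt(visits[state])

first = intrinsic_reward((0, 0))   # novel state -> full bonus
again = intrinsic_reward((0, 0))   # repeat visit -> diminished bonus
```

The bonus decays toward zero as counts grow, which matches the post's trade-off: novelty drives exploration early, then fades so the converged policy dominates.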
How AI Learns - Part 1

When people say 'AI learned something,' they usually mean one of four very different things. Understanding these time scales---from milliseconds to years---is essential for building AI systems that improve safely over time.

llm learning pretraining fine-tuning rag

Five ML Concepts - #21

Feb 24, 2026
Five ML Concepts - Part 21

Five ML concepts in under 30 seconds each: Prompt Injection (malicious instructions overriding AI behavior), Jailbreaks (bypassing safety constraints), GRU (gated recurrent units for sequences), Planning vs Prediction (action evaluation vs forecasting), Production Rollbacks (reverting to stable model versions).

five-ml-concepts prompt-injection jailbreaks gru planning
Personal Software - Part 4

Personal Software continues. music-pipe-rs takes the Unix philosophy to MIDI composition---small tools connected by pipes. Start with a seed, generate motifs, transform, visualize, convert to MIDI. Deterministic output from a single seed at the pipeline head.

rust midi music ai-agents cli
Personal Software - Part 3

Personal Software grows. midi-cli-rs now supports custom mood packs---TOML files that extend built-in moods with your own musical variations. No Rust required. Define tempo, key, intensity, and let the generators handle the rest.

rust midi music ai-agents cli

Five ML Concepts - #20

Feb 23, 2026
Five ML Concepts - Part 20

Five ML concepts in under 30 seconds each: VAEs (generative with structured latents), Uncertainty Estimation (know when you don't know), Interpretability (distributed representations resist explanation), Gradient Noise (mini-batch variation), Human-in-the-Loop (human oversight for critical decisions).

five-ml-concepts vae uncertainty-estimation interpretability gradient-noise

Five ML Concepts - #19

Feb 22, 2026
Five ML Concepts - Part 19

Five ML concepts in under 30 seconds each: Autoencoders (compress and reconstruct), Correlation vs Causation (co-occurrence isn't cause), Curriculum Learning (easy to hard), Failure Analysis (categorize errors), Covariate Shift (new inputs, same task).

five-ml-concepts autoencoders correlation-causation curriculum-learning failure-analysis
Machine Learning - Part 5

ICL evolved from emergent surprise (2020) to mechanistic understanding (2022) to engineered capability (2026). Transformers implement implicit gradient descent during inference---they learn without weight updates. The frontier: models learning from their own feedback. Not magic. Meta-learning in plain sight.

in-context-learning icl transformers meta-learning gpt
General Technology - Part 2

JSON is everywhere, but it's not the only option. This post explores data formats beyond basic JSON—JSONL for streaming, JSONB for fast queries, Protocol Buffers for compact wire formats, YAML/TOML for human editing, and TOON for LLM efficiency. Each has trade-offs: you can pick at most two of readability, compactness, and speed.

json jsonb jsonl protobuf yaml

Five ML Concepts - #18

Feb 21, 2026
Five ML Concepts - Part 18

Five ML concepts in under 30 seconds each: Preference Learning (train from comparisons), Ensembling (combine models for robustness), ML Fragility (breaks on distribution shift), Epoch (one pass through data), Cost vs Quality (bigger isn't always better).

five-ml-concepts preference-learning ensembling ml-fragility epoch

Five ML Concepts - #17

Feb 20, 2026
Five ML Concepts - Part 17

Five ML concepts in under 30 seconds each: Benchmark Leakage (test data contamination), Concept vs Data Drift (changed relationships vs inputs), Weight Decay (L2 penalty for simplicity), Scaling Laws (predictable performance growth), Shadow Deployment (test alongside production).

five-ml-concepts benchmark-leakage concept-drift data-drift weight-decay
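Weight decay from the #17 list is one line inside an optimizer step. A minimal sketch of the decoupled form, with illustrative constants:

```python
# Weight decay as an L2-style penalty: every update shrinks the weight
# toward zero in addition to the gradient step, favoring simpler models.

def sgd_step(w: float, grad: float, lr: float = 0.1, decay: float = 0.01) -> float:
    # Decoupled weight decay: shrink first, then take the gradient step.
    return w * (1.0 - lr * decay) - lr * grad

w = sgd_step(1.0, grad=0.0)  # with zero gradient, decay alone shrinks the weight
```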
Personal Software - Part 2

Personal Software via Vibe Coding: a music tool for AI agents. midi-cli-rs provides mood presets (suspense, upbeat, calm, jazz) so agents can generate complete audio compositions from simple commands. No music theory required.

rust midi music ai-agents cli

Five ML Concepts - #16

Feb 19, 2026
Five ML Concepts - Part 16

Five ML concepts in under 30 seconds each: Train/Val/Test Split (separate data roles), Overconfidence (high probability wrong predictions), Batch Normalization (stable training), Optimization vs Generalization (low train loss doesn't mean good test), A/B Testing (compare with experiments).

five-ml-concepts train-val-test overconfidence batch-normalization generalization
Throwback Thursday - Part 4

ToonTalk is a 1995 visual programming environment where you train robots by showing them what to do. I vibe coded tt-rs, a Rust/WebAssembly reimplementation with boxes, scales, birds, nests, and robots---programming by demonstration for the browser.

tbt toontalk visual-programming rust webassembly
Multi-Hop Reasoning - Part 2

RSFT on easy examples made performance worse---27% vs 37% SFT baseline. Training distribution must match evaluation distribution. Easy examples teach shortcuts that fail on hard problems. The fix is one flag change.

knowledge-graphs multi-hop-reasoning mlx rsft distribution-matching
Towards Continuous LLM Learning - Part 2

Part 2 of implementing the Share algorithm: after fixing critical bugs (a zero-gradient saddle point, half-parameter training), routing-based coefficient selection achieves zero regressions. Result handling improved from 40% to 50%. We're 60% through verifying the paper's claims.

share-algorithm continual-learning rust lora sleepy-coder

Five ML Concepts - #15

Feb 18, 2026
Five ML Concepts - Part 15

Five ML concepts in under 30 seconds each: Perplexity (how surprised by data), Catastrophic Forgetting (new learning erases old), Weight Initialization (starting values matter), Curse of Dimensionality (high-D makes data sparse), Monitoring (track performance and drift).

five-ml-concepts perplexity catastrophic-forgetting weight-initialization curse-of-dimensionality
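Perplexity from the #15 list has a two-line definition worth seeing. A minimal sketch over illustrative per-token probabilities:

```python
# Perplexity: the exponentiated average negative log-likelihood --
# roughly, the effective number of choices the model is torn between.
import math

def perplexity(token_probs: list[float]) -> float:
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

uniform = perplexity([0.25, 0.25, 0.25, 0.25])  # guessing among 4 -> perplexity 4
confident = perplexity([0.9, 0.9, 0.9, 0.9])    # confident model -> lower value
```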

Five ML Concepts - #14

Feb 17, 2026
Five ML Concepts - Part 14

Five ML concepts in under 30 seconds each: ROC/AUC (performance across thresholds), Spurious Correlations (coincidental patterns), Gradient Clipping (limit gradients for stability), Loss Landscapes (error surface over parameters), Cold Start (no history for new users).

five-ml-concepts roc-auc spurious-correlations gradient-clipping loss-landscapes

Five ML Concepts - #13

Feb 16, 2026
Five ML Concepts - Part 13

Five ML concepts in under 30 seconds each: Calibration (predicted probabilities match outcomes), Shortcut Learning (exploiting spurious patterns), Early Stopping (halt when validation plateaus), Universal Approximation (NNs can fit any function), Checkpointing (save model state).

five-ml-concepts calibration shortcut-learning early-stopping universal-approximation
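Early stopping from the #13 list reduces to tracking the best validation loss with a patience counter. A minimal sketch; the loss curve and `patience=2` are illustrative:

```python
# Early stopping: halt when validation loss has not improved
# for `patience` consecutive evaluations.

def best_stop(val_losses: list[float], patience: int = 2) -> int:
    """Return the evaluation index at which training would stop."""
    best, best_i = float("inf"), 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i = loss, i          # new best -> reset patience
        elif i - best_i >= patience:
            return i                        # plateaued -> stop here
    return len(val_losses) - 1

stop = best_stop([1.0, 0.8, 0.7, 0.72, 0.75, 0.9])
```

The usual companion is checkpointing (also in this list): you stop at index `stop` but restore the weights saved at the best index.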

Five ML Concepts - #12

Feb 15, 2026
Five ML Concepts - Part 12

Five ML concepts in under 30 seconds each: Precision vs Recall (correct positives vs finding all), OOD Inputs (data unlike training), Batch Size (examples per update), Inductive Bias (built-in assumptions), Latency vs Throughput (speed vs capacity).

five-ml-concepts precision-recall ood batch-size inductive-bias
Machine Learning - Part 4

Personal Software for education: a neural network platform where every step is visible---no framework magic. CLI with progress bars, web UI with real-time loss charts, WASM for browser execution. Built via Vibe Coding to watch XOR training reveal why hidden layers matter.

rust neural-networks backpropagation wasm cli-tools
Personal Software - Part 1

Personal Software via Vibe Coding: I needed to find cat photos scattered across my system. Instead of cloud services or app stores, I described what I wanted to Claude Code and got a working Rust CLI tool using YOLOv8 and ONNX Runtime. Privacy-first, locally-run, and mine to modify.

rust yolo onnx object-detection computer-vision

Five ML Concepts - #11

Feb 14, 2026
Five ML Concepts - Part 11

Five ML concepts in under 30 seconds each: RNN (sequential processing with memory), Chain of Thought (step-by-step reasoning), Softmax (scores to probabilities), MoE (route inputs to specialists), Distribution Shift (training vs deployment mismatch).

five-ml-concepts rnn chain-of-thought softmax moe
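Softmax from the #11 list — "scores to probabilities" — is short enough to show whole. A minimal sketch including the standard max-subtraction trick for numerical stability (the input scores are illustrative):

```python
# Softmax: exponentiate and normalize, turning arbitrary scores
# into a probability distribution that sums to 1.
import math

def softmax(scores: list[float]) -> list[float]:
    m = max(scores)                           # subtract max: same result,
    exps = [math.exp(s - m) for s in scores]  # no overflow for large scores
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # highest score -> highest probability
```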
Machine Learning - Part 3

When data won't fit in a context window, RLM expands the workspace instead. The MIT paper achieves 87-91% accuracy where standard prompting scores 0%. My Rust implementation provides four capability levels from DSL commands to WASM sandboxing to LLM delegation.

rlm recursive-language-models context-window rust wasm

Five ML Concepts - #10

Feb 13, 2026
Five ML Concepts - Part 10

Five ML concepts in under 30 seconds each: CNN (sliding filters for image features), Encoder-Decoder (compress then generate), RAG (retrieve context before generating), Few-shot Learning (learn from prompt examples), Distillation (small student mimics large teacher).

five-ml-concepts cnn encoder-decoder rag few-shot-learning
Throwback Thursday - Part 3

Before pixels, there were vectors. Vibe Coding classic arcade games (Asteroids, BattleZone, Tempest) in Rust/WebAssembly with wgpu rendering---from my first encounter with an IBM 2250 to playable browser demos, all built in one day with Claude Code.

vector-graphics throwback-thursday retrocomputing arcade rust
Machine Learning - Part 2

When multiple AI agents work together, fixed communication patterns fail at scale. DyTopo rebuilds the graph each round based on semantic similarity between what agents need and what they can offer, preventing context explosion while enabling adaptive collaboration.

rust multi-agent topology routing llm
Towards Continuous LLM Learning - Part 1

What happens when you fine-tune a model on new tasks? It forgets old ones. This post documents our implementation of the Share algorithm in Rust—using SVD-based subspace extraction to enable continual learning without catastrophic forgetting. Part 1 covers the problem and initial negative results.

lora fine-tuning continual-learning rust catastrophic-forgetting

Five ML Concepts - #9

Feb 12, 2026
Five ML Concepts - Part 9

Five ML concepts in under 30 seconds each: Dropout (random disabling prevents overfitting), RLHF (learn from human preferences), Inference (using trained models), Quantization (lower precision for efficiency), Flash Attention (block-wise for memory savings).

five-ml-concepts dropout rlhf inference quantization
Deepseek Papers - Part 3

From behavioral emulation to real implementation: integrating hash-based Engram memory with HuggingFace models. The gating mechanism is critical---it learns when to trust memory lookup and when hash collisions would add noise. Engram excels at exact-match retrieval, not generalization.

deepseek engram transformers memory hash-table

Five ML Concepts - #8

Feb 11, 2026
Five ML Concepts - Part 8

Five ML concepts in under 30 seconds each: Bias-Variance Tradeoff (balance under/overfitting), Diffusion (generate by learning to denoise), KV Cache (store past keys/values), Mixed Precision (lower precision for speed), MLA (compress attention into latent space).

five-ml-concepts bias-variance diffusion kv-cache mixed-precision

Five ML Concepts - #7

Feb 10, 2026
Five ML Concepts - Part 7

Five ML concepts in under 30 seconds each: Cross-Validation (rotate held-out data), GPT (predict next token at scale), GQA (shared keys/values for efficiency), Context Window (how much the model sees), Self-Attention (each token attends to all others).

five-ml-concepts cross-validation gpt gqa context-window

Five ML Concepts - #6

Feb 09, 2026
Five ML Concepts - Part 6

Five ML concepts in under 30 seconds each: Regularization (constraints to prevent overfitting), BERT (bidirectional masked language modeling), RoPE (position via rotation in attention), Prompting (craft inputs to steer outputs), Positional Encoding (tell model where tokens are).

five-ml-concepts regularization bert rope prompting

Five ML Concepts - #5

Feb 08, 2026
Five ML Concepts - Part 5

Five ML concepts in under 30 seconds each: Perceptron (single linear unit ancestor), Pre-training (learn general patterns first), Speculative Decoding (draft fast, verify in parallel), In-Context Learning (adapt from prompt examples), Latent Space (internal representations where similar things cluster).

five-ml-concepts perceptron pre-training speculative-decoding in-context-learning

Five ML Concepts - #4

Feb 07, 2026
Five ML Concepts - Part 4

Five ML concepts in under 30 seconds each: Activation Functions (introduce nonlinearity), Transfer Learning (reuse knowledge across tasks), VLM (joint image-text understanding), Adam (adaptive learning rates), Superposition (many concepts in overlapping representations).

five-ml-concepts activation-functions transfer-learning vlm adam-optimizer

Five ML Concepts - #3

Feb 06, 2026
Five ML Concepts - Part 3

Five ML concepts in under 30 seconds each: Loss Function (how far off predictions are), Overfitting (memorizing vs learning), Fine-tuning (specializing pre-trained models), LoRA (efficient adaptation with small matrices), Tokenization (breaking text into digestible pieces).

five-ml-concepts loss-function overfitting fine-tuning lora
Throwback Thursday - Part 2

Unix invented pipes. Mainframes reinvented them for records, not bytes. This Throwback Thursday recreates CMS/TSO Pipelines in Rust with a visual debugger, demonstrating record-oriented dataflow from the 1996 Olympics web server era.

pipelines throwback-thursday retrocomputing ibm mainframe
Small Models, Big Brains - Part 6

Which small AI fits your laptop? Benchmarking Phi-2, Gemma-2B, and SmolLM on the 2-3B efficient frontier. Phi-2 achieves 61.7% MMLU with only 2.7B parameters, beating models 5x larger through synthetic textbook training. Data quality beats parameters.

phi-2 gemma smollm efficient-llm benchmarks

Five ML Concepts - #2

Feb 05, 2026
Five ML Concepts - Part 2

Five ML concepts in under 30 seconds each: Gradient Descent (walk downhill to minimize error), Attention (focus on what matters), DPO (align from preference pairs), Learning Rate (step size tradeoff), Temperature (dial between predictable and creative).

five-ml-concepts gradient-descent attention dpo learning-rate
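"Walk downhill to minimize error" from the #2 list can be made concrete in one dimension. A minimal sketch assuming the toy objective f(x) = (x - 3)^2, whose minimum is at x = 3:

```python
# Gradient descent: repeatedly step against the slope of the loss.
# Learning rate controls the step-size tradeoff named in the same list.

def minimize(x: float, lr: float = 0.1, steps: int = 100) -> float:
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)   # derivative of (x - 3)^2
        x -= lr * grad           # walk downhill
    return x

x_min = minimize(0.0)   # converges toward the true minimum at x = 3
```

Too small an `lr` and convergence crawls; too large and the iterates overshoot and diverge — the step-size tradeoff in miniature.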
Small Models, Big Brains - Part 5

One billion parameters: the sweet spot for AI. Big enough to reason, small enough to run anywhere. Comparing TinyLlama, Llama-3.2-1B, StableLM, and Pythia with LoRA fine-tuning in minutes and speculative decoding for 2-3x speedups.

tinyllama llama pythia stablelm fine-tuning

Five ML Concepts - #1

Feb 04, 2026
Five ML Concepts - Part 1

Five ML concepts in under 30 seconds each: Backpropagation (learning by flowing error backward), Transformers (attention over all tokens), Mamba (linear-time sequence modeling), Hallucination (confident nonsense), and Embeddings (meaning as coordinates).

five-ml-concepts backpropagation transformer mamba hallucination
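"Meaning as coordinates" from the #1 list is easiest to see with cosine similarity. A minimal sketch using hand-made 3-D vectors (real embeddings are learned and have hundreds of dimensions; these values are invented for illustration):

```python
# Embeddings: similar meanings sit close together in vector space,
# so the angle between vectors measures relatedness.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

cat    = [1.0, 0.9, 0.0]   # illustrative coordinates
kitten = [0.9, 1.0, 0.1]   # near "cat"
car    = [0.0, 0.1, 1.0]   # far from both
```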
Machine Learning - Part 1

Single explorer: 0% success. Five explorers: 60% success. Sparse rewards are an information problem, not a compute problem. Using multiple scouts with different exploration strategies, we gather diverse discoveries that benefit a shared learner.

reinforcement-learning exploration sparse-rewards scouts dqn
Small Models, Big Brains - Part 4

LLMs are black boxes. Baby Dragon Hatchling uses brain-inspired sparse coding with 80% sparsity, making only 20% of neurons active per token. When fewer neurons fire, each one carries interpretable meaning. Train it on Shakespeare and actually see what's happening inside.

bdh baby-dragon-hatchling sparse-activations interpretable-ai
General Technology - Part 1

Teaching Claude to play tic-tac-toe and trash talk using Model Context Protocol (MCP). A Rust server exposes 6 tools via JSON-RPC over stdio, proving MCP standardizes AI tool integration across any compatible language model.

mcp model-context-protocol rust claude game-dev
Small Models, Big Brains - Part 3

27 million parameters beats o3-mini on ARC. The Hierarchical Reasoning Model separates planning from execution, mimicking the brain's dual-process theory. It achieves 40% on the hardest reasoning benchmark where most LLMs score under 5%.

hrm hierarchical-reasoning arc-challenge planning
Deepseek Papers - Part 2

Implementing Deepseek's Engram paper on conditional memory. Instead of recomputing common patterns through O(n^2) attention, Engram provides O(1) lookup for cached results. Our LoRA-based behavioral approximation achieves 58% loss reduction in 10 seconds.

deepseek engram transformers apple-silicon cuda
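The O(n^2)-attention-versus-O(1)-lookup contrast above is, at its core, memoization. A toy sketch of the cached-lookup side; `slow_compute` stands in for the expensive recomputation path and is not from the paper:

```python
# Engram-style cached lookup: compute a common pattern once,
# then serve repeats from a hash table in O(1).
calls = 0

def slow_compute(pattern: str) -> str:
    global calls
    calls += 1                 # stands in for an O(n^2) attention pass
    return pattern.upper()

cache: dict[str, str] = {}

def engram_lookup(pattern: str) -> str:
    if pattern not in cache:           # miss: compute once and remember
        cache[pattern] = slow_compute(pattern)
    return cache[pattern]              # hit: O(1) hash lookup

a = engram_lookup("common phrase")
b = engram_lookup("common phrase")     # served from cache, no recompute
```

The hard part the post points at is the gating: knowing when an exact-match hit is trustworthy and when falling back to full attention is safer.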
Multi-Hop Reasoning - Part 1

A 135M parameter model goes from 0% to 75% accuracy in 5 minutes. Using knowledge graph-guided training with rejection sampling, we teach multi-hop reasoning with scaffolding during training, then remove it at inference.

knowledge-graphs multi-hop-reasoning mlx apple-silicon lora
Small Models, Big Brains - Part 2

AI in your pocket, no internet required. Pocket Eliza++ runs MobileLLM-350M on Android via llama.cpp and JNI, creating a privacy-first therapist chatbot. The 260MB quantized model achieves ~10 tokens/second on mid-range phones.

mobilellm android offline-ai llama-cpp privacy
Deepseek Papers - Part 1

Implementing Deepseek's mHC (Manifold-Constrained Hyper-Connections) paper. Using Sinkhorn-Knopp iteration to create doubly-stochastic matrices, mHC maintains training stability at 48 layers where standard hyper-connections explode. Cross-platform validation on Apple Silicon and NVIDIA.

deepseek mhc transformers apple-silicon cuda
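The Sinkhorn-Knopp iteration mentioned above is simple to state: alternately normalize rows and columns until the matrix is doubly stochastic. A minimal sketch on an illustrative 2x2 positive matrix (the real mHC setting applies this to connection-mixing matrices):

```python
# Sinkhorn-Knopp: alternate row and column normalization of a positive
# matrix; it converges to a doubly-stochastic matrix (rows and columns
# each sum to 1), which is what keeps the mixing weights bounded.

def sinkhorn(m: list[list[float]], iters: int = 50) -> list[list[float]]:
    for _ in range(iters):
        # Normalize each row to sum to 1...
        m = [[v / sum(row) for v in row] for row in m]
        # ...then each column to sum to 1.
        col_sums = [sum(row[j] for row in m) for j in range(len(m[0]))]
        m = [[row[j] / col_sums[j] for j in range(len(row))] for row in m]
    return m

ds = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
```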
Small Models, Big Brains - Part 1

The best LLMs score zero on hard mazes. A model with 976 parameters scores 85%. The Tiny Recursive Model uses think-act cycles with deep supervision, proving iteration beats scale for tasks requiring backtracking and spatial reasoning.

trm tiny-recursive-model maze-solving recursive-reasoning

Introduction to Software Wrighter Lab: a blog, YouTube channel, and GitHub repos exploring AI coding agents, systems programming in Rust, and practical ML implementations. Written by Mike Wright, a software engineer with 40+ years of experience from mainframes to modern AI.

about rust ai-agents machine-learning wasm
Throwback Thursday - Part 1

My first program was a horse race game in APL on an IBM mainframe in 1972. This Throwback Thursday post recreates it using GNU APL, exploring array-oriented programming and the ideas that shaped languages from J to NumPy.

apl throwback-thursday retrocomputing ibm mainframe