CV - Logan Thomson

logan@loganthomson.com | github.com/xycoord | loganthomson.com

Technical Skills

Core ML Engineering:

Specialised Expertise:

Active Exploration:

Research Skills:

Projects

All independently designed and implemented:

  1. Transformer Language Model [GitHub]
    From-scratch PyTorch implementation with KV caching and RoPE [Blog Post].
    Modular architecture emphasising code clarity and understanding (a brief RoPE sketch appears after this list).

  2. Reinforcement Learning Course [Part 1] [Part 2] [GitHub]
    Teaching RL through rigorous mathematical derivations and implementations from first principles.

  3. BPE Tokeniser [GitHub] [Blog Post]
    Optimised the training implementation through systematic profiling, reducing training time from hours to 13 seconds (an illustrative merge-step sketch appears after this list).

  4. Mechanistic Interpretability [GitHub]
    Reproduced “Toy Models of Superposition” experiments and trained SAEs (ReLU, TopK, BatchTopK) to extract their learnt features.

  5. Master's Research Project (Supervised by Ronald Clark) [GitHub] [Report]
    Fine-tuned diffusion models for image segmentation of transparent objects. Implemented and evaluated NeRF methods which learn how light bends in a scene.

  6. PPO Implementation [GitHub]
    From-scratch PPO agent featuring key modern techniques such as GAE and vectorised environments (a GAE sketch appears after this list).

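For illustration, a minimal sketch of rotary position embeddings (RoPE), one of the techniques in the transformer project above. This is a generic PyTorch sketch; the function names and shapes are assumptions, not the project's actual API.

    # Hypothetical RoPE sketch (not the repository's code): rotate consecutive
    # channel pairs of queries/keys by position-dependent angles.
    import torch

    def rope_angles(head_dim: int, seq_len: int, base: float = 10000.0) -> torch.Tensor:
        # One frequency per channel pair; angle = position * frequency.
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        positions = torch.arange(seq_len).float()
        return torch.outer(positions, inv_freq)  # (seq_len, head_dim // 2)

    def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
        # x: (..., seq_len, head_dim); rotate each (even, odd) channel pair.
        x1, x2 = x[..., 0::2], x[..., 1::2]
        cos, sin = angles.cos(), angles.sin()
        rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
        return rotated.flatten(-2)  # re-interleave the pairs back into head_dim
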
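Likewise, a deliberately unoptimised sketch of one BPE training step (count adjacent token pairs, merge the most frequent), showing the loop the tokeniser project profiles and optimises; names here are illustrative only.

    # Hypothetical single BPE merge step (not the optimised implementation).
    from collections import Counter

    def most_frequent_pair(sequences: list[list[int]]) -> tuple[int, int]:
        # Count every adjacent token pair across all sequences.
        counts = Counter()
        for seq in sequences:
            counts.update(zip(seq, seq[1:]))
        return counts.most_common(1)[0][0]

    def merge_pair(seq: list[int], pair: tuple[int, int], new_token: int) -> list[int]:
        # Replace every occurrence of `pair` with the new merged token id.
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                merged.append(new_token)
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        return merged
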
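Finally, a minimal sketch of Generalised Advantage Estimation (GAE), named in the PPO project; tensor shapes and variable names are assumptions rather than the repository's code.

    # Hypothetical GAE sketch: backward recursion over a rollout of length T.
    import torch

    def compute_gae(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
        # rewards, values, dones: float tensors of shape (T, num_envs);
        # last_value: value estimate for the state after the rollout, shape (num_envs,).
        T = rewards.shape[0]
        advantages = torch.zeros_like(rewards)
        gae = torch.zeros_like(last_value)
        for t in reversed(range(T)):
            next_value = last_value if t == T - 1 else values[t + 1]
            not_done = 1.0 - dones[t]
            delta = rewards[t] + gamma * next_value * not_done - values[t]
            gae = delta + gamma * lam * not_done * gae
            advantages[t] = gae
        return advantages, advantages + values  # advantages and returns
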
Active interest in AI safety research, particularly mechanistic interpretability approaches.

Team Experience

Co-founder at [The Grove], 2024

Education

University of Oxford, 2020-2024
Master's in Computer Science and Philosophy (MCompPhil)
First Class

Relevant Courses: Ethics of AI, Computer Vision, Geometric Deep Learning, Machine Learning, Ethics, Philosophy of Mind, Philosophy of Cognitive Science, Law and Computer Science, Computers in Society