Notes on Machine Learning: From Linear Models to Multimodal Transformers
Personal yet rigorous notes tracing the evolution of machine learning from linear and tree-based models to Transformers, language models, and vision–language systems. Written as a structured synthesis for learning and reflection, rather than a definitive textbook.