The 2025 AI Engineer Reading List

We picked 50 papers/models/blogs across 10 fields in AI Eng: LLMs, Benchmarks, Prompting, RAG, Agents, CodeGen, Vision, Voice, Diffusion, Finetuning. If you're starting from scratch, start here.

Dec 28, 2024

Discussions on X, LinkedIn, YouTube. Also: Meet AI Engineers in person! Applications closing soon for attending and sponsoring AI Engineer Summit NYC, Feb 20-21.


The picks from all the speakers in our Best of 2024 series catch you up on 2024, but since we wrote about running Paper Clubs, we’ve been asked many times for a reading list to recommend for those starting from scratch at work or with friends. We started with the 2023 a16z Canon, but it needs a 2025 update and a practical focus.

Here we curate “required reads” for the AI engineer. Our design goals are:

  • pick ~50 papers (~one1 a week for a year), plus optional extras. An arbitrary but workable constraint.

  • tell you why each paper matters instead of just name-dropping it without helpful context

  • be very practical for the AI Engineer; no time wasted on Attention is All You Need, because 1) everyone else already starts there, and 2) most won’t really need it at work

Latent Space is a reader-supported publication for AI Engineers! We really appreciate you sharing and supporting our work.

We ended up picking 5 “papers” per section for:

  • Section 1: Frontier LLMs

  • Section 2: Benchmarks and Evals

  • Section 3: Prompting, ICL & Chain of Thought

  • Section 4: Retrieval Augmented Generation

  • Section 5: Agents

  • Section 6: Code Generation

  • Section 7: Vision

  • Section 8: Voice

  • Section 9: Image/Video Diffusion

  • Section 10: Finetuning


Section 1: Frontier LLMs

  1. GPT1, GPT2, GPT3, Codex, InstructGPT, GPT4 papers. Self-explanatory. GPT3.5, 4o, o1, and o3 tended to have launch events and system cards2 instead.

  2. Claude 3 and Gemini 1 papers to understand the competition. Latest iterations are Claude 3.5 Sonnet and Gemini 2.0 Flash/Flash Thinking. Also Gemma 2.

  3. LLaMA 1, Llama 2, Llama 3 papers to understand the leading open models. You can also view Mistral 7B, Mixtral and Pixtral as a branch on the Llama family tree.

  4. DeepSeek V1, Coder, MoE, V2, V3 papers. Leading (relatively) open model lab.

  5. Apple Intelligence paper. It’s on every Mac and iPhone.

You can both use and learn a lot from other LLMs; this is a vast topic.

  • In particular, BERTs are underrated as workhorse classification models - see ModernBERT for the state of the art, and ColBERT for applications (a minimal classification sketch follows this list).

  • Honorable mentions of LLMs to know: AI2 (Olmo, Molmo, OlmOE, Tülu 3, Olmo 2), Grok, Amazon Nova, Yi, Reka, Jamba, Cohere, Nemotron, Microsoft Phi, HuggingFace SmolLM - mostly lower in ranking or lack papers.

  • Research to know: If time allows, we recommend the Scaling Laws literature: Kaplan, Chinchilla, Emergence / Mirage, Post-Chinchilla laws.3

  • In 2025, the frontier (o1, o3, R1, QwQ/QVQ, f1) will be very much dominated by reasoning models, which have essentially no direct papers, but the basic knowledge is Let’s Verify Step By Step4, STaR, and Noam Brown’s talks/podcasts. Most practical knowledge is accumulated by outsiders (LS talk) and tweets.
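
As a concrete illustration of the BERT point above, here is a minimal classification sketch using the Hugging Face transformers pipeline; the checkpoint is just one illustrative off-the-shelf model, and for real workloads you would finetune a ModernBERT-class encoder on your own labels.

```python
# Minimal sketch: an encoder ("BERT-style") model as a cheap classifier.
# Assumes `pip install transformers torch`; the checkpoint below is one
# illustrative off-the-shelf sentiment model, not a specific recommendation.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The retrieval latency regression made this release unusable."))
# -> [{'label': 'NEGATIVE', 'score': 0.99...}]
```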

Section 2: Benchmarks and Evals

  1. MMLU paper - the main knowledge benchmark, next to GPQA and BIG-Bench. In 2025 frontier labs use MMLU Pro, GPQA Diamond, and BIG-Bench Hard.

  2. MuSR paper - evaluating long context, next to LongBench, BABILong, and RULER. These address Lost in The Middle and other shortcomings of simple Needle in a Haystack testing.

  3. MATH paper - a compilation of math competition problems. Frontier labs focus on FrontierMath and hard subsets of MATH: MATH level 5, AIME, AMC10/AMC12.

  4. IFEval paper - the leading instruction following eval and only external benchmark adopted by Apple. You could also view MT-Bench as a form of IF.

  5. ARC AGI challenge - a famous abstract reasoning “IQ test” benchmark that has lasted far longer than many quickly saturated benchmarks.

We covered many of these in Benchmarks 101 and Benchmarks 201, while our Carlini, LMArena, and Braintrust episodes covered private, arena, and product evals (read LLM-as-Judge and the Applied LLMs essay). Benchmarks are linked to Datasets.

Section 3: Prompting, ICL & Chain of Thought

Note: The GPT3 paper (“Language Models are Few-Shot Learners”) should already have introduced In-Context Learning (ICL) - a close cousin of prompting. We also consider prompt injections required knowledge — Lilian Weng, Simon W.

  1. The Prompt Report paper - a survey of prompting papers (podcast).

  2. Chain-of-Thought paper - one of multiple claimants to popularizing Chain of Thought, along with Scratchpads and Let’s Think Step By Step (a minimal zero-shot CoT sketch closes this section).

  3. Tree of Thought paper - introducing lookaheads and backtracking (podcast)

  4. Prompt Tuning paper - you may not need prompts at all if you can use Prefix-Tuning, adjust decoding (say, via entropy), or apply representation engineering.

  5. Automatic Prompt Engineering paper - it is increasingly obvious that humans are terrible zero-shot prompters and prompting itself can be enhanced by LLMs. The most notable implementation of this is in the DSPy paper/framework.

Section 3 is one area where reading disparate papers may not be as useful as having more practical guides - we recommend Lilian Weng, Eugene Yan, and Anthropic’s Prompt Engineering Tutorial and AI Engineer Workshop.
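
To make the Chain of Thought idea tangible, here is a minimal zero-shot CoT sketch using the OpenAI Python SDK; the model name is an assumption and any capable chat model will do. The only change between the two calls is the prompt suffix.

```python
# Minimal zero-shot Chain-of-Thought sketch.
# Assumes `pip install openai` and OPENAI_API_KEY is set; model name is an assumption.
from openai import OpenAI

client = OpenAI()
question = "A pack of 12 pens costs $3. How much do 60 pens cost?"

direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question + " Answer with a number only."}],
)

cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question + " Let's think step by step."}],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)  # shows intermediate reasoning before the final answer
```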

Section 4: Retrieval Augmented Generation

  1. Introduction to Information Retrieval - a bit unfair to recommend a book, but we are trying to make the point that RAG is an IR problem and IR has a 60-year history that includes TF-IDF, BM25, FAISS, HNSW and other “boring” techniques (a minimal sketch of this point closes this section).

  2. 2020 Meta RAG paper - which coined the term. The original authors have started Contextual and have coined RAG 2.0. Modern “table stakes” for RAG (HyDE, chunking, rerankers, multimodal data) are better presented elsewhere.

  3. MTEB paper - overfitting on it is so well known that its author considers it dead, but it is still the de facto embedding benchmark. Many embeddings have papers - pick your poison - SentenceTransformers, OpenAI, Nomic Embed, Jina v3, cde-small-v1, ModernBERT Embed - with Matryoshka embeddings increasingly standard.

  4. GraphRAG paper - Microsoft’s take on adding knowledge graphs to RAG, now open sourced. One of the most popular trends in RAG in 2024, alongside ColBERT/ColPali/ColQwen (more in the Vision section).

  5. RAGAS paper - the simple RAG eval recommended by OpenAI. See also Nvidia FACTS framework and Extrinsic Hallucinations in LLMs - Lilian Weng’s survey of causes/evals for hallucinations (see also Jason Wei on recall vs precision).

RAG is the bread and butter of AI Engineering at work in 2024, so there are a LOT of industry resources and practical experience you will be expected to have. LlamaIndex (course) and LangChain (video) have perhaps invested the most in educational resources. You should also be familiar with the perennial RAG vs Long Context debate.
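
To ground the point from the first item that RAG is fundamentally an IR problem, here is a minimal sketch that uses nothing fancier than TF-IDF retrieval plus prompt stuffing; the documents and query are toy examples, and in practice you would swap in BM25, embeddings, chunking, and rerankers.

```python
# Minimal RAG sketch: "boring" TF-IDF retrieval + prompt stuffing.
# Assumes `pip install scikit-learn`; the docs and query are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Enterprise plans include SSO and a 99.9% uptime SLA.",
    "Support is available 24/7 via chat and email.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

query = "How long do customers have to return a product?"
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_k = scores.argsort()[::-1][:2]  # keep the 2 highest-scoring chunks

context = "\n".join(docs[i] for i in top_k)
prompt = (
    "Answer using ONLY the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
print(prompt)  # pass this to any chat model of your choice
```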

Section 5: Agents

  1. SWE-Bench paper (our podcast) - after adoption by Anthropic, Devin and OpenAI, probably the highest profile agent benchmark today (vs WebArena or SWE-Gym). Technically a coding benchmark, but more a test of agents than raw LLMs. See also SWE-Agent, SWE-Bench Multimodal and the Konwinski Prize.

  2. ReAct paper (our podcast) - ReAct started a long line of research on tool-using and function-calling LLMs, including Gorilla and the BFCL Leaderboard. Of historical interest - Toolformer and HuggingGPT. (A minimal ReAct-style loop is sketched at the end of this section.)

  3. MemGPT paper - one of many notable approaches to emulating long running agent memory, adopted by ChatGPT and LangGraph. Versions of these are reinvented in every agent system from MetaGPT to AutoGen to Smallville.

  4. Voyager paper - Nvidia’s take on 3 cognitive architecture components (curriculum, skill library, sandbox) to improve performance. More broadly, the skill library/curriculum can be viewed as a form of Agent Workflow Memory.

  5. Anthropic on Building Effective Agents - just a great state-of-2024 recap that focuses on the importance of chaining, routing, parallelization, orchestration, evaluation, and optimization. See also Lilian Weng’s Agents (ex OpenAI), Shunyu Yao on LLM Agents (now at OpenAI) and Chip Huyen’s Agents.

We covered many of the 2024 SOTA agent designs at NeurIPS, and you can find more readings in the UC Berkeley LLM Agents MOOC. Note that we skipped bikeshedding agent definitions, but if you really need one, you could use mine.
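
To get a feel for what ReAct-style agents actually do, here is a minimal text-based Thought/Action/Observation loop in the spirit of the original paper; the model name, the single calculator tool, and the stopping convention are illustrative assumptions, not a production design.

```python
# Minimal ReAct-style loop sketch: Thought -> Action -> Observation, repeated.
# Assumes `pip install openai` and OPENAI_API_KEY; the model name and the single
# "calculate" tool are illustrative assumptions.
import re
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Answer the question. You may use:\n"
    "Action: calculate[<python arithmetic expression>]\n"
    "After an Action, wait for an Observation. "
    "Finish with: Final Answer: <answer>"
)

def calculate(expr: str) -> str:
    # Toy tool: evaluate arithmetic only; never eval untrusted input in real systems.
    return str(eval(expr, {"__builtins__": {}}))

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "What is 23 * 17 + 4911?"},
]

for _ in range(5):  # cap the number of agent steps
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    if "Final Answer:" in text:
        print(text)
        break
    match = re.search(r"Action: calculate\[(.+?)\]", text)
    if match:
        observation = calculate(match.group(1))
        messages.append({"role": "user", "content": f"Observation: {observation}"})
```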

Section 6: Code Generation

  1. The Stack paper - the original open dataset twin of The Pile focused on code, starting a great lineage of open codegen work from The Stack v2 to StarCoder.

  2. Open Code Model papers - choose from DeepSeek-Coder, Qwen2.5-Coder, or CodeLlama. Many regard 3.5 Sonnet as the best code model but it has no paper.

  3. HumanEval/Codex paper - This is a saturated benchmark, but is required knowledge for the code domain. SWE-Bench is now more famous for coding, but it is expensive and evaluates agents rather than raw models. Modern replacements include Aider, Codeforces, BigCodeBench, LiveCodeBench and SciCode. (The paper’s pass@k estimator is sketched at the end of this section.)

  4. AlphaCodeium paper - Google published AlphaCode and AlphaCode2 which did very well on programming problems, but here is one way Flow Engineering can add a lot more performance to any given base model.

  5. CriticGPT paper - LLMs are known to generate code that can have security issues. OpenAI trained CriticGPT to spot them, and Anthropic uses SAEs to identify LLM features that cause this, but it is a problem you should be aware of.

CodeGen is another field where much of the frontier has moved from research to industry, and practical engineering advice on codegen and code agents like Devin is found in industry blogposts and talks rather than research papers.
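
One detail from the HumanEval/Codex paper worth internalizing is its unbiased pass@k estimator, since naively reporting “best of k samples” overstates model performance; below is the standard numerically stable form from the paper.

```python
# Unbiased pass@k estimator from the HumanEval/Codex paper: given n samples per
# problem of which c pass, estimate P(at least one of k random samples passes).
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 samples with 13 correct: pass@1 = 0.065, pass@100 is much higher
print(pass_at_k(200, 13, 1), pass_at_k(200, 13, 100))
```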

Section 7: Vision

  • Non-LLM Vision work is still important: e.g. the YOLO paper (now up to v11, but mind the lineage), but increasingly transformers like DETRs Beat YOLOs too.

  • CLIP paper - the first successful ViT from Alec Radford. These days, superseded by BLIP/BLIP2 or SigLIP/PaliGemma, but still required to know. (A minimal zero-shot CLIP sketch closes this section.)

  • MMVP benchmark (LS Live) - quantifies important issues with CLIP. Multimodal versions of MMLU (MMMU) and SWE-Bench do exist.

  • Segment Anything Model and SAM 2 paper (our pod) - the very successful image and video segmentation foundation model. Pair with GroundingDINO.

  • Early fusion research: Contra the cheap “late fusion” work like LLaVA (our pod), early fusion covers Meta’s Flamingo, Chameleon, Apple’s AIMv2, Reka Core, et al. In reality there are at least 4 streams of visual LM work.

Much frontier VLM work these days is no longer published (the last we really got was GPT4V system card and derivative papers). We recommend having working experience with vision capabilities of 4o (including finetuning 4o vision), Claude 3.5 Sonnet/Haiku, Gemini 2.0 Flash, and o1. Others: Pixtral, Llama 3.2, Moondream, QVQ.
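
To make the CLIP item above concrete, here is a minimal zero-shot image classification sketch via Hugging Face transformers; the checkpoint, image path, and labels are illustrative.

```python
# Minimal CLIP zero-shot classification sketch.
# Assumes `pip install transformers torch pillow`; checkpoint, image path, and labels are illustrative.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a screenshot of a dashboard"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity -> probabilities

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```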

Section 8: Voice

  • Whisper paper - the successful ASR model from Alec Radford. Whisper v2, v3, distil-whisper, and v3 Turbo are open weights but have no paper. (A minimal transcription sketch appears later in this section.)

  • AudioPaLM paper - our last look at Google’s voice thoughts before PaLM became Gemini. See also: Meta’s Llama 3 explorations into speech.

  • NaturalSpeech paper - one of a few leading TTS approaches. Recently v3.

  • Kyutai Moshi paper - an impressive full-duplex speech-text open weights model with high profile demo. See also Hume OCTAVE.

  • OpenAI Realtime API: The Missing Manual - Again, frontier omnimodel work is not published, but we did our best to document the Realtime API.

We do recommend diversifying from the big labs here for now - try Daily, Livekit, Vapi, Assembly, Deepgram, Fireworks, Cartesia, Elevenlabs etc. See the State of Voice 2024. While NotebookLM’s voice model is not public, we got the deepest description of the modeling process that we know of.
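
If you have never run Whisper locally, transcription is only a few lines; here is a minimal sketch with the open-source openai-whisper package (the model size and audio path are placeholders).

```python
# Minimal local ASR sketch with the open-source `openai-whisper` package.
# Assumes `pip install openai-whisper` and ffmpeg installed; the audio path is a placeholder.
import whisper

model = whisper.load_model("base")        # tiny/base/small/medium/large trade speed vs accuracy
result = model.transcribe("meeting.mp3")  # handles decoding and 30s chunking internally
print(result["text"])
```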

With Gemini 2.0 also being natively voice and vision multimodal, the Voice and Vision modalities are on a clear path to merging in 2025 and beyond.

Section 9: Image/Video Diffusion

  • Latent Diffusion paper - effectively the Stable Diffusion paper. See also SD2, SDXL, SD3 papers. These days the team is working on BFL Flux [schnell|dev|pro]. (A minimal text-to-image sketch closes this section.)

  • DALL-E / DALL-E-2 / DALL-E-3 paper - OpenAI’s image generation.

  • Imagen / Imagen 2 / Imagen 3 paper - Google’s image gen. See also Ideogram.

  • Consistency Models paper - this distillation work with LCMs spawned the quick draw viral moment of Dec 2023. These days, updated with sCMs.

  • Sora blogpost - text to video - no paper of course beyond the DiT paper (same authors), but still the most significant launch of the year, with many open weights competitors like OpenSora. Lilian Weng survey here.

We also highly recommend familiarity with ComfyUI (we were first to interview). Text Diffusion, Music Diffusion, and autoregressive image generation are niche but rising.
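
For hands-on familiarity with the latent diffusion lineage, here is a minimal text-to-image sketch using the diffusers library; the checkpoint is one illustrative Stable Diffusion release, and newer pipelines follow the same pattern.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Assumes `pip install diffusers transformers accelerate torch` and a CUDA GPU;
# the checkpoint is one illustrative Stable Diffusion release.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = pipe("an isometric illustration of a tiny data center, soft lighting").images[0]
image.save("datacenter.png")
```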

Section 10: Finetuning

  • LoRA/QLoRA paper - the de facto way to finetune models cheaply, whether on local models or with 4o (confirmed on pod). FSDP+QLoRA is educational. (A minimal LoRA setup sketch closes this section.)

  • DPO paper - the popular, if slightly inferior, alternative to PPO, now supported by OpenAI as Preference Finetuning.

  • ReFT paper - instead of finetuning a few layers, focus on features instead.

  • Orca 3/AgentInstruct paper - see the Synthetic Data picks at NeurIPS, but this is a great way to get finetuning data.

  • RL/Reasoning Tuning papers - RL Finetuning for o1 is debated, but Let’s Verify Step By Step and Noam Brown’s many public talks give hints for how it works.

We recommend going thru the Unsloth notebooks and HuggingFace’s How to fine-tune open LLMs for more on the full process. This is obviously an endlessly deep rabbit hole that, at the extreme, overlaps with the Research Scientist track.
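
To see why LoRA became the default for cheap finetuning, here is a minimal setup sketch with the Hugging Face peft library showing how little of the model actually trains; the base checkpoint and target modules are illustrative assumptions.

```python
# Minimal LoRA setup sketch with Hugging Face peft: wrap a base model so that
# only small low-rank adapter matrices are trainable.
# Assumes `pip install transformers peft torch`; base model and target modules are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

config = LoraConfig(
    r=16,                                # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], # attention projections, a common choice
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()       # typically well under 1% of total parameters
# ...then train with your usual Trainer / SFT loop, and merge or keep adapters separate.
```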


How To Start

This list will seem intimidating and you will fall off the wagon a few times. Just get back on it. We’ll update with more thru 2025 to keep it current. You can make up your own approach but you can use our How To Read Papers In An Hour as a guide if that helps. Many folks also chimed in with advice here.

  • If you’re looking to go thru this with new friends, reader Krispin has started a discord here: https://app.discuna.com/invite/ai_engineer, and in NYC, Fed of Flow AI has an AI NYC Telegram group reading this. You’re also of course welcome to join the Latent Space discord.

  • Reader Niels has started a notes blog where he is pulling out highlights: https://niels-ole.com/2025/01/05/notes-on-the-2025-ai-engineer-reading-list


Did we miss anything obvious? It’s quite possible. Please comment below and we’ll update with credit to help the community.

Happy reading!

Thanks to Eugene Yan and Vibhu Sapra for great suggestions to this list.


1. As per our comment, not EXACTLY one paper per week, but rather one “paper family” per week.

This may cause uneven workloads, but it also reflects the fact that older papers (GPT1, 2, 3) are less relevant now that 4/4o/o1 exist. Spend proportionately less time on each, and treat them together as roughly one paper’s worth of work, since they have faded into the background knowledge you’ll be expected to have as an industry participant.

2. ...and community prompting guides! See also our post “o1 isn’t a chat model (and that’s the point)” (Ben Hylak and swyx & Alessio, Jan 12).
3. We used to recommend “historical interest” papers like Vicuna and Alpaca, but if we’re being honest they are less and less relevant these days. So this footnote is a compromise.

4. Leading to research like PRIME (explainer).

