The Heavybit Library
The Heavybit Library is an extensive catalog of educational content featuring hundreds of hours of expert presentations, insightful podcasts, and articles focused on helping technical founders achieve breakout success.
RAG vs. Fine-Tuning: What Dev Teams Need to Know
RAG vs. Fine-Tuning: Advantages and Disadvantages In the rapidly evolving world of artificial intelligence, the ability of...
LLM Fine-Tuning: A Guide for Engineering Teams in 2025
General-purpose large language models (LLMs) are built for broad artificial intelligence (AI) applications. The most popular...
Data Council 2025: The Foundation Models Track with Dr. Bryan Bischof and Tom Drummond
Heavybit is thrilled to be sponsoring Data Council 2025, and we invite you to join us in Oakland from Apr 22-24 to experience 3...
AI’s Hidden Opportunities: Shawn "swyx" Wang on New Use Cases and Careers
This article covers thoughts from AI engineering expert Shawn "swyx" Wang on new opportunities in AI for use cases and engineers...
Best Practices for Developing Data Pipelines in Regulated Spaces
How to Think About Data Pipelines in Regulated Spaces Tech teams standing up new AI programs, or scaling existing programs, need...
How to Properly Scope and Evolve Data Pipelines
For Data Pipelines, Planning Matters. So Does Evolution. A data pipeline is a set of processes that extracts, transforms, and...
How Local-First Development Is Changing How We Make Software
What Local First Is, and Why It Matters Local-first development is a development ethos that keeps data and code on your device...
MLOps vs. Eng: Misaligned Incentives and Failure to Launch?
Failure to Launch: The Challenges of Getting ML Models into Prod Machine learning is a subset of AI, the practice of using...
Generationship Ep. #39, Simon Willison: I Coined Prompt Injection
In episode 39 of Generationship, Rachel speaks with Simon Willison, founder of Datasette and co-creator of Django. Simon...
The Role of Synthetic Data in AI/ML Programs in Software
Why Synthetic Data Matters for Software Running AI in production requires a great deal of data to feed to models. Reddit is now...
AI Inference: A Guide for Founders and Developers
What Is AI Inference (And Why Should Devs Care?) AI inference is the process of machine learning models processing previously...
Generationship Ep. #2, Putting LLMs to Work with Liz Fong-Jones and Phillip Carter of Honeycomb
In episode 2 of Generationship, Rachel Chalmers speaks with Liz Fong-Jones and Phillip Carter of Honeycomb. Together they explore...