Course Outline#
This is a 6-week live cohort-based course. We will learn the fundamentals of LLMs: tokenization, embeddings, RAG, prompt engineering, attention, transformers, fine-tuning, and AI agents. The sessions have a large hands-on component, and we will dig deep into the mathematical concepts.
Session recordings will be made available.
Apart from the live sessions, every week we will have office hours to clear doubts and have general conversations about AI. These office hours are optional.
Program Outline#
Our first AI App
In our very first week, we will set up the tools needed to build AI-powered apps.
Agenda
Understand the LLM generation process
Build a basic language model from scratch
Use a data structure to optimally compute the generation process
Learn about how LLMs represent words initially - tokenization
Build a tokenizer from scratch
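As a taste of the week's build-from-scratch exercises, here is a toy character-level bigram language model (an illustrative sketch, not the course code): it counts which character follows which, then samples text from those counts, which is the generation process in miniature.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    # Count how often each character follows each other character.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    # Sample each next character in proportion to its observed frequency.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_bigram("the theory of the thing")
print(generate(model, "t", 10))
```

Real LLMs replace these raw counts with a learned neural network and operate on subword tokens rather than characters, but the sample-the-next-token loop is the same.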
RAG-powered App
Remember that LLMs don't have access to your private data, and hence cannot answer questions specific to your business problems. We address this issue with retrieval-augmented generation (RAG).
Agenda
Learn how to connect the powerful LLMs to your private data
Ask questions to the LLMs specific to your business needs
Learn about vector embeddings and vector databases
Build your first RAG powered App
Build a recommender system app
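The core of RAG is simple: embed your documents as vectors, embed the query, and retrieve the most similar document to feed into the LLM's prompt. A minimal sketch of that retrieval step, using toy bag-of-words vectors in place of real learned embeddings (all names here are illustrative):

```python
import math

def embed(text, vocab):
    # Toy "embedding": a bag-of-words count vector over a fixed vocabulary.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy allows returns within 30 days",
    "our office is open monday to friday",
]
vocab = sorted({w for d in docs for w in d.lower().split()})

def retrieve(query, docs, vocab):
    # Return the document whose vector is most similar to the query's.
    q = embed(query, vocab)
    return max(docs, key=lambda d: cosine(q, embed(d, vocab)))

print(retrieve("what is the refund policy", docs, vocab))
```

In the actual app we will use learned vector embeddings and a vector database instead of word counts, but the similarity-search idea is the same.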
Prompt Engineering
Steering the runtime behaviour of LLMs is extremely important for practical use cases. We'll explore techniques to get the most out of these models through effective prompting.
Agenda
Learn about few-shot and zero-shot prompting techniques
Master chain-of-thought style prompting to elicit reasoning in LLMs
Implement system prompts and persona-based instructions
Learn how prompts are evaluated
Learn about temperature, top-p, and top-k parameters
A peek into next week - attention mechanism
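To make temperature and top-k concrete: both reshape the probability distribution the model samples from. Here is a small illustrative sampler over raw scores ("logits"); the numbers and function names are ours, not from any particular library:

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None, seed=0):
    rng = random.Random(seed)
    if top_k is not None:
        # top-k: discard everything below the k-th highest logit.
        cutoff = sorted(logits, reverse=True)[top_k - 1]
        logits = [l if l >= cutoff else float("-inf") for l in logits]
    # Temperature: low values sharpen the distribution, high values flatten it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

logits = [2.0, 1.0, 0.1, -1.0]
print(sample(logits, temperature=0.7, top_k=2))
```

Top-p (nucleus) sampling works similarly but keeps the smallest set of tokens whose cumulative probability exceeds p, rather than a fixed count.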
Transformer from scratch
Now that we know tokenizations, embeddings, and prompt engineering, we arrive at the central part of the LLM - the transformer architecture.
Agenda
Dissect the transformer architecture
Learn about the all-important attention mechanism
Visualize contextual embeddings
Implement GPT architecture from scratch
Understand how LLMs learn probability distributions
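At the heart of the architecture sits scaled dot-product attention: each query scores every key, the scores become weights via softmax, and the output is a weighted average of the values. A single-head, no-batching sketch on tiny 2-D vectors (purely illustrative):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output: weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```

Here the query matches the first key more strongly, so the output is pulled toward the first value vector. In the course we build the full multi-head, matrix version in GPT style.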
Fine-tuning LLMs
Building on our knowledge of the LLM architecture, we learn how to update the weights of an LLM by fine-tuning it on data we are interested in.
Agenda
Understand when and why to fine-tune pre-trained models
Learn about parameter-efficient fine-tuning techniques (PEFT, LoRA)
Learn how finetuning can be done on classification tasks
Update embedding models using fine tuning
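The idea behind LoRA, one of the PEFT techniques we cover, is to freeze the original weight matrix W and learn only a low-rank correction B·A added on top. A toy pure-Python sketch with made-up numbers, just to show the shape of the trick:

```python
def matmul(X, Y):
    # Plain nested-loop matrix multiply for small toy matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d_in, d_out, r = 4, 4, 1  # rank-1 adapter

# Frozen pre-trained weights (identity here, for illustration).
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]

A = [[0.1, 0.2, 0.0, 0.0]]            # r x d_in, trainable
B = [[1.0], [0.0], [0.0], [0.0]]      # d_out x r, trainable

delta = matmul(B, A)                  # low-rank update B @ A
W_eff = [[W[i][j] + delta[i][j] for j in range(d_in)]
         for i in range(d_out)]

print(W_eff[0])
```

Full fine-tuning would train all 16 entries of W; this adapter trains only 8, and at realistic layer sizes the savings are dramatic, which is why LoRA makes fine-tuning feasible on modest hardware.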
Advanced AI Agents
The final week is where we use all our knowledge to build advanced AI Agents that can autonomously solve complex tasks.
Agenda
Understand the agent architecture and reasoning frameworks
Implement tools and function-calling capabilities
Build a multi-agent system with specialized roles
Create an autonomous agent that can plan and execute complex workflows
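The agent loop we build boils down to: the model decides which tool to call and with what arguments, the program executes the tool, and the result goes back to the model. A stub sketch where a fake function stands in for the model call (all names are illustrative, not a real agent framework):

```python
# Registry of tools the agent is allowed to call.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_llm_decide(task):
    # Stand-in for a real model call that returns a tool name and arguments.
    if "sum" in task:
        return "add", (2, 3)
    return "upper", ("hello",)

def run_agent(task):
    tool, args = fake_llm_decide(task)
    result = TOOLS[tool](*args)  # execute the chosen tool
    return f"{tool} -> {result}"

print(run_agent("sum two numbers"))
```

In the course, the decision step is a real LLM call with function-calling, and the loop repeats (plan, act, observe) until the task is done; multi-agent systems run several such loops with specialized roles.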
Prerequisite Knowledge#
The course is accessible to a beginner audience. The only prerequisite is Python coding. If you are comfortable in another programming language, you should be able to follow along. If you have never coded in your life, this course is not for you. No prior knowledge of AI/ML is needed.
Upcoming Cohort - October 2025#
📅 KEY DATES
Cohort Duration: October 14th - November 20th, 2025
Live Sessions: Tuesdays & Thursdays, 9:00 PM - 11:00 PM IST
Office Hours: Saturdays, 2:30 PM - 3:30 PM IST
WHAT YOU’LL GET
- ✓ Hands-on experience building cutting-edge GenAI applications
- ✓ 6 weeks of structured learning with practical projects
- ✓ Direct access to industry professionals
- ✓ Exclusive WhatsApp community with peers and AI/ML experts
- ✓ Access to session recordings with no time limit
If you have any questions, feel free to reach out.