Generative AI with Large Language Models (Coursera Course Notes)
$5+
Abhinav Kimothi
Generative AI with Large Language Models
The arrival of the Transformer architecture in 2017, with the publication of the "Attention is All You Need" paper, revolutionised generative AI.
From pre-training to deployment, the lifecycle of any generative AI project can be divided into five distinct stages.
These are the course notes from the "Generative AI with LLMs" course on Coursera (by AWS and DeepLearning.AI), which covers the different stages of the generative AI project lifecycle in detail.
📚 What's Inside? 📚
🚀 PART 1: LLM Pre-Training
- What is an LLM?
- Use cases and applications of LLMs
- What are Transformers? How was text generation done before Transformers?
- How does a Transformer generate Text?
- What is a Prompt?
- Generative AI Project Lifecycle
- How do you pre-train Large Language Models?
- Challenges with pre-training LLMs
- What is the optimal configuration for pre-training LLMs?
- When is pre-training useful?
💡 PART 2: LLM Fine-Tuning
- What is Instruction Fine-Tuning?
- What is Catastrophic Forgetting?
- How to Evaluate a Fine-Tuned Model?
- What is Parameter-Efficient Fine-Tuning?
🌐 PART 3: RLHF & Application
- Aligning with Human Values
- How does RLHF work?
- How to avoid Reward Hacking?
- Scaling Human Feedback: Self-Supervision with Constitutional AI
- How to optimise and deploy LLMs for inference?
- Using LLMs in Applications
- LLM Application Architecture
- Responsible AI
- Generative AI Project Lifecycle Cheatsheet
The course and these notes are a great starting point for anyone looking to learn more about Large Language Models.
Connect on LinkedIn for feedback and discussion
A PDF file of detailed course notes.
Parts: 3
Size: 4.86 MB
Length: 36 pages
Ratings: 24
5 stars: 96%
4 stars: 4%
3 stars: 0%
2 stars: 0%
1 star: 0%