
Commit 4a522a3

docs: Update docs
1 parent 3ef455e commit 4a522a3

File tree

3 files changed: +35 −21 lines


README.md (+33 −19)
@@ -15,13 +15,12 @@
 This hands-on course teaches you how to build and deploy a real-time personalized recommender system for H&M fashion articles. You'll learn:

-- A practical 4-stage recommender architecture
-- Two-tower model implementation and training
-- Scalable ML system design principles
-- MLOps best practices
-- Real-time model deployment
-- LLM-enhanced recommendations
-- Building an interactive web interface
+- To architect a modern ML system for real-time personalized recommenders.
+- To engineer features using modern tools such as Polars.
+- To design and train ML models for recommenders powered by neural networks.
+- To apply MLOps best practices by leveraging the [Hopsworks AI Lakehouse](https://rebrand.ly/homepage-github).
+- To deploy the recommender on a Kubernetes cluster managed by [Hopsworks Serverless](https://rebrand.ly/serverless-github) using KServe.
+- To apply LLM techniques for personalized recommendations.

 <p align="center">
   <img src="assets/4_stage_recommender_architecture.png" alt="4_stage_recommender_architecture" width="400" style="display: inline-block; margin-right: 20px;">
@@ -30,8 +29,33 @@ This hands-on course teaches you how to build and deploy a real-time personalize
 ## ❔ About This Course

+This course is part of Decoding ML's open-source series, where we provide free hands-on resources for building GenAI and recommender systems.
+
+The **Hands-on H&M Real-Time Personalized Recommender**, in collaboration with [Hopsworks](https://rebrand.ly/homepage-github), is a 5-module course backed by code, Notebooks and lessons that teach you how to build an H&M real-time personalized recommender from scratch.
+
+By the end of this course, you will know how to architect, build and deploy a modern recommender.
+
+**What you'll do:**
+
+1. Architect a scalable and modular ML system using the Feature/Training/Inference (FTI) architecture.
+2. Engineer features on top of the H&M data for collaborative and content-based filtering recommenders.
+3. Use the two-tower network to create user and item embeddings in the same vector space.
+4. Implement an H&M real-time personalized recommender using the 4-stage recommender design and a vector database.
+5. Apply MLOps best practices, such as a feature store and a model registry.
+6. Deploy the online inference pipeline to Kubernetes using KServe.
+7. Deploy the offline ML pipelines to GitHub Actions.
+8. Implement a web interface using Streamlit.
+9. Improve the H&M real-time personalized recommender using LLMs.
+
+🥷 With these skills, you'll become a ninja in building real-time personalized recommenders.
+
+## 🌐 Live Demo
+
+Try out our deployed H&M real-time personalized recommender:
+[💻 Live Streamlit Demo](https://decodingml-hands-on-personalized-recommender.streamlit.app/)
+
+> [!IMPORTANT]
+> The demo runs in 0-cost mode: when there is no traffic, the deployment scales to 0 instances. The first time you interact with it, give it 1-2 minutes to warm up to 1+ instances. Afterward, everything runs smoothly.

 ## 👥 Who Should Join?
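The two-tower idea referenced in the steps above (one network embeds users, another embeds items, into the same vector space) can be sketched framework-free. This is a toy with untrained random weights and assumed feature widths, not the course's actual model, which lives in the `recsys` module:

```python
import numpy as np

rng = np.random.default_rng(0)

def tower(x, w1, w2):
    """A tiny MLP 'tower': raw features -> L2-normalized embedding."""
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    e = h @ w2
    return e / np.linalg.norm(e, axis=-1, keepdims=True)

emb_dim = 8
# Hypothetical feature widths: 4 user features, 6 item features.
user_w1, user_w2 = rng.normal(size=(4, 16)), rng.normal(size=(16, emb_dim))
item_w1, item_w2 = rng.normal(size=(6, 16)), rng.normal(size=(16, emb_dim))

user_emb = tower(rng.normal(size=(1, 4)), user_w1, user_w2)      # query tower
item_embs = tower(rng.normal(size=(100, 6)), item_w1, item_w2)   # candidate tower

# Because both towers land in the same space, retrieval is a dot product,
# which a vector index can answer as (approximate) nearest-neighbor search.
scores = item_embs @ user_emb.T
top_k = np.argsort(-scores.ravel())[:10]
print(top_k)
```

In training, the two towers are optimized jointly so that embeddings of users and the items they interacted with score highly under this dot product.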
@@ -54,8 +78,8 @@ This hands-on course teaches you how to build and deploy a real-time personalize
 All tools used throughout the course will stick to their free tier, except OpenAI's API, as follows:

-- Lessons 1-4: Completely free
-- Lesson 5 (Optional): ~$1-2 for OpenAI API usage when building LLM-enhanced recommenders
+- Modules 1-4: Completely free
+- Module 5 (Optional): ~$1-2 for OpenAI API usage when building LLM-enhanced recommenders

 ## 🥂 Open-source Course: Participation is Open and Free
@@ -72,8 +96,6 @@ Our recommendation for each module:
 2. Run the Notebook to replicate our results (locally or on Colab)
 3. Following the Notebook, go deeper into the code by reading the `recsys` Python module

-🥷 You will become a ninja in building real-time personalized recommenders by the end.
-
 | Module | Article | Description | Notebooks |
 |--------|-------|-------------|----------------|
 | 1 | [Building a TikTok-like recommender](https://decodingml.substack.com/p/33d3273e-b8e3-4d98-b160-c3d239343022) | Learn how to architect a recommender system using the 4-stage architecture and two-tower network. | **No code** |
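The 4-stage design mentioned in the module above (candidate retrieval, filtering, ranking, ordering) can be condensed into a toy end-to-end sketch. Everything here is a stand-in (random embeddings, a dot-product "ranker", a fabricated purchase history), not the course's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_items, emb_dim = 1000, 8
item_embs = rng.normal(size=(n_items, emb_dim))
item_embs /= np.linalg.norm(item_embs, axis=1, keepdims=True)
user_emb = rng.normal(size=emb_dim)
user_emb /= np.linalg.norm(user_emb)

# Stage 1: candidate retrieval, narrowing ~1M items to a few hundred
# (a vector database does this as approximate nearest-neighbor search).
candidates = np.argsort(-(item_embs @ user_emb))[:100]

# Stage 2: filtering, e.g. dropping already-purchased articles.
already_bought = set(candidates[:5].tolist())  # hypothetical history
candidates = [i for i in candidates if i not in already_bought]

# Stage 3: ranking the survivors with a heavier model over richer
# features (stubbed here as the same dot product).
rank_scores = {i: float(item_embs[i] @ user_emb) for i in candidates}

# Stage 4: ordering / business re-ranking before serving the top results.
recommendations = sorted(candidates, key=lambda i: -rank_scores[i])[:10]
print(recommendations)
```

The point of the staged design is cost: each stage applies a more expensive model to a smaller candidate set, which is what makes real-time serving feasible.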
@@ -120,14 +142,6 @@ It contains:
 More on the dataset in the feature engineering pipeline [Notebook](notebooks/1_fp_computing_features.ipynb) and [article](https://decodingml.substack.com/p/feature-pipeline-for-tiktok-like).

-## 🌐 Live Demo
-
-Try out our deployed H&M real-time personalized recommender:
-[💻 Live Streamlit Demo](https://decodingml-hands-on-personalized-recommender.streamlit.app/)
-
-> [!IMPORTANT]
-> The demo is in 0-cost mode, which means that when there is no traffic, the deployment scales to 0 instances. The first time you interact with it, give it 1-2 minutes to warm up to 1+ instances. Afterward, everything will become smoother.

 ## 🚀 Getting Started

 For detailed installation and usage instructions, see our [INSTALL_AND_USAGE](https://github.com/decodingml/hands-on-personalized-recommender/blob/main/INSTALL_AND_USAGE.md) guide.

notebooks/4_ip_computing_item_embeddings.ipynb (+1 −1)

@@ -73,7 +73,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# 👩🏻‍🔬 Feature pipeline: Computing item embeddings\n",
+"# 👩🏻‍🔬 Offline inference pipeline: Computing item embeddings\n",
 "\n",
 "In this notebook you will compute the candidate embeddings and populate a Hopsworks feature group with a vector index."
 ]

notebooks/5_ip_creating_deployments.ipynb (+1 −1)

@@ -73,7 +73,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Inference pipeline: Deploying and testing the inference pipeline\n",
+"# Online inference pipeline: Deploying and testing the real-time ML services\n",
 "\n",
 "In this notebook, we will dig into the inference pipeline and deploy it to Hopsworks as a real-time service."
 ]

0 commit comments