## ❔ About This Course
This course is part of Decoding ML's open-source series, where we provide free hands-on resources for building GenAI and recommender systems.
The **Hands-on H&M Real-Time Personalized Recommender**, built in collaboration with [Hopsworks](https://rebrand.ly/homepage-github), is a 5-module course backed by code, Notebooks, and lessons that teach you how to build an H&M real-time personalized recommender from scratch.
By the end of this course, you will know how to architect, build and deploy a modern recommender.
**What you'll do:**
1. Architect a scalable and modular ML system using the Feature/Training/Inference (FTI) architecture.
2. Engineer features on top of our H&M data for collaborative and content-based filtering recommenders.
3. Use the two-tower network to create user and item embeddings in the same vector space.
4. Implement an H&M real-time personalized recommender using the 4-stage recommender design and a vector database.
5. Apply MLOps best practices, such as a feature store and a model registry.
6. Deploy the online inference pipeline to Kubernetes using KServe.
7. Deploy the offline ML pipelines to GitHub Actions.
8. Implement a web interface using Streamlit.
9. Improve the H&M real-time personalized recommender using LLMs.
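The two-tower retrieval idea from the list above can be sketched in a few lines. This is a toy illustration with random weights and made-up feature sizes, not the course's implementation: each tower maps its own feature space into a shared embedding space, so a user and an item can be compared with a dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; real towers are deeper neural networks.
n_user_features, n_item_features, emb_dim = 8, 12, 4

# Each tower is reduced here to a single linear projection
# into the shared embedding space.
W_user = rng.normal(size=(n_user_features, emb_dim))
W_item = rng.normal(size=(n_item_features, emb_dim))

def user_tower(x):
    e = x @ W_user
    return e / np.linalg.norm(e)  # L2-normalize: dot product = cosine

def item_tower(x):
    e = x @ W_item
    return e / np.linalg.norm(e)

user = rng.normal(size=n_user_features)
items = rng.normal(size=(100, n_item_features))

user_emb = user_tower(user)
item_embs = np.stack([item_tower(i) for i in items])

# Retrieval: score every item against the user in the shared space
# and keep the 5 closest candidates.
scores = item_embs @ user_emb
top_5 = np.argsort(scores)[::-1][:5]
```

In production, the item embeddings are precomputed and indexed in a vector database, so only the user tower runs at request time.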
🥷 With these skills, you'll become a ninja in building real-time personalized recommenders.
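The 4-stage recommender design mentioned above (candidate retrieval, filtering, ranking, ordering) can be sketched as a plain-Python pipeline. Every name and data structure here is a hypothetical stand-in: a real deployment swaps the arrays for a vector database, a feature store, and a trained ranking model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for real infrastructure.
ITEM_EMBS = rng.normal(size=(50, 4))   # precomputed item embeddings
PURCHASED = {"user_1": {3, 7}}         # items the user already bought

def user_embedding(user_id):
    # Would come from the user tower at request time.
    return rng.normal(size=4)

def retrieve(query_emb, limit):
    # Stage 1: candidate generation via nearest-neighbor search.
    scores = ITEM_EMBS @ query_emb
    return list(np.argsort(scores)[::-1][:limit])

def rank_score(user_id, item_id):
    # Stage 3: stand-in for a heavier ranking model.
    return float(ITEM_EMBS[item_id].sum())

def recommend(user_id, k=10):
    candidates = retrieve(user_embedding(user_id), limit=20)   # 1) retrieval
    candidates = [c for c in candidates
                  if c not in PURCHASED.get(user_id, set())]   # 2) filtering
    ranked = sorted(candidates,
                    key=lambda c: rank_score(user_id, c),
                    reverse=True)                              # 3) ranking
    return ranked[:k]                                          # 4) ordering

recs = recommend("user_1", k=5)
```

The point of the staged design is that each stage shrinks the candidate set, so the expensive ranking model only ever sees a few hundred items instead of the full catalog.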
## 🌐 Live Demo

Try out our deployed H&M real-time personalized recommender:

[💻 Live Streamlit Demo](https://decodingml-hands-on-personalized-recommender.streamlit.app/)

> [!IMPORTANT]
> The demo runs in 0-cost mode: when there is no traffic, the deployment scales to 0 instances. The first time you interact with it, give it 1-2 minutes to warm up to 1+ instances. Afterward, everything runs smoothly.
## 👥 Who Should Join?
All tools used throughout the course stick to their free tier, except OpenAI's API:

- Modules 1-4: Completely free
- Module 5 (Optional): ~$1-2 for OpenAI API usage when building LLM-enhanced recommenders
## 🥂 Open-source Course: Participation is Open and Free
Our recommendation for each module:

2. Run the Notebook to replicate our results (locally or on Colab)
3. Following the Notebook, go deeper into the code by reading the `recsys` Python module
| Module | Article | Description | Notebooks |
|--------|-------|-------------|----------------|
| 1 |[Building a TikTok-like recommender](https://decodingml.substack.com/p/33d3273e-b8e3-4d98-b160-c3d239343022)| Learn how to architect a recommender system using the 4-stage architecture and two-tower network. |**No code**|
More on the dataset in the feature engineering pipeline [Notebook](notebooks/1_fp_computing_features.ipynb) and [article](https://decodingml.substack.com/p/feature-pipeline-for-tiktok-like).
## 🚀 Getting Started
For detailed installation and usage instructions, see our [INSTALL_AND_USAGE](https://github.com/decodingml/hands-on-personalized-recommender/blob/main/INSTALL_AND_USAGE.md) guide.