
Commit 52d6141

added copy-button
1 parent 4e1abe4 commit 52d6141
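The commit message says only "added copy-button", and the diff below contains just the regenerated `public/` output, not the button's source. As a hedged illustration of what such a feature typically looks like on a Hugo site, here is a minimal sketch; the `.copy-btn` class, the label strings, and the `pre > code` markup are assumptions for illustration, not taken from this commit:

```javascript
// Hypothetical copy-button wiring for code blocks (not this commit's actual code).

// Pure helper: pick the button label for a given copy state.
function copyButtonLabel(copied) {
  return copied ? "Copied!" : "Copy";
}

// Browser-only wiring: attach a copy button to every <pre><code> block.
function attachCopyButtons(doc) {
  doc.querySelectorAll("pre > code").forEach((code) => {
    const btn = doc.createElement("button");
    btn.className = "copy-btn"; // assumed class name
    btn.textContent = copyButtonLabel(false);
    btn.addEventListener("click", async () => {
      // Clipboard API needs a secure context (HTTPS or localhost).
      await navigator.clipboard.writeText(code.innerText);
      btn.textContent = copyButtonLabel(true);
      setTimeout(() => { btn.textContent = copyButtonLabel(false); }, 1500);
    });
    code.parentElement.appendChild(btn);
  });
}

// Only touch the DOM when one exists (skipped under Node).
if (typeof document !== "undefined") {
  attachCopyButtons(document);
}
```

On a real site this script would be loaded near the end of `<body>` (or on `DOMContentLoaded`) so the code blocks exist before the buttons are attached.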

42 files changed (+514, -247 lines)


public/categories/index.xml

+1-1
@@ -6,6 +6,6 @@
 <description>Recent content in Categories on Stefano Giannini</description>
 <generator>Hugo -- gohugo.io</generator>
 <language>en</language>
-<lastBuildDate>Sat, 06 Jul 2024 00:00:00 +0100</lastBuildDate><atom:link href="http://localhost:1313/categories/index.xml" rel="self" type="application/rss+xml" />
+<lastBuildDate>Sun, 14 Jul 2024 00:00:00 +0100</lastBuildDate><atom:link href="http://localhost:1313/categories/index.xml" rel="self" type="application/rss+xml" />
 </channel>
 </rss>

public/categories/nlp/index.html

+6-5
@@ -275,14 +275,15 @@
 <div class="card">
 <div class="card-head">
 <a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
-<img class="card-img-top" src='/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram.svg' alt="Hero Image">
+<img class="card-img-top" src='/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram-hd.png' alt="Hero Image">
 </a>
 </div>
 <div class="card-body">
 <a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
 <h5 class="card-title">Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</h5>
-<p class="card-text post-summary">Introduction Retrieval-Augmented Generation (RAG) is an advanced AI technique that enhances large language models (LLMs) with the ability to access and utilize external knowledge. This guide will walk you through a practical implementation of RAG using Python and various libraries, explaining each component in detail.
-Setup and Import %pip install transformers accelerate bitsandbytes flash-attn faiss-cpu llama-index -Uq %pip install llama-index-embeddings-huggingface -q %pip install llama-index-llms-huggingface -q %pip install llama-index-embeddings-instructor llama-index-vector-stores-faiss -q import contextlib import os import torch device = torch.</p>
+<p class="card-text post-summary">Open in:
+1. Introduction Retrieval-Augmented Generation (RAG) is an advanced AI technique that enhances large language models (LLMs) with the ability to access and utilize external knowledge. This guide will walk you through a practical implementation of RAG using Python and various libraries, explaining each component in detail.
+2. Setup and Import %pip install transformers accelerate bitsandbytes flash-attn faiss-cpu llama-index -Uq %pip install llama-index-embeddings-huggingface -q %pip install llama-index-llms-huggingface -q %pip install llama-index-embeddings-instructor llama-index-vector-stores-faiss -q import contextlib import os import torch device = torch.</p>
 </a>
 
 <div class="tags">
@@ -304,8 +305,8 @@ <h5 class="card-title">Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</h5>
 </div>
 <div class="card-footer">
 <span class="float-start">
-Tuesday, June 25, 2024
-| 15 minutes </span>
+Sunday, July 14, 2024
+| 14 minutes </span>
 <a
 href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/"
 class="float-end btn btn-outline-info btn-sm">Read</a>

public/categories/nlp/index.xml

+5-4
@@ -6,14 +6,15 @@
 <description>Recent content in NLP on Stefano Giannini</description>
 <generator>Hugo -- gohugo.io</generator>
 <language>en</language>
-<lastBuildDate>Tue, 25 Jun 2024 00:08:25 +0100</lastBuildDate><atom:link href="http://localhost:1313/categories/nlp/index.xml" rel="self" type="application/rss+xml" /><item>
+<lastBuildDate>Sun, 14 Jul 2024 00:00:00 +0100</lastBuildDate><atom:link href="http://localhost:1313/categories/nlp/index.xml" rel="self" type="application/rss+xml" /><item>
 <title>Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</title>
 <link>http://localhost:1313/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/</link>
-<pubDate>Tue, 25 Jun 2024 00:08:25 +0100</pubDate>
+<pubDate>Sun, 14 Jul 2024 00:00:00 +0100</pubDate>
 
 <guid>http://localhost:1313/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/</guid>
-<description>Introduction Retrieval-Augmented Generation (RAG) is an advanced AI technique that enhances large language models (LLMs) with the ability to access and utilize external knowledge. This guide will walk you through a practical implementation of RAG using Python and various libraries, explaining each component in detail.
-Setup and Import %pip install transformers accelerate bitsandbytes flash-attn faiss-cpu llama-index -Uq %pip install llama-index-embeddings-huggingface -q %pip install llama-index-llms-huggingface -q %pip install llama-index-embeddings-instructor llama-index-vector-stores-faiss -q import contextlib import os import torch device = torch.</description>
+<description>Open in:
+1. Introduction Retrieval-Augmented Generation (RAG) is an advanced AI technique that enhances large language models (LLMs) with the ability to access and utilize external knowledge. This guide will walk you through a practical implementation of RAG using Python and various libraries, explaining each component in detail.
+2. Setup and Import %pip install transformers accelerate bitsandbytes flash-attn faiss-cpu llama-index -Uq %pip install llama-index-embeddings-huggingface -q %pip install llama-index-llms-huggingface -q %pip install llama-index-embeddings-instructor llama-index-vector-stores-faiss -q import contextlib import os import torch device = torch.</description>
 </item>

public/categories/physics/index.html

+2-2
@@ -280,7 +280,7 @@
 </div>
 <div class="card-body">
 <a href="/posts/physics/quantum_computing/teleportation/" class="post-card-link">
-<h5 class="card-title">Quantum Computing - Fundementals - Teleportation</h5>
+<h5 class="card-title">Quantum Computing - Fundamentals - Teleportation</h5>
 <p class="card-text post-summary">Introduction Quantum teleportation is a fundamental protocol in quantum information science that enables the transfer of quantum information from one location to another. Despite its name, it doesn&rsquo;t involve the transportation of matter, but rather the transmission of the quantum state of a particle.
 The Concept In quantum teleportation, we have three main parties:
 Alice: The sender who wants to transmit a quantum state. Bob: The receiver who will receive the quantum state.</p>
@@ -329,7 +329,7 @@ <h5 class="card-title">Quantum Computing - Fundementals - Teleportation</h5>
 </div>
 <div class="card-body">
 <a href="/posts/physics/quantum_computing/introduction/" class="post-card-link">
-<h5 class="card-title">Quantum Computing - Fundementals (Part 1)</h5>
+<h5 class="card-title">Quantum Computing - Fundamentals (Part 1)</h5>
 <p class="card-text post-summary">Introduction to Quantum Computing Quantum computing represents a transformative leap in computational technology. Unlike classical computers, which use bits as the smallest unit of data, quantum computers employ quantum bits, or qubits. These qubits take advantage of the principles of quantum mechanics, allowing for exponentially greater processing power in certain types of computations.
 Core Concepts:
 Superposition: Unlike classical bits that can be either 0 or 1, qubits can exist in a state that is a superposition of both.</p>

public/categories/physics/index.xml

+2-2
@@ -7,7 +7,7 @@
 <generator>Hugo -- gohugo.io</generator>
 <language>en</language>
 <lastBuildDate>Wed, 03 Jul 2024 08:00:00 +0100</lastBuildDate><atom:link href="http://localhost:1313/categories/physics/index.xml" rel="self" type="application/rss+xml" /><item>
-<title>Quantum Computing - Fundementals - Teleportation</title>
+<title>Quantum Computing - Fundamentals - Teleportation</title>
 <link>http://localhost:1313/posts/physics/quantum_computing/teleportation/</link>
 <pubDate>Wed, 03 Jul 2024 08:00:00 +0100</pubDate>
 
@@ -18,7 +18,7 @@ Alice: The sender who wants to transmit a quantum state. Bob: The receiver who w
 </item>
 
 <item>
-<title>Quantum Computing - Fundementals (Part 1)</title>
+<title>Quantum Computing - Fundamentals (Part 1)</title>
 <link>http://localhost:1313/posts/physics/quantum_computing/introduction/</link>
 <pubDate>Sun, 30 Jun 2024 08:00:00 +0100</pubDate>

public/index.json

+1-1
Large diffs are not rendered by default.

public/index.xml

+2-2
@@ -38,7 +38,7 @@ Introduction to Exogenous Variables in Time Series Models Exogenous variables, a
 </item>
 
 <item>
-<title>Quantum Computing - Fundementals - Teleportation</title>
+<title>Quantum Computing - Fundamentals - Teleportation</title>
 <link>http://localhost:1313/posts/physics/quantum_computing/teleportation/</link>
 <pubDate>Wed, 03 Jul 2024 08:00:00 +0100</pubDate>
 
@@ -49,7 +49,7 @@ Alice: The sender who wants to transmit a quantum state. Bob: The receiver who w
 </item>
 
 <item>
-<title>Quantum Computing - Fundementals (Part 1)</title>
+<title>Quantum Computing - Fundamentals (Part 1)</title>
 <link>http://localhost:1313/posts/physics/quantum_computing/introduction/</link>
 <pubDate>Sun, 30 Jun 2024 08:00:00 +0100</pubDate>

public/posts/index.html

+2-2
@@ -621,7 +621,7 @@ <h5 class="card-title">Time Series Analysis and SARIMA Model for Stock Price Pre
 </div>
 <div class="card-body">
 <a href="/posts/physics/quantum_computing/teleportation/" class="post-card-link">
-<h5 class="card-title">Quantum Computing - Fundementals - Teleportation</h5>
+<h5 class="card-title">Quantum Computing - Fundamentals - Teleportation</h5>
 <p class="card-text post-summary">Introduction Quantum teleportation is a fundamental protocol in quantum information science that enables the transfer of quantum information from one location to another. Despite its name, it doesn&rsquo;t involve the transportation of matter, but rather the transmission of the quantum state of a particle.
 The Concept In quantum teleportation, we have three main parties:
 Alice: The sender who wants to transmit a quantum state. Bob: The receiver who will receive the quantum state.</p>
@@ -670,7 +670,7 @@ <h5 class="card-title">Quantum Computing - Fundementals - Teleportation</h5>
 </div>
 <div class="card-body">
 <a href="/posts/physics/quantum_computing/introduction/" class="post-card-link">
-<h5 class="card-title">Quantum Computing - Fundementals (Part 1)</h5>
+<h5 class="card-title">Quantum Computing - Fundamentals (Part 1)</h5>
 <p class="card-text post-summary">Introduction to Quantum Computing Quantum computing represents a transformative leap in computational technology. Unlike classical computers, which use bits as the smallest unit of data, quantum computers employ quantum bits, or qubits. These qubits take advantage of the principles of quantum mechanics, allowing for exponentially greater processing power in certain types of computations.
 Core Concepts:
 Superposition: Unlike classical bits that can be either 0 or 1, qubits can exist in a state that is a superposition of both.</p>

public/posts/index.xml

+2-2
@@ -38,7 +38,7 @@ Introduction to Exogenous Variables in Time Series Models Exogenous variables, a
 </item>
 
 <item>
-<title>Quantum Computing - Fundementals - Teleportation</title>
+<title>Quantum Computing - Fundamentals - Teleportation</title>
 <link>http://localhost:1313/posts/physics/quantum_computing/teleportation/</link>
 <pubDate>Wed, 03 Jul 2024 08:00:00 +0100</pubDate>
 
@@ -49,7 +49,7 @@ Alice: The sender who wants to transmit a quantum state. Bob: The receiver who w
 </item>
 
 <item>
-<title>Quantum Computing - Fundementals (Part 1)</title>
+<title>Quantum Computing - Fundamentals (Part 1)</title>
 <link>http://localhost:1313/posts/physics/quantum_computing/introduction/</link>
 <pubDate>Sun, 30 Jun 2024 08:00:00 +0100</pubDate>

public/posts/machine-learning/deep-learning/index.html

+21-20
@@ -485,16 +485,16 @@
 <div class="post-card">
 <div class="card">
 <div class="card-head">
-<a href="/posts/machine-learning/deep-learning/computer-vision/florence/" class="post-card-link">
-<img class="card-img-top" src='/posts/machine-learning/deep-learning/computer-vision/florence/images/florence-2-lvm-computer-vision-exploration_28_3.png' alt="Hero Image">
+<a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
+<img class="card-img-top" src='/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram-hd.png' alt="Hero Image">
 </a>
 </div>
 <div class="card-body">
-<a href="/posts/machine-learning/deep-learning/computer-vision/florence/" class="post-card-link">
-<h5 class="card-title">Florence-2 - Vision Foundation Model - Examples</h5>
-<p class="card-text post-summary">Install dependencies Type the following command to install possible needed dependencies (especially if the inference is performed on the CPU)
-%pip install einops flash_attn In Kaggle, transformers and torch are already installed. Otherwise you also need to install them on your local PC.
-Import Libraries from transformers import AutoProcessor, AutoModelForCausalLM from PIL import Image import requests import copy import torch %matplotlib inline Import the model We can choose Florence-2-large or Florence-2-large-ft (fine-tuned).</p>
+<a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
+<h5 class="card-title">Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</h5>
+<p class="card-text post-summary">Open in:
+1. Introduction Retrieval-Augmented Generation (RAG) is an advanced AI technique that enhances large language models (LLMs) with the ability to access and utilize external knowledge. This guide will walk you through a practical implementation of RAG using Python and various libraries, explaining each component in detail.
+2. Setup and Import %pip install transformers accelerate bitsandbytes flash-attn faiss-cpu llama-index -Uq %pip install llama-index-embeddings-huggingface -q %pip install llama-index-llms-huggingface -q %pip install llama-index-embeddings-instructor llama-index-vector-stores-faiss -q import contextlib import os import torch device = torch.</p>
 </a>
 
 <div class="tags">
@@ -504,7 +504,7 @@ <h5 class="card-title">Florence-2 - Vision Foundation Model - Examples</h5>
 <li class="rounded"><a href="/tags/deep-learning/" class="btn btn-sm btn-info">Deep Learning</a></li>
 
 
-<li class="rounded"><a href="/tags/computer-vision/" class="btn btn-sm btn-info">Computer Vision</a></li>
+<li class="rounded"><a href="/tags/nlp/" class="btn btn-sm btn-info">NLP</a></li>
 
 
 <li class="rounded"><a href="/tags/machine-learning/" class="btn btn-sm btn-info">Machine Learning</a></li>
@@ -516,10 +516,10 @@ <h5 class="card-title">Florence-2 - Vision Foundation Model - Examples</h5>
 </div>
 <div class="card-footer">
 <span class="float-start">
-Tuesday, June 25, 2024
-| 5 minutes </span>
+Sunday, July 14, 2024
+| 14 minutes </span>
 <a
-href="/posts/machine-learning/deep-learning/computer-vision/florence/"
+href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/"
 class="float-end btn btn-outline-info btn-sm">Read</a>
 </div>
 </div>
@@ -531,15 +531,16 @@ <h5 class="card-title">Florence-2 - Vision Foundation Model - Examples</h5>
 <div class="post-card">
 <div class="card">
 <div class="card-head">
-<a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
-<img class="card-img-top" src='/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram.svg' alt="Hero Image">
+<a href="/posts/machine-learning/deep-learning/computer-vision/florence/" class="post-card-link">
+<img class="card-img-top" src='/posts/machine-learning/deep-learning/computer-vision/florence/images/florence-2-lvm-computer-vision-exploration_28_3.png' alt="Hero Image">
 </a>
 </div>
 <div class="card-body">
-<a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
-<h5 class="card-title">Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</h5>
-<p class="card-text post-summary">Introduction Retrieval-Augmented Generation (RAG) is an advanced AI technique that enhances large language models (LLMs) with the ability to access and utilize external knowledge. This guide will walk you through a practical implementation of RAG using Python and various libraries, explaining each component in detail.
-Setup and Import %pip install transformers accelerate bitsandbytes flash-attn faiss-cpu llama-index -Uq %pip install llama-index-embeddings-huggingface -q %pip install llama-index-llms-huggingface -q %pip install llama-index-embeddings-instructor llama-index-vector-stores-faiss -q import contextlib import os import torch device = torch.</p>
+<a href="/posts/machine-learning/deep-learning/computer-vision/florence/" class="post-card-link">
+<h5 class="card-title">Florence-2 - Vision Foundation Model - Examples</h5>
+<p class="card-text post-summary">Install dependencies Type the following command to install possible needed dependencies (especially if the inference is performed on the CPU)
+%pip install einops flash_attn In Kaggle, transformers and torch are already installed. Otherwise you also need to install them on your local PC.
+Import Libraries from transformers import AutoProcessor, AutoModelForCausalLM from PIL import Image import requests import copy import torch %matplotlib inline Import the model We can choose Florence-2-large or Florence-2-large-ft (fine-tuned).</p>
 </a>
 
 <div class="tags">
@@ -549,7 +550,7 @@ <h5 class="card-title">Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</h5>
 <li class="rounded"><a href="/tags/deep-learning/" class="btn btn-sm btn-info">Deep Learning</a></li>
 
 
-<li class="rounded"><a href="/tags/nlp/" class="btn btn-sm btn-info">NLP</a></li>
+<li class="rounded"><a href="/tags/computer-vision/" class="btn btn-sm btn-info">Computer Vision</a></li>
 
 
 <li class="rounded"><a href="/tags/machine-learning/" class="btn btn-sm btn-info">Machine Learning</a></li>
@@ -562,9 +563,9 @@ <h5 class="card-title">Gemma-2 &#43; RAG &#43; LlamaIndex &#43; VectorDB</h5>
 <div class="card-footer">
 <span class="float-start">
 Tuesday, June 25, 2024
-| 15 minutes </span>
+| 5 minutes </span>
 <a
-href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/"
+href="/posts/machine-learning/deep-learning/computer-vision/florence/"
 class="float-end btn btn-outline-info btn-sm">Read</a>
 </div>
 </div>

0 commit comments