
Commit 7ebd50c

🚀🚀🚀 Transformers.js V3 🚀🚀🚀
2 parents: 880a2cc + 7a58d6e

259 files changed (+31,351 / -11,172 lines)

.github/workflows/documentation.yml

-1
@@ -10,7 +10,6 @@ jobs:
   build:
     uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
     with:
-      repo_owner: xenova
       commit_sha: ${{ github.sha }}
       package: transformers.js
       path_to_docs: transformers.js/docs/source

.github/workflows/pr-documentation.yml

-1
@@ -11,7 +11,6 @@ jobs:
   build:
     uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
     with:
-      repo_owner: xenova
       commit_sha: ${{ github.sha }}
       pr_number: ${{ github.event.number }}
       package: transformers.js

.github/workflows/tests.yml

+8-7
@@ -7,17 +7,20 @@ on:
   pull_request:
     branches:
       - main
-
-env:
-  TESTING_REMOTELY: true
+    types:
+      - opened
+      - reopened
+      - synchronize
+      - ready_for_review

 jobs:
   build:
+    if: github.event.pull_request.draft == false
     runs-on: ubuntu-latest

     strategy:
       matrix:
-        node-version: [18.x, latest, node]
+        node-version: [18, 20, 22]

     steps:
       - uses: actions/checkout@v4
@@ -27,11 +30,9 @@ jobs:
           node-version: ${{ matrix.node-version }}
       - run: npm ci
       - run: npm run build
-      - run: pip install -r tests/requirements.txt

       # Setup the testing environment
-      - run: npm run generate-tests
-      - run: git lfs install && GIT_CLONE_PROTECTION_ACTIVE=false git clone https://huggingface.co/Xenova/t5-small ./models/t5-small
+      - run: git lfs install && GIT_CLONE_PROTECTION_ACTIVE=false git clone https://huggingface.co/hf-internal-testing/tiny-random-T5ForConditionalGeneration ./models/hf-internal-testing/tiny-random-T5ForConditionalGeneration

       # Actually run tests
       - run: npm run test
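
For context, here is a minimal sketch (not code from this commit) of how a test could consume the tiny model that the updated workflow clones into ./models. It assumes a Jest-style runner and that tests resolve models from the local ./models directory via env.localModelPath.

```javascript
// Illustrative sketch only, not part of this commit.
// Assumes a Jest-style runner and local model resolution from the cloned ./models directory.
import { env, pipeline } from "@huggingface/transformers";

env.localModelPath = "./models/"; // where the workflow step above cloned the tiny model
env.allowRemoteModels = false;    // CI should not download models at test time

it("runs the locally cloned tiny T5 model", async () => {
  const generator = await pipeline("text2text-generation", "hf-internal-testing/tiny-random-T5ForConditionalGeneration");
  const output = await generator("translate English to French: Hello.");
  expect(Array.isArray(output)).toBe(true);
}, 60_000);
```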

.prettierignore

+8
@@ -0,0 +1,8 @@
+# Ignore artifacts:
+.github
+dist
+docs
+examples
+scripts
+types
+*.md

.prettierrc

+10
@@ -0,0 +1,10 @@
+{
+  "overrides": [
+    {
+      "files": ["tests/**/*.js"],
+      "options": {
+        "printWidth": 10000000
+      }
+    }
+  ]
+}

README.md

+67-32
Large diffs are not rendered by default.

docs/scripts/build_readme.py

+19-9
@@ -5,19 +5,29 @@
 <p align="center">
     <br/>
     <picture>
-        <source media="(prefers-color-scheme: dark)" srcset="https://github.com/xenova/transformers.js/assets/26504141/bd047e0f-aca9-4ff7-ba07-c7ca55442bc4" width="500" style="max-width: 100%;">
-        <source media="(prefers-color-scheme: light)" srcset="https://github.com/xenova/transformers.js/assets/26504141/84a5dc78-f4ea-43f4-96f2-b8c791f30a8e" width="500" style="max-width: 100%;">
-        <img alt="transformers.js javascript library logo" src="https://github.com/xenova/transformers.js/assets/26504141/84a5dc78-f4ea-43f4-96f2-b8c791f30a8e" width="500" style="max-width: 100%;">
+        <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-dark.svg" width="500" style="max-width: 100%;">
+        <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
+        <img alt="transformers.js javascript library logo" src="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
     </picture>
     <br/>
 </p>

 <p align="center">
-    <a href="https://www.npmjs.com/package/@xenova/transformers"><img alt="NPM" src="https://img.shields.io/npm/v/@xenova/transformers"></a>
-    <a href="https://www.npmjs.com/package/@xenova/transformers"><img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@xenova/transformers"></a>
-    <a href="https://www.jsdelivr.com/package/npm/@xenova/transformers"><img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@xenova/transformers"></a>
-    <a href="https://github.com/xenova/transformers.js/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/xenova/transformers.js?color=blue"></a>
-    <a href="https://huggingface.co/docs/transformers.js/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers.js/index.svg?down_color=red&down_message=offline&up_message=online"></a>
+    <a href="https://www.npmjs.com/package/@huggingface/transformers">
+        <img alt="NPM" src="https://img.shields.io/npm/v/@huggingface/transformers">
+    </a>
+    <a href="https://www.npmjs.com/package/@huggingface/transformers">
+        <img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@huggingface/transformers">
+    </a>
+    <a href="https://www.jsdelivr.com/package/npm/@huggingface/transformers">
+        <img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@huggingface/transformers">
+    </a>
+    <a href="https://github.com/huggingface/transformers.js/blob/main/LICENSE">
+        <img alt="License" src="https://img.shields.io/github/license/huggingface/transformers.js?color=blue">
+    </a>
+    <a href="https://huggingface.co/docs/transformers.js/index">
+        <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers.js/index.svg?down_color=red&down_message=offline&up_message=online">
+    </a>
 </p>

 {intro}
@@ -42,7 +52,7 @@

 Here is the list of all tasks and architectures currently supported by Transformers.js.
 If you don't see your task/model listed here or it is not yet supported, feel free
-to open up a feature request [here](https://github.com/xenova/transformers.js/issues/new/choose).
+to open up a feature request [here](https://github.com/huggingface/transformers.js/issues/new/choose).

 To find compatible models on the Hub, select the "transformers.js" library tag in the filter menu (or visit [this link](https://huggingface.co/models?library=transformers.js)).
 You can refine your search by selecting the task you're interested in (e.g., [text-classification](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js)).

docs/snippets/0_introduction.snippet

+3-3
@@ -3,9 +3,9 @@ State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in

 Transformers.js is designed to be functionally equivalent to Hugging Face's [transformers](https://github.com/huggingface/transformers) python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:
 - 📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
-- 🖼️ **Computer Vision**: image classification, object detection, and segmentation.
-- 🗣️ **Audio**: automatic speech recognition and audio classification.
-- 🐙 **Multimodal**: zero-shot image classification.
+- 🖼️ **Computer Vision**: image classification, object detection, segmentation, and depth estimation.
+- 🗣️ **Audio**: automatic speech recognition, audio classification, and text-to-speech.
+- 🐙 **Multimodal**: embeddings, zero-shot audio classification, zero-shot image classification, and zero-shot object detection.

 Transformers.js uses [ONNX Runtime](https://onnxruntime.ai/) to run models in the browser. The best part about it, is that you can easily [convert](#convert-your-models-to-onnx) your pretrained PyTorch, TensorFlow, or JAX models to ONNX using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).

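
As a hedged illustration of one of the newly listed tasks (not part of the diff; the model id below is a placeholder for any depth-estimation checkpoint tagged for Transformers.js on the Hub):

```javascript
// Placeholder sketch of the newly listed depth-estimation task.
import { pipeline } from '@huggingface/transformers';

// 'your-org/your-depth-model' is a placeholder, not a real checkpoint name.
const depthEstimator = await pipeline('depth-estimation', 'your-org/your-depth-model');
const result = await depthEstimator('https://example.com/photo.jpg');
console.log(result); // the pipeline returns the estimated depth for the input image
```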

docs/snippets/1_quick-tour.snippet

+1-1
@@ -23,7 +23,7 @@ out = pipe('I love transformers!')
 <td>

 ```javascript
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';

 // Allocate a pipeline for sentiment-analysis
 let pipe = await pipeline('sentiment-analysis');
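
For completeness (not shown in this hunk, but implied by the `out = pipe('I love transformers!')` context in the hunk header), the allocated pipeline would then be called along these lines; the score in the comment is illustrative:

```javascript
// Illustrative continuation of the quick-tour snippet above.
let out = await pipe('I love transformers!');
// e.g. [{ label: 'POSITIVE', score: 0.999... }]
console.log(out);
```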

docs/snippets/2_installation.snippet

+3-3
@@ -1,12 +1,12 @@

-To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
+To install via [NPM](https://www.npmjs.com/package/@huggingface/transformers), run:
 ```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
 ```

 Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
 ```html
 <script type="module">
-    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
+    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0';
 </script>
 ```
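
For orientation only (not part of this commit's diff), the CDN import above is typically used directly inside that `<script type="module">` block, for example:

```javascript
// Runs inside the <script type="module"> tag shown in the snippet above.
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0';

// Allocate a pipeline and run it entirely in the browser, no bundler required.
const classifier = await pipeline('sentiment-analysis');
console.log(await classifier('Loading Transformers.js from a CDN works without a bundler.'));
```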

docs/snippets/3_examples.snippet

+12-12
@@ -4,17 +4,17 @@ Want to jump straight in? Get started with one of our sample applications/templa
 |-------------------|----------------------------------|-------------------------------|
 | Whisper Web | Speech recognition w/ Whisper | [code](https://github.com/xenova/whisper-web), [demo](https://huggingface.co/spaces/Xenova/whisper-web) |
 | Doodle Dash | Real-time sketch-recognition game | [blog](https://huggingface.co/blog/ml-web-games), [code](https://github.com/xenova/doodle-dash), [demo](https://huggingface.co/spaces/Xenova/doodle-dash) |
-| Code Playground | In-browser code completion website | [code](https://github.com/xenova/transformers.js/tree/main/examples/code-completion/), [demo](https://huggingface.co/spaces/Xenova/ai-code-playground) |
-| Semantic Image Search (client-side) | Search for images with text | [code](https://github.com/xenova/transformers.js/tree/main/examples/semantic-image-search-client/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search-client) |
-| Semantic Image Search (server-side) | Search for images with text (Supabase) | [code](https://github.com/xenova/transformers.js/tree/main/examples/semantic-image-search/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search) |
-| Vanilla JavaScript | In-browser object detection | [video](https://scrimba.com/scrim/cKm9bDAg), [code](https://github.com/xenova/transformers.js/tree/main/examples/vanilla-js/), [demo](https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector) |
-| React | Multilingual translation website | [code](https://github.com/xenova/transformers.js/tree/main/examples/react-translator/), [demo](https://huggingface.co/spaces/Xenova/react-translator) |
-| Text to speech (client-side) | In-browser speech synthesis | [code](https://github.com/xenova/transformers.js/tree/main/examples/text-to-speech-client/), [demo](https://huggingface.co/spaces/Xenova/text-to-speech-client) |
-| Browser extension | Text classification extension | [code](https://github.com/xenova/transformers.js/tree/main/examples/extension/) |
-| Electron | Text classification application | [code](https://github.com/xenova/transformers.js/tree/main/examples/electron/) |
-| Next.js (client-side) | Sentiment analysis (in-browser inference) | [code](https://github.com/xenova/transformers.js/tree/main/examples/next-client/), [demo](https://huggingface.co/spaces/Xenova/next-example-app) |
-| Next.js (server-side) | Sentiment analysis (Node.js inference) | [code](https://github.com/xenova/transformers.js/tree/main/examples/next-server/), [demo](https://huggingface.co/spaces/Xenova/next-server-example-app) |
-| Node.js | Sentiment analysis API | [code](https://github.com/xenova/transformers.js/tree/main/examples/node/) |
-| Demo site | A collection of demos | [code](https://github.com/xenova/transformers.js/tree/main/examples/demo-site/), [demo](https://xenova.github.io/transformers.js/) |
+| Code Playground | In-browser code completion website | [code](https://github.com/huggingface/transformers.js/tree/main/examples/code-completion/), [demo](https://huggingface.co/spaces/Xenova/ai-code-playground) |
+| Semantic Image Search (client-side) | Search for images with text | [code](https://github.com/huggingface/transformers.js/tree/main/examples/semantic-image-search-client/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search-client) |
+| Semantic Image Search (server-side) | Search for images with text (Supabase) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/semantic-image-search/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search) |
+| Vanilla JavaScript | In-browser object detection | [video](https://scrimba.com/scrim/cKm9bDAg), [code](https://github.com/huggingface/transformers.js/tree/main/examples/vanilla-js/), [demo](https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector) |
+| React | Multilingual translation website | [code](https://github.com/huggingface/transformers.js/tree/main/examples/react-translator/), [demo](https://huggingface.co/spaces/Xenova/react-translator) |
+| Text to speech (client-side) | In-browser speech synthesis | [code](https://github.com/huggingface/transformers.js/tree/main/examples/text-to-speech-client/), [demo](https://huggingface.co/spaces/Xenova/text-to-speech-client) |
+| Browser extension | Text classification extension | [code](https://github.com/huggingface/transformers.js/tree/main/examples/extension/) |
+| Electron | Text classification application | [code](https://github.com/huggingface/transformers.js/tree/main/examples/electron/) |
+| Next.js (client-side) | Sentiment analysis (in-browser inference) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/next-client/), [demo](https://huggingface.co/spaces/Xenova/next-example-app) |
+| Next.js (server-side) | Sentiment analysis (Node.js inference) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/next-server/), [demo](https://huggingface.co/spaces/Xenova/next-server-example-app) |
+| Node.js | Sentiment analysis API | [code](https://github.com/huggingface/transformers.js/tree/main/examples/node/) |
+| Demo site | A collection of demos | [code](https://github.com/huggingface/transformers.js/tree/main/examples/demo-site/), [demo](https://xenova.github.io/transformers.js/) |

 Check out the Transformers.js [template](https://huggingface.co/new-space?template=static-templates%2Ftransformers.js) on Hugging Face to get started in one click!

docs/snippets/4_custom-usage.snippet

+3-4
@@ -1,12 +1,11 @@


-By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2/dist/), which should work out-of-the-box. You can customize this as follows:
-
+By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0/dist/), which should work out-of-the-box. You can customize this as follows:

 ### Settings

 ```javascript
-import { env } from '@xenova/transformers';
+import { env } from '@huggingface/transformers';

 // Specify a custom location for models (defaults to '/models/').
 env.localModelPath = '/path/to/models/';
@@ -22,7 +21,7 @@ For a full list of available settings, check out the [API Reference](./api/env).

 ### Convert your models to ONNX

-We recommend using our [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.
+We recommend using our [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.

 ```bash
 python -m scripts.convert --quantize --model_id <model_name_or_path>
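
As a sketch under assumptions (not content from this commit), a converted model can then be loaded through the `env` settings shown in the Settings hunk; `my-org/my-model` is a placeholder for the folder written by the conversion script:

```javascript
// Sketch only: run a locally converted model instead of a hosted one.
import { env, pipeline } from '@huggingface/transformers';

env.localModelPath = '/path/to/models/'; // directory containing the converted model folder
env.allowRemoteModels = false;           // assumption: disable the Hub fallback so only local files are used

// 'my-org/my-model' is a placeholder for the output of scripts/convert.py.
const classifier = await pipeline('text-classification', 'my-org/my-model');
console.log(await classifier('Transformers.js V3 runs my converted model locally.'));
```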

0 commit comments
