Releases: xorbitsai/inference
v1.4.1
What's new in 1.4.1 (2025-04-03)
These are the changes in inference v1.4.1.
New features
- FEAT: Support Fin-R1 model by @Jun-Howie in #3116
- FEAT: distributed inference for vLLM by @qinxuye in #3120
- FEAT: Support gptq(int4, int8) and fp8 for Fin-R1 model by @Jun-Howie in #3157
- FEAT: fix the quantization parameter in the vLLM engine not working by @amumu96 in #3159
- FEAT: sglang vision by @Minamiyama in #3150
- FEAT: support max_completion_tokens by @amumu96 in #3168 (usage example after this list)
- FEAT: support DeepSeek-VL2 by @Jun-Howie in #3179
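As an illustration of the new max_completion_tokens support (#3168), the parameter can be passed through the OpenAI-compatible endpoint that Xinference serves. A minimal sketch, assuming a server on localhost:9997 and an already-launched chat model under the placeholder name "qwen2.5-instruct":

```python
# Sketch: pass max_completion_tokens through the OpenAI-compatible API.
# Assumptions: Xinference server at localhost:9997, a chat model already
# launched under the placeholder name "qwen2.5-instruct", auth disabled.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen2.5-instruct",
    messages=[{"role": "user", "content": "Summarize the 1.4.1 release in one line."}],
    max_completion_tokens=128,  # cap on generated tokens (support added in #3168)
)
print(resp.choices[0].message.content)
```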
Enhancements
- ENH: support for qwen2.5-vl-32b by @Minamiyama in #3119
- ENH: sglang supports gptq int8 quantization now by @Minamiyama in #3149
- ENH: Add validation of n_worker by @rexjm in #3166
- ENH: add qwen2.5-vl-32b-awq support, and fix 7b-awq download hub typo by @Minamiyama in #3169
- BLD: use gptqmodel to replace auto-gptq by @qinxuye in #3147
- BLD: resolve docker fail by @amumu96 in #3164
Bug fixes
- BUG: Fix PyTorch TypeError: Make _ModelWrapper Inherit from nn.Module by @JamesFlare1212 in #3131
- BUG: fix llm stream response by @amumu96 in #3115
- BUG: prevent potential stop hang for distributed vllm inference by @qinxuye in #3180
New Contributors
- @JamesFlare1212 made their first contribution in #3131
- @rexjm made their first contribution in #3166
Full Changelog: v1.4.0...v1.4.1
v1.4.0
What's new in 1.4.0 (2025-03-21)
These are the changes in inference v1.4.0.
New features
- FEAT: Support gemma-3 text part by @zky001 in #3077
- FEAT: Gemma-3-it that supports vision by @qinxuye in #3102
- FEAT: add deepseek v3 function calling by @rogercloud in #3103
Enhancements
- ENH: xllamacpp backend raises an exception on failure by @codingl2k1 in #3053
- ENH: [UI] change 'GPU Count' to 'GPU Count per Replica'. by @yiboyasss in #3078
Bug fixes
- BUG: [UI] fix dark mode bugs. by @yiboyasss in #3028
- BUG: fix Internvl2.5-mpo awq, fix model card info typo by @Minamiyama in #3067
- BUG: fix max_tokens for MLX VL models. by @qinxuye in #3072
- BUG: fix vLLM parameter "enable_prefix_caching" by @Gmgge in #3081
- BUG: fix first token error and support deepseek stream api by @amumu96 in #3090
Documentation
- DOC: add auth usage guide for http request by @Minamiyama in #3065
- DOC: add xllamacpp related docs by @qinxuye in #3088
Others
- FIX: [UI] remove the restriction of model_format on n_gpu for llama.cpp by @yiboyasss in #3050
New Contributors
- @Gmgge made their first contribution in #3081
- @zky001 made their first contribution in #3077
- @rogercloud made their first contribution in #3103
Full Changelog: v1.3.1...v1.4.0
v1.3.1.post1
What's new in 1.3.1.post1 (2025-03-11)
These are the changes in inference v1.3.1.post1.
Bug fixes
- BUG: Fix reasoning content parser for qwq-32b by @amumu96 in #3024
- BUG: Failed to download model 'QwQ-32B' (size: 32, format: ggufv2) after multiple retries by @Jun-Howie in #3031
Full Changelog: v1.3.1...v1.3.1.post1
v1.3.1
What's new in 1.3.1 (2025-03-09)
These are the changes in inference v1.3.1.
New features
- FEAT: Support qwen2.5-instruct-1m by @Jun-Howie in #2928
- FEAT: Support moonlight-16b-a3b by @Jun-Howie in #2963
- FEAT: create_embedding add field model_replica by @zhoudelong in #2779
- FEAT: [UI] add the reasoning_content parameter. by @yiboyasss in #2980
- FEAT: Support QwQ-32B by @cyhasuka in #3005
- FEAT: all engines support reasoning_content by @amumu96 in #3013
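With #3013, reasoning-capable models such as QwQ-32B expose their chain of thought in a separate reasoning_content field of the chat response. A rough sketch of reading it from the OpenAI-compatible chat endpoint with plain requests; the server address and model uid are placeholders:

```python
# Sketch: read reasoning_content alongside the normal answer.
# Assumptions: Xinference at localhost:9997, a reasoning model (e.g. QwQ-32B)
# already launched under the placeholder uid "qwq-32b".
import requests

resp = requests.post(
    "http://localhost:9997/v1/chat/completions",
    json={
        "model": "qwq-32b",
        "messages": [{"role": "user", "content": "What is 17 * 23?"}],
    },
    timeout=300,
)
message = resp.json()["choices"][0]["message"]
print("reasoning:", message.get("reasoning_content"))  # field introduced in this release
print("answer:   ", message["content"])
```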
Enhancements
- ENH: InternVL2.5-MPO by @Minamiyama in #2913
- ENH: [UI] add copy button by @Minamiyama in #2920
- ENH: [UI] add model ability filtering feature to the audio model. by @yiboyasss in #2986
- ENH: Support xllamacpp by @codingl2k1 in #2997
- BLD: Install ffmpeg 6 for audio & video models by @phuchoang2603 in #2946
- BLD: fix ffprobe library not imported by @phuchoang2603 in #2971
- BLD: fix docker requirements for sglang by @qinxuye in #3015
- REF: [UI] move featureModels to data.js by @yiboyasss in #3008
Bug fixes
- BUG: fix qwen2.5-vl-7b cannot chat bug by @amumu96 in #2944
- BUG: Fix modelscope model id on Qwen2.5-VL; add support for AWQ quantization format in Qwen2.5-VL by @Jun-Howie in #2943
- BUG: fix error when using Langchain-chatchat because the parameter [max_tokens] passed is None by @William533036 in #2962
- BUG: fix 'no attribute' error in jina-clip-v2 when only text or image is passed in by @Minamiyama in #2974
- BUG: fix compatibility of mlx-lm v0.21.5 by @qinxuye in #2993
- BUG: Fix tokenizer error in create_embedding by @shuaiqidezhong in #2992
- BUG: fix wrong kwargs passed to the encode method when using jina-clip-v2 by @Minamiyama in #2991
- BUG: [UI] fix the white screen bug. by @yiboyasss in #3014
New Contributors
- @phuchoang2603 made their first contribution in #2946
- @William533036 made their first contribution in #2962
- @zhoudelong made their first contribution in #2779
Full Changelog: v1.3.0.post2...v1.3.1
v1.3.0.post2
What's new in 1.3.0.post2 (2025-02-22)
These are the changes in inference v1.3.0.post2.
Full Changelog: v1.3.0.post1...v1.3.0.post2
v1.3.0.post1
What's new in 1.3.0.post1 (2025-02-21)
These are the changes in inference v1.3.0.post1.
New features
- FEAT: Support qwen-2.5-instruct-1m by @Jun-Howie in #2841
- FEAT: support deepseek-v3 and deepseek-r1 by @qinxuye in #2864
- FEAT: [UI] additional parameter tip function. by @yiboyasss in #2876
- FEAT: [UI] add featured models filtering function. by @yiboyasss in #2871
- FEAT: [UI] support form parameters and command line conversion. by @yiboyasss in #2850
- FEAT: support distributed inference for sglang by @qinxuye in #2877
- FEAT: [UI] add n_worker parameter for model launch. by @yiboyasss in #2889
- FEAT: InternVL 2.5 by @Minamiyama in #2776
- FEAT: support vllm reasoning content by @amumu96 in #2905
Enhancements
- ENH: add GPU utilization info by @amumu96 in #2852
- ENH: Update Kokoro model by @codingl2k1 in #2843
- ENH: cmdline supports --n-worker, add --model-path and make it compatible with --model_path by @qinxuye in #2890 (example after this list)
- BLD: update sglang to v0.4.2.post4 and vllm to v0.7.2 by @qinxuye in #2838
- BLD: fix flashinfer installation in dockerfile by @qinxuye in #2844
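The distributed-inference work above (sglang support in #2877, the new --n-worker command-line option in #2890) can also be driven from the Python client. A sketch under the assumption that the RESTful client forwards n_worker the same way the CLI flag does; the model name and size are placeholders:

```python
# Sketch: launch a model across multiple workers from Python.
# Assumptions: supervisor at localhost:9997; launch_model forwards n_worker
# just like the new `xinference launch --n-worker` flag; the model name and
# size below are placeholders.
from xinference.client import Client

client = Client("http://localhost:9997")
model_uid = client.launch_model(
    model_name="deepseek-r1",       # placeholder model
    model_engine="sglang",          # distributed inference for sglang landed in #2877
    model_size_in_billions=671,
    n_worker=2,                     # assumption: accepted as a launch kwarg
)
print("launched:", model_uid)
```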
Bug fixes
- BUG: Fix whisper CI by @codingl2k1 in #2822
- BUG: fix FLUX when an incompatible scheduler is specified by @shuaiqidezhong in #2897
- BUG: [UI] fix the bug of missing hint during model running. by @yiboyasss in #2904
- BUG: Clear dependency by @codingl2k1 in #2910
Tests
- TST: Pin CI transformers<4.49 by @codingl2k1 in #2883
- TST: fix lint error by @amumu96 in #2911
Others
- CHORE: Xavier now supports vLLM >= 0.7.0, drops support for older versions by @ChengjieLi28 in #2886
New Contributors
- @shuaiqidezhong made their first contribution in #2897
Full Changelog: v1.2.2...v1.3.0.post1
v1.2.2
What's new in 1.2.2 (2025-02-08)
These are the changes in inference v1.2.2.
New features
- FEAT: support qwen2.5-vl-instruct by @qinxuye in #2788
- FEAT: Support internlm3 by @Jun-Howie in #2789
- FEAT: support deepseek-r1-distill-llama by @qinxuye in #2811
- FEAT: Support Kokoro-82M by @codingl2k1 in #2790
- FEAT: vllm support for qwen2.5-vl-instruct by @qinxuye in #2821
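As an illustration of the new qwen2.5-vl-instruct support (#2788, vLLM backend in #2821), a vision request can be sent in the usual OpenAI multimodal message format. The endpoint, model name, and image URL below are placeholders and assume the model is already launched:

```python
# Sketch: vision chat with qwen2.5-vl-instruct via the OpenAI-compatible API.
# Assumptions: server at localhost:9997, model launched under the placeholder
# name "qwen2.5-vl-instruct"; the image URL is illustrative only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen2.5-vl-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```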
Bug fixes
- BUG: fix llama-cpp when some quantizations have multiple parts by @qinxuye in #2786
- BUG: Use Cache class instead of raw tuple for transformers continuous batching, compatible with latest transformers by @ChengjieLi28 in #2820
Documentation
- DOC: Update multimodal doc by @codingl2k1 in #2785
- DOC: update model docs by @qinxuye in #2792
- DOC: fix docs by @qinxuye in #2793
- DOC: Fix a couple of typos by @Paleski in #2817
Full Changelog: v1.2.1...v1.2.2
v1.2.1
What's new in 1.2.1 (2025-01-24)
These are the changes in inference v1.2.1.
New features
- FEAT: Support MeloTTS by @codingl2k1 in #2760
- FEAT: support deepseek-r1-distill-qwen by @qinxuye in #2781
Enhancements
- ENH: add model config for Whisper by @fonsc in #2755
- ENH: support cline style messages for all backend engines by @liunux4odoo in #2763
- ENH: CosyVoice2 support SFT speakers by @codingl2k1 in #2770
- ENH: Some improvements for Xavier by @ChengjieLi28 in #2777
Bug fixes
- BUG: Compat with openai extra body by @codingl2k1 in #2759
Documentation
- DOC: update new models in README and doc by @qinxuye in #2761
- DOC: using discord instead of slack & updating model to qwen2.5 in getting started doc by @qinxuye in #2775
Others
- FIX: [UI] normalize language input to ensure consistent array format. by @yiboyasss in #2771
Full Changelog: v1.2.0...v1.2.1
v1.2.0
What's new in 1.2.0 (2025-01-10)
These are the changes in inference v1.2.0.
New features
- FEAT: support HunyuanVideo by @qinxuye in #2721
- FEAT: support hunyuan-dit text2image by @qinxuye in #2727
- FEAT: support cline for vllm engine by @hwzhuhao in #2734
- FEAT: [UI] theme switch by @Minamiyama in #1335
- FEAT: support qwen2vl running on Ascend NPU by @Xu-pixel in #2741
- FEAT: [UI] Add language toggle for i18n support. by @yiboyasss in #2744
- FEAT: Support cogagent-9b by @amumu96 in #2740
- FEAT: Xavier: Share KV cache between VLLM replicas by @ChengjieLi28 in #2732
- FEAT: [UI] Add gguf_quantization, gguf_model_path, and cpu_offload for image models. by @yiboyasss in #2753
- FEAT: Support Marco-o1 by @Jun-Howie in #2749
Enhancements
- ENH: [UI] Update Button Style and Interaction Logic for Editing Cache in Model Card. by @yiboyasss in #2746
- ENH: Improve error message by @codingl2k1 in #2738
Bug fixes
- BUG: adapt mlx-vlm v0.1.7 by @qinxuye in #2724
- BUG: pin mlx<0.22.0 to prevent qwen2_vl failing in mlx-vlm by @qinxuye in #2752
Others
- FIX: [UI] Resolve bug preventing '/' input in model_path. by @yiboyasss in #2747
- FIX: [UI] Fix dark mode background bug. by @yiboyasss in #2748
- CHORE: Update new models in readme by @codingl2k1 in #2713
Full Changelog: v1.1.1...v1.2.0
v1.1.1
What's new in 1.1.1 (2024-12-27)
These are the changes in inference v1.1.1.
New features
- FEAT: support F5-TTS-MLX by @qinxuye in #2671
- FEAT: Support qwen2.5-coder-instruct model for tool calls by @Timmy-web in #2681
- FEAT: Support minicpm-4B on vllm by @Jun-Howie in #2697
- FEAT: support scheduling-policy for vllm by @hwzhuhao in #2700
- FEAT: Support QvQ-72B-Preview by @Jun-Howie in #2712
- FEAT: support SD3.5 series model by @qinxuye in #2706
Enhancements
- ENH: Guided Decoding OpenAIClient compatibility by @wxiwnd in #2673
- ENH: resample f5-tts-mlx ref audio when the sample rate does not match by @qinxuye in #2678
- ENH: support requests without images for MLX VLM by @qinxuye in #2670
- ENH: Update fish speech 1.5 by @codingl2k1 in #2672
- ENH: Update cosyvoice 2 by @codingl2k1 in #2684
- REF: Reduce code redundancy by setting default values by @pengjunfeng11 in #2711
Bug fixes
- BUG: Fix f5tts audio ref by @codingl2k1 in #2680
- BUG: glm4-chat cannot apply for continuous batching with transformers backend by @ChengjieLi28 in #2695
New Contributors
- @Timmy-web made their first contribution in #2681
Full Changelog: v1.1.0...v1.1.1