#qwen3

  1. fastembed

    Library for generating vector embeddings and reranking locally

    v5.13.0 229K #onnx #text-embedding #embedding-model #vector-embedding #generator #reranking #vector-search #reranker #hugging-face #qwen3
  2. qts_cli

    Command-line tools for Qwen3 TTS synthesis and WAV output

    v0.1.0 #text-to-speech #model #prompt #wav #qwen3 #tui #ggml #vocoder #audio #onnx
  3. large

    Rust LLM inference implementation

    v0.2.0 #gguf #inference #tokenize #llm-inference #top-p #cache #qwen3 #dot-product #bpe #gpt-2
  4. async-dashscope

    Client for the DashScope API

    v0.12.0 #sdk #websocket #qwen #asr #text-to-speech #support #embedding #qwen3 #deepseek #examples
  5. qts

    Qwen3 TTS inference (GGUF + GGML); Rust API for host apps and gdext

    v0.1.0 #gguf #onnx #text-to-speech #ggml #exported #qwen3 #sample-rate #vocoder #gdext #checkpoint
  6. qwen3-asr-rs

    Pure Rust implementation of Qwen3 ASR (Automatic Speech Recognition) with libtorch and MLX backends

    v0.2.0 #text-to-speech #safetensors #qwen3 #asr #audio #mlx #libtorch #audio-format #audio-transcription #artificial-intelligence
  7. candle-pipelines

    Intuitive pipelines for local LLM inference in Rust, powered by Candle; inspired by Python's Transformers library

    v0.0.7 120 #inference #llama #llm #pipeline #candle #hugging-face #llm-inference #classification #qwen3 #text-output
  8. cuttle

    A large language model inference engine in Rust

    v0.1.1 #inference-engine #language-model #model-inference #tokenize #qwen3 #text-generation #performance-monitoring #benchmark
  9. car-inference

    Local model inference for CAR — Candle backend with Qwen3 models

    v0.1.1 #inference-engine #model-inference #car #local-model #candle #qwen3 #model-schema #cuda #hugging-face #openai