QwQ-32B-AWQ

QwQ is the reasoning model of the Qwen series, designed for enhanced problem-solving and downstream task performance. QwQ-32B competes with top reasoning models like DeepSeek-R1 and o1-mini.


Notes

Model source

This model is served with LMDeploy.

Introduction

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, which yields significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, achieving competitive performance against state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

This repo contains the AWQ-quantized 4-bit QwQ 32B model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 32.5B
  • Number of Parameters (Non-Embedding): 31.0B
  • Number of Layers: 64
  • Number of Attention Heads (GQA): 40 for Q and 8 for KV
  • Context Length: Full 131,072 tokens
  • Quantization: AWQ 4-bit

Note: For the best experience, please review the usage guidelines before deploying QwQ models.
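The specs above imply a sizable KV cache at the full 131,072-token context. A back-of-the-envelope sketch (head_dim = 128 is an assumption taken from the published Qwen2.5-32B config, not stated in this card; the KV cache is typically kept in fp16/bf16 even for an AWQ-quantized model):

```python
# Approximate per-sequence KV-cache size from the architecture specs above.
layers = 64          # Number of Layers
kv_heads = 8         # GQA KV heads
head_dim = 128       # assumption: 5120 hidden / 40 query heads (Qwen2.5-32B config)
context = 131_072    # full context length
bytes_per_elem = 2   # fp16/bf16 cache

# 2x for the separate K and V tensors at every layer.
kv_cache_bytes = layers * 2 * kv_heads * head_dim * context * bytes_per_elem
print(f"{kv_cache_bytes / 2**30:.1f} GiB")  # → 32.0 GiB per sequence
```

This is in addition to the quantized weights themselves, which is why long-context serving of 32B-class models remains GPU-memory-bound even at 4-bit.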

You can try our demo or access QwQ models via QwenChat.

For more details, please refer to our blog, GitHub, and Documentation.

Usage

Set your PAT

Export your PAT as an environment variable. Then, import and initialize the API Client.

Find your PAT in your security settings.

  • Linux/Mac: export CLARIFAI_PAT="your personal access token"

  • Windows (Powershell): $env:CLARIFAI_PAT="your personal access token"
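The Clarifai SDK picks the token up from the `CLARIFAI_PAT` environment variable. As a sanity check before initializing the client, a minimal helper (hypothetical, not part of the SDK) that fails fast if the variable is missing:

```python
import os

def require_pat() -> str:
    """Return the Clarifai PAT from the environment, failing fast if unset."""
    pat = os.environ.get("CLARIFAI_PAT")
    if not pat:
        raise RuntimeError("Set CLARIFAI_PAT before using the Clarifai SDK")
    return pat
```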

Running the API with Clarifai's Python SDK

# Please run `pip install -U clarifai` before running this script

from clarifai.client import Model
from clarifai_grpc.grpc.api.status import status_code_pb2

# The SDK reads your PAT from the CLARIFAI_PAT environment variable.
model = Model(url="https://clarifai.com/qwen/qwenLM/models/QwQ-32B-AWQ")
prompt = "What's the future of AI?"

# generate_by_bytes streams partial responses as they are produced.
results = model.generate_by_bytes(prompt.encode("utf-8"), "text")

for res in results:
  if res.status.code == status_code_pb2.SUCCESS:
    print(res.outputs[0].data.text.raw, end='', flush=True)
  else:
    print(f"\nRequest failed: {res.status.description}")
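QwQ-style reasoning models wrap their chain of thought in `<think>...</think>` tags before the final answer. A small helper to separate the two (a sketch assuming that tag convention; adjust if your serving stack strips the tags):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a QwQ response into (reasoning, answer).

    Assumes the model wraps its chain of thought in <think>...</think>;
    if no tag is present, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer
```

For example, `split_reasoning("<think>steps</think>42")` returns `("steps", "42")`, which is handy when you want to log the reasoning trace but show users only the answer.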

Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.

For requirements on GPU memory and the respective throughput, see results here.

Citation

If you find our work helpful, feel free to cite it.

@misc{qwq32b,
    title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
    url = {https://qwenlm.github.io/blog/qwq-32b/},
    author = {Qwen Team},
    month = {March},
    year = {2025}
}

@article{qwen2.5,
      title={Qwen2.5 Technical Report}, 
      author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
      journal={arXiv preprint arXiv:2412.15115},
      year={2024}
}
  • Model Type ID
    Text To Text
  • Input Type
    text
  • Output Type
    text
  • Last Updated
    Mar 14, 2025
  • Privacy
    PUBLIC