Models

DeepSeek-R1 (Text To Text, 1 star)
DeepSeek-R1 base model.

DeepSeek-R1-Distill-Qwen-32B (Text To Text, 2 stars)
DeepSeek-R1-Distill-Qwen-32B is a 32B-parameter dense model distilled from DeepSeek-R1, based on Qwen-32B.

deepseek-coder-33b-instruct (Text To Text, 3 stars)
DeepSeek-Coder-33B-Instruct is a SOTA 33B-parameter code generation model, fine-tuned on 2 billion tokens of instruction data, offering superior performance in code completion...

DeepSeek-R1-Distill-Qwen-1_5B (Text To Text, 1 star)
DeepSeek-R1-Distill-Qwen-1_5B is a 1.5B-parameter dense model distilled from DeepSeek-R1, based on Qwen-1.5B.

DeepSeek-R1-Distill-Qwen-7B (Text To Text)
DeepSeek-R1-Distill-Qwen-7B is a 7B-parameter dense model distilled from DeepSeek-R1, based on Qwen-7B.

DeepSeek-R1-Distill-Qwen-14B (Text To Text)
DeepSeek-R1-Distill-Qwen-14B is a 14B-parameter dense model distilled from DeepSeek-R1, based on Qwen-14B.

deepseek-V2-Chat (Text To Text, 4 stars)
DeepSeek-V2-Chat: a high-performing, cost-effective 236B-parameter MoE LLM excelling in diverse tasks such as chat, code generation, and math reasoning.
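
None of the entries above include an invocation example. As a minimal sketch, assuming these checkpoints correspond to the official DeepSeek uploads on Hugging Face (e.g. deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) and that you are running locally with the transformers library rather than through this platform's own serving API, a text-to-text model from the list can be queried like this:

```python
# Minimal local-inference sketch for one of the catalogued text-to-text models.
# Assumption: the checkpoint matches the official DeepSeek upload on Hugging
# Face; this platform's own serving API is not shown here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distill in the list
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain model distillation in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the larger entries; only model_id and the required GPU memory change.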