- multilingual-multimodal-clip-embed
Notes
Multilingual Multimodal CLIP
From https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1
This is a multilingual version of the OpenAI CLIP-ViT-B32 model. It maps text (in 50+ languages) and images into a common dense vector space, so that images and their matching texts lie close together. The model can be used for image search (users searching through a large collection of images) and for multilingual zero-shot image classification (where image labels are defined as text).
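A minimal sketch of this mapping, following the usage pattern documented on the linked Hugging Face model card (the image file name and captions are placeholders):

```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image

# Image encoder: the original CLIP ViT-B/32 model.
img_model = SentenceTransformer("clip-ViT-B-32")
# Text encoder: the multilingual model distilled into the same vector space.
text_model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# Encode one image and the same caption in several languages.
img_emb = img_model.encode(Image.open("two_dogs_in_snow.jpg"))  # placeholder file
text_emb = text_model.encode([
    "Two dogs in the snow",    # English
    "Zwei Hunde im Schnee",    # German
    "Dos perros en la nieve",  # Spanish
])

# Because the two encoders share one vector space, matching captions
# should score high regardless of language.
print(util.cos_sim(img_emb, text_emb))
```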
This model is used in the Universal-Multilingual workflow. Use that workflow as the app's base workflow to enable vector search across different languages; a local sketch of the idea follows.
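The workflow wires this up server-side. Purely as an illustration of cross-lingual vector search with the same pair of encoders (image paths and the query are placeholders, not part of the workflow's API):

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

img_model = SentenceTransformer("clip-ViT-B-32")
text_model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# Index a small image collection (placeholder paths).
paths = ["cat.jpg", "beach.jpg", "city_at_night.jpg"]
index = img_model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

def search(query: str, k: int = 2):
    """Return the top-k indexed images for a text query in any supported language."""
    q = text_model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q, index, top_k=k)[0]
    return [(paths[h["corpus_id"]], h["score"]) for h in hits]

print(search("une plage au coucher du soleil"))  # French: "a beach at sunset"
```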
- ID: e3289fa66be4419eb2958ba74b6e9fee
- Name: Multilingual Multimodal Clip Embedder
- Model Type ID: Multimodal Embedder
- Description: CLIP-based multilingual multimodal embedding model.
- Last Updated: Oct 25, 2024
- Privacy: PUBLIC
- License