The prompt for our WizardLM is:
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
WizardLM-13B V1.2 is a 13-billion-parameter model served in 4-bit format, with max_new_tokens = 512 as the default. Please use it in accordance with Llama-2's license terms.
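The system prompt quoted above can be assembled into a full generation prompt with a small helper. A minimal sketch follows; the "USER:"/"ASSISTANT:" turn markers are the commonly used Vicuna-style convention for WizardLM V1.x and are an assumption here, so verify them against the model card you deploy from:

```python
# Build a single-turn WizardLM V1.2 prompt from the system message and a query.
# The turn markers ("USER:", "ASSISTANT:") are assumed Vicuna-style formatting,
# not confirmed by this page.
SYSTEM_PROMPT = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_query: str) -> str:
    return f"{SYSTEM_PROMPT} USER: {user_query} ASSISTANT:"

print(build_prompt("What is the capital of France?"))
```

The model then completes the text after the trailing "ASSISTANT:" marker, up to max_new_tokens tokens.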
WizardLM-13B V1.2 is a large language model trained from Llama-2 13B on AI-evolved instructions using the Evol+ approach. This model is designed to follow complex instructions and generate coherent, fluent text in response to various inputs. The model is trained on a large volume of text data and achieves high performance on a variety of natural language processing tasks.
WizardLM-13B V1.2 is a transformer-based language model with 13 billion parameters. It starts from the Llama-2 pre-trained base, which was trained on a large corpus of text data, and is fine-tuned on AI-evolved instructions using the Evol+ approach to generate high-quality responses to complex instructions. The model supports a 4k context window and is licensed under the same terms as Llama-2.
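Since the 4k context window must hold both the prompt and the generated tokens, it can be useful to sanity-check prompt length before calling the model. The sketch below uses a crude 4-characters-per-token heuristic as a stand-in; for exact counts you would use the actual Llama-2 tokenizer:

```python
# Rough check that prompt + max_new_tokens fits within the 4k context window.
# CHARS_PER_TOKEN = 4 is a heuristic assumption, not the real tokenizer ratio.
CONTEXT_WINDOW = 4096
MAX_NEW_TOKENS = 512   # the default mentioned above
CHARS_PER_TOKEN = 4    # crude estimate for English text

def fits_context(prompt: str) -> bool:
    est_prompt_tokens = len(prompt) // CHARS_PER_TOKEN + 1
    return est_prompt_tokens + MAX_NEW_TOKENS <= CONTEXT_WINDOW

print(fits_context("What is the capital of France?"))
```

If the check fails, the usual remedies are truncating older chat turns or lowering max_new_tokens.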
Example interactions:

User: What is the capital of France?
Assistant: The capital of France is Paris.

User: Translate "Bonjour, comment ça va?" to English.
Assistant: Hello, how are you?

User: Write a short story about a boy who discovers a magical world.
Assistant: Once upon a time, there was a boy named Jack who stumbled upon a magical world hidden behind a secret door in his backyard...
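Exchanges like the ones above can be carried across turns by prepending completed USER/ASSISTANT pairs to each new query. A minimal sketch, again assuming the Vicuna-style turn markers rather than an officially documented format:

```python
# Assemble a multi-turn WizardLM prompt: system message, then each completed
# USER/ASSISTANT exchange, then the new query with a trailing "ASSISTANT:"
# for the model to complete. Turn markers are an assumed convention.
SYSTEM_PROMPT = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_chat_prompt(history: list[tuple[str, str]], user_query: str) -> str:
    parts = [SYSTEM_PROMPT]
    for user_turn, assistant_turn in history:
        parts.append(f"USER: {user_turn} ASSISTANT: {assistant_turn}")
    parts.append(f"USER: {user_query} ASSISTANT:")
    return " ".join(parts)

history = [("What is the capital of France?", "The capital of France is Paris.")]
print(build_chat_prompt(history, 'Translate "Bonjour, comment ça va?" to English.'))
```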
WizardLM-13B V1.2 is fine-tuned on a dataset of AI-evolved instructions generated with the Evol+ approach. The dataset is designed to test the ability of language models to follow complex instructions and generate high-quality responses. It contains a variety of instructions, ranging from simple to complex, and covers a wide range of natural language processing tasks.
WizardLM-13B V1.2 is evaluated using both automatic and human evaluation metrics across natural language processing tasks including question answering, language translation, and text generation. The model scores 7.06 on MT-Bench, 89.17% on Alpaca Eval, and 101.4% on WizardLM Eval.
Despite its high performance on various natural language processing tasks, WizardLM-13B V1.2 has some limitations. The model may not perform well on tasks that require domain-specific knowledge or on tasks that
- Model Type ID: Text To Text
- Description: WizardLM models are fine-tuned from the Llama-2 13B LLM using Evol+ methods and deliver strong performance: 7.06 on MT-Bench, 89.17% on Alpaca Eval, and 101.4% on WizardLM Eval.
- Last Updated: Oct 26, 2023
- Use Case