claude-v2

The chat completion model, driven by an LLM, generates contextually relevant and coherent responses.

Input

Prompt:

  • max_tokens
    The maximum number of tokens to generate. Smaller limits yield faster responses.
  • temperature
    A decimal number that controls the degree of randomness in the response.
  • top_k
    Limits the number of candidate words or tokens considered for the next prediction.
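The parameters above can be sketched as a request payload. This is a minimal, hypothetical example: the parameter names follow the descriptions on this page, but the payload shape and the `build_params` helper are assumptions for illustration, not the platform's actual client API.

```python
# Hypothetical sketch: packaging the generation parameters described
# above (max_tokens, temperature, top_k) into a request body.
# The dict layout is an assumption, not a documented API schema.

def build_params(prompt, max_tokens=256, temperature=0.7, top_k=40):
    """Validate and package generation parameters for a request body."""
    if temperature < 0.0:  # temperature is a non-negative decimal
        raise ValueError("temperature must be non-negative")
    if top_k < 1:  # top_k limits candidate tokens, so it must be >= 1
        raise ValueError("top_k must be at least 1")
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,    # cap on generated tokens; smaller = faster
        "temperature": temperature,  # degree of sampling randomness
        "top_k": top_k,              # number of candidate next tokens considered
    }

params = build_params("Hello, Claude", max_tokens=128, temperature=0.5, top_k=40)
```

Lower `temperature` values make output more deterministic, while smaller `top_k` values restrict the model to its most likely next tokens.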

Output

Notes

  • ID
  • Model Type ID
    Text To Text
  • Input Type
    text
  • Output Type
    text
  • Description
    The chat completion model, driven by an LLM, generates contextually relevant and coherent responses.
  • Last Updated
    Oct 17, 2024
  • Privacy
    PUBLIC
  • Use Case
  • Toolkit
  • License
  • Badge
    claude-v2