September 25, 2023

GPT-3.5 Turbo Instruct model from OpenAI


OpenAI's GPT-3.5-turbo-instruct is a language model designed to excel in understanding and executing specific instructions efficiently. Unlike GPT-3.5-turbo, which is primarily geared towards engaging in conversations, GPT-3.5-turbo-instruct shines in completing various tasks and answering questions directly.

This instruction language model is built to follow specific instructions efficiently while offering capabilities similar to those of the chat-focused GPT-3.5-turbo. It has the same cost and performance as other GPT-3.5 models, a 4K context window, and training data up to September 2021.
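Because the 4K context window covers both the prompt and the completion, it can be useful to budget tokens before sending a request. Here is a minimal sketch using the rough rule of thumb of about four characters per English token; for exact counts you would use a real tokenizer such as tiktoken:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: on average ~4 characters per token for English text.
    # Use a real tokenizer (e.g. tiktoken) when exact counts matter.
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_completion_tokens: int, context_window: int = 4096) -> bool:
    # The prompt and the requested completion must both fit in the window.
    return estimate_tokens(prompt) + max_completion_tokens <= context_window
```

For example, `fits_context("Summarize this paragraph.", 256)` returns True, while a very long prompt with a large completion budget would not.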

The Purpose of GPT-3.5 Turbo Instruct

The primary goal of GPT-3.5-turbo-instruct is to follow instructions effectively. It is not meant to be a conversational model but a task-oriented one, a distinction that sets it apart from chat models and makes it exceptionally efficient at providing precise responses. Whether you need specific tasks accomplished or have questions that demand direct answers, this model has you covered.

Perhaps the most notable distinction between GPT-3.5-turbo and GPT-3.5-turbo-instruct is their approach to interactions. While GPT-3.5-turbo is conversational and chatty, the Instruct variant is more task-oriented. It excels in following instructions without requiring additional prompts, making it an invaluable tool for specific tasks and queries.
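To make this difference concrete, the sketch below contrasts the shape of the two request payloads: chat models take a list of role-tagged messages, while the instruct model takes a plain prompt string. The parameter values here are illustrative, following OpenAI's API conventions at the time of writing:

```python
# A chat model request wraps the conversation in role-tagged messages.
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
    ],
}

# The instruct model takes the instruction directly as a single prompt string,
# with no conversational scaffolding.
instruct_request = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Summarize the plot of Hamlet in one sentence.",
    "max_tokens": 100,
}
```

The same task is expressed either as a turn in a dialogue or as a direct instruction, which is why prompts often need to be rewritten when switching between the two model families.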

The following image shows an OpenAI-designed differentiation between the Instruct and Chat models. The difference has implications for how prompts need to be written for each model.

You can use Clarifai’s LLM Battleground, a comparison module, to simultaneously compare GPT-3.5-turbo and GPT-3.5-turbo-instruct using different prompts across different tasks.

Running GPT-3.5-Turbo-Instruct model with Python

You can run the GPT-3.5-Turbo-Instruct model using Clarifai's Python client.

Check out the Code Below:

######################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the text
# we want as an input. Change these strings to run your own example.
######################################################################################################

# Your PAT (Personal Access Token) can be found in the portal under Authentication
PAT = ''
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'openai'
APP_ID = 'completion'
# Change these to whatever model and text you want to use
MODEL_ID = 'gpt-3_5-turbo-instruct'
MODEL_VERSION_ID = 'd6185b9b500b4a1d9f9b947472e272f8'
RAW_TEXT = 'I love your product very much'
# To use a hosted text file, assign the url variable
# TEXT_FILE_URL = 'https://samples.clarifai.com/negative_sentence_12.txt'
# Or, to use a local text file, assign the location variable
# TEXT_FILE_LOCATION = 'YOUR_TEXT_FILE_LOCATION_HERE'

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

# To use a local text file, uncomment the following lines
# with open(TEXT_FILE_LOCATION, "rb") as f:
#     file_bytes = f.read()

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created in the overview and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=RAW_TEXT
                        # url=TEXT_FILE_URL
                        # raw=file_bytes
                    )
                )
            )
        ]
    ),
    metadata=metadata
)

if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception(f"Post model outputs failed, status: {post_model_outputs_response.status.description}")

# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0]

print("Completion:\n")
print(output.data.text.raw)

You can also run the GPT-3.5 Turbo Instruct model using Clarifai's other client libraries, including JavaScript, Java, cURL, NodeJS, and PHP.

Model Demo in the Clarifai Platform:

Try out the GPT-3.5 Turbo Instruct model here: https://clarifai.com/openai/completion/models/gpt-3_5-turbo-instruct


Instruct Models and Their Impact

OpenAI introduced Instruct models as a response to challenges seen in earlier models, such as hallucinations and the generation of inaccurate or harmful content. Instruct models were designed to reduce these issues and produce more truthful and safe responses.

Instruct models, including this new release, are a crucial foundation for the breakthroughs seen in ChatGPT. They are trained to follow instructions with human feedback using a powerful technique known as Reinforcement Learning from Human Feedback (RLHF), which relies on human preferences as a reward signal to enhance the performance of language models. This is particularly important because the safety and alignment challenges being addressed are intricate and subjective, and cannot be fully captured by simplistic automatic metrics.
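At the heart of RLHF is a reward model trained on human preference pairs: given two candidate responses, it learns to score the human-preferred one higher, typically via a Bradley-Terry-style pairwise loss. The sketch below is a deliberately tiny illustration of that loss, not OpenAI's actual training code; the scores are toy values:

```python
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    # Bradley-Terry pairwise loss: -log(sigmoid(preferred - rejected)).
    # The loss shrinks as the reward model scores the preferred response
    # further above the rejected one.
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A reward model that ranks the human-preferred answer higher incurs a low loss;
# one that inverts the ranking incurs a high loss.
low_loss = preference_loss(2.0, -1.0)   # preferred response clearly ahead
high_loss = preference_loss(-1.0, 2.0)  # ranking inverted
```

Minimizing this loss over many human-labeled comparisons is what turns raw preference data into a reward signal the language model can then be optimized against.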

Key Features and Advantages

  1. Efficient Instruction Following: The primary strength of GPT-3.5-turbo-instruct lies in its ability to follow instructions with precision and efficiency. Whether you need it to complete a specific task, answer questions, or perform text-based functions, the model excels at promptly executing commands.
  2. Reduced Hallucinations and Toxicity: OpenAI's journey in developing instruct models began with the intent to reduce hallucinations and encourage more truthful and less toxic responses. By aligning the model's behavior with user expectations and instructions, GPT-3.5-turbo-instruct contributes to a safer and more reliable AI interaction.
  3. GPT-3.5-turbo-instruct's Chess Capabilities: In a surprising turn of events, GPT-3.5-turbo-instruct has demonstrated its prowess in the world of chess. It achieved a remarkable Elo rating of around 1800, outperforming Stockfish Level 4 (1700) and putting up a respectable fight against Level 5 (2000). Notably, it played by the rules and even showcased clever strategies, including a cheeky pawn and king checkmate. This is a remarkable departure from earlier beliefs that GPT models couldn't play chess, a misconception that mainly pertained to chat-focused models.
  4. Replacing Older Models: OpenAI's introduction of "gpt-3.5-turbo-instruct" also involves retiring certain older models, including text-ada-001, text-babbage-001, text-curie-001, and the three text-davinci models. These older models will be phased out on January 4, 2024.
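For teams still on the deprecated completion models, the migration is typically a one-line model-id change, since gpt-3.5-turbo-instruct serves the same plain-prompt completions interface. A minimal sketch with illustrative request payloads:

```python
# A request against one of the models being retired on January 4, 2024.
legacy_request = {
    "model": "text-davinci-003",
    "prompt": "Translate 'good morning' to French.",
    "max_tokens": 50,
}

# Migrating is typically just swapping the model id; the prompt-style
# completions interface stays the same.
migrated_request = {**legacy_request, "model": "gpt-3.5-turbo-instruct"}
```

Code that already builds prompt-string requests should continue to work unchanged apart from the model id, though prompts tuned for a specific older model may still benefit from re-testing.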

Keep up to speed with AI

  • Follow us on X to get the latest on LLMs

  • Join us in our Discord to talk LLMs