Getting Started with the OpenAI API (ChatGPT) in Python
Unlocking the Power of the OpenAI API: Models, Chat Completions, Function Calls, Embeddings, Token Management, and Best Practices — A Blend of Official Documentation and My Personal Insights
Models
OpenAI offers a range of models tailored to various tasks, making it essential to choose the right one for your needs. Let’s explore my top choices:
- gpt-3.5-turbo: This is the cream of the crop among the GPT-3.5 models. As the very same model powering the free version of ChatGPT, it’s my go-to for straightforward completion tasks. Its ease of use and impressive performance make it a dependable choice.
- gpt-3.5-turbo-16k: Building upon the strengths of its predecessor, this version offers a 16k-token context window, four times the standard model’s. Whenever I require more extensive context for my prompts, especially in scenarios where historical context matters, this model shines.
- gpt-4: As OpenAI’s latest and most advanced chat completion model, gpt-4 takes center stage when complexity is the name of the game. With its enhanced understanding and generation capabilities, it excels in delivering in-depth and contextually rich responses.
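If you’re unsure which models your API key can actually access, you can list them programmatically. Here’s a minimal sketch using the openai Python package (v0.x, the same client used in the examples below):

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Print the IDs of all models available to your account
for model in openai.Model.list()["data"]:
    print(model["id"])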
A Practical Comparison: gpt-3.5-turbo vs. gpt-4 in Code Refactoring
In this section, we conduct a practical comparison between two OpenAI models, gpt-3.5-turbo and gpt-4, using a simple code refactoring task. Code refactoring is a fundamental aspect of software development, aimed at improving code structure and readability while preserving functionality. Let’s explore how these models perform in this context.
Before we dive into the results, let’s introduce the Python code we used for this refactoring task:
while True:
    mass = int(input("Enter the mass value: "))
    if mass > 0:
        break

while True:
    acceleration = int(input("Enter the acceleration: "))
    if acceleration > 0:
        break

print("The Force is", mass * acceleration)
Our goal was to refactor this code for better clarity and maintainability.
Response from gpt-3.5-turbo
gpt-3.5-turbo suggested the following refactoring:
while True:
    mass = int(input("Enter the mass value: "))
    if mass > 0:
        break

while True:
    acceleration = int(input("Enter the acceleration: "))
    if acceleration > 0:
        break

force = mass * acceleration
print("The Force is", force)
This response offers improved readability by introducing a ‘force’ variable, making the code more self-explanatory.
Response from gpt-4
gpt-4 suggested a more structured approach:
def get_positive_input(prompt):
    while True:
        value = int(input(prompt))
        if value > 0:
            return value

mass = get_positive_input("Enter the mass value: ")
acceleration = get_positive_input("Enter the acceleration: ")
print("The Force is", mass * acceleration)
gpt-4’s response introduces a function for obtaining positive input, enhancing code modularity.
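One caveat applies to both responses: int(input(...)) raises a ValueError on non-numeric input, so neither version survives a typo. A defensive variant of the helper (my own addition, not generated by either model) might look like this:

def get_positive_input(prompt):
    while True:
        try:
            value = int(input(prompt))
        except ValueError:
            # Reject non-numeric input instead of crashing
            print("Please enter a whole number.")
            continue
        if value > 0:
            return value
        print("Please enter a positive number.")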
Comparison
In comparing these responses, gpt-4’s approach stands out for its modularity and structured design, which can be advantageous in larger codebases. However, gpt-3.5-turbo’s response maintains simplicity and readability, which may be preferable for smaller projects or when speed is essential.
Conclusion
The choice between gpt-3.5-turbo and gpt-4 for code refactoring depends on the project’s size, complexity, and specific needs. Understanding their strengths and differences can help developers make informed decisions when it comes to improving their codebase.
Running Prompts and Receiving Completions with Python
In this section, I’ll walk you through the process of running a prompt and receiving completions using Python and the OpenAI API. I’ll use the same code refactoring prompt from the previous section as an example.
Step 1: Setting Up Your API Key
First, make sure you have your OpenAI API key ready. You should store this key securely, such as in an environment variable, before proceeding. If you haven’t obtained your API key yet, visit the OpenAI website to get one.
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
Step 2: Crafting Your Prompt
To run a prompt, create a message that specifies the user’s role and content. In this example, we’re using the “gpt-3.5-turbo” model, and our prompt is the Python code snippet that we want to refactor.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Refactor the following Python code:\n\n'''\nwhile True:\n    mass = int(input(\"Enter the mass value: \"))\n    if mass > 0:\n        break\nwhile True:\n    acceleration = int(input(\"Enter the acceleration: \"))\n    if acceleration > 0:\n        break\nprint(\"The Force is\", mass * acceleration)\n'''"
        }
    ],
    temperature=0,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
Step 3: Handling the Model’s Response
After running the prompt, you’ll receive a response from the model. You can access the generated completion by using:
response["choices"][0]["message"]["content"]
This content will contain the model's refactored version of the Python code.
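For example, to pull out the refactored code and print it to the console:

refactored_code = response["choices"][0]["message"]["content"]
print(refactored_code)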
Step 4: Customizing Parameters
In the code, you can customize various parameters such as temperature
, max_tokens
, top_p
, frequency_penalty
, and presence_penalty
to adjust the model's behavior according to your specific requirements.
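For instance, a higher temperature yields more varied output, which suits creative tasks better than deterministic ones like refactoring. A quick sketch (the prompt here is just an illustrative placeholder):

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Suggest three names for a physics study app."}],
    temperature=0.9,  # higher temperature -> more varied, creative completions
    max_tokens=100,   # cap the length of the generated completion
    top_p=1,          # nucleus sampling; 1 means no truncation of the distribution
)
print(response["choices"][0]["message"]["content"])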
By following these steps, you can effectively run prompts and receive completions using Python and OpenAI’s API. This process is versatile and can be adapted for various text generation tasks beyond code refactoring.
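To make that versatility concrete, here’s a small wrapper I find handy for one-off prompts (a sketch of my own, not part of the official client):

def complete(prompt, model="gpt-3.5-turbo", temperature=0):
    """Send a single user prompt and return the model's reply as a string."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"]

print(complete("Summarize what a Python list comprehension does in one sentence."))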
Function Calls: An Exciting New Feature
The OpenAI API now offers a captivating feature: function calls. This feature allows you to receive function call responses with auto-generated parameters, creating a new dimension of interaction with the model. All you need to do is define your functions and describe them to the model. Here’s how it works:
Step 1: Import Modules and Set Your API Key
Begin by importing the necessary modules and setting your OpenAI API key. If you don’t have an API key, obtain one from OpenAI’s website.
import os
import requests
import json
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
Step 2: Create Your Function
Define a function that you want to use in the interaction with the model. In this example, we’ll create a function called get_currency_exchange for currency conversion.
def get_currency_exchange(amount, currency_from, currency_to):
    # Fetch the latest exchange rates for the source currency
    response = requests.get("https://open.er-api.com/v6/latest/" + currency_from)
    exchange_rate = response.json()["rates"][currency_to]
    return amount * exchange_rate
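You can sanity-check the function on its own before wiring it up to the model (assuming the open.er-api.com endpoint is reachable):

# Convert 100 USD to EUR using live rates
print(get_currency_exchange(100, "USD", "EUR"))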
Step 3: Describe the Function to the Model
You need to describe your function to the model so it understands how to use it. Provide details about the function’s name, description, and expected parameters.
functions = [
    {
        "name": "get_currency_exchange",
        "description": "Convert a monetary amount from one currency to another",
        "parameters": {
            "type": "object",
            "properties": {
                "amount": {
                    "type": "number",
                    "description": "The amount of the currency",
                },
                "currency_from": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
                "currency_to": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
            },
            "required": ["amount", "currency_from", "currency_to"],
        },
    }
]
Step 4: Make a Request for Completions
Now, you’re ready to make a request to receive completions from OpenAI. This request includes a message from the user and the functions you’ve described.
messages = [{"role": "user", "content": "How much is 100 usd in euro?"}]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto"
)
Step 5: Run the Function with Generated Parameters
If the response is a function call, you can run the function with auto-generated parameters. Here’s how to do it:
response_message = response["choices"][0]["message"]

if response_message.get("function_call"):
    # Map the function name returned by the model to the actual Python function
    available_functions = {
        "get_currency_exchange": get_currency_exchange,
    }
    function_name = response_message["function_call"]["name"]
    function_to_call = available_functions[function_name]
    # The model returns its arguments as a JSON-encoded string
    function_args = json.loads(response_message["function_call"]["arguments"])
    function_response = function_to_call(
        amount=function_args.get("amount"),
        currency_from=function_args.get("currency_from"),
        currency_to=function_args.get("currency_to")
    )
    print(function_response)
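In practice, you usually don’t stop at printing the raw number. The standard pattern is to append the function result to the conversation under the "function" role and ask the model for a final, natural-language answer. A sketch of that follow-up step, continuing inside the if block above:

    # Append the assistant's function call and our function's result to the conversation
    messages.append(response_message)
    messages.append({
        "role": "function",
        "name": function_name,
        "content": str(function_response),
    })
    # Ask the model to phrase the result as a natural-language reply
    second_response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
    )
    print(second_response["choices"][0]["message"]["content"])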
Conclusion: Elevating Conversations with Function Calls
Function calls in the OpenAI API bring a game-changing dimension to interaction with AI models. This feature allows you to seamlessly integrate custom functions, enabling dynamic and context-aware conversations. As you define and describe functions, the model can understand and incorporate them, offering endless possibilities for automation, answering questions, and enhancing user experiences. With function calls, the OpenAI API opens doors to innovative solutions across a spectrum of applications, pushing the boundaries of AI-driven interactions.
Exploring the Horizon: What’s Next
In the coming posts, we’ll delve deeper into the vast capabilities of the OpenAI API. We’ll unlock the potential of embeddings, unravel the intricacies of token management for fine-tuned control, and share best practices to harness the full power of these AI models. Stay tuned for insightful discussions and hands-on tutorials as we journey through the realm of AI-driven innovation. Whether you’re a developer, a data scientist, or just curious about the future of AI, there’s something here for everyone. Join us in this exciting exploration of AI’s boundless possibilities.
Stay Connected
If you have questions, ideas, or simply want to chat about AI, feel free to reach out on Twitter. I’m always eager to engage in meaningful conversations with fellow enthusiasts. For more AI insights and articles, don’t forget to follow me on Medium as well. Your support and engagement are greatly appreciated.