OPENAI - TEXT GENERATION
Simon-Pierre Boucher
2024-09-14
This Python script facilitates interaction with the OpenAI API to generate and format responses using a chat model such as gpt-3.5-turbo. Below is a breakdown of the script's functionality:
Environment Setup:
- The script loads environment variables (such as the OpenAI API key) from a .env file using dotenv.
- os.getenv("OPENAI_API_KEY") retrieves the API key from the environment.
Function generate_openai_text():
- This function sends a request to OpenAI's API to generate a response based on specified parameters:
  - api_key: The OpenAI API key.
  - model: The model to use for generation (e.g., gpt-3.5-turbo).
  - messages: A list representing a chat conversation where each message has a role (e.g., user).
  - Other parameters (temperature, max_tokens, etc.) control the behavior of the model.
- The function sends the API request, checks for errors, and returns the response in JSON format.
Function format_openai_response():
- This function extracts and formats the assistant's message from the API output, displaying only the generated text.
- If the response contains valid data, it formats it for display; otherwise, it returns an error message.
Text Generation and Output:
- The script creates a conversation where the user asks about quantum entanglement and its implications for causality and information transfer.
- The generate_openai_text() function is called with this query, and the output is formatted and printed using format_openai_response().
Switching Models:
- The script demonstrates how to switch between models (from gpt-3.5-turbo to gpt-4-turbo, gpt-4, gpt-4o, and gpt-4o-mini) to generate different responses.
- Each generated response addresses the user's question about quantum entanglement and presents the answer with varying levels of detail depending on the selected model.
In [1]:
import os
import requests
from dotenv import load_dotenv
from IPython.display import display, HTML
import re
# Load environment variables from the .env file
load_dotenv()
# Get the API key from the environment variables
api_key = os.getenv("OPENAI_API_KEY")
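Note that os.getenv returns None when the variable is missing, so a request sent with an unset key only fails later with an opaque 401. A small guard, sketched here as a hypothetical require_api_key helper (not part of the original notebook), surfaces the problem immediately:

```python
import os

def require_api_key(env_var="OPENAI_API_KEY"):
    """Return the API key from the environment, or raise a clear error if unset."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(f"{env_var} not set; add it to your .env file or shell environment.")
    return key
```

Calling require_api_key() in place of the bare os.getenv turns a silent None into an actionable error message.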
In [2]:
def generate_openai_text(api_key, model, messages, temperature=1.0, max_tokens=2000, top_p=1.0,
                         frequency_penalty=0.0, presence_penalty=0.0):
    """
    Generate text using OpenAI's API.

    Parameters:
    - api_key (str): The API key for OpenAI.
    - model (str): The model to use for text generation.
    - messages (list): A list of messages to pass to the API in a conversation format.
    - temperature (float): Controls randomness in the output (0-2).
    - max_tokens (int): The maximum number of tokens to generate in the completion.
    - top_p (float): Controls diversity via nucleus sampling (0-1).
    - frequency_penalty (float): Penalizes repeated tokens (-2.0 to 2.0).
    - presence_penalty (float): Penalizes tokens already present, encouraging new topics (-2.0 to 2.0).

    Returns:
    - response (dict): The API response as a dictionary, or None on error.
    """
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty
    }
    try:
        response = requests.post(url, headers=headers, json=data)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"An error occurred: {e}")
        return None
In [3]:
def format_openai_response(response):
    """
    Formats the response from OpenAI API to display only the assistant's message.

    Parameters:
    - response (dict): The API response as a dictionary.

    Returns:
    - formatted_text (str): A formatted string with Markdown for the assistant's message.
    """
    if response and "choices" in response:
        assistant_message = response["choices"][0]["message"]["content"]
        formatted_text = f"**Assistant:**\n\n{assistant_message}\n"
        return formatted_text
    else:
        return "No valid response received."
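The extraction logic can be checked offline by feeding a hand-built dict that mirrors the Chat Completions response shape; the message content below is invented for illustration, and real responses carry additional fields such as "id" and "usage":

```python
# A hand-built dict mirroring the Chat Completions response shape.
mock_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Entangled particles share a single joint quantum state."}}
    ]
}

# The same extraction format_openai_response performs:
assistant_message = mock_response["choices"][0]["message"]["content"]
formatted = f"**Assistant:**\n\n{assistant_message}\n"
print(formatted)
```

This makes it possible to test the formatting path without spending tokens on a live API call.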
In [4]:
model = "gpt-3.5-turbo"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_openai_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_openai_response(response)
print(formatted_response)
In [5]:
model = "gpt-4-turbo"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_openai_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_openai_response(response)
print(formatted_response)
In [6]:
model = "gpt-4"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_openai_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_openai_response(response)
print(formatted_response)
In [7]:
model = "gpt-4o"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_openai_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_openai_response(response)
print(formatted_response)
In [8]:
model = "gpt-4o-mini"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_openai_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_openai_response(response)
print(formatted_response)
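The five demo cells above repeat the same prompt verbatim; the comparison could be written once as a loop. The compare_models helper below is a sketch (not part of the original notebook) that wraps the notebook's generate_openai_text and format_openai_response; the model names are the ones used above:

```python
# Model names taken from the demo cells in this notebook.
models = ["gpt-3.5-turbo", "gpt-4-turbo", "gpt-4", "gpt-4o", "gpt-4o-mini"]
question = ("Explain the concept of quantum entanglement and how it challenges "
            "classical notions of locality and realism. What are the implications "
            "of entanglement for our understanding of causality and information transfer?")

def compare_models(api_key, models, question):
    """Run the same single-turn prompt through each model and collect the formatted answers.

    Assumes generate_openai_text and format_openai_response from the cells above
    are already defined in the session.
    """
    results = {}
    for model in models:
        messages = [{"role": "user", "content": question}]
        response = generate_openai_text(api_key, model, messages,
                                        temperature=0.7, max_tokens=2000, top_p=0.9)
        results[model] = format_openai_response(response)
    return results
```

A single call such as compare_models(api_key, models, question) then yields a dict keyed by model name, making side-by-side comparison straightforward.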