Text Generation with Various AI Models
Simon-Pierre Boucher
2024-09-14
This Python script generates text using multiple language models from OpenAI, Anthropic, and Mistral. Here's a breakdown of its components:
1. Environment Setup:
load_dotenv(): Loads environment variables from a .env file, which securely stores sensitive data such as the API keys for OpenAI, Anthropic, and Mistral.
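As a minimal sketch of this setup (the require_env helper below is hypothetical, not part of the script), the keys loaded by load_dotenv() can be validated before any API call is made:

```python
import os

def require_env(name):
    """Return an environment variable's value, failing fast when it is missing."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"Missing environment variable: {name}")
    return value

# After load_dotenv() has populated os.environ, each key can be checked up front:
# openai_key = require_env("OPENAI_API_KEY")
```

Failing fast here gives a clear error instead of an opaque 401 response later.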
2. Text Generation Functions:
openai_generate_text():
- Sends a prompt to OpenAI's chat models (e.g., gpt-4) via an HTTP POST to the API, passing the prompt along with parameters such as temperature, max tokens, and stop sequences.
- Returns the generated response, or an error message if the request fails.
anthropic_generate_text():
- Similar to the OpenAI function, sends a prompt to Anthropic's models (e.g., claude-3-5-sonnet) over HTTP POST.
- Returns the generated text from the Anthropic API, or an error message on failure.
run_mistral():
- Sends a prompt to Mistral's API (e.g., mistral-medium-latest) and retrieves the generated response.
- Includes configurable parameters such as temperature and max tokens, and processes the response returned by the Mistral API.
mistral_generate_text():
- Builds the prompt specifically for Mistral and calls run_mistral() to generate a response.
3. Aggregated Text Generation:
generate_text_with_all_models():
- Takes a prompt and loops through lists of OpenAI, Anthropic, and Mistral models, calling the respective API function for each one.
- Stores the generated results in a dictionary keyed by model name.
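The aggregation pattern above can be sketched without any network calls; the stub generators below are illustrative stand-ins for the real API functions:

```python
def aggregate_responses(prompt, providers):
    """Collect one response per model, keyed by 'provider_model' as in the script."""
    results = {}
    for provider, (generate, models) in providers.items():
        for model in models:
            results[f"{provider}_{model}"] = generate(prompt, model)
    return results

# Stub generators stand in for the real API calls
stub = lambda prompt, model: f"[{model}] response"
out = aggregate_responses("hello", {
    "openai": (stub, ["gpt-4"]),
    "mistral": (stub, ["open-mistral-7b"]),
})
```

Keying by provider and model name keeps the outputs unambiguous even when two providers offer similarly named models.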
4. Main Program:
API Keys and Prompt Setup:
- API keys for OpenAI, Anthropic, and Mistral are loaded from environment variables.
- The user prompt in this case is to "Create a simple, healthy recipe using chicken breast, spinach, and quinoa."
Model Lists:
- Lists of models are defined for OpenAI (gpt-3.5-turbo, gpt-4, and others), Anthropic (claude-3-5-sonnet and others), and Mistral (open-mistral-7b, mistral-medium-latest, and others).
Generating Text:
- generate_text_with_all_models() is called to generate responses from all models for the given prompt.
Results Output:
- The results are printed for each model, showing the model name, word count, and the generated text.
Purpose:
This script is used to compare outputs from different AI models for a single prompt, allowing the user to evaluate how different models from OpenAI, Anthropic, and Mistral respond to the same input. It helps in cross-model evaluation and benchmarking.
In [1]:
import os
from dotenv import load_dotenv
import requests
import json
# Load environment variables
load_dotenv()
Out[1]:
In [2]:
# Generate text with OpenAI
def openai_generate_text(api_key, prompt, model, temperature=0.7, max_tokens=1024, stop=None):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt}
        ],
        "temperature": temperature,
        "max_tokens": max_tokens
    }
    if stop:
        data["stop"] = stop
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        generated_text = response_json["choices"][0]["message"]["content"].strip()
        return generated_text
    else:
        return f"Error {response.status_code}: {response.text}"
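The extraction step in openai_generate_text() can be isolated and checked against an illustrative payload containing only the fields the function actually reads (the sample content is made up):

```python
# Illustrative payload mirroring the shape of a chat-completions response
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "  Hello!  "}}
    ]
}
text = sample["choices"][0]["message"]["content"].strip()
```

The .strip() removes the leading/trailing whitespace that models sometimes emit around their reply.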
In [3]:
# Generate text with Anthropic
def anthropic_generate_text(api_key, prompt, model="claude-3-5-sonnet-20240620", max_tokens=1024, temperature=0.7):
    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json"
    }
    data = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [
            {"role": "user", "content": prompt}
        ]
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        generated_text = response_json["content"][0]["text"].strip()
        return generated_text
    else:
        return f"Error {response.status_code}: {response.text}"
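Note that Anthropic's response shape differs from OpenAI's: the text lives under content[0]["text"] rather than choices[0]["message"]["content"]. A sketch with an illustrative payload (sample text made up):

```python
# Illustrative payload mirroring the shape of an Anthropic messages response
sample = {
    "content": [
        {"type": "text", "text": "  Hi there.  "}
    ]
}
text = sample["content"][0]["text"].strip()
```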
In [4]:
# Generate text with Mistral
def run_mistral(api_key, user_message, model="mistral-medium-latest"):
    url = "https://api.mistral.ai/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message}
        ],
        "temperature": 0.7,
        "top_p": 1.0,
        "max_tokens": 1024,
        "stream": False,
        "safe_prompt": False,
        "random_seed": 1337
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        return response_json["choices"][0]["message"]["content"].strip()
    else:
        return f"Error {response.status_code}: {response.text}"

def mistral_generate_text(api_key, prompt, model="mistral-medium-latest"):
    user_message = f"Generate text based on the following prompt:\n\n{prompt}"
    return run_mistral(api_key, user_message, model=model)
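The prompt wrapping and request body built here can be sketched offline (model name and prompt are just examples; no request is sent):

```python
import json

prompt = "Say hello"
user_message = f"Generate text based on the following prompt:\n\n{prompt}"
payload = json.dumps({
    "model": "mistral-medium-latest",
    "messages": [{"role": "user", "content": user_message}],
    "random_seed": 1337,  # fixed seed, as in run_mistral(), for more reproducible sampling
})
```

Serializing with json.dumps() matches how the script passes data= to requests.post; passing the json= keyword would achieve the same thing.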
In [5]:
def generate_text_with_all_models(openai_key, anthropic_key, mistral_key, prompt, openai_models, anthropic_models, mistral_models, temperature=0.7, max_tokens=100, stop=None):
    results = {}
    # Generate text with every OpenAI model
    for model in openai_models:
        openai_result = openai_generate_text(openai_key, prompt, model, temperature, max_tokens, stop)
        results[f'openai_{model}'] = openai_result
    # Generate text with every Anthropic model
    for model in anthropic_models:
        anthropic_result = anthropic_generate_text(anthropic_key, prompt, model, max_tokens, temperature)
        results[f'anthropic_{model}'] = anthropic_result
    # Generate text with every Mistral model
    for model in mistral_models:
        mistral_result = mistral_generate_text(mistral_key, prompt, model)
        results[f'mistral_{model}'] = mistral_result
    return results
In [6]:
if __name__ == "__main__":
    openai_key = os.getenv("OPENAI_API_KEY")
    anthropic_key = os.getenv("ANTHROPIC_API_KEY")
    mistral_key = os.getenv("MISTRAL_API_KEY")
    prompt = "Create a simple, healthy recipe using the following ingredients: chicken breast, spinach, and quinoa."
    openai_models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "gpt-4o-mini", "gpt-4o"]
    anthropic_models = ["claude-3-5-sonnet-20240620", "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307"]
    mistral_models = ["open-mistral-7b", "open-mixtral-8x7b", "open-mixtral-8x22b", "mistral-small-latest", "mistral-medium-latest", "mistral-large-latest"]
    results = generate_text_with_all_models(openai_key, anthropic_key, mistral_key, prompt, openai_models, anthropic_models, mistral_models)
    for model_name, result in results.items():
        word_count = len(result.split())
        print(f"\033[1mResult from {model_name} ({word_count} words):\033[0m\n{result}\n")
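For side-by-side comparison, the collected results can also be ranked by the same word count the loop prints; a small illustrative sketch (sample strings, not real model output):

```python
# Hypothetical results dictionary, shaped like the script's output
results = {
    "openai_gpt-4": "one two three",
    "mistral_open-mistral-7b": "one two",
}
# Sort models from most to least verbose by word count
ranked = sorted(results.items(), key=lambda kv: len(kv[1].split()), reverse=True)
```

Word count is only a rough proxy for verbosity, but it gives a quick first axis for cross-model comparison.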