Question-Answer with Various AI Models
Simon-Pierre Boucher
2024-09-14
This Python script generates answers to a question using multiple models from the OpenAI, Anthropic, and Mistral APIs. Here is a summary of its main components:
1. Environment Setup:
load_dotenv(): This function loads environment variables (API keys) from a .env file, allowing secure access to the APIs for OpenAI, Anthropic, and Mistral.
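A minimal sketch of the expected setup (the key names match the os.getenv() calls later in the script; the values are placeholders, not real keys):

# Hypothetical .env file contents (placeholder values only):
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=sk-ant-...
#   MISTRAL_API_KEY=...
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory into os.environ
print(os.getenv("OPENAI_API_KEY") is not None)  # True once the key is loaded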
2. Question Answering Functions:
openai_question_answer():
- This function sends a user question (and optional context) to OpenAI's models (e.g., gpt-4) via an API request.
- It formats the input as a user message, sends it to OpenAI's API, and retrieves the generated answer.
- It supports configurable parameters like temperature, max tokens, and stop sequences. A usage sketch follows below.
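A minimal usage sketch for this helper, assuming a valid key is available in the OPENAI_API_KEY environment variable:

import os

answer = openai_question_answer(
    api_key=os.getenv("OPENAI_API_KEY"),
    question="What are the effects of climate change on polar bears?",
    model="gpt-4",       # any chat model name accepted by the endpoint
    temperature=0.7,
    max_tokens=256,
)
print(answer)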
anthropic_question_answer():
- Similar to the OpenAI function, this sends a question (and context) to Anthropic's models (e.g., claude-3-5-sonnet) using an API request.
- The function constructs the request and retrieves the generated response from the Anthropic API. A usage sketch follows below.
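The call mirrors the OpenAI helper; a sketch with an explicit context string, assuming ANTHROPIC_API_KEY is set:

import os

answer = anthropic_question_answer(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    question="What are the effects of climate change on polar bears?",
    context="Polar bears rely on Arctic sea ice to hunt seals.",
    model="claude-3-5-sonnet-20240620",
)
print(answer)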
run_mistral():
- This is a helper function that sends user input to Mistral's API to retrieve a response.
- It handles the API request and returns the response generated by the model (e.g., mistral-medium-latest).
mistral_question_answer():
- This function formats the question (and optional context) for Mistral and uses the run_mistral() helper to generate an answer. A usage sketch follows below.
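A sketch of the two-layer call, assuming MISTRAL_API_KEY is set; the wrapper builds the prompt string that run_mistral() then sends verbatim:

import os

# High-level wrapper: formats "Question: ..." (plus optional context) itself
answer = mistral_question_answer(
    api_key=os.getenv("MISTRAL_API_KEY"),
    question="What are the effects of climate change on polar bears?",
    model="mistral-medium-latest",
)

# Equivalent low-level call with a pre-formatted message
answer = run_mistral(
    os.getenv("MISTRAL_API_KEY"),
    "Question: What are the effects of climate change on polar bears?",
)
print(answer)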
3. Aggregated Question Answering:
answer_question_with_all_models():
- This function iterates through the lists of OpenAI, Anthropic, and Mistral models, calling the respective helper for each one to generate an answer for the given question and context.
- The responses are stored in a dictionary, with the model names as keys and the generated answers as values (illustrative shape below).
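The returned dictionary has roughly this shape (illustrative only; the answer strings below are placeholders, not real model output):

results = {
    "openai_gpt-4": "Polar bears are losing sea-ice habitat, which ...",
    "anthropic_claude-3-opus-20240229": "Climate change shortens the hunting season ...",
    "mistral_open-mistral-7b": "Melting Arctic ice forces polar bears to ...",
}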
4. Main Program:
API Keys and Question Setup:
- The API keys are fetched from environment variables.
- The question posed in this example is: "What are the effects of climate change on polar bears?"
- A detailed context is provided, explaining how climate change affects polar bears, such as habitat loss and malnutrition due to shrinking ice.
Model Lists:
- Lists of models for OpenAI (
gpt-3.5-turbo
,gpt-4
), Anthropic (claude-3-5-sonnet
,claude-3-opus
), and Mistral (open-mistral-7b
,mistral-medium-latest
) are specified.
- Lists of models for OpenAI (
Generating Answers:
- The function answer_question_with_all_models() is called to generate answers from all models for the given question and context.
Results Output:
- The script prints the results for each model, displaying the model name, word count, and the generated answer.
Purpose:
This script is useful for comparing how different AI models from OpenAI, Anthropic, and Mistral respond to the same question and context. It enables cross-model evaluation, providing a comprehensive way to benchmark different large language models for question-answering tasks.
In [1]:
import os
from dotenv import load_dotenv
import requests
import json
# Load environment variables from the .env file
load_dotenv()
Out[1]:
True
In [2]:
def openai_question_answer(api_key, question, context=None, model="gpt-4", temperature=0.7, max_tokens=1024, stop=None):
    """
    Generates an answer to a given question based on provided context using the OpenAI API.
    """
    if context:
        prompt_content = f"Context: {context}\n\nQuestion: {question}"
    else:
        prompt_content = f"Question: {question}"

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt_content}
        ],
        "temperature": temperature,
        "max_tokens": max_tokens
    }
    if stop:
        data["stop"] = stop

    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        generated_answer = response_json["choices"][0]["message"]["content"].strip()
        return generated_answer
    else:
        return f"Error {response.status_code}: {response.text}"
In [3]:
def anthropic_question_answer(api_key, question, context=None, model="claude-3-5-sonnet-20240620", temperature=0.7, max_tokens=1024):
    """
    Generates an answer to a given question based on provided context using the Anthropic API.
    """
    if context:
        prompt_content = f"Context: {context}\n\nQuestion: {question}"
    else:
        prompt_content = f"Question: {question}"

    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json"
    }
    data = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [
            {"role": "user", "content": prompt_content}
        ]
    }

    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        generated_answer = response_json["content"][0]["text"].strip()
        return generated_answer
    else:
        return f"Error {response.status_code}: {response.text}"
In [4]:
def run_mistral(api_key, user_message, model="mistral-medium-latest", temperature=0.7, max_tokens=1024):
    """
    Helper that sends a single user message to the Mistral API and returns the response text.
    """
    url = "https://api.mistral.ai/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message}
        ],
        "temperature": temperature,
        "top_p": 1.0,
        "max_tokens": max_tokens,
        "stream": False,
        "safe_prompt": False,
        "random_seed": 1337
    }

    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        return response_json["choices"][0]["message"]["content"].strip()
    else:
        return f"Error {response.status_code}: {response.text}"

def mistral_question_answer(api_key, question, context=None, model="mistral-medium-latest", temperature=0.7, max_tokens=1024):
    """
    Generates an answer to a given question based on provided context using the Mistral API.
    """
    if context:
        user_message = f"Context: {context}\n\nQuestion: {question}"
    else:
        user_message = f"Question: {question}"
    # Pass temperature and max_tokens through so these arguments take effect
    # (previously they were accepted here but hard-coded inside run_mistral)
    return run_mistral(api_key, user_message, model=model, temperature=temperature, max_tokens=max_tokens)
In [5]:
def answer_question_with_all_models(openai_key, anthropic_key, mistral_key, question, context, openai_models, anthropic_models, mistral_models, temperature=0.7, max_tokens=100, stop=None):
    results = {}

    # Answer the question with every OpenAI model
    for model in openai_models:
        openai_result = openai_question_answer(openai_key, question, context, model, temperature, max_tokens, stop)
        results[f'openai_{model}'] = openai_result

    # Answer the question with every Anthropic model
    for model in anthropic_models:
        anthropic_result = anthropic_question_answer(anthropic_key, question, context, model, temperature, max_tokens)
        results[f'anthropic_{model}'] = anthropic_result

    # Answer the question with every Mistral model
    for model in mistral_models:
        mistral_result = mistral_question_answer(mistral_key, question, context, model, temperature, max_tokens)
        results[f'mistral_{model}'] = mistral_result

    return results
In [6]:
if __name__ == "__main__":
    openai_key = os.getenv("OPENAI_API_KEY")
    anthropic_key = os.getenv("ANTHROPIC_API_KEY")
    mistral_key = os.getenv("MISTRAL_API_KEY")

    question = "What are the effects of climate change on polar bears?"
    context = "Polar bears are increasingly threatened by climate change. As the Arctic ice melts, their habitat shrinks, making it difficult for them to hunt seals, their primary food source. This leads to malnutrition and decreased reproduction rates. Conservation efforts are crucial to mitigate these effects and protect polar bear populations."

    openai_models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "gpt-4o-mini", "gpt-4o"]
    anthropic_models = ["claude-3-5-sonnet-20240620", "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307"]
    mistral_models = ["open-mistral-7b", "open-mixtral-8x7b", "open-mixtral-8x22b", "mistral-small-latest", "mistral-medium-latest", "mistral-large-latest"]

    results = answer_question_with_all_models(openai_key, anthropic_key, mistral_key, question, context, openai_models, anthropic_models, mistral_models)

    for model_name, result in results.items():
        word_count = len(result.split())
        print(f"\033[1mResult from {model_name} ({word_count} words):\033[0m\n{result}\n")