Few Shot Prompting with Various AI Models
Simon-Pierre Boucher
2024-09-14
This Python script is designed to generate responses to a few-shot prompt using several large language models (LLMs) via APIs from OpenAI, Anthropic, and Mistral. Here is a breakdown of its key components:
Loading Environment Variables:
- The load_dotenv() function loads API keys for the different LLM services from a .env file. This helps securely manage sensitive credentials such as OPENAI_API_KEY, ANTHROPIC_API_KEY, and MISTRAL_API_KEY (see the sketch below).
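For reference, the .env file is expected to sit next to the notebook and contain one entry per provider; the values below are placeholders:

OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
MISTRAL_API_KEY=your-mistral-key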
Function: openai_few_shot_prompt():
- This function sends a few-shot learning prompt to the OpenAI API to generate a response from a model (e.g., gpt-4).
- It constructs the prompt by appending example prompts and responses to the user-provided input, as illustrated below.
- API request parameters include the model name, temperature, and token limit.
- If the API call is successful, it returns the model-generated response.
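For illustration, with the user prompt and examples used later in this notebook, the constructed few-shot prompt sent to the model looks like this (showing only the first example):

Example prompt: Describe the impact of climate change on coral reefs.
Example response: Climate change is causing ocean temperatures to rise, leading to coral bleaching and the loss of biodiversity in coral reef ecosystems.

Prompt: Describe the impact of climate change on polar bears.
Response: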
Function: anthropic_few_shot_prompt():
- Similar to the OpenAI function, this one constructs a few-shot prompt and makes a request to Anthropic's API (e.g., using models like claude-3-5-sonnet).
- The response is extracted from the JSON data returned by the API.
Function: run_mistral():
- This function makes a request to the Mistral API with the user's message, model, and configuration settings (e.g., temperature, token limits).
- It returns the Mistral model's generated content, or an error message if the API call fails.
Function: mistral_few_shot_prompt():
- This function builds a few-shot prompt and calls run_mistral() to generate a response using the Mistral API.
Function: generate_few_shot_prompts_with_all_models():
- This function loops through several OpenAI, Anthropic, and Mistral models to generate responses to the same prompt.
- It stores the responses in a dictionary where the keys are the model names and the values are the generated outputs, as sketched below.
- The function supports multiple models for each service, such as gpt-4, claude-3-opus, and mistral-medium-latest.
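The returned dictionary therefore has one entry per provider/model pair, roughly like this (responses omitted for illustration):

{
    "openai_gpt-4": "...",
    "anthropic_claude-3-opus-20240229": "...",
    "mistral_mistral-medium-latest": "..."
}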
Main Program Execution:
- API keys are fetched from environment variables.
- A user prompt is provided (e.g., "Describe the impact of climate change on polar bears"), along with examples.
- Lists of model names for each service are specified.
- The generate_few_shot_prompts_with_all_models() function is called to generate responses to the prompt from all the models.
- The results are printed, showing the response and word count for each model.
This script allows users to experiment with different LLMs and compare their responses to a given prompt, facilitating evaluation across various models from different providers.
In [1]:
import os
from dotenv import load_dotenv
import requests
import json
# Load environment variables from the .env file
load_dotenv()
Out[1]:
In [2]:
def openai_few_shot_prompt(api_key, prompt, examples, model="gpt-4", temperature=0.7, max_tokens=1024, stop=None):
    """
    Generates a response based on a few-shot prompt using the OpenAI API.
    """
    # Build the few-shot prompt
    few_shot_prompt = ""
    for example in examples:
        few_shot_prompt += f"Example prompt: {example['prompt']}\n"
        few_shot_prompt += f"Example response: {example['response']}\n\n"
    few_shot_prompt += f"Prompt: {prompt}\nResponse:"

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": few_shot_prompt}
        ],
        "temperature": temperature,
        "max_tokens": max_tokens
    }
    if stop:
        data["stop"] = stop

    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        generated_response = response_json["choices"][0]["message"]["content"].strip()
        return generated_response
    else:
        return f"Error {response.status_code}: {response.text}"
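As a quick sanity check, the function can also be called on its own; a minimal sketch, assuming OPENAI_API_KEY is set in the environment and using one of the models listed later in the notebook:

single_example = [
    {"prompt": "Describe the impact of climate change on coral reefs.",
     "response": "Rising ocean temperatures cause coral bleaching and biodiversity loss."}
]
answer = openai_few_shot_prompt(
    os.getenv("OPENAI_API_KEY"),
    "Describe the impact of climate change on polar bears.",
    single_example,
    model="gpt-4o-mini",
    max_tokens=200,
)
print(answer)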
In [3]:
def anthropic_few_shot_prompt(api_key, prompt, examples, model="claude-3-5-sonnet-20240620", temperature=0.7, max_tokens=1024):
    """
    Generates a response based on a few-shot prompt using the Anthropic API.
    """
    # Build the few-shot prompt
    few_shot_prompt = ""
    for example in examples:
        few_shot_prompt += f"Example prompt: {example['prompt']}\n"
        few_shot_prompt += f"Example response: {example['response']}\n\n"
    few_shot_prompt += f"Prompt: {prompt}\nResponse:"

    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json"
    }
    data = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [
            {"role": "user", "content": few_shot_prompt}
        ]
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        generated_response = response_json["content"][0]["text"].strip()
        return generated_response
    else:
        return f"Error {response.status_code}: {response.text}"
In [4]:
def run_mistral(api_key, user_message, model="mistral-medium-latest", temperature=0.7, max_tokens=1024):
    url = "https://api.mistral.ai/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message}
        ],
        "temperature": temperature,
        "top_p": 1.0,
        "max_tokens": max_tokens,
        "stream": False,
        "safe_prompt": False,
        "random_seed": 1337
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        response_json = response.json()
        return response_json["choices"][0]["message"]["content"].strip()
    else:
        return f"Error {response.status_code}: {response.text}"

def mistral_few_shot_prompt(api_key, prompt, examples, model="mistral-medium-latest", temperature=0.7, max_tokens=512):
    """
    Generates a response based on a few-shot prompt using the Mistral API.
    """
    # Build the few-shot prompt
    few_shot_prompt = ""
    for example in examples:
        few_shot_prompt += f"Example prompt: {example['prompt']}\n"
        few_shot_prompt += f"Example response: {example['response']}\n\n"
    few_shot_prompt += f"Prompt: {prompt}\nResponse:"
    return run_mistral(api_key, few_shot_prompt, model=model, temperature=temperature, max_tokens=max_tokens)
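For example, a single Mistral model can be queried directly; a minimal sketch, assuming MISTRAL_API_KEY is set in the environment:

answer = mistral_few_shot_prompt(
    os.getenv("MISTRAL_API_KEY"),
    "Describe the impact of climate change on polar bears.",
    [{"prompt": "Explain how deforestation affects local climates.",
      "response": "Deforestation reduces transpiration, leading to hotter, drier local weather."}],
    model="open-mistral-7b",
    temperature=0.5,
    max_tokens=256,
)
print(answer)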
In [5]:
def generate_few_shot_prompts_with_all_models(openai_key, anthropic_key, mistral_key, prompt, examples, openai_models, anthropic_models, mistral_models, temperature=0.7, max_tokens=100, stop=None):
    results = {}

    # Generate responses with all OpenAI models
    for model in openai_models:
        openai_result = openai_few_shot_prompt(openai_key, prompt, examples, model, temperature, max_tokens, stop)
        results[f'openai_{model}'] = openai_result

    # Generate responses with all Anthropic models
    for model in anthropic_models:
        anthropic_result = anthropic_few_shot_prompt(anthropic_key, prompt, examples, model, temperature, max_tokens)
        results[f'anthropic_{model}'] = anthropic_result

    # Generate responses with all Mistral models
    for model in mistral_models:
        mistral_result = mistral_few_shot_prompt(mistral_key, prompt, examples, model, temperature, max_tokens)
        results[f'mistral_{model}'] = mistral_result

    return results
In [6]:
if __name__ == "__main__":
    openai_key = os.getenv("OPENAI_API_KEY")
    anthropic_key = os.getenv("ANTHROPIC_API_KEY")
    mistral_key = os.getenv("MISTRAL_API_KEY")

    prompt = "Describe the impact of climate change on polar bears."
    examples = [
        {"prompt": "Describe the impact of climate change on coral reefs.", "response": "Climate change is causing ocean temperatures to rise, leading to coral bleaching and the loss of biodiversity in coral reef ecosystems."},
        {"prompt": "Explain how deforestation affects local climates.", "response": "Deforestation leads to a decrease in transpiration, which can alter local weather patterns and contribute to a hotter, drier climate."}
    ]

    openai_models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-turbo", "gpt-4o-mini", "gpt-4o"]
    anthropic_models = ["claude-3-5-sonnet-20240620", "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307"]
    mistral_models = ["open-mistral-7b", "open-mixtral-8x7b", "open-mixtral-8x22b", "mistral-small-latest", "mistral-medium-latest", "mistral-large-latest"]

    results = generate_few_shot_prompts_with_all_models(openai_key, anthropic_key, mistral_key, prompt, examples, openai_models, anthropic_models, mistral_models)

    for model_name, result in results.items():
        word_count = len(result.split())
        print(f"\033[1mResult from {model_name} ({word_count} words):\033[0m\n{result}\n")
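To keep the comparison around for later analysis, the results dictionary can also be written to disk; a minimal sketch using the json module imported above (the filename is arbitrary):

    with open("few_shot_results.json", "w", encoding="utf-8") as f:
        json.dump(results, f, ensure_ascii=False, indent=2)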