ANTHROPIC - TEXT GENERATION
Simon-Pierre Boucher
2024-09-14
This Python script demonstrates how to interact with Anthropic's API to generate responses with a model such as claude-3-5-sonnet. Here's a breakdown of its functionality:
Environment Setup:
- The script loads environment variables, including the Anthropic API key, from a .env file using dotenv. This keeps sensitive information secure by avoiding hardcoding it into the script.
Function generate_anthropic_text():
- This function sends a request to Anthropic's API to generate a response based on specific parameters:
  - api_key: The API key for accessing the Anthropic API.
  - model: The model used for text generation (in this case, claude-3-5-sonnet-20240620).
  - messages: A list of messages representing a conversation. Each message has a role (e.g., user), similar to a chat.
  - max_tokens, temperature, and top_p: These parameters control the response's length, randomness, and diversity.
- The request is made to https://api.anthropic.com/v1/messages, including the model and the conversation format.
- The function returns the API response as a JSON object or prints an error message if the request fails.
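To make the request concrete, here is a sketch of the JSON body the function assembles; the fields and defaults mirror the code later in the notebook, and the prompt text is just a placeholder.

```python
# Example request body for the Messages endpoint, using the same fields
# and defaults as generate_anthropic_text() defined below.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,   # upper bound on generated tokens
    "temperature": 0.7,   # sampling randomness (0-1)
    "top_p": 0.9,         # nucleus-sampling cutoff (0-1)
    "messages": [
        {"role": "user", "content": "Explain quantum entanglement in one paragraph."}
    ],
}
print(payload["model"])
```

This dictionary is exactly what gets POSTed as the `json=` argument below.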
Function format_anthropic_response():
- This function extracts and formats the assistant's response from the API output.
- If a valid response is received, it formats the assistant's message in markdown format for easy display.
- If no valid response is received, it returns an error message.
Generating and Displaying Text:
- The script sets up a conversation where the user asks about quantum entanglement and its challenges to classical notions of locality and realism.
- It then calls generate_anthropic_text() with the appropriate parameters (model, messages, etc.) and formats the result using format_anthropic_response().
Summary:
- API Interaction: The script interacts with Anthropic's API to generate a conversation response.
- Parameters: Control over text-generation parameters like temperature, max_tokens, and top_p.
- Formatting: The script extracts the response and presents it in a readable format.
This setup allows for seamless generation and formatting of responses from Anthropic’s language model.
In [2]:
import os
import requests
from dotenv import load_dotenv
from IPython.display import display, HTML
import re
# Load environment variables from the .env file
load_dotenv()
# Get the API key from the environment variables
api_key = os.getenv("ANTHROPIC_API_KEY")
In [3]:
import requests
def generate_anthropic_text(api_key, model, messages, max_tokens=1024, temperature=0.7, top_p=0.9):
"""
Generate text using Anthropic's API.
Parameters:
- api_key (str): The API key for Anthropic.
- model (str): The model to use for text generation.
- messages (list): A list of messages to pass to the API in a conversation format.
- max_tokens (int): The maximum number of tokens to generate in the completion.
- temperature (float): Controls randomness in the output (0-1).
- top_p (float): Controls the diversity via nucleus sampling (0-1).
Returns:
- response (dict): The API response as a dictionary.
"""
url = "https://api.anthropic.com/v1/messages"
headers = {
"Content-Type": "application/json",
"x-api-key": api_key,
"anthropic-version": "2023-06-01"
}
data = {
"model": model,
"max_tokens": max_tokens,
        "temperature": temperature,  # sampling randomness (0-1)
        "top_p": top_p,              # nucleus-sampling cutoff (0-1)
"messages": messages
}
try:
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
return response.json()
except requests.exceptions.RequestException as e:
print(f"An error occurred: {e}")
return None
In [4]:
def format_anthropic_response(response):
"""
Formats the response from Anthropic API to display only the assistant's message.
Parameters:
- response (dict): The API response as a dictionary.
Returns:
- formatted_text (str): A formatted string with Markdown for the assistant's message.
"""
if response and "content" in response:
assistant_message = response["content"][0]["text"]
formatted_text = f"**Assistant:**\n\n{assistant_message}\n"
return formatted_text
else:
return "No valid response received."
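For reference, a successful Messages API response resembles the abridged dictionary below; the field layout matches what format_anthropic_response() reads, and the sample text is invented. The extraction logic is repeated inline so the snippet runs on its own.

```python
# Abridged sketch of a successful Messages API response.
# The sample text is made up for illustration.
sample_response = {
    "role": "assistant",
    "content": [{"type": "text", "text": "Quantum entanglement links two particles..."}],
    "stop_reason": "end_turn",
}

# The same extraction performed by format_anthropic_response():
if sample_response and "content" in sample_response:
    formatted = f"**Assistant:**\n\n{sample_response['content'][0]['text']}\n"
else:
    formatted = "No valid response received."
print(formatted)
```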
In [5]:
model = "claude-3-5-sonnet-20240620"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_anthropic_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_anthropic_response(response)
print(formatted_response)
In [6]:
model = "claude-3-opus-20240229"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_anthropic_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_anthropic_response(response)
print(formatted_response)
In [7]:
model = "claude-3-sonnet-20240229"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_anthropic_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_anthropic_response(response)
print(formatted_response)
In [8]:
model = "claude-3-haiku-20240307"
messages = [
{"role": "user", "content": "Explain the concept of quantum entanglement and how it challenges classical notions of locality and realism. What are the implications of entanglement for our understanding of causality and information transfer?"}
]
response = generate_anthropic_text(api_key, model, messages, temperature=0.7, max_tokens=2000, top_p=0.9)
formatted_response = format_anthropic_response(response)
print(formatted_response)
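The four cells above differ only in the model name, so as a sketch they could be collapsed into a single loop. The commented-out lines show where generate_anthropic_text() and format_anthropic_response() from earlier cells would be invoked; they are left inert here to avoid repeated API calls.

```python
# The four model names queried in the cells above.
models = [
    "claude-3-5-sonnet-20240620",
    "claude-3-opus-20240229",
    "claude-3-sonnet-20240229",
    "claude-3-haiku-20240307",
]
prompt = ("Explain the concept of quantum entanglement and how it challenges "
          "classical notions of locality and realism. What are the implications "
          "of entanglement for our understanding of causality and information transfer?")
messages = [{"role": "user", "content": prompt}]

for model in models:
    print(f"Model: {model}")
    # response = generate_anthropic_text(api_key, model, messages,
    #                                    temperature=0.7, max_tokens=2000, top_p=0.9)
    # print(format_anthropic_response(response))
```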