MISTRAL - CHATBOT
Simon-Pierre Boucher
2024-09-14
This code defines a ChatBot class that interacts with Mistral's API to generate conversational text responses. Here's a detailed summary of how the code works:
1. Environment Setup:
- Environment variables are loaded from a .env file using load_dotenv().
- The API key is fetched with os.getenv("MISTRAL_API_KEY").
2. ChatBot Class:
- The ChatBot class is initialized with various parameters, including:
  - api_key: The Mistral API key.
  - model: The model to use for text generation (codestral-mamba-latest by default).
  - temperature and top_p: Control the randomness and diversity of the output.
  - max_tokens, min_tokens: Set token limits for the generated response.
  - tool_choice, safe_prompt: Options for tool integration and safety prompts.
- It keeps track of the conversation history in self.conversation_history, a list of all messages exchanged during the conversation.
3. Adding Messages:
- The add_message() method appends a message to the conversation history, taking two parameters:
  - role: Who is sending the message ("user" or "assistant").
  - content: The actual text of the message.
4. Getting Responses:
- The get_response() method takes the user input, adds it to the conversation history, and calls generate_mistral_text() to get a response from the Mistral API.
- The assistant's message is extracted from the API result and appended to the conversation history.
- It then returns the assistant's reply. If an error occurs during the API call, an error message is printed instead.
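The extraction step can be illustrated with a mock payload. The dict below assumes the usual chat-completion shape (choices → message → content) that the parsing code in this notebook relies on; it is not a real API response, and actual responses carry additional fields:

```python
# Mock of the JSON structure the chat endpoint is expected to return
# (shape assumed from the parsing code in this notebook).
mock_response = {
    "choices": [
        {"index": 0,
         "message": {"role": "assistant",
                     "content": "Here are five dinner ideas for the week..."}}
    ]
}

# The same defensive extraction get_response() performs:
choices = mock_response.get("choices", [])
reply = (choices[0].get("message", {}).get("content", "No reply found.")
         if choices else "No reply found.")
print(reply)
```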
5. Mistral Text Generation:
- The generate_mistral_text() method sends a POST request to the Mistral API to generate a text response. It uses the following key parameters:
  - model: The specific AI model to use.
  - messages: The list of conversation messages (history), including both the user's inputs and previous assistant replies.
  - Optional settings: temperature, top_p, max_tokens, stop, and random_seed customize the response generation.
- The request is sent with requests.post(), with error handling to catch and print any request-related issues.
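The include-only-if-set handling of the optional parameters can be sketched as a small standalone helper (build_payload is an illustrative name, not part of the class):

```python
def build_payload(model, messages, temperature=0.7, top_p=1.0,
                  max_tokens=None, stop=None, random_seed=None):
    """Build the JSON body for the chat request, omitting unset options."""
    data = {"model": model, "messages": messages,
            "temperature": temperature, "top_p": top_p}
    # None means "not requested", so the key is left out of the body entirely
    # rather than sent as null.
    for key, value in (("max_tokens", max_tokens), ("stop", stop),
                       ("random_seed", random_seed)):
        if value is not None:
            data[key] = value
    return data

payload = build_payload("mistral-large-latest",
                        [{"role": "user", "content": "Hello"}],
                        max_tokens=100)
print("max_tokens" in payload, "stop" in payload)  # True False
```

Omitting unset keys lets the API fall back to its own defaults instead of receiving explicit nulls.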
6. Example Usage:
- A ChatBot instance is created with a specific model (mistral-large-latest) and generation settings.
- The user input "Can you suggest 5 dinner ideas for this week?" is passed to get_response().
- The bot's generated response is printed.
Summary of Workflow:
- User input is received.
- User message is added to conversation history.
- API request is sent to Mistral with the entire conversation history.
- Assistant's response is extracted from the API's response.
- Response is added to the conversation history and printed.
This implementation allows for dynamic, ongoing conversations with Mistral's text-generation API by maintaining a history of interactions and building context over multiple turns of dialogue.
In [2]:
import os
import requests
from dotenv import load_dotenv
from IPython.display import display, HTML
import re
# Load environment variables from the .env file
load_dotenv()
# Get the API key from the environment variables
api_key = os.getenv("MISTRAL_API_KEY")
In [3]:
class ChatBot:
def __init__(self, api_key, model="codestral-mamba-latest", temperature=0.7, top_p=1.0, max_tokens=None, min_tokens=None, stream=False, stop=None, random_seed=None, tool_choice="auto", safe_prompt=False):
"""
Initialize the ChatBot with API key and parameters.
Parameters:
- api_key (str): The API key for Mistral.
- model (str): The model to use for text generation.
- temperature (float): Controls randomness in the output (0-1.5).
- top_p (float): Nucleus sampling (0-1).
- max_tokens (int): The maximum number of tokens to generate in the completion.
- min_tokens (int): The minimum number of tokens to generate in the completion.
- stream (bool): Whether to stream back partial progress.
- stop (str or list): Stop generation if this token or one of these tokens is detected.
- random_seed (int): The seed to use for random sampling.
- tool_choice (str): Tool choice for the response ("auto", "none", "any").
- safe_prompt (bool): Whether to inject a safety prompt before all conversations.
"""
self.api_key = api_key
self.model = model
self.temperature = temperature
self.top_p = top_p
self.max_tokens = max_tokens
self.min_tokens = min_tokens
self.stream = stream
self.stop = stop
self.random_seed = random_seed
self.tool_choice = tool_choice
self.safe_prompt = safe_prompt
self.conversation_history = []
def add_message(self, role, content):
"""
Add a message to the conversation history.
Parameters:
- role (str): The role of the sender ("system", "user", or "assistant").
- content (str): The content of the message.
"""
self.conversation_history.append({"role": role, "content": content})
def get_response(self, user_input):
"""
Get a response from the Mistral API based on the user input.
Parameters:
- user_input (str): The user's input message.
Returns:
- assistant_reply (str): The assistant's generated response.
"""
# Add user input to conversation history
self.add_message("user", user_input)
# Call the Mistral API to get a response
response = self.generate_mistral_text(
self.api_key,
self.model,
self.conversation_history,
temperature=self.temperature,
top_p=self.top_p,
max_tokens=self.max_tokens,
min_tokens=self.min_tokens,
stream=self.stream,
stop=self.stop,
random_seed=self.random_seed,
tool_choice=self.tool_choice,
safe_prompt=self.safe_prompt
)
        if response and response.get("choices"):
            # Extract the assistant's reply from the first choice
            assistant_reply = response["choices"][0].get("message", {}).get("content", "No reply found.")
            # Add assistant's reply to conversation history
            self.add_message("assistant", assistant_reply)
            return assistant_reply
        else:
            return "Sorry, I couldn't generate a response."
def generate_mistral_text(self, api_key, model, messages, temperature=0.7, top_p=1.0, max_tokens=None, min_tokens=None, stream=False, stop=None, random_seed=None, tool_choice="auto", safe_prompt=False):
"""
Generate text using Mistral's API.
Parameters:
- api_key (str): The API key for Mistral.
- model (str): The model to use for text generation.
- messages (list): A list of messages to pass to the API in a conversation format.
- temperature (float): Controls randomness in the output (0-1.5).
- top_p (float): Nucleus sampling (0-1).
- max_tokens (int): The maximum number of tokens to generate in the completion.
- min_tokens (int): The minimum number of tokens to generate in the completion.
- stream (bool): Whether to stream back partial progress.
- stop (str or list): Stop generation if this token or one of these tokens is detected.
- random_seed (int): The seed to use for random sampling.
- tool_choice (str): Tool choice for the response ("auto", "none", "any").
- safe_prompt (bool): Whether to inject a safety prompt before all conversations.
Returns:
- response (dict): The API response as a dictionary.
"""
url = "https://api.mistral.ai/v1/chat/completions"
headers = {
"Content-Type": "application/json",
"Accept": "application/json",
"Authorization": f"Bearer {api_key}"
}
data = {
"model": model,
"messages": messages,
"temperature": temperature,
"top_p": top_p,
"stream": stream,
"tool_choice": tool_choice,
"safe_prompt": safe_prompt
}
# Optional parameters
if max_tokens is not None:
data["max_tokens"] = max_tokens
if min_tokens is not None:
data["min_tokens"] = min_tokens
if stop is not None:
data["stop"] = stop
if random_seed is not None:
data["random_seed"] = random_seed
try:
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()
return response.json()
except requests.exceptions.RequestException as e:
print(f"An error occurred: {e}")
return None
In [4]:
bot = ChatBot(api_key, model="mistral-large-latest", temperature=0.7, top_p=1.0, max_tokens=2000)
In [5]:
user_input = "Can you suggest 5 dinner ideas for this week?"
response = bot.get_response(user_input)
print("Assistant:", response)
In [6]:
user_input = "Can you give me the recipe for the first idea?"
response = bot.get_response(user_input)
print("Assistant:", response)
In [7]:
user_input = "Can you give me the recipe for the second idea?"
response = bot.get_response(user_input)
print("Assistant:", response)