Quickstart
Looking for La Plateforme? Head to console.mistral.ai
Getting started with Mistral AI API
The Mistral AI API provides a seamless way for developers to integrate Mistral's state-of-the-art
models into their applications and production workflows with just a few lines of code.
Our API is currently available through La Plateforme.
You need to activate payments on your account to enable your API keys.
After a few moments, you will be able to use our chat
endpoint:
- python
- javascript
- curl
import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-large-latest"

client = MistralClient(api_key=api_key)

chat_response = client.chat(
    model=model,
    messages=[ChatMessage(role="user", content="What is the best French cheese?")],
)

print(chat_response.choices[0].message.content)
import MistralClient from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new MistralClient(apiKey);

const chatResponse = await client.chat({
  model: 'mistral-large-latest',
  messages: [{role: 'user', content: 'What is the best French cheese?'}],
});

console.log('Chat:', chatResponse.choices[0].message.content);
curl --location "https://api.mistral.ai/v1/chat/completions" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
       "model": "mistral-large-latest",
       "messages": [{"role": "user", "content": "Who is the most renowned French painter?"}]
     }'
To generate text embeddings using Mistral AI's embeddings API, make a request to the API
endpoint, specify the embedding model mistral-embed, and provide a list of input texts.
The API returns the corresponding embeddings as numerical vectors, which can be used for
further analysis or processing in NLP applications.
- python
- javascript
- curl
import os
from mistralai.client import MistralClient

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-embed"

client = MistralClient(api_key=api_key)

embeddings_response = client.embeddings(
    model=model,
    input=["Embed this sentence.", "As well as this one."],
)

print(embeddings_response)
import MistralClient from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new MistralClient(apiKey);

const embeddingsResponse = await client.embeddings({
  model: 'mistral-embed',
  input: ["Embed this sentence.", "As well as this one."],
});

console.log(embeddingsResponse);
curl --location "https://api.mistral.ai/v1/embeddings" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
       "model": "mistral-embed",
       "input": ["Embed this sentence.", "As well as this one."]
     }'
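A common next step after fetching embeddings is comparing texts by cosine similarity. The sketch below uses toy 3-dimensional vectors as stand-ins for the much larger vectors mistral-embed actually returns; the cosine_similarity helper is illustrative, not part of the Mistral client.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for two embeddings returned by the API.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.25]
print(cosine_similarity(v1, v2))
```

Values close to 1 indicate semantically similar texts; in practice you would pass the vectors from embeddings_response.data in place of the toy inputs.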
For a full description of the models offered on the API, head over to the model documentation.
Account setup
- To get started, create a Mistral account or sign in at console.mistral.ai.
- Then, navigate to "Workspace" and "Billing" to add your payment information and activate payments on your account.
- After that, go to the "API keys" page and make a new API key by clicking "Create new key". Make sure to copy the API key, save it safely, and do not share it with anyone.
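Once the key is created, a quick sanity check is to call an authenticated endpoint. The sketch below assumes the API exposes a GET /v1/models endpoint accepting a Bearer token; build_models_request is a hypothetical helper used here only for illustration.

```python
import os
import urllib.request

def build_models_request(api_key: str) -> urllib.request.Request:
    # Hypothetical helper: builds an authenticated request against the
    # (assumed) models listing endpoint using the Bearer token scheme.
    return urllib.request.Request(
        "https://api.mistral.ai/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

api_key = os.environ.get("MISTRAL_API_KEY")
if api_key:
    with urllib.request.urlopen(build_models_request(api_key)) as resp:
        print(resp.status)  # a 200 response means the key is active
```

A 401 response typically means the key is missing or payments have not been activated yet.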