# Embedding

## Overview
Embeddings in Griptape are multidimensional representations of text or image data. Because embeddings carry semantic information, they are powerful for use cases like text or image similarity search in a RAG Engine.
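Similarity between two embeddings is typically measured with cosine similarity: vectors that point in nearly the same direction score close to 1.0, unrelated ones close to 0.0. A minimal, driver-agnostic sketch (the toy 3-dimensional vectors are made up for illustration; real drivers return hundreds of dimensions):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy "embeddings" for illustration only
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # close to 1.0: semantically similar
print(cosine_similarity(cat, car))  # close to 0.0: unrelated
```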
## Embedding Drivers

### OpenAI

The `OpenAiEmbeddingDriver` uses the OpenAI Embeddings API.
### OpenAI Compatible

Many services, such as LM Studio and OhMyGPT, provide OpenAI-compatible APIs. You can use the `OpenAiEmbeddingDriver` to interact with these services. Simply set `base_url` to the service's API endpoint and `model` to the model name. If the service requires an API key, you can set it in the `api_key` field.
```python
from griptape.drivers.embedding.openai import OpenAiEmbeddingDriver

embedding_driver = OpenAiEmbeddingDriver(
    base_url="http://127.0.0.1:1234/v1",
    model="nomic-ai/nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.Q2_K",
)

embeddings = embedding_driver.embed("Hello world!")

# display the first 3 embeddings
print(embeddings[:3])
```
> **Tip**
> Make sure to include `v1` at the end of the `base_url` to match the OpenAI API endpoint.
### Azure OpenAI

The `AzureOpenAiEmbeddingDriver` uses the same parameters as `OpenAiEmbeddingDriver` with updated defaults.
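A configuration sketch, assuming an Azure OpenAI resource with an embedding model deployment; the environment variable names and the `azure_endpoint`/`azure_deployment` parameter values here are placeholders to adapt to your deployment:

```python
import os

from griptape.drivers.embedding.azure_openai import AzureOpenAiEmbeddingDriver

embedding_driver = AzureOpenAiEmbeddingDriver(
    model="text-embedding-3-small",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

embeddings = embedding_driver.embed("Hello world!")

# display the first 3 embeddings
print(embeddings[:3])
```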
### Bedrock Titan

> **Info**
> This driver requires the `drivers-embedding-amazon-bedrock` extra.

The `AmazonBedrockTitanEmbeddingDriver` uses the Amazon Bedrock Embeddings API.
```python
from griptape.drivers.embedding.amazon_bedrock import AmazonBedrockTitanEmbeddingDriver
from griptape.loaders import ImageLoader

embedding_driver = AmazonBedrockTitanEmbeddingDriver()

embeddings = embedding_driver.embed("Hello world!")
print(embeddings[:3])

# Some models support images!
multi_modal_embedding_driver = AmazonBedrockTitanEmbeddingDriver(model="amazon.titan-embed-image-v1")
image = ImageLoader().load("tests/resources/cow.png")
image_embeddings = multi_modal_embedding_driver.embed(image)
print(image_embeddings[:3])
```
### Google

> **Info**
> This driver requires the `drivers-embedding-google` extra.

The `GoogleEmbeddingDriver` uses the Google Embeddings API.
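A minimal sketch, assuming the import path follows the same pattern as the other drivers and that your API key is available in a `GOOGLE_API_KEY` environment variable:

```python
import os

from griptape.drivers.embedding.google import GoogleEmbeddingDriver

embedding_driver = GoogleEmbeddingDriver(api_key=os.environ["GOOGLE_API_KEY"])

embeddings = embedding_driver.embed("Hello world!")

# display the first 3 embeddings
print(embeddings[:3])
```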
### Hugging Face Hub

> **Info**
> This driver requires the `drivers-embedding-huggingface` extra.

The `HuggingFaceHubEmbeddingDriver` connects to the Hugging Face Hub API. It supports models with the following tasks:

- `feature-extraction`
```python
import os

from griptape.drivers.embedding.huggingface_hub import HuggingFaceHubEmbeddingDriver
from griptape.tokenizers import HuggingFaceTokenizer

driver = HuggingFaceHubEmbeddingDriver(
    api_token=os.environ["HUGGINGFACE_HUB_ACCESS_TOKEN"],
    model="sentence-transformers/all-MiniLM-L6-v2",
    tokenizer=HuggingFaceTokenizer(
        model="sentence-transformers/all-MiniLM-L6-v2",
        max_output_tokens=512,
    ),
)

embeddings = driver.embed("Hello world!")

# display the first 3 embeddings
print(embeddings[:3])
```
### Ollama

> **Info**
> This driver requires the `drivers-embedding-ollama` extra.

The `OllamaEmbeddingDriver` uses the Ollama Embeddings API.
```python
from griptape.drivers.embedding.ollama import OllamaEmbeddingDriver

driver = OllamaEmbeddingDriver(
    model="all-minilm",
)

results = driver.embed("Hello world!")

# display the first 3 embeddings
print(results[:3])
```
### Amazon SageMaker Jumpstart

The `AmazonSageMakerJumpstartEmbeddingDriver` uses Amazon SageMaker Endpoints to generate embeddings on AWS.

> **Info**
> This driver requires the `drivers-embedding-amazon-sagemaker` extra.
```python
import os

from griptape.drivers.embedding.amazon_sagemaker_jumpstart import AmazonSageMakerJumpstartEmbeddingDriver

driver = AmazonSageMakerJumpstartEmbeddingDriver(
    endpoint=os.environ["SAGEMAKER_ENDPOINT"],
    model=os.environ["SAGEMAKER_TENSORFLOW_HUB_MODEL"],
)

embeddings = driver.embed("Hello world!")

# display the first 3 embeddings
print(embeddings[:3])
```
### VoyageAI

The `VoyageAiEmbeddingDriver` uses the VoyageAI Embeddings API.

> **Info**
> This driver requires the `drivers-embedding-voyageai` extra.
```python
import os

from griptape.drivers.embedding.voyageai import VoyageAiEmbeddingDriver
from griptape.loaders import ImageLoader

embedding_driver = VoyageAiEmbeddingDriver(api_key=os.environ["VOYAGE_API_KEY"])

embeddings = embedding_driver.embed("Hello world!")
print(embeddings[:3])

# Some models support images!
multi_modal_embedding_driver = VoyageAiEmbeddingDriver(
    api_key=os.environ["VOYAGE_API_KEY"], model="voyage-multimodal-3"
)
image = ImageLoader().load("tests/resources/cow.png")
image_embeddings = multi_modal_embedding_driver.embed(image)
print(image_embeddings[:3])
```
### Cohere

The `CohereEmbeddingDriver` uses the Cohere Embeddings API.

> **Info**
> This driver requires the `drivers-embedding-cohere` extra.
```python
import os

from griptape.drivers.embedding.cohere import CohereEmbeddingDriver

embedding_driver = CohereEmbeddingDriver(
    model="embed-english-v3.0",
    api_key=os.environ["COHERE_API_KEY"],
    input_type="search_document",
)

embeddings = embedding_driver.embed("Hello world!")

# display the first 3 embeddings
print(embeddings[:3])
```
### Nvidia NIM

The `NvidiaNimEmbeddingDriver` uses the Nvidia NIM API.

> **Info**
> The Nvidia NIM API is OpenAI-compatible, except for a single parameter: `input_type`. This parameter is controlled by the keyword argument `vector_operation` when calling the driver's `embed` methods.
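A sketch of the usage described above; the import path, `base_url`, and model name are assumptions for a locally hosted NIM embedding microservice, to be adapted to your deployment:

```python
from griptape.drivers.embedding.nvidia_nim import NvidiaNimEmbeddingDriver

# base_url and model are placeholders for your NIM deployment
embedding_driver = NvidiaNimEmbeddingDriver(
    model="nvidia/nv-embedqa-e5-v5",
    base_url="http://localhost:8000/v1",
)

# vector_operation maps to the NIM-specific input_type parameter
embeddings = embedding_driver.embed("Hello world!", vector_operation="query")

# display the first 3 embeddings
print(embeddings[:3])
```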
## Override Default Structure Embedding Driver

Here is how you can override the Embedding Driver that is used by default in Structures.
```python
from griptape.configs import Defaults
from griptape.configs.drivers import DriversConfig
from griptape.drivers.embedding.voyageai import VoyageAiEmbeddingDriver
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.structures import Agent
from griptape.tools import PromptSummaryTool, WebScraperTool

Defaults.drivers_config = DriversConfig(
    prompt_driver=OpenAiChatPromptDriver(model="gpt-4.1"),
    embedding_driver=VoyageAiEmbeddingDriver(),
)

agent = Agent(
    tools=[WebScraperTool(off_prompt=True), PromptSummaryTool(off_prompt=False)],
)

agent.run("based on https://www.griptape.ai/, tell me what Griptape is")
```
```
[02/27/25 20:23:16] INFO  PromptTask 4b8f768651fd4ac39d06cd14238e4936
                          Input: based on https://www.griptape.ai/, tell me
                          what Griptape is
[02/27/25 20:23:17] INFO  Subtask 732b03a0ea584eae96fa88b3939c7caa
                          Actions: [
                            {
                              "tag": "call_1Ojazp5tjlXr2dUReQysFYkD",
                              "name": "WebScraperTool",
                              "path": "get_content",
                              "input": {
                                "values": {
                                  "url": "https://www.griptape.ai/"
                                }
                              }
                            }
                          ]
[02/27/25 20:23:19] INFO  Subtask 732b03a0ea584eae96fa88b3939c7caa
                          Response: You have attempted to use a
                          DummyVectorStoreDriver's upsert_vector method. This
                          likely originated from using a `DriversConfig`
                          without providing a Driver required for this
                          feature.
[02/27/25 20:23:20] INFO  Subtask 48815995dad34e5ca0232348543d4832
                          Thought: I encountered an issue while trying to
                          access the content of the Griptape website. Let me
                          try to summarize the content again.
                          Actions: [
                            {
                              "tag": "call_AD6qAAlDndUZ7upj3riWNppE",
                              "name": "WebScraperTool",
                              "path": "get_content",
                              "input": {
                                "values": {
                                  "url": "https://www.griptape.ai/"
                                }
                              }
                            }
                          ]
                    INFO  Subtask 48815995dad34e5ca0232348543d4832
                          Response: You have attempted to use a
                          DummyVectorStoreDriver's upsert_vector method. This
                          likely originated from using a `DriversConfig`
                          without providing a Driver required for this
                          feature.
[02/27/25 20:23:21] INFO  PromptTask 4b8f768651fd4ac39d06cd14238e4936
                          Output: I am unable to access the content of the
                          Griptape website directly due to a technical issue.
                          However, you can visit the website
                          [Griptape](https://www.griptape.ai/) to learn more
                          about what Griptape is. If you have any specific
                          questions or need information on a particular
                          aspect, feel free to ask!
```