base_conversation_memory
BaseConversationMemory
Bases: SerializableMixin, ABC
Source code in griptape/memory/structure/base_conversation_memory.py
Attributes:

- autoload = field(default=True, kw_only=True)
- autoprune = field(default=True, kw_only=True)
- conversation_memory_driver = field(default=Factory(lambda: Defaults.drivers_config.conversation_memory_driver), kw_only=True)
- max_runs = field(default=None, kw_only=True, metadata={'serializable': True})
- meta = field(factory=dict, kw_only=True, metadata={'serializable': True})
- runs = field(factory=list, kw_only=True, metadata={'serializable': True})
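A minimal configuration sketch of these attributes, assuming the concrete ConversationMemory subclass and LocalConversationMemoryDriver that griptape ships (the abstract base cannot be instantiated directly; import paths, driver arguments, and the specific values below are illustrative and may differ between versions):

```python
# Hedged sketch: imports and keyword values are assumptions for illustration.
from griptape.drivers import LocalConversationMemoryDriver
from griptape.memory.structure import ConversationMemory

memory = ConversationMemory(
    autoload=True,    # load any previously persisted runs from the driver
    autoprune=True,   # drop the oldest runs when the token limit would be exceeded
    max_runs=5,       # keep at most the five most recent runs (assumed value)
    conversation_memory_driver=LocalConversationMemoryDriver(),
    meta={"session_id": "example-session"},  # hypothetical metadata
)
```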
__attrs_post_init__()
add_run(run)
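A hedged usage sketch for add_run: Run is assumed to come from griptape.memory.structure and to wrap input/output artifacts (older releases may accept plain strings instead):

```python
# Hedged sketch: Run's field names and artifact types are assumptions.
from griptape.artifacts import TextArtifact
from griptape.memory.structure import ConversationMemory, Run

memory = ConversationMemory()

# Record one user/assistant exchange as a Run.
memory.add_run(
    Run(
        input=TextArtifact("What is the capital of France?"),
        output=TextArtifact("The capital of France is Paris."),
    )
)
```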
add_to_prompt_stack(prompt_driver, prompt_stack, index=None)
Add the Conversation Memory runs to the Prompt Stack by modifying the messages in place.
If autoprune is enabled, this will fit as many Conversation Memory runs into the Prompt Stack as possible without exceeding the token limit.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prompt_driver | BasePromptDriver | The Prompt Driver to use for token counting. | required |
| prompt_stack | PromptStack | The Prompt Stack to add the Conversation Memory to. | required |
| index | Optional[int] | Optional index to insert the Conversation Memory runs at. Defaults to appending to the end of the Prompt Stack. | None |
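A hedged sketch of calling add_to_prompt_stack. The PromptStack import path and its add_user_message method, along with OpenAiChatPromptDriver and the model name, are assumptions that may differ between griptape versions:

```python
# Hedged sketch: import paths, driver, and model name are assumptions.
from griptape.common import PromptStack
from griptape.drivers import OpenAiChatPromptDriver
from griptape.memory.structure import ConversationMemory

memory = ConversationMemory()
prompt_driver = OpenAiChatPromptDriver(model="gpt-4o")

prompt_stack = PromptStack()
prompt_stack.add_user_message("Continue our earlier conversation.")

# Insert the stored runs at the start of the stack; omit index to append
# them to the end instead (the default behavior).
memory.add_to_prompt_stack(prompt_driver, prompt_stack, index=0)
```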