PydanticAI provides access to messages exchanged during an agent run. These messages can be used both to continue a coherent conversation, and to understand how an agent performed.
## Accessing Messages from Results
After running an agent, you can access the messages exchanged during that run from the result object.
Note: The final result message will NOT be added to result messages if you use `.stream_text(delta=True)`, since in that case the result content is never built up as one string.
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result = agent.run_sync('Tell me a joke.')
print(result.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

# all messages from the run
print(result.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content='Be a helpful assistant.',
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='Tell me a joke.',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""
```
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')


async def main():
    async with agent.run_stream('Tell me a joke.') as result:
        # incomplete messages before the stream finishes
        print(result.all_messages())
        """
        [
            ModelRequest(
                parts=[
                    SystemPromptPart(
                        content='Be a helpful assistant.',
                        dynamic_ref=None,
                        part_kind='system-prompt',
                    ),
                    UserPromptPart(
                        content='Tell me a joke.',
                        timestamp=datetime.datetime(...),
                        part_kind='user-prompt',
                    ),
                ],
                kind='request',
            )
        ]
        """
        async for text in result.stream_text():
            print(text)
            #> Did you hear
            #> Did you hear about the toothpaste
            #> Did you hear about the toothpaste scandal? They called
            #> Did you hear about the toothpaste scandal? They called it Colgate.

        # complete messages once the stream finishes
        print(result.all_messages())
        """
        [
            ModelRequest(
                parts=[
                    SystemPromptPart(
                        content='Be a helpful assistant.',
                        dynamic_ref=None,
                        part_kind='system-prompt',
                    ),
                    UserPromptPart(
                        content='Tell me a joke.',
                        timestamp=datetime.datetime(...),
                        part_kind='user-prompt',
                    ),
                ],
                kind='request',
            ),
            ModelResponse(
                parts=[
                    TextPart(
                        content='Did you hear about the toothpaste scandal? They called it Colgate.',
                        part_kind='text',
                    )
                ],
                model_name='gpt-4o',
                timestamp=datetime.datetime(...),
                kind='response',
            ),
        ]
        """
```
(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`.)
## Using Messages as Input for Further Agent Runs
The primary use of message histories in PydanticAI is to maintain context across multiple agent runs.
If `message_history` is set and not empty, a new system prompt is not generated — we assume the existing message history includes a system prompt.
**Reusing messages in a conversation:**
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content='Be a helpful assistant.',
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='Tell me a joke.',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
    ModelRequest(
        parts=[
            UserPromptPart(
                content='Explain?',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            )
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""
```
(This example is complete, it can be run "as is")
## Storing and loading messages (to JSON)
While maintaining conversation state in memory is enough for many applications, you may often want to store the message history of an agent run on disk or in a database. This might be for evals, for sharing data between Python and JavaScript/TypeScript, or any number of other use cases.
The intended way to do this is with a `TypeAdapter`.
```python
from pydantic_core import to_jsonable_python

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessagesTypeAdapter

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
history_step_1 = result1.all_messages()

# round-trip the history through plain Python objects
as_python_objects = to_jsonable_python(history_step_1)
same_history_as_step_1 = ModelMessagesTypeAdapter.validate_python(as_python_objects)

result2 = agent.run_sync(
    'Tell me a different joke.', message_history=same_history_as_step_1
)
```
(This example is complete, it can be run "as is")
## Other ways of using messages
Since messages are defined by simple dataclasses, you can manually create and manipulate them, e.g. for testing.
The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.
In the example below, we reuse the messages from the first agent run, which uses the `openai:gpt-4o` model, in a second agent run using the `google-gla:gemini-1.5-pro` model.
**Reusing messages with a different model:**
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')

result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync(
    'Explain?',
    model='google-gla:gemini-1.5-pro',
    message_history=result1.new_messages(),
)
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
[
    ModelRequest(
        parts=[
            SystemPromptPart(
                content='Be a helpful assistant.',
                dynamic_ref=None,
                part_kind='system-prompt',
            ),
            UserPromptPart(
                content='Tell me a joke.',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            ),
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='Did you hear about the toothpaste scandal? They called it Colgate.',
                part_kind='text',
            )
        ],
        model_name='gpt-4o',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
    ModelRequest(
        parts=[
            UserPromptPart(
                content='Explain?',
                timestamp=datetime.datetime(...),
                part_kind='user-prompt',
            )
        ],
        kind='request',
    ),
    ModelResponse(
        parts=[
            TextPart(
                content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
                part_kind='text',
            )
        ],
        model_name='gemini-1.5-pro',
        timestamp=datetime.datetime(...),
        kind='response',
    ),
]
"""
```
## Examples
For a more complete example of using messages in conversations, see the chat app example.