# Prompt

## Overview
Prompt Drivers are used by Gen AI Builder Structures to make API calls to the underlying LLMs. OpenAI Chat is the default Prompt Driver used in all Structures.
You can instantiate drivers and pass them to structures:
```python
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(model="gpt-4.1", temperature=0.3),
    input="You will be provided with a tweet, and your task is to classify its sentiment as positive, neutral, or negative. Tweet: {{ args[0] }}",
    rules=[Rule(value="Output only the sentiment.")],
)

agent.run("I loved the new Batman movie!")
```
```
[02/27/25 20:23:55] INFO PromptTask fe65e8e17e80491c8b49615388413739
    Input: You will be provided with a tweet, and your task is to classify its sentiment as positive, neutral, or negative. Tweet: I loved the new Batman movie!
    INFO PromptTask fe65e8e17e80491c8b49615388413739
    Output: Positive
```
Or use them independently:
```python
from griptape.common import PromptStack
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver

stack = PromptStack()
stack.add_system_message("You will be provided with Python code, and your task is to calculate its time complexity.")
stack.add_user_message(
    """
def foo(n, k):
    accum = 0
    for i in range(n):
        for l in range(k):
            accum += i
    return accum
"""
)

result = OpenAiChatPromptDriver(model="gpt-3.5-turbo-16k", temperature=0).run(stack)

print(result.value)
```
```
The time complexity of this code is O(n * k), where n is the value of the variable `n` and k is the value of the variable `k`.

The outer loop iterates `n` times, and for each iteration of the outer loop, the inner loop iterates `k` times. Therefore, the total number of iterations of the inner loop is n * k.

Inside the inner loop, there is a constant time operation of adding `i` to the `accum` variable. Since this operation is performed n * k times, the overall time complexity is O(n * k).
```
You can pass images to the Driver if the model supports it:
```python
from griptape.artifacts import ListArtifact, TextArtifact
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.loaders import ImageLoader

driver = OpenAiChatPromptDriver(model="gpt-4.1", max_tokens=256)

image_artifact = ImageLoader().load("./tests/resources/mountain.jpg")
text_artifact = TextArtifact("Describe the weather in the image")

driver.run(ListArtifact([text_artifact, image_artifact]))
```
## Structured Output
Some LLMs provide functionality often referred to as "Structured Output". This means instructing the LLM to output data in a particular format, usually JSON. This can be useful for forcing the LLM to output in a parsable format that can be used by downstream systems.
**Warning:** Each Driver may have a different default setting depending on the LLM provider's capabilities.
### Prompt Task

The easiest way to get started with structured output is by using a `PromptTask`'s `output_schema` parameter.
You can change how the output is structured by setting the Driver's `structured_output_strategy` to one of:

- `native`: The Driver will use the LLM's structured output functionality provided by the API.
- `tool`: The Driver will add a special tool, `StructuredOutputTool`, and will try to force the LLM to use the Tool.
- `rule`: The Driver will add a `JsonSchemaRule` to the Task's system prompt. This strategy does not guarantee that the LLM will output JSON and should only be used as a last resort.
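To see what the `rule` strategy relies on, here is a minimal stdlib-only sketch of prompt-plus-validation, independent of Gen AI Builder. The `fake_llm` stub and the `schema` dict are illustrative stand-ins, not framework APIs:

```python
import json


# Hypothetical stand-in for an LLM call; in Gen AI Builder the Prompt Driver does this.
def fake_llm(system_prompt: str, user_prompt: str) -> str:
    return '{"sentiment": "positive"}'


schema = {"sentiment": str}  # illustrative schema: field name -> expected type

# The "rule" strategy amounts to appending a JSON instruction to the system prompt.
system_prompt = "Respond ONLY with JSON matching this schema: " + json.dumps(
    {k: t.__name__ for k, t in schema.items()}
)

raw = fake_llm(system_prompt, "I loved the new Batman movie!")
data = json.loads(raw)  # raises ValueError if the LLM ignored the rule

# Validate field names and types before handing off to downstream systems.
assert set(data) == set(schema) and all(isinstance(data[k], schema[k]) for k in schema)
print(data["sentiment"])
```

Because nothing forces the model to comply, the `json.loads` call can fail, which is exactly why `rule` is documented as a last resort.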
You can specify `output_schema` using either the `pydantic` or `schema` libraries, though `pydantic` is recommended.
```python
from pydantic import BaseModel
from rich.pretty import pprint

from griptape.rules import Rule
from griptape.structures import Pipeline
from griptape.tasks import PromptTask


class Step(BaseModel):
    explanation: str
    output: str


class Output(BaseModel):
    steps: list[Step]
    final_answer: str


pipeline = Pipeline(
    tasks=[
        PromptTask(
            output_schema=Output,
            rules=[
                Rule("You are a helpful math tutor. Guide the user through the solution step by step."),
            ],
        )
    ]
)

output = pipeline.run("How can I solve 8x + 7 = -23").output.value
pprint(output)  # OutputModel
```
```python
import schema
from rich.pretty import pprint

from griptape.rules import Rule
from griptape.structures import Pipeline
from griptape.tasks import PromptTask

pipeline = Pipeline(
    tasks=[
        PromptTask(
            output_schema=schema.Schema(
                {
                    "steps": [schema.Schema({"explanation": str, "output": str})],
                    "final_answer": str,
                }
            ),
            rules=[
                Rule("You are a helpful math tutor. Guide the user through the solution step by step."),
            ],
        )
    ]
)

output = pipeline.run("How can I solve 8x + 7 = -23").output.value
pprint(output)  # dict
pprint(output["steps"])
```
**Info:** `PromptTask`s will automatically validate the LLM's output against the `output_schema` you provide. If the LLM fails to output valid JSON, the Task will automatically re-prompt the LLM to try again. You can configure how many times it will retry by setting the `max_subtasks` parameter on the Task.
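The validate-and-retry behavior described above can be sketched in plain Python. This is an illustration of the concept only, not the framework's actual implementation; `flaky_llm` is a hypothetical stub that returns invalid JSON on its first call:

```python
import json

attempts = []


def flaky_llm(prompt: str) -> str:
    # Hypothetical stub: returns invalid JSON once, then succeeds.
    attempts.append(prompt)
    return "not json" if len(attempts) == 1 else '{"final_answer": "x = -3.75"}'


def run_with_retries(prompt: str, max_subtasks: int = 3) -> dict:
    for _ in range(max_subtasks):
        raw = flaky_llm(prompt)
        try:
            return json.loads(raw)  # success: parsed, structured output
        except json.JSONDecodeError:
            # Re-prompt the model, pointing out the failure.
            prompt = "Your last reply was not valid JSON. Try again.\n" + prompt
    raise RuntimeError(f"No valid JSON after {max_subtasks} attempts")


result = run_with_retries("Solve 8x + 7 = -23 and reply as JSON.")
print(result)  # -> {'final_answer': 'x = -3.75'}
```

Raising `max_subtasks` trades extra LLM calls for a higher chance of eventually getting parseable output.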
## Prompt Drivers
Gen AI Builder offers the following Prompt Drivers for interacting with LLMs.
### OpenAI Chat

The `OpenAiChatPromptDriver` connects to the OpenAI Chat API. This driver uses OpenAI function calling when using Tools.
```python
import os

from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(
        api_key=os.environ["OPENAI_API_KEY"],
        model="gpt-4.1",
        temperature=0.1,
        seed=42,
    ),
    input="You will be provided with a description of a mood, and your task is to generate the CSS color code for a color that matches it. Description: {{ args[0] }}",
)

agent.run("Blue sky at dusk.")
```
````
[02/27/25 20:23:57] INFO PromptTask baa30717060e4d0cb7b5769a42043da3
    Input: You will be provided with a description of a mood, and your task is to generate the CSS color code for a color that matches it. Description: Blue sky at dusk.
[02/27/25 20:23:58] INFO PromptTask baa30717060e4d0cb7b5769a42043da3
    Output: For a mood described as "Blue sky at dusk," a fitting CSS color code might be a soft, muted blue with a hint of purple to capture the transition from day to night. Here's a suggestion:

    ```css
    color: #4A6FA5;
    ```

    This color reflects the calming and serene atmosphere of a blue sky as it begins to darken at dusk.
````
**Info:** `response_format` and `seed` are unique to the OpenAI Chat Prompt Driver and Azure OpenAI Chat Prompt Driver.
OpenAI's reasoning models can also be used, and come with an additional parameter: `reasoning_effort`.
```python
import os

from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(
        api_key=os.environ["OPENAI_API_KEY"],
        model="o3-mini",
        reasoning_effort="medium",
    ),
)

agent.run(
    """Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."""
)
```
```
[03/24/25 21:31:22] INFO PromptTask b11591ea8cda497981df35b7ed7c36bd
    Input: Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.
[03/24/25 21:31:33] INFO PromptTask b11591ea8cda497981df35b7ed7c36bd
    Output: Below is one complete solution. Save the following script (say as transpose.sh), make it executable (chmod +x transpose.sh) and run it with a quoted matrix string argument. For example:

        ./transpose.sh '[1,2],[3,4],[5,6]'

    It will print out:

        [1,3,5],[2,4,6]

    Below is the full bash script:

    ---------------------------------------------------------------
    #!/bin/bash
    #
    # This script accepts one argument: a string representing a matrix,
    # e.g. '[1,2],[3,4],[5,6]'
    # It prints the transpose of the matrix in the same format.
    #
    if [ "$#" -ne 1 ]; then
        echo "Usage: $0 '<matrix>'"
        exit 1
    fi

    input="$1"

    # Split the input into rows. The rows are separated by "],"
    # We insert a newline after each "],", so each row ends with a ']'
    rows=()
    while IFS= read -r line; do
        rows+=("$line")
    done < <(echo "$input" | sed 's/],/]\n/g')

    nrows=${#rows[@]}
    if [ "$nrows" -eq 0 ]; then
        echo "Empty matrix!"
        exit 1
    fi

    # Declare an associative array to hold the matrix entries.
    declare -A matrix
    ncols=0

    # Parse each row: Remove the '[' and ']', then split by comma.
    for i in "${!rows[@]}"; do
        row="${rows[$i]}"
        # Remove leading '[' and trailing ']'
        row="${row#[}"
        row="${row%]}"
        # Split the row into numbers (assuming comma-separated)
        IFS=',' read -ra nums <<< "$row"
        # Set the number of columns using the first row.
        if [ $i -eq 0 ]; then
            ncols=${#nums[@]}
        fi
        # Check that all rows have the same number of columns.
        if [ "${#nums[@]}" -ne "$ncols" ]; then
            echo "Error: inconsistent number of columns in row $((i+1))."
            exit 1
        fi
        # Store each number in our associative array using key "i,j"
        for j in $(seq 0 $((ncols-1))); do
            matrix["$i,$j"]="${nums[$j]}"
        done
    done

    # Build the transposed matrix.
    # The transpose will have ncols rows and nrows columns.
    result=""
    for j in $(seq 0 $((ncols-1))); do
        rowStr="["
        for i in $(seq 0 $((nrows-1))); do
            rowStr+="${matrix["$i,$j"]}"
            if [ $i -lt $((nrows-1)) ]; then
                rowStr+=","
            fi
        done
        rowStr+="]"
        # Append a comma if this is not the last row.
        if [ $j -lt $((ncols-1)) ]; then
            result+="${rowStr},"
        else
            result+="${rowStr}"
        fi
    done

    echo "$result"
    ---------------------------------------------------------------

    Explanation:
    1. The script accepts a single argument, the matrix string.
    2. It uses sed to replace every "]," with "]\n" so that each row appears on its own line.
    3. For each row the script strips the leading '[' and trailing ']' then splits the row by commas.
    4. The matrix elements are stored in an associative array indexed by "row,column".
    5. Finally, the script builds the transpose by swapping row and column indices and prints the result in the same format.

    This solution assumes that the input is well-formed and that the matrix is rectangular.
```
### OpenAI Compatible

Many services, such as LMStudio and OhMyGPT, provide OpenAI-compatible APIs. You can use the `OpenAiChatPromptDriver` to interact with these services.
Simply set the `base_url` to the service's API endpoint and the `model` to the model name. If the service requires an API key, you can set it in the `api_key` field.
For example, here is how we can connect to LMStudio's API:
```python
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(
        base_url="http://127.0.0.1:1234/v1",
        model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
        stream=True,
    ),
    rules=[Rule(value="You are a helpful coding assistant.")],
)

agent.run("How do I init and update a git submodule?")
```
Or Groq's API:
```python
import os

from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(
        api_key=os.environ["GROQ_API_KEY"],
        base_url="https://api.groq.com/openai/v1",
        model="llama-3.3-70b-versatile",
    ),
)

agent.run("Hello")
```
```
[02/27/25 20:26:39] INFO PromptTask b1a0f76da1cf42ca81e85fc921f07f80
    Input: Hello
    INFO PromptTask b1a0f76da1cf42ca81e85fc921f07f80
    Output: Hello. How can I help you today?
```
**Tip:** Make sure to include `v1` at the end of the `base_url` to match the OpenAI API endpoint.
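If you construct `base_url` values dynamically, a tiny helper like the following can guard against a missing `/v1` suffix. This is an illustrative utility, not part of Gen AI Builder:

```python
def normalize_base_url(base_url: str) -> str:
    """Ensure an OpenAI-compatible base URL ends with /v1."""
    base_url = base_url.rstrip("/")
    return base_url if base_url.endswith("/v1") else base_url + "/v1"


print(normalize_base_url("http://127.0.0.1:1234"))  # -> http://127.0.0.1:1234/v1
print(normalize_base_url("https://api.groq.com/openai/v1"))  # unchanged
```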
### Azure OpenAI Chat

The `AzureOpenAiChatPromptDriver` connects to the Azure OpenAI Chat Completions API. This driver uses Azure OpenAI function calling when using Tools.
```python
import os

from griptape.drivers.prompt.openai import AzureOpenAiChatPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent

agent = Agent(
    prompt_driver=AzureOpenAiChatPromptDriver(
        api_key=os.environ["AZURE_OPENAI_API_KEY_1"],
        model="gpt-4.1",
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT_1"],
    ),
    rules=[
        Rule(
            value="You will be provided with text, and your task is to translate it into emojis. "
            "Do not use any regular text. Do your best with emojis only."
        )
    ],
)

agent.run("Artificial intelligence is a technology with great promise.")
```
```
[02/27/25 20:24:47] INFO PromptTask 1274ee43e52040db9cc3e0e821596588
    Input: Artificial intelligence is a technology with great promise.
    INFO PromptTask 1274ee43e52040db9cc3e0e821596588
    Output: 🤖💡👍
```
### Gen AI Builder

The `GriptapeCloudPromptDriver` connects to the Gen AI Builder chat messages API.
```python
import os

from griptape.drivers.prompt.griptape_cloud import GriptapeCloudPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent

agent = Agent(
    prompt_driver=GriptapeCloudPromptDriver(
        api_key=os.environ["GT_CLOUD_API_KEY"],
    ),
    rules=[
        Rule(
            "You will be provided with a product description and seed words, and your task is to generate product names.",
        ),
    ],
)

agent.run("Product description: A home milkshake maker. Seed words: fast, healthy, compact.")
```
```
[02/27/25 20:24:13] INFO PromptTask 094f8b4fcadf4502816ca5938029d27c
    Input: Product description: A home milkshake maker. Seed words: fast, healthy, compact.
[02/27/25 20:24:17] INFO PromptTask 094f8b4fcadf4502816ca5938029d27c
    Output:
    1. FastBlend Pro
    2. HealthyWhirl
    3. CompactShake
    4. SwiftSip Maker
    5. VitaBlend Express
    6. QuickWhip
    7. NutriShake Compact
    8. SmoothieSwift
    9. BlendEase
    10. ShakeSmart
```
### Cohere

The `CoherePromptDriver` connects to the Cohere Chat API. This driver uses Cohere tool use when using Tools.

**Info:** This driver requires the `drivers-prompt-cohere` extra.
```python
import os

from griptape.drivers.prompt.cohere import CoherePromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=CoherePromptDriver(
        model="command-r",
        api_key=os.environ["COHERE_API_KEY"],
    )
)

agent.run('What is the sentiment of this review? Review: "I really enjoyed this movie!"')
```
```
[02/27/25 20:25:18] INFO PromptTask 2a443e4ba2f04b7c8ba46cec3216bf3c
    Input: What is the sentiment of this review? Review: "I really enjoyed this movie!"
    INFO PromptTask 2a443e4ba2f04b7c8ba46cec3216bf3c
    Output: The sentiment of the review is positive. The customer appears to have enjoyed the movie and likely would recommend it to others.
```
### Anthropic

**Info:** This driver requires the `drivers-prompt-anthropic` extra.

The `AnthropicPromptDriver` connects to the Anthropic Messages API. This driver uses Anthropic tool use when using Tools.
```python
import os

from griptape.drivers.prompt.anthropic import AnthropicPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=AnthropicPromptDriver(
        model="claude-3-opus-20240229",
        api_key=os.environ["ANTHROPIC_API_KEY"],
    )
)

agent.run("Where is the best place to see cherry blossoms in Japan?")
```
```
[02/27/25 20:23:42] INFO PromptTask 8913166e4e2d4f2b8ed7c47c8812f77d
    Input: Where is the best place to see cherry blossoms in Japan?
[02/27/25 20:23:53] INFO PromptTask 8913166e4e2d4f2b8ed7c47c8812f77d
    Output: There are many great places to see cherry blossoms (sakura) in Japan, but some of the most popular and beautiful locations include:
    1. Ueno Park, Tokyo - One of the most popular hanami (cherry blossom viewing) spots in Japan, with over 1,000 cherry trees.
    2. Shinjuku Gyoen, Tokyo - A large park with more than 1,000 cherry trees of various varieties, known for its spacious lawns and picturesque scenery.
    3. Mount Yoshino, Nara Prefecture - A mountain famous for its 30,000 cherry trees, creating a stunning pink landscape during the blooming season.
    4. Philosopher's Path, Kyoto - A peaceful stone path lined with hundreds of cherry trees, following a canal in Kyoto's Higashiyama district.
    5. Himeji Castle, Himeji - A UNESCO World Heritage site featuring a majestic castle surrounded by numerous cherry trees.
    6. Hirosaki Castle, Aomori Prefecture - Known for its unique "cherry blossom tunnel" formed by the trees along the castle moats.
    7. Fuji Five Lakes, Yamanashi Prefecture - Offers stunning views of cherry blossoms with Mount Fuji in the background.
    The best time to see cherry blossoms varies depending on the location and weather conditions, but generally falls between late March and early April.
```
### Google

**Info:** This driver requires the `drivers-prompt-google` extra.

The `GooglePromptDriver` connects to the Google Generative AI API. This driver uses Gemini function calling when using Tools.
```python
import os

from griptape.drivers.prompt.google import GooglePromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=GooglePromptDriver(
        model="gemini-2.0-flash",
        api_key=os.environ["GOOGLE_API_KEY"],
    )
)

agent.run("Briefly explain how a computer works to a young child.")
```
```
[02/27/25 20:25:43] INFO PromptTask 409830097880426f9b46a8ba3b6a6d84
    Input: Briefly explain how a computer works to a young child.
[02/27/25 20:25:45] INFO PromptTask 409830097880426f9b46a8ba3b6a6d84
    Output: Imagine a computer is like a really smart toy box!
    * **You tell it what to do:** You press buttons or touch the screen, like telling the toy box which toy you want.
    * **It has a brain:** Inside is a tiny brain that figures out what you asked for.
    * **It remembers things:** It has a special place to keep all its toys (information) so it doesn't forget.
    * **It shows you the toy:** It shows you the picture or game on the screen, like giving you the toy you asked for!
    So, you ask, it thinks, it remembers, and then it shows you the answer!
```
### Amazon Bedrock

**Info:** This driver requires the `drivers-prompt-amazon-bedrock` extra.

The `AmazonBedrockPromptDriver` uses Amazon Bedrock's Converse API. This driver uses Bedrock tool use when using Tools.
All models supported by the Converse API are available for use with this driver.
```python
from griptape.drivers.prompt.amazon_bedrock import AmazonBedrockPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent

agent = Agent(
    prompt_driver=AmazonBedrockPromptDriver(
        model="anthropic.claude-3-sonnet-20240229-v1:0",
    ),
    rules=[
        Rule(
            value="You are a customer service agent that is classifying emails by type. I want you to give your answer and then explain it."
        )
    ],
)

agent.run(
    """How would you categorize this email?
<email>
Can I use my Mixmaster 4000 to mix paint, or is it only meant for mixing food?
</email>

Categories are:
(A) Pre-sale question
(B) Broken or defective item
(C) Billing question
(D) Other (please explain)"""
)
```
```
[02/27/25 20:26:53] INFO PromptTask 4b5d90ba5a9b4c7bbb1006828e340da3
    Input: How would you categorize this email? <email> Can I use my Mixmaster 4000 to mix paint, or is it only meant for mixing food? </email> Categories are: (A) Pre-sale question (B) Broken or defective item (C) Billing question (D) Other (please explain)
[02/27/25 20:26:58] INFO PromptTask 4b5d90ba5a9b4c7bbb1006828e340da3
    Output: I would categorize this email as (A) Pre-sale question.
    Explanation: The email is asking about the intended use of the Mixmaster 4000 product, specifically whether it can be used for mixing paint or if it is only meant for mixing food. This is a question someone might ask before deciding to purchase the product, hence it falls under the "Pre-sale question" category.
```
### Ollama

**Info:** This driver requires the `drivers-prompt-ollama` extra.

The `OllamaPromptDriver` connects to the Ollama Chat Completion API. This driver uses Ollama tool calling when using Tools.
```python
from griptape.drivers.prompt.ollama import OllamaPromptDriver
from griptape.structures import Agent
from griptape.tools import CalculatorTool

agent = Agent(
    prompt_driver=OllamaPromptDriver(
        model="llama3.1",
    ),
    tools=[CalculatorTool()],
)

agent.run("What is (192 + 12) ^ 4")
```
### Hugging Face Hub

**Info:** This driver requires the `drivers-prompt-huggingface` extra.

The `HuggingFaceHubPromptDriver` connects to the Hugging Face Hub API.
**Warning:** Not all models featured on the Hugging Face Hub are supported by this driver. Models that are not supported by Hugging Face serverless inference will not work with this driver. Due to the limitations of Hugging Face serverless inference, only models smaller than 10GB are supported.
```python
import os

from griptape.drivers.prompt.huggingface_hub import HuggingFaceHubPromptDriver
from griptape.rules import Rule, Ruleset
from griptape.structures import Agent

agent = Agent(
    prompt_driver=HuggingFaceHubPromptDriver(
        model="HuggingFaceH4/zephyr-7b-beta",
        api_token=os.environ["HUGGINGFACE_HUB_ACCESS_TOKEN"],
    ),
    rulesets=[
        Ruleset(
            name="Girafatron",
            rules=[
                Rule(
                    value="You are Girafatron, a giraffe-obsessed robot. You are talking to a human. "
                    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. "
                    "Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe."
                )
            ],
        )
    ],
)

agent.run("Hello Girafatron, what is your favorite animal?")
```
```
[02/27/25 20:26:27] INFO PromptTask 07393146bee34957b2edb75ceaa39cce
    Input: Hello Girafatron, what is your favorite animal?
[02/27/25 20:26:31] INFO PromptTask 07393146bee34957b2edb75ceaa39cce
    Output: se mis mis mis se Mira es mis se es es la mis mis es es misión es mis es misión es mis es mis es es es se es es es se es esión se es es mis es … es mis es misión es es es mis es … es es es es es es es es es es es es es es es es es es es es es es es es … es es es es es es es es es es es es es es es es es mis es … es es mis es … es es es es es … es … es es es es es es es es es … es es es es … es … es … es es es … es es es es es … es es es es es es es es es es es es es … es es es es es es es es es es es es es es es es es es es es es … es es … es es es … es … es … es es … es es … es … es … es … es es es es … es es es es es es es es es es es … es es es es es … es es es … es … es … es es es … es … es es es es es … es … es … es
```
#### Text Generation Inference

The `HuggingFaceHubPromptDriver` also supports Text Generation Inference (TGI) for running models locally. To use Text Generation Inference, just set `model` to a TGI endpoint.
```python
import os

from griptape.drivers.prompt.huggingface_hub import HuggingFaceHubPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=HuggingFaceHubPromptDriver(
        model="http://127.0.0.1:8080",
        api_token=os.environ["HUGGINGFACE_HUB_ACCESS_TOKEN"],
    ),
)

agent.run("Write the code for a snake game.")
```
### Hugging Face Pipeline

**Info:** This driver requires the `drivers-prompt-huggingface-pipeline` extra.

The `HuggingFacePipelinePromptDriver` uses Hugging Face Pipelines for local inference.

**Warning:** Running a model locally can be a computationally expensive process.
```python
from griptape.drivers.prompt.huggingface_pipeline import HuggingFacePipelinePromptDriver
from griptape.rules import Rule, Ruleset
from griptape.structures import Agent

agent = Agent(
    prompt_driver=HuggingFacePipelinePromptDriver(
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    ),
    rulesets=[
        Ruleset(
            name="Pirate",
            rules=[Rule(value="You are a pirate chatbot who always responds in pirate speak!")],
        )
    ],
)

agent.run("How many helicopters can a human eat in one sitting?")
```
```
[02/27/25 20:25:52] INFO PromptTask 37f53710ed1b42bb89800c0ee24d1b58
    Input: How many helicopters can a human eat in one sitting?
[02/27/25 20:26:22] INFO PromptTask 37f53710ed1b42bb89800c0ee24d1b58
    Output: There is no specific number of helicopters that a human can eat in one sitting. The number of helicopters that a human can eat depends on their size, diet, and overall health. However, some people have been known to consume up to 100-200 helicopters in a single sitting, but this is considered a rare occurrence.
```
### Amazon SageMaker Jumpstart

**Info:** This driver requires the `drivers-prompt-amazon-sagemaker` extra.

The `AmazonSageMakerJumpstartPromptDriver` uses Amazon SageMaker Jumpstart for inference on AWS. Amazon SageMaker Jumpstart provides a wide range of models with varying capabilities.

This driver is primarily intended for chat-optimized models that have a Hugging Face Chat Template available. If your model does not fit this use case, we suggest subclassing `AmazonSageMakerJumpstartPromptDriver` and overriding the `_to_model_input` and `_to_model_params` methods.
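The override pattern looks roughly like the following. This is a shape-only sketch using a hypothetical stand-in base class; the real `AmazonSageMakerJumpstartPromptDriver` lives in griptape, and its exact method signatures and the structure of the prompt stack it passes may differ:

```python
class StubSageMakerDriver:
    """Hypothetical stand-in for AmazonSageMakerJumpstartPromptDriver."""

    def _to_model_input(self, prompt_stack):
        raise NotImplementedError

    def _to_model_params(self, prompt_stack):
        raise NotImplementedError


class RawCompletionDriver(StubSageMakerDriver):
    """Illustrative subclass for a model without a chat template."""

    def _to_model_input(self, prompt_stack):
        # Flatten (role, content) pairs into one plain-text prompt.
        return "\n".join(f"{role}: {content}" for role, content in prompt_stack)

    def _to_model_params(self, prompt_stack):
        # Inference parameters the endpoint should receive.
        return {"max_new_tokens": 250, "temperature": 0.7}


driver = RawCompletionDriver()
print(driver._to_model_input([("system", "Be brief."), ("user", "Hi")]))
```

The point is simply that the input-formatting and parameter-building steps are isolated in two methods, so adapting the driver to a non-chat model means overriding only those two.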
```python
import os

from griptape.drivers.prompt.amazon_sagemaker_jumpstart import AmazonSageMakerJumpstartPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=AmazonSageMakerJumpstartPromptDriver(
        endpoint=os.environ["SAGEMAKER_LLAMA_3_INSTRUCT_ENDPOINT_NAME"],
        model="meta-llama/Meta-Llama-3-8B-Instruct",
    )
)

agent.run("What is a good lasagna recipe?")
```
### Grok

The `GrokPromptDriver` uses Grok's chat completion endpoint.
```python
import os

from griptape.drivers.prompt.grok import GrokPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent

agent = Agent(
    prompt_driver=GrokPromptDriver(model="grok-2-latest", api_key=os.environ["GROK_API_KEY"]),
    rules=[Rule("You are Grok, a chatbot inspired by the Hitchhiker's Guide to the Galaxy.")],
)

agent.run("What is the meaning of life, the universe, and everything?")
```
```
[02/27/25 20:25:12] INFO PromptTask 3403329d0d734315b27d958c48098f0d
    Input: What is the meaning of life, the universe, and everything?
[02/27/25 20:25:14] INFO PromptTask 3403329d0d734315b27d958c48098f0d
    Output: The answer to the ultimate question of life, the universe, and everything is 42. It's a bit of a mysterious number, but hey, who said the universe had to make sense? Just go with it and enjoy the ride!
```
### Perplexity

The `PerplexityPromptDriver` uses Perplexity Sonar's chat completion endpoint.

While you can use this Driver directly, we recommend using it through its accompanying Web Search Driver.
```python
import os

from griptape.drivers.prompt.perplexity import PerplexityPromptDriver
from griptape.rules import Rule
from griptape.structures import Agent
from griptape.tasks import PromptTask

agent = Agent(
    tasks=[
        PromptTask(
            prompt_driver=PerplexityPromptDriver(model="sonar-pro", api_key=os.environ["PERPLEXITY_API_KEY"]),
            rules=[
                Rule("Be precise and concise"),
            ],
        )
    ],
)

agent.run("How many stars are there in our galaxy?")
```
```
[03/12/25 20:28:09] INFO PromptTask 6debe5ed0178486b8414080ead2d7d62
    Input: How many stars are there in our galaxy?
[03/12/25 20:28:17] INFO PromptTask 6debe5ed0178486b8414080ead2d7d62
    Output: Based on current scientific estimates, there are approximately 100 billion to 400 billion stars in the Milky Way galaxy[1][2][3][4]. Here are some key points about the number of stars in our galaxy:
    - The exact number is difficult to determine precisely, as we can't simply count all the stars individually[4].
    - Astronomers estimate the number by calculating the mass of the galaxy and the percentage of that mass made up of stars[4].
    - The most common estimates range from 100 billion on the low end to 400 billion on the high end[4][6].
    - Some estimates go even higher, up to 1 trillion stars[3].
    - The uncertainty comes from factors like difficulty detecting very faint, low-mass stars and determining the average mass of stars in the galaxy[4].
    - For comparison, the neighboring Andromeda galaxy is estimated to contain about 1 trillion stars[6].
    - The Milky Way is about 100,000 light-years in diameter and 1,000 light-years thick[1][5].
    - Our solar system is located about 25,000-27,000 light-years from the center of the galaxy[5][6].
    So in summary, while we don't have an exact count, astronomers estimate there are hundreds of billions of stars in the Milky Way, with 100-400 billion being the most commonly cited range. The true number could potentially be even higher.
```