dummy_tokenizer
Adapted from the Griptape AI Framework documentation.
Bases: `BaseTokenizer`
Source Code in griptape/tokenizers/dummy_tokenizer.py
```python
@define
class DummyTokenizer(BaseTokenizer):
    model: Optional[str] = field(default=None, kw_only=True)
    _max_input_tokens: int = field(init=False, default=0, kw_only=True, alias="max_input_tokens")
    _max_output_tokens: int = field(init=False, default=0, kw_only=True, alias="max_output_tokens")

    def count_tokens(self, text: str) -> int:
        raise DummyError(__class__.__name__, "count_tokens")
```
`_max_input_tokens = field(init=False, default=0, kw_only=True, alias='max_input_tokens')`
class-attribute instance-attribute

`_max_output_tokens = field(init=False, default=0, kw_only=True, alias='max_output_tokens')`
class-attribute instance-attribute

`model = field(default=None, kw_only=True)`
class-attribute instance-attribute
count_tokens(text)
Source Code in griptape/tokenizers/dummy_tokenizer.py
```python
def count_tokens(self, text: str) -> int:
    raise DummyError(__class__.__name__, "count_tokens")
```
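The dummy pattern above can be illustrated without installing Griptape. The following standalone sketch mimics `DummyTokenizer`'s behavior; `DummyError` here is a local stand-in for Griptape's exception class, and the error message wording is illustrative, not the library's actual text:

```python
class DummyError(Exception):
    """Stand-in for Griptape's DummyError (hypothetical message wording)."""

    def __init__(self, class_name: str, method_name: str) -> None:
        super().__init__(
            f"{class_name} has no implementation for '{method_name}'; "
            "replace the dummy component with a real one."
        )


class DummyTokenizer:
    """Placeholder tokenizer: every operation raises DummyError."""

    def __init__(self) -> None:
        self.model = None
        self.max_input_tokens = 0
        self.max_output_tokens = 0

    def count_tokens(self, text: str) -> int:
        # A dummy never counts tokens; it only reports that it was
        # used where a real tokenizer was expected.
        raise DummyError(self.__class__.__name__, "count_tokens")


tokenizer = DummyTokenizer()
try:
    tokenizer.count_tokens("hello world")
except DummyError as e:
    print(e)
```

Because every method raises, a `DummyTokenizer` reaching your code path is a signal that a real tokenizer was never configured, which fails loudly rather than silently returning a wrong count.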