For a detailed guide on counting tokens using the Gemini API, including how images, audio and video are counted, see the Token counting guide and accompanying Cookbook recipe.
Method: models.countTokens
Runs a model's tokenizer on input Content and returns the token count. Refer to the tokens guide to learn more about tokens.
Endpoint
POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:countTokens
Path parameters
model
string
Required. The model's resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the models.list method.
Format: models/{model}
Request body
The request body contains data with the following structure:
contents[]
object (Content)
Optional. The input given to the model as a prompt. This field is ignored when generateContentRequest is set.
generateContentRequest
object (GenerateContentRequest)
Optional. The overall input given to the Model. This includes the prompt as well as other model steering information such as system instructions and/or function declarations for function calling. contents and generateContentRequest are mutually exclusive: you can send either model + contents or a generateContentRequest, but never both.
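As a sketch, the two mutually exclusive request body shapes might look like the following in Python. The prompt text, model name, and system instruction are illustrative assumptions, not values from this reference:

```python
# Shape 1: count tokens for plain contents (prompt text is illustrative).
contents_body = {
    "contents": [
        {"parts": [{"text": "The quick brown fox jumps over the lazy dog."}]}
    ]
}

# Shape 2: count tokens for a full GenerateContentRequest, including
# model steering information such as a system instruction.
# The model name here is an assumption for the example.
generate_content_body = {
    "generateContentRequest": {
        "model": "models/gemini-2.0-flash",
        "contents": [
            {"parts": [{"text": "The quick brown fox jumps over the lazy dog."}]}
        ],
        "systemInstruction": {"parts": [{"text": "You are a helpful assistant."}]},
    }
}

# The two shapes are mutually exclusive: a request carries one or the other,
# never both top-level keys at once.
assert not (set(contents_body) & set(generate_content_body))
```

A request body would contain exactly one of these two shapes.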
Example request
The original page provides runnable example requests for text, chat, inline media, video, cached content, system instructions, and tools, in several languages (Python, Node.js, Go, Shell, Java).
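As a minimal sketch of calling the endpoint, the following Python builds (but does not send) a countTokens POST request against the URL format above. The model name, prompt text, and API-key placeholder are assumptions for illustration; sending the request requires a valid key and network access:

```python
import json
import urllib.request

BASE_URL = "https://generativelanguage.googleapis.com/v1beta"


def build_count_tokens_request(model: str, text: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for models.countTokens.

    `model` must use the models/{model} resource-name format.
    """
    url = f"{BASE_URL}/{model}:countTokens"
    body = {"contents": [{"parts": [{"text": text}]}]}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-goog-api-key": api_key},
        method="POST",
    )


# Model name and prompt are illustrative; replace the key placeholder to
# actually send with urllib.request.urlopen(req).
req = build_count_tokens_request(
    "models/gemini-2.0-flash", "The quick brown fox.", "YOUR_API_KEY"
)
print(req.full_url)
```

Note the `:countTokens` custom-method suffix appended directly to the model resource name, matching the endpoint's `{model=models/*}:countTokens` path template.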
Response body
A response from models.countTokens. It returns the model's tokenCount for the prompt.
If successful, the response body contains data with the following structure:
totalTokens
integer
The number of tokens that the Model tokenizes the prompt into. Always non-negative.
cachedContentTokenCount
integer
Number of tokens in the cached part of the prompt (the cached content).
promptTokensDetails[]
object (ModalityTokenCount)
Output only. List of modalities that were processed in the request input.
cacheTokensDetails[]
object (ModalityTokenCount)
Output only. List of modalities that were processed in the cached content.
JSON representation

{
  "totalTokens": integer,
  "cachedContentTokenCount": integer,
  "promptTokensDetails": [
    {
      object (ModalityTokenCount)
    }
  ],
  "cacheTokensDetails": [
    {
      object (ModalityTokenCount)
    }
  ]
}
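A minimal sketch of reading the response fields in Python, using a hand-written sample payload (the token counts below are invented for illustration, not real API output):

```python
import json

# Illustrative response payload; values are made up for the example.
sample = """
{
  "totalTokens": 10,
  "cachedContentTokenCount": 0,
  "promptTokensDetails": [
    {"modality": "TEXT", "tokenCount": 10}
  ]
}
"""

resp = json.loads(sample)
print(resp["totalTokens"])                     # total tokens in the prompt
print(resp.get("cachedContentTokenCount", 0))  # tokens served from the cache
for detail in resp.get("promptTokensDetails", []):
    # Each entry is a ModalityTokenCount: a modality plus its token count.
    print(detail["modality"], detail["tokenCount"])
```

Using `.get()` with a default is a reasonable precaution, since output-only fields such as cachedContentTokenCount may be absent when no cached content was used.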