The models endpoint provides a way for you to programmatically list the available models, and retrieve extended metadata such as supported functionality and context window sizing. Read more in the Models guide.
Gets information about a specific Model such as its version number, token limits, parameters and other metadata. Refer to the Gemini models guide for detailed model information.
Endpoint
get
https://generativelanguage.googleapis.com/v1beta/{name=models/*}
Path parameters
name (string)
Required. The resource name of the model.
This name should match a model name returned by the models.list method.
Format: models/{model}
Request body
The request body must be empty.
Example request
Response body
If successful, the response body contains an instance of Model.
name (string)
Required. The resource name of the Model. Refer to Model variants for all allowed values.
Format: models/{model} with a {model} naming convention of:
"{baseModelId}-{version}"
Examples:
models/gemini-1.5-flash-001
baseModelId (string)
Required. The name of the base model; pass this to the generation request.
Examples:
gemini-1.5-flash
version (string)
Required. The version number of the model.
This represents the major version (e.g., 1.0 or 1.5).
displayName (string)
The human-readable name of the model. E.g. "Gemini 1.5 Flash".
The name can be up to 128 characters long and can consist of any UTF-8 characters.
description (string)
A short description of the model.
inputTokenLimit (integer)
Maximum number of input tokens allowed for this model.
outputTokenLimit (integer)
Maximum number of output tokens available for this model.
supportedGenerationMethods[] (string)
The model's supported generation methods.
The corresponding API method names are defined as camel case strings, such as generateMessage and generateContent.
thinking (boolean)
Whether the model supports thinking.
temperature (number)
Controls the randomness of the output.
Values can range over [0.0, maxTemperature], inclusive. A higher value produces more varied responses, while a value closer to 0.0 typically produces less surprising responses. This value specifies the default used by the backend when calling the model.
topP (number)
For Nucleus sampling.
Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP. This value specifies the default used by the backend when calling the model.
topK (integer)
For Top-k sampling.
Top-k sampling considers the set of topK most probable tokens. This value specifies the default used by the backend when calling the model. If empty, it indicates the model doesn't use top-k sampling, and topK isn't allowed as a generation parameter.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-10-30 UTC."],[],[]]