This repository was archived by the owner on Dec 16, 2025. It is now read-only.

Prompt caching does not work for models/gemini-1.5-pro-002 #661

@nharada1

Description of the bug:

Using 'models/gemini-1.5-pro-002' as the model for prompt caching fails when creating the cache. To reproduce, run the example code at https://ai.google.dev/gemini-api/docs/caching?lang=python, but replace the model name with pro-002.
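
For reference, a minimal sketch of the cache-create request involved, using only the standard library (no API call is made; the field names follow the public context-caching docs, and the helper name and document placeholder are illustrative, not from the SDK):

```python
# Sketch of the JSON body sent to POST /v1beta/cachedContents when creating a
# context cache. Assumption: field names ("model", "contents", "ttl") match the
# public caching documentation; build_cache_request is a hypothetical helper.
import json


def build_cache_request(model: str, document: str, ttl_seconds: int = 300) -> str:
    """Build the JSON body for a cachedContents create call."""
    body = {
        # Per the docs, the model must include a version postfix (e.g. -001).
        "model": model,
        "contents": [
            {"role": "user", "parts": [{"text": document}]},
        ],
        "ttl": f"{ttl_seconds}s",
    }
    return json.dumps(body)


# Swapping "-001" for "-002" in the model name is the only change needed to
# reproduce the reported 400 error.
payload = build_cache_request("models/gemini-1.5-pro-002", "<large document>")
```

The same request with `models/gemini-1.5-pro-001` reportedly succeeds, which points at server-side model allow-listing rather than a malformed request.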

Actual vs expected behavior:

Expected: Caching writes and reads successfully

Actual: On write, I get the error "BadRequest: 400 POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-002:generateContent?%24alt=json%3Benum-encoding%3Dint: Request contains an invalid argument."

Any other information you'd like to share?

The context caching doc claims: "Note: Context caching is only available for stable models with fixed versions (for example, gemini-1.5-pro-001). You must include the version postfix (for example, the -001 in gemini-1.5-pro-001)."

So I'd expect pro-002, being a stable version with a fixed postfix, to work.
