Commit 183daa6

mlFanatic and Erick Friis authored
google-genai[patch]: on_llm_new_token fix (langchain-ai#16924)
### This pull request makes the following changes:

* Fixed issue langchain-ai#16913
* Fixed the google-genai `chat_models.py` code to make sure that the `on_llm_new_token` callback is called before the token is yielded

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
1 parent 10c10f2 commit 183daa6
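
For context on what the fix guarantees, here is a minimal, self-contained sketch of the ordering (illustrative only, not the library code; `stream_tokens` and its arguments are invented for this example): the new-token callback must fire before the chunk is yielded, so observers see every token even if the consumer stops iterating early. The diff below applies this same ordering inside `_stream` and `_astream`.

```python
from typing import Callable, Iterator, Optional

def stream_tokens(
    tokens: Iterator[str],
    on_new_token: Optional[Callable[[str], None]] = None,
) -> Iterator[str]:
    """Yield tokens, notifying the callback before each token reaches the caller."""
    for token in tokens:
        if on_new_token:
            on_new_token(token)  # callback fires first (this is what the fix guarantees)
        yield token              # only then does the consumer receive the token

# The callback prints each token before the consuming loop sees it.
for tok in stream_tokens(iter(["Hello", ", ", "world"]), print):
    pass
```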

File tree

1 file changed: +2 -0 lines changed

libs/partners/google-genai/langchain_google_genai/chat_models.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -598,6 +598,7 @@ def _stream(
         for chunk in response:
             _chat_result = _response_to_result(chunk, stream=True)
             gen = cast(ChatGenerationChunk, _chat_result.generations[0])
+
             if run_manager:
                 run_manager.on_llm_new_token(gen.text)
             yield gen
@@ -622,6 +623,7 @@ async def _astream(
         ):
             _chat_result = _response_to_result(chunk, stream=True)
             gen = cast(ChatGenerationChunk, _chat_result.generations[0])
+
             if run_manager:
                 await run_manager.on_llm_new_token(gen.text)
             yield gen
```
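
With the callback firing before the yield, a streaming callback handler attached to the model reports each token no later than the consuming loop receives it. A rough usage sketch, assuming `langchain-google-genai` is installed, `GOOGLE_API_KEY` is set, and using `gemini-pro` purely as an example model name:

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_google_genai import ChatGoogleGenerativeAI

class TokenLogger(BaseCallbackHandler):
    """Prints every streamed token as on_llm_new_token fires."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"callback saw: {token!r}")

llm = ChatGoogleGenerativeAI(model="gemini-pro", callbacks=[TokenLogger()])

# Each chunk triggers the callback before it is handed to this loop.
for chunk in llm.stream("Say hello in three words."):
    print(f"caller received: {chunk.content!r}")
```

Here `TokenLogger` is a made-up handler name; any `BaseCallbackHandler` subclass that implements `on_llm_new_token` would observe the tokens in the same order.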

0 commit comments
