Commit c6350d6

core[fix]: using async rate limiter methods in async code (langchain-ai#26914)
**Description:** Replaced blocking (sync) rate_limiter code in async methods.
**Issue:** langchain-ai#26913
**Dependencies:** N/A
**Twitter handle:** no need 🤗
1 parent 02f5962 commit c6350d6

File tree

1 file changed (+2, -2 lines)


libs/core/langchain_core/language_models/chat_models.py

Lines changed: 2 additions & 2 deletions
@@ -463,7 +463,7 @@ async def astream(
             )
 
             if self.rate_limiter:
-                self.rate_limiter.acquire(blocking=True)
+                await self.rate_limiter.aacquire(blocking=True)
 
             generation: Optional[ChatGenerationChunk] = None
             try:
@@ -905,7 +905,7 @@ async def _agenerate_with_cache(
         # we usually don't want to rate limit cache lookups, but
         # we do want to rate limit API requests.
         if self.rate_limiter:
-            self.rate_limiter.acquire(blocking=True)
+            await self.rate_limiter.aacquire(blocking=True)
 
         # If stream is not explicitly set, check if implicitly requested by
         # astream_events() or astream_log(). Bail out if _astream not implemented
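The reason this change matters: the sync `acquire()` waits with a thread-blocking sleep, so calling it from a coroutine freezes the entire asyncio event loop while the rate limiter waits; awaiting `aacquire()` suspends only the calling task, letting other tasks make progress. The sketch below illustrates the pattern with a hypothetical toy token-bucket limiter (not LangChain's actual rate-limiter implementation) that exposes both a sync `acquire` and an async `aacquire`, mirroring the two methods in the diff above.

```python
import asyncio
import time


class TokenBucketLimiter:
    """Toy rate limiter (illustrative only, not LangChain's implementation)."""

    def __init__(self, requests_per_second: float):
        self.period = 1.0 / requests_per_second
        self._next_allowed = time.monotonic()

    def acquire(self, blocking: bool = True) -> bool:
        # Sync variant: time.sleep() blocks the whole thread, and with it
        # the asyncio event loop if called from inside a coroutine.
        now = time.monotonic()
        wait = self._next_allowed - now
        if wait > 0 and not blocking:
            return False
        # Reserve our slot before sleeping so later callers queue behind us.
        self._next_allowed = max(self._next_allowed, now) + self.period
        if wait > 0:
            time.sleep(wait)
        return True

    async def aacquire(self, blocking: bool = True) -> bool:
        # Async variant: asyncio.sleep() suspends only this coroutine,
        # so other tasks keep running while we wait for our slot.
        now = time.monotonic()
        wait = self._next_allowed - now
        if wait > 0 and not blocking:
            return False
        # No await between read and update, so this is atomic on the loop.
        self._next_allowed = max(self._next_allowed, now) + self.period
        if wait > 0:
            await asyncio.sleep(wait)
        return True


async def worker(limiter: TokenBucketLimiter, results: list, i: int) -> None:
    await limiter.aacquire(blocking=True)
    results.append(i)


async def main() -> list:
    limiter = TokenBucketLimiter(requests_per_second=100)
    results: list = []
    # Three concurrent tasks share one limiter; each waits its turn
    # without blocking the event loop for the others.
    await asyncio.gather(*(worker(limiter, results, i) for i in range(3)))
    return results


print(asyncio.run(main()))  # → [0, 1, 2]
```

Because tasks reserve their slot before sleeping, they complete in submission order here. Had `worker` called the sync `acquire()` instead, each `time.sleep()` would have stalled every task on the loop, which is exactly the bug this commit fixes in `astream` and `_agenerate_with_cache`.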
