
Replace inference tip cache with a single reference to the last inference tip result #2183

Closed as not planned
@jacobtylerwalls

Description

The inference tip cache is byzantine; it still misses part of the proper cache key (**kwargs), and for that reason we can't rewrite it as an LRU cache:

def _inference_tip_cached(func: InferFn[_NodesT]) -> InferFn[_NodesT]:
    """Cache decorator used for inference tips."""

    def inner(
        node: _NodesT,
        context: InferenceContext | None = None,
        **kwargs: Any,
    ) -> Generator[InferenceResult, None, None]:
        partial_cache_key = (func, node)
        if partial_cache_key in _CURRENTLY_INFERRING:
            # If through recursion we end up trying to infer the same
            # func + node we raise here.
            raise UseInferenceDefault
        try:
            yield from _cache[func, node, context]
            return
        except KeyError:
            # Recursion guard with a partial cache key.
            # Using the full key causes a recursion error on PyPy.
            # It's a pragmatic compromise to avoid so much recursive inference
            # with slightly different contexts while still passing the simple
            # test cases included with this commit.
            _CURRENTLY_INFERRING.add(partial_cache_key)
            result = _cache[func, node, context] = list(func(node, context, **kwargs))
            # Remove recursion guard.
            _CURRENTLY_INFERRING.remove(partial_cache_key)
            yield from result

    return inner

Since a cache with a maxsize of 1 still produces most of the performance upside, explore replacing the cache with a single reference to the last inference tip result. Then also provide a way to clear that result from clear_inference_tip_cache().
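A minimal sketch of what that could look like, assuming the existing module-level _CURRENTLY_INFERRING guard and UseInferenceDefault are kept; the _last_cache_key / _last_cache_result names are placeholders, not a final design:

from __future__ import annotations

from astroid.exceptions import UseInferenceDefault

# Hypothetical single-slot cache: one key and one materialized result.
_CURRENTLY_INFERRING: set = set()
_last_cache_key = None
_last_cache_result = None


def clear_inference_tip_cache() -> None:
    """Drop the single cached inference tip result."""
    global _last_cache_key, _last_cache_result
    _last_cache_key = None
    _last_cache_result = None


def _inference_tip_cached(func):
    """Cache decorator keeping only the most recent inference tip result."""

    def inner(node, context=None, **kwargs):
        global _last_cache_key, _last_cache_result
        partial_cache_key = (func, node)
        if partial_cache_key in _CURRENTLY_INFERRING:
            # Same recursion guard as the current decorator.
            raise UseInferenceDefault
        if _last_cache_key == (func, node, context):
            # Hit on the single stored entry.
            yield from _last_cache_result
            return
        _CURRENTLY_INFERRING.add(partial_cache_key)
        try:
            result = list(func(node, context, **kwargs))
        finally:
            _CURRENTLY_INFERRING.remove(partial_cache_key)
        # Overwrite the single slot with the newest result.
        _last_cache_key = (func, node, context)
        _last_cache_result = result
        yield from result

    return inner

Keeping only one entry avoids both the unbounded growth of the dict and the **kwargs-not-in-the-key problem, at the cost of re-running a tip whenever inference alternates between different nodes.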

See discussion at #2181.

Measure the performance difference when linting large packages like home-assistant.
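For the measurement, something as simple as timing a full pylint run before and after the change would do; this is a rough, hypothetical harness, and "homeassistant/" is a placeholder for a local home-assistant checkout:

# Assumes pylint is installed and "homeassistant/" is a local checkout.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["pylint", "homeassistant/"], check=False)
print(f"lint time: {time.perf_counter() - start:.1f}s")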
