I have a method that takes (among others) a dictionary as an argument. The method parses strings, and the dictionary provides replacements for some substrings, so it doesn't have to be mutable.
This function is called quite often, and often with the same arguments, so I figured that caching it would improve its efficiency.
But, as you may have guessed, since dict is mutable and thus not hashable, @functools.lru_cache can't decorate my function. So how can I overcome this?
Bonus points if it needs only standard-library classes and methods. Ideally, if some kind of frozendict exists in the standard library that I haven't seen, it would make my day.

PS: namedtuple only as a last resort, since it would need a big syntax shift.
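For reference, here is a minimal reproduction of the problem (the function and argument names are illustrative, not from the original code):

    import functools

    @functools.lru_cache(maxsize=None)
    def replace_all(text, substitutions):
        # substitutions is a dict mapping substrings to their replacements
        for old, new in substitutions.items():
            text = text.replace(old, new)
        return text

    try:
        replace_all("foo bar", {"foo": "baz"})
    except TypeError as e:
        print(e)  # unhashable type: 'dict'

lru_cache builds its cache key from the arguments, and hashing that key fails as soon as any argument is a dict.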
Instead of writing a custom hashable dictionary, use this and avoid reinventing the wheel: it's a frozen dictionary that is fully hashable.
https://pypi.org/project/frozendict/
Code:
import functools

from frozendict import frozendict


def freezeargs(func):
    """Convert mutable dict arguments into immutable ones.

    Useful for making the decorated function compatible with caching.
    """
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        args = tuple(frozendict(arg) if isinstance(arg, dict) else arg for arg in args)
        kwargs = {k: frozendict(v) if isinstance(v, dict) else v for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapped
and then

@freezeargs
@lru_cache
def func(...):
    pass
Code taken from @fast_cen's answer.

Note: this does not work on recursive data structures; for example, you might have an argument that's a list, which is unhashable. You are invited to make the wrapping recursive, so that it goes deep into the data structure and turns every dict into a frozendict and every list into a tuple.

(I know that OP no longer wants a solution, but I came here looking for the same thing, so I'm leaving this for future generations.)
Here is a decorator that uses @mhyfritz's trick.
import functools


def hash_dict(func):
    """Convert mutable dict arguments into immutable ones.

    Useful for making the decorated function compatible with caching.
    """
    class HDict(dict):
        def __hash__(self):
            return hash(frozenset(self.items()))

    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        args = tuple(HDict(arg) if isinstance(arg, dict) else arg for arg in args)
        kwargs = {k: HDict(v) if isinstance(v, dict) else v for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapped
Simply add it before your lru_cache:

@hash_dict
@functools.lru_cache()
def your_function():
    ...
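Put together, a self-contained sketch of this answer in action (the translate function and maxsize are illustrative):

    import functools

    def hash_dict(func):
        """Wrap dict arguments in a hashable dict subclass before caching."""
        class HDict(dict):
            def __hash__(self):
                return hash(frozenset(self.items()))

        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            args = tuple(HDict(arg) if isinstance(arg, dict) else arg for arg in args)
            kwargs = {k: HDict(v) if isinstance(v, dict) else v for k, v in kwargs.items()}
            return func(*args, **kwargs)
        return wrapped

    @hash_dict
    @functools.lru_cache(maxsize=128)
    def translate(text, subs):
        # apply each substring replacement in turn
        for old, new in subs.items():
            text = text.replace(old, new)
        return text

    print(translate("hello world", {"world": "there"}))  # hello there

Note that this relies on the dict values being hashable themselves, since frozenset(self.items()) hashes every (key, value) pair.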
What about creating a hashable dict class, like so:

class HDict(dict):
    def __hash__(self):
        return hash(frozenset(self.items()))

substs = HDict({'foo': 'bar', 'baz': 'quz'})
cache = {substs: True}
How about subclassing namedtuple and adding access by x["key"]?

class X(namedtuple("Y", "a b c")):
    def __getitem__(self, item):
        if isinstance(item, int):
            return super(X, self).__getitem__(item)
        return getattr(self, item)
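A quick usage sketch of this idea (field names are illustrative): the instance supports both index/attribute access and dict-style lookup, while staying hashable like any tuple.

    from collections import namedtuple

    class X(namedtuple("Y", "a b c")):
        def __getitem__(self, item):
            # integer index -> normal tuple access; otherwise treat as a field name
            if isinstance(item, int):
                return super(X, self).__getitem__(item)
            return getattr(self, item)

    x = X(a=1, b=2, c=3)
    print(x["a"], x[1], x.c)  # 1 2 3
    print(hash(x) == hash(X(1, 2, 3)))  # True: usable as an lru_cache key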
Based on @Cedar's answer, adding recursion for a deep freeze, as suggested:

from collections.abc import Collection, Hashable, Mapping

from frozendict import frozendict


def deep_freeze(thing):
    if thing is None or isinstance(thing, str):
        return thing
    elif isinstance(thing, Mapping):
        return frozendict({k: deep_freeze(v) for k, v in thing.items()})
    elif isinstance(thing, Collection):
        return tuple(deep_freeze(i) for i in thing)
    elif not isinstance(thing, Hashable):
        raise TypeError(f"unfreezable type: '{type(thing)}'")
    else:
        return thing


def deep_freeze_args(func):
    import functools

    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        return func(*deep_freeze(args), **deep_freeze(kwargs))
    return wrapped
Here's a decorator that can be used like functools.lru_cache. But it is targeted at functions that take only one argument, which must be a flat mapping with hashable values, and it has a fixed maxsize of 64. For your use case, you would have to adapt either this example or your client code. Also, to set the maxsize individually, one would have to implement another decorator, but I haven't wrapped my head around that since I didn't need it.
from functools import (_CacheInfo, _lru_cache_wrapper, lru_cache,
                       partial, update_wrapper)
from typing import Any, Callable, Dict, Hashable


def lru_dict_arg_cache(func: Callable) -> Callable:
    def unpacking_func(func: Callable, arg: frozenset) -> Any:
        return func(dict(arg))

    _unpacking_func = partial(unpacking_func, func)
    _cached_unpacking_func = \
        _lru_cache_wrapper(_unpacking_func, 64, False, _CacheInfo)

    def packing_func(arg: Dict[Hashable, Hashable]) -> Any:
        return _cached_unpacking_func(frozenset(arg.items()))

    update_wrapper(packing_func, func)
    packing_func.cache_info = _cached_unpacking_func.cache_info
    return packing_func


@lru_dict_arg_cache
def uppercase_keys(arg: dict) -> dict:
    """ Yelling keys. """
    return {k.upper(): v for k, v in arg.items()}
assert uppercase_keys.__name__ == 'uppercase_keys'
assert uppercase_keys.__doc__ == ' Yelling keys. '
assert uppercase_keys({'ham': 'spam'}) == {'HAM': 'spam'}
assert uppercase_keys({'ham': 'spam'}) == {'HAM': 'spam'}
cache_info = uppercase_keys.cache_info()
assert cache_info.hits == 1
assert cache_info.misses == 1
assert cache_info.maxsize == 64
assert cache_info.currsize == 1
assert uppercase_keys({'foo': 'bar'}) == {'FOO': 'bar'}
assert uppercase_keys({'foo': 'baz'}) == {'FOO': 'baz'}
cache_info = uppercase_keys.cache_info()
assert cache_info.hits == 1
assert cache_info.misses == 3
assert cache_info.currsize == 3
For a more generic approach, one could use the @cachetools.cached decorator from the third-party cachetools library with an appropriate function set as its key.
After deciding to drop lru_cache for our use case for now, we still came up with a solution. This decorator uses json to serialise and deserialise the args/kwargs sent to the cache, and it works with any number of args. Use it as a decorator on a function instead of @lru_cache. Max size is set to 1024.
import json
from functools import lru_cache, wraps


def hashable_lru(func):
    cache = lru_cache(maxsize=1024)

    def deserialise(value):
        try:
            return json.loads(value)
        except Exception:
            return value

    def func_with_serialized_params(*args, **kwargs):
        _args = tuple(deserialise(arg) for arg in args)
        _kwargs = {k: deserialise(v) for k, v in kwargs.items()}
        return func(*_args, **_kwargs)

    cached_function = cache(func_with_serialized_params)

    @wraps(func)
    def lru_decorator(*args, **kwargs):
        _args = tuple(json.dumps(arg, sort_keys=True) if type(arg) in (list, dict) else arg for arg in args)
        _kwargs = {k: json.dumps(v, sort_keys=True) if type(v) in (list, dict) else v for k, v in kwargs.items()}
        return cached_function(*_args, **_kwargs)

    lru_decorator.cache_info = cached_function.cache_info
    lru_decorator.cache_clear = cached_function.cache_clear
    return lru_decorator
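A self-contained demonstration of this decorator (the merge function is illustrative, not from the original answer):

    import json
    from functools import lru_cache, wraps

    def hashable_lru(func):
        cache = lru_cache(maxsize=1024)

        def deserialise(value):
            try:
                return json.loads(value)
            except Exception:
                return value

        def func_with_serialized_params(*args, **kwargs):
            _args = tuple(deserialise(arg) for arg in args)
            _kwargs = {k: deserialise(v) for k, v in kwargs.items()}
            return func(*_args, **_kwargs)

        cached_function = cache(func_with_serialized_params)

        @wraps(func)
        def lru_decorator(*args, **kwargs):
            # sort_keys=True makes equal dicts serialise to identical strings
            _args = tuple(json.dumps(arg, sort_keys=True) if type(arg) in (list, dict) else arg for arg in args)
            _kwargs = {k: json.dumps(v, sort_keys=True) if type(v) in (list, dict) else v for k, v in kwargs.items()}
            return cached_function(*_args, **_kwargs)

        lru_decorator.cache_info = cached_function.cache_info
        lru_decorator.cache_clear = cached_function.cache_clear
        return lru_decorator

    @hashable_lru
    def merge(d, extra):
        return {**d, **extra}

    merge({"a": 1}, {"b": 2})
    merge({"a": 1}, {"b": 2})  # served from cache
    print(merge.cache_info().hits)  # 1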
The solution might be much simpler. lru_cache uses the parameters as the identifier for caching, so in the case of a dictionary, lru_cache doesn't know how to interpret it. You can serialize the dictionary parameter to a string and deserialize it back to a dictionary inside the function. Works like a charm.
the function:

@lru_cache(1024)
def data_check(serialized_dictionary):
    my_dictionary = json.loads(serialized_dictionary)
    print(my_dictionary)

the call:

data_check(json.dumps(initial_dictionary))
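A runnable sketch of this approach, as a variant that returns a value so the caching is observable (names are illustrative; passing sort_keys=True ensures that equal dicts serialise to the same string and therefore share a cache entry):

    import json
    from functools import lru_cache

    @lru_cache(maxsize=1024)
    def data_check(serialized_dictionary):
        # deserialize back to a dict inside the function
        my_dictionary = json.loads(serialized_dictionary)
        return {k.upper(): v for k, v in my_dictionary.items()}

    initial_dictionary = {"ham": "spam"}
    print(data_check(json.dumps(initial_dictionary, sort_keys=True)))
    print(data_check(json.dumps(initial_dictionary, sort_keys=True)))
    print(data_check.cache_info().hits)  # 1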
An extension of @Cedar's answer, adding a recursive freeze:
the recursive freeze:
def recursive_freeze(value):
    if isinstance(value, dict):
        for k, v in value.items():
            value[k] = recursive_freeze(v)
        return frozendict(value)
    else:
        return value


# To unfreeze
def recursive_unfreeze(value):
    if isinstance(value, frozendict):
        value = dict(value)
        for k, v in value.items():
            value[k] = recursive_unfreeze(v)
    return value
the decorator:

def freezeargs(func):
    """Convert mutable dictionary arguments into immutable ones.

    Useful for making the decorated function compatible with caching.
    """
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        args = tuple(recursive_freeze(arg) if isinstance(arg, dict) else arg for arg in args)
        kwargs = {k: recursive_freeze(v) if isinstance(v, dict) else v for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapped