How to test for the situation where a specific library is missing in Python

I have some packages that have soft dependencies on other packages with a fall back to a default (simple) implementation.

The problem is that this is very hard to test for using unit tests. I could set up separate virtual environments, but that is hard to manage.

Is there a package or a way to achieve the following: have

import X

work as usual, but

hide_package('X')
import X

will raise an ImportError.

I keep having bugs creep into the fall-back part of my code because it is hard to test this.
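For concreteness, the fall-back code I am trying to test looks roughly like this (an illustrative sketch; the real optional dependency differs):

try:
    import simplejson as json   # optional, faster implementation (if installed)
except ImportError:
    import json                 # default (simple) implementation from the stdlib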


It looks a bit dirty, but you can override the __import__ builtin:

import builtins

save_import = builtins.__import__

def my_import(name, *rest, **kwargs):
    # Refuse imports of the hidden package; delegate everything else.
    if name == "hidden":
        raise ImportError("Hidden package")
    return save_import(name, *rest, **kwargs)

builtins.__import__ = my_import
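A quick usage sketch, building on the override above (and assuming a package named hidden is actually installed):

try:
    import hidden                       # blocked by my_import even though the package exists
except ImportError:
    print("fall-back code path is exercised")

builtins.__import__ = save_import       # restore the original importer when done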

BTW, have you read PEP 302? It seems that you can make a more robust mechanism with import hooks.
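For example, here is a minimal sketch of such an import hook built on importlib (the HiddenFinder class and hide_package function are illustrative names, not part of any library):

import sys
import importlib.abc

class HiddenFinder(importlib.abc.MetaPathFinder):
    """Meta path finder that blocks imports of selected top-level packages."""

    def __init__(self):
        self.hidden = set()

    def find_spec(self, fullname, path=None, target=None):
        # Raising here aborts the import; returning None lets other finders try.
        if fullname.split('.')[0] in self.hidden:
            raise ImportError(f"{fullname} is hidden for testing")
        return None

_finder = HiddenFinder()
sys.meta_path.insert(0, _finder)

def hide_package(name):
    _finder.hidden.add(name)
    sys.modules.pop(name, None)   # forget any already-imported copy

After hide_package('X'), a subsequent import X raises ImportError, which is exactly the behaviour the question asks for.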


One way is to edit sys.path, especially if your packages install into different directories/zipfiles (e.g. if you are using eggs). Before importing, drop the ones you don't want from sys.path.
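A hedged sketch of that approach (the string used to match X's sys.path entry is a placeholder):

import sys

# Drop the sys.path entry that provides the optional package X
# (e.g. its egg/zip or install directory), then forget any cached import.
sys.path = [p for p in sys.path if 'X-1.0' not in p]   # placeholder match
sys.modules.pop('X', None)

try:
    import X                     # now fails, exercising the fall-back
except ImportError:
    print('fall-back code path is taken')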

If that's not feasible (because all components live in a single sys.path entry), you can hack suppression into the packages themselves, e.g. have a global variable (an environment variable, or something patched onto the sys module) list the packages whose import you want to fail:

sys.suppressed_packages = set()
sys.suppressed_packages.add('X')

Then, in each package, explicitly raise an ImportError:

# X.py
import sys
if 'X' in sys.suppressed_packages:
    raise ImportError('X is suppressed')

Of course, instead of using the sys module, you can make your own infrastructure for that, along with a hide_package function.
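A minimal sketch of what that infrastructure could look like (the module name softdeps is made up):

# softdeps.py
_suppressed = set()

def hide_package(name):
    """Mark a package so that its import-time check below raises ImportError."""
    _suppressed.add(name)

def check_not_suppressed(name):
    if name in _suppressed:
        raise ImportError(f"{name} is suppressed")

# X.py would then start with:
#     import softdeps
#     softdeps.check_not_suppressed('X')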
