There are a few helpers I've written that I copy into each new Trio project. Would any of these be good batteries to include in this project?
An AsyncMock class for mocking coroutines. On every new project I forget, at least once, that the built-in Mock cannot be awaited. Here's an example implementation.
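For reference, a minimal sketch of such an AsyncMock built on unittest.mock (the class name and details here are illustrative, not necessarily the linked implementation):

```python
from unittest import mock

class AsyncMock(mock.MagicMock):
    """A MagicMock whose calls can be awaited."""

    async def __call__(self, *args, **kwargs):
        # Record the call through MagicMock's normal machinery, then hand
        # the configured return_value back to the awaiting caller.
        return super().__call__(*args, **kwargs)


async def test_client_uses_fetch():
    fetch = AsyncMock(return_value=b"payload")
    assert await fetch("https://example.com") == b"payload"
    fetch.assert_called_once_with("https://example.com")
```

(Note that since Python 3.8, unittest.mock ships its own AsyncMock with the same basic behavior.)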
A fail_after() decorator that limits the time (relative to Trio's clock) that each test can run. With concurrency, I find that it's easy to write code (and/or tests) that deadlocks, and if you Ctrl-C a pytest run you lose a lot of valuable info. Or if you run a deadlocked test on a CI platform like Travis, you'll time out the entire build. Here's an example implementation. I realize that this would step on Add test timeout support #53's toes.
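A sketch of that decorator, assuming the test is an async function run under Trio; the wrapper just delegates to trio.fail_after, which raises trio.TooSlowError when the deadline passes:

```python
import functools
import trio

def fail_after(seconds):
    """Fail the decorated async test if it runs longer than ``seconds``
    of Trio-clock time."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        async def wrapper(*args, **kwargs):
            # A deadlocked test dies here with a traceback instead of
            # hanging the whole pytest run (or the CI build).
            with trio.fail_after(seconds):
                return await test_fn(*args, **kwargs)
        return wrapper
    return decorator
```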
Or, as an alternative to fail_after(), I've also written min/max execution time decorators. See examples here. The goal for these isn't necessarily preventing runaway tests, but rather making assertions about timing. For example, if you've written something that's supposed to load-balance connections, you can use these assertions to ensure that it doesn't dispatch them too quickly or too slowly.
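A rough sketch of the decorator form, measuring elapsed time with trio.current_time() so it cooperates with Trio's test clock (the name assert_runtime is illustrative, not taken from the linked examples):

```python
import functools
import trio

def assert_runtime(min_time=0, max_time=float("inf")):
    """Assert that the decorated async test finishes within
    [min_time, max_time] seconds of Trio-clock time."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        async def wrapper(*args, **kwargs):
            start = trio.current_time()
            result = await test_fn(*args, **kwargs)
            elapsed = trio.current_time() - start
            assert min_time <= elapsed <= max_time, (
                f"test took {elapsed:.3f}s, expected between "
                f"{min_time}s and {max_time}s"
            )
            return result
        return wrapper
    return decorator
```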
I'm happy to contribute ideas or source code. The 3 decorators mentioned above might also work better as context managers.
> An AsyncMock class for mocking coroutines. On every new project I forget, at least once, that the built-in Mock cannot be awaited. Here's an example implementation.
> A fail_after() decorator [...] might also work better as context managers.
So that would just be trio.fail_after, then...?
I do think #53 would be a good and fairly straightforward thing to implement, that would be strictly better for your use cases, right?
> min/max execution time
Huh, interesting. I agree that making these context managers might make more sense, similar to pytest's with pytest.raises(...), and they seem like inoffensive standalone utility functions. I am having some trouble imagining use cases for them, though, so I'm not sure if that means they're really niche, or that my imagination is just bad :-).
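For illustration, the context-manager form could read roughly like this, paralleling the pytest.raises style (assert_duration is a hypothetical name, and the measurement again uses Trio's clock):

```python
from contextlib import contextmanager
import trio

@contextmanager
def assert_duration(min_time=0, max_time=float("inf")):
    # Measured against Trio's clock, so it also behaves sensibly under
    # trio.testing.MockClock with autojump_threshold set.
    start = trio.current_time()
    yield
    elapsed = trio.current_time() - start
    assert min_time <= elapsed <= max_time, (
        f"block took {elapsed:.3f}s, expected between "
        f"{min_time}s and {max_time}s"
    )


async def test_dispatch_pacing():
    with assert_duration(min_time=0.5, max_time=2.0):
        await trio.sleep(1)  # stand-in for the load-balancing code under test
```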