Hypothesis (the inspiration for all my feature requests) saves failing test cases in a database so that they can be retried in other test runs. That way you can be confident that a generated example that failed before will be tested again in the next builds. Link: http://hypothesis.readthedocs.io/en/latest/database.html
I think we should do this too. The database is not very complex: it's just some files in hidden directories (I'm not familiar with the implementation details). Just to be clear, I'm not talking about repeating the whole test suite for the failing tests, only replaying the failing examples. This probably requires storing the random seed just before running each test. Can StreamData do this?
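To make the request concrete, here is a minimal sketch of the mechanism in Python using only the standard library. It is not StreamData's or Hypothesis's actual implementation; the names (`SeedDatabase`, the `.failed_seeds` directory, `run_property`) are all illustrative assumptions. The idea is simply: persist the seed of a failing example in a hidden directory keyed by test name, replay that seed first on the next run, and drop it once the property passes again.

```python
import json
import random
from pathlib import Path


class SeedDatabase:
    """Hypothetical seed store: one JSON file per test in a hidden directory."""

    def __init__(self, directory=".failed_seeds"):
        self.path = Path(directory)
        self.path.mkdir(parents=True, exist_ok=True)

    def _file(self, test_name):
        return self.path / f"{test_name}.json"

    def save(self, test_name, seed):
        # Remember the seed that produced a failing example.
        self._file(test_name).write_text(json.dumps({"seed": seed}))

    def load(self, test_name):
        f = self._file(test_name)
        if f.exists():
            return json.loads(f.read_text())["seed"]
        return None

    def delete(self, test_name):
        # Forget the seed once the property passes again.
        self._file(test_name).unlink(missing_ok=True)


def run_property(test_name, prop, db, runs=100):
    """Replay a previously failing seed first, then try fresh random seeds."""
    seeds = []
    stored = db.load(test_name)
    if stored is not None:
        seeds.append(stored)
    seeds += [random.randrange(2**32) for _ in range(runs)]

    for seed in seeds:
        # Stand-in for a real generator: derive the example from the seed,
        # so the same seed always reproduces the same example.
        value = random.Random(seed).randint(0, 1000)
        if not prop(value):
            db.save(test_name, seed)
            return (False, seed)

    db.delete(test_name)
    return (True, None)
```

Because the example is derived deterministically from the stored seed, the failing case from a previous build is guaranteed to be retried before any new examples are generated, which is exactly the behavior described above.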