Description
Running the tests for the `web` package carries an implicit dependency on having certain analysers available and working in the environment. For example, `web/tests/functional/diff_local` runs analyses with Clang SA. This is very problematic from a development point of view: a (mis)behaviour of, or a version mismatch between, the specific analyser the tests were written against and what is available in the environment you run the tests with locally (especially when the assumed analyser version is simply not available without self-compiling!) makes it really troublesome to figure out whether a test failed because of the feature you were working on, or because of something environmental.
For example, while working on #4049, I encountered the failure below. So far I have no idea whether it is caused by the changes I'm making, or simply by the fact that I have Clang 12.0 on my machine. As it is an Ubuntu 20.04 machine, I can only install the official Clang 8, 9, 10, 11, and 12 packages, nothing newer...
```
self = <functional.diff_local.test_diff_local.DiffLocal testMethod=test_filter_severity_high_low_text>

    def test_filter_severity_high_low_text(self):
        """Get the high and low severity unresolved reports."""
        out, _, _ = get_diff_results(
            [self.base_reports], [self.new_reports], '--unresolved', None,
            ['--severity', 'high', 'low'])
>       self.assertEqual(len(re.findall(r'\[HIGH\]', out)), 18)
E       AssertionError: 15 != 18
```
The `analyze` package's tests should ensure that certain actions generate the right format of further inputs (e.g., metadata files, report directory structures, etc.). We should ensure that running the `web` tests exercises only our Python code, from an appropriately prepared input, without spawning Clangs and GCCs all over the place.
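As a minimal sketch of what "an appropriately prepared input" could look like (the `fixtures/diff_local_reports` path and its layout are hypothetical, not an existing part of the test suite), the `web` tests could consume a canned, version-pinned report directory committed alongside the tests instead of output produced by a live Clang run:

```python
import os
import unittest

# Hypothetical path to a pre-generated, checked-in report directory;
# no analyser binary is needed to use it.
FIXTURE_REPORTS = os.path.join(os.path.dirname(__file__),
                               "fixtures", "diff_local_reports")


class DiffLocal(unittest.TestCase):
    def setUp(self):
        # Point the diff tests at the prepared input instead of
        # spawning Clang SA to produce fresh reports.
        self.base_reports = os.path.join(FIXTURE_REPORTS, "base")
        self.new_reports = os.path.join(FIXTURE_REPORTS, "new")
```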
If there are tests that necessitate the existence of both the "analyze" and the "web" facets of CodeChecker, those tests should live on their own, perhaps in the top-level `tests/` directory.
And if a test depends on the behaviour of a specific analyser's specific version, the test should either "fail fast and fail loud" before doing anything, reporting the version mismatch, or (and this should be the better solution) detect the version mismatch and mark the test as SKIPPED.
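For illustration, a minimal sketch of the skip approach using plain `unittest`, assuming we only care about the major version; the `clang_version` and `requires_clang` helpers are hypothetical, not existing test-infrastructure APIs:

```python
import re
import shutil
import subprocess
import unittest


def clang_version():
    """Return the major version of the 'clang' binary on PATH, or None."""
    clang = shutil.which("clang")
    if not clang:
        return None
    out = subprocess.run([clang, "--version"],
                         capture_output=True, text=True).stdout
    match = re.search(r"clang version (\d+)", out)
    return int(match.group(1)) if match else None


def requires_clang(major):
    """Skip the decorated test unless the assumed Clang major is present."""
    found = clang_version()
    return unittest.skipUnless(
        found == major,
        f"Test assumes Clang {major}, found {found}")


class DiffLocal(unittest.TestCase):
    @requires_clang(15)  # The version the hard-coded report counts assume.
    def test_filter_severity_high_low_text(self):
        ...
```

With something like this in place, running the suite on a machine with the "wrong" Clang would report the test as skipped with the mismatch in the reason string, instead of producing a confusing `15 != 18` assertion failure.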