docs: improve test README #4916

Merged
merged 1 commit on Mar 10, 2025

39 changes: 19 additions & 20 deletions test/README.md
@@ -13,37 +13,37 @@ You can see all existing tests in [`test/`](https://github.com/intel/cve-bin-too

## Running all tests

To run the tests for `cve-bin-tool`
To run all tests for `cve-bin-tool`:

```console
pytest
```

To run scanner and checker tests
To run scanner and checker tests:

```console
pytest test/test_scanner.py test/test_checkers.py
```

By default, some longer-running tests are turned off. If you want to enable them, you can set the environment variable LONG_TESTS to 1. You can do this just for a single command line as follows:
By default, some longer-running tests are turned off. If you want to enable them, you can set the environment variable `LONG_TESTS` to 1. You can do this for just a single command as follows:

```console
LONG_TESTS=1 pytest
LONG_TESTS=1 pytest ${TEST_FILE}
```

For scanner tests
For scanner tests:

```console
LONG_TESTS=1 pytest test/test_scanner.py test/test_checkers.py
```

By default, tests which rely on external connectivity are turned off. If you want to enable them, you can set the environment variable EXTERNAL_SYSTEM to 1. You can do this just for a single command line as follows:
By default, tests which rely on external connectivity are turned off. If you want to enable them, you can set the environment variable `EXTERNAL_SYSTEM` to 1. You can do this for just a single command as follows:

```console
EXTERNAL_SYSTEM=1 pytest
EXTERNAL_SYSTEM=1 pytest ${TEST_FILE}
```

For nvd tests
For NVD tests:

```console
EXTERNAL_SYSTEM=1 pytest test/test_source_nvd.py
@@ -52,14 +52,13 @@ EXTERNAL_SYSTEM=1 pytest test/test_source_nvd.py
## Running a single test

To run a single test, you can use the unittest framework. For example, here's how to run
the test for sqlite:
the test for `sqlite`:

```console
python -m unittest test.test_scanner.TestScanner.test_sqlite_3_12_2
```

To run a single test in test_scanner you can use pytest. For example, here's how to run
the test for vendor_package_pairs:
To run a single test in `test_scanner` you can use the pytest framework. For example, here's how to run the test for `vendor_package_pairs`:

```console
pytest test/test_scanner.py::TestScanner::test_version_mapping
@@ -92,7 +91,7 @@ deactivate

- You can see the code for scanner tests in ['test/test_scanner.py'](https://github.com/intel/cve-bin-tool/blob/main/test/test_scanner.py)
- You can see checker wise test data in ['test/test_data'](https://github.com/intel/cve-bin-tool/blob/main/test/test_data)
- If you just want to add a new mapping test for a checker, add a dictionary of _product_, _version_ and _version_strings_ in the mapping_test_data list . Here, _version_strings_ are the list of strings that contain version signature or strings that commonly can be found in the module. For example: this is how the current mapping_test_data for gnutls look like. You should add the details of the new test case data at the end of `mapping_test_data` list:
- If you just want to add a new mapping test for a checker, add a dictionary of _product_, _version_, and _version_strings_ to the `mapping_test_data` list. Here, _version_strings_ is the list of strings that contain a version signature or strings that can commonly be found in the module. For example, this is how the current `mapping_test_data` for gnutls looks (a schematic sketch of a single entry also follows the example below). You should add the details of the new test case data at the end of the `mapping_test_data` list:

```python
mapping_test_data = [
@@ -109,17 +108,17 @@ mapping_test_data = [
]
```
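
Since the existing gnutls entries are collapsed in this diff view, here is a schematic sketch of the shape of a single `mapping_test_data` entry; the version number and version strings below are made-up placeholders, not the actual gnutls data:

```python
mapping_test_data = [
    # Each entry is a dictionary of product, version, and version_strings,
    # where version_strings are strings the checker should find in a binary
    # containing that version of the product.
    {
        "product": "gnutls",
        "version": "3.6.4",  # placeholder version
        "version_strings": ["gnutls-cli 3.6.4"],  # placeholder signature string
    },
    # New test case data goes at the end of the list.
]
```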

- Please note that sometimes the database we're using doesn't have perfect mapping between CVEs and product versions -- if you try to write a test that doesn't work because of that mapping but the description in the CVE says that version should be vulnerable, don't discard it! Instead, please make a note of it in a github issue can investigate and maybe report it upstream.
> Note: sometimes the database we're using doesn't have perfect mapping between CVEs and product versions -- if you try to write a test that doesn't work because of that mapping but the description in the CVE says that version should be vulnerable, don't discard it! Instead, please make a note of it in a GitHub issue so it can be investigated and possibly reported upstream.

## Adding new tests: Signature tests against real files

To make the basic test suite run quickly, we create "faked" binary files to test the **CVE mappings**. However, we want to be able to test real files to test that the **signatures** work on real-world data.

You can see test data for package tests in the _package_test_data_ variable of the test data file you are writing tests for.

We have `test_version_in_package` function in `test_scanner` that takes a _url_, and _package name_, _module name_ and a _version_, and downloads the package, runs the scanner against it, and makes sure it is the package that you've specified. But we need more tests!
We have a `test_version_in_package` function in `test_scanner` that takes in package details (_url_, _package name_, _module name_, _version_), downloads the package, runs the scanner against it, and confirms it is the package that you've specified. But we need more tests!

- To add a new test, find an appropriate publicly available file (linux distribution packages and public releases of the packages itself are ideal). You should add the details of the new test case in the `package_test_data` variable of the file for which you are writing test for. For example: this is how the current package_test_data for binutils look like. You should add the details of the new test case data at the end of `package_test_data` list:
- To add a new test, find an appropriate publicly available file (Linux distribution packages and public releases of the package itself are ideal). You should add the details of the new test case to the `package_test_data` variable of the file you are writing the test for. For example, this is how the current `package_test_data` for binutils looks (a schematic sketch of a single entry follows below). You should add the new test case data at the end of the `package_test_data` list:

```python
package_test_data = [
@@ -140,15 +139,15 @@ package_test_data = [
]

```
The ```other_products``` attribute might match any binaries provided, so we can check that only the expected products are found in a given binary. (e.g. if an imaginary package called CryptographyExtensions included OpenSSL, we'd expect to detect both in CryptographyExtensions-1.2.rpm).
The `other_products` attribute might match any binaries provided, so we can check that only the expected products are found in a given binary (e.g. if an imaginary package called `CryptographyExtensions` included OpenSSL, we'd expect to detect both in `CryptographyExtensions-1.2.rpm`).
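
A schematic sketch of a single `package_test_data` entry is shown below; the URL, file name, and version are made-up placeholders, and the key names are assumed from the fields described above (check an existing entry in the test data file for the exact names used):

```python
package_test_data = [
    # Hypothetical entry: point at a real, publicly available package,
    # then give the product and version the scanner should report for it.
    {
        "url": "https://example.org/pool/main/b/binutils/",  # placeholder URL
        "package_name": "binutils_2.31.1_amd64.deb",  # placeholder file name
        "module": "binutils",
        "version": "2.31.1",
        "other_products": [],  # other products we also expect to detect in this binary
    },
    # New test case data goes at the end of the list.
]
```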

Ideally, we should have at least one such test for each checker, and it would be nice to have some different sources for each as well. For example, for packages available in common Linux distributions, we might want to have one from fedora, one from debian, and one direct from upstream to show that we detect all those versions.
Ideally, we should have at least one such test for each checker, and it would be nice to have some different sources for each as well. For example, for packages available in common Linux distributions, we might want to have one from Fedora, one from Debian, and one direct from upstream to show that we detect all of those versions.

Note that we're getting the LONG_TESTS() from tests.util in the top of the files where it's being used. If you're adding a long test to a test file that previously didn't have any, you'll need to add that at the top of the file as well.
> Note: we import `LONG_TESTS()` from `tests.util` at the top of the files where it's used. If you're adding a long test to a test file that previously didn't have any, you'll need to add that import at the top of the file as well.
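
As a rough sketch of what that looks like in practice (the import path follows the note above, and the test name and skip condition are illustrative assumptions, not code from the repository):

```python
import pytest

# Import the LONG_TESTS helper at the top of the test file, as noted above;
# adjust the module path if it differs in the repository.
from tests.util import LONG_TESTS


# Skip this test unless the LONG_TESTS environment variable is set to 1.
@pytest.mark.skipif(not LONG_TESTS(), reason="long tests not enabled")
def test_scan_of_large_package():
    # ... long-running scan would go here ...
    pass
```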

## Adding new tests: Checker filename mappings

To test the filename mappings, rather than making a bunch of empty files, we're calling the checkers directly in `test/test_checkers.py`. You can add a new test by specifying a the name of the checker you want to test, the file name, and the expected result that the scanner should say it "is".
To test the filename mappings, rather than making a bunch of empty files, we're calling the checkers directly in `test/test_checkers.py`. You can add a new test by specifying the name of the checker you want to test, the file name, and the expected result that the scanner should say it "is".

```python
@pytest.mark.parametrize(
@@ -160,7 +159,7 @@ )
)
```

The function test_filename_is will then load the checker you have specified (and fail spectacularly if you specify a checker that does not exist), try to run get_versions() with an empty file content and the filename you specified, then check that it "is" something (as opposed to "contains") and that the modulename that get_version returns is in fact the expected_result you specified.
The function `test_filename_is` will then load the checker you have specified (and fail spectacularly if you specify a checker that does not exist), try to run `get_version()` with an empty file content and the filename you specified, then check that it "is" something (as opposed to "contains") and that the modulename that `get_version` returns is in fact the `expected_result` you specified.
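
As a rough illustration of the shape of these parametrized entries (the checker names, file names, and parameter names below are made-up placeholders based on the description above; the real entries are collapsed in this diff view):

```python
import pytest


@pytest.mark.parametrize(
    "checker_name, file_name, expected_result",
    [
        # (checker to load, file name to test, product the scanner should say it "is")
        ("curl", "curl", "curl"),  # placeholder entry
        ("openssl", "libssl.so.1.1", "openssl"),  # placeholder entry
    ],
)
def test_filename_is(checker_name, file_name, expected_result):
    # Sketch only: the real test loads the named checker, runs get_version()
    # with empty file contents and the given file name, then checks that the
    # result "is" (rather than "contains") the expected product.
    pass
```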

For ease of maintenance, please keep the parametrize list in alphabetical order when you add a new test.
