Black-box testing vs White-box testing #311
-
Interesting concept. However, I think the line between white-box and black-box testing is fuzzier (pun intended) than it may seem at first blush. Presumably, black-box testing involves writing middleware that converts the internal logic into abstracted interfaces, so the maintenance burden just moves to the middleware. I could be wrong, though. A PR that shows how black-box testing works in practice would be helpful.
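To make the middleware idea concrete, here is a minimal sketch using Foundry's forge-std, assuming it would follow the familiar handler pattern used for invariant testing. The `IStream` interface, its functions, and the bounds are hypothetical placeholders for illustration, not Flow's actual API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

import { Test } from "forge-std/Test.sol";

/// Hypothetical interface: only the externally observable API is exposed;
/// nothing here depends on how the implementation computes its results.
interface IStream {
    function deposit(uint256 streamId, uint128 amount) external;
    function withdraw(uint256 streamId, address to, uint128 amount) external;
    function getBalance(uint256 streamId) external view returns (uint128);
}

/// The "middleware": a handler that funnels every test interaction through the
/// abstracted interface. If the internals change but the API stays the same,
/// neither this handler nor the tests built on top of it need to change; if
/// the API changes, only this thin layer needs updating.
contract StreamHandler is Test {
    IStream internal stream;
    uint256 internal streamId;

    constructor(IStream stream_, uint256 streamId_) {
        stream = stream_;
        streamId = streamId_;
    }

    function deposit(uint128 amount) external {
        // Clamp fuzzed inputs to a sensible range instead of rejecting them.
        amount = uint128(bound(amount, 1, 1_000_000e18));
        stream.deposit(streamId, amount);
    }

    function withdraw(uint128 amount) external {
        uint128 balance = stream.getBalance(streamId);
        if (balance == 0) return;
        amount = uint128(bound(amount, 1, balance));
        stream.withdraw(streamId, address(this), amount);
    }
}
```

The maintenance does move here, as noted above, but it is concentrated in one layer instead of being spread across every test.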
-
I am closing this discussion because I have realized, and I agree with the point above, that it's hard to draw a clear line between the two. It can also vary on a case-by-case basis. If the tests have total coverage and there are sufficient fuzz tests, all possible scenarios should be covered anyway.
-
A good way to describe the difference between white-box and black-box testing is whether the tester has to actively think about the actual lines of code they are testing.

In white-box testing, you write the tests 1) knowing the actual implementation and therefore 2) being influenced by it, which can lead to missing exactly the cases the implementation itself forgot to handle. This perspective is useful for achieving high code coverage and is a better fit for unit/smaller tests.

In black-box testing, the tester does not care about the implementation under test. They may not know the programming language the product is written in; they may not even be a programmer at all (there are tools that let you build automated test suites from small blocks that do simple things: move the mouse pointer to a position, left-/right-click, type some text, and so on). Black-box testing is a good fit for bigger, more complicated tests such as integration, invariant, or fuzz tests. The key is to think from the perspective of the end user, interact with the product in every way you or an attacker can think of, and assert the side effects those interactions produce (i.e., are the results correct or not?). This lets you look at the product in a "zoomed out", more high-level way and, hopefully, catch the bugs you would have missed if you kept getting "inspired" by the code under test while writing the tests 😉

Therefore, both approaches are useful and neither should be disregarded.
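As an illustration of this end-user perspective in a Foundry context, here is a minimal, hypothetical sketch: the fuzz test touches only the public API of a toy `Vault` contract and asserts the externally observable effect (the ETH actually comes back), without referencing its storage layout or internal math. The contract and names are made up for the example.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

import { Test } from "forge-std/Test.sol";

/// Toy contract with a small public API; its internals are irrelevant to the test.
contract Vault {
    mapping(address => uint256) private balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        require(balances[msg.sender] >= amount, "insufficient balance");
        balances[msg.sender] -= amount;
        payable(msg.sender).transfer(amount);
    }

    function balanceOf(address account) external view returns (uint256) {
        return balances[account];
    }
}

/// Black-box style fuzz test: interact only through the public API and assert
/// the observable outcome, i.e. that the ETH actually moved back to the caller.
contract VaultBlackBoxTest is Test {
    Vault internal vault = new Vault();

    function testFuzz_DepositThenWithdraw(uint256 amount) external {
        amount = bound(amount, 1, 100 ether);
        vm.deal(address(this), amount);

        vault.deposit{ value: amount }();
        uint256 balanceBefore = address(this).balance;

        vault.withdraw(amount);

        // Observable side effect: the caller got the ETH back.
        assertEq(address(this).balance, balanceBefore + amount);
    }

    // Needed so this test contract can receive ETH from withdraw().
    receive() external payable {}
}
```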
-
Flow follows the white-box testing approach.
From ChatGPT:
- What is white-box testing?
- Should testing of code be dependent on the internal logic/implementation?
- What is black-box testing?
Currently, all Flow tests (concrete, fuzz, invariant) follow white-box testing, which means that if we change the internal implementation of a function without affecting its API (inputs, outputs) or expected return values, we are still required to change a lot of tests. IMO, a good test suite should test the behavior of the system and not rely on the internal implementation of its functions.
A good approach might be to keep white-box testing in the concrete tests and use black-box testing in the fuzz and invariant tests, though more knowledge is needed in this domain.
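For concreteness, here is a minimal sketch of what a black-box invariant test could look like with forge-std, assuming a recent version where `Test` inherits `StdInvariant`. The `Token` contract is a made-up stand-in for Flow; the point is that the invariant is stated purely in terms of the public API, so refactoring the internals would not force a rewrite of the test.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

import { Test } from "forge-std/Test.sol";

/// Hypothetical target contract; stands in for the real system under test.
contract Token {
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;

    function mint(address to, uint256 amount) external {
        balanceOf[to] += amount;
        totalSupply += amount;
    }

    function burn(address from, uint256 amount) external {
        balanceOf[from] -= amount;
        totalSupply -= amount;
    }
}

/// Black-box invariant suite: the fuzzer calls the public API at random, and
/// the invariant is expressed only through public getters, with no assumptions
/// about storage layout or internal math.
contract TokenInvariantTest is Test {
    Token internal token;

    function setUp() external {
        token = new Token();
        // Restrict the invariant fuzzer to the target's public API.
        targetContract(address(token));
    }

    function invariant_SupplyCoversAnySingleBalance() external view {
        // No single account can hold more than the total supply.
        assertLe(token.balanceOf(address(this)), token.totalSupply());
    }
}
```

A white-box concrete test, by contrast, might reach into the contract's storage or re-derive the internal math, which is exactly what couples it to the implementation.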
So, I am starting this discussion to brainstorm along these lines, gather resources, understand the topic better, and eventually refactor our tests to use a mix of black-box and white-box techniques.
Resources
cc @sablier-labs/engineers as it may be relevant to Solana and other repos.