EvoMaster version: 3.4.0

Running with:

evomaster --blackBox true --maxTime 600s --ratePerMinute 60 --problemType GRAPHQL --bbTargetUrl http://localhost:3000/graphql

I'm just getting started with the tool, and I've already had it successfully find some bugs in our GraphQL server. However, I'm finding there are some false positives, and it is often hard to understand why EvoMaster generated a particular test case.

I assume that internally the software must have some additional information that it could print with each test case, for example the category of failure that was detected or the heuristic that was used. I think that would be a nice improvement, if possible.

Thanks for your work on this software, it's quite interesting!
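To make the request concrete, here is roughly the kind of annotation I have in mind, shown as a comment on top of a generated test. This is entirely hypothetical (Jest-style, with a made-up createUser mutation against the endpoint above); I don't know what information EvoMaster actually keeps internally or what its generated tests look like in this setup:

import { test, expect } from "@jest/globals";

/*
 * Hypothetical header that EvoMaster could emit above each generated test
 * (none of this is real EvoMaster output):
 *   fault category : GraphQL mutation returned HTTP 500
 *   oracle         : response contained an "errors" entry and no data
 *   heuristic      : which rule or search objective flagged this response
 */
test("createUser returns 500 on empty name", async () => {
  const res = await fetch("http://localhost:3000/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      query: 'mutation { createUser(name: "") { id } }',
    }),
  });
  expect(res.status).toBe(500); // the failure this test reproduces
});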
Thanks for reporting this. Nice to hear that you found faults with this tool ;)

One issue with GraphQL is that it is not trivial to distinguish between "user errors" (e.g., 4xx in HTTP) and "server errors" (e.g., 5xx).

Adding more info on why a specific test was generated would indeed be useful (especially for black-box testing).
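To illustrate the ambiguity: many GraphQL servers answer with HTTP 200 even when a resolver fails, and the only hint is the errors array in the response body. Below is a rough classification sketch in TypeScript, assuming Node 18+ for the global fetch; the endpoint is the one from the report above, and the extensions.code values are a convention used by servers such as Apollo, not something the GraphQL spec guarantees:

// Sketch: probe a GraphQL endpoint and guess whether an error in the
// response is a "user error" or a "server error".
const ENDPOINT = "http://localhost:3000/graphql";

// Codes that usually indicate a malformed/invalid request (i.e. a user error).
const USER_ERROR_CODES = new Set([
  "GRAPHQL_PARSE_FAILED",
  "GRAPHQL_VALIDATION_FAILED",
  "BAD_USER_INPUT",
]);

async function classify(query: string): Promise<string> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });

  // A 5xx status is an unambiguous server error, but many GraphQL servers
  // reply with 200 even when a resolver throws, so the body must be inspected.
  if (res.status >= 500) return "server error (HTTP 5xx)";

  const body = await res.json();
  if (!body.errors || body.errors.length === 0) return "no error";

  const codes = body.errors.map(
    (e: { extensions?: { code?: string } }) => e.extensions?.code ?? "UNKNOWN",
  );
  return codes.every((c: string) => USER_ERROR_CODES.has(c))
    ? `likely user error (${codes.join(", ")})`
    : `possible server error (${codes.join(", ")})`;
}

// Example: a query with an unknown field should come back as a validation
// (user) error, while a crashing resolver typically shows up as
// INTERNAL_SERVER_ERROR inside a 200 response.
classify("{ thisFieldDoesNotExist }").then(console.log);

In practice such codes can only be treated as a heuristic, which is exactly why the classification is fuzzy.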