
Enhancement request: It's difficult to understand why a test case was generated #1181


Open
jberryman opened this issue Feb 25, 2025 · 2 comments

Comments


jberryman commented Feb 25, 2025

EvoMaster version: 3.4.0
Running with evomaster --blackBox true --maxTime 600s --ratePerMinute 60 --problemType GRAPHQL --bbTargetUrl http://localhost:3000/graphql

I'm just getting started with the tool, and it has already successfully found some bugs in our GraphQL server. However, I'm seeing some false positives, and it's often difficult to understand why EvoMaster generated a particular test case.

I assume that internally the software must have some additional information it could print alongside each test case, for example the category of failure detected, or the heuristic that triggered the test. I think that would be a nice improvement if possible.

Thanks for your work on this software, it's quite interesting!

@arcuri82
Collaborator

hi @jberryman ,

thanks for reporting this. Nice to hear that you found faults with this tool ;)
One issue with GraphQL is that it is not trivial to distinguish between "user-error" (e.g., 4xx in HTTP) and "server-error" (e.g., 5xx).

Adding more info on why a specific test was generated would indeed be useful (especially for black-box testing).
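To illustrate the classification problem mentioned above: unlike REST, GraphQL servers commonly return HTTP 200 even for failed queries, putting the details in an `errors` array instead, so a 4xx/5xx split is not directly available. The sketch below is hypothetical (not EvoMaster's actual logic, and the `classify_graphql_response` helper and the `extensions.code` convention are assumptions; that field is common in server implementations but not mandated by the GraphQL specification):

```python
def classify_graphql_response(status_code, body):
    """Best-effort guess at whether a GraphQL failure is a user or server error.

    `body` is the parsed JSON response. This is a sketch of why the
    distinction is ambiguous, not a definitive classifier.
    """
    if status_code >= 500:
        return "server-error"  # clear-cut, like 5xx in a REST API
    errors = body.get("errors") or []
    if not errors:
        return "success"
    for err in errors:
        # Many servers set an extensions.code field, but the GraphQL
        # spec does not require it, so this heuristic can miss.
        code = (err.get("extensions") or {}).get("code", "")
        if code == "INTERNAL_SERVER_ERROR":
            return "server-error"
    # Validation errors, bad arguments, resolver bugs: all can look the
    # same on the wire (HTTP 200 plus an "errors" array).
    return "user-error"
```

On the wire, a malformed query and a crashing resolver can produce structurally identical responses, which is why a black-box tool has to rely on conventions like the one above.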

@arcuri82
Collaborator

btw, if you are running the application on localhost, likely you don't need the --ratePerMinute 60
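Based on the suggestion above, the original invocation from the issue could be simplified to something like the following (a sketch assuming the service runs locally and can handle unthrottled requests; all flags are taken from the reporter's command, minus the rate limit):

```shell
# Same black-box GraphQL run as in the report, without the rate limit,
# since throttling is usually unnecessary against localhost.
evomaster --blackBox true --maxTime 600s \
  --problemType GRAPHQL \
  --bbTargetUrl http://localhost:3000/graphql
```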
