Brief summary
I think I've encountered a bug in the ramping-vus executor. I've been testing my application under a specific load profile in which the active user count increases and decreases sequentially a few times; I need this to make sure that my application's autoscaler works properly.
I was running the tests in a k8s cluster using k6-operator, and at some point all of my k6 worker pods were OOM-killed by k8s. It turned out that the excessive memory consumption was a side effect of the issue described below.
I noticed that some VUs started to loop over my scenario, executing only its first step. I wasn't able to find out what was going wrong with them, but I found a workaround: increasing the gracefulRampDown parameter so that all VU iterations are executed to the end. This made me think that in some cases an executor which has received a HardStop signal is no longer able to execute a scenario properly.
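For reference, the workaround is a one-line change in the scenario options. A minimal sketch, assuming an illustrative "4m" value (the exact figure isn't important; it only needs to exceed the longest possible iteration):

export const options = {
  scenarios: {
    "default": {
      executor: "ramping-vus",
      startVUs: 1,
      // raised from "0s"; with enough headroom, ramp-downs no longer
      // hard-stop in-flight VUs mid-iteration
      gracefulRampDown: "4m",
      stages: [
        { duration: "2m", target: 20 },
        { duration: "2m", target: 10 },
        { duration: "2m", target: 20 },
        { duration: "2m", target: 0 },
      ],
    },
  },
};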
k6 version
0.56, 0.57, 1.0.0-rc
OS
macOS, Ubuntu
Docker version and image (if applicable)
0.56, 0.57, 1.0.0-rc
Steps to reproduce the problem
The following script reproduces the issue:
import { check, sleep } from 'k6';
import exec from 'k6/execution';
import http from 'k6/http';

export const options = {
  scenarios: {
    "default": {
      executor: "ramping-vus",
      startVUs: 1,
      gracefulRampDown: "0s",
      stages: [
        { duration: "2m", target: 20 },
        { duration: "2m", target: 10 },
        { duration: "2m", target: 20 },
        { duration: "2m", target: 0 },
      ],
    },
  },
};

export default async function () {
  const iterations = 10;
  for (let i = 0; i < iterations; i++) {
    console.log(`[[vuNum=${exec.vu.idInInstance}]] Begin iteration #${i}`);
    const res = await http.asyncRequest("GET", "http://test.k6.io/?ts=" + Math.round(randomIntBetween(1, 200)));
    console.log(`[[vuNum=${exec.vu.idInInstance}]] Middle of iteration #${i}`);
    let checkRes = check(res, {
      "Homepage welcome header present": (r) => r.body.indexOf("Welcome to the k6.io demo site!") !== -1,
    });
    console.log(`[[vuNum=${exec.vu.idInInstance}]] Almost done iteration #${i}`);
    sleep(1 * i); // each step sleeps longer than the previous one
    console.log(`[[vuNum=${exec.vu.idInInstance}]] Done iteration #${i}`);
  }
}

function randomIntBetween(min, max) {
  // min and max included
  return Math.floor(Math.random() * (max - min + 1) + min);
}
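To make the looping VUs easier to spot than by grepping console output, the script can be instrumented with custom counters. This is only a sketch (the metric names iterations_begun and iterations_completed are invented for illustration): if the bug triggers, the first counter keeps climbing while the second one stalls for the affected VUs.

import { Counter } from 'k6/metrics';

// Invented metric names, for illustration only
const begun = new Counter('iterations_begun');
const completed = new Counter('iterations_completed');

export default async function () {
  begun.add(1);
  // ... the scenario body from the script above ...
  completed.add(1); // never reached by a VU stuck re-running the first step
}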
Here is a summary of the test run:
Full run log file: k6.log.
Here is a fragment of the log (as you can see, vuNum=20 repeatedly logs Begin iteration #0):
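Given the script's console.log format, the repeated lines have this shape (k6's log prefixes and timestamps omitted):

[[vuNum=20]] Begin iteration #0
[[vuNum=20]] Begin iteration #0
[[vuNum=20]] Begin iteration #0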
Expected behaviour
All non-interrupted iterations are completed properly, i.e. each either fails or succeeds according to the scenario's conditions.
Actual behaviour
Some of the iterations do not proceed to completion.