The kinesis autoscaler can't scale up by less than double #101
Yes, it appears that this logic has diverged. Very unfortunate - in this case I believe the config parser should be modified, as folks will have configurations that rely on the setting of this value. Happy to take a PR for this, or I can fix it sometime next week.
Never mind - fixing it now
This should be fixed in 81b6fb8, version .9.8.3
@IanMeyers what's the status of 9.8.3? Is it coming any time soon?
@IanMeyers @moskyb I noticed that the autoscaler still fails to scale up by less than double. Excerpt from config:
I expect this to add 20% more capacity to the stream.
If I understand correctly, this line of code should add the current shard count to the scale up percent. Could someone look into this? Thanks!
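The expected arithmetic can be sketched as follows (a hypothetical illustration only - the class and method names are mine, not the autoscaler's; the 20% figure matches the expectation stated above):

```java
public class ExpectedScaleUp {

    // Expected interpretation: add scaleUpPct% of the current
    // capacity to the stream's existing shard count.
    static int scaleUp(int currentShards, double scaleUpPct) {
        return currentShards + (int) Math.ceil(currentShards * (scaleUpPct / 100.0));
    }

    public static void main(String[] args) {
        // 10 shards with scaleUpPct = 20 -> 12 shards (20% more capacity)
        System.out.println(scaleUp(10, 20));
    }
}
```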
Yep, this looks like it should definitely be scaling up to 2 shards based upon a 105.62% put records threshold. Can you please confirm that you are running version .9.8.4?
OK - if you could please deploy the .9.8.5 version that's been uploaded into the
Thanks, the reason is "Not requesting a scaling action because new shard count equals current shard count, or new shard count is 0"
Great - can you please turn on DEBUG level logging, and we'll be able to see exactly what the calculation was?
All good, here are the additional logs
Hello - so I missed it in your config the first time. Through version

Thx, Ian
Thanks Ian, the unit tests are really helpful and the documentation is clear :) One small thing I noticed is that a scale up action will always add at least one shard, while scaling down might not change the shard count (apart from min shardCount = 1 of course). So for this case -
Our stream will never scale below 9, which might not be desirable if the min shard count is e.g. 5 and the shard count naturally sits in that range. Anyway, just my 2 cents. Thanks for clarifying the scaling behaviour!
Hey there - yes, that was intentional. I'd rather we not scale down and leave the stream with ample capacity than over-scale and cause throttling. This could be added as a switch to the overall architecture, but I think it's better to be conservative on scaling down - as you find elsewhere with cool-offs in EC2, etc.
So a little while ago, after running into issues using a scale up percentage of less than 100%, I submitted this PR. My understanding was if I had an (abridged) config like:
and I had a stream that currently had 100 shards, then the Kinesis autoscaler would say "100 shards * 1.15, okay, the stream will have 115 shards when I scale up".
As far as I can tell from looking at the code though, that's not actually the case, as this line of code indicates that the autoscaler interprets scalePct: 115 as "add 115% of the shard's current capacity to its existing capacity". This means that scalePct: 115 on a stream with 100 shards will actually scale the stream up to 215 shards.

The issue here isn't that this is the behaviour - that's totally fine; however, the config parser will throw an error if scaleUpPct is less than 100, meaning that any scale up operation must at least double the capacity of the stream.

I'm happy to go in and modify this in whatever way is necessary - change it so that we can use a scaleUpPct < 100, or change the scaling behaviour - but I'm not sure what the actual expected behaviour is. I'm hoping the maintainers can provide some clarity on this :)
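The mismatch described above can be sketched as follows (a minimal, hypothetical sketch of the behaviour reported in this issue - names are illustrative, not the autoscaler's actual code):

```java
public class ScalePctBehaviour {

    // The config parser rejects scale-up percentages under 100 ...
    static void validateScaleUpPct(int scaleUpPct) {
        if (scaleUpPct < 100) {
            throw new IllegalArgumentException("scaleUpPct must be >= 100");
        }
    }

    // ... but the scaler ADDS scaleUpPct% of the current capacity,
    // so the smallest permitted value (100) already doubles the stream.
    static int scaleUp(int currentShards, int scaleUpPct) {
        return currentShards + (int) Math.ceil(currentShards * (scaleUpPct / 100.0));
    }

    public static void main(String[] args) {
        // scalePct: 115 on a 100-shard stream -> 215 shards, not 115
        System.out.println(scaleUp(100, 115));
    }
}
```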