Release v2.11.5
Changelog
Refer to the 2.11 Upgrade Guide for notes on backwards compatibility with 2.10.x.
Go Version
- 1.24.4 (#6948)
Dependencies
- github.com/nats-io/nats.go v1.43.0 (#6956)
- golang.org/x/crypto v0.39.0 (#6956)
- golang.org/x/time v0.12.0 (#6956)
Improved
General
- The `connz` monitoring endpoint now includes leafnode connections (#6949); see the example query below
- The `accstatsz` monitoring endpoint now contains leafnode, route and gateway connection stats (#6967)
JetStream
- Sourcing and mirroring should now resync more quickly when sourcing over leafnodes after a connection failure (#6981)
- Reduced lock contention when reporting stream ingest warnings (#6934)
- Log lines for resetting Raft WAL state have been clarified (#6938)
- Determining if acks are required in interest-based streams has been optimised with fewer memory allocations (#6990)
- Ephemeral R1 consumers will no longer log `new consumer leader` on clustered setups, reducing log noise when watchers etc. are in use (#7003)
Fixed
General
- Leafnodes with restrictive permissions can now route replies correctly when the message originates from a supercluster (#6931)
- Memory usage is now reported correctly on Linux systems with huge pages enabled (#7006)
JetStream
- Updating the `AllowMsgTTL` setting on a stream will now take effect correctly (#6922)
- A potential deadlock when purging stream consumers has been fixed (#6933)
- A race condition that could prevent stream snapshots on shutdown has been fixed (#6942)
- Streams should no longer desync after a partial catchup following a snapshot (#6943)
- Streams should no longer desync due to catchup messages with incorrect quorum (#6944)
- Intersection between two subject trees where one is nil will no longer panic (#6945)
- Consumer pull requests with `NoWait` will now return correctly from replicated consumers (#6960); see the fetch example at the end of these notes
- Mirrors now remove `Nats-Expected-` headers that could interfere with mirroring operations (#6961)
- Network-partitioned Raft nodes should no longer desync by accepting catchups from nodes with a lower term (#6951)
- A potential data race when accessing the cluster failed sequence count has been fixed (#6965)
- Corrected handling of append entry response conditions and recycling to the response pool (#6968)
- A potential data race when copying stream metadata has been fixed (#6983)
- Healthchecks will no longer unset a group Raft node when it is not fully set up (#6984)
- Stream retention policy changes are now correctly propagated to running consumers in all cases (#6995)
- Raft now uses monotonic time for heartbeat tracking and determining quorum, making it resilient against wall-clock drift or adjustments from NTP (#6999)
- The `healthz` monitoring endpoint no longer tries to fix up cluster node skews, as this could interfere with processing assignments (#7001)
- The consumer `DeliverLastPerSubject` delivery policy now correctly delivers messages and handles acks when there are interior deletes, such as when `MaxMsgsPerSubject` limits are in use on the stream (#7005); see the consumer example at the end of these notes
- Consumers that are up against the `MaxWaiting` limit will no longer respond if the request heartbeat is set, to avoid client tight loops (#7011)
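For the `NoWait` pull fix (#6960), the sketch below shows one way a client might issue a no-wait fetch using the nats.go `jetstream` package. It is a minimal example, not part of this release: the "ORDERS" stream and "worker" consumer names are placeholders, and the consumer is assumed to already exist on a clustered (replicated) stream.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}

	// "ORDERS" and "worker" are placeholder names; the consumer is assumed
	// to have been created earlier on a replicated stream.
	cons, err := js.Consumer(context.Background(), "ORDERS", "worker")
	if err != nil {
		log.Fatal(err)
	}

	// FetchNoWait asks for up to 10 messages and returns with whatever is
	// currently available instead of waiting for the batch to fill.
	batch, err := cons.FetchNoWait(10)
	if err != nil {
		log.Fatal(err)
	}
	for msg := range batch.Messages() {
		fmt.Println("received:", string(msg.Data()))
		msg.Ack()
	}
	if err := batch.Error(); err != nil {
		log.Println("fetch ended with error:", err)
	}
}
```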
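For the `DeliverLastPerSubject` fix (#7005), the sketch below illustrates the kind of configuration involved: a stream with a per-subject message limit, which produces interior deletes as newer messages replace older ones, and a consumer using the last-per-subject delivery policy. It uses the nats.go `jetstream` package; the stream name, subjects and consumer name are illustrative placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Keep only the most recent message per subject; replaced messages are
	// removed, leaving interior deletes in the stream.
	_, err = js.CreateStream(ctx, jetstream.StreamConfig{
		Name:              "STATE", // placeholder name
		Subjects:          []string{"state.>"},
		MaxMsgsPerSubject: 1,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Deliver only the latest message for each subject to the consumer.
	_, err = js.CreateOrUpdateConsumer(ctx, "STATE", jetstream.ConsumerConfig{
		Durable:       "latest-state", // placeholder name
		FilterSubject: "state.>",
		DeliverPolicy: jetstream.DeliverLastPerSubjectPolicy,
		AckPolicy:     jetstream.AckExplicitPolicy,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```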