// Ensure we don't compact the most recent blocks prematurely when another one of
// the same size still fits in the range. To do it, we consider valid a group only
// if it's before the most recent block or if it fully covers the range.
highestMinTime := blocks[len(blocks)-1].MinTime

for idx := 0; idx < len(groups); {
	group := groups[idx]

	// If the group covers a range before the most recent block, it's fine.
	if group.rangeEnd <= highestMinTime {
		idx++
		continue
	}

	// If the group covers the full range, it's fine.
	if group.maxTime()-group.minTime() == group.rangeLength() {
		idx++
		continue
	}

	// We hit into a group which would compact recent blocks prematurely,
	// so we need to filter it out.
	groups = append(groups[:idx], groups[idx+1:]...)
}
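To see what this filter actually does, here is a minimal, self-contained sketch; the group type, names, and values below are simplified stand-ins invented for illustration, not the planner's real types. A group whose blocks stop at t0+1h30m inside a 2h range is dropped, while a group covering its full range survives.

package main

import (
	"fmt"
	"time"
)

// group is a simplified stand-in for the planner's group type.
type group struct {
	name                 string
	minTime, maxTime     int64 // data covered by the group's blocks, in milliseconds
	rangeStart, rangeEnd int64 // the compaction range the group belongs to
}

func (g group) rangeLength() int64 { return g.rangeEnd - g.rangeStart }

func main() {
	hour := time.Hour.Milliseconds()

	// Two hypothetical groups inside a 2h compaction range starting at t0 = 0.
	groups := []group{
		{name: "covers the full 2h range", minTime: 0, maxTime: 2 * hour, rangeStart: 0, rangeEnd: 2 * hour},
		{name: "blocks stop at t0+1h30m", minTime: 0, maxTime: hour + hour/2, rangeStart: 0, rangeEnd: 2 * hour},
	}

	// minTime of the most recent block; here every block starts at t0.
	highestMinTime := int64(0)

	// Same two checks as the planner excerpt above.
	for idx := 0; idx < len(groups); {
		g := groups[idx]
		if g.rangeEnd <= highestMinTime || g.maxTime-g.minTime == g.rangeLength() {
			idx++
			continue
		}
		// Neither check passed: drop the group without advancing idx.
		groups = append(groups[:idx], groups[idx+1:]...)
	}

	for _, g := range groups {
		fmt.Println("kept:", g.name) // prints only "covers the full 2h range"
	}
}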
To Reproduce
Steps to reproduce the behavior:
1. Stop ingesting samples into a tenant.
2. Observe that the most recent blocks with an incomplete range are never compacted.
Expected behavior
The compactor should compact block1 and block2 into a single block after a sufficient amount of time.
Storage Engine
Blocks
Chunks
Additional Context
Add an additional condition that allows compaction to continue once 1 blockRange of time has passed:

// If more than 1 block range has passed since the group's maxTime, we can compact,
// assuming that all the required blocks have already been uploaded.
if int64(ulid.Now()) > group.maxTime()+group.rangeLength() {
	idx++
	continue
}
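For a rough sense of when this guard would unblock the example in this issue, here is a sketch with assumed values (t0 and the 2h smallest compaction range are assumptions); ulid.Now() is the current wall-clock time in milliseconds, so time.Now() is used as its equivalent here.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical values: t0 is when ingestion started, and the smallest
	// compaction range is assumed to be the default 2h.
	t0 := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	maxTime := t0.Add(90 * time.Minute) // last ingested sample at t0+1h30m
	rangeLength := 2 * time.Hour

	// The proposed condition now > maxTime + rangeLength first becomes true at:
	eligibleAt := maxTime.Add(rangeLength) // t0+3h30m
	fmt.Println("group becomes eligible for compaction after:", eligibleAt)
	fmt.Println("eligible now?", time.Now().After(eligibleAt))
}

With this guard in place, the group holding block1 and block2 would stop being filtered out one blockRange after its maxTime, instead of never.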
Describe the bug
The compactor doesn't compact blocks with maxTime not matching the blockRange.
Assume that there are the following blocks:

block1: minTime: t0+0h, maxTime: t0+1h30m
block2: minTime: t0+0h, maxTime: t0+1h30m

This scenario can happen if a tenant stops ingesting samples at t0+1h30m: the maxTime of each block is the timestamp of the last ingested sample. Because of the extra checks in the group_planner (quoted above), these blocks will never be picked for compaction: the group containing them neither ends before the most recent block's minTime nor covers its full block range.
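For concreteness, here is the example's timestamps plugged into the two planner checks quoted at the top of this issue (a sketch; the 2h smallest compaction range is an assumption and is not stated in the issue):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Offsets from t0.
	var (
		minTime        = time.Duration(0) // block1/block2 minTime: t0+0h
		maxTime        = 90 * time.Minute // block1/block2 maxTime: t0+1h30m
		rangeEnd       = 2 * time.Hour    // end of the (assumed) 2h range the blocks fall into
		rangeLength    = 2 * time.Hour
		highestMinTime = time.Duration(0) // the most recent block also starts at t0
	)

	fmt.Println("group ends before the most recent block:", rangeEnd <= highestMinTime)     // false
	fmt.Println("group covers its full range:            ", maxTime-minTime == rangeLength) // false
	// Both checks fail, so the group holding block1 and block2 is always filtered out.
}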
Why were these extra checks added? They seem to be copied over from the default planner in Thanos: https://github.com/thanos-io/thanos/blob/main/pkg/compact/planner.go#L126
Are these checks useful in parallel compaction?