Fix typos in code and docs #5784

Merged · 4 commits · Feb 26, 2024
14 changes: 7 additions & 7 deletions CHANGELOG.md
@@ -304,7 +304,7 @@
* [FEATURE] Compactor: Add `-compactor.skip-blocks-with-out-of-order-chunks-enabled` configuration to mark blocks containing index with out-of-order chunks for no compact instead of halting the compaction. #4707
* [FEATURE] Querier/Query-Frontend: Add `-querier.per-step-stats-enabled` and `-frontend.cache-queryable-samples-stats` configurations to enable query sample statistics. #4708
* [FEATURE] Add shuffle sharding for the compactor #4433
* [FEATURE] Querier: Use streaming for ingester metdata APIs. #4725
* [FEATURE] Querier: Use streaming for ingester metadata APIs. #4725
* [ENHANCEMENT] Update Go version to 1.17.8. #4602 #4604 #4658
* [ENHANCEMENT] Keep track of discarded samples due to bad relabel configuration in `cortex_discarded_samples_total`. #4503
* [ENHANCEMENT] Ruler: Add `-ruler.disable-rule-group-label` to disable the `rule_group` label on exported metrics. #4571
@@ -443,7 +443,7 @@
* `memberlist_client_kv_store_value_tombstones`
* `memberlist_client_kv_store_value_tombstones_removed_total`
* `memberlist_client_messages_to_broadcast_dropped_total`
* [ENHANCEMENT] Alertmanager: Added `-alertmanager.max-dispatcher-aggregation-groups` option to control max number of active dispatcher groups in Alertmanager (per tenant, also overrideable). When the limit is reached, Dispatcher produces log message and increases `cortex_alertmanager_dispatcher_aggregation_group_limit_reached_total` metric. #4254
* [ENHANCEMENT] Alertmanager: Added `-alertmanager.max-dispatcher-aggregation-groups` option to control max number of active dispatcher groups in Alertmanager (per tenant, also overridable). When the limit is reached, Dispatcher produces log message and increases `cortex_alertmanager_dispatcher_aggregation_group_limit_reached_total` metric. #4254
* [ENHANCEMENT] Alertmanager: Added `-alertmanager.max-alerts-count` and `-alertmanager.max-alerts-size-bytes` to control max number of alerts and total size of alerts that a single user can have in Alertmanager's memory. Adding more alerts will fail with a log message and incrementing `cortex_alertmanager_alerts_insert_limited_total` metric (per-user). These limits can be overrided by using per-tenant overrides. Current values are tracked in `cortex_alertmanager_alerts_limiter_current_alerts` and `cortex_alertmanager_alerts_limiter_current_alerts_size_bytes` metrics. #4253
* [ENHANCEMENT] Store-gateway: added `-store-gateway.sharding-ring.wait-stability-min-duration` and `-store-gateway.sharding-ring.wait-stability-max-duration` support to store-gateway, to wait for ring stability at startup. #4271
* [ENHANCEMENT] Ruler: added `rule_group` label to metrics `cortex_prometheus_rule_group_iterations_total` and `cortex_prometheus_rule_group_iterations_missed_total`. #4121
@@ -530,7 +530,7 @@
* [ENHANCEMENT] Alertmanager: validate configured `-alertmanager.web.external-url` and fail if ends with `/`. #4081
* [ENHANCEMENT] Alertmanager: added `-alertmanager.receivers-firewall.block.cidr-networks` and `-alertmanager.receivers-firewall.block.private-addresses` to block specific network addresses in HTTP-based Alertmanager receiver integrations. #4085
* [ENHANCEMENT] Allow configuration of Cassandra's host selection policy. #4069
* [ENHANCEMENT] Store-gateway: retry synching blocks if a per-tenant sync fails. #3975 #4088
* [ENHANCEMENT] Store-gateway: retry syncing blocks if a per-tenant sync fails. #3975 #4088
* [ENHANCEMENT] Add metric `cortex_tcp_connections` exposing the current number of accepted TCP connections. #4099
* [ENHANCEMENT] Querier: Allow federated queries to run concurrently. #4065
* [ENHANCEMENT] Label Values API call now supports `match[]` parameter when querying blocks on storage (assuming `-querier.query-store-for-labels-enabled` is enabled). #4133
@@ -607,7 +607,7 @@
* Prevent compaction loop in TSDB on data gap.
* [ENHANCEMENT] Query-Frontend now returns server side performance metrics using `Server-Timing` header when query stats is enabled. #3685
* [ENHANCEMENT] Runtime Config: Add a `mode` query parameter for the runtime config endpoint. `/runtime_config?mode=diff` now shows the YAML runtime configuration with all values that differ from the defaults. #3700
* [ENHANCEMENT] Distributor: Enable downstream projects to wrap distributor push function and access the deserialized write requests berfore/after they are pushed. #3755
* [ENHANCEMENT] Distributor: Enable downstream projects to wrap distributor push function and access the deserialized write requests before/after they are pushed. #3755
* [ENHANCEMENT] Add flag `-<prefix>.tls-server-name` to require a specific server name instead of the hostname on the certificate. #3156
* [ENHANCEMENT] Alertmanager: Remove a tenant's alertmanager instead of pausing it as we determine it is no longer needed. #3722
* [ENHANCEMENT] Blocks storage: added more configuration options to S3 client. #3775
@@ -895,7 +895,7 @@ Note the blocks storage compactor runs a migration task at startup in this versi
* [ENHANCEMENT] Chunks GCS object storage client uses the `fields` selector to limit the payload size when listing objects in the bucket. #3218 #3292
* [ENHANCEMENT] Added shuffle sharding support to ruler. Added new metric `cortex_ruler_sync_rules_total`. #3235
* [ENHANCEMENT] Return an explicit error when the store-gateway is explicitly requested without a blocks storage engine. #3287
* [ENHANCEMENT] Ruler: only load rules that belong to the ruler. Improves rules synching performances when ruler sharding is enabled. #3269
* [ENHANCEMENT] Ruler: only load rules that belong to the ruler. Improves rules syncing performances when ruler sharding is enabled. #3269
* [ENHANCEMENT] Added `-<prefix>.redis.tls-insecure-skip-verify` flag. #3298
* [ENHANCEMENT] Added `cortex_alertmanager_config_last_reload_successful_seconds` metric to show timestamp of last successful AM config reload. #3289
* [ENHANCEMENT] Blocks storage: reduced number of bucket listing operations to list block content (applies to newly created blocks only). #3363
@@ -1453,7 +1453,7 @@ This is the first major release of Cortex. We made a lot of **breaking changes**
* `-flusher.concurrent-flushes` for number of concurrent flushes.
* `-flusher.flush-op-timeout` is duration after which a flush should timeout.
* [FEATURE] Ingesters can now have an optional availability zone set, to ensure metric replication is distributed across zones. This is set via the `-ingester.availability-zone` flag or the `availability_zone` field in the config file. #2317
* [ENHANCEMENT] Better re-use of connections to DynamoDB and S3. #2268
* [ENHANCEMENT] Better reuse of connections to DynamoDB and S3. #2268
* [ENHANCEMENT] Reduce number of goroutines used while executing a single index query. #2280
* [ENHANCEMENT] Experimental TSDB: Add support for local `filesystem` backend. #2245
* [ENHANCEMENT] Experimental TSDB: Added memcached support for the TSDB index cache. #2290
@@ -2243,7 +2243,7 @@ migrate -path <absolute_path_to_cortex>/cmd/cortex/migrations -database postgre

## 0.4.0 / 2019-12-02

* [CHANGE] The frontend component has been refactored to be easier to re-use. When upgrading the frontend, cache entries will be discarded and re-created with the new protobuf schema. #1734
* [CHANGE] The frontend component has been refactored to be easier to reuse. When upgrading the frontend, cache entries will be discarded and re-created with the new protobuf schema. #1734
* [CHANGE] Removed direct DB/API access from the ruler. `-ruler.configs.url` has been now deprecated. #1579
* [CHANGE] Removed `Delta` encoding. Any old chunks with `Delta` encoding cannot be read anymore. If `ingester.chunk-encoding` is set to `Delta` the ingester will fail to start. #1706
* [CHANGE] Setting `-ingester.max-transfer-retries` to 0 now disables hand-over when ingester is shutting down. Previously, zero meant infinite number of attempts. #1771
4 changes: 2 additions & 2 deletions RELEASE.md
@@ -85,7 +85,7 @@ To prepare release branch, first create new release branch (release-X.Y) in Cort
* `[BUGFIX]`
- Run `./tools/release/check-changelog.sh LAST-RELEASE-TAG...master` and add any missing PR which includes user-facing changes

Once your PR with release prepartion is approved, merge it to "release-X.Y" branch, and continue with publishing.
Once your PR with release preparation is approved, merge it to "release-X.Y" branch, and continue with publishing.

### Publish a release candidate

@@ -127,7 +127,7 @@ To publish a stable release:
1. Open a PR to add the new version to the backward compatibility integration test (`integration/backward_compatibility_test.go`)

### <a name="sing-and-sbom"></a>Sign the release artifacts and generate SBOM
1. Make sure you have the release brnach checked out, and you don't have any local modifications
1. Make sure you have the release branch checked out, and you don't have any local modifications
1. Create and `cd` to an empty directory not within the project directory
1. Run `mkdir sbom`
1. Generate SBOMs using https://github.com/kubernetes-sigs/bom
2 changes: 1 addition & 1 deletion build-image/Dockerfile
@@ -7,7 +7,7 @@ RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Install website builder dependencies. Whenever you change these version, please also change website/package.json
# and viceversa.
# and vice versa.
RUN npm install -g [email protected] [email protected]

ENV SHFMT_VERSION=3.2.4
2 changes: 1 addition & 1 deletion docs/blocks-storage/migrate-from-chunks-to-blocks.md
@@ -104,7 +104,7 @@ Where parameters are:

After starting new pod in `ingester-new` statefulset, script then triggers `/shutdown` endpoint on the old ingester. When the flushing on the old ingester is complete, scale down of statefulset continues, and process repeats.

_The script supports both migration from chunks to blocks, and viceversa (eg. rollback)._
_The script supports both migration from chunks to blocks, and vice versa (eg. rollback)._

### Known issues

4 changes: 2 additions & 2 deletions docs/blocks-storage/querier.md
@@ -522,11 +522,11 @@ blocks_storage:
# CLI flag: -blocks-storage.bucket-store.max-inflight-requests
[max_inflight_requests: <int> | default = 0]

# Maximum number of concurrent tenants synching blocks.
# Maximum number of concurrent tenants syncing blocks.
# CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency
[tenant_sync_concurrency: <int> | default = 10]

# Maximum number of concurrent blocks synching per tenant.
# Maximum number of concurrent blocks syncing per tenant.
# CLI flag: -blocks-storage.bucket-store.block-sync-concurrency
[block_sync_concurrency: <int> | default = 20]

4 changes: 2 additions & 2 deletions docs/blocks-storage/store-gateway.md
@@ -633,11 +633,11 @@ blocks_storage:
# CLI flag: -blocks-storage.bucket-store.max-inflight-requests
[max_inflight_requests: <int> | default = 0]

# Maximum number of concurrent tenants synching blocks.
# Maximum number of concurrent tenants syncing blocks.
# CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency
[tenant_sync_concurrency: <int> | default = 10]

# Maximum number of concurrent blocks synching per tenant.
# Maximum number of concurrent blocks syncing per tenant.
# CLI flag: -blocks-storage.bucket-store.block-sync-concurrency
[block_sync_concurrency: <int> | default = 20]

2 changes: 1 addition & 1 deletion docs/configuration/arguments.md
@@ -510,7 +510,7 @@ If you are using a managed memcached service from [Google Cloud](https://cloud.g

## Logging of IP of reverse proxy

If a reverse proxy is used in front of Cortex it might be diffult to troubleshoot errors. The following 3 settings can be used to log the IP address passed along by the reverse proxy in headers like X-Forwarded-For.
If a reverse proxy is used in front of Cortex it might be difficult to troubleshoot errors. The following 3 settings can be used to log the IP address passed along by the reverse proxy in headers like X-Forwarded-For.

- `-server.log_source_ips_enabled`

4 changes: 2 additions & 2 deletions docs/configuration/config-file-reference.md
@@ -1067,11 +1067,11 @@ bucket_store:
# CLI flag: -blocks-storage.bucket-store.max-inflight-requests
[max_inflight_requests: <int> | default = 0]

# Maximum number of concurrent tenants synching blocks.
# Maximum number of concurrent tenants syncing blocks.
# CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency
[tenant_sync_concurrency: <int> | default = 10]

# Maximum number of concurrent blocks synching per tenant.
# Maximum number of concurrent blocks syncing per tenant.
# CLI flag: -blocks-storage.bucket-store.block-sync-concurrency
[block_sync_concurrency: <int> | default = 20]

4 changes: 2 additions & 2 deletions docs/guides/gossip-ring-getting-started.md
@@ -10,11 +10,11 @@ but it can also build its own KV store on top of memberlist library using a goss

This short guide shows how to start Cortex in [single-binary mode](../architecture.md) with memberlist-based ring.
To reduce number of required dependencies in this guide, it will use [blocks storage](../blocks-storage/_index.md) with no shipping to external stores.
Storage engine and external storage configuration are not dependant on the ring configuration.
Storage engine and external storage configuration are not dependent on the ring configuration.

## Single-binary, two Cortex instances

For simplicity and to get started, we'll run it as a two instances of Cortex on local computer.
For simplicity and to get started, we'll run it as two instances of Cortex on local computer.
We will use prepared configuration files ([file 1](../../configuration/single-process-config-blocks-gossip-1.yaml), [file 2](../../configuration/single-process-config-blocks-gossip-2.yaml)), with no external
dependencies.

2 changes: 1 addition & 1 deletion docs/guides/tls.md
@@ -38,7 +38,7 @@ openssl x509 -req -in server.csr -CA root.crt -CAkey root.key -CAcreateserial -o

Note that the above script generates certificates that are valid for 100000 days.
This can be changed by adjusting the `-days` option in the above commands.
It is recommended that the certs be replaced atleast once every 2 years.
It is recommended that the certs be replaced at least once every 2 years.

The above script generates keys `client.key, server.key` and certs
`client.crt, server.crt` for both the client and server. The CA cert is
2 changes: 1 addition & 1 deletion docs/proposals/api_design.md
@@ -71,7 +71,7 @@ Cortex will utilize path based versioning similar to both Prometheus and Alertma

The new API endpoints and the current http prefix endpoints can be maintained concurrently. The flag to configure these endpoints will be maintained as `http.prefix`. This will allow us to roll out the new API without disrupting the current routing schema. The original http prefix endpoints can maintained indefinitely or be phased out over time. Deprecation warnings can be added to the current API either when initialized or utilized. This can be accomplished by injecting a middleware that logs a warning whenever a legacy API endpoint is used.

In cases where Cortex is run as a single binary, the Alertmanager module will only be accesible using the new API.
In cases where Cortex is run as a single binary, the Alertmanager module will only be accessible using the new API.

### Implementation

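To illustrate the deprecation-warning middleware mentioned in the excerpt above, here is a minimal sketch using only the Go standard library; the handler name and logging call are assumptions for illustration, not the actual Cortex implementation:

```go
package api

import (
	"log"
	"net/http"
)

// legacyWarningMiddleware wraps an existing handler and logs a warning each
// time a request reaches a legacy (pre-versioned) endpoint, then serves the
// request unchanged.
func legacyWarningMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("deprecated API endpoint used: %s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}
```

A wrapper like this could be applied only to the legacy routes when the router is built, leaving the new versioned endpoints untouched.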
2 changes: 1 addition & 1 deletion docs/proposals/parallel-compaction.md
@@ -11,7 +11,7 @@ slug: parallel-compaction
---

## Introduction
As a part of pushing Cortex’s scaling capability at AWS, we have done performance testing with Cortex and found the compactor to be one of the main limiting factors for higher active timeseries limit per tenant. The documentation [Compactor](https://cortexmetrics.io/docs/blocks-storage/compactor/#how-compaction-works) describes the responsibilities of a compactor, and this proposal focuses on the limitations of the current compactor architecture. In the current architecture, compactor has simple sharding, meaning that a single tenant is sharded to a single compactor. The compactor generates compaction groups, which are groups of Prometheus TSDB blocks that can be compacted together, independently of another group. However, a compactor currnetly handles compaction groups of a single tenant iteratively, meaning that blocks belonging non-overlapping times are not compacted in parallel.
As a part of pushing Cortex’s scaling capability at AWS, we have done performance testing with Cortex and found the compactor to be one of the main limiting factors for higher active timeseries limit per tenant. The documentation [Compactor](https://cortexmetrics.io/docs/blocks-storage/compactor/#how-compaction-works) describes the responsibilities of a compactor, and this proposal focuses on the limitations of the current compactor architecture. In the current architecture, compactor has simple sharding, meaning that a single tenant is sharded to a single compactor. The compactor generates compaction groups, which are groups of Prometheus TSDB blocks that can be compacted together, independently of another group. However, a compactor currently handles compaction groups of a single tenant iteratively, meaning that blocks belonging non-overlapping times are not compacted in parallel.

Cortex ingesters are responsible for uploading TSDB blocks with data emitted by a tenant. These blocks are considered as level-1 blocks, as they contain duplicate timeseries for the same time interval, depending on the replication factor. [Vertical compaction](https://cortexmetrics.io/docs/blocks-storage/compactor/#how-compaction-works) is done to merge all the blocks with the same time interval and deduplicate the samples. These merged blocks are level-2 blocks. Subsequent compactions such as horizontal compaction can happen, further increasing the compaction level of the blocks.

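As a rough illustration of the parallelism this proposal argues for, the sketch below compacts independent groups concurrently instead of handling them iteratively; the types and function names are hypothetical, not the Cortex compactor's API:

```go
package compactor

import "sync"

// compactionGroup is a hypothetical set of block IDs that can be compacted
// together, independently of any other group.
type compactionGroup struct {
	blockIDs []string
}

// compactInParallel runs one goroutine per group rather than compacting the
// groups of a single tenant one at a time, and collects any errors.
func compactInParallel(groups []compactionGroup, compact func(compactionGroup) error) []error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	for _, g := range groups {
		wg.Add(1)
		go func(g compactionGroup) {
			defer wg.Done()
			if err := compact(g); err != nil {
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
			}
		}(g)
	}
	wg.Wait()
	return errs
}
```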
2 changes: 1 addition & 1 deletion docs/proposals/ring-multikey.md
@@ -112,7 +112,7 @@ type MultiKey interface {
```

* SplitById - responsible to split the codec in multiple keys and interface.
* JoinIds - responsible to receive multiple keys and interface creating the codec objec
* JoinIds - responsible to receive multiple keys and interface creating the codec object
* GetChildFactory - Allow the kv store to know how to serialize and deserialize the interface returned by “SplitById”.
The interface returned by SplitById need to be a proto.Message
* FindDifference - optimization used to know what need to be updated or deleted from a codec. This avoids updating all keys every
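For reference, here is a sketch of what the `MultiKey` interface described above could look like in Go; the signatures are assumptions reconstructed from the bullet points, not the proposal's definitive code, and the import path is assumed:

```go
package codec

import "github.com/gogo/protobuf/proto"

// MultiKey is an illustrative reconstruction of the interface described above.
type MultiKey interface {
	// SplitById splits the codec into multiple keys, each mapped to the
	// child value to be stored under that key.
	SplitById() map[string]interface{}

	// JoinIds rebuilds the codec object from the individual keys and their
	// child values.
	JoinIds(in map[string]interface{})

	// GetChildFactory lets the KV store serialize and deserialize the
	// values returned by SplitById; they must be proto.Message.
	GetChildFactory() proto.Message

	// FindDifference reports what must be updated or deleted relative to
	// another codec, so unchanged keys are not rewritten on every change.
	FindDifference(that MultiKey) (toUpdate MultiKey, toDelete []string, err error)
}
```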
6 changes: 3 additions & 3 deletions integration/e2e/service.go
@@ -523,9 +523,9 @@ func (s *HTTPService) Metrics() (_ string, err error) {
localPort := s.networkPortsContainerToLocal[s.httpPort]

// Fetch metrics.
// We use IPv4 ports from Dokcer for e2e tests, so lt's use 127.0.0.1 to force IPv4; localhost may map to IPv6.
// It's possible that same port number map to IPv4 for serviceA and IPv6 for servieB, so using localhost makes
// tests flaky because you connect to serviceB while you want to connec to serviceA.
// We use IPv4 ports from Docker for e2e tests, so let's use 127.0.0.1 to force IPv4; localhost may map to IPv6.
// It's possible that same port number map to IPv4 for serviceA and IPv6 for serviceB, so using localhost makes
// tests flaky because you connect to serviceB while you want to connect to serviceA.
res, err := GetRequest(fmt.Sprintf("http://127.0.0.1:%d/metrics", localPort))
if err != nil {
return "", err
2 changes: 1 addition & 1 deletion pkg/alertmanager/alertmanager.go
@@ -111,7 +111,7 @@ type Alertmanager struct {
lastPipeline notify.Stage

// The Dispatcher is the only component we need to recreate when we call ApplyConfig.
// Given its metrics don't have any variable labels we need to re-use the same metrics.
// Given its metrics don't have any variable labels we need to reuse the same metrics.
dispatcherMetrics *dispatch.DispatcherMetrics
// This needs to be set to the hash of the config. All the hashes need to be same
// for deduping of alerts to work, hence we need this metric. See https://github.com/prometheus/alertmanager/issues/596
2 changes: 1 addition & 1 deletion pkg/alertmanager/merger/v2_silence_id_test.go
@@ -8,7 +8,7 @@ import (

func TestV2SilenceId_ReturnsNewestSilence(t *testing.T) {

// We re-use MergeV2Silences so we rely on that being primarily tested elsewhere.
// We reuse MergeV2Silences so we rely on that being primarily tested elsewhere.

in := [][]byte{
[]byte(`{"id":"77b580dd-1d9c-4b7e-9bba-13ac173cb4e5","status":{"state":"expired"},` +
2 changes: 1 addition & 1 deletion pkg/alertmanager/merger/v2_silences.go
@@ -58,7 +58,7 @@ func mergeV2Silences(in v2_models.GettableSilences) (v2_models.GettableSilences,
result = append(result, silence)
}

// Re-use Alertmanager sorting for silences.
// Reuse Alertmanager sorting for silences.
v2.SortSilences(result)

return result, nil