
Release logzio-monitoring Helm chart v5.3.0 #462


Merged · 7 commits · Apr 17, 2024
2 changes: 1 addition & 1 deletion charts/fluentd/Chart.yaml
@@ -4,7 +4,7 @@ description: A Helm chart for shipping Kubernetes logs via Fluentd.
keywords:
- logging
- fluentd
version: 0.29.1
version: 0.29.2
appVersion: 1.5.1
maintainers:
- name: Yotam loewenbach
2 changes: 2 additions & 0 deletions charts/fluentd/README.md
@@ -303,6 +303,8 @@ If needed, the fluentd image can be changed to support windows server 2022 with


## Change log
- **0.29.2**:
- Enhanced env_id handling to support both numeric and string formats.
- **0.29.1**:
- Added `enabled` value to conditionally control the deployment of this chart by a parent chart.
- Added `daemonset.LogFileRefreshInterval` and `windowsDaemonset.LogFileRefreshInterval` values to control the refresh interval of the watched log files list.
2 changes: 1 addition & 1 deletion charts/fluentd/templates/configmap.yaml
@@ -36,7 +36,7 @@ data:
{{- end }}

{{- if .Values.env_id }}
env-id.conf : {{ toYaml .Values.configmap.envId | indent 2 }}
env-id.conf : {{ toYaml .Values.configmap.envId | quote | indent 2 }}
{{- end }}
{{- if .Values.configmap.extraConfig }}
{{- range $key, $value := fromYaml .Values.configmap.extraConfig }}
2 changes: 1 addition & 1 deletion charts/fluentd/templates/daemonset.yaml
@@ -277,7 +277,7 @@ spec:
value: {{ .Values.windowsDaemonset.containersPath | quote }}
{{- if .Values.env_id }}
- name: ENV_ID
value: {{ .Values.env_id }}
value: {{ .Values.env_id | quote}}
{{- end }}
{{- if .Values.windowsDaemonset.extraEnv }}
{{ toYaml .Values.windowsDaemonset.extraEnv | indent 8 }}
2 changes: 1 addition & 1 deletion charts/logzio-k8s-events/Chart.yaml
@@ -5,7 +5,7 @@ keywords:
- logging
- k8s
- kubernetes
version: 0.0.3
version: 0.0.4
appVersion: 0.0.2
maintainers:
- name: Raul Gurshumov
2 changes: 2 additions & 0 deletions charts/logzio-k8s-events/README.md
@@ -98,6 +98,8 @@ kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.tain


## Change log
- **0.0.4**:
- Enhanced env_id handling to support both numeric and string formats.
- **0.0.3**:
- Rename listener template.
- **0.0.2**:
2 changes: 1 addition & 1 deletion charts/logzio-k8s-events/templates/secret.yaml
@@ -11,5 +11,5 @@ type: Opaque
stringData:
logzio-log-shipping-token: {{ required "Logzio shipping token is required!" .Values.secrets.logzioShippingToken }}
logzio-log-listener: {{ template "logzio-k8s-events.listenerHost" . }}
env-id: {{ .Values.secrets.env_id }}
env-id: {{ .Values.secrets.env_id | quote }}
{{- end }}
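
For context (not part of this diff, and the value shown is hypothetical): the `| quote` additions in the templates above guard against numeric `env_id` values. Without quoting, Helm renders a purely numeric `env_id` as a YAML integer, which is rejected for string-only fields such as a Secret's `stringData`:

```yaml
# values.yaml (hypothetical)
secrets:
  env_id: 1234

# Rendered without `| quote` — env-id becomes a YAML integer and fails
# validation for stringData:
env-id: 1234

# Rendered with `| quote` — always a string:
env-id: "1234"
```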
2 changes: 1 addition & 1 deletion charts/logzio-logs-collector/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v2
name: logzio-logs-collector
version: 1.0.0
version: 1.0.1
description: kubernetes logs collection agent for logz.io based on opentelemetry collector
type: application
home: https://logz.io/
5 changes: 5 additions & 0 deletions charts/logzio-logs-collector/README.md
@@ -142,5 +142,10 @@ Multi line logs configuration
By default, the collector supports various log formats (including multiline logs) such as `CRI-O`, `CRI-Containerd`, and `Docker`. You can configure the chart to parse custom multiline log patterns according to your needs; see the [Customizing Multiline Log Handling](./examples/multiline.md) guide for more details.

## Change log
* 1.0.1
- Update multiline parsing
- Update error detection in logs
- Change default log type
- Enhanced env_id handling to support both numeric and string formats.
* 1.0.0
- kubernetes logs collection agent for logz.io based on opentelemetry collector
47 changes: 26 additions & 21 deletions charts/logzio-logs-collector/examples/multiline.md
@@ -22,7 +22,7 @@ Creating Custom Formats for Multiline Logs

To configure custom formats, you must understand your logs' structure to accurately use `is_first_entry` or `is_last_entry` expressions. Regular expressions (regex) are powerful tools in matching specific log patterns, allowing you to identify the start or end of a multiline log entry effectively.

Custom multiline `recombine` operators should be added before `move from attributes.log to body`:
Custom multiline `recombine` operators should be added after `move from attributes.log to body`:
```yaml
# Update body field after finishing all parsing
- from: attributes.log
@@ -41,7 +41,7 @@ config:
operators:
- id: get-format
routes:
- expr: body matches "^\\{"
- expr: body matches "^{.*}$"
output: parser-docker
- expr: body matches "^[^ Z]+ "
output: parser-crio
@@ -105,17 +105,19 @@ config:
- from: attributes.uid
to: resource["k8s.pod.uid"]
type: move
- id: parser-json
type: json_parser
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
# Add custom multiline parsers here. Add more `type: recombine` operators for custom multiline formats
# https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/recombine.md
- type: recombine
id: stack-errors-recombine
combine_field: body
is_first_entry: body matches "^[^\\s]"
source_identifier: attributes["log.file.path"]
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
```
### Examples

@@ -138,15 +140,16 @@ config:
filelog:
operators:
# previous operators
- type: recombine
id: Java-Stack-Trace-Errors
combine_field: body
is_first_entry: body matches "^[\\w]+(Exception|Error)"
combine_with: "\n"
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
# custom multiline recombine
- type: recombine
id: Java-Stack-Trace-Errors
combine_field: body
is_first_entry: body matches "^[\\w]+(Exception|Error)"
source_identifier: attributes["log.file.path"]
```

#### Python Tracebacks
@@ -169,15 +172,16 @@ config:
filelog:
operators:
# previous operators
- type: recombine
id: Python-Tracebacks
combine_field: body
is_first_entry: body matches "^Traceback"
combine_with: "\n"
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
# custom multiline recombine
- type: recombine
id: Python-Tracebacks
combine_field: body
is_first_entry: body matches "^Traceback"
source_identifier: attributes["log.file.path"]
```

#### Custom Multiline Log Format
@@ -199,13 +203,14 @@ config:
filelog:
operators:
# previous operators
- type: recombine
id: custom-multiline
combine_field: body
is_first_entry: body matches "^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}"
combine_with: "\n"
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
# custom multiline recombine
- type: recombine
id: custom-multiline
combine_field: body
is_first_entry: body matches "^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}"
source_identifier: attributes["log.file.path"]
```
2 changes: 1 addition & 1 deletion charts/logzio-logs-collector/templates/secret.yaml
@@ -8,7 +8,7 @@ metadata:
namespace: {{ .Release.Namespace }}
type: Opaque
stringData:
env-id: {{.Values.secrets.env_id}}
env-id: {{.Values.secrets.env_id | quote}}
log-type: {{ .Values.secrets.logType}}
logzio-listener-region: {{ .Values.secrets.LogzioRegion}}
{{ if .Values.secrets.logzioLogsToken}}
19 changes: 12 additions & 7 deletions charts/logzio-logs-collector/values.yaml
@@ -25,7 +25,7 @@ secrets:
# environment identifier attribute that will be added to all logs
env_id: "my_env"
# default log type field
logType: "k8s"
logType: "agent-k8s"
# Secret with your logzio logs shipping token
logzioLogsToken: "token"
# Secret with your logzio region
@@ -61,7 +61,7 @@ config:
- set(attributes["log.level"], "INFO")
- set(attributes["log.level"], "DEBUG") where (IsMatch(body, ".*\\b(?i:debug)\\b.*"))
- set(attributes["log.level"], "WARNING") where (IsMatch(body, ".*\\b(?i:warning|warn)\\b.*"))
- set(attributes["log.level"], "ERROR") where (IsMatch(body, ".*\\b(?i:error|failure|failed|exception|panic)\\b.*"))
- set(attributes["log.level"], "ERROR") where (IsMatch(body, ".*(?i:(?:error|fail|failure|exception|panic)).*"))
transform/log_type:
error_mode: ignore
log_statements:
@@ -132,7 +132,7 @@ config:
# Find out which format is used by kubernetes
- id: get-format
routes:
- expr: body matches "^\\{"
- expr: body matches "^{.*"
output: parser-docker
- expr: body matches "^[^ Z]+ "
output: parser-crio
@@ -201,17 +201,22 @@ config:
- from: attributes.uid
to: resource["k8s.pod.uid"]
type: move
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
# conditional json parser
- type: json_parser
id: json
parse_from: body
if: 'body matches "^{.*}$"'
# multiline parsers. add more `type: recombine` operators for custom multiline formats
# https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/recombine.md
- type: recombine
id: stack-errors-recombine
combine_field: body
is_first_entry: body matches "^[^\\s]"
source_identifier: attributes["log.file.path"]
# Update body field after finishing all parsing
- from: attributes.log
to: body
type: move
otlp:
protocols:
grpc:
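
For illustration (not part of the chart values, and the records shown are hypothetical): the conditional `json_parser` added above only runs when the log body looks like a JSON object, in which case its keys are parsed into attributes; plain-text bodies pass through unchanged:

```yaml
# Input record whose body matches "^{.*}$" — the parser runs:
body: '{"msg": "user created", "level": "info"}'
# Resulting record — the JSON keys are added as attributes:
attributes:
  msg: user created
  level: info

# Input record with a plain-text body — the condition is false and the parser is skipped:
body: 'user created level=info'
```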
16 changes: 10 additions & 6 deletions charts/logzio-monitoring/Chart.yaml
@@ -2,32 +2,36 @@ apiVersion: v2
name: logzio-monitoring
description: logzio-monitoring allows you to ship logs, metrics, traces and security reports from your Kubernetes cluster using the OpenTelemetry collector for metrics and traces, Fluentd for logs, and Trivy for security reports.
type: application
version: 5.2.4
version: 5.3.0


sources:
- https://github.com/logzio/logzio-helm
dependencies:
- name: logzio-fluentd
version: "0.29.0"
version: "0.29.2"
repository: "https://logzio.github.io/logzio-helm/"
condition: logs.enabled
- name: logzio-k8s-telemetry
version: "4.1.3"
version: "4.2.0"
repository: "https://logzio.github.io/logzio-helm/"
condition: metricsOrTraces.enabled
- name: logzio-trivy
version: "0.3.0"
version: "0.3.1"
repository: "https://logzio.github.io/logzio-helm/"
condition: securityReport.enabled
- name: opencost
version: "1.3.0"
repository: "https://opencost.github.io/opencost-helm-chart"
condition: finops.enabled
- name: logzio-k8s-events
version: "0.0.3"
version: "0.0.4"
repository: "https://logzio.github.io/logzio-helm/"
condition: deployEvents.enabled

- name: logzio-logs-collector
version: "1.0.1"
repository: "https://logzio.github.io/logzio-helm/"
condition: logs.enabled
maintainers:
- name: yotamloe
email: [email protected]
46 changes: 46 additions & 0 deletions charts/logzio-monitoring/README.md
@@ -8,6 +8,7 @@ The `logzio-monitoring` Helm Chart facilitates the shipping of Kubernetes teleme

This project packages the following Helm Charts:
- [logzio-fluentd](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd) for shipping logs via Fluentd.
- [logzio-logs-collector](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-logs-collector) for shipping logs via OpenTelemetry.
- [logzio-telemetry](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry) for metrics and traces via OpenTelemetry Collector.
- [logzio-trivy](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-trivy) for security reports via Trivy operator.
- [logzio-k8s-events](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-k8s-events) for Kubernetes deployment events.
@@ -125,6 +126,22 @@ For example, to change a value named `someField` in `logzio-telemetry`'s `values
--set logzio-k8s-telemetry.someField="my new value"
```

### Migrate to OpenTelemetry for log collection

The `logzio-fluentd` chart will be disabled by default in favor of the `logzio-logs-collector` for log collection in upcoming releases. To migrate to `logzio-logs-collector`, add the following `--set` flags:

```sh
helm install -n monitoring \
--set logs.enabled=true \
--set logzio-fluentd.enabled=false \
--set logzio-logs-collector.enabled=true \
--set logzio-logs-collector.secrets.logzioLogsToken=<<token>> \
--set logzio-logs-collector.secrets.logzioRegion=<<region>> \
--set logzio-logs-collector.secrets.env_id=<<env_id>> \
--set logzio-logs-collector.secrets.logType=<<log_type>> \
logzio-monitoring logzio-helm/logzio-monitoring
```

### Sending telemetry data from EKS on Fargate

To ship logs from pods running on Fargate, set the `fargateLogRouter.enabled` value to `true`. This will deploy a dedicated `aws-observability` namespace and a `configmap` for the Fargate log router. More information about EKS Fargate logging can be found [here](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html)
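
For example, a minimal install enabling the Fargate log router might look like the sketch below (not taken from the original README; add your log shipping token and region flags as shown earlier in this README):

```sh
helm install -n monitoring \
  --set logs.enabled=true \
  --set fargateLogRouter.enabled=true \
  logzio-monitoring logzio-helm/logzio-monitoring
```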
@@ -206,6 +223,35 @@ There are two possible approaches to the upgrade you can choose from:


## Changelog
- **5.3.0**:
- Add `logzio-logs-collector.enabled` + `fluentd.enabled` values
- Upgrade `logzio-k8s-telemetry` to `4.2.0`:
- Upgraded opentelemetry-collector-contrib image to v0.97.0
- Added Kubernetes objects receiver
- Removed servicegraph connector from span metrics configuration
- Allow env_id & p8s_logzio_name non string values
- Upgrade `logzio-logs-collector` version to `1.0.1`:
- Create NOTES.txt for Helm install notes
- Enhanced env_id handling to support both numeric and string formats
- Change default log type
- Update multiline parsing and error detection
- Update error detection in logs
- Upgrade `logzio-fluentd` to `0.29.2`:
- Enhanced env_id handling to support both numeric and string formats
- Upgrade `logzio-trivy` to `0.4.0`:
- Enhanced env_id handling to support both numeric and string formats
- Upgrade `logzio-k8s-events` to `0.0.4`:
- Enhanced env_id handling to support both numeric and string formats
- **5.2.5**:
- **Deprecation notice**: the `logzio-fluentd` chart will be disabled by default in favor of `logzio-logs-collector` for log collection in upcoming releases.
- Added `logzio-logs-collector` version `1.0.0`:
- otel collector daemonset designed and configured to function as log collection agent
- eks fargate support
- adds logzio required fields (`log_level`, `type`, `env_id` and more)
- `enabled` value to enable/disable deployment from parent charts
- Upgrade `logzio-fluentd` to `0.29.1`:
- Added `enabled` value to conditionally control the deployment of this chart by a parent chart.
- Added `daemonset.LogFileRefreshInterval` and `windowsDaemonset.LogFileRefreshInterval` values to control the refresh interval of the watched log files list.
- **5.2.4**:
- Update `logzio-k8s-telemetry` sub chart version to `4.1.3`
- **5.2.3**:
37 changes: 37 additions & 0 deletions charts/logzio-monitoring/templates/NOTES.txt
@@ -0,0 +1,37 @@

{{- if and (.Values.logs.enabled) (not (index .Values "logzio-logs-collector" "enabled")) (index .Values "logzio-fluentd" "enabled") }}
[ DEPRECATION ] You are using the Fluentd agent for log collection; this option will be disabled by default in upcoming releases.
You can switch to the OpenTelemetry logzio-logs-collector by setting the following values:

--set logs.enabled=true \
--set logzio-fluentd.enabled=false \
--set logzio-logs-collector.enabled=true \
--set logzio-logs-collector.secrets.logzioLogsToken=<<token>> \
--set logzio-logs-collector.secrets.logzioRegion=<<region>> \
--set logzio-logs-collector.secrets.env_id=<<env_id>> \
--set logzio-logs-collector.secrets.logType=<<log_type>> \

{{ end }}


{{- if and (.Values.logs.enabled) (index .Values "logzio-logs-collector" "enabled") (index .Values "logzio-fluentd" "enabled") }}
[ WARNING ] You enabled both the Fluentd agent and the OpenTelemetry logzio-logs-collector for log collection; this will produce duplicate log entries in Logz.io.
To keep only the OpenTelemetry logzio-logs-collector, set the following values:

--set logs.enabled=true \
--set logzio-fluentd.enabled=false \
--set logzio-logs-collector.enabled=true \
--set logzio-logs-collector.secrets.logzioLogsToken=<<token>> \
--set logzio-logs-collector.secrets.logzioRegion=<<region>> \
--set logzio-logs-collector.secrets.env_id=<<env_id>> \
--set logzio-logs-collector.secrets.logType=<<log_type>> \
{{ end }}


{{- if and (.Values.logs.enabled) (index .Values "logzio-logs-collector" "enabled")}}
[ INFO ] You enabled the OpenTelemetry logzio-logs-collector for log collection.

{{ end }}


