
Order Metrics in the Buffer by Timestamp by Creating an Aggregator Plugin or Other Solution #12963


Closed
crflanigan opened this issue Mar 27, 2023 · 23 comments · Fixed by #12994
Labels
feature request Requests for new plugin and for new features to existing plugins plugin/aggregator 1. Request for new aggregator plugins 2. Issues/PRs that are related to aggregator plugins

Comments

@crflanigan
Contributor

crflanigan commented Mar 27, 2023

Use Case

Certain data sinks, such as Google Stackdriver, require metrics to be emitted in order. If even a single metric arrives that is newer than the others, every metric older than it will be rejected by the Stackdriver API. This is reiterated in the outputs.stackdriver plugin documentation:

The Stackdriver API does not allow writing points which are out of order, older than 24 hours, or with a resolution greater than one point per minute. Since Telegraf writes the newest points first and moves backwards through the metric buffer, it may not be possible to write historical data after an interruption.

There was a feature request intended to help with this by leveraging FIFO (first in, first out) ordering, but the issue persists. After discussing this further with the InfluxData developers over Slack, I found that the buffer keeps metrics in the order they are received, and the PR changed things so that the first received metric is also the first sent to the output, but it doesn't account for the timestamp. This means that if the metrics in the buffer are out of order for any reason, all older metrics could be rejected by the Stackdriver API.

Scenario:
I configure the outputs.stackdriver plugin to send metrics to the Stackdriver API successfully. I then disconnect my ethernet cable from my network adaptor to simulate a network outage. Metrics begin to fill the buffer as expected. I then connect the cable again.

Expected behavior

When I reconnect the cable, the buffer begins draining metrics to outputs.stackdriver, which sends them to the Stackdriver API without errors.

Actual behavior

When I reconnect the cable, the buffer fails to drain and actually grows, while repeatedly logging this message and similar ones:

2023-03-17T16:20:09Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older end time than the most recent point.: prometheus_target{location:us-east1-b,instance:test,cluster:onprem,namespace:metrics,job:Test} timeSeries[0-25,27-32,34,35,37,40,41,43-45,47,48,50-67,69-72,74-76,80,84,86,87,94-101,103-105,111,113,114,116,119,120,127,130,134,135,140,155,159,161,167,171,173,176,184,187]

That said, some metrics do still appear to come in, but many don't, and the buffer continues to grow quickly.

Additional info

Proposed Solution:
To address this issue, an aggregator plugin could be developed that sorts metrics by their timestamps before they reach the buffer. This would ensure that metrics are always in chronological order, preventing older metrics from being rejected by the Stackdriver API due to out-of-order timestamps. The aggregator plugin would perform the following steps:

  1. Receive incoming metrics from input plugins.
  2. Sort the metrics by their timestamps in ascending order.
  3. Pass the sorted metrics to the buffer for processing by the output plugins.

By implementing this aggregator plugin, we can ensure that metrics are always in the correct order before they enter the buffer, resolving the issue of the Stackdriver API rejecting out-of-order metrics.

This could also be resolved by changing the outputs.stackdriver output plugin to sort the metrics prior to being sent.

@crflanigan crflanigan added the feature request Requests for new plugin and for new features to existing plugins label Mar 27, 2023
@Hipska Hipska added the plugin/aggregator 1. Request for new aggregator plugins 2. Issues/PRs that are related to aggregator plugins label Mar 28, 2023
@powersj
Contributor

powersj commented Mar 28, 2023

Hi,

I found that the buffer keeps the metrics in the order they are received and the PR changed that the first received metric is also first sent to the output, but that it doesn't account for timestamp.

This reads like you have newer metrics (i.e. showed up later in the buffer) with older timestamps. Is that accurate? Trying to understand how we could test this scenario.

This means that if the metrics in the buffer are out of order for any reason all older metrics could be rejected by the Stackdriver API.

When you say out of order, do you mean out of order with respect to the timestamp on the metric?

That said, some metrics do appear to still be coming in, but many aren't as the buffer continues to grow quickly.

This is what I would expect. Namely, newer data may get in because data that is ready to write does not need to go through the buffer. The data gets written to the output since the output is alive again (see here for the cases that cause us to kick off a write). Then the data in the buffer sits and waits, but because newer data was already written, it gets stuck. I am not sure how this proposal prevents that situation.

Let me know your thoughts! This issue has definitely come up before, but the solution is still not clear to me.

@powersj powersj added the waiting for response waiting for response from contributor label Mar 28, 2023
@crflanigan
Contributor Author

crflanigan commented Mar 28, 2023

This reads like you have newer metrics (i.e. showed up later in the buffer) with older timestamps. Is that accurate? Trying to understand how we could test this scenario.

I installed Telegraf from brew, added the [inputs.cpu] input plugin with a standard boilerplate agent configuration, and the [outputs.stackdriver] output plugin with Google auth configured via an environment variable. I let the agent run for a while and validated that metrics were being sent to the Google API (in our case our Google Managed Prometheus project, but this seems to impact the Google Monitoring API as well, since we use the same stackdriver plugin). Then I disconnected my network cable, let the buffer fill to about 100 metrics, and plugged it back in. From there I started getting tons of errors in the logs complaining about metrics being out of order. I think you could perform the same test.

When you say out of order, you are saying out of order according with respect to the timestamp on the metric?

Timestamp. Though I'm not positive, I'm not sure how else the API could tell what order the metrics are supposed to be in.

Yeah, I'm not sure of the exact answer, but maybe instead of an aggregator the metrics should be sorted by the plugin before they are sent. The purpose of this request is to make the stackdriver output plugin more robust: any scenario that leads to metrics being out of order (network interruption, an input plugin flushing metrics out of order, etc.) makes the plugin seem unreliable in a production scenario.

@telegraf-tiger telegraf-tiger bot removed the waiting for response waiting for response from contributor label Mar 28, 2023
@powersj
Contributor

powersj commented Mar 29, 2023

I am going to need a way to reproduce this. I tried digging into this last night and this morning, thinking that if it was a larger Telegraf issue I could reproduce it with an exec/execd input and an http output. In those experiments I always saw metrics ordered oldest to newest, even after an interruption to the output.

I did set up stackdriver and used the following config to reproduce your CPU + unplugging the network cable example and it recovered as expected:

[agent]
omit_hostname = true
debug = true

[[inputs.cpu]]

[[outputs.stackdriver]]
  project = "flawless-span-322118"
  namespace = "telegraf"

and recovered just fine:

❯ make telegraf && ./telegraf --config config.toml 
CGO_ENABLED=0 go build -tags "" -ldflags " -X github.com/influxdata/telegraf/internal.Commit=7088ec13 -X github.com/influxdata/telegraf/internal.Branch=master -X github.com/influxdata/telegraf/internal.Version=1.27.0-7088ec13" ./cmd/telegraf
2023-03-29T15:01:28Z I! Loading config file: config.toml
2023-03-29T15:01:28Z I! Starting Telegraf 1.27.0-7088ec13
2023-03-29T15:01:28Z I! Available plugins: 235 inputs, 9 aggregators, 27 processors, 22 parsers, 57 outputs, 2 secret-stores
2023-03-29T15:01:28Z I! Loaded inputs: cpu
2023-03-29T15:01:28Z I! Loaded aggregators: 
2023-03-29T15:01:28Z I! Loaded processors: 
2023-03-29T15:01:28Z I! Loaded secretstores: 
2023-03-29T15:01:28Z I! Loaded outputs: stackdriver
2023-03-29T15:01:28Z I! Tags enabled: 
2023-03-29T15:01:28Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"", Flush Interval:10s
2023-03-29T15:01:28Z D! [agent] Initializing plugins
2023-03-29T15:01:28Z D! [agent] Connecting outputs
2023-03-29T15:01:28Z D! [agent] Attempting connection to [outputs.stackdriver]
2023-03-29T15:01:28Z D! [agent] Successfully connected to outputs.stackdriver
2023-03-29T15:01:28Z D! [agent] Starting service inputs
2023-03-29T15:01:38Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:01:48Z D! [outputs.stackdriver] Wrote batch of 33 metrics in 498.948597ms
2023-03-29T15:01:48Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:01:58Z D! [outputs.stackdriver] Wrote batch of 33 metrics in 246.858905ms
2023-03-29T15:01:58Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:02:08Z D! [outputs.stackdriver] Wrote batch of 33 metrics in 272.504395ms
2023-03-29T15:02:08Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:02:28Z W! [agent] ["outputs.stackdriver"] did not complete within its flush interval
2023-03-29T15:02:28Z D! [outputs.stackdriver] Buffer fullness: 66 / 10000 metrics
2023-03-29T15:02:30Z E! [outputs.stackdriver] Unable to write to Stackdriver: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2023-03-29T15:02:30Z D! [outputs.stackdriver] Buffer fullness: 99 / 10000 metrics
2023-03-29T15:02:30Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2023-03-29T15:02:48Z W! [agent] ["outputs.stackdriver"] did not complete within its flush interval
2023-03-29T15:02:48Z D! [outputs.stackdriver] Buffer fullness: 132 / 10000 metrics
2023-03-29T15:02:50Z E! [outputs.stackdriver] Unable to write to Stackdriver: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2023-03-29T15:02:50Z D! [outputs.stackdriver] Buffer fullness: 165 / 10000 metrics
2023-03-29T15:02:50Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2023-03-29T15:03:08Z W! [agent] ["outputs.stackdriver"] did not complete within its flush interval
2023-03-29T15:03:08Z D! [outputs.stackdriver] Buffer fullness: 198 / 10000 metrics
2023-03-29T15:03:10Z E! [outputs.stackdriver] Unable to write to Stackdriver: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2023-03-29T15:03:10Z D! [outputs.stackdriver] Buffer fullness: 231 / 10000 metrics
2023-03-29T15:03:10Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2023-03-29T15:03:19Z D! [outputs.stackdriver] Wrote batch of 231 metrics in 1.612288182s
2023-03-29T15:03:19Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:03:28Z D! [outputs.stackdriver] Wrote batch of 33 metrics in 259.043784ms
2023-03-29T15:03:28Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:03:38Z D! [outputs.stackdriver] Wrote batch of 33 metrics in 245.288613ms
2023-03-29T15:03:38Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
^C2023-03-29T15:03:40Z D! [agent] Stopping service inputs
2023-03-29T15:03:40Z D! [agent] Input channel closed
2023-03-29T15:03:40Z I! [agent] Hang on, flushing any cached metrics before shutdown
2023-03-29T15:03:41Z D! [outputs.stackdriver] Wrote batch of 33 metrics in 245.228011ms
2023-03-29T15:03:41Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:03:41Z I! [agent] Stopping running outputs
2023-03-29T15:03:41Z D! [agent] Stopped Successfully

Are you certain you have a single telegraf writing to that bucket? Are there any stackdriver options I need to be aware of?

@powersj powersj added the waiting for response waiting for response from contributor label Mar 29, 2023
@crflanigan
Contributor Author

Hey buddy,

I just reproduced it.

Stackdriver config:

[[outputs.stackdriver]]
  project = "io1-sandbox"
  resource_type = "prometheus_target"
  namespace = "store_pc_metrics"   
  [outputs.stackdriver.resource_labels]
    cluster = "onprem"
    instance = "test"
    location = "us-east1-b"
    namespace = "store_pc_metrics"

Also note that I configured my environment variable $GOOGLE_APPLICATION_CREDENTIALS to point to the path of my private JWT key file.

Metrics being ingested:

2023-03-29T15:39:09Z D! [outputs.stackdriver] Wrote batch of 11 metrics in 140.127917ms
2023-03-29T15:39:09Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:39:19Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:39:29Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:39:39Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:39:49Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:39:59Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics
2023-03-29T15:40:09Z D! [outputs.stackdriver] Wrote batch of 11 metrics in 136.691334ms
2023-03-29T15:40:09Z D! [outputs.stackdriver] Buffer fullness: 0 / 10000 metrics

Metrics can be found in the Metrics Explorer:

[screenshot: metrics visible in the Metrics Explorer]

Now I'm going to disconnect my network cable and let it buffer a bit:

2023-03-29T15:56:49Z D! [outputs.stackdriver] Buffer fullness: 110 / 10000 metrics
2023-03-29T15:56:49Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp: lookup monitoring.googleapis.com: no such host"

Plugged it back in:

2023-03-29T15:58:59Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older end time than the most recent point.: prometheus_target{instance:test,location:us-east1-b,job:WalzTest,cluster:onprem,namespace:store_pc_metrics} timeSeries[0-25,27,28,30-35,37-40,42-47,49-55,58-60,63-67,69,70,75,77,78,80,81,85,86,88,92,98,100,102,104,106-111,114,116,117,126,128,131,134-137,139,140,142-144,147,148,151,155,156,158,163,182,187,192,198]: custom.googleapis.com/store_pc_metrics/cpu/usage_iowait{host:RVX95JTPF9MBP,env:Production_MacOS,cpu:cpu0}; Field timeSeries[162] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[170] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[188] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[152] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[195] had an invalid value: Duplicate TimeSeries encountered. 

Additionally, buffer continues to grow:

2023-03-29T16:05:39Z D! [outputs.stackdriver] Buffer fullness: 209 / 10000 metrics

These error messages keep spamming in which would be problematic for the log file.

That said, some metrics are still coming in (It's currently 11:02 or 10:02 on the chart):
[screenshot: chart showing metrics still arriving]

Only one instance of Telegraf is running:

% ps aux | grep telegraf
USER           4610   0.0  0.3 409266096  49184 s002  S+   10:35AM   0:02.64 telegraf

Does that help?

@telegraf-tiger telegraf-tiger bot removed the waiting for response waiting for response from contributor label Mar 29, 2023
@powersj
Contributor

powersj commented Mar 29, 2023

Unfortunately, no. I have two differences from your config:

  1. I used the gcloud CLI to log in, though the credentials shouldn't matter.
  2. I had to include a job resource label as well.

custom.googleapis.com/store_pc_metrics/cpu/usage_iowait{host:RVX95JTPF9MBP,env:Production_MacOS,cpu:cpu0};
Field timeSeries[162] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
Field timeSeries[170] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
Field timeSeries[188] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
Field timeSeries[152] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
Field timeSeries[195] had an invalid value: Duplicate TimeSeries encountered.

Scrolling to the end of the error message, this does not sound like out-of-order metrics, despite what the start of the error message says. It sounds like we are attempting to write more than one point to a time series in the same request. It makes some sense that you only see this when the buffer grows, because that is when you might have multiple points per series to write.

This was briefly mentioned in #5404 when reducing the number of requests from one per metric to batched requests. In Write() we start off by creating a time series for every element in a batch, then group these into 200 at a time, but I am not seeing where we deduplicate time series.
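One way to handle the duplicate-series constraint is to split a batch into several requests so that each request carries at most one point per series. The sketch below is only an illustration of that idea, not the plugin's actual code: `Point` and `Key` are hypothetical stand-ins for a Stackdriver point and the metric-type-plus-labels identity of its time series.

```go
package main

import "fmt"

// Point is a simplified stand-in for one Stackdriver point; Key stands
// in for the metric type plus label set that identifies a time series.
type Point struct {
	Key   string
	Value float64
}

// splitUnique partitions a batch into consecutive requests in which each
// series key appears at most once, since the API rejects a request that
// carries two points for the same series. Points are assumed to arrive
// oldest-first, so sending the requests in order preserves per-series order.
func splitUnique(points []Point) [][]Point {
	var requests [][]Point
	for _, p := range points {
		placed := false
		for i := range requests {
			dup := false
			for _, q := range requests[i] {
				if q.Key == p.Key {
					dup = true
					break
				}
			}
			if !dup { // first request without this series takes the point
				requests[i] = append(requests[i], p)
				placed = true
				break
			}
		}
		if !placed { // every request already has this series: open a new one
			requests = append(requests, []Point{p})
		}
	}
	return requests
}

func main() {
	reqs := splitUnique([]Point{{"cpu", 1}, {"mem", 2}, {"cpu", 3}})
	fmt.Println(len(reqs)) // 2: the duplicate cpu point moves to a second request
}
```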

@powersj
Contributor

powersj commented Mar 29, 2023

I've played with this some more this afternoon. It looks like the call to Add() hashes the metric to prevent distinct metrics from ending up in the same time series.
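Hashing a metric into a series key could look roughly like the sketch below. This is a guess at the general shape, assuming the key is derived from the metric name and its tag set; the plugin's actual Add() logic may differ in detail.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// seriesHash sketches hashing a metric by its name and sorted tag set so
// that points belonging to the same time series share one key; the real
// Add() logic in the plugin may differ in detail.
func seriesHash(name string, tags map[string]string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(name))
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order regardless of map iteration
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(tags[k]))
	}
	return h.Sum64()
}

func main() {
	a := seriesHash("cpu", map[string]string{"host": "a", "cpu": "cpu0"})
	b := seriesHash("cpu", map[string]string{"cpu": "cpu0", "host": "a"})
	fmt.Println(a == b) // true: tag order does not change the series key
}
```

Sorting the tag keys before hashing is the important detail, since Go map iteration order is randomized.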

What I'd like to do is create a debug build of Telegraf that prints out the time series and their values so we can spot any duplicates, as the error suggests happens when you push. I'll try to get something up tomorrow.

@powersj
Contributor

powersj commented Mar 30, 2023

I put up PR #12994 with debug output. If you could try it on your Mac, where you can reproduce this easily, and give me the full logs and output, I'd like to look through it.

I am not sure why CircleCI is not putting artifacts on PRs, but you can download the macOS artifacts here:

Thanks!

@crflanigan
Contributor Author

Hi,

This is the output when it sent metrics out using Stackdriver without any network interruption:

2023-03-30T20:56:05Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = Internal desc = One or more TimeSeries could not be written: Internal error encountered. Please retry after a few seconds. If internal errors persist, contact support at https://cloud.google.com/support/docs.: prometheus_target{cluster:onprem,namespace:store_pc_metrics,instance:test,job:WalzTest,location:us-east1-b} timeSeries[0-71]: custom.googleapis.com/store_pc_metrics/mem/free{env:Production_MacOS,host:COMPUTERNAME}
error details: name = Unknown  desc = total_point_count:72  errors:{status:{code:13}  point_count:72}
2023-03-30T20:56:12Z I! [outputs.stackdriver] recv 10 metrics
2023-03-30T20:56:12Z I! [outputs.stackdriver] 11498794796971297246:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 777593223515082363:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,int64_value:524288000
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17523851033666368320:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,int64_value:503902208
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 16697505905828004248:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,int64_value:3795697202
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2248757426990152845:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/used,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:13357973504
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 5205454125783624787:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,int64_value:503902208
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 1002088686174971944:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 15820304553527863781:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,int64_value:349475
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 12695487181463271854:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,int64_value:203264
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 9504633670359018160:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/free,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:214384640
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 16110456800632834635:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 18212270925521874618:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,int64_value:3795698100
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 8509394735685183663:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,int64_value:2
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4225974530337088264:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17913467175583715930:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,int64_value:3795697162
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2748272787775006009:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,int64_value:0
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 11296739396455243078:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,int64_value:0
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 6460885549402341996:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,int64_value:3797066580
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 12754153796540925274:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2581206237919102115:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4701856294841612830:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,int64_value:4920970
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 7682883357790209542:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 11993345173913929913:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,double_value:100
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17396433126058583857:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/inactive,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:3607511040
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 8707560441608305354:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,int64_value:4920920
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 5416559734029560187:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,double_value:3.8882812500000004
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 1380578990694304413:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,int64_value:940
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 11662060768343881355:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,int64_value:4920920
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4494747409174735160:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 18372269473372998671:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 9423136752745863077:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/available_percent,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,double_value:22.246360778808594
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 10840267446474832940:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/active,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:3587080192
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4448429521926303195:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,int64_value:503902208
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2423985306692941380:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,double_value:3.8882812500000004
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 13534462306743376062:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,int64_value:1
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4423960074751968845:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,int64_value:42
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 16718546106629078951:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 15021363843903049668:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 13560370162739873711:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,int64_value:203264
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2454865405236486978:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,int64_value:4920920
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4273545959760777639:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 5566942942225203946:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 741494757258643963:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 15129469317993484377:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,int64_value:1369420
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4208756490468553073:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,int64_value:50
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 6084512328015438150:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,double_value:3.8882812500000004
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 1998602150935759404:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17413046905469006607:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 10867650288899618231:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,int64_value:688
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 13986691057266844763:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/available,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:3821895680
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17076815117655153357:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,int64_value:4920921
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4696016513286001629:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,int64_value:20385792
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 9835097448385357435:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 6414670183602310892:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/total,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:17179869184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 7879263277810538476:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 6133835899537316092:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE,seconds:1680209760,int64_value:524288000
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2616806056611545566:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,int64_value:3796046635
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 15030280444260371208:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,int64_value:20385792
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 12246244951712863696:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,int64_value:4920950
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 10593582215027114850:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 9964240152213035600:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE,seconds:1680209760,int64_value:688
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 3168220498697204615:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE,seconds:1680209760,int64_value:524288000
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 1803920656270235857:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE,seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 15821282043649585182:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 16716367241423907107:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/wired,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,int64_value:2644049920
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 4584320728786124224:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,int64_value:20385792
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17256206256870363656:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE,seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 15483751846954207064:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE,seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 13168859725450447839:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/used_percent,map[env:Production_MacOS host:COMPUTERNAME],GAUGE,seconds:1680209760,double_value:77.7536392211914
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17497652168275150293:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE,seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 17367722330593368333:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE,seconds:1680209760,int64_value:30
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] 2247023086263251656:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE,seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z I! [outputs.stackdriver] sending time series:
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:524288000
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:940
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/used,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:13357973504
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:3.8882812500000004
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:4920920
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3796046635
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:0
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:524288000
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:50
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:42
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:503902208
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:20385792
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:20385792
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:4920970
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:503902208
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:3.8882812500000004
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:3.8882812500000004
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:524288000
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/total,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:17179869184
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3797066580
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:2
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:4920920
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/available_percent,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:22.246360778808594
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/free,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:214384640
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:688
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/active,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3587080192
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:688
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:0
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:4920920
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:100
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:4920950
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:203264
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/used_percent,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:77.7536392211914
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:1
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:devfs env:Production_MacOS fstype:devfs host:COMPUTERNAME mode:rw path:/dev],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:203264
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/available,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3821895680
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used_percent,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,double_value:21.381200917687977
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk1s3 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Hardware],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:20385792
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:1369420
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:349475
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/used,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:105705406464
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s4 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Update],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697202
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/wired,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:2644049920
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:4920921
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_free,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697160
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/iSCPreboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:30
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/mem/inactive,map[env:Production_MacOS host:COMPUTERNAME],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3607511040
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/total,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:494384795648
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s5 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Data],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk1s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/xarts],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:503902208
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s6 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/VM],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795697162
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/inodes_total,map[device:disk3s2 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:rw path:/System/Volumes/Preboot],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:3795698100
2023-03-30T20:56:12Z I! [outputs.stackdriver]   - custom.googleapis.com/store_pc_metrics/disk/free,map[device:disk3s1s1 env:Production_MacOS fstype:apfs host:COMPUTERNAME mode:ro path:/],GAUGE
2023-03-30T20:56:12Z I! [outputs.stackdriver]     - seconds:1680209760,int64_value:388679389184
2023-03-30T20:56:12Z I! [outputs.stackdriver] 
2023-03-30T20:56:12Z D! [outputs.stackdriver] Wrote batch of 10 metrics in 147.95675ms

@crflanigan
Contributor Author

When I disconnected the ethernet cable providing network access, let the buffer fill to 110 metrics, and then plugged the cable back in, I saw the following (pasting it without formatting as it will be a couple really long lines):

2023-03-30T21:12:59Z D! [outputs.stackdriver] Buffer fullness: 110 / 10000 metrics
2023-03-30T21:12:59Z E! [agent] Error writing to outputs.stackdriver: rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older end time than the most recent point.: prometheus_target{instance:test,job:WalzTest,cluster:onprem,namespace:store_pc_metrics,location:us-east1-b} timeSeries[0-7,9,11-17,19,22-24,26,27,29,31,32,34-37,39,42,43,45-49,51,52,57,58,61,64,67-69,75,76,80,83,85,87,90,98,101,102,107,112,114-116,119,124,125,128,131,148,161,181,183]: custom.googleapis.com/store_pc_metrics/disk/total{fstype:apfs,env:Production_MacOS,mode:ro,host:COMPUTERNAME,device:disk3s1s1,path:/}; Field timeSeries[62] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[105] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[177] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[141] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[33] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[184] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[84] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[120] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[163] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[199] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[55] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[127] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[91] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[134] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[170] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[142] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[106] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[178] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[185] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[70] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[113] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[149] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[192] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[156] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[77] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[41] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[164] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[92] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[56] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[135] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[20] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[63] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[171] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[99] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[53] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[89] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[132] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[96] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[168] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[175] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[60] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[103] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[139] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[147] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[111] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[154] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[190] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[82] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[118] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[197] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[10] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[133] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[18] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[169] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[25] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[97] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[176] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[104] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[140] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[155] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[40] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[191] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[162] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[126] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[198] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[54] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[44] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[195] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[123] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[8] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[159] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[166] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[94] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[130] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[66] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[138] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[145] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[30] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[73] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[109] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[152] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[188] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[160] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[88] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[95] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[167] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[174] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[59] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[182] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[146] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[74] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[110] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[153] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[38] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[189] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[196] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[117] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[81] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[186] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[71] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[150] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[193] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[78] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[157] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[121] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[93] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[129] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[172] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[136] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[21] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[100] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[143] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[179] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[28] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[187] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[151] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[194] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[79] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[122] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[86] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[158] had an invalid value: Duplicate TimeSeries encountered. 
Only one point can be written per TimeSeries per request.; Field timeSeries[165] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[50] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[173] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[65] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[137] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[144] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[180] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[72] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[108] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.
error details: name = Unknown desc = total_point_count:200 success_point_count:25 errors:{status:{code:3} point_count:175}
2023-03-30T21:12:59Z I! [agent] Stopping running outputs

@crflanigan
Contributor Author

As a side note, I saw this error using the inputs.cpu plugin that I didn't get in the brew-installed version of Telegraf:

[inputs.cpu] Error in plugin: error getting CPU info: not implemented yet

Not super relevant to this issue, but thought I would let you know.

@powersj
Contributor

powersj commented Mar 31, 2023

As a side note, I saw this error using the inputs.cpu plugin that I didn't get in the brew installed version of Telegraf:

This is because the brew folks build telegraf with CGO enabled. This pulls in the ability to monitor CPU usage on darwin/macOS systems.

@powersj
Contributor

powersj commented Mar 31, 2023

Can I get the entire logs, from start to finish? You can upload a file or even mail (jpowers at influxdata) it to me if you don't want to post it.

@crflanigan
Contributor Author

Hi @powersj,

I emailed you the logs.

Thanks for the help!

@powersj
Contributor

powersj commented Apr 5, 2023

Here we go :)

Write Failure 1

At timestamp 2023-03-31T16:25:47Z I see:

Unable to write to Stackdriver: rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written:

Which then has a total of 130 messages similar to the following:

    Field timeSeries[8] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
    Field timeSeries[9] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
    Field timeSeries[18] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;

In the 130 additional messages, the timeSeries index is the only difference. Also note at the very end of the message:

desc = total_point_count:200  success_point_count:70  errors:{status:{code:3}  point_count:130

Even with all of these errors, 70 data points were written successfully.

Duplicate TimeSeries

What is a duplicate TimeSeries in Stackdriver? From what I gather it is based on the metric type and labels. Looking at the 2nd timeseries telegraf generated I see the type custom.googleapis.com/store_pc_metrics/disk/inodes_used and the following labels:

  • device: disk1s1
  • env: Production_MacOS
  • fstype: apfs
  • host: RVX95JTPF9MBP
  • mode: rw
  • path: /System/Volumes/iSCPreboot

If I look through the "sending time series:" debug output, I see not one, but three of this same timeseries in the attempted send:

Timeseries 2
  - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:RVX95JTPF9MBP mode:rw path:/System/Volumes/iSCPreboot],GAUGE
    - seconds:1680279780,int64_value:30
Timeseries 41
  - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:RVX95JTPF9MBP mode:rw path:/System/Volumes/iSCPreboot],GAUGE
    - seconds:1680279360,int64_value:30
Timeseries 49
  - custom.googleapis.com/store_pc_metrics/disk/inodes_used,map[device:disk1s1 env:Production_MacOS fstype:apfs host:RVX95JTPF9MBP mode:rw path:/System/Volumes/iSCPreboot],GAUGE
    - seconds:1680279300,int64_value:30

Looking through the errors I see:

    Field timeSeries[41] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;
    Field timeSeries[49] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.;

Meaning the first data point was written, but when Stackdriver got to the 2nd and 3rd, those writes failed. The only difference between these points is the end time and possibly the value itself, but not in this case. Also note that the time in seconds is newest for the first one, which brings us to....
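Stackdriver's notion of series identity can be sketched as a key built from the metric type plus the full label set; two points sharing that key cannot appear in the same request, regardless of timestamp. A minimal Go sketch of this idea (the `seriesKey` function is invented for illustration and is not the Telegraf source):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// seriesKey models Stackdriver's identity for a time series: the metric
// type plus the full label set. Two points sharing this key may not
// appear in the same CreateTimeSeries request, regardless of timestamp.
func seriesKey(metricType string, labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for a stable key
	var b strings.Builder
	b.WriteString(metricType)
	for _, k := range keys {
		fmt.Fprintf(&b, ",%s=%s", k, labels[k])
	}
	return b.String()
}

func main() {
	labels := map[string]string{"device": "disk1s1", "path": "/System/Volumes/iSCPreboot"}
	k1 := seriesKey("custom.googleapis.com/store_pc_metrics/disk/inodes_used", labels)
	k2 := seriesKey("custom.googleapis.com/store_pc_metrics/disk/inodes_used", labels)
	// Same type and labels => same series; the three points above at
	// 1680279780, 1680279360, and 1680279300 all collide on this key.
	fmt.Println(k1 == k2)
}
```

Deduplicating a batch then amounts to keeping at most one point per key per request.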

Write Failure 2

During the next flush attempt the error message starts off differently. The error message below includes additional whitespace changes from me to make it easier to read:

Points must be written in order. One or more of the points specified had
an older end time than the most recent point.:
prometheus_target{
  cluster:onprem,
  location:us-east1-b,
  namespace:store_pc_metrics,
  instance:test,
  job:WalzTest
} 
timeSeries[
  0-9,12-20,25,26,28-30,32,34,35,37,39-41,43-45,47,52-54,
  56,57,60,63,69,72,75-77,87,88,92,95,96,98,101,109,110,
  116,120,122,124,130,131,134,137,155,168,172,173,197]:
custom.googleapis.com/store_pc_metrics/disk/total{
  device:disk3s1s1,
  env:Production_MacOS,
  mode:ro,
  path:/,
  fstype:apfs,
  host:RVX95JTPF9MBP
};

Then the message also contains another 131 messages about duplicate time series similar to the first write failure.

Points must be written in order

From the first write failure we already saw that points are not written oldest to newest. The payload telegraf sends is a mix of timestamps as well, meaning later attempts can fail since newer data is present.

To make matters worse, telegraf does not toss the points that were written successfully. Instead, all data points in the batch stay in the buffer on a failure.

Next Steps

Currently telegraf builds the timeseries, grabs up to 200 of them, and sends them over in a request. This logic needs to get a bit smarter. My hypothesis is that telegraf needs to group the timeseries requests by end time and sort oldest to newest. This would:

  1. ensure metrics are sent oldest first
  2. prevent duplicate time series

The duplication is avoided because if two metrics in telegraf have the same name, tags, and timestamp, then it is already the same metric. In the logs the only other difference was time, which telegraf is now grouping on. I tried sending duplicate points myself and saw the output correctly consolidate two duplicate metrics down to one timeseries.

The con of this approach is that it will require additional requests, but I think that is the cost a user has to pay versus losing data or never sending data. The worst case would be getting individual metrics each with a unique timestamp; however, I think telegraf would still want to send them in different requests.

How to do this, I'm not quite clear on yet. At first I saw that a timeseries takes a slice of points, but the docs say it must contain only one point, so that should still work.
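The grouping hypothesis above can be sketched in a few lines of Go. This is an illustrative sketch, not the plugin's actual code; the `point` type and `batchByTime` function are invented for the example (the real plugin works with monitoringpb.TimeSeries values):

```go
package main

import (
	"fmt"
	"sort"
)

// point is a minimal stand-in for one time-series value.
type point struct {
	series string
	unix   int64
	value  int64
}

// batchByTime groups points by timestamp and returns the batches sorted
// oldest to newest, so each request carries a single end time and at most
// one point per series.
func batchByTime(pts []point) [][]point {
	groups := make(map[int64][]point)
	for _, p := range pts {
		groups[p.unix] = append(groups[p.unix], p)
	}
	times := make([]int64, 0, len(groups))
	for t := range groups {
		times = append(times, t)
	}
	sort.Slice(times, func(i, j int) bool { return times[i] < times[j] })
	batches := make([][]point, 0, len(times))
	for _, t := range times {
		batches = append(batches, groups[t])
	}
	return batches
}

func main() {
	// The three inodes_used points from the logs above, newest first.
	pts := []point{
		{"disk/inodes_used", 1680279780, 30},
		{"disk/inodes_used", 1680279360, 30},
		{"disk/inodes_used", 1680279300, 30},
	}
	for _, batch := range batchByTime(pts) {
		fmt.Println(batch[0].unix, len(batch)) // oldest batch comes out first
	}
}
```

Each inner slice then maps to one request, sent in order, so every request has a single end time and no duplicate series within it.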

The other behavior I think we need to sort out is what to do on errors. Because we already check for errStringPointsTooOld and return nil, we should be removing things from the buffer, but based on these logs we are not. So I think we need to revisit that as well.

Thoughts?

@powersj
Contributor

powersj commented Apr 11, 2023

argh I forgot to comment here again...

@crflanigan I pushed another commit to #12994 to try to group time series by time. Would you be willing to give that another shot?

@crflanigan
Contributor Author

crflanigan commented Apr 11, 2023

@powersj
Sure thing!

Download and run this?
[screenshot]

@powersj
Contributor

powersj commented Apr 11, 2023

yes please!

@crflanigan
Contributor Author

crflanigan commented Apr 11, 2023

@powersj
Are you wanting the logs, or for me to test it out and see how it works?

@powersj
Contributor

powersj commented Apr 11, 2023

I will want the logs please - I'd like to go and see what happens with the change. Not entirely convinced this fixes the issue, but want to see how it behaves.

@crflanigan
Contributor Author

Sure @powersj

I will have to re-run it so it tees to a file, but for now it looks like a similar or identical result:

Field timeSeries[198] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.
error details: name = Unknown  desc = total_point_count:200 success_point_count:68 errors:{status:{code:3} point_count:132}

@crflanigan
Contributor Author

Hi @powersj,
Logs sent to your email.

@powersj
Contributor

powersj commented Apr 28, 2023

@crflanigan I have updated the PR #12994 again, could you give that another try and let me know how it goes? If you see failures, getting all the logs as you have done before would be most helpful.

Thanks!

@crflanigan
Contributor Author

@powersj
Looks great!

powersj added a commit to powersj/telegraf that referenced this issue May 3, 2023
In a perfect world, metrics come in to stackdriver grouped by timestamp
and all the timestamps are the same. This means that metrics go out
together and a user never has to worry about out of order metrics.
The world however is not perfect.

In the event that the connection to stackdriver goes down, telegraf
will start to save metrics to a buffer. Once reconnected, telegraf
hands a batch of data to the stackdriver output to send. The output
would then ensure the metrics are sorted, but then break the metrics
into timeseries. In doing so, metrics would no longer be in any order
and the output could send duplicate time series, meaning two identical
metrics but with different timestamps.

What the user would see is first a message about duplicate timeseries
and then an error about out of order metrics, where an older metric was
trying to get added.

This ensures that we avoid different timestamps by batching metrics by
time. This way we avoid duplicate timeseries in the first place and
ensure we always send oldest to newest.

Fixes: influxdata#12963