Metrics - ZFS disk information missing for sda and sdb #55
Hey @WindowsHyun, could you please try running the capture agent by commenting out this part of the code in:
I have confirmed that the output from this code is not as expected and is not working correctly. Below is the Go code I am using to check ZFS sizes; please review it and incorporate it to help resolve the issue.

output:
@mertssmnoglu, adding ZFS support might cater to a niche group of advanced users rather than addressing an end-user problem. It would be important to weigh the benefits against the potential complexities and resource requirements before proceeding with such an implementation. What do you think?
Hey guys, sorry for the delay. Thanks for the samples @WindowsHyun. I think we can use the same DiskData struct for the output, with some nil values:

```go
type DiskData struct {
    Device             string   `json:"device"`               // Device
    TotalBytes         *uint64  `json:"total_bytes"`          // Total space of device in bytes
    FreeBytes          *uint64  `json:"free_bytes"`           // Free space of device in bytes
    UsedBytes          *uint64  `json:"used_bytes"`           // Used space of device in bytes
    UsagePercent       *float64 `json:"usage_percent"`        // Usage percent of device
    TotalInodes        *uint64  `json:"total_inodes"`         // Total space of device in inodes
    FreeInodes         *uint64  `json:"free_inodes"`          // Free space of device in inodes
    UsedInodes         *uint64  `json:"used_inodes"`          // Used space of device in inodes
    InodesUsagePercent *float64 `json:"inodes_usage_percent"` // Usage percent of device in inodes
    ReadBytes          *uint64  `json:"read_bytes"`           // Amount of data read from the disk in bytes
    WriteBytes         *uint64  `json:"write_bytes"`          // Amount of data written to the disk in bytes
    ReadTime           *uint64  `json:"read_time"`            // Cumulative time spent performing read operations
    WriteTime          *uint64  `json:"write_time"`           // Cumulative time spent performing write operations
}
```

We can check whether the file system requires additional steps to receive disk usage data, and branch into a different set of operations (a fuller, hypothetical sketch of the ZFS branch follows the snippet below):

```
// pseudo
if isZFS() {
    collectZFSData()
} else {
    collectDiskData()
}
```

I don't have a ZFS filesystem on my device, so I'm not able to test it. Can you please validate this?

```go
package main

import (
    "bufio"
    "os"
    "strings"
    "syscall"
)
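
// ZFS_SUPER_MAGIC is the f_type value that statfs(2) reports for ZFS mounts on Linux.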
const ZFS_SUPER_MAGIC = 0x2FC12FC1
func isPathZFS(path string) (bool, error) {
    var stat syscall.Statfs_t
    err := syscall.Statfs(path, &stat)
    if err != nil {
        return false, err
    }
    return stat.Type == ZFS_SUPER_MAGIC, nil
}
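
// isZFS reports whether any filesystem listed in /proc/mounts is ZFS.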
func isZFS() bool {
    file, err := os.Open("/proc/mounts")
    if err != nil {
        return false
    }
    defer file.Close()
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        line := scanner.Text()
        fields := strings.Fields(line)
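        // A /proc/mounts line is: device mountpoint fstype options dump pass.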
        if len(fields) >= 3 && fields[2] == "zfs" {
            return true
        }
    }
    if err := scanner.Err(); err != nil {
        return false
    }
    return false
}
```
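To make the ZFS branch of the pseudocode above concrete, here is a minimal, untested sketch of what a hypothetical collectZFSData helper could look like. It assumes the `zpool` CLI is on the PATH and reuses a trimmed copy of the DiskData struct; `zpool list -Hp` prints one tab-separated line per pool with exact byte values and no header, and the inode fields are simply left nil so they marshal to JSON null:

```go
package main

import (
    "encoding/json"
    "fmt"
    "os"
    "os/exec"
    "strconv"
    "strings"
)

// DiskData is a trimmed copy of the struct above, keeping only the fields used here.
type DiskData struct {
    Device       string   `json:"device"`
    TotalBytes   *uint64  `json:"total_bytes"`
    FreeBytes    *uint64  `json:"free_bytes"`
    UsedBytes    *uint64  `json:"used_bytes"`
    UsagePercent *float64 `json:"usage_percent"`
    TotalInodes  *uint64  `json:"total_inodes"` // left nil for ZFS, marshals to JSON null
}

// collectZFSData parses `zpool list -Hp -o name,size,alloc,free`, which prints
// one tab-separated line per pool with exact byte values.
func collectZFSData() ([]DiskData, error) {
    out, err := exec.Command("zpool", "list", "-Hp", "-o", "name,size,alloc,free").Output()
    if err != nil {
        return nil, err
    }
    var disks []DiskData
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        f := strings.Fields(line)
        if len(f) < 4 {
            continue
        }
        size, err1 := strconv.ParseUint(f[1], 10, 64)
        alloc, err2 := strconv.ParseUint(f[2], 10, 64)
        free, err3 := strconv.ParseUint(f[3], 10, 64)
        if err1 != nil || err2 != nil || err3 != nil || size == 0 {
            continue
        }
        pct := float64(alloc) / float64(size) * 100
        disks = append(disks, DiskData{
            Device:       f[0],
            TotalBytes:   &size,
            UsedBytes:    &alloc,
            FreeBytes:    &free,
            UsagePercent: &pct,
            // TotalInodes stays nil: ZFS has no fixed inode table.
        })
    }
    return disks, nil
}

func main() {
    disks, err := collectZFSData()
    if err != nil {
        fmt.Fprintln(os.Stderr, "error:", err)
        os.Exit(1)
    }
    b, _ := json.MarshalIndent(disks, "", "  ")
    fmt.Println(string(b))
}
```

Note that `zpool list` reports raw pool space, which on redundant vdevs differs from the usable space `zfs list` shows, so which command to parse is something to validate on a real pool.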
```go
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
    "syscall"
)
const ZFS_SUPER_MAGIC = 0x2FC12FC1

func isPathZFS(path string) (bool, error) {
    var stat syscall.Statfs_t
    err := syscall.Statfs(path, &stat)
    if err != nil {
        return false, err
    }
    return stat.Type == ZFS_SUPER_MAGIC, nil
}

func isZFS() bool {
    file, err := os.Open("/proc/mounts")
    if err != nil {
        return false
    }
    defer file.Close()
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        line := scanner.Text()
        fields := strings.Fields(line)
        if len(fields) >= 3 && fields[2] == "zfs" {
            return true
        }
    }
    if err := scanner.Err(); err != nil {
        return false
    }
    return false
}
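
// main probes two example mount points; adjust the paths for your own system.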
func main() {
    fmt.Println("isZFS():", isZFS())

    result, err := isPathZFS("/exthdd")
    if err != nil {
        fmt.Println("error:", err.Error())
        os.Exit(1)
    }
    fmt.Println("isPathZFS(/exthdd):", result)
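
    // Probe a second path that is expected to be on a non-ZFS filesystem.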
    result, err = isPathZFS("/home")
    if err != nil {
        fmt.Println("error:", err.Error())
        os.Exit(1)
    }
    fmt.Println("isPathZFS(/home):", result)
}
```
This is the result of running it directly, based on the code you wrote. Thank you for your attention.
Hi @WindowsHyun, sorry for this late response. I tried to reproduce your ZFS setup on an AWS EC2 instance, but the code block behaves differently on my end. I couldn't find a solution that covers everyone. Please let me know if I am doing something wrong.

My EC2 config:

- AMI: ami-0d4a55af1d81bc708 # FreeBSD 14.2 with ZFS Root
- Storage:
  - /dev/sda1: 10 GiB # ZFS Root
  - /dev/sdb: 15 GiB

Used commands:

```sh
zpool create new-pool /dev/nda1
zpool destroy new-pool
gpart create -s gpt /dev/nda1
gpart add -t freebsd-zfs -l new-pool-label /dev/nda1
zpool create new-pool /dev/gpt/new-pool-label
lsblk
zpool status
```

Output of the commands above:
My environment is Ubuntu 22.04.
Hello,
I am encountering an issue where metrics are not being collected for my sda and sdb disks. Currently, only my nvme0n1 disk is being captured by the metrics system.
Here is the output of the lsblk command from my system, which shows my disk configuration:
As you can see from the lsblk output, I have sda, sdb, and nvme0n1 disks. However, in my metrics dashboard (or wherever I am viewing the metrics), I only see information for nvme0n1.
Could you please advise on how to make the metrics system recognize and collect data for sda and sdb disks as well? Also, if possible, could you explain why sda and sdb might be missing from the metrics currently?
Thank you for your help!