Vector collects host metrics but fails to write them to the CnosDB database; the log is as follows:
The application panicked (crashed).
Message: begin <= end (34 <= 10) when slicing `mountpoint=/sys/fs/cgroup/net_cls,net_prio,collector=filesystem,device=cgroup,host=cicd_ujv23.100020,filesystem=cgroup,metric_type=gauge value=0 1695122686073428698
host.filesystem_total_bytes,collector=filesystem,mountpoint=/sys/fs/cgroup/net_cls,net_prio`[...]
Location: common/protocol_parser/src/lib.rs:235
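For context, the mountpoint tag value here is /sys/fs/cgroup/net_cls,net_prio, which contains a comma that is not escaped in the line-protocol text being parsed. Below is a minimal Rust sketch (an illustration only, not CnosDB's actual parser in common/protocol_parser/src/lib.rs) of what a plain comma split over the tag section does to such a value:

// Illustration only: a naive comma split of an InfluxDB-style line-protocol
// tag section, using the data from the panic message above.
fn main() {
    // The mountpoint value itself contains a comma.
    let line = "host.filesystem_total_bytes,collector=filesystem,mountpoint=/sys/fs/cgroup/net_cls,net_prio,device=cgroup,host=cicd_ujv23.100020 value=0 1695122686073428698";

    // Everything before the first space is "measurement,tag1=v1,tag2=v2,...".
    let tag_section = line.split(' ').next().unwrap();

    for fragment in tag_section.split(',').skip(1) {
        match fragment.split_once('=') {
            Some((key, value)) => println!("tag: {key} = {value}"),
            // "net_prio" lands here: a fragment with no '=' at all, after which
            // assumptions about where the next key starts no longer hold.
            None => println!("malformed fragment: {fragment}"),
        }
    }
}

In InfluxDB-style line protocol a comma inside a tag value has to be escaped as \, so an unescaped comma can leave a parser computing slice offsets that no longer match the text, which would be consistent with the begin <= end (34 <= 10) slice panic.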
I want to collect host metrics with Vector and write them to CnosDB to set up visual monitoring. My Vector configuration is as follows:
data_dir = "/var/lib/vector"
[sources.hostmetrics001]
type = "host_metrics"
[transforms.tf001hostmetrics001]
type = "filter"
inputs = ["hostmetrics001"]
condition = '.name != "filesystem_used_ratio"'
[transforms.tfhostmetrics001]
type = "remap"
inputs = ["tf001hostmetrics001"]
source = """
# Similar to the Vector Log case, except that these fields must be added to the metric's tags.
# The table written to is the metric's namespace and name (namespace.name), so no table name needs to be specified.
.tags._tenant = "cnosdb"
.tags._database = "hostmetrics"
.tags._user = "root"
.tags._password = ""
"""
[sinks.skhostmetrics001]
type = "vector"
inputs = ["tfhostmetrics001"]
address = "127.0.0.1:12006"
Then I start Vector:
sudo vector --config-toml /etc/vector/vector.toml
The Vector log is as follows:
2023-09-20T02:53:47.849792Z INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=info,rdkafka=info,buffers=info,lapin=info,kube=info"
2023-09-20T02:53:47.862010Z INFO vector::app: Loading configs. paths=["/etc/vector/vector.toml"]
2023-09-20T02:53:47.868310Z INFO source{component_kind="source" component_id=hostmetrics001 component_type=host_metrics component_name=hostmetrics001}: vector::sources::host_metrics: PROCFS_ROOT is unset. Using default /proc for procfs root.
2023-09-20T02:53:47.868379Z INFO source{component_kind="source" component_id=hostmetrics001 component_type=host_metrics component_name=hostmetrics001}: vector::sources::host_metrics: SYSFS_ROOT is unset. Using default /sys for sysfs root.
2023-09-20T02:53:47.885149Z INFO vector::topology::running: Running healthchecks.
2023-09-20T02:53:47.885348Z INFO vector: Vector has started. debug="false" version="0.31.0" arch="x86_64" revision="0f13b22 2023-07-06 13:52:34.591204470"
2023-09-20T02:53:47.885410Z INFO vector::app: API is disabled, enable by setting `api.enabled` to `true` and use commands like `vector top`.
2023-09-20T02:53:47.927543Z INFO vector::topology::builder: Healthcheck passed.
2023-09-20T02:53:49.046900Z WARN sink{component_kind="sink" component_id=skhostmetrics001 component_type=vector component_name=skhostmetrics001}:request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=Request failed: status: Cancelled, message: "h2 protocol error: http2 error: stream error received: stream no longer needed", details: [], metadata: MetadataMap { headers: {} } internal_log_rate_limit=true
2023-09-20T02:53:50.090529Z WARN sink{component_kind="sink" component_id=skhostmetrics001 component_type=vector component_name=skhostmetrics001}:request{request_id=1}: vector::sinks::util::retries: Internal log [Retrying after error.] is being suppressed to avoid flooding.
Then I check the CnosDB log, as follows:
The application panicked (crashed).
Message: begin <= end (34 <= 10) when slicing `mountpoint=/sys/fs/cgroup/net_cls,net_prio,collector=filesystem,host=cicd_ujv23.100020,filesystem=cgroup,device=cgroup,metric_type=gauge value=0 1695178427886213462
host.filesystem_total_bytes,device=cgroup,collector=filesystem,mountpoint=/sys/fs/cgroup/ne`[...]
Location: common/protocol_parser/src/lib.rs:235
Why does CnosDB crash?