No idea about an edgecore error message


Peak P

May 13, 2024, 2:10:56 AM5/13/24
to KubeEdge
Hi, I encountered an edgecore error message: cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/mapper/openeuler-root\"" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs". Could you please help me find the cause of this error? Thank you!
My edge node is a VMware virtual machine (arch: x86_64, OS: openEuler 22.03), with containerd as the container runtime. containerd starts successfully, and edgecore was deployed by keadm 1.15.2. Here are the logs of the edgecore service:
May 13 13:55:37 kubeedge-edge1 systemd[1]: Started edgecore.service.
May 13 13:55:39 kubeedge-edge1 edgecore[820]: W0513 13:55:39.572690     820 validation_others.go:24] NodeIP is empty , use default ip which can connect to cloud.
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.573066     820 server.go:102] Version: v1.15.2
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.821171     820 sql.go:21] Begin to register twin db model
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.824267     820 module.go:52] Module twin registered successfully
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.872831     820 module.go:52] Module edged registered successfully
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.873676     820 module.go:52] Module websocket registered successfully
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.873916     820 module.go:52] Module eventbus registered successfully
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.874426     820 metamanager.go:41] Begin to register metamanager db model
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.877073     820 module.go:52] Module metamanager registered successfully
May 13 13:55:39 kubeedge-edge1 edgecore[820]: W0513 13:55:39.877260     820 module.go:55] Module servicebus is disabled, do not register
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.878106     820 edgestream.go:55] Get node local IP address successfully: 192.168.3.101
May 13 13:55:39 kubeedge-edge1 edgecore[820]: W0513 13:55:39.885151     820 module.go:55] Module edgestream is disabled, do not register
May 13 13:55:39 kubeedge-edge1 edgecore[820]: W0513 13:55:39.885276     820 module.go:55] Module testManager is disabled, do not register
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `device` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `device_attr` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `device_twin` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `sub_topics` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `meta` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `meta_v2` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: table `target_urls` already exists, skip
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.911991     820 core.go:46] starting module twin
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.912317     820 core.go:46] starting module edged
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.912391     820 core.go:46] starting module websocket
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.912437     820 core.go:46] starting module eventbus
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.912665     820 process.go:119] Begin to sync sqlite
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.913101     820 core.go:46] starting module metamanager
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.927055     820 dmiworker.go:67] dmi worker start
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.938061     820 dmiworker.go:215] success to init device model info from db
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.938193     820 dmiworker.go:235] success to init device info from db
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.938260     820 dmiworker.go:255] success to init device mapper info from db
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.941297     820 server_others.go:13] init uds socket: /etc/kubeedge/dmi.sock
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.948449     820 server.go:105] Subscribe internal topic to $hw/events/upload/#
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.948489     820 server.go:105] Subscribe internal topic to $hw/events/device/+/state/update
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.948502     820 server.go:105] Subscribe internal topic to $hw/events/device/+/twin/+
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.948511     820 server.go:105] Subscribe internal topic to $hw/events/node/+/membership/get
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.948521     820 server.go:105] Subscribe internal topic to SYS/dis/upload_records
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.948531     820 server.go:105] Subscribe internal topic to +/user/#
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.949036     820 server.go:113] list edge-hub-cli-topics status, no record, skip sync
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.949396     820 eventbus.go:87] Launch internal mqtt broker tcp://127.0.0.1:1884 successfully
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.949479     820 edged.go:122] Starting edged...
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.993611     820 server.go:368] "Kubelet version" kubeletVersion="v0.0.0-master+$Format:%H$"
May 13 13:55:39 kubeedge-edge1 edgecore[820]: I0513 13:55:39.993852     820 server.go:370] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.029660     820 certmanager.go:165] Certificate rotation is enabled.
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.034616     820 websocket.go:51] Websocket start to connect Access
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.066997     820 server.go:419] "No api server defined - no events will be sent to API server"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.071419     820 server.go:521] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.092442     820 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.093401     820 ws.go:46] dial wss://192.168.3.8:10000/e632aba927ea4ac2b575ec1603d56f10/kubeedge-edge1/events successfully
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.093601     820 websocket.go:93] Websocket connect to cloud access successful
May 13 13:55:40 kubeedge-edge1 edgecore[820]: W0513 13:55:40.094808     820 eventbus.go:168] Action not found
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.098867     820 process.go:301] DeviceTwin receive msg
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.099185     820 process.go:70] Send msg to the CommModule module in twin
May 13 13:55:40 kubeedge-edge1 edgecore[820]: E0513 13:55:40.099473     820 process.go:417] metamanager not supported operation: connect
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.103246     820 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/edged ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.104575     820 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.105090     820 container_manager_linux.go:307] "Creating device plugin manager"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.113191     820 state_mem.go:36] "Initialized new in-memory state store"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.148540     820 kubelet.go:259] "Adding static pod path" path="/etc/kubeedge/manifests"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.154936     820 kuberuntime_manager.go:243] "Container runtime initialized" containerRuntime="containerd" version="1.6.31" apiVersion="v1"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.167565     820 server.go:784] "Started kubelet"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.173815     820 server.go:189] "Starting to listen read-only" address="192.168.3.101" port=10350
May 13 13:55:40 kubeedge-edge1 edgecore[820]: E0513 13:55:40.174990     820 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: E0513 13:55:40.177652     820 kubelet.go:1282] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.183132     820 scope.go:115] "RemoveContainer" containerID="96247db62621b10ec02397514d93d7dc88cb6b4452ac1afdec15b38dcd1ee399"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.230915     820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.236405     820 server.go:430] "Adding debug handlers to kubelet server"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.282255     820 volume_manager.go:293] "Starting Kubelet Volume Manager"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.290214     820 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.313371     820 scope.go:115] "RemoveContainer" containerID="e48c3b6b4d56030caa631af98a4a77835462de043c08d069d245e1746b9a2947"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.564674     820 kubelet_node_status.go:68] "Attempting to register node" node="kubeedge-edge1"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: E0513 13:55:40.589179     820 imitator.go:266] failed to unmarshal message content to unstructured obj: Object 'Kind' is missing in '{"metadata":{"name":"kubeedge-edge1","creationTimestamp":null,"labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"kubeedge-edge1","kubernetes.io/os":"linux","node-role.kubernetes.io/agent":"","node-role.kubernetes.io/edge":""},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{},"status":{"capacity":{"cpu":"2","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"3457076Ki","pods":"110"},"allocatable":{"cpu":"2","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"3354676Ki","pods":"110"},"conditions":[{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2024-05-13T05:55:40Z","lastTransitionTime":"2024-05-13T05:55:40Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2024-05-13T05:55:40Z","lastTransitionTime":"2024-05-13T05:55:40Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"PIDPressure","status":"False","lastHeartbeatTime":"2024-05-13T05:55:40Z","lastTransitionTime":"2024-05-13T05:55:40Z","reason":"KubeletHasSufficientPID","message":"kubelet has sufficient PID available"},{"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-13T05:55:40Z","lastTransitionTime":"2024-05-13T05:55:40Z","reason":"KubeletNotReady","message":"[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"}],"addresses":[{"type":"InternalIP","address":"192.168.3.101"},{"type":"Hostname","address":"kubeedge-edge1"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10350}},"nodeInfo":{"machineID":"319f41bab7ba4c0fa8d1ac2d7bb3c635","systemUUID":"23544d56-1674-6bea-9862-2d086b66eeeb","bootID":"05781b40-1fd2-408a-910d-6eb30c617609","kernelVersion":"5.10.0-182.0.0.95.oe2203sp3.x86_64","osImage":"openEuler 22.03 (LTS-SP3)","containerRuntimeVersion":"containerd://1.6.31","kubeletVersion":"v1.26.10-kubeedge-v1.15.2","kubeProxyVersion":"v0.0.0-master+$Format:%H$","operatingSystem":"linux","architecture":"amd64"}}}'
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.636283     820 kubelet_node_status.go:106] "Node was previously registered" node="kubeedge-edge1"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.647303     820 kubelet_node_status.go:71] "Successfully registered node" node="kubeedge-edge1"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.688956     820 kuberuntime_manager.go:1107] "Updating runtime config through cri with podcidr" CIDR="10.244.1.0/24"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.698627     820 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.1.0/24"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.699785     820 setters.go:462] "Node became not ready" node="kubeedge-edge1" condition={Type:Ready Status:False LastHeartbeatTime:2024-05-13 13:55:40.699647495 +0800 CST m=+2.620862296 LastTransitionTime:2024-05-13 13:55:40.699647495 +0800 CST m=+2.620862296 Reason:KubeletNotReady Message:[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]}
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.986911     820 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.986962     820 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.987013     820 state_mem.go:36] "Initialized new in-memory state store"
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.989539     820 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.989685     820 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
May 13 13:55:40 kubeedge-edge1 edgecore[820]: I0513 13:55:40.995360     820 policy_none.go:49] "None policy: Start"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.001416     820 memory_manager.go:169] "Starting memorymanager" policy="None"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.001710     820 state_mem.go:35] "Initializing new in-memory state store"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.002996     820 state_mem.go:75] "Updated machine memory state"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.106022     820 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.108212     820 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: E0513 13:55:41.111069     820 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/mapper/openeuler-root\"" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.479554     820 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.703241     820 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.703382     820 status_manager.go:176] "Starting to sync pod status with apiserver"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.703489     820 kubelet.go:1999] "Starting kubelet main sync loop"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: E0513 13:55:41.716537     820 kubelet.go:2023] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.729684     820 kubelet_node_status.go:424] "Fast updating node status as it just became ready"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.817759     820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb5be3af4609baf3c872bbf5418e286fab847692ad12ec4565f00a91b47afe5"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.817890     820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d426cd215193a0abbb269c3b79cb0d745095d66ae376123d2b8ad894c52ba4"
May 13 13:55:41 kubeedge-edge1 edgecore[820]: I0513 13:55:41.817917     820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62d053610bbc8714eeba18368a6c503f1e8de63dad7de48ac46b70e493ef93f"
May 13 13:55:49 kubeedge-edge1 edgecore[820]: I0513 13:55:49.990087     820 topology_manager.go:210] "Topology Admit Handler" podUID=e44ffb98-41af-4cd3-9be0-296ef7fe0ba7 podNamespace="default" podName="mqtt-kubeedge"
May 13 13:55:49 kubeedge-edge1 edgecore[820]: E0513 13:55:49.990239     820 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1eeb8ef4-6d0f-4d28-91fd-ada98f17cfa3" containerName="nginx"
May 13 13:55:49 kubeedge-edge1 edgecore[820]: I0513 13:55:49.990306     820 memory_manager.go:346] "RemoveStaleState removing state" podUID="1eeb8ef4-6d0f-4d28-91fd-ada98f17cfa3" containerName="nginx"
May 13 13:55:49 kubeedge-edge1 edgecore[820]: I0513 13:55:49.999466     820 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
May 13 13:55:50 kubeedge-edge1 edgecore[820]: I0513 13:55:50.004447     820 topology_manager.go:210] "Topology Admit Handler" podUID=1eeb8ef4-6d0f-4d28-91fd-ada98f17cfa3 podNamespace="default" podName="nginx"
May 13 13:55:50 kubeedge-edge1 edgecore[820]: I0513 13:55:50.054111     820 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mqtt-path\" (UniqueName: \"kubernetes.io/host-path/e44ffb98-41af-4cd3-9be0-296ef7fe0ba7-mqtt-path\") pod \"mqtt-kubeedge\" (UID: \"e44ffb98-41af-4cd3-9be0-296ef7fe0ba7\") " pod="default/mqtt-kubeedge"
May 13 13:55:50 kubeedge-edge1 edgecore[820]: I0513 13:55:50.054161     820 reconciler.go:41] "Reconciler: start to sync state"
May 13 13:55:50 kubeedge-edge1 edgecore[820]: E0513 13:55:50.255513     820 serviceaccount.go:112] query meta "default"/"default"/[]string(nil)/3607/v1.BoundObjectReference{Kind:"Pod", APIVersion:"v1", Name:"nginx", UID:"1eeb8ef4-6d0f-4d28-91fd-ada98f17cfa3"} length error
May 13 13:55:51 kubeedge-edge1 edgecore[820]: E0513 13:55:51.111902     820 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/mapper/openeuler-root\"" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
[... the same cri_stats_provider.go:455 error repeats every 10 seconds ...]
May 13 13:58:01 kubeedge-edge1 edgecore[820]: E0513 13:58:01.146366     820 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/mapper/openeuler-root\"" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
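For context, the error comes from cadvisor's filesystem-stats lookup inside the kubelet code that edgecore embeds: it finds the overlayfs mountpoint but cannot map its backing device, /dev/mapper/openeuler-root, to filesystem info. A first diagnostic step on the edge node (a sketch of things worth checking, not a confirmed fix) is to compare how the mount table and block-device tools report that device; the `|| true` guards only keep the script going if a path is absent on a given machine:

```shell
#!/bin/sh
# Show which filesystem and source device back the containerd
# snapshotter directory named in the error message.
df -h /var/lib/containerd || true

# Print the raw kernel mount entries for containerd paths. If the
# kernel reports the device as /dev/dm-0 while other tools report
# /dev/mapper/openeuler-root (or vice versa), that naming mismatch
# is one plausible reason cadvisor's device lookup fails.
grep containerd /proc/mounts || true

# Confirm the device-mapper node itself exists.
ls -l /dev/mapper/ || true
```

If the outputs disagree on the device name, or the /dev/mapper node is missing, that detail would be worth including alongside the logs when reporting the issue.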