[help] Deploying WASM workload to Knative Serving service using containerd-wasm-shims throws error

Emilian Filip

Nov 29, 2022, 4:05:03 PM
to Knative Users
Hi,

## Quick description
I'm attempting to create a Knative service that runs a WebAssembly workload using the Spin shim from deislabs' containerd-wasm-shims. The error message says, in short, that the queue-proxy container could not find a "spin.toml" manifest file in its rootfs.

## Longer description
I am part of a university project that aims to run WebAssembly workloads (specifically, apps built with the Spin framework) as Knative Serving services using the containerd wasm shims from deislabs. As part of this effort, we followed the Spin quickstart guide (https://developer.fermyon.com/spin/quickstart) to create the equivalent of a "hello world" project, which we are now trying to deploy as a Knative Serving service. The project (and its folder) is named "wasm-spin-rust".

All the prerequisites for deployment are in place: I have installed the shims, configured the runtime class for Spin, enabled the Knative feature/extension flags that allow specifying runtime classes, built a Docker image of the hello-world app, pushed it to a local registry, and configured Knative to use that registry instead of Docker Hub.
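
For reference, the runtime-class flag was enabled roughly as follows (a minimal sketch; the flag name matches recent Knative Serving releases and should be checked against the installed version):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  # Allow setting runtimeClassName in the revision pod spec
  kubernetes.podspec-runtimeclassname: "enabled"
```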

However, the issue arises when I actually attempt to deploy the workload. The pod is created, the image is acquired, and the container for the image is created, but the queue-proxy container fails to start, reporting that it cannot find the "spin.toml" manifest file.

The following is the Events section obtained by running kubectl describe pod on the pod backing my service, wasm-spin-rust:
```
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  23s                default-scheduler  Successfully assigned default/wasm-spin-rust-00001-deployment-6658d7bb45-dndjn to cloud-vm-42-37
  Normal   Pulled     23s                kubelet            Container image "localhost:5000/wasm-spin-rust:latest" already present on machine
  Normal   Created    23s                kubelet            Created container wasmrust
  Normal   Started    23s                kubelet            Started container wasmrust
  Warning  BackOff    14s (x3 over 21s)  kubelet            Back-off restarting failed container
  Normal   Pulled     1s (x3 over 23s)   kubelet            Container image "gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:a40f6e84de1a0d145d27084a94cc7fa221159e75cafde7d332ac8f4f0aed58fb" already present on machine
  Normal   Created    0s (x3 over 23s)   kubelet            Created container queue-proxy
  Warning  Failed     0s (x3 over 23s)   kubelet            Error: failed to start containerd task "queue-proxy": Cannot read manifest file from "/run/containerd/io.containerd.runtime.v2.task/k8s.io/queue-proxy/rootfs/spin.toml": unknown
```
localhost:5000 is the address of our local Docker registry, and cloud-vm-42-37 is the name of our node (the only one in the cluster; it is also the control-plane node).

spin.toml is the manifest file the Spin framework reads when starting a web server composed of compiled WASM binaries. For some reason queue-proxy needs access to this file but cannot find it, and I am not sure how to proceed from here. Any and all advice would be greatly appreciated.
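
For context, in local development the manifest is consumed directly by the Spin CLI (standard usage, shown here for reference):
```
spin build   # runs the command under [component.build] in spin.toml
spin up      # starts the HTTP trigger, reading spin.toml from the current directory
```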

P.S. This is also my first post in this Google group, I hope the format of my message is satisfactory. I can provide more details on request.

## More details
### Dockerfile
```
# Minimal scratch image: just the compiled WASM module and the Spin manifest,
# which the spin shim expects to find in the container's root filesystem.
FROM scratch
COPY /target/wasm32-wasi/release/wasm_spin_rust.wasm /
COPY spin.toml /
```
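
For completeness, the image was built and pushed to the local registry along these lines (a sketch; the tag and registry address are ours):
```
docker build -t localhost:5000/wasm-spin-rust:latest .
docker push localhost:5000/wasm-spin-rust:latest
```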
### knative.yaml (For deployment)
```
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: wasm-spin-rust
  namespace: default
spec:
  template:
    metadata:
      labels:
        app: wasm-spin-rust
      annotations:
        # Knative concurrency-based autoscaling (default).
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        # Target 10 requests in-flight per pod.
        autoscaling.knative.dev/target: "10"
        # Disable scale to zero with a min scale of 1.
        autoscaling.knative.dev/min-scale: "1"
        # Limit scaling to 10 pods.
        autoscaling.knative.dev/max-scale: "10"
    spec:
      runtimeClassName: wasmtime-spin-v1
      containers:
        - name: wasmrust
          image: localhost:5000/wasm-spin-rust:latest
          imagePullPolicy: IfNotPresent
          command: ["/"]
```
### runtimes.yaml (For specifying the spin runtime to use)
```
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmtime-spin-v1"
handler: "spin"
scheduling:
  nodeSelector:
    "kubernetes.azure.com/wasmtime-spin-v1": "true"
```
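
For completeness, the handler name "spin" corresponds to a runtime entry in the node's containerd config, registered roughly like this (following the containerd-wasm-shims README; the shim binary containerd-shim-spin-v1 must be on the node's PATH):
```
# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
```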
### spin.toml (Found in the root directory of the app project)
```
spin_version = "1"
authors = ["root"]
description = "An example app for the quickstart"
name = "wasm-spin-rust"
trigger = { type = "http", base = "/" }
version = "0.1.0"

[[component]]
id = "wasm-spin-rust"
source = "wasm_spin_rust.wasm"
[component.trigger]
route = "/hi"
[component.build]
command = "cargo build --target wasm32-wasi --release"
```
### kubectl describe pod wasm-spin-rust
```
Name:         wasm-spin-rust-00001-deployment-6658d7bb45-dndjn
Namespace:    default
Priority:     0
Node:         cloud-vm-42-37/146.169.42.37
Start Time:   Tue, 29 Nov 2022 20:47:47 +0000
Labels:       app=wasm-spin-rust
              pod-template-hash=6658d7bb45
              service.istio.io/canonical-name=wasm-spin-rust
              service.istio.io/canonical-revision=wasm-spin-rust-00001
              serving.knative.dev/configuration=wasm-spin-rust
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationUID=0b9bb305-e97f-4576-ad3f-7f23654b29fb
              serving.knative.dev/revision=wasm-spin-rust-00001
              serving.knative.dev/revisionUID=9681715a-1cc1-4169-9f5a-583af9c6503a
              serving.knative.dev/service=wasm-spin-rust
              serving.knative.dev/serviceUID=082b6868-7bd9-4c26-b434-d24b83e4a6b4
Annotations:  autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
              autoscaling.knative.dev/max-scale: 10
              autoscaling.knative.dev/metric: concurrency
              autoscaling.knative.dev/min-scale: 1
              autoscaling.knative.dev/target: 10
              cni.projectcalico.org/podIP: 192.168.0.111/32
              cni.projectcalico.org/podIPs: 192.168.0.111/32
              serving.knative.dev/creator: kubernetes-admin
Status:       Running
IP:           192.168.0.111
IPs:
  IP:           192.168.0.111
Controlled By:  ReplicaSet/wasm-spin-rust-00001-deployment-6658d7bb45
Containers:
  wasmrust:
    Container ID:  containerd://593662fa2f731538a1dd224f3b70ec8937b7a783ee23656dfaea3071d0d6d6d4
    Image:         localhost:5000/wasm-spin-rust:latest
    Image ID:      localhost:5000/wasm-spin-rust@sha256:8fb277c7cad3ec5ff6b2a42f07ba9649bbb887f55c32c7814b1fa5cdf84993fb
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /
    State:          Running
      Started:      Tue, 29 Nov 2022 20:47:48 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      PORT:             8080
      K_REVISION:       wasm-spin-rust-00001
      K_CONFIGURATION:  wasm-spin-rust
      K_SERVICE:        wasm-spin-rust
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2svdl (ro)
  queue-proxy:
    Container ID:   containerd://09e9812dbbc9db98c086814a8fdd6fe74c09d5e6f212a3dd790eec891f3b6361
    Image:          gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:a40f6e84de1a0d145d27084a94cc7fa221159e75cafde7d332ac8f4f0aed58fb
    Image ID:       gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:a40f6e84de1a0d145d27084a94cc7fa221159e75cafde7d332ac8f4f0aed58fb
    Ports:          8022/TCP, 9090/TCP, 9091/TCP, 8012/TCP, 8112/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to start containerd task "09e9812dbbc9db98c086814a8fdd6fe74c09d5e6f212a3dd790eec891f3b6361": Cannot read manifest file from "/run/containerd/io.containerd.runtime.v2.task/k8s.io/09e9812dbbc9db98c086814a8fdd6fe74c09d5e6f212a3dd790eec891f3b6361/rootfs/spin.toml": unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 01:00:00 +0100
      Finished:     Tue, 29 Nov 2022 20:58:46 +0000
    Ready:          False
    Restart Count:  7
    Requests:
      cpu:      25m
    Readiness:  http-get http://:8012/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SERVING_NAMESPACE:                 default
      SERVING_SERVICE:                   wasm-spin-rust
      SERVING_CONFIGURATION:             wasm-spin-rust
      SERVING_REVISION:                  wasm-spin-rust-00001
      QUEUE_SERVING_PORT:                8012
      QUEUE_SERVING_TLS_PORT:            8112
      CONTAINER_CONCURRENCY:             0
      REVISION_TIMEOUT_SECONDS:          300
      SERVING_POD:                       wasm-spin-rust-00001-deployment-6658d7bb45-dndjn (v1:metadata.name)
      SERVING_POD_IP:                     (v1:status.podIP)
      SERVING_LOGGING_CONFIG:
      SERVING_LOGGING_LEVEL:
      SERVING_REQUEST_LOG_TEMPLATE:      {"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl": "{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}", "status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent": "{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}", "serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}", "latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"}, "traceId": "{{index .Request.Header "X-B3-Traceid"}}"}
      SERVING_ENABLE_REQUEST_LOG:        false
      SERVING_REQUEST_METRICS_BACKEND:   prometheus
      TRACING_CONFIG_BACKEND:            none
      TRACING_CONFIG_ZIPKIN_ENDPOINT:
      TRACING_CONFIG_DEBUG:              false
      TRACING_CONFIG_SAMPLE_RATE:        0.1
      USER_PORT:                         8080
      SYSTEM_NAMESPACE:                  knative-serving
      METRICS_DOMAIN:                    knative.dev/internal/serving
      SERVING_READINESS_PROBE:           {"tcpSocket":{"port":8080,"host":"127.0.0.1"},"successThreshold":1}
      ENABLE_PROFILING:                  false
      SERVING_ENABLE_PROBE_REQUEST_LOG:  false
      METRICS_COLLECTOR_ADDRESS:
      CONCURRENCY_STATE_ENDPOINT:
      CONCURRENCY_STATE_TOKEN_PATH:      /var/run/secrets/tokens/state-token
      HOST_IP:                            (v1:status.hostIP)
      ENABLE_HTTP2_AUTO_DETECTION:       false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2svdl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-2svdl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.azure.com/wasmtime-spin-v1=true
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/wasm-spin-rust-00001-deployment-6658d7bb45-dndjn to cloud-vm-42-37
  Normal   Pulled     15m                kubelet            Container image "localhost:5000/wasm-spin-rust:latest" already present on machine
  Normal   Created    15m                kubelet            Created container wasmrust
  Normal   Started    15m                kubelet            Started container wasmrust
  Normal   Pulled     14m (x4 over 15m)  kubelet            Container image "gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:a40f6e84de1a0d145d27084a94cc7fa221159e75cafde7d332ac8f4f0aed58fb" already present on machine
  Normal   Created    14m (x4 over 15m)  kubelet            Created container queue-proxy
  Warning  Failed     14m (x4 over 15m)  kubelet            Error: failed to start containerd task "queue-proxy": Cannot read manifest file from "/run/containerd/io.containerd.runtime.v2.task/k8s.io/queue-proxy/rootfs/spin.toml": unknown
  Warning  BackOff    5s (x77 over 15m)  kubelet            Back-off restarting failed container
```

Paul Schweigert

Nov 29, 2022, 5:17:10 PM
to Knative Users
Sidestepping the question of _why_ queue-proxy needs access, if you need to add a file to queue-proxy I think one of the simpler things to do is use the existing queue-proxy image [1] as a base to build your own queue-proxy [2] that includes the spin.toml at the expected place. Probably not ideal as a long-term solution, but it should at least let you get started.

[1] Release images for QP can be found at: https://console.cloud.google.com/gcr/images/knative-releases/us/knative.dev/serving/cmd/queue
[2] You can set a custom image for QP in deployment configmap: https://github.com/knative/serving/blob/0ea12f418b7283aaefbb915c22b497cf30828e81/config/core/configmaps/deployment.yaml#L29
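
A minimal sketch of that approach (the base tag is illustrative; pick the image matching your installed Serving version, then point the queue-sidecar image setting in the deployment ConfigMap [2] at the result):
```
# Hypothetical Dockerfile: extend the released queue-proxy image with the
# Spin manifest so the shim can find it in the container rootfs.
FROM gcr.io/knative-releases/knative.dev/serving/cmd/queue:v1.8.1
COPY spin.toml /
```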

---
Paul Schweigert
Knative TOC / Serving WG Lead
IBM Open Source Development

Emilian Filip

Nov 30, 2022, 3:35:16 PM
to Knative Users
Hi Paul,

Thanks for the idea. I've managed to pull the queue-proxy image v1.8.1 from the container registry and add the spin.toml file to it (by creating a temporary container from the image, using `docker cp`, and then committing the container back to my local registry under the same name). I also included the compiled WASM binary that actually runs the app, using the same process. I then changed the config-deployment ConfigMap to use my local queue-proxy image and continued with the deployment.
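
The patching steps looked roughly like this (a sketch of the process described above; the image reference is a placeholder for the actual digest):
```
docker create --name qp-patch <queue-proxy-image>   # temporary container from the pulled image
docker cp spin.toml qp-patch:/                      # add the Spin manifest at /
docker cp wasm_spin_rust.wasm qp-patch:/            # add the compiled WASM binary
docker commit qp-patch localhost:5000/queue-proxy:latest
docker push localhost:5000/queue-proxy:latest
docker rm qp-patch
```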

Confoundingly, the previous error disappeared and the kubectl events look fine, yet the queue-proxy container is still stuck in CrashLoopBackOff for no apparent reason:
```
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m20s                  default-scheduler  Successfully assigned default/wasm-spin-rust-00001-deployment-6665c8b4b7-89vjg to cloud-vm-42-37
  Normal   Pulled     5m19s                  kubelet            Container image "localhost:5000/wasm-spin-rust:latest" already present on machine
  Normal   Created    5m19s                  kubelet            Created container wasmrust
  Normal   Started    5m19s                  kubelet            Started container wasmrust
  Normal   Pulled     5m19s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 119.511824ms
  Normal   Pulled     5m19s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 76.775515ms
  Normal   Pulled     4m57s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 143.695006ms
  Normal   Pulling    4m25s (x4 over 5m19s)  kubelet            Pulling image "localhost:5000/queue-proxy:latest"
  Normal   Created    4m24s (x4 over 5m19s)  kubelet            Created container queue-proxy
  Normal   Started    4m24s (x4 over 5m19s)  kubelet            Started container queue-proxy
  Normal   Pulled     4m24s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 97.262598ms
  Warning  BackOff    11s (x29 over 5m18s)   kubelet            Back-off restarting failed container
```
As you can see, it seems that the queue-proxy container keeps failing and getting re-launched periodically. Running `kubectl get pods` confirms this, as the wasm-spin-rust pod is already on its sixth restart:
```
NAME                                                       READY   STATUS             RESTARTS      AGE
wasm-spin-rust-00001-deployment-6665c8b4b7-89vjg           1/2     CrashLoopBackOff   6 (12s ago)   6m1s
```
I am at a loss as to what could be causing this. Perhaps the key lies in the queue-proxy code itself rather than in some faulty configuration?

The only error reporting I managed to get was in the queue-proxy container section of `kubectl describe pod wasm-spin-rust`, which shows exit code 137, corresponding to a SIGKILL; however, that is usually accompanied by an OOMKilled status, which is absent here. Could it nonetheless be the case that the pod is running out of memory?
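
For reference, the terminated state can also be queried directly (a standard kubectl jsonpath query, using our pod name):
```
kubectl get pod wasm-spin-rust-00001-deployment-6665c8b4b7-89vjg \
  -o jsonpath='{.status.containerStatuses[?(@.name=="queue-proxy")].lastState.terminated}'
```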

### Full result of `kubectl describe pod wasm-spin-rust`
```
Name:         wasm-spin-rust-00001-deployment-6665c8b4b7-89vjg
Namespace:    default
Priority:     0
Node:         cloud-vm-42-37/146.169.42.37
Start Time:   Wed, 30 Nov 2022 20:21:27 +0000
Labels:       app=wasm-spin-rust
              pod-template-hash=6665c8b4b7
              service.istio.io/canonical-name=wasm-spin-rust
              service.istio.io/canonical-revision=wasm-spin-rust-00001
              serving.knative.dev/configuration=wasm-spin-rust
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationUID=13b5f9a8-6f1c-4f2d-a436-deeaa4c0508e
              serving.knative.dev/revision=wasm-spin-rust-00001
              serving.knative.dev/revisionUID=d89bf84f-1666-408b-b450-280f23873db1
              serving.knative.dev/service=wasm-spin-rust
              serving.knative.dev/serviceUID=2cf035fe-8d3d-484b-8fd6-e6c0989b5d64
Annotations:  autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
              autoscaling.knative.dev/max-scale: 10
              autoscaling.knative.dev/metric: concurrency
              autoscaling.knative.dev/min-scale: 1
              autoscaling.knative.dev/target: 10
              cni.projectcalico.org/podIP: 192.168.0.124/32
              cni.projectcalico.org/podIPs: 192.168.0.124/32
              serving.knative.dev/creator: kubernetes-admin
Status:       Running
IP:           192.168.0.124
IPs:
  IP:           192.168.0.124
Controlled By:  ReplicaSet/wasm-spin-rust-00001-deployment-6665c8b4b7
Containers:
  wasmrust:
    Container ID:  containerd://4dcd1f5da784fe1315d848837dd3b0391563e0d13475815f0c7c672a8e3d2550
    Image:         localhost:5000/wasm-spin-rust:latest
    Image ID:      localhost:5000/wasm-spin-rust@sha256:21b347564e48d727ef17ef539689063d994078e9c7d26542cd129f8330a9890b
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /
    State:          Running
      Started:      Wed, 30 Nov 2022 20:21:28 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      PORT:             8080
      K_REVISION:       wasm-spin-rust-00001
      K_CONFIGURATION:  wasm-spin-rust
      K_SERVICE:        wasm-spin-rust
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7v9d4 (ro)
  queue-proxy:
    Container ID:   containerd://42ec85bc9a1ecbd313f6489f7ea1b1414bc4d3aa808717b845503c3f9af07e56
    Image:          localhost:5000/queue-proxy:latest
    Image ID:       localhost:5000/queue-proxy@sha256:043867ca5d88b7ff73aaaab5c882aa3ae9e6a5c4401b65b811b1631933022af6
    Ports:          8022/TCP, 9090/TCP, 9091/TCP, 8012/TCP, 8112/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 30 Nov 2022 20:24:27 +0000
      Finished:     Wed, 30 Nov 2022 20:24:27 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:      25m
    Readiness:  http-get http://:8012/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SERVING_NAMESPACE:                 default
      SERVING_SERVICE:                   wasm-spin-rust
      SERVING_CONFIGURATION:             wasm-spin-rust
      SERVING_REVISION:                  wasm-spin-rust-00001
      QUEUE_SERVING_PORT:                8012
      QUEUE_SERVING_TLS_PORT:            8112
      CONTAINER_CONCURRENCY:             0
      REVISION_TIMEOUT_SECONDS:          300
      SERVING_POD:                       wasm-spin-rust-00001-deployment-6665c8b4b7-89vjg (v1:metadata.name)
      SERVING_POD_IP:                     (v1:status.podIP)
      SERVING_LOGGING_CONFIG:
      SERVING_LOGGING_LEVEL:
      SERVING_REQUEST_LOG_TEMPLATE:      {"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl": "{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}", "status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent": "{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}", "serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}", "latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"}, "traceId": "{{index .Request.Header "X-B3-Traceid"}}"}
      SERVING_ENABLE_REQUEST_LOG:        false
      SERVING_REQUEST_METRICS_BACKEND:   prometheus
      TRACING_CONFIG_BACKEND:            none
      TRACING_CONFIG_ZIPKIN_ENDPOINT:
      TRACING_CONFIG_DEBUG:              false
      TRACING_CONFIG_SAMPLE_RATE:        0.1
      USER_PORT:                         8080
      SYSTEM_NAMESPACE:                  knative-serving
      METRICS_DOMAIN:                    knative.dev/internal/serving
      SERVING_READINESS_PROBE:           {"tcpSocket":{"port":8080,"host":"127.0.0.1"},"successThreshold":1}
      ENABLE_PROFILING:                  false
      SERVING_ENABLE_PROBE_REQUEST_LOG:  false
      METRICS_COLLECTOR_ADDRESS:
      CONCURRENCY_STATE_ENDPOINT:
      CONCURRENCY_STATE_TOKEN_PATH:      /var/run/secrets/tokens/state-token
      HOST_IP:                            (v1:status.hostIP)
      ENABLE_HTTP2_AUTO_DETECTION:       false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7v9d4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-7v9d4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.azure.com/wasmtime-spin-v1=true
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m20s                  default-scheduler  Successfully assigned default/wasm-spin-rust-00001-deployment-6665c8b4b7-89vjg to cloud-vm-42-37
  Normal   Pulled     5m19s                  kubelet            Container image "localhost:5000/wasm-spin-rust:latest" already present on machine
  Normal   Created    5m19s                  kubelet            Created container wasmrust
  Normal   Started    5m19s                  kubelet            Started container wasmrust
  Normal   Pulled     5m19s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 119.511824ms
  Normal   Pulled     5m19s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 76.775515ms
  Normal   Pulled     4m57s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 143.695006ms
  Normal   Pulling    4m25s (x4 over 5m19s)  kubelet            Pulling image "localhost:5000/queue-proxy:latest"
  Normal   Created    4m24s (x4 over 5m19s)  kubelet            Created container queue-proxy
  Normal   Started    4m24s (x4 over 5m19s)  kubelet            Started container queue-proxy
  Normal   Pulled     4m24s                  kubelet            Successfully pulled image "localhost:5000/queue-proxy:latest" in 97.262598ms
  Warning  BackOff    11s (x29 over 5m18s)   kubelet            Back-off restarting failed container
```

Paul Schweigert

Nov 30, 2022, 5:12:19 PM
to Knative Users
It's kind of hard to say what's happening here (as I don't know anything about your application).

That said, is there a reason you added the WASM binary to queue-proxy? Unless you're seeing an error indicating that QP needs the binary (and I can't think of why it ever would), I'd leave it out and start by adding just the config file.

---
Paul Schweigert
Knative TOC / Serving WG Lead
IBM Open Source Development
