CentOS examples


Martinez Gonzalez, Joshua

Feb 20, 2023, 4:21:44 PM
to us...@gramineproject.io

I'm trying to use GSC for some CentOS containers, but I don't understand what is needed to make it work. I've checked the documentation for the .manifest files, but I haven't had any success with them. Could you possibly provide an example with a CentOS container?

BR.

Dmitrii Kuvaiskii

Feb 21, 2023, 2:47:15 AM
to Martinez Gonzalez, Joshua, us...@gramineproject.io
Dear Joshua,

Did you check our documentation, which includes an example? Please see:
https://gramine.readthedocs.io/projects/gsc/en/latest/#example

Roughly, you need to do the following:
1. Modify default `config.yaml` to use `centos:8` (if your original
Docker image is CentOS 8)
2. Run `gsc build` on your original Docker image
3. Run `gsc sign-image` on the image built at step 2
4. You're done, now you can run the "graminized" Docker image via `docker run`

You will of course need an (abridged) manifest file that is tailored
to your original Docker image. Examples can be found here:
https://github.com/gramineproject/gsc/tree/master/test
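
For illustration, the end-to-end flow could look roughly like this (the
image name below is a placeholder, not something from your setup):

# in the GSC repository, after editing config.yaml as in step 1
./gsc build -c config.yaml your-centos-image:latest test/generic.manifest
./gsc sign-image your-centos-image:latest enclave-key.pem
# the resulting image gets a "gsc-" prefix
docker run --device=/dev/sgx_enclave gsc-your-centos-image:latest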



--
Yours sincerely,
Dmitrii Kuvaiskii

Martinez Gonzalez, Joshua

Mar 2, 2023, 12:24:05 PM
to Dmitrii Kuvaiskii, us...@gramineproject.io
I was able to sign the image and run it, but now I have a big doubt: how can I ensure that the image is running with SGX? I ran the container on a node that has SGX enabled and it worked, but I also ran it on a node that does not have SGX enabled and it also worked.

BR.

Dmitrii Kuvaiskii

Mar 3, 2023, 2:59:36 AM
to Martinez Gonzalez, Joshua, us...@gramineproject.io
> I was able to sign the image and run it, but now I have a big doubt: how can I ensure that the image is running with SGX? I ran the container on a node that has SGX enabled and it worked, but I also ran it on a node that does not have SGX enabled and it also worked.

You would need to provide us with exact information about what you've done.
1. What commands did you execute to "graminize" the original Docker
image with GSC?
2. What command do you use to start the resulting Docker image?
3. What is the workload that you're testing, what is the output?
4. How do you run the Docker image on the SGX-enabled node and the
non-SGX-enabled node? Please paste the logs.


Martinez Gonzalez, Joshua

Mar 6, 2023, 11:56:46 AM
to Dmitrii Kuvaiskii, us...@gramineproject.io
Answering your questions:
1. What commands did you execute to "graminize" the original Docker image with GSC?
docker build --tag ive-pcie-test --file Dockerfile .
cd ../..
./gsc build -c config.yaml --insecure-args ive-pcie-test:latest test/generic.manifest
./gsc sign-image ive-pcie-test:latest enclave-key.pem
2. What command do you use to start the resulting Docker image?
docker run --device=/dev/sgx_enclave -e SANDSTONE_LOOP=false -e SANDSTONE_BIN="/ive/content/ive-watch" -e SANDSTONE_ARGS="-name=ive-test -display=both -- 27:00.0=.,.,x8 27:00.1=.,.,x8" -v /pub/logs:/var/local/logs --privileged --entrypoint /tests/scripts/run-specific.sh gsc-ive-pcie-test:latest
3. What is the workload that you're testing, what is the output?
I created the workload myself; it shows some errors that can be fixed.
4. How do you run the Docker image on the SGX-enabled node and the non-SGX-enabled node? Please paste the logs.
I used the following command to run the container in both nodes:
docker run --device=/dev/sgx_enclave -e SANDSTONE_LOOP=false -e SANDSTONE_BIN="/ive/content/ive-watch" -e SANDSTONE_ARGS="-name=ive-test -display=both -- 27:00.0=.,.,x8 27:00.1=.,.,x8" -v /pub/logs:/var/local/logs --privileged --entrypoint /tests/scripts/run-specific.sh gsc-ive-pcie-test:latest

This is the log in both nodes:

Thu Mar 2 21:21:39 UTC 2023
test-start-indicator
[IVE-WATCH]: Logging "ive-test" messaging into "/ive/content/logs/ive-test_20230302-212140.log" using "both" display method
[IVE-WATCH]: WARNING! TIME parameter undefined or set to zero! Setting runtime to one year and disabling time control functions.
[IVE-WATCH]: WARNING! KEEPALIVE parameter undefined or set to zero! Setting periodic alive messaging every thirty minutes.
[IVE-WATCH]: "ive-test" executing under "container" environment
[IVE-WATCH]: The following environment variables are available:
[IVE-WATCH]: NAME = ive-test
[IVE-WATCH]: TIME = 31536000
[IVE-WATCH]: OUT_FOLDER = /ive/content/output
[IVE-WATCH]: RUN_ENV = container
[IVE-WATCH]: LOG_FOLDER = /ive/content/logs
[IVE-WATCH]: OUT_FOLDER = /ive/content/output
[IVE-WATCH]: Starting source/content-run.sh 27:00.0=.,.,x8 27:00.1=.,.,x8

not ok - Device 01:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 02:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 03:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 04:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 15:00.0 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 15:00.1 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 15:00.2 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 16:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 16:00.1 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 2d:00.0 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 2d:00.1 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 2d:00.2 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 2e:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 2e:00.1 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 45:00.0 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 45:00.1 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 45:00.2 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 75:03.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 75:03.1 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 75:03.2 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 75:04.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 76:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 78:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 7a:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 7c:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device 7e:00.0 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 7e:00.1 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device 7e:00.2 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device c5:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device c5:00.1 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device db:00.0 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device db:00.1 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device db:00.2 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device f2:03.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f2:03.1 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f2:03.2 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f2:04.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f3:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f5:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f7:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device f9:00.0 has errors: CESta[AdvNonFatalErr+], DevSta[CorrErr+], DevSta[UnsupReq+], UESta[UnsupReq+],
not ok - Device fe:00.0 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device fe:00.1 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],
not ok - Device fe:00.2 has errors: CESta[AdvNonFatalErr+], UESta[UnsupReq+],

not ok - 44 errors found, 298 devices checked

Dmitrii Kuvaiskii

Mar 6, 2023, 12:08:03 PM
to Martinez Gonzalez, Joshua, us...@gramineproject.io
You are running this:
docker run ... --entrypoint /tests/scripts/run-specific.sh

The `--entrypoint` argument to `docker run` does *not* execute Gramine's
entrypoint but instead executes your `run-specific.sh` Bash script.

That's why you see the same results on the SGX node and on the non-SGX
node -- you're never running Gramine at all. You're overriding Gramine's
entrypoint with your own.
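
To make the difference concrete (hypothetical image name, just a sketch):

# runs the Gramine entrypoint baked in by `gsc build`, i.e. the workload
# runs inside an SGX enclave:
docker run --device=/dev/sgx_enclave gsc-my-image:latest

# overrides the Gramine entrypoint, i.e. the script runs directly on the
# host kernel and SGX is never involved:
docker run --device=/dev/sgx_enclave --entrypoint /tests/scripts/run-specific.sh gsc-my-image:latest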


Martinez Gonzalez, Joshua

Mar 6, 2023, 12:45:46 PM
to Dmitrii Kuvaiskii, us...@gramineproject.io
Then what do I need to do in order to run it with Gramine? That script is the one that runs my code.

Dmitrii Kuvaiskii

Mar 6, 2023, 12:56:07 PM
to Martinez Gonzalez, Joshua, us...@gramineproject.io
You must:
1. Specify this script as the ENTRYPOINT of your original Docker image (see the sketch below),
2. Build as you did before with `gsc build` + `gsc sign-image`,
3. Finally, run with `docker run ...` but do NOT add `--entrypoint ...`
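
As a rough sketch (the script path is taken from your command above; the
base image and COPY line are just placeholders):

# Dockerfile of the original (non-graminized) image
FROM centos:8
COPY . /tests
ENTRYPOINT ["/tests/scripts/run-specific.sh"]

# then, as before:
./gsc build -c config.yaml ive-pcie-test:latest test/generic.manifest
./gsc sign-image ive-pcie-test:latest enclave-key.pem
docker run --device=/dev/sgx_enclave gsc-ive-pcie-test:latest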


Martinez Gonzalez, Joshua

Mar 6, 2023, 1:21:05 PM
to Dmitrii Kuvaiskii, us...@gramineproject.io
I removed the entrypoint and now I get this:

docker run --device=/dev/sgx_enclave -v /var/run/aesmd/aesm.socket:/var/run/aesmd/aesm.socket -e SANDSTONE_LOOP=false -e SANDSTONE_BIN="/ive/content/ive-watch" -e SANDSTONE_ARGS="-name=ive-test -display=both -- 27:00.0=.,.,x8 27:00.1=.,.,x8" -it --privileged gsc-ive-pcie-test:latest

Gramine is starting. Parsing TOML manifest file, this may take some time...
error: Detected deprecated syntax: 'sgx.thread_num'. Consider switching to 'sgx.max_threads'.
-----------------------------------------------------------------------------------------------------------------------
Gramine detected the following insecure configurations:

- loader.insecure__use_cmdline_argv = true (forwarding command-line args from untrusted host to the app)

Gramine will continue application execution, but this configuration must not be used in production!
-----------------------------------------------------------------------------------------------------------------------

error: Detected deprecated syntax: 'sgx.thread_num'. Consider switching to 'sgx.max_threads'.
/dev/mapper: mkdir failed: Permission denied
Failure to communicate with kernel device-mapper driver.
Check that device-mapper is available in the kernel.
Incompatible libdevmapper 1.02.175-RHEL8 (2021-01-28) and kernel driver (unknown version).
Command failed.
error: Detected deprecated syntax: 'sgx.thread_num'. Consider switching to 'sgx.max_threads'.
mkdir: cannot create directory '/sys/fs/cgroup': No such file or directory
error: Detected deprecated syntax: 'sgx.thread_num'. Consider switching to 'sgx.max_threads'.
error: Detected deprecated syntax: 'sgx.thread_num'. Consider switching to 'sgx.max_threads'.
mount: /sys/fs/cgroup: mount(2) system call failed: Function not implemented.
Could not make a tmpfs mount. Did you use --privileged?

Dmitrii Kuvaiskii

Mar 7, 2023, 4:20:59 AM
to Martinez Gonzalez, Joshua, us...@gramineproject.io
You seem to be executing some very low-level (close-to-hardware)
workload in Gramine. This is largely unsupported -- Gramine is NOT
suited for running workloads that require close interaction with
hardware devices.

In particular:
- "/dev/mapper: mkdir failed" -- Gramine doesn't support random `/dev/` devices
- "Incompatible libdevmapper 1.02.175-RHEL8 (2021-01-28) and kernel
driver (unknown version)" -- the software you're using seems to
require some very specific Linux kernel versions (and obviously can't
recognize Gramine)
- "mkdir: cannot create directory '/sys/fs/cgroup'" -- sysfs is only
partially supported in Gramine; cgroups are not supported at all
- "Could not make a tmpfs mount" -- dynamic mount operation is not
supported in Gramine

Why are you trying to run this workload in Gramine? This looks like a
doomed attempt.
