[PATCH] kas: Add support for AWS SSO login credentials


Łukasz Płachno

Jan 26, 2026, 8:32:29 AM
to kas-...@googlegroups.com, Łukasz Płachno
Users might want to run aws sso login --profile prior to running kas.
This change copies the config file and the .aws/sso/cache directory to
allow running Yocto within the context of an AWS SSO session.
---
kas/libcmds.py | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/kas/libcmds.py b/kas/libcmds.py
index bf02fce..e7d1bc9 100644
--- a/kas/libcmds.py
+++ b/kas/libcmds.py
@@ -273,6 +273,7 @@ class SetupHome(Command):
aws_dir = self.tmpdirname + "/.aws"
conf_file = aws_dir + "/config"
shared_creds_file = aws_dir + "/credentials"
+ sso_cache_dir = aws_dir + "/sso/cache"
os.makedirs(aws_dir)
aws_conf_file = self._path_from_env('AWS_CONFIG_FILE')
aws_shared_creds_file = \
@@ -297,6 +298,14 @@ class SetupHome(Command):
config.write(fds)
shutil.copy(aws_web_identity_token_file, webid_token_file)

+ # SSO workflow
+ if os.environ.get('AWS_CONFIG_FILE'):
+ aws_cache_dir = os.path.join(Path.home(), ".aws/sso/cache")
+
+ if os.path.isdir(aws_cache_dir):
+ shutil.copy(aws_conf_file, conf_file)
+ shutil.copytree(aws_cache_dir, sso_cache_dir)
+
@staticmethod
def _setup_gitlab_ci_ssh_rewrite(config):
ci_host = os.environ.get('CI_SERVER_HOST', None)
--
2.47.1

Jan Kiszka

Jan 27, 2026, 1:05:00 AM
to Łukasz Płachno, kas-...@googlegroups.com
On 26.01.26 14:31, 'Łukasz Płachno' via kas-devel wrote:
> Users might want to run aws sso login --profile prior to running kas.
> This change copies the config file and the .aws/sso/cache directory to
> allow running Yocto within the context of an AWS SSO session.

Thanks for the patch. Please see CONTRIBUTING.md regarding the need for
a Signed-off-by here.
This may expose more than just the credentials that are configured in
AWS_CONFIG_FILE, doesn't it? Can we confine it to the accounts specified
in the config file?

Also, is that cache path officially documented in aws-cli, or are we
forwarding an internal artifact here that may or may not keep working in
the future?

> @staticmethod
> def _setup_gitlab_ci_ssh_rewrite(config):
> ci_host = os.environ.get('CI_SERVER_HOST', None)

Jan

--
Siemens AG, Foundational Technologies
Linux Expert Center

Łukasz Płachno

Jan 27, 2026, 5:43:15 AM
to Jan Kiszka, kas-...@googlegroups.com
This will expose all of the cached SSO logins. However, according to the
AWS documentation the maximum timeout for an SSO login session is 12 hours,
and after aws sso logout the secret from the cache is no longer usable.
https://docs.aws.amazon.com/singlesignon/latest/userguide/howtosessionduration.html

There is no entry in the cache files providing a link to the profile name
or sso_role_name that would allow easily determining which cache file
belongs to which profile.

Maybe the way to handle this is to derive the path to sso/cache from a
variable instead of the home directory: AWS_CONFIG_FILE, or an AWS_DIR
similar to what the user provides to the kas-container command.
If the user wants to keep separate config files for SSO roles, they can
easily store the cache files in the same directory by calling:
mkdir -p ~/test-sso/.aws
cp ~/.aws/config ~/test-sso/.aws/config
HOME=~/test-sso aws sso login --profile <build-profile>
AWS_CONFIG_FILE=~/test-sso/.aws/config kas build

> Also, is that cache path officially documented in aws-cli, or are we
> forwarding an internal artifact here that may or may not keep working in
> the future?
The path is mentioned in the AWS documentation here:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html#cli-configure-sso-login

>
> > @staticmethod
> > def _setup_gitlab_ci_ssh_rewrite(config):
> > ci_host = os.environ.get('CI_SERVER_HOST', None)
>
> Jan
>
> --
> Siemens AG, Foundational Technologies
> Linux Expert Center
Lukasz

Jan Kiszka

Jan 28, 2026, 3:20:56 AM
to Łukasz Płachno, kas-...@googlegroups.com
This is what I found in the AWS SDK:

https://github.com/aws/aws-sdk-js/blob/9d3c66eca8c4416a9d347d0703f27b65775d65ef/lib/token/sso_token_provider.js#L104

I'm not sure yet if that code is also backing "aws sso", but even if
not, a lot of things are hard-coded in this variant, among them the
cache path.

IOW, if we expose the cache folder to the build container, active tokens
of all profiles are available to it, not only those that might be
configured in the AWS_CONFIG_FILE. It might be possible to correlate the
token file names with the configured profiles and only expose those, but
that correlation is not an official API. And it would also require
evaluating the AWS config file.
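
For illustration, such a correlation would have to look roughly like the
sketch below. It relies on the internal convention that the CLI/SDKs name
the cache files after a SHA-1 hash of the sso_start_url (legacy profiles)
or of the sso-session name; that convention is not a stable interface, and
guess_sso_cache_files is just a hypothetical helper, not existing kas code:

import configparser
import hashlib
import os

def guess_sso_cache_files(aws_config_file, cache_dir):
    # collect the hashed identifiers of everything referenced in the config
    config = configparser.ConfigParser()
    config.read(aws_config_file)
    wanted = set()
    for section in config.sections():
        for key in ('sso_start_url', 'sso_session'):
            value = config[section].get(key)
            if value:
                wanted.add(hashlib.sha1(value.encode('utf-8')).hexdigest()
                           + '.json')
    # keep only the cache files that match a configured profile/session
    return [os.path.join(cache_dir, name) for name in sorted(wanted)
            if os.path.exists(os.path.join(cache_dir, name))]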

I guess the most pragmatic approach here is to clearly document what kas
is doing, point out the implications, and suggest mitigations (log out of
all profiles, only log into the one the build env should see).

Łukasz Płachno

Jan 28, 2026, 6:57:07 PM
to kas-...@googlegroups.com, jan.k...@siemens.com, Łukasz Płachno
Users might want to run aws sso login --profile prior to running kas.
This change copies the config file and the .aws/sso/cache directory to
allow running Yocto within the context of an AWS SSO session.

Signed-off-by: Łukasz Płachno <lukasz....@verkada.com>
---
docs/userguide/credentials.rst | 9 ++++++++-
kas/libcmds.py | 16 ++++++++++++++++
2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/docs/userguide/credentials.rst b/docs/userguide/credentials.rst
index 94770be..8cf6aaa 100644
--- a/docs/userguide/credentials.rst
+++ b/docs/userguide/credentials.rst
@@ -23,11 +23,18 @@ into the isolated environment first.
AWS Configuration
-----------------

-For AWS, both conventional AWS config files as well as the environment
+For AWS, conventional AWS config files, AWS SSO, as well as the environment
variable controlled OAuth 2.0 workflow are supported. Note, that KAS
internally rewrites the ``AWS_*`` environment variables into a AWS
config file to also support older versions of bitbake.

+.. note::
+ When using AWS SSO credentials, the entire content of the ~/.aws/sso/cache
+ directory is copied into the kas workspace. This might expose all active
+ user sessions, including those not defined in the ``AWS_CONFIG_FILE``.
+ To mitigate security risks, log out of unnecessary profiles before
+ starting a build or use a separate system account to run the build.
+
Git Configuration
-----------------

diff --git a/kas/libcmds.py b/kas/libcmds.py
index bf02fce..4da1644 100644
--- a/kas/libcmds.py
+++ b/kas/libcmds.py
@@ -273,6 +273,7 @@ class SetupHome(Command):
aws_dir = self.tmpdirname + "/.aws"
conf_file = aws_dir + "/config"
shared_creds_file = aws_dir + "/credentials"
+ sso_cache_dir = aws_dir + "/sso/cache"
os.makedirs(aws_dir)
aws_conf_file = self._path_from_env('AWS_CONFIG_FILE')
aws_shared_creds_file = \
@@ -297,6 +298,21 @@ class SetupHome(Command):
config.write(fds)
shutil.copy(aws_web_identity_token_file, webid_token_file)

+ # SSO workflow:
+ # when using kas-container, ~/.aws is mounted at /var/kas/userdata/.aws,
+ # so within the container ~/.aws/sso/cache will not exist.
+ if aws_conf_file:
+ aws_cache_dir_conf = \
+ os.path.join(os.path.dirname(aws_conf_file), "sso/cache")
+ aws_cache_dir_home = os.path.join(Path.home(), ".aws/sso/cache")
+
+ if os.path.isdir(aws_cache_dir_conf):
+ shutil.copy(aws_conf_file, conf_file)
+ shutil.copytree(aws_cache_dir_conf, sso_cache_dir)
+ elif os.path.isdir(aws_cache_dir_home):
+ shutil.copy(aws_conf_file, conf_file)
+ shutil.copytree(aws_cache_dir_home, sso_cache_dir)
+
@staticmethod
def _setup_gitlab_ci_ssh_rewrite(config):
ci_host = os.environ.get('CI_SERVER_HOST', None)
--
2.47.1

Jan Kiszka

Jan 29, 2026, 10:38:10 AM
to Łukasz Płachno, kas-...@googlegroups.com
So, SSO will not work with kas-container? That would break our promise
to have both interfaces widely aligned. I suppose we need to expose the
cache to the container when AWS_CONFIG_FILE is set.

> + if aws_conf_file:
> + aws_cache_dir_conf = \
> + os.path.join(os.path.dirname(aws_conf_file), "sso/cache")

This assumes that either the config file is in $HOME/.aws or someone
copied it there (as we know aws-cli only operates against $HOME). This
is at least not documented, and I cannot assess if such a
login-then-copy workflow would be acceptable to many.

Can't we expose the cache as well via kas-container when AWS_CONFIG_FILE
is set?

> + aws_cache_dir_home = os.path.join(Path.home(), ".aws/sso/cache")
> +
> + if os.path.isdir(aws_cache_dir_conf):
> + shutil.copy(aws_conf_file, conf_file)
> + shutil.copytree(aws_cache_dir_conf, sso_cache_dir)
> + elif os.path.isdir(aws_cache_dir_home):
> + shutil.copy(aws_conf_file, conf_file)
> + shutil.copytree(aws_cache_dir_home, sso_cache_dir)
> +
> @staticmethod
> def _setup_gitlab_ci_ssh_rewrite(config):
> ci_host = os.environ.get('CI_SERVER_HOST', None)

Łukasz Płachno

Jan 29, 2026, 2:42:27 PM
to Jan Kiszka, kas-...@googlegroups.com
The comment was trying to justify the need for checking two locations for
the sso/cache directory. The location based on AWS_CONFIG_FILE is used
when building in kas-container.
sso/cache is already exposed, but not in the $HOME directory; it is
available in /var/kas/userdata/.aws/sso/cache.

> > + if aws_conf_file:
> > + aws_cache_dir_conf = \
> > + os.path.join(os.path.dirname(aws_conf_file), "sso/cache")
>
> This assumes that either the config file is in $HOME/.aws or someone
> copied it there (as we know aws-cli only operates against $HOME). This
> is at least not documented, and I cannot assess if such a
> login-then-copy workflow would be acceptable to many.
>
There is no need for the user to copy the ~/.aws/sso/cache content into
the directory with AWS_CONFIG_FILE. The content from $HOME/.aws/sso/cache
will be used when sso/cache does not exist in the directory with
AWS_CONFIG_FILE.

My idea is to look for the sso/cache in two directories (see the sketch
below):
- $AWS_CONFIG_FILE/../sso/cache (checked first)
- $HOME/.aws/sso/cache

In non-container builds the config is copied from $AWS_CONFIG_FILE and the
cache from ${HOME}/.aws/sso/cache, because $AWS_CONFIG_FILE/../sso/cache
does not exist.

In container builds the config is copied from
AWS_CONFIG_FILE=/var/kas/userdata/.aws/config (AWS_CONFIG_FILE is set by
the kas-container script) and the cache is copied from
/var/kas/userdata/.aws/sso/cache.
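
In code that lookup boils down to something like the following sketch
(reusing the patch's variable names; copy_sso_cache is only an
illustrative helper, not part of the patch):

import os
import shutil
from pathlib import Path

def copy_sso_cache(aws_conf_file, conf_file, sso_cache_dir):
    # first match wins: sso/cache next to AWS_CONFIG_FILE (kas-container
    # mount), then the conventional location in the user's home directory
    candidates = [
        os.path.join(os.path.dirname(aws_conf_file), "sso/cache"),
        os.path.join(Path.home(), ".aws/sso/cache"),
    ]
    cache_dir = next((d for d in candidates if os.path.isdir(d)), None)
    if cache_dir:
        shutil.copy(aws_conf_file, conf_file)
        shutil.copytree(cache_dir, sso_cache_dir)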

> Can't we expose the cache as well via kas-container when AWS_CONFIG_FILE
> is set?
>
sso/cache is already exposed when --aws-dir is set:
AWS_CONFIG_FILE is set to /var/kas/userdata/.aws/config, and
sso/cache is mounted at /var/kas/userdata/.aws/sso/cache.

> > + aws_cache_dir_home = os.path.join(Path.home(), ".aws/sso/cache")
> > +
> > + if os.path.isdir(aws_cache_dir_conf):
> > + shutil.copy(aws_conf_file, conf_file)
> > + shutil.copytree(aws_cache_dir_conf, sso_cache_dir)
> > + elif os.path.isdir(aws_cache_dir_home):
> > + shutil.copy(aws_conf_file, conf_file)
> > + shutil.copytree(aws_cache_dir_home, sso_cache_dir)
> > +
> > @staticmethod
> > def _setup_gitlab_ci_ssh_rewrite(config):
> > ci_host = os.environ.get('CI_SERVER_HOST', None)
>
Łukasz

Jan Kiszka

Feb 4, 2026, 6:19:35 AM
to Łukasz Płachno, kas-...@googlegroups.com
Sorry, forgot to answer here.
But that cache is empty on each kas-container start-up. You cannot do a
login outside of the container and then expect the SSO token(s) to be
available inside it. The only option is to chain an SSO login and a
bitbake command inside a single container run, but that is not really
practical.

Have a look at what kas-container maps so far.

Łukasz Płachno

11:08 AM
to Jan Kiszka, kas-...@googlegroups.com
Sorry for the late response, I had no access to a PC last week.
Within the kas-container shell ~/.aws/sso does not exist, but the whole
directory passed via the --aws-dir parameter is mounted as a read-only
volume at /var/kas/userdata/.aws/.
Running this command exposes the tokens to the container:
kas-container --aws-dir ~/.aws shell
With the changes from this patch, /var/kas/userdata/.aws/sso/cache is
copied to ~/.aws/sso/cache and effectively the SSO session is passed to
the container.

Łukasz