e2e tests


Nikita Malyschkin

Nov 2, 2025, 6:10:26 AM
to Chrome Built-in AI Early Preview Program Discussions

Hello everyone

So the submission period is over and I've started implementing e2e tests. (I know it's late, sue me.)
I got Playwright running my local Chrome with all feature flags set properly.

The first issue I had was that with every new context the models were not present, and I didn't want to download them on every test run. So I created a persistent profile directory, and after an initial download it started working:
https://github.com/nmalyschkin/self-reflection/blob/post-nov/e2e/context.ts
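Roughly, the setup boils down to this (a minimal sketch; the profile path and the feature flag name are placeholders for whatever your local Chrome setup needs):

```ts
// Sketch of a persistent context that keeps downloaded on-device models between runs.
// The profile path and the feature flag below are illustrative placeholders.
import { chromium, type BrowserContext } from '@playwright/test';
import path from 'node:path';

const USER_DATA_DIR = path.resolve('.chrome-ai-profile');

export async function launchAiContext(): Promise<BrowserContext> {
  return chromium.launchPersistentContext(USER_DATA_DIR, {
    channel: 'chrome',  // use the locally installed Chrome, not Playwright's bundled Chromium
    headless: false,
    args: [
      // whatever built-in AI flags your setup already uses, e.g.:
      '--enable-features=OptimizationGuideOnDeviceModel',
    ],
  });
}
```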
The problem with this is that everything else gets persisted as well (localStorage, IndexedDB, workers, etc.). I can manually clean up after every test, but it feels kind of annoying.
Has anyone had a similar experience, or maybe a more elegant solution?
Maybe we could get a Chrome flag that points to an existing model, so newly created profiles could symlink to it (food for thought).

Secondly
When running the tests, I noticed that availability changes without any download being started (create is never executed).
Running this test: https://github.com/nmalyschkin/self-reflection/blob/post-nov/e2e/basicReflection.test.ts
I get this output:
(beforeAll)
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'available', 'available', 'available' ]
(first test)
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'downloadable', 'downloadable', 'downloadable' ]
[ 'available', 'available', 'available' ] 
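The arrays are just the results of polling availability on a few of the built-in APIs, roughly like this (a sketch; which three APIs are checked and how often is illustrative, see the linked test for the real thing):

```ts
// Rough shape of the availability polling (the three APIs shown are illustrative).
import type { Page } from '@playwright/test';

async function pollAvailability(page: Page): Promise<string[]> {
  return page.evaluate(() =>
    Promise.all([
      // These globals are only present in Chrome with the built-in AI flags enabled.
      (globalThis as any).LanguageModel?.availability(),
      (globalThis as any).Summarizer?.availability(),
      (globalThis as any).LanguageDetector?.availability(),
    ])
  );
}
// e.g. console.log(await pollAvailability(page)); // [ 'downloadable', 'downloadable', 'downloadable' ]
```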

I thought it might be a testing fluke, but even after hard-restarting Chrome I get the same behaviour. This does not seem like a big issue, but it breaks the UX.
Any ideas on that? Is this a "me" problem or has this been observed before?

Thank you :)

Ezequiel Cicala

Nov 2, 2025, 5:38:15 PM
to Chrome Built-in AI Early Preview Program Discussions, Nikita Malyschkin
Can you make a script to clear all the files you want reset after running the tests?

Nikita Malyschkin

Nov 3, 2025, 12:44:27 AM
to Chrome Built-in AI Early Preview Program Discussions, Ezequiel Cicala, Nikita Malyschkin
@Ezequiel: Yes, I have a script that cleans up the context after each use for the basic things like localStorage and IndexedDB.
But in my case I'm also using PWA features, running service workers, and caching resources for offline availability.
Although I can't say for sure that these will cause testing issues, I would be happier if I could just purge the whole context/profile and get a fresh one with only the models already downloaded.
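For reference, the cleanup is roughly this shape today (a sketch; the origin and the list of storage types wiped are app-specific assumptions):

```ts
// Sketch of per-test cleanup that keeps the persistent profile (and its models) intact.
import type { BrowserContext, Page } from '@playwright/test';

export async function wipeOriginData(context: BrowserContext, page: Page, origin: string) {
  const cdp = await context.newCDPSession(page);
  // Clears localStorage, IndexedDB, service workers, and Cache Storage for one origin,
  // without touching the profile-level model download.
  await cdp.send('Storage.clearDataForOrigin', {
    origin,
    storageTypes: 'local_storage,indexeddb,service_workers,cache_storage',
  });
  await context.clearCookies();
}
```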

Ezequiel Cicala

Nov 3, 2025, 6:11:35 AM
to Nikita Malyschkin, Chrome Built-in AI Early Preview Program Discussions

Maybe you can create a new profile, download Gemini Nano and the language pairs, and save that profile. Then set up a copy of that profile for the tests and delete it after they have run.
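Something along these lines, maybe (a sketch; the directory names are placeholders):

```ts
// Sketch: seed a profile once with the models downloaded, then copy it for each run.
import { cpSync, rmSync } from 'node:fs';
import path from 'node:path';

const SEED_PROFILE = path.resolve('.chrome-profile-seed'); // has Gemini Nano + language packs
const WORK_PROFILE = path.resolve('.chrome-profile-run');  // disposable copy used by the tests

export function prepareProfile(): string {
  rmSync(WORK_PROFILE, { recursive: true, force: true });
  cpSync(SEED_PROFILE, WORK_PROFILE, { recursive: true });
  return WORK_PROFILE; // pass this to launchPersistentContext
}

export function disposeProfile(): void {
  rmSync(WORK_PROFILE, { recursive: true, force: true });
}
```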

On Nov 3, 2025 at 02:44, Nikita Malyschkin wrote: