Access to LanguageModel bare minimum settings


Mihail-Andrei Tugui

Oct 12, 2025, 12:26:52 PM
to Chrome Built-in AI Early Preview Program Discussions
Hi everyone.

I tried running a demo on: Version 143.0.7467.0 (Official Build) canary (64-bit)

In chrome://on-device-internals/ I get: Foundational model state: Not Eligible

And LanguageModel.availability() returns 'unavailable'.

I enabled:
- chrome://flags/#optimization-guide-on-device-model
- chrome://flags/#prompt-api-for-gemini-nano-multimodal-input

I tried again after also enabling:
- chrome://flags/#prompt-api-for-gemini-nano

Why is it not eligible? On regular Chrome (141.0.7390.77) it works without an issue. Do I need to be signed in?
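For context, the check I'm running boils down to something like this (a minimal sketch assuming the documented Prompt API shape; the helper name and the injected parameter are illustrative, not part of the API):

```javascript
// Minimal availability check for the Prompt API. The `LanguageModel`
// object is passed in so the helper also behaves sensibly where the
// API isn't exposed; in a real page you'd pass the global.
async function checkAvailability(LanguageModel) {
  if (!LanguageModel) return 'unsupported'; // API not exposed at all
  // Resolves to one of: 'unavailable', 'downloadable',
  // 'downloading', 'available'
  return LanguageModel.availability();
}
```

In the page itself this would be called as `await checkAvailability(globalThis.LanguageModel)`, and on Canary it resolves to 'unavailable'.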

Thanks in advance.
Andrei

Mihail-Andrei Tugui

Oct 12, 2025, 12:30:46 PM
to Chrome Built-in AI Early Preview Program Discussions, Mihail-Andrei Tugui
I'm essentially trying to see what the minimum setup is for running LanguageModel (in terms of flags & code) so I can provide these steps as instructions in my repo.

Links: live demo and its repo.
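The minimal code path I have in mind looks roughly like this (a sketch based on the documented Prompt API flow; `runPrompt` and the injected `LanguageModel` parameter are mine, for illustration):

```javascript
// Minimal Prompt API flow: check availability, create a session,
// prompt, clean up. `LanguageModel` is injected so the sketch can be
// stubbed; in a page you'd pass the global `LanguageModel`.
async function runPrompt(LanguageModel, text) {
  const availability = await LanguageModel.availability();
  if (availability === 'unavailable') {
    throw new Error('This device is not eligible for the on-device model.');
  }
  // create() triggers the model download when availability is
  // 'downloadable'.
  const session = await LanguageModel.create();
  try {
    return await session.prompt(text);
  } finally {
    session.destroy(); // release the model's resources
  }
}
```

The flags question is then: which of the flags above are actually required for this flow to succeed?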

Thomas Steiner

Oct 13, 2025, 3:52:16 AM
to Mihail-Andrei Tugui, Chrome Built-in AI Early Preview Program Discussions
Hi Mihail-Andrei,

Please see the hardware requirements in this expandable. If your own computer doesn't meet them, there may be an alternative: you could work with a streamed browser, such as those BrowserStack offers. I just went to https://googlechrome.github.io/samples/downloading-built-in-models/index.html in a streamed session, downloaded the model, and could then interact with it.

[Attachment: Screenshot 2025-10-09 at 10.55.33.png]

[Attachment: Screenshot 2025-10-09 at 11.04.27.png]

Since all APIs are now either shipped or in an origin trial, this should work as long as your app is web-exposed. In the streamed browser you can't set `chrome://flags` or access local apps directly, but you can expose local apps through a service like ngrok.

Hope this workaround unlocks you and lets you participate!

Cheers,
Tom

--
You received this message because you are subscribed to the Google Groups "Chrome Built-in AI Early Preview Program Discussions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to chrome-ai-dev-previe...@chromium.org.
To view this discussion visit https://groups.google.com/a/chromium.org/d/msgid/chrome-ai-dev-preview-discuss/ca6a96db-b491-45cb-9831-71d846486fcfn%40chromium.org.


--
Thomas Steiner, PhD—Developer Relations Engineer (blog.tomayac.com · toot.cafe/@tomayac)

Google Spain, S.L.U.
Torre Picasso, Pl. Pablo Ruiz Picasso, 1, Tetuán, 28020 Madrid, Spain

CIF: B63272603
Inscrita en el Registro Mercantil de Madrid, sección 8, Hoja M-435397 Tomo 24227 Folio 25

----- BEGIN PGP SIGNATURE -----
Version: GnuPG v2.4.8 (GNU/Linux)

iFy0uwAntT0bE3xtRa5AfeCheCkthAtTh3reSabiGbl0ck
0fjumBl3DCharaCTersAttH3b0ttom.xKcd.cOm/1181.
----- END PGP SIGNATURE -----

Mihail-Andrei Tugui

Oct 13, 2025, 12:12:15 PM
to Chrome Built-in AI Early Preview Program Discussions, Thomas Steiner, Chrome Built-in AI Early Preview Program Discussions, Mihail-Andrei Tugui
Hello Thomas,

I appreciate your answer; however, I don't think I made it clear that regular Chrome works while Chrome Canary doesn't (on the same device).

This is strange. On Canary, I get "Uncaught NotAllowedError: The device is not eligible for running on-device model." On my main Chrome browser, all works fine.

The flags I enabled:
- chrome://flags/#optimization-guide-on-device-model
- chrome://flags/#prompt-api-for-gemini-nano
- chrome://flags/#prompt-api-for-gemini-nano-multimodal-input
- the flags for Summarization, Proofreader, Writer & Rewriter (although these are optional)

My device has 8GB of RAM.

Any idea what could be preventing Canary from running the API?

Much appreciated,
Andrei

Mihail-Andrei Tugui

Oct 13, 2025, 12:26:03 PM
to Chrome Built-in AI Early Preview Program Discussions, Mihail-Andrei Tugui, Thomas Steiner, Chrome Built-in AI Early Preview Program Discussions
Apologies for the second reply; I should've included these screenshots earlier, in case they're of any help.

[Attachments: 1gDCdXkku6.png, chrome_vuEMjp3SYa.png]

Thomas Steiner

Oct 14, 2025, 10:34:50 AM
to Mihail-Andrei Tugui, Chrome Built-in AI Early Preview Program Discussions, Thomas Steiner
Hi Mihail-Andrei again,

Could I ask you to run through the steps in this troubleshooting guide on the browser where it isn't working? Could it maybe be disk space?

Cheers,
Tom 

Mihail-Andrei Tugui

Oct 14, 2025, 1:43:12 PM
to Chrome Built-in AI Early Preview Program Discussions, Thomas Steiner, Chrome Built-in AI Early Preview Program Discussions, Mihail-Andrei Tugui
Hi Thomas,

I glanced over your response earlier, but because the previous screenshots showed:
  • device capable - false
  • disk space available - true
I assumed it couldn't be disk space and decided to come back later and work through the steps as a last resort. I hadn't noticed that you had linked to a document that isn't part of the API documentation; had I realized, I would've checked it and followed the steps right away. Instead, I came back thinking it best to reinstall Canary and save us both time.

Sure enough, a simple reinstall did the trick, and device capable now shows 'true'. The availability method also returns 'downloadable'. It's worth noting that after relaunching, the status in chrome://on-device-internals showed "Foundational model state: Install Not Complete", so I went into chrome://components/ and carried out the last step of updating:
Optimization Guide On Device Model - Version: 2025.8.8.1141

Great, I got it working, but the sad part is I lost the chance to find out what caused the previous behavior. It's not a nice feeling, and I certainly won't rush things next time.
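For anyone hitting the same 'downloadable' state: the download can be triggered and observed from code along these lines (a sketch assuming the documented Prompt API monitor shape; `createWithProgress`, `onProgress`, and the injected parameter are illustrative names of mine):

```javascript
// When availability() is 'downloadable', create() kicks off the model
// download; the monitor callback reports progress as it happens.
// `LanguageModel` is injected so the sketch can be stubbed; in a page
// you'd pass the global `LanguageModel`.
async function createWithProgress(LanguageModel, onProgress) {
  return LanguageModel.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        onProgress(e.loaded); // fraction downloaded, from 0 to 1
      });
    },
  });
}
```

Once the download completes, chrome://on-device-internals should report the foundational model as installed and availability() should return 'available'.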

All the best,
Andrei

Thomas Steiner

Oct 14, 2025, 1:46:48 PM
to Mihail-Andrei Tugui, Chrome Built-in AI Early Preview Program Discussions, Thomas Steiner
Hi again,

Glad to hear it worked in the end, though I share your frustration that you couldn't really pinpoint the culprit. We've definitely heard from other developers as well that there's room for improvement in making sure the model download works and, should it fail, in making failure cases debuggable for developers. Improving our offering here is 100% on the team's radar. Thanks for your patience while we get there.

Cheers,
Tom