Die4Ever2005
Sep 5, 2025, 1:50:28 AM
to Chrome Built-in AI Early Preview Program Discussions, Bobo Zhou, Thomas Steiner, Die4Ever2005, Clark Duvall
> We use the highest quality model that the device is capable of running.
> This translates to devices with High/Very High performance class as
> reported by chrome://on-device-internals using Highest Quality, and
> devices with Low/Medium using Fastest Inference.
My laptop reports "Very Low", and I cannot use multimodal on it. However, chrome://on-device-internals/ lets me run the Highest Quality model and even attach an image and run multimodal prompts there, so I know the hardware is capable, but I cannot do this from my own JavaScript.
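For reference, the check that fails on my laptop looks roughly like this. This is a minimal sketch assuming the experimental Prompt API's `LanguageModel` global and its `availability()` method as described in the explainer; names and availability values are subject to change while the API is in preview:

```javascript
// Sketch: checking whether image (multimodal) input is available via the
// experimental Prompt API. `LanguageModel` only exists in Chrome builds
// with the built-in AI trial/flags enabled, so guard for it first.
async function checkMultimodal() {
  if (typeof LanguageModel === "undefined") {
    console.log("Prompt API not available in this environment");
    return;
  }
  // availability() is expected to report a string such as "available",
  // "downloadable", "downloading", or "unavailable".
  const status = await LanguageModel.availability({
    expectedInputs: [{ type: "image" }],
  });
  console.log(`image input availability: ${status}`);
}

checkMultimodal();
```

On a device classed Very Low, the expectation from the quoted reply is that the image-input capability comes back unavailable even though on-device-internals can run it.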
This is pretty punishing when I'm trying to develop on the go; I cannot even test my code.
I've noticed the JSON output is FAR less reliable on my laptop than on my desktop. I had to make my JSON prompts auto-retry just for JSON.parse to succeed, but I don't think I've ever seen it fail on my desktop. I imagine failing and retrying is slower than running the higher quality model in the first place, not to mention the garbage data it outputs, like JSON strings starting with symbols instead of the desired text.
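The auto-retry I ended up with is basically this sketch. Here `promptFn` is a stand-in for whatever produces the model's text output (e.g. a wrapper around a Prompt API session call); it's an assumption for illustration, not part of any API:

```javascript
// Sketch: retry a model call until its output parses as JSON.
// `promptFn` is any async function returning a string, e.g. a wrapper
// around session.prompt() from the Prompt API.
async function promptJson(promptFn, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await promptFn();
    try {
      return JSON.parse(raw);
    } catch (err) {
      lastError = err; // e.g. output started with stray symbols
    }
  }
  throw lastError;
}

// Usage with a mock that fails once, then succeeds:
let calls = 0;
const mockPrompt = async () =>
  ++calls === 1 ? "```json garbage" : '{"ok": true}';

promptJson(mockPrompt).then((result) => {
  console.log(JSON.stringify(result)); // {"ok":true}
});
```

Each retry re-runs the whole prompt, which is why I suspect the Fastest Inference model plus retries ends up slower overall than one pass of the Highest Quality model.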