This is probably a hard-coded limit on the maximum number of tokens the model will accept. A token loosely corresponds to a word, but not always; it depends on the tokenization strategy used internally.
They might also cap the number of vectors produced after embedding these tokens, since that is typically the limiting factor for the context window an LLM can process.
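Since tokens don't map one-to-one to words, a toy sketch can show how a subword tokenizer may split a single word into several tokens. The tiny vocabulary and greedy longest-match scheme below are illustrative assumptions only, not the actual tokenizer the model uses (real models rely on trained vocabularies such as BPE or SentencePiece):

```javascript
// Toy greedy longest-match subword tokenizer -- purely illustrative.
// The vocabulary here is hypothetical; real subword vocabularies are
// learned from data and contain tens of thousands of pieces.
const vocab = new Set(["un", "believ", "able", "token", "a", "word"]);

function subwordTokenize(word) {
  const tokens = [];
  let i = 0;
  while (i < word.length) {
    // Greedily take the longest vocabulary piece starting at position i.
    let match = null;
    for (let end = word.length; end > i; end--) {
      const piece = word.slice(i, end);
      if (vocab.has(piece)) { match = piece; break; }
    }
    if (match === null) {
      tokens.push(word[i]); // fall back to a single character
      i += 1;
    } else {
      tokens.push(match);
      i += match.length;
    }
  }
  return tokens;
}

// One word can consume several tokens of the context budget.
console.log(subwordTokenize("unbelievable")); // ["un", "believ", "able"]
```

This is why a prompt's word count only approximates its token count: a single rare or long word can cost multiple tokens against the limit.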
--
You received this message because you are subscribed to the Google Groups "Chrome Built-in AI Early Preview Program Discussions" group.
To unsubscribe from this group and stop receiving emails from it, send an email to chrome-ai-dev-previe...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/chrome-ai-dev-preview-discuss/CAK2avF-Qgma9HQdEziGHpZx21dK7b6QYnuGtFutAUpjkJgR3rg%40mail.gmail.com.