Hi Colin,
The inputQuota and inputUsage attributes of the session object should give you the context window size: inputQuota is the total window in tokens, and inputUsage is how many of those tokens the session has used so far.
> const m = await LanguageModel.create();
> await m.append('Hello! How are you?')
> m.inputUsage
< 11
> m.inputQuota
< 9216
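The difference between the two is the room you have left, and if I'm reading the explainer right, measureInputUsage() lets you check what a prompt would cost before actually sending it (a quick sketch in the same console session; the measureInputUsage() line is the part I'd double-check):
> m.inputQuota - m.inputUsage   // tokens still free in the context window
< 9205
> await m.measureInputUsage('A much longer follow-up prompt')   // token cost of a prompt, without sending it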
You can also access the session's topK and temperature sampling parameters in the same way.
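If you want the defaults and the allowed ranges up front, LanguageModel.params() should report them, and you can pass your own values at create time. Roughly like this; the topK: 3 / temperature: 0.7 values are only illustrative, so keep whatever you pick within the ranges params() reports on the device:
> await LanguageModel.params()   // { defaultTopK, maxTopK, defaultTemperature, maxTemperature }
> const m2 = await LanguageModel.create({ topK: 3, temperature: 0.7 });
> m2.topK
< 3
> m2.temperature
< 0.7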