Add a manual max_tokens setting
planned
bayunaiyin
Currently, most large models let you set max_tokens on the output, so the user controls the maximum amount of content the model generates.
I'd like this feature added to the advanced settings, at the same level as the context message limit and rigor & imagination (temperature).
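For context, this is what the requested parameter looks like in an OpenAI-compatible chat request (a sketch only; the model name and values are illustrative, not the app's defaults):

```ts
// Illustrative request body for an OpenAI-compatible API.
// max_tokens caps how many tokens the model may generate in its reply.
const requestBody = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize this article." }],
  temperature: 0.7, // the "rigor & imagination" setting
  max_tokens: 512,  // hard cap on generated output tokens
};
```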
Gaoyang Li
Agreed. Maybe the devs could copy the feature from AnythingLLM...?
Yukia
agree, pls add max_tokens or max_completion_tokens for max content output 🙏

Yukia
it can be auto (undefined) or set to a number

xianz
planned
Future versions will consider adding more custom parameters.
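To make the "auto (undefined) or a number" idea above concrete, here is a minimal sketch assuming an OpenAI-compatible endpoint. The ChatSettings shape, field names, and URL are illustrative assumptions, not the app's actual code:

```ts
// Sketch: an optional max_tokens setting passed through to an
// OpenAI-compatible chat completions request. When the user leaves it
// unset (undefined), the field is omitted and the provider's default
// applies — the "auto" behavior discussed above.
interface ChatSettings {
  temperature: number;
  maxTokens?: number; // undefined = auto: let the provider decide
}

async function chat(prompt: string, settings: ChatSettings): Promise<string> {
  const body: Record<string, unknown> = {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
    temperature: settings.temperature,
  };
  // Only include max_tokens when the user actually set a number.
  if (settings.maxTokens !== undefined) {
    body.max_tokens = settings.maxTokens;
  }
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```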