Add a manual max_tokens setting
planned
bayunaiyin
Most large language models currently support a max_tokens parameter, which lets the user cap how much content the model outputs.
I'd like this option added to the advanced settings, at the same level as the context message limit and the rigor/imagination (temperature) setting. A typical request is sketched below.
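For reference, here is a minimal sketch of how max_tokens is usually passed alongside temperature in an OpenAI-compatible chat completion request. The model name and prompt are placeholder assumptions; this illustrates the parameter itself, not this app's implementation.

```python
# Minimal sketch using the OpenAI Python SDK against an OpenAI-compatible API.
# Model name and prompt are placeholders, not this app's actual values.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the history of the printing press."}],
    temperature=0.7,      # the "rigor and imagination" setting mentioned above
    max_tokens=256,       # caps the length of the model's reply
)

print(response.choices[0].message.content)
```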
xianz
planned
Future versions will consider adding more custom parameters.