Chatbox AI
Feature suggestions
Please describe only one feature per post and keep it as brief as possible. Look out for similar suggestions first, and comment or vote on them if they already exist.
OpenRouter Gemini 2.5 Flash Image not working
The image in the response is not displayed, even though tokens are consumed.
1 · 1
Add Thinking Budgets Parameter for Gemini
Gemini now supports controlling how much the model "thinks" via the thinkingBudget parameter. Setting it to 0 versus 24576 produces completely different behavior and serves different use cases, so please add support for it. OpenAI likewise supports a reasoning.effort parameter; I hope that functionality can be added soon as well.
2 · in progress · 3
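For reference, the requested controls map onto fields in the request body of Gemini's public generateContent REST API (`generationConfig.thinkingConfig.thinkingBudget`); OpenAI's analogue is `reasoning.effort`. A minimal sketch of what building such a payload could look like — field names reflect the Gemini REST API as documented at the time of writing, so verify against current docs before relying on them:

```python
def gemini_generate_payload(prompt: str, thinking_budget: int) -> dict:
    """Build a generateContent request body with a thinking budget.

    thinkingBudget=0 disables thinking on Gemini 2.5 Flash, while a
    large value such as 24576 allows extensive reasoning. Field names
    follow the public Gemini REST API; treat them as assumptions.
    """
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }

# Two payloads that should behave very differently:
fast = gemini_generate_payload("What is 2 + 2?", thinking_budget=0)
deep = gemini_generate_payload("Prove the claim rigorously.", thinking_budget=24576)
```

A per-chat toggle in Chatbox could simply switch between two such presets rather than exposing the raw number.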
Moonshot (Kimi) models
Kimi's updated K2 model is very good to use; please add presets for the Kimi models.
0 · 1
Ollama Turbo
Please help me access Ollama Turbo.
0 · 1
Custom toggle for the thinking chain
The base capability of many models is already good. Sometimes you want the model to answer quickly and avoid cases where a deep chain of thought overthinks the problem. Could a toggle be added so that users can decide for themselves whether deep thinking is needed to solve a given problem?
0 · 2
Allow multiple endpoints for the same provider type
Allow adding, for example, multiple endpoints/secrets for provider types like OpenAI and Azure OpenAI, since some models now fall under different endpoints or resource groups.
0 · 1
Add safety settings for Gemini: the ability to adjust block levels, or custom parameters
Gemini 2.5 Pro currently interrupts its responses far too easily. Please allow setting the block level directly to none (safety filters off), or support custom parameters/request bodies so it can be adjusted manually.
0 · 1
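The block level the post mentions corresponds to the safetySettings array of a Gemini generateContent request, which takes a threshold per harm category. A sketch of building that array, assuming the category and threshold names documented in the public Gemini API:

```python
# Harm categories that the Gemini API lets callers tune individually
# (names per the public API docs; verify before use).
GEMINI_HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def gemini_safety_settings(threshold: str = "BLOCK_NONE") -> list:
    """Build the safetySettings array for a generateContent request.

    BLOCK_NONE asks the API not to block on that category; other
    documented thresholds include BLOCK_ONLY_HIGH,
    BLOCK_MEDIUM_AND_ABOVE, and BLOCK_LOW_AND_ABOVE.
    """
    return [{"category": c, "threshold": threshold}
            for c in GEMINI_HARM_CATEGORIES]

settings = gemini_safety_settings()
```

A settings UI could expose one dropdown per category, or a single "filters off" switch that applies BLOCK_NONE to all four.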
No models available in Ollama
When I open Chatbox and click Settings → Ollama, there are no models available.
0 · 1
No way to set the batch size for embedding models
When using a Qwen embedding model there is nowhere to set the batch size, so requests always fail with "AI APICallError: <400> InternalError.Algo.InvalidParameter: Value error, batch size is invalid, it should not be larger than 10.: input.contents". From what I found online, the problem would be avoided if the batch size could be set manually.
0 · 1
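The error suggests the provider caps embedding requests at 10 inputs per call, so a client-side fix is to split the input list into chunks no larger than that limit before sending. A minimal sketch of the chunking step (the limit of 10 is taken from the error message above; the actual embedding call is out of scope here):

```python
from typing import Iterator, List

def batched(items: List[str], batch_size: int = 10) -> Iterator[List[str]]:
    """Yield successive slices of at most batch_size items.

    Each yielded slice can then be sent as one embedding request,
    keeping every call under the provider's per-request input limit.
    """
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# 25 texts split into batches of 10, 10, and 5:
texts = [f"doc {i}" for i in range(25)]
batches = list(batched(texts, batch_size=10))
```

Exposing batch_size as a per-model setting, as the post asks, would let users match whatever limit their provider enforces.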
Available settings
Please excuse any grammar mistakes, as I am not a native English speaker. Please add user-defined parameters for the HTTP body, headers, and query string, so that users can pass settings that certain models require. For example, "max_tokens" is not available for DeepSeek models; with this feature we could set it by adding a "max_tokens" field to the HTTP body.
0 · 1
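Mechanically, the request above amounts to layering user-supplied extras over the client's default request before it is sent. A sketch of that merge step — the function name, the example URL, and the parameter names extra_body/extra_headers/extra_query are all illustrative, not part of any existing Chatbox API:

```python
from urllib.parse import urlencode

def build_request(url: str, body: dict,
                  extra_body: dict = None,
                  extra_headers: dict = None,
                  extra_query: dict = None):
    """Merge user-defined extras into an outgoing chat request.

    Keys in extra_body override the defaults, so a user-supplied
    "max_tokens" lands in the final JSON payload.
    """
    merged_body = {**body, **(extra_body or {})}
    headers = {"Content-Type": "application/json", **(extra_headers or {})}
    if extra_query:
        url = f"{url}?{urlencode(extra_query)}"
    return url, headers, merged_body

url, headers, payload = build_request(
    "https://api.example.com/v1/chat/completions",  # placeholder endpoint
    {"model": "deepseek-chat", "messages": []},
    extra_body={"max_tokens": 1024},
)
```

Because the extras are merged last, they can also override defaults the client would otherwise hard-code.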