Problem feedback

Please report only one issue per post, and include basic information such as the software version, the device used, the model service provider, and so on. Keep the report as concise as possible. Check for similar feedback before posting; if a matching report already exists, comment or vote on it instead.
[Details included] After too many chat messages, older AI messages are left with stray ">" symbols
Device model: OPPO K13 Turbo Pro 5G
System version: ColorOS 15 PLE110_15.0.2.701(CN01B10P01), Android 15
Model providers: SiliconFlow, plus a custom empty provider
Models used:
■ SiliconFlow: deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-R1
■ EMPTY (custom provider): empty model
━━━━━━━━━━━━━━
Verification steps: after nearly 100 tests, this bug reproduces without fail, and it is tied to the maximum number of context messages (hereafter the "max count"). The timeline is as follows:
1. The user sends a message (message #0) with the max count set to 75, and the AI responds (Figure 1).
2. Message #75 gains a ">" at the start of a paragraph, which the Markdown renderer interprets as a blockquote, and its deep-thinking content is lost (red box in Figure 2).
3. When the AI starts generating the next message (Figure 3), message #75 returns to normal and the deep-thinking content is restored (green box in Figure 4).
Figure 5 shows that the bug also triggers when the max count equals 77. It does not trigger when the max count equals 76, because message #76 is a user message; user messages are evidently unaffected. The bug occurs with both reasoning and non-reasoning models, so it is not tied to a particular model. Normally the ">" that appears in a historical AI message disappears on its own once the AI responds, but occasionally something goes wrong and the stray ">" is saved permanently; the exact cause is unknown.
Impact: leftover ">" symbols hurt readability (Figure 6); the redundant ">" characters consume a large number of tokens (Figure 7); and the deep-thinking content lost in Figure 2 is gone permanently.
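To illustrate the rendering half of this bug, here is a minimal sketch of one hypothetical client-side mitigation (this is not the app's actual code, and the function name is invented): a line whose first non-space character is ">" is parsed by Markdown as a blockquote, so the client could escape that character in historical messages before rendering.

```kotlin
// Hypothetical client-side mitigation, for illustration only: escape a leading
// ">" on each line of a historical AI message before Markdown rendering, so a
// stray ">" left behind at the context boundary cannot render as a blockquote.
fun escapeLeadingBlockquotes(message: String): String =
    message.lines().joinToString("\n") { line ->
        if (line.trimStart().startsWith(">")) {
            // Preserve the indentation, then neutralize the ">" with a backslash.
            val indent = line.takeWhile { it == ' ' }
            indent + "\\" + line.trimStart()
        } else {
            line
        }
    }

fun main() {
    println(escapeLeadingBlockquotes("> This paragraph grew a stray blockquote marker."))
    // Prints: \> This paragraph grew a stray blockquote marker.
}
```

This would only mask the symptom, of course; since the report ties the stray ">" to the max-count boundary, the underlying fix presumably belongs in the context-trimming logic.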
The large model's response is displayed in the wrong format.
When the large model responds, it also displays its thinking process, and the response format is wrong: the model's structured output was presented to me as-is instead of being parsed. What I saw:

    {
      "thought": "The user asks whether the contextual information in the current conversation will be lost if the local file is moved (its path changes). We need to explain the file-upload mechanism: the file is uploaded to the server/cloud and the model reads that copy after the upload, rather than reading the file on the user's local hard drive in real time. As a result, moving, renaming, or even deleting the local file won't affect what the current conversation has already obtained.",
      "response": "Don't worry, nothing will be missing and nothing will be affected.\n\nThe reasons are as follows:\n\n1. The file is an 'upload', not a 'link':\nWhen you sent me the file, you actually copied it and uploaded the copy to this chat system's server. What I read is the copy on the server.\n\n2. I can't access your local hard drive:\nFor security and privacy reasons, I have no permission to go directly to the 'C:\Users\...' path on your computer to look for files. The path shown just now is merely 'attribute information' about the file, telling me what it was originally called and where it came from; I did not rely on that path to read the content.\n\nAn analogy:\nIt's like emailing a friend an attachment. Once the email is sent, no matter where you move that file on your computer, or even if you delete it, your friend can still open the attachment in the email.\n\nSummary:\nAs long as we are in this conversation window (session), the file's contents have already been 'remembered' by me. No matter what you do to that file locally, it won't affect our conversation."
    }
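What this report shows looks like the client rendering the model's structured output verbatim instead of unpacking it. As a hedged sketch only (the "thought" and "response" field names come from the post above; the detection logic and names here are assumptions, not the app's actual implementation), a client could split such a payload into a reasoning section and a visible reply:

```kotlin
import org.json.JSONObject
import org.json.JSONException

// Hypothetical sketch (not the app's actual code): if the model returns a JSON
// object with "thought" and "response" fields, as shown in the report above,
// unpack it instead of rendering the raw JSON to the user.
data class ParsedReply(val thought: String?, val response: String)

fun parseStructuredReply(raw: String): ParsedReply {
    val trimmed = raw.trim()
    if (trimmed.startsWith("{")) {
        try {
            val obj = JSONObject(trimmed)
            if (obj.has("response")) {
                // optString returns "" for a missing key; treat that as "no thought".
                val thought = obj.optString("thought").ifEmpty { null }
                return ParsedReply(thought, obj.getString("response"))
            }
        } catch (e: JSONException) {
            // Not valid JSON after all; fall through and show the raw text.
        }
    }
    return ParsedReply(thought = null, response = raw)
}
```

Falling back to the raw text keeps ordinary, non-JSON replies unaffected.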