[Details included] After too many chat messages, older AI messages are left with a stray ">" (greater-than) symbol
Device Model:
OPPO K13 Turbo Pro 5G
System version number:
ColorOS15 PLE110_15.0.2.701 (CN01B10P01)
Android 15
Model providers:
SiliconFlow, custom blank provider
Models used:
■ SiliconFlow:
- deepseek-ai/deepseek-v3
- deepseek-ai/deepseek-r1
■ EMPTY:
- empty model
━━━━━━━━━━━━━━
Verification steps:
After nearly 100 tests, this bug reproduces without fail, and it is tied to the maximum number of context messages (hereinafter the "maximum number"). Message positions below count backwards from the newest message: the user message just sent is No. 0, the AI reply being generated is No. -1, and older messages carry higher numbers. The timeline is as follows:
1. A user message is sent (message No. 0) with the "maximum number" set to 75, and the AI responds (Figure 1).
2. Message No. 75 gains a ">" symbol at the start of its paragraph, which the Markdown renderer interprets as a blockquote, and its deep-thinking content is lost (red box in Figure 2); see the rendering sketch after these steps.
3. The AI starts generating message No. -1 (Figure 3).
4. Message No. 75 returns to normal, and the deep-thinking content is restored as well (bright green box in Figure 4).
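For reference, the snippet below shows how a CommonMark-style renderer turns a stray leading ">" into a blockquote, which is exactly the symptom in step 2. This is a minimal sketch using the commonmark-java library as a stand-in; the app's actual Markdown pipeline is unknown to me.

```kotlin
import org.commonmark.parser.Parser
import org.commonmark.renderer.html.HtmlRenderer

fun main() {
    val parser = Parser.builder().build()
    val renderer = HtmlRenderer.builder().build()

    // With the stray marker, the paragraph is re-rendered
    // as a blockquote instead of normal reply text.
    val damaged = "> The assistant's original reply text"
    print(renderer.render(parser.parse(damaged)))
    // -> <blockquote><p>The assistant's original reply text</p></blockquote>

    // Without it, the same text renders as a plain paragraph.
    val normal = "The assistant's original reply text"
    print(renderer.render(parser.parse(normal)))
    // -> <p>The assistant's original reply text</p>
}
```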
Figure 5 shows the result with the "maximum number" set to 77: the bug triggers again, this time on message No. 77.
It does not trigger with the "maximum number" set to 76, because message No. 76 is a user message, which shows that user messages are not affected.
The bug occurs with both reasoning and non-reasoning models, which shows it is not tied to any particular model.
The ">" symbol that appears on a historical AI message normally disappears on its own once the AI finishes responding, but occasionally something goes wrong and the erroneous greater-than sign is saved permanently; the exact cause is unknown.
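If the residue really is just a single ">" prepended to the first line of the saved message, stripping it before the message is persisted would at least hide the symptom. This is only a sketch under that assumption; sanitizeAssistantMessage is a hypothetical hook, not the app's real API, and it cannot tell the bug's residue apart from a reply that legitimately begins with a quote.

```kotlin
// Hypothetical mitigation: drop a stray leading blockquote marker from an
// assistant message before it is saved. Assumes the residue is exactly one
// ">" (optionally followed by one space) at the start of the first line.
fun sanitizeAssistantMessage(content: String): String {
    val lines = content.lines()
    if (lines.isEmpty() || !lines.first().startsWith(">")) return content
    val repaired = lines.first().removePrefix(">").removePrefix(" ")
    return (listOf(repaired) + lines.drop(1)).joinToString("\n")
}
```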
Impact:
If the erroneous greater-than sign is saved permanently, it hurts readability (Figure 6); the redundant greater-than signs in Figure 7 consume a large number of tokens; and the deep-thinking content shown in Figure 2 is permanently lost.
What triggers the situations in Figures 6 and 7 is currently unclear. For now, the best way to reduce the problem is to set the upper limit on the number of context messages to an even number.
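The even-number workaround follows from the numbering above: if index 0 is the newest user message and roles strictly alternate, an odd limit puts the context boundary on an AI message, while an even limit puts it on a user message, which this bug does not touch. A minimal sketch of that arithmetic, under those two assumptions:

```kotlin
// Which message sits on the context boundary for a given limit?
// Assumes index 0 is the newest user message and roles alternate,
// so even indices are user messages and odd indices are AI messages.
fun boundaryIsAiMessage(maxContextMessages: Int): Boolean =
    maxContextMessages % 2 == 1

fun main() {
    for (limit in intArrayOf(75, 76, 77)) {
        val role = if (boundaryIsAiMessage(limit)) "AI (bug visible)" else "user (unaffected)"
        println("maximum number = $limit -> boundary message No. $limit is $role")
    }
}
```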