When the large model responds, it also displays its thinking process, and the format of the response itself is broken.
THE CONTENT THE LARGE MODEL PRESENTED TO ME WAS AS FOLLOWS:
{
  "thought": "The user asks whether the contextual information in the current conversation will be lost if the local file is moved (its path changes). We need to explain the file-upload mechanism: the file is uploaded to the server/cloud, and the model reads a copy after the upload rather than reading the file on the user's local hard drive in real time. As a result, moving, renaming, or even deleting the local file won't affect what the current conversation has already obtained.",
  "response": "Don't worry, nothing will be missing, and nothing will be affected. \n\nThe reasons are as follows:\n\n1. The file is an \"upload\", not a \"link\":\nWhen you sent me the file, you actually copied it and uploaded the copy to this chat system's server. What I am reading is the copy on the server. \n\n2. I cannot access your local hard drive:\nFor security and privacy reasons, I don't have permission to go directly to the 'C:\\Users\\...' path on your computer to find files. The path shown just now is only \"attribute information\" about the file, telling me what it was originally called and where it came from; I did not rely on this path to read the content. \n\nFor example:\nThis is like emailing a friend an attachment. Once the email is successfully sent, no matter where you move that file on your computer, or even if you delete it, your friend can still open the attachment in the email. \n\nSummary:\nAs long as I am in this conversation window (session), the contents of the file have already been \"remembered\" by me. No matter what you do with that file locally, it won't affect our conversation."
}
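A client can avoid leaking the thought process by parsing the structured output and rendering only the response field. Below is a minimal sketch, assuming the model emits a JSON object with "thought" and "response" fields as in the sample above; the function name and the fallback behavior are my own, not part of any documented API:

```python
import json

def extract_response(raw_output: str) -> str:
    """Return only the user-facing text from a structured model reply.

    Assumes the model emits a JSON object with "thought" and "response"
    fields, as in the sample above. If parsing fails (for instance, if the
    model used curly quotes or broke a string across lines), fall back to
    the raw text instead of crashing -- which is exactly the failure mode
    that leaks the thought process to the user.
    """
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return raw_output  # malformed JSON: show the raw text unchanged
    return data.get("response", raw_output)

# Usage with a well-formed reply:
raw = '{"thought": "internal reasoning...", "response": "Nothing will be lost."}'
print(extract_response(raw))  # -> Nothing will be lost.
```

Note that the fallback path is what produces the symptom described at the top: if the JSON is malformed, the raw object, thought field included, ends up on screen.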
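The "copy, not link" mechanism the model describes can also be demonstrated locally in a few lines. This is a toy analogy, not how the chat system actually stores uploads; the file names are hypothetical, and a local copy stands in for the server-side file:

```python
import os
import shutil
import tempfile

# Work in a throwaway directory; the file names are purely illustrative.
with tempfile.TemporaryDirectory() as tmp:
    original = os.path.join(tmp, "notes.txt")
    with open(original, "w", encoding="utf-8") as f:
        f.write("file contents")

    # The "upload": a copy is made, standing in for the server-side file.
    server_copy = os.path.join(tmp, "server_copy.txt")
    shutil.copy(original, server_copy)

    # The user moves (renames) and then deletes the original file.
    moved = os.path.join(tmp, "moved.txt")
    os.rename(original, moved)
    os.remove(moved)

    # The copy is unaffected and still readable.
    with open(server_copy, encoding="utf-8") as f:
        print(f.read())  # -> file contents
```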