There's a token limit per request. For your sake, it's around 1.5k to 6k characters.
No wonder my artificial memory thing is bugging out!
{
"chatLog": [
{
"prompt": "Hello! This is a memory test to see if this works :)",
"response": "Hello! This is a memory test to see if this works :) How can I assist you today?"
},
{
"prompt": "Silly GPT-3!",
"response": "It seems like you're referencing a memory test with a specific chat log. However, I'm unable to retain or recall information from previous interactions or specific logs. Each interaction with me is independent. How can I help you today?"
},
{
"prompt": "I'm using an object as a chatlog and a prompt recorder.",
"response": "It seems like you're referencing a memory test with a specific chat log. However, I'm unable to retain or recall information from previous interactions or specific logs. Each interaction with me is independent. How can I help you today?" // repeated
}
],
"prompt": "I'm using an object as a chatlog and a prompt recorder." // last recorded prompt
}
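Since the whole object gets re-sent every request, a log like the one above will eventually blow past that character limit. A minimal sketch of one way to stay under it, assuming the chatLog shape from the post (the `trim_log` helper and the 6000 budget are just illustrative, echoing the limit mentioned earlier in the thread):

```python
import json

def trim_log(chat_log, max_chars=6000):
    """Drop the oldest {prompt, response} turns until the
    serialized log fits under the character budget."""
    turns = list(chat_log["chatLog"])
    while turns and len(json.dumps({"chatLog": turns})) > max_chars:
        turns.pop(0)  # oldest turn goes first
    return {"chatLog": turns}
```

The trade-off is obvious: the model "forgets" the earliest turns first, but at least the request stops erroring out.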
That doesn't work properly; it thinks I put openai/ in the prompt.
There's a better way to send a prompt with memory.
@avi_shor Ah, sometimes we have to remember that DeepSeek is now at the top of the list.
Also, it has an open-source API that works with ChatGPT-compatible syntax.
I registered an account recently to give it a try; hopefully with zero regrets.
Pretty sure it's not free.
yes
Is there a way, if you ask it for something and the response is so long it runs off the bottom of the screen, to make the output scrollable so you can see the rest of the text?