Are you planning on using a cloud or BrowserDB to store all the names and questions and things?
Also, I think you should code some sort of AI which can take questions from other chats online and then learn from those questions. But the moderators say that would be using JS to hack and could lead to a (permanent) ban, so I suggest you shouldn't.
Currently, there is no such thing as a real AI. Most AI chatbots use clever hacks to fool us humans into thinking they're intelligent (and thus could pass, or "pass", the Turing Test).
All my AI does is find similar questions in a database and answer with the most relevant one. If it can't find any within a certain percentage of accuracy, it creates a new item for use in learning. Then it occasionally asks questions to get answers for the question that has the fewest answers.
C'mon. That was the state of the art in 1966 (ELIZA), but today even the (super annoying) chatbots on commercial web sites legitimately extract meaning, within limits, from what people say to them, let alone Siri and the like.
Seymour Papert used to complain about the "superhuman human fallacy," wherein people would complain that chess-playing programs weren't real AI because (back then) they couldn't beat chess grandmasters. The point is, even then, chess programs were better than most human chess players. (Similarly, you could make a case that commercial web site chatbots aren't any worse than the human customer support people who just read from a script: "Did you try turning the computer off then on again?")
People put too much weight on the Turing test. Turing made that up as the criterion for convincing an AI skeptic, but it's not the criterion for getting actual benefit from AI research; we're way past that milestone.
Note that I didn't capitalize the "c." I meant that generically. You've probably never heard the name of the one I mean (and I don't remember it) because it only lasted about two days before someone pointed out to management how racist it had become.
Yeah, I recall something about a chatbot being up for a short time before being taken down because it was racist.
I've been using Craiyon just to play around with; it's really fascinating, but it has a disclaimer that the images it generates may include stereotypes by nature of them being present in its training data.
I have an idea for how an AI could be made to have a consistent personality when the training data is from different people; we could attempt to mimic the way a character would talk (although I think ethical questions come up when the bot is sufficiently sophisticated).
I don't understand how the two parts of that sentence fit together. If you're talking about "training data" then you're not writing an algorithm to control its speech. Am I confused about what you mean?
Oh my god, the sign-in/login part has so many issues and faults. To enter a PIN you first have to enter a wrong Touch ID (to make it quicker), then click "choose sign-in options," then choose PIN. Sometimes you can just type in the PIN right away, but not always.