I just wrote this post about using ChatGPT to create simple Snap! programs. While it was more helpful than I expected, it clearly can be much more helpful when programming in a textual language.
That's pretty good. I was guessing before I read the details that it would solve your problems using the Scratch subset of Snap!, but it did way better than that.
The big problem, of course, is that it doesn't believe in functional programming; it consistently thinks it has to modify the value of the variable MY LIST. Next time you should specify "use functional programming" and see what that does.
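For what it's worth, the contrast is between mutating MY LIST with repeated "set" blocks and building a new list with MAP. Here's a rough sketch in Python (since Snap! blocks don't paste well as text; the variable names are just for illustration):

```python
# The imperative style ChatGPT kept producing vs. the functional style.

my_list = [1, 2, 3, 4]

# Imperative: build up a result by mutation, the way repeated
# "set [my list v] to ..." / "add ... to ..." blocks would.
doubled_imperative = []
for item in my_list:
    doubled_imperative.append(item * 2)

# Functional: build a new list with map, leaving my_list untouched,
# like Snap!'s MAP () OVER () reporter.
doubled_functional = list(map(lambda item: item * 2, my_list))

print(doubled_imperative)   # [2, 4, 6, 8]
print(doubled_functional)   # [2, 4, 6, 8]
print(my_list)              # [1, 2, 3, 4] -- unchanged
```

Both give the same answer; the functional version just never assigns to the original variable.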
I don't know if this is its error or yours, but the code picture that claims to use MAP is actually a duplicate of the previous code picture.
It's too bad it can't generate a script pic with code attached! But then I guess Jens would have a fit. :~)
I fixed the copy-and-paste mistake with the map image, thanks for reporting it.
I updated the doc with your suggestion to ask it to use functional programming. It ignored me. But then when I said don't use "set" it got the idea and did fine.
Not sure which part you need explained. Do you know that when you make a script pic, the resulting picture includes the runnable Snap! code?
So, does it learn from user interactions? Does it now know what "functional programming" means?
It did make one small but important mistake: It didn't tell you to click on "Reporter" in the make-a-block dialog.
Another mistake that I didn't notice earlier is that it doesn't understand that the "v" at the end of "[my list v]" in the text (Scratchblocks) representation of instructions that set the value of a variable represents a dropdown menu and is not part of the variable's name, so instead of
((2) * [my list v])
it should have said
((2) * (my list))
If I refer to "functional programming" in that conversation (which is stored and I can resume at will) then it very likely will have learned what it means. But if I start another conversation it will remember nothing of this conversation. Also it is limited to remembering at most the last 6000 words of a conversation. (There is a version of GPT-4 that has 4 times as much context and is being rolled out, but I don't have access (yet).)
Huh. Well I guess that's good because users can't train it to be racist, but it also can't learn useful new stuff.
What I read here made me wonder if, instead of telling ChatGPT what it should not use, showing a few examples of the preferred approach would do the trick, too.
When I was using GPT-3 (text-davinci-003) I found that presenting a few examples of what I wanted before requesting something new worked well. But GPT-4 seems to do fine (in general) without any examples.
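To make that concrete, the few-shot style described above just means putting a couple of worked examples in front of the new request, so the model continues the pattern. A sketch of what such a prompt might look like (the question/answer pairs here are invented for illustration, not from the actual conversation):

```python
# A few-shot prompt: two solved examples, then the new request left
# open-ended for the model to complete in the same style.
few_shot_prompt = """\
Q: Write a reporter that doubles every item of a list.
A: map (() * (2)) over (list)

Q: Write a reporter that keeps only the even items of a list.
A: keep (is () even?) over (list)

Q: Write a reporter that adds up all the items of a list.
A:"""

print(few_shot_prompt)
```

A zero-shot prompt would be just the last question by itself, with no examples.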
I don't have a ChatGPT Plus subscription, which runs GPT-4, because it requires a monthly payment.
So version 4, you're saying, works zero-shot?
I would say that version 4 very frequently behaves as desired with a zero-shot prompt but I'm sure there are many situations where providing a few examples would help.
@bh I meant: 1. Who's Jens? Is he from Minecraft? 2. Why would he have a fit?
Jens is the main Snap! programmer.
Jens doesn't like it when users use software other than Snap! to create Snap! saved-project files, partly because they often get it wrong and then report "Snap! bugs," but mainly because it means the user thinks there are things they can't do in Snap! itself.
I asked it to solve a task.