This discussion of the use of ChatGPT to build a Linux terminal is interesting:
@toontalk @bh Can you help me understand how these emerging systems relate to the sort of things that can be done in Snap! (if there are any such connections)?
I remember being intrigued by an implementation of Eliza in Logo. ... Years later, a NY Times columnist described the results of the following prompt to ChatGPT: “Write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.”
The prompt and response are both good and funny.
I've been playing with Snap! blocks that use GPT-3 (or Cohere.ai's free, almost-as-good equivalent) and DALL-E (a free Stable Diffusion version is coming soon). Snap! programs can put together prompts and then use the results in further computation.
Text and image generation relies upon "prompt engineering" which is like programming and yet different in many interesting ways.
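To make the "prompts as data" idea concrete, here is a minimal sketch in Python of assembling a prompt from parts before handing it to a text-generation service. The `build_prompt` helper and its parameters are illustrative assumptions, not part of Snap! or any actual API; in a real setting the resulting string would be sent to a completion endpoint and the response used in further computation.

```python
def build_prompt(style, task):
    """Combine a style instruction and a task description into one prompt string."""
    return f"Write {task} in the style of {style}."

# Assemble the famous example prompt from its pieces.
prompt = build_prompt(
    style="the King James Bible",
    task="a verse explaining how to remove a peanut butter sandwich from a VCR",
)
print(prompt)
```

The point is that the prompt itself is just a value a program can construct, vary, and iterate on, which is where prompt engineering starts to feel like programming.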
I've written a short post about this.
Comments very welcome.
Very interesting!
One of the hallmarks of ChatGPT seems to be that it is good at presenting incorrect information in a plausible way. Stack Overflow paused the posting of ChatGPT-generated solutions because hundreds of incorrect and incomplete answers were appearing on the site. ChatGPT seems to vacuum up all sorts of information - some correct and some incorrect - that it uses to construct its responses.
Would it be correct to say that this is a case of GIGO? And does it seem likely that the next generation of GPT will address this, or is this more likely to be a persistent characteristic of this particular technology?
Sometimes the "garbage" is useful. Many neural network program generators will generate hundreds of possible answers and then run them to see which one satisfies the test cases. Still useful even if only a few percent pass.
And it does "know" where the say block is. But it claims it can produce both written and spoken output (perhaps "say" wasn't the best name for this block).
That's great! Really!
Here’s another ChatGPT gem, in answer to my “logic” question:
… ChatGPT responded:
So there you have it. I’m impressed by ChatGPT’s ability to produce utter nonsense as unchallengeable truth.
However, I must admit that more often ChatGPT is correct and will not be fooled by trick questions. For example, I asked it why France invaded Belgium, starting WWI, and it answered, correctly, that it was actually Germany that invaded Belgium, adding some relevant background info.
On the other hand, if you ask ChatGPT to solve a set of two equations (like 3y = x + 7 and 2y + 3 = x), it may miscalculate (as it did with me); and if you point that out and ask it to redo the calculation, it will start guessing at random … hilarious!
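For the record, the system above has a straightforward solution by substitution: since x = 2y + 3, the first equation becomes 3y = (2y + 3) + 7, so y = 10 and x = 23. A quick Python check of that solution:

```python
# Verify the solution of 3y = x + 7 and 2y + 3 = x by substitution:
# 3y = (2y + 3) + 7  =>  y = 10, and then x = 2*10 + 3 = 23.
y = 10
x = 2 * y + 3  # from the second equation

print(x, y)            # 23 10
print(3 * y == x + 7)  # True: first equation holds
print(2 * y + 3 == x)  # True: second equation holds
```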