My (remixed) TLDR:
OpenAI’s Brockman and Codex’s lead Wojciech Zaremba demonstrated it to me online. Brockman found a silhouette of a person on Google Images. (...) After adding it to the stage, Brockman said: 'make it a bit bigger', then: 'now make it controllable with the left and right arrow keys'.
Codex translated it perfectly from English into code, but because the figure kept disappearing off-stage, Brockman added another request: 'constantly check if the person is off the page, and if so, put it back on the page'.
Curious how precise these instructions need to be, I suggested we try a different request: 'make sure the person can’t exit the stage.'
I had to laugh.
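For anyone wondering what those spoken requests roughly compile to, here is a minimal sketch in plain JavaScript of the kind of logic the prompts describe, arrow-key movement plus the "can't exit the stage" clamp. All the names and sizes here are my own assumptions, not what Codex actually produced in the demo:

```javascript
// Hypothetical sketch of the logic the prompts describe:
// move a figure with the arrow keys, and clamp it so it
// can never leave the stage. STAGE_WIDTH, FIGURE_WIDTH and
// STEP are assumed values, not from the demo.
const STAGE_WIDTH = 480;
const FIGURE_WIDTH = 40;
const STEP = 10;

// Pure movement logic, kept separate so it can be tested
// without a browser.
function moveFigure(x, key) {
  if (key === 'ArrowLeft')  x -= STEP;
  if (key === 'ArrowRight') x += STEP;
  // "make sure the person can't exit the stage":
  // clamp x into [0, STAGE_WIDTH - FIGURE_WIDTH].
  return Math.min(Math.max(x, 0), STAGE_WIDTH - FIGURE_WIDTH);
}

// In the browser this would be wired to key events, e.g.:
// let x = 0;
// document.addEventListener('keydown', e => {
//   x = moveFigure(x, e.key);
//   figure.style.left = x + 'px';
// });
```

The nice part of the clamp formulation is that it makes the "constantly check and put it back" request unnecessary: the position simply can never go out of bounds in the first place.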
Read the original here
Or watch the demo on YouTube: OpenAI Codex Live Demo
but watching it at 25 min 30 sec, I am in shock at what they did with the Microsoft Word API. It is wonderful!! EDIT: I will try to do something similar in my text editor's code, I'm so excited!
P. P. P. S.
I know you are deservedly enjoying your summer vacation in beautiful southern France, but @jens - that is something you'd want to have for Snap!, too, wouldn't you?
P. P. P. P. S.
I've just joined the waiting list to try Codex hands-on. You can join it here