(Inspired by this French museum's post.)
I built this; it shows the difference between the hues in the two pictures, making it easier to spot the differences on your own.
Actually, playing with @fridolinux's project, I found that all I need is to switch back and forth between the two sprite costumes, and the differences just jump out at me.
But if you want to do it in code, there's no need for two nested FOR loops, because costumes are stored as linear bitmaps, so you can just
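The idea of treating a costume as a flat pixel list (rather than a 2D grid) can be sketched in Python; the function and the "blank" value here are my own illustrative choices, not the actual Snap! blocks:

```python
# Sketch of the single-loop idea: each costume's pixels form one flat
# list of (R, G, B, A) tuples, so we can compare the two images
# element-wise instead of nesting loops over x and y.
def diff_pixels(pixels_a, pixels_b, blank=(255, 255, 255, 0)):
    """Keep a pixel from the first image only where it differs from
    the second; otherwise substitute a transparent 'blank' pixel."""
    return [a if a != b else blank for a, b in zip(pixels_a, pixels_b)]

img1 = [(10, 20, 30, 255), (50, 50, 50, 255), (0, 0, 0, 255)]
img2 = [(10, 20, 30, 255), (60, 50, 50, 255), (0, 0, 0, 255)]
print(diff_pixels(img1, img2))  # only the middle pixel survives
```

In Snap! terms this is one MAP over the pixel lists, with no nested FOR loops at all.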
I tried something really similar yesterday, but it didn't work as expected... I didn't try it a second time, because my script took maybe 15 minutes and the output wasn't usable.
So when I woke up this afternoon and it was still running from yesterday, I decided to investigate... Those pictures have 172K pixels! So I tried it after reducing the pictures by 50% (linear measure) and it only took an hour or so to compute. But hardly any pixels turned out to be exactly the same. To make it work I'd have to do some sort of fuzzy comparison, which would make it even slower. :~( This sort of thing looks easy when Jens does it...
No need to warp anything, but it does help to cache one set of pixels and define the blank color beforehand; then this otherwise identical script runs almost instantly on my laptop:
It's supposed to show only those pixels of the first picture that are different in the second one, and it does, but it shows waaay more than expected. I'm pretty sure that's because of the not-so-lossless JPG compression algorithm. It would be fun to try this on the actual uncompressed bitmaps, because this way what could have been an interesting media-comp activity is basically a lesson in compression artifacts.
Ah. I actually did think to make the blank pixel in advance, but I thought the PIXELS OF COSTUME block was just a pointer dereference, same as looking in a variable.
But mainly, one of the dozen-odd programs I need to make my computer feel like my computer must be slowing it down. Someday I'll get around to turning them off one at a time. (I don't think I could get anything done turning off half at a time...)
Me too. Is there any particular reason why it isn't?
Why is it that defining it in advance makes any difference?
Making a new list involves allocating space for the four items, plus a header object. That takes a lot longer than just copying a pointer. Also, besides the time to make them, the more than 100K copies of the list occupy a lot of memory, which indirectly affects time because of paging.
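The cost of allocating a fresh list per pixel versus copying a reference can be sketched in Python (names and counts are mine; this only illustrates the allocation argument, not Snap!'s internals):

```python
import timeit

def rebuild_each_time(n):
    # Allocates a fresh 4-item list (header + slots) on every iteration.
    out = []
    for _ in range(n):
        blank = [255, 255, 255, 0]   # new allocation each time
        out.append(blank)
    return out

def reuse_cached(n):
    # Allocates the blank list once; the loop copies only a reference.
    blank = [255, 255, 255, 0]
    out = []
    for _ in range(n):
        out.append(blank)            # pointer copy, no new allocation
    return out

slow = timeit.timeit(lambda: rebuild_each_time(100_000), number=5)
fast = timeit.timeit(lambda: reuse_cached(100_000), number=5)
print(f"fresh list each time: {slow:.3f}s, cached: {fast:.3f}s")
```

Both functions produce the same-looking result; the second just avoids 100K+ small allocations (and the memory pressure that comes with them).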
Naw, getting pixel data in JS is not just a reference but a complex transformation operation that takes time. Therefore, if you're going to access it repeatedly, it's better to keep it around in a cache. The "blank" pixel doesn't really have to be cached.
Hi everybody! What an interesting thread!
My suggestion would be to use a threshold (to avoid compression effects) and to check only one color channel (to increase speed). Working with pictures with thousands of colors, areas with changes will change all three RGB values... of course, you can build an example that breaks this solution.
For example, doing
the result is
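A rough Python sketch of Joan's suggestion as I read it: compare only the red channel, and count a pixel as "changed" only when the difference exceeds a threshold, so small JPG artifacts don't register. The function name and threshold value are my own choices:

```python
# Single-channel, thresholded comparison: fast, and tolerant of
# lossy-compression noise.
def changed_mask(pixels_a, pixels_b, threshold=10):
    """Return True for pixels whose red channels differ by more than
    the threshold."""
    return [abs(a[0] - b[0]) > threshold
            for a, b in zip(pixels_a, pixels_b)]

before = [(100, 0, 0), (200, 0, 0), (30, 0, 0)]
after  = [(103, 0, 0), (90, 0, 0), (30, 0, 0)]  # small noise vs. a real change
print(changed_mask(before, after))  # → [False, True, False]
```

The first pixel differs by only 3 (compression noise, ignored); the second by 110 (a real change, reported).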
Wow, that's a very beautiful and elegant solution, and a visually compelling one, too! Thanks, Joan
That's brilliant! I'm surprised checking only red didn't give you more false positives. I guess there's a lot of redundancy in real pictures.
And your solution made it even more interesting, Joan!
I wonder if your solution could be also used to "sense" a hand being moved up, down, left or right on webcam video (providing nothing else has been moved) by comparing the 'before' and 'after' images?
We already have that as primitive!
Hi! Some comments...
Yes Brian, if we check only R, there could be changed pixels that are not detected (because they have the same R but different G and/or B). But working with real pictures, with 24-bit color, one channel will be enough for normal cases.
Usually, when detection algorithms want to consider all colors (to avoid these problems with some pictures or some formats with few colors), they first convert the pictures to grayscale and then do the job with this single mixed value (instead of checking all three color values).
Here, as Jens pointed out, we need a threshold because of compression issues. In other cases we can check pixels with no threshold (testing for equality), but I think a small threshold is a good idea, to avoid various problems and to get better performance in general cases.
And yes, we can detect video motion this way. But we won't get the same efficiency as our primitives. Check our code: we use 32-bit typed arrays and an optical-flow algorithm to get the best performance (speed).
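This is not Snap!'s actual optical-flow primitive, just a crude Python sketch of the motion idea from the thread: diff two frames, then look at where the changed pixels sit to get a sense of what moved. All names here are mine:

```python
# Frames are 2D lists of grayscale values. Diff them with a threshold,
# then compute the centroid of the changed pixels; tracking that
# centroid across frames gives a rough direction of motion.
def centroid_of_changes(frame_a, frame_b, threshold=10):
    """Return the (x, y) centroid of changed pixels, or None if
    nothing changed."""
    xs, ys = [], []
    for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A bright blob at x=1 moves to x=3 between two one-row frames:
f1 = [[0, 255, 0, 0]]
f2 = [[0, 0, 0, 255]]
print(centroid_of_changes(f1, f2))  # → (2.0, 0.0)
```

Real optical flow does far more (it matches local patches between frames), which is why the typed-array primitive is much faster and more robust than anything like this.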
But of course, results are not the most important thing! Playing with this (mapping and using higher-order functions over image arrays) is fun, and we can learn a lot about math, about digital transformations (more math and tech, like compression issues), and about colors, pictures, effects...
I am impressed! This is totally useful and super fast! I love it!
Another question. Is it possible to do something like this - see the video.
Sure it's possible! But I don't know if it's easy. It'd be easier in StarLogo or NetLogo, where each pixel of the stage is an object that can run code, because the way to model it is that each pixel that moves pulls along its neighbors.
Maybe we could achieve that by having pixel-sized clones of the "front" pixel, the one you're dragging.
Failing that it'd probably be some horribly large matrix multiplication thing... But now that I've said so, Jens or Joan will come up with a simple algorithm. :~)