I noticed that an old version of the code is still present inside this block, so I ran some tests with it. Surprise: the same request takes only 5.5 seconds!
current block (8.5 minutes)
old code (5.5 seconds)
I noticed one thing about the old code: if I remove the "warp", the code executes in 4.5 seconds! (I don't understand why...)
I'm not a huge fan of those terribly inefficient library blocks!
Why don't you use the analyze reporter from the "frequency distribution analysis" library to make your own:
I've just now tried it on a huge list of 10 Million (!) items and got the result instantly.
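For readers who don't know Snap!, the idea behind a dedup reporter built on a frequency-distribution (hash-based) pass can be sketched in Python. This is an illustrative analogue, not the actual library code; the function name is made up:

```python
def remove_duplicates(items):
    """Return items with later duplicates removed, keeping first occurrences.

    A dict keyed by the items themselves gives amortized O(1) membership
    tests, which is why a hash-based pass can handle millions of items
    almost instantly, versus minutes for repeated pairwise scans.
    """
    seen = {}
    result = []
    for item in items:
        if item not in seen:
            seen[item] = True
            result.append(item)
    return result

print(remove_duplicates([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

The whole input is traversed once, so the run time grows linearly with the list length rather than quadratically.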
I've just now updated the official "remove duplicates" reporter in the list library with my solution above. You might have to clear your cache to try it in the current v7 dev version.
Thanks! Sorry for the glitch; I've just fixed it. Make sure to clear your cache so you get the correct version next time you load the list utilities library.
(@d4s_over_dt4, if you're reading: excuse me for necroposting)
I have an issue with .
The (very fast) current (Snap! 8.2.3) implementation:
... doesn't work for lists containing composite data:
... whereas the previous (admittedly, much slower) implementation:
... does:
I can see why the current implementation is preferable in many cases. On the other hand, both from a pedagogical perspective and whenever lists with composite data are involved, the older implementation is often to be preferred.
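The trade-off described above can be illustrated with a hedged Python analogue (these are not the Snap! library's actual implementations): a hash-based dedup is fast but chokes on composite data, while a pairwise-equality dedup is O(n²) yet works for anything that supports equality. In Python the parallel failure mode is that nested lists are unhashable:

```python
def dedup_hashed(items):
    # Fast path: one linear pass keyed on the items themselves.
    # Relies on items being hashable; composite data such as
    # nested lists raises TypeError here.
    return list(dict.fromkeys(items))

def dedup_pairwise(items):
    # Slow path: O(n^2) equality comparisons, but works for any
    # items that support ==, including nested lists.
    result = []
    for item in items:
        if not any(item == kept for kept in result):
            result.append(item)
    return result

nested = [[1, 2], [3], [1, 2]]
print(dedup_pairwise(nested))  # [[1, 2], [3]]
try:
    dedup_hashed(nested)
except TypeError:
    print("hash-based version rejects composite data")
```

A hybrid reporter along the lines proposed above could try the fast path first and fall back to pairwise comparison when the data isn't suitable for hashing.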
Therefore I propose that both implementations - or perhaps a third one, containing both algorithms and determining which is applicable - be made part of the "List utilities" library, including help text explaining their respective characteristics.
@d4s_over_dt4: congrats, you guessed right! @ego-lay_atman-bay: thanks for the explanation! If I've done my homework right, my posting might actually be called "necro-bumping".
Now, what do you folks (and/or others) think of my point regarding REMOVE DUPLICATES FROM ?