Matrix library

I just tested it again. Your algorithm was actually faster by 0.3 ms.
I had too many Chrome tabs open when I got that bad answer.

As someone who sometimes works with AI (yess data science let's go), this is the truth. Having to juggle 5 different versions of TF (TensorFlow) just to get a single module off the ground is super fun, especially alongside having to port code that's meant to run on a computer with a good GPU over to Colab or Kaggle :slight_smile:

But now we have to explicitly stack analog tensor units and matrix hardware just to do AI!
Back then you could just do the matrix math by hand, but now it gets far too long, so people ended up building matrix libraries and dedicated hardware units.

structure of those analog hardware units

It's just a physical analog: you shine light encoding the input vector, it gets multiplied and reflected down through the grid, everything adds up at the end, and you read out the output vector. (For AIs, you could inject the bias here, and the ReLU latches on here too, since ReLU is just a gate that encourages flow in one direction and resists it in the other (a diode, basically).)
They even reused flash memory chips as programmable resistors. The only place a multiply happens is when you go through a resistor and volts turn into amperes (Ohm's law), and a flash cell's resistance is inversely proportional to the voltage latched into it, so you effectively multiply two voltages together to make a current. Then you add those currents because of how they behave at wire intersections (Kirchhoff's current law).
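Here's a minimal sketch of that idea in plain NumPy (no real device API; the conductance and voltage numbers are made up): Ohm's law does the multiplies, Kirchhoff's current law does the adds, and a diode-style clamp plays ReLU.

```python
# Toy model of a resistive crossbar computing a matrix-vector product
# with physics: Ohm's law multiplies (I = G * V), Kirchhoff's current
# law adds the currents where the wires meet.
import numpy as np

def crossbar_matvec(conductances, input_volts, bias_amps=None):
    """conductances: (rows, cols) in siemens, one flash cell per weight.
    input_volts: (cols,) voltages driven onto the column wires."""
    # Each cell contributes I = G * V (Ohm's law: the multiply)...
    cell_currents = conductances * input_volts      # shape (rows, cols)
    # ...and currents on the same row wire simply sum (Kirchhoff: the add).
    row_currents = cell_currents.sum(axis=1)        # shape (rows,)
    if bias_amps is not None:                       # bias injected as extra current
        row_currents += bias_amps
    # A diode passes current one way and blocks the other: exactly ReLU.
    return np.maximum(row_currents, 0.0)

G = np.array([[1.0, 0.5],
              [0.2, 2.0]])    # "weights" stored as conductances
V = np.array([0.3, -0.1])     # input vector as voltages
print(crossbar_matvec(G, V))  # same result as np.maximum(G @ V, 0)
```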

https://snap.berkeley.edu/snap/snap.html#present:Username=d4s_over_dt4&ProjectName=matrices%20vs%20hardcoding
WAT WHY WOULD THE TRIGS BE SLOWER????
Now I'm convinced about all the matrix stuff used in neural networks: matrices really would be faster (more precisely, less affected by the high-level-languages-are-slow effect, and better able to harness GPUs and the matrix hardware).
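For anyone curious why, here's a rough re-run of that trig-vs-matrix comparison in NumPy (the point count and angle are arbitrary). Both compute the exact same rotation; the per-point trig loop pays the interpreter cost on every point, while the matrix form is one batched product that the runtime can hand to optimized (or analog!) matrix hardware.

```python
import numpy as np
import time

theta = 0.7
pts = np.random.rand(100_000, 2)           # arbitrary batch of 2D points

# "Hardcoded trig": rotate each point with explicit sin/cos arithmetic.
c, s = np.cos(theta), np.sin(theta)
t0 = time.perf_counter()
out1 = np.empty_like(pts)
for i in range(len(pts)):                  # one interpreted loop step per point
    x, y = pts[i]
    out1[i] = (c * x - s * y, s * x + c * y)
t1 = time.perf_counter()

# "Matrix stuff": the same rotation as one batched matrix product.
R = np.array([[c, -s],
              [s,  c]])
t2 = time.perf_counter()
out2 = pts @ R.T
t3 = time.perf_counter()

assert np.allclose(out1, out2)             # identical math, very different speed
print(f"per-point trig loop: {t1 - t0:.4f}s, one matrix product: {t3 - t2:.4f}s")
```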
Anyway, you won't be able to use TPU power in Snap!, and even if you could, it wouldn't earn back the transmission delay (since you'd have to run a server outside).
3D stuff -> Snap!; really computationally intensive and slow tasks -> C++.

I think you're missing the point. It's not that we had less data back then; it's that AI wasn't data-based at all! It was based on symbolic programming: things like understanding a text by analyzing parts of speech from a dictionary to build a sort of abstract syntax tree, in order to work out who did what to whom.
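To make that concrete, here's a toy sketch of that symbolic, dictionary-driven style (the tiny lexicon and the subject-verb-object rule are invented for illustration, not taken from any historical system): no data, no training, just hand-written rules over parts of speech.

```python
# Hypothetical mini-lexicon: part-of-speech tags come from a dictionary,
# not from learned statistics.
LEXICON = {
    "dog": "NOUN", "cat": "NOUN", "ball": "NOUN",
    "chased": "VERB", "saw": "VERB",
    "the": "DET", "a": "DET",
}

def who_did_what_to_whom(sentence):
    """Tag each word from the dictionary, then apply a hand-written
    subject-verb-object rule to extract who did what to whom."""
    tagged = [(w, LEXICON.get(w, "?")) for w in sentence.lower().split()]
    nouns = [w for w, t in tagged if t == "NOUN"]
    verbs = [w for w, t in tagged if t == "VERB"]
    if len(nouns) >= 2 and verbs:
        # Rule: first noun is the agent, second noun is the patient.
        return {"who": nouns[0], "did": verbs[0], "to_whom": nouns[1]}
    return None

print(who_did_what_to_whom("The dog chased a cat"))
# {'who': 'dog', 'did': 'chased', 'to_whom': 'cat'}
```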

Wow!
