How do I implement the dot product in Snap?

I made a set of blocks to work with matrices, but I can’t figure out how to implement a dot product with them. My project link is: Math Blocks 1.0 by sunnyandshivcodebros | Snap! Build Your Own Blocks. Please help!

If the dot product is defined as

\vec p ^{[n]} \cdot \vec q ^{[n]} = \sum\limits_{i=1}^n p_iq_i

[untitled script pic (48)]
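(As a concrete instance of that definition, with n = 3:

(1, 2, 3) \cdot (4, 5, 6) = 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32,

i.e. a single number.)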

The only problem I see in your matrix multiplication procedure is that the last line should REPORT OUTPUT rather than REPORT (ITEM 1 OF OUTPUT).

But you're making it harder than necessary by not taking advantage of Snap!'s functional programming capabilities.

First define vector dot product:


(To get the SUM block, drag the × block over the arrowheads ◂⋮▸ of the + block. It adds all the items of the input list, like COMBINE WITH +.)
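(For anyone reading without the script pic, here's a rough Python sketch of the same functional idea — not the Snap! block itself, and the name `dot` is just illustrative:)

```python
def dot(p, q):
    # termwise products of corresponding items, then add them all up,
    # like MAP (x) OVER the two vectors followed by SUM (combine with +)
    return sum(a * b for a, b in zip(p, q))

print(dot([2, 3], [4, 5]))   # 2*4 + 3*5 = 23
```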

Then you can multiply the matrices:


(COLUMNS OF is one of the options in the LENGTH OF block; it transposes the matrix. To get the INPUT NAMES part of the grey rings, click on the right arrowhead ▸.)

In functional programming, you don't create an array and then modify it with a bunch of REPLACE calls. Instead, you generate the array you want to begin with. This should be easy to understand in the relatively simple vector dot product block; the result you want is the sum of all the termwise products of the items of the two vectors.

In the matrix product block, you want to operate on the transpose of the right input, so as to combine the rows of A with the columns of B. You can just use COLUMNS OF B as one of the MAP inputs to accomplish that without having to put it in a variable first. And then, each item of the result matrix is the vector dot product of a row of A with a column of B.
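(Again as a hedged Python sketch rather than the actual Snap! blocks — the function names are illustrative — the whole result matrix can be generated in one go, with the transpose playing the role of COLUMNS OF:)

```python
def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

def matrix_product(a, b):
    b_columns = list(zip(*b))    # transpose of b, i.e. COLUMNS OF b
    # build the entire result at once: each entry is the dot product
    # of a row of a with a column of b -- no output variable, no REPLACE
    return [[dot(row, col) for col in b_columns] for row in a]

# a 2x3 matrix times a 3x2 matrix gives a 2x2 matrix
print(matrix_product([[1, 2, 3], [4, 5, 6]],
                     [[7, 8], [9, 10], [11, 12]]))   # [[58, 64], [139, 154]]
```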

P.S. You confused @dardoro by using the name "dot product" for the matrix inner product. "Dot product" is generally used only for the scalar product of vectors.

Thank you! I really needed the help; I'm a new user of Snap! and I'm used to approaching things the way I would in Scratch.

Edit: I also used

(item [1 v] of (output))

because apparently the variable sum was a list for some reason???


I'm confused. Of course it's a list; the product of two matrices is a matrix! Maybe you really were thinking of the dot product of two vectors?

Yeah, but like, I was trying to get a singular value of the matrix.

Apparently,

(combine $list with (() + ()) ::list)

doesn’t work right in my original program?

in my original program

... which I can't read any more because you saved a new version! :~/

I was trying to get a singular value of the matrix.

I don't know what you mean by "a singular value." Do you mean a number? The product of two matrices is a matrix. There are various numbers associated with a matrix, such as its determinant and its eigenvalues. But those aren't products of matrices.

Again, the dot product of two vectors is a number; maybe that's what you're thinking of?

Or maybe we're having some confusion about what the word "value" means. A matrix is a value, one value, even though it isn't one number.

I mean, I was trying to get one item of the matrix.

Edit: Like, item i, j of the resulting matrix.

Edit 2: the code was

set [sum v] to (combine ((row (i) of (input 1) ::#00ff00) × (column (j) of (input 2) ::#00ff00)) using ((() + ()) @addInput))
replace item (i) (j) of (output) with (sum) ::#0f0
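(For comparison, here's a rough Python reconstruction of that imperative approach — a sketch of what the script above seems to do, not the project's actual code; `output`, `i`, `j` and the sum mirror the names in the script:)

```python
def matrix_product_imperative(a, b):
    rows, cols, inner = len(a), len(b[0]), len(b)
    # pre-build an output matrix of zeros, then overwrite each entry,
    # which is what the REPLACE-based script does
    output = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # sum = row i of a combined termwise with column j of b (a single number)
            s = sum(a[i][k] * b[k][j] for k in range(inner))
            output[i][j] = s    # replace item i, j of output with sum
    return output
```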

Do you really mean the Frobenius inner product (Frobenius inner product - Wikipedia)?
For matrices of real values it can be calculated as
[untitled script pic (52)]
or
[untitled script pic (53)]
or with the APL library.
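(Since the script pics don't survive here, a minimal Python sketch of what the Frobenius inner product computes — the sum of the elementwise products of two same-shaped matrices, a single scalar; numpy is used only to show the equivalent trace(AᵀB) form:)

```python
import numpy as np

def frobenius_inner(a, b):
    # sum over all i, j of a[i][j] * b[i][j]
    return sum(x * y for row_a, row_b in zip(a, b)
                     for x, y in zip(row_a, row_b))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(frobenius_inner(A, B))                    # 1*5 + 2*6 + 3*7 + 4*8 = 70
print(np.trace(np.array(A).T @ np.array(B)))    # 70, the trace form of the same value
```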

Do you have a test case?

I put the test cases in my project. (Matrix 5 is the inner product of matrices 1 and 2)

Also, I wasn't thinking of the Frobenius inner product; I was thinking of the standard matrix product, which returns a matrix.

???

I'm not sure what "standard" means here. According to the lectures below, it's a scalar:
\langle x, y \rangle = \sum\limits_i \sum\limits_j x_{ij} y_{ij}

Auburn University, Randall R. Holmes
https://web.auburn.edu/holmerr/2660/Textbook/innerproduct-print.pdf
9.4 Example: Matrix space

Princeton University, Amir A. Ahmadi
https://www.princeton.edu/~aaa/Public/Teaching/ORF523/S16/ORF523_S16_Lec2_gh.pdf
1.1.2 Examples


So whatever you want to calculate, if you consider the problem solved, it's EOT for me.

According to the lectures below, it's a scalar

Interesting. 60 years ago I knew all about linear algebra and Hilbert spaces, which is where Wikipedia leads me. And yes, they define "inner product" as you do, producing a scalar.

But all the APL books use "inner product" to mean matrix product, the kind where an m×n matrix times an n×p matrix gives an m×p matrix.

ChatGPT:

How come my linear algebra book thinks the "inner product" of two matrices is a scalar, whereas APL thinks the "inner product" of two matrices is a matrix?

ChatGPT said:

Great question, and it's a subtle but important point — the term inner product means different things in different contexts.


🔹 In linear algebra:

The inner product (also called the dot product in some contexts) is defined for vectors, not matrices. For two vectors \mathbf{u}, \mathbf{v} \in \mathbb{R}^n, their inner product is a scalar:

\langle \mathbf{u}, \mathbf{v} \rangle = \sum\limits_{i=1}^n u_i v_i

So when your linear algebra book talks about inner products, it's referring to vector inner products, and naturally the result is a scalar.

For matrices, linear algebra usually uses terms like matrix multiplication, Frobenius inner product, or trace inner product, depending on context.


🔸 In APL:

APL is a language designed for high-level array manipulation, and it generalizes operations like inner product to work on arrays of any rank, not just vectors.

APL's inner product operator F.G (where F and G are functions like +, ×, etc.) combines two arrays using two functions:

A F.G B

This means:

  • Apply G (e.g., multiplication) elementwise between rows of A and columns of B
  • Then apply F (e.g., addition) to reduce the result

This mimics matrix multiplication, but can generalize further. So in APL, the inner product of two 2D arrays (matrices) is a matrix (like matrix multiplication).
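(A rough Python sketch of that generalized inner product, with F as the reducing function and G as the elementwise one — the name `inner_product` is illustrative, and this handles only the two-matrix case:)

```python
from functools import reduce
import operator

def inner_product(f, g, a, b):
    # combine each row of a with each column of b elementwise using g,
    # then reduce each combined sequence with f (roughly APL's  a f.g b)
    b_columns = list(zip(*b))
    return [[reduce(f, (g(x, y) for x, y in zip(row, col)))
             for col in b_columns]
            for row in a]

# with f = + and g = x this is ordinary matrix multiplication
print(inner_product(operator.add, operator.mul,
                    [[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```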


🔄 Summary:

Context "Inner Product" of Matrices Result
Linear Algebra Usually not defined as "inner product"; uses matrix multiplication or Frobenius inner product Often a scalar (Frobenius) or matrix (multiplication)
APL Generalized inner product (matrix multiplication or more) A matrix

So your book and APL are using the same phrase, "inner product", for different ideas — one based in formal math for vectors, and the other a powerful generalization for array programming.

So you pays your money and you makes your choice, I guess. Note that ChatGPT thinks the linear algebra inner/dot product only applies to vectors.
