### Scalar product

In mathematics, the **dot product**, or **scalar product** (or sometimes **inner product** in the context of Euclidean space), is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. This operation can be defined either algebraically or geometrically. Algebraically, it is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the magnitudes of the two vectors and the cosine of the angle between them. The name "dot product" is derived from the centered dot " **·** " that is often used to designate this operation; the alternative name "scalar product" emphasizes the scalar (rather than vectorial) nature of the result.

In three-dimensional space, the dot product contrasts with the cross product of two vectors, which produces a pseudovector as the result. The dot product is directly related to the cosine of the angle between two vectors in Euclidean space of any number of dimensions.

## Definition

The dot product may be defined in either of two ways: algebraically or geometrically. The equivalence of the two definitions is proved below.

The geometric definition is based on the notion of angle. In the modern presentation of Euclidean geometry, points are defined as coordinate vectors; in such a presentation, the notions of length and angle are not primitive and must themselves be defined. The length of a vector is then defined as the square root of the dot product of the vector with itself, and the geometric definition of the dot product is inverted to define the notion of (non-oriented) angle.

### Algebraic definition

The dot product of two vectors **a** = [*a*_{1}, *a*_{2}, ..., *a*_{n}] and **b** = [*b*_{1}, *b*_{2}, ..., *b*_{n}] is defined as:^{[1]}

- $\mathbf{a}\cdot\mathbf{b} = \sum_{i=1}^n a_ib_i = a_1b_1 + a_2b_2 + \cdots + a_nb_n$

where Σ denotes summation and *n* is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors [1, 3, −5] and [4, −2, −1] is:

- $$

\begin{align} \ [1, 3, -5] \cdot [4, -2, -1] &= (1)(4) + (3)(-2) + (-5)(-1) \\ &= 4 - 6 + 5 \\ &= 3. \end{align}
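As a quick illustration, the algebraic definition translates directly into code. A minimal Python sketch (plain lists, no libraries) reproducing the worked example above:

```python
# Algebraic definition: the dot product is the sum of products of
# corresponding entries of two equal-length sequences of numbers.
def dot(a, b):
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))

# The worked example from the text:
print(dot([1, 3, -5], [4, -2, -1]))  # → 3
```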

### Geometric definition

In Euclidean space, a Euclidean vector is a geometrical object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector **A** is denoted by $\|\mathbf{A}\|$. The dot product of two Euclidean vectors **A** and **B** is defined by^{[2]}

- $\mathbf{A}\cdot\mathbf{B} = \|\mathbf{A}\|\,\|\mathbf{B}\|\cos\theta,$

where θ is the angle between **A** and **B**.

In particular, if **A** and **B** are orthogonal, then the angle between them is 90° and

- $\mathbf{A}\cdot\mathbf{B} = 0.$

At the other extreme, if they are codirectional, then the angle between them is 0° and

- $\mathbf{A}\cdot\mathbf{B} = \|\mathbf{A}\|\,\|\mathbf{B}\|.$

This implies that the dot product of a vector **A** by itself is

- $\mathbf{A}\cdot\mathbf{A} = \|\mathbf{A}\|^2,$

which gives

- $\|\mathbf{A}\| = \sqrt{\mathbf{A}\cdot\mathbf{A}},$

the formula for the Euclidean length of the vector.
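When lengths are defined this way, the geometric definition can be inverted to recover the angle from the dot product. A minimal Python sketch of both relations:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    # Euclidean length: ||A|| = sqrt(A . A)
    return math.sqrt(dot(a, a))

def angle(a, b):
    # Inverting the geometric definition: cos(theta) = (A . B) / (||A|| ||B||)
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

# Orthogonal vectors give a 90 degree angle:
print(math.degrees(angle([1, 0], [0, 1])))  # → 90.0
```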

### Scalar projection and the equivalence of the definitions

The scalar projection (or scalar component) of a Euclidean vector **A** in the direction of a Euclidean vector **B** is given by

- $A_B = \|\mathbf{A}\|\cos\theta$

where θ is the angle between **A** and **B**.

In terms of the geometric definition of the dot product, this can be rewritten

- $A_B = \mathbf{A}\cdot\widehat{\mathbf{B}}$

where $\widehat{\mathbf{B}} = \mathbf{B}/\|\mathbf{B}\|$ is the unit vector in the direction of **B**.

The dot product is thus characterized geometrically by^{[3]}

- $\mathbf{A}\cdot\mathbf{B} = A_B\|\mathbf{B}\| = B_A\|\mathbf{A}\|.$
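The scalar projection is easy to compute from the algebraic dot product; a small sketch with hypothetical vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def scalar_projection(a, b):
    # A_B = A . (B / ||B||): the component of a in the direction of b
    return dot(a, b) / norm(b)

# Projecting [3, 4] onto the x-axis keeps only the x component:
print(scalar_projection([3, 4], [1, 0]))  # → 3.0
```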

The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar α,

- $(\alpha\mathbf{A})\cdot\mathbf{B} = \alpha(\mathbf{A}\cdot\mathbf{B}) = \mathbf{A}\cdot(\alpha\mathbf{B}).$

It also satisfies a distributive law, meaning that

- $\mathbf{A}\cdot(\mathbf{B}+\mathbf{C}) = \mathbf{A}\cdot\mathbf{B} + \mathbf{A}\cdot\mathbf{C}.$

As a consequence, if $\mathbf{e}_1,\dots,\mathbf{e}_n$ are the standard basis vectors in $\mathbb{R}^n$, then writing

- $$\begin{align} \mathbf{A} &= [A_1,\dots,A_n] = \sum_i A_i\mathbf{e}_i \\ \mathbf{B} &= [B_1,\dots,B_n] = \sum_i B_i\mathbf{e}_i \end{align}$$

we have

- $\mathbf{A}\cdot\mathbf{B} = \sum_i B_i(\mathbf{A}\cdot\mathbf{e}_i) = \sum_i B_iA_i,$

which is precisely the algebraic definition of the dot product. More generally, the same identity holds with the **e**_{i} replaced by any orthonormal basis.
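The basis-expansion argument can be checked numerically; a small sketch with hypothetical vectors in $\mathbb{R}^3$:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def basis_vector(i, n):
    # Standard basis vector e_i in R^n
    return [1 if j == i else 0 for j in range(n)]

A = [2.0, -1.0, 3.0]
B = [1.0, 4.0, 0.5]
n = len(A)

# Expand B in the standard basis and use linearity in each slot:
expanded = sum(B[i] * dot(A, basis_vector(i, n)) for i in range(n))
print(expanded == dot(A, B))  # → True
```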

## Properties

The dot product fulfils the following properties if **a**, **b**, and **c** are real vectors and *r* is a scalar.^{[1]}^{[2]}

**Commutative:**

- $\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a},$ which follows from the definition (*θ* is the angle between **a** and **b**):
- $\mathbf{a}\cdot\mathbf{b} = \|\mathbf{a}\|\|\mathbf{b}\|\cos\theta = \|\mathbf{b}\|\|\mathbf{a}\|\cos\theta = \mathbf{b}\cdot\mathbf{a}$

**Distributive over vector addition:**

- $\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}.$

**Bilinear:**

- $\mathbf{a} \cdot (r\mathbf{b} + \mathbf{c}) = r(\mathbf{a} \cdot \mathbf{b}) + (\mathbf{a} \cdot \mathbf{c}).$

**Scalar multiplication:**

- $(c_1\mathbf{a}) \cdot (c_2\mathbf{b}) = c_1 c_2 (\mathbf{a} \cdot \mathbf{b})$

**Orthogonal:**

- Two non-zero vectors **a** and **b** are *orthogonal* if and only if **a** ⋅ **b** = 0.

**No cancellation:**

- Unlike multiplication of ordinary numbers, where if *ab* = *ac*, then *b* always equals *c* unless *a* is zero, the dot product does not obey the cancellation law:
- If **a** ⋅ **b** = **a** ⋅ **c** and **a** ≠ **0**, then we can write **a** ⋅ (**b** − **c**) = 0 by the distributive law; the result above says this just means that **a** is perpendicular to (**b** − **c**), which still allows (**b** − **c**) ≠ **0**, and therefore **b** ≠ **c**.

**Derivative:**

- If **a** and **b** are differentiable vector-valued functions, then the derivative (denoted by a prime ′) of **a** ⋅ **b** is **a**′ ⋅ **b** + **a** ⋅ **b**′.
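The failure of cancellation is easy to exhibit with concrete (hypothetical) vectors; a minimal sketch:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [1, 0, 0]
b = [5, 2, 0]
c = [5, -7, 3]  # differs from b only in directions orthogonal to a

print(dot(a, b) == dot(a, c))  # → True: a.b equals a.c ...
print(b == c)                  # → False: ... even though b != c
# a is perpendicular to b - c, as the distributive-law argument predicts:
print(dot(a, [bi - ci for bi, ci in zip(b, c)]))  # → 0
```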

### Application to the cosine law

Given two vectors **a** and **b** separated by angle *θ*, they form a triangle with a third side **c** = **a** − **b**. The dot product of this with itself is:

- $$

\begin{align} \mathbf{c}\cdot\mathbf{c} & = (\mathbf{a}-\mathbf{b})\cdot(\mathbf{a}-\mathbf{b}) \\

& =\mathbf{a}\cdot\mathbf{a} - \mathbf{a}\cdot\mathbf{b} - \mathbf{b}\cdot\mathbf{a} + \mathbf{b}\cdot\mathbf{b}\\ & = a^2 - \mathbf{a}\cdot\mathbf{b} - \mathbf{a}\cdot\mathbf{b} + b^2\\ & = a^2 - 2\mathbf{a}\cdot\mathbf{b} + b^2\\ c^2 & = a^2 + b^2 - 2ab\cos \theta\\

\end{align}

which is the law of cosines.
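The derivation can be checked numerically; a sketch with hypothetical side vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

a = [3.0, 1.0]
b = [1.0, 2.0]
c = [ai - bi for ai, bi in zip(a, b)]  # third side: c = a - b

theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
lhs = norm(c) ** 2
rhs = norm(a) ** 2 + norm(b) ** 2 - 2 * norm(a) * norm(b) * math.cos(theta)
print(abs(lhs - rhs) < 1e-9)  # → True: c^2 = a^2 + b^2 - 2ab cos(theta)
```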

## Triple product expansion

This is a very useful identity (also known as **Lagrange's formula**) involving the dot- and cross-products. It is written as:^{[1]}^{[2]}

- $\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$

which is easier to remember as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in physics.
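The "BAC minus CAB" identity can be verified for sample vectors; a minimal sketch with the cross product written out by hand:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(u, v):
    # Standard 3-D cross product
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 10]

lhs = cross(a, cross(b, c))
# "BAC minus CAB": b(a.c) - c(a.b)
rhs = [bi * dot(a, c) - ci * dot(a, b) for bi, ci in zip(b, c)]
print(lhs == rhs)  # → True
```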

## Physics

In physics, vector magnitude is a scalar in the physical sense, i.e. a physical quantity independent of the coordinate system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense: it is given by the same formula in every coordinate system.
Examples include:^{[4]}^{[5]}

- Mechanical work is the dot product of force and displacement vectors.
- Magnetic flux is the dot product of the magnetic field and the area vectors.
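For instance, mechanical work can be computed directly as a dot product; the force and displacement below are hypothetical:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

F = [10.0, 0.0, 5.0]  # hypothetical constant force, in newtons
d = [2.0, 1.0, 0.0]   # hypothetical displacement, in metres

# Only the force component along the displacement does work:
print(dot(F, d))  # → 20.0 (work in joules)
```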

## Generalizations

### Complex vectors

For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance the dot product of a vector with itself would be an arbitrary complex number, and could be zero without the vector being the zero vector (such vectors are called isotropic); this in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the scalar product, through the alternative definition^{[1]}

- $\mathbf{a}\cdot\mathbf{b} = \sum_i a_i \overline{b_i}$

where $\overline{b_i}$ is the complex conjugate of *b_i*. Then the scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, this scalar product is sesquilinear rather than bilinear: it is conjugate linear, not linear, in **b**, and the scalar product is not symmetric, since

- $\mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}}.$

The angle between two complex vectors is then given by

- $\cos\theta = \frac{\operatorname{Re}(\mathbf{a}\cdot\mathbf{b})}{\|\mathbf{a}\|\,\|\mathbf{b}\|}.$

This type of scalar product is nevertheless useful, and leads to the notions of Hermitian form and of general inner product spaces.
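A sketch of this complex scalar product, illustrating both the conjugate symmetry and the real, non-negative self-product (the sample vectors are hypothetical):

```python
def cdot(a, b):
    # Complex scalar product: sum of a_i times the conjugate of b_i
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 2j, 3 - 1j]
b = [2 - 1j, 1j]

# Not symmetric: a.b is the complex conjugate of b.a
print(cdot(a, b) == cdot(b, a).conjugate())  # → True

# A vector with itself gives a non-negative real number:
print(cdot(a, a))  # → (15+0j)
```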

### Inner product

The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$. It is usually denoted by $\langle\mathbf{a}\, , \mathbf{b}\rangle$.

The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite.

### Functions

Vectors have a discrete number of entries, that is, a correspondence between natural-number indices and the entries.

A function *f*(*x*) is the continuous analogue: an uncountably infinite number of entries where the correspondence is between the variable *x* and value *f*(*x*) (see domain of a function for details).

Just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval. For example, the inner product of two real continuous functions *u*(*x*), *v*(*x*) may be defined on the interval *a* ≤ *x* ≤ *b* (also denoted [*a*, *b*]):^{[1]}

- $(u, v) \equiv \langle u, v \rangle = \int_a^b u(x)v(x)\,dx$

This can be generalized to complex functions ψ(*x*) and χ(*x*), by analogy with the complex inner product above:^{[1]}

- $(\psi, \chi) \equiv \langle \psi, \chi \rangle = \int_a^b \psi(x)\overline{\chi(x)}\,dx.$
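The integral can be approximated numerically; a sketch using a simple midpoint rule (the `inner` helper and its parameters are illustrative, not a standard API):

```python
import math

def inner(u, v, a, b, n=10_000):
    # Midpoint-rule approximation of the integral of u(x) v(x) over [a, b];
    # a sketch of the L2 inner product, not a production integrator.
    h = (b - a) / n
    return sum(u(a + (k + 0.5) * h) * v(a + (k + 0.5) * h) for k in range(n)) * h

# sin and cos are orthogonal on [0, 2*pi]:
print(abs(inner(math.sin, math.cos, 0.0, 2 * math.pi)) < 1e-9)  # → True
```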

### Weight function

Inner products can have a weight function, i.e. a function which weights each term of the inner product with a value. For real functions this takes the form $\langle u, v \rangle = \int_a^b w(x)u(x)v(x)\,dx$ for a weight function *w*(*x*) > 0.

### Dyadics and matrices

Matrices have the Frobenius inner product, which is analogous to the vector inner product. It is defined as the sum of the products of the corresponding components of two matrices **A** and **B** having the same size:

- $\mathbf{A}:\mathbf{B} = \sum_i\sum_j A_{ij}\overline{B_{ij}} = \operatorname{tr}(\mathbf{B}^* \mathbf{A}) = \operatorname{tr}(\mathbf{A} \mathbf{B}^*).$
- $\mathbf{A}:\mathbf{B} = \sum_i\sum_j A_{ij}B_{ij} = \operatorname{tr}(\mathbf{A}^\mathrm{T} \mathbf{B}) = \operatorname{tr}(\mathbf{A} \mathbf{B}^\mathrm{T})$ (for real matrices).
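A sketch of the real-matrix case, checking the entrywise sum against the trace formulation (the matrices are hypothetical, and the matrix helpers are written out by hand):

```python
def frobenius(A, B):
    # Sum of products of corresponding entries (real matrices)
    return sum(A[i][j] * B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(frobenius(A, B))                 # → 70
print(trace(matmul(transpose(A), B)))  # → 70, the tr(A^T B) formulation
```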

Dyadics have a dot product and "double" dot product defined on them, see Dyadics (Product of dyadic and dyadic) for their definitions.

### Tensors

The inner product between a tensor of order *n* and a tensor of order *m* is a tensor of order *n* + *m* − 2, see tensor contraction for details.
