In the previous post we showed a simple example of the vectorization process. One thing we did there was to assume a certain behaviour of the operations involved. Before getting to a more complicated vectorization example, we have to discuss this earlier omission, as it will be critical to understanding the more complex example.

We will also discuss the problem of operator-symbol ambiguity and why we actually have to use multiple operations with different symbols.

## Operation definitions (a.k.a. operator overloading)

Before we try applying a second round of vectorization to our previous equations, we have to discuss one more thing: the **definition of operations**. (Remember my comment about the '·' vs. '∘' symbols in the previous post?) Here's the explanation.

If we have an equation of the following form:

$$y = a_1 x_1 + a_2 x_2 + \dots + a_n x_n$$

and we want to represent it in a shorter, vector form, we could write:

$$y = \mathbf{a} \cdot \mathbf{x}$$

We know this simplification as the **vector dot product.** In mathematics it is usually denoted with the '·' symbol.

On the other hand, we might face a set of equations, such as:

$$y_1 = a_1 x_1, \quad y_2 = a_2 x_2, \quad \dots, \quad y_n = a_n x_n$$

in which case our best possible choice could be to rewrite it as:

$$\mathbf{y} = \mathbf{a} \circ \mathbf{x}$$

This element-wise multiplication is known as the **Hadamard product**, and in commonly accepted mathematical notation it is written with the '∘' symbol.
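To make the contrast concrete, here is a minimal Python sketch of the two products (the names `dot` and `hadamard` are mine, purely for illustration): identical operand types, two different operations, two different result types.

```python
def dot(a, x):
    """Dot product: a_1*x_1 + ... + a_n*x_n, collapsing two vectors to a scalar."""
    assert len(a) == len(x)
    return sum(ai * xi for ai, xi in zip(a, x))

def hadamard(a, x):
    """Hadamard product: (a_1*x_1, ..., a_n*x_n), a vector of the same length."""
    assert len(a) == len(x)
    return [ai * xi for ai, xi in zip(a, x)]

a = [1, 2, 3]
x = [4, 5, 6]
print(dot(a, x))       # 32
print(hadamard(a, x))  # [4, 10, 18]
```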

On one hand, we want to have simpler notation, which boils down to a smaller number of distinct symbols and variable types. On the other hand, we would like the notation to be as readable as possible, so we increase the number of types and symbols. In programming languages we are free to define as many 'functions' as we like to describe the behaviour of the operations we will be performing. But having too many functions requires programmers to memorize many of them to navigate the code efficiently.

What, then, is the practical limitation that prohibits us from re-using operation symbols? If we used the same symbol for both operations, the only way we could tell one operation from the other would be to look at the type of the result: in the first case a scalar, in the second a vector. This would be possible, but when dealing with more complex equations it can really complicate our notation.
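Incidentally, this is the route NumPy chose: rather than overloading a single symbol and disambiguating by result type, it gives the element-wise product its own operator (`*`) and the dot/matrix product another (`@`, introduced by PEP 465). A quick illustration:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# '*' is always the element-wise (Hadamard) product for arrays.
had = a * x   # array([ 4., 10., 18.])

# '@' (np.matmul) is a separate symbol for the dot/matrix product;
# for two 1-D arrays it returns a scalar.
dot = a @ x   # 32.0

print(had, dot)
```

With two distinct symbols, the result type follows directly from the operator, so no backward deduction is ever needed.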

First, let's define the '+' operation by the type of its result:

- scalar + scalar → scalar
- scalar + vector → scalar
- vector + scalar → scalar
- vector + vector → vector

An example of an expression to be evaluated, in which each of $a$, $b$, $c$ and $d$ may be either a scalar or a vector:

$$y = a * b + c * d$$

We now have to work it backwards, starting from the type of the result:

- $y$ is a vector, so according to regular precedence the result of the '+' operation has to be a vector.
- The result of '+' is a vector only if both $a * b$ and $c * d$ are vectors.
- Both '*' operations result in a vector only when the requested operation is a Hadamard product.

So in the example above, the meaning of $y = a * b + c * d$ would be that of the mathematical $\mathbf{y} = \mathbf{a} \circ \mathbf{b} + \mathbf{c} \circ \mathbf{d}$.

What about the following expression, in which the only difference is that $y$ is a scalar:

$$y = a * b + c * d$$

Again, working it backwards:

1. $y$ is a scalar in multiple situations:
    - the result of $a * b$ is a scalar and the result of $c * d$ is a vector,
    - the result of $a * b$ is a vector and the result of $c * d$ is a scalar,
    - both results of $a * b$ and $c * d$ are scalars.

2. Since $a * b$ and $c * d$ cannot both be vectors at the same time, the result of $a * b$ is a vector only when the result of $c * d$ is a scalar.
3. Since $a * b$ and $c * d$ cannot both be vectors at the same time, the result of $c * d$ is a vector only when the result of $a * b$ is a scalar.
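The backward deduction above can be mechanized. The toy sketch below is my own illustration: `plus_result` encodes the assumed typing rule for '+' under which vector + vector is the only combination producing a vector, consistent with the deduction steps above. It enumerates every typing of $y = a * b + c * d$ compatible with the type of $y$: a vector $y$ admits exactly one solution, while a scalar $y$ admits three, which is precisely the ambiguity we just derived.

```python
from itertools import product

# Possible result types of the two '*' subexpressions, a*b and c*d.
TYPES = ("scalar", "vector")

def plus_result(left, right):
    # Assumed typing rule for '+': only vector + vector yields a vector;
    # every other combination yields a scalar.
    return "vector" if left == right == "vector" else "scalar"

def consistent_typings(result_type):
    """All (type of a*b, type of c*d) pairs compatible with the type of y."""
    return [(l, r) for l, r in product(TYPES, repeat=2)
            if plus_result(l, r) == result_type]

print(consistent_typings("vector"))  # one pair: '*' must be Hadamard twice
print(consistent_typings("scalar"))  # three pairs: the symbol ambiguity
```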

At this moment our deduction reveals a phenomenon known in programming as **symbol ambiguity**. This phenomenon is a nightmare for both programming-language and compiler designers. Between points 2 and 3 there exists an ambiguity: we cannot deduce anything about the type of the result of one operand without knowing the result of the other operand. Without any additional rule saying how such a conflict should be resolved, we simply cannot deduce anything about the '*' operations! We could extend our rules by saying, for example: *'if a '+' operator is supposed to result in a scalar, assume both of its operands to result in scalars'*. If such a rule held, then we could immediately say that the meaning of $y = a * b + c * d$ would be that of $y = a \cdot b + c \cdot d$. The definition of:

- scalar + scalar → scalar
- scalar + vector → scalar
- vector + scalar → scalar

would immediately become:

- scalar + scalar → scalar
Multiplying such rules is not very welcome, as it can lead to an avalanche of ambiguities and, as a result, to an explosion in the description of our notation. When an ambiguity happens, the simplest solution is to use a separate symbol with its own definition.

Continued in: **Vectorization #1: the difficulty of notation, part 3**