
UME::SIMD Tutorial 3: Getting back to scalar world (LOAD/STORE)

We already discussed two steps of computations using UME::SIMD: initializing vectors and performing calculations. But what can we do with vectors once we have finished our calculations? Unfortunately, most standard functions and existing libraries don’t use the UME::SIMD types, so it is necessary to somehow get back to the scalar world.

One method of doing so, presented in the previous tutorial, is the horizontal reduction. Since horizontal reductions return a scalar type, it is possible to use the returned scalars as inputs to any already defined function.

#include <umesimd/UMESimd.h>

float foo(float x) {
    ... // Do something with x
}

...

int main() {
    UME::SIMD::SIMDVec<float, 4> a, b, c;
    ...
    a = b * c;
    float t0 = a.hmul(); // Horizontal multiplication
    foo(t0);
    return 0;
}

While extremely useful in certain situations, horizontal reductions are not always the proper solution. Very often we want to perform only vertical operations and store the results into some memory array. Here is how to do it:

#include <umesimd/UMESimd.h>

float foo(float *x) {
    ... // Do something with an array of x
}

...

int main() {
    UME::SIMD::SIMDVec<float, 4> a, b, c;
    float temp[4];
    ...
    a = b * c;

    // STORE the values from the vector
    // in consecutive fields of 'temp'
    a.store(temp);

    // pass 'temp' to function
    foo(temp);
    return 0;
}

In the most common scenario, the workflow with SIMD types will be as follows:

  1. LOAD values from memory locations to UME::SIMD vectors
  2. perform calculations using UME::SIMD vectors
  3. STORE values from UME::SIMD vectors to memory locations.
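
Here is a minimal sketch of this three-step workflow (the array contents are arbitrary example values):

#include <umesimd/UMESimd.h>

int main() {
    float b[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float c[4] = {5.0f, 6.0f, 7.0f, 8.0f};
    float result[4];

    UME::SIMD::SIMDVec<float, 4> a_vec, b_vec, c_vec;
    b_vec.load(b);          // 1. LOAD values from memory
    c_vec.load(c);
    a_vec = b_vec * c_vec;  // 2. perform calculations
    a_vec.store(result);    // 3. STORE values back to memory
    return 0;
}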

Remember that storing values from a vector doesn’t destroy the values within the vector, so you can still use the vector for some other calculations.

PERFORMANCE HINT: once values are loaded into a vector, perform as many calculations as possible before storing the results back. LOAD/STORE operations move data between memory and registers, and can introduce significant latency into your computational kernels.

In C++, the load and store operations for scalars are hidden from the programmer behind the array indexing operator[]. Most modern compilers perform additional deductions, eliding the load/store (MOV in x86) operations when possible. Since the compiler doesn’t have any knowledge of UME::SIMD types, the burden of deciding when these operations should happen rests on you, the user.

UME::SIMD Tutorial 2: Calculations

Horizontal vs. Vertical vectorization

Once the SIMD vectors are initialized with data, they can be operated upon. When operating on vectors, we need to distinguish two directions of operations: horizontal and vertical. The easiest way to understand the difference is to look at the following diagram:

[Diagram: horizontal vs. vertical vectorization]

In the diagram we can distinguish two directions, relating to either data orientation or instruction orientation. Vertical vectorization is the process of operating elementwise on the elements of each vector. For the first ADD operation, the result will be as if we executed the following list of operations:

c0=a0+b0;
c1=a1+b1;
c2=a2+b2;
c3=a3+b3;

Mind that vertical operations can also operate on a single vector, as the SQRT operation does:

f0=sqrt(e0);
f1=sqrt(e1);
f2=sqrt(e2);
f3=sqrt(e3);

A horizontal operation, or a reduction, is one that applies an operator between the elements of a vector. From the example above this would be the HADD operation, which has the same meaning as:

x=f0+f1+f2+f3;

Another way of remembering the distinction between horizontal and vertical operations is as follows: a vertical operation always returns a vector, while a horizontal one always returns a scalar. By convention, names of horizontal operations are prefixed with the letter H, as in HADD.

Invoking operations in UME::SIMD

When operating on vectors, simply use overloaded operators, as if you were working with scalars:

UME::SIMD::SIMDVec<float, 4> a, b, c, d, e, f;
...
c = a + b;
e = c * d;

For additional operations, the ones that don’t have an equivalent C++ operator, you can use either the special namespace UME::SIMD::FUNCTIONS or the Member Function Interface (MFI) invocation convention:

f = UME::SIMD::FUNCTIONS::sqrt(e); // or
f = e.sqrt(); // MFI call

The MFI invocation convention is especially useful when using UME::SIMD as a code generation target, as it allows invoking every operation supported by the library in the same way. For regular development I advise using operators and functions, as this results in better code readability.

Invoking a horizontal operation cannot be done using operator syntax, as the concept of a reduction operation is not present in the current C++ language. You therefore have to use either the FUNCTIONS namespace or MFI:

float x = UME::SIMD::FUNCTIONS::hadd(f); // or
float x = f.hadd(); // MFI call

Which syntax convention to use is up to you.
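
Putting both kinds of operations together, here is a minimal dot-product sketch using the interfaces shown above (the element values are arbitrary):

#include <umesimd/UMESimd.h>
#include <iostream>

int main() {
    UME::SIMD::SIMDVec<float, 4> a(1.0f, 2.0f, 3.0f, 4.0f);
    UME::SIMD::SIMDVec<float, 4> b(5.0f, 6.0f, 7.0f, 8.0f);

    UME::SIMD::SIMDVec<float, 4> c = a * b; // vertical: elementwise MUL
    float dot = c.hadd();                   // horizontal: ADD reduction

    std::cout << dot << std::endl;          // prints 70
    return 0;
}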

UME::SIMD Tutorial 1: Vector declaration and basic initialization

The programming model used in UME::SIMD is very simple: instead of using scalar variables, use vector variables. A simple vector declaration can look like:

UME::SIMD::SIMDVec<float, 8> x_vec;

In the above declaration, two template parameters have to be passed: the fundamental type used to represent each element (float) and the number of elements packed in the vector (8). The supported fundamental types are:

  • unsigned integer: uint8_t (8b), uint16_t (16b), uint32_t (32b) and uint64_t (64b);
  • signed integer: int8_t (8b), int16_t (16b), int32_t (32b) and int64_t (64b);
  • floating point: float (32b) and double (64b).

For the vector length two rules apply:

  • the vector length is a power of 2, starting with 1,
  • the total size of a vector does not exceed 1024b.

But what is the reason for these rules? Both limitations come from hardware constraints. At the hardware level it only makes sense to have registers whose length is a power of 2, as supporting arbitrary vector sizes would require additional die surface. At the same time, the hardware limit is placed on the number of bits rather than the number of elements: a vector of 32 64-bit elements would occupy 2048-bit registers, while a vector of 32 8-bit elements would only use 256-bit registers. Unfortunately this means that in the current model we can operate on up to 128-element uint8_t vectors, but only on 16-element uint64_t vectors.
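
For illustration, here is a minimal sketch based on the rules above; both of the following declarations hit the 1024b limit exactly:

#include <umesimd/UMESimd.h>
#include <cstdint>

int main() {
    UME::SIMD::SIMDVec<uint8_t, 128> a; // 128 x 8b = 1024b
    UME::SIMD::SIMDVec<uint64_t, 16> b; // 16 x 64b = 1024b
    return 0;
}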

Once a vector is declared, it can be used in a similar way to any fundamental type… with some exceptions. The first problem is: how do we put actual data into this vector type?

Initialization from scalars

All vector elements can be initialized with the same value, taken from a scalar variable or a constant. To initialize the vector with a scalar (or a constant), it is enough to write something like:

UME::SIMD::SIMDVec<float, 8> x_vec1(3.14f);
UME::SIMD::SIMDVec<float, 8> x_vec2=3.14f;

float y=2.71f;

UME::SIMD::SIMDVec<float, 8> y_vec1(y);
UME::SIMD::SIMDVec<float, 8> y_vec2=y;

If the initialization has to take place after the vector declaration, it is enough to use the assignment operator (‘=’):

UME::SIMD::SIMDVec<float, 8> x_vec1, x_vec2;
x_vec1=3.14f;
float x=2.71f;
x_vec2=x;

The broadcast initialization is not always sufficient, as we might want a different initial value in every vector cell. You can initialize a vector using multiple scalar variables in the following way:

float x1=1.0f, x2=2.0f, x3=3.0f, x4=4.0f;
UME::SIMD::SIMDVec<float, 4> x_vec(x1, x2, x3, x4);

Initialization from memory

In many situations we don’t want to simply propagate a single value to a vector register. If the initialization data is stored in an array in memory, it is possible to load the values into the vector register:

float x[4]={1.0f, 2.0f, 3.0f, 4.0f};
// Load-initialize with 'x':
UME::SIMD::SIMDVec<float, 4> x_vec1(x);
// Load from 'x' after vector declaration
UME::SIMD::SIMDVec<float, 4> x_vec2;
x_vec2.load(x);

Loading is important, as it allows the use of memory pointers instead of scalar variables. This initialization mode is especially important for long vectors (imagine initializing a long SIMDVec element by element with scalars…).
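
For illustration, here is a minimal sketch in which a single load call replaces what would otherwise be a 16-argument constructor (the array contents are arbitrary):

#include <umesimd/UMESimd.h>

int main() {
    float data[16];
    // Fill the array with some example values
    for (int i = 0; i < 16; i++) data[i] = (float)i;

    UME::SIMD::SIMDVec<float, 16> x_vec;
    x_vec.load(data); // one call instead of 16 scalar arguments
    return 0;
}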

In the next tutorial we will look into some basic computations that can be performed using vector primitives.

UME::SIMD Tutorial 0: Installation

The library is provided in header-only form, which makes its installation trivial. We will present installation procedures for both Linux and Windows operating systems. Mind that, as there is no perfectly portable build system, we limit ourselves to the most common tools.

This installation procedure shows only the steps required to install UME::SIMD as a standalone library, but we make some small adjustments so that switching to the full UME framework remains possible in the future.

Linux installation procedure

The library in its primitive form doesn’t require any build system. The only configuration needed is passing proper compilation flags to the compiler. The specific flags depend on the hardware, the compiler and the operating system. We will discuss specific configurations in future tutorials in this series.

This installation procedure should work for most Linux-based systems and requires a Git client (https://git-scm.com/).

    1. Navigate to the directory where you want to store the current version of the library and create a directory for the framework:
$ mkdir ume
$ cd ume
    2. Clone the repository:
$ git clone https://github.com/edanor/umesimd.git
    3. You might want to check out a tagged version to keep track of library updates:
$ cd umesimd
$ git checkout tags/v0.8.1
$ cd ..
    4. Export the library directory:
$ pwd
/home/ume
$ export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/ume
    5. Include the library code into your C++ project. You should now be able to include the library into your program using the following directive:

#include <umesimd/UMESimd.h>
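
To quickly verify the setup, you can compile a minimal test program. This is just a sketch; the exact compiler invocation and flags depend on your compiler and hardware (the -std=c++11 flag below is an assumption about the required language level):

#include <umesimd/UMESimd.h>
#include <iostream>

int main() {
    UME::SIMD::SIMDVec<float, 4> a(1.0f), b(2.0f);
    UME::SIMD::SIMDVec<float, 4> c = a + b;
    std::cout << c.hadd() << std::endl; // expected output: 12
    return 0;
}

$ g++ -std=c++11 main.cpp -o test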

That’s it!

Windows installation procedure

The Windows installation procedure is similar, but requires additional steps to configure the specific IDE. We will discuss this configuration for MS Visual Studio 2015 only, but similar rules apply to others.

    1. Navigate to the directory where you want to store the current version of the library and create a directory for the framework:
c:\> mkdir ume
c:\> cd ume
    2. Clone the repository:
c:\ume> git clone https://github.com/edanor/umesimd.git
    3. You might want to check out a tagged version to keep track of library updates:
c:\ume> cd umesimd
c:\ume\umesimd> git checkout tags/v0.8.1
c:\ume\umesimd> cd ..
    4. Create a new project (or open an existing one):

[Ctrl+Shift+N] or File->New->Project…

[Screenshot: Create_project]

    5. Add a new source file to the project:

[Ctrl+Shift+A] or Project->Add New Item…

[Screenshot: Add_main_file]

    6. Fill the main.cpp with:
#include <umesimd/UMESimd.h>

int main()
{
    return 0;
}
    7. Open the project properties and configure the path to the ‘ume’ directory created before:

[Alt+Enter] or Project->Properties

[Screenshot: Add_ume_path.png]

There you go!

Vectorization #1: the difficulty of notation, part 2

In the previous post we showed a simple example of a vectorization process. One thing we did there was to assume a certain behaviour of the \cdot operation. Before getting to a more complicated vectorization example, we have to discuss this previous omission, as it is critical to understanding the more complex example.

We will also discuss the problem of operator symbol ambiguity and why we actually have to use multiple operations with different symbols.

Operations definition (aka. operator overloading)

Before we try applying a second round of vectorization to our previous equations, we have to discuss one more thing: the definition of operations. (Remember my comment about the \circ vs. \cdot symbols in the previous post?) Here’s the explanation.

If we have an equation of the following form:

a = b_1 \cdot c_1 + b_2 \cdot c_2 + ... + b_N \cdot c_N

and we want to represent it in a shorter, vector form, we could write:

\textbf{b} = (b_1, b_2, ..., b_N)
\textbf{c} = (c_1, c_2, ... , c_N)

a = \textbf{b} * \textbf{c}

We know this simplification as the vector dot product. In mathematics it is usually denoted with the ‘\cdot ’ symbol.

On the other hand, we might face a set of equations, such as:

a_1 = b_1 \cdot c_1
a_2 = b_2 \cdot c_2
...
a_N = b_N \cdot c_N

In this case, our best choice would be to re-write it as:

\textbf{a} = (a_1, a_2, ..., a_N)
\textbf{b} = (b_1, b_2, ..., b_N)
\textbf{c} = (c_1, c_2, ..., c_N)

\textbf{a} = \textbf{b} * \textbf{c}

This element-wise multiplication is known as the Hadamard product and, in commonly accepted mathematical notation, should be written as:

\textbf{a} = \textbf{b} \circ \textbf{c}
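
With arbitrary example values, the two products differ as follows:

\textbf{b} = (1, 2), \textbf{c} = (3, 4)
\textbf{b} \cdot \textbf{c} = 1 \cdot 3 + 2 \cdot 4 = 11 (dot product: a scalar)
\textbf{b} \circ \textbf{c} = (1 \cdot 3, 2 \cdot 4) = (3, 8) (Hadamard product: a vector)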

On one hand, we want simpler notations, which boils down to a smaller number of different symbols and variable types. On the other hand, we would like the notation to be as readable as possible, so we increase the number of types and symbols. In programming languages we are free to define as many ‘functions’ as we wish to describe the behavior of the operations we will be performing. But having too many functions requires programmers to memorize many of them to navigate the code efficiently.

What, then, is the practical limitation that prohibits us from re-using operation symbols? If we used the same symbol for both operations, the only way we could tell one operation from the other would be to look at the type of the result: in the first case it is a scalar, in the second a vector. This would be possible, but with more complex equations it can make our notation considerably more complicated.

First let’s define the ‘+’ operation in the following way:

\textbf{a} + \textbf{b} = (a_1 + b_1, a_2 + b_2, ..., a_N + b_N)
a + \textbf{b} = a + b_1 + b_2 + ... + b_N
\textbf{a} + b = a_1 + a_2 + ... + a_N + b
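
For example, taking \textbf{a} = (1, 2), \textbf{b} = (3, 4) and the scalar 5 (arbitrary values), the three variants evaluate to:

\textbf{a} + \textbf{b} = (1 + 3, 2 + 4) = (4, 6)
5 + \textbf{b} = 5 + 3 + 4 = 12
\textbf{a} + 5 = 1 + 2 + 5 = 8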

Example of an expression to be evaluated:

\textbf{a} = \textbf{b} * \textbf{c} + \textbf{d} * \textbf{e}

We now have to work it backwards, starting from the type of the result:

  1. \textbf{a} is a vector, so according to regular precedence rules, the result of the ‘+’ operation has to be a vector.
  2. The result of \textbf{b} * \textbf{c} + \textbf{d} * \textbf{e} is a vector only if both \textbf{b} * \textbf{c} and \textbf{d} * \textbf{e} are vectors.
  3. Both ‘* ’ operations result in a vector only when the requested operation is the Hadamard product.

So in the example above, the meaning of * is that of the mathematical \circ .

What about the following expression, in which the only difference is that a is a scalar:

a = \textbf{b} * \textbf{c} + \textbf{d} * \textbf{e}

Again working it backwards:

  1. a is a scalar in multiple situations:
    1. the result of \textbf{b} * \textbf{c} is a scalar and \textbf{d} * \textbf{e} is a vector
    2. the result of \textbf{b} * \textbf{c} is a vector and \textbf{d} * \textbf{e} is a scalar
    3. both results of \textbf{b} * \textbf{c} and \textbf{d} * \textbf{e} are scalars
  2. Since \textbf{b} * \textbf{c} and \textbf{d} * \textbf{e} cannot both be vectors at the same time, the result of \textbf{b} * \textbf{c} is a vector only when the result of \textbf{d} * \textbf{e} is a scalar.
  3. Since \textbf{b} * \textbf{c} and \textbf{d} * \textbf{e} cannot both be vectors at the same time, the result of \textbf{d} * \textbf{e} is a vector only when the result of \textbf{b} * \textbf{c} is a scalar.

At this moment our deduction reveals a phenomenon known in programming as symbol ambiguity. This phenomenon is a nightmare for both programming language and compiler designers. Between points 2 and 3 there exists an ambiguity: we cannot deduce anything about the result type of one operand without knowing the result type of the other. Without an additional rule saying how such a conflict should be resolved, we simply cannot deduce anything about the ‘* ’ operations! We could modify our rule for +, saying for example: ‘if a + operator is supposed to result in a scalar, assume both of its operands to result in scalars’. If such a rule held, then we could immediately say that the meaning of * would be that of the dot product ‘\cdot ’. The definition of:

\textbf{a} + \textbf{b} = (a_1 + b_1, a_2 + b_2, ..., a_N + b_N)

would immediately become:

\textbf{a} + \textbf{b}  -> vector: \textbf{a} + \textbf{b} = (a_1 + b_1, a_2 + b_2, ..., a_N + b_N)
\textbf{a} + \textbf{b}  -> scalar: \textbf{a} + \textbf{b} = a_1 + a_2 + ... + a_N + b_1 + b_2 + ... + b_N

Multiplying such rules is not very welcome, as it can lead to an ambiguity avalanche and, as a result, an explosion in the description of our notation. When an ambiguity happens, the simplest solution is to use a separate symbol with its own definition.

The continuation in: Vectorization #1: the difficulty of notation, part 3

Vectorization #1: the difficulty of notation, part 1

In this series of posts I will look at different aspects of vectorization. We will start today with scalar/vector/matrix notations, and try to explain the benefits and complications of using them. I won’t be talking much about performance or programming in the strict sense; rather, I will try to explain the problem of finding a fair notation. The mathematics presented here is simplified so that non-mathematicians can also follow the thought process without (too) much effort, although some knowledge of real vector/matrix algebra might be of use.

Vectorizing scalar equations

The term vector originates from mathematics. Say you have a group of equations, like:

a_1 = b_1 + c_1 \cdot d_1
a_2 = b_2 + c_2 \cdot d_2
...
a_N = b_N + c_N \cdot d_N

All of the variables a_1, b_1, c_1, d_1, a_2, b_2, c_2, d_2, ..., a_N, b_N, c_N, d_N are used in equations of the same exact form:

\alpha = \beta + \gamma \cdot \delta

In order to simplify the first notation, you can group the variables occupying the same position in each equation, forming vectors:

\textbf{a} = (a_1, a_2, ..., a_N)
\textbf{b} = (b_1, b_2, ..., b_N)
\textbf{c} = (c_1, c_2, ..., c_N)
\textbf{d} = (d_1, d_2, ..., d_N)

Now you can replace the original set of equations with a vector form (a cautious mathematician may now point out that ‘\cdot ’ is not a correct vector operation symbol here, and that we should use \circ instead. This is part of the thought experiment. Please bear with me in this insanity for a while):

\textbf{a} = \textbf{b} + \textbf{c} \cdot \textbf{d}

An entity representing a singular value, such as a_1, is called a scalar. A list (or, more precisely, an ordered set) of elements, such as \textbf{a}, is called a vector.
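
As a concrete instance with arbitrary values, take N = 2, \textbf{b} = (1, 2), \textbf{c} = (3, 4) and \textbf{d} = (5, 6); reading ‘\cdot ’ elementwise, the vector equation unfolds to:

\textbf{a} = (1, 2) + (3 \cdot 5, 4 \cdot 6) = (16, 26)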

By vectorizing our equations, we replaced an unknown number of equations (N), each with a known number of variables (a_i, b_i, c_i, d_i), with a known number of equations (1) operating on an unknown number of packed variables (a_1, ..., a_N, b_1, ..., b_N, c_1, ..., c_N, d_1, ..., d_N).

The obvious gain we get from vector notation is the simplification of mathematical proofs. Instead of repeating each time that a certain property applies to the equations for a_1, a_2, ..., a_N, we can simply state that the property applies to all elements of \textbf{a}. We still have a more complicated definition of the types we manipulate, but we usually have to deal with that only at the beginning and possibly at the end of a proof.

Now imagine the following set of equations:

a_{11} = 3 \cdot b_{11} + 4 \cdot c_{11} + 5 \cdot d_{11}
a_{12} = 3 \cdot b_{12} + 4 \cdot c_{12} + 5 \cdot d_{12}
...
a_{1M} = 3 \cdot b_{1M} + 4 \cdot c_{1M} + 5 \cdot d_{1M}
a_{21} = 1 \cdot b_{21} + 5 \cdot c_{21} + 4 \cdot d_{21}
a_{22} = 1 \cdot b_{22} + 5 \cdot c_{22} + 4 \cdot d_{22}
...
a_{2M} = 1 \cdot b_{2M} + 5 \cdot c_{2M} + 4 \cdot d_{2M}
...
a_{N1} = 7 \cdot b_{N1} + 3 \cdot c_{N1} + 9 \cdot d_{N1}
a_{N2} = 7 \cdot b_{N2} + 3 \cdot c_{N2} + 9 \cdot d_{N2}
...
a_{NM} = 7 \cdot b_{NM} + 3 \cdot c_{NM} + 9 \cdot d_{NM}

If we apply our vector notation then we have:

\textbf{a}_1 = (a_{11}, a_{12}, ..., a_{1M})
\textbf{a}_2 = (a_{21}, a_{22}, ..., a_{2M})
...
\textbf{a}_N = (a_{N1}, a_{N2}, ..., a_{NM})

\textbf{b}_1 = (b_{11}, b_{12}, ..., b_{1M})
\textbf{b}_2 = (b_{21}, b_{22}, ..., b_{2M})
...
\textbf{b}_N = (b_{N1}, b_{N2}, ..., b_{NM})

\textbf{c}_1 = (c_{11}, c_{12}, ..., c_{1M})
\textbf{c}_2 = (c_{21}, c_{22}, ..., c_{2M})
...
\textbf{c}_N = (c_{N1}, c_{N2}, ..., c_{NM})

\textbf{d}_1 = (d_{11}, d_{12}, ..., d_{1M})
\textbf{d}_2 = (d_{21}, d_{22}, ..., d_{2M})
...
\textbf{d}_N = (d_{N1}, d_{N2}, ..., d_{NM})

And the re-written equations:

\textbf{a}_1 = 3 \cdot \textbf{b}_1 + 4 \cdot \textbf{c}_1 + 5 \cdot \textbf{d}_1
\textbf{a}_2 = 1 \cdot \textbf{b}_2 + 5 \cdot \textbf{c}_2 + 4 \cdot \textbf{d}_2
...
\textbf{a}_N = 7 \cdot \textbf{b}_N + 3 \cdot \textbf{c}_N + 9 \cdot \textbf{d}_N

So we replaced a long set of equations with a shorter one. But again, if we started performing proofs on such equations, we would end up repeating everything, as in the first example.

We can also think about another way of representing the same set of equations. Instead of grouping all the a ‘s together and all the b ‘s together, we could use the following grouping:

\textbf{p}_1 = (3, 4, 5)
\textbf{p}_2 = (1, 5, 4)
...
\textbf{p}_N = (7, 3, 9)

\textbf{q}_{11} = (b_{11}, c_{11}, d_{11})
\textbf{q}_{12} = (b_{12}, c_{12}, d_{12})
...
\textbf{q}_{NM} = (b_{NM}, c_{NM}, d_{NM})

Our second vector form then becomes:

a_{11} = \textbf{p}_1 \cdot \textbf{q}_{11}
a_{12} = \textbf{p}_1 \cdot \textbf{q}_{12}
...
a_{1M} = \textbf{p}_1 \cdot \textbf{q}_{1M}
a_{21} = \textbf{p}_2 \cdot \textbf{q}_{21}
a_{22} = \textbf{p}_2 \cdot \textbf{q}_{22}
...
a_{2M} = \textbf{p}_2 \cdot \textbf{q}_{2M}
...
a_{N1} = \textbf{p}_N \cdot \textbf{q}_{N1}
a_{N2} = \textbf{p}_N \cdot \textbf{q}_{N2}
...
a_{NM} = \textbf{p}_N \cdot \textbf{q}_{NM}

While in this case we didn’t decrease the number of equations, we made each of them much simpler. As with the previous transformation, we could vectorize again and get a yet simpler notation for our set of equations. We will do that in a moment. But first we need to clarify something about the ‘\cdot ’ symbol we’ve been using carelessly so far.

The continuation in: Vectorization #1: the difficulty of notation, part 2