C++ Matrix & Lambda Programming

The purpose of this post is to show and explain a bit more about the practical usage of C++ “lambda power”. This time in developing a light, fully functional, fast, and hopefully not often seen, matrix creation utility, in a functional programming style.

The GitHub gist is here. The previous post, with another very interesting matrix variation, is here.

Before we proceed: I do know about Expression Templates and a few other optimization techniques. Alas, I consider them overkill in this context.

All nicely done for you

You are preparing a functional-programming matrix library; let’s say some matrix operations. And you do know why the core of that library should be something a bit better than T[rows][cols]. Thus I am assuming you need a quick and easy to use, but fully functional, core for your matrix suite. One you can use straight away. The core API that actually keeps the matrices and makes them available.

Usage of my single function API is simple.


// int 3x3 matrix
// static version, compile time
// arguments are: type, rows, columns
constexpr auto matrix_ =
    dbj_mx_make_stack(int, 3, 3);

// int 3x3 matrix
// heap, runtime
auto matrix_ =
    dbj_mx_make_heap(int, 3, 3);

You will need to keep track of those matrix dimensions yourself. Changing a cell value is simple.


// change the value through one function API
matrix_(1, 1) = 42 ;

The same call is used both to change the value of a matrix cell and to get the value of a cell.


// obtain the cell value
auto cell_val = matrix_(1, 1);
assert(cell_val == 42);

The whole API is just one function with two arguments: row and column ordinal. How the matrix is actually created, and whether it lives on the stack or on the heap, you do not need to know. Just use it, as the core of your matrix FP solution.

So, why is “my” solution good? Because it is simple, it is standard C++, and it completely decouples your code from the tiny core that depends on the std lib. This means the two std lib types used can be replaced with equivalents developed “in house”. That, in turn, means this matrix solution can be used in mission-critical, embedded, or IoT runtime environments. But let us first study the obvious alternatives.

The full std:: lib way

Stack means small(er) matrices, with nothing to worry about regarding freeing the memory used by the matrix. Also, one can create and use those 2D arrays at compile time.

Let’s start by using C++ std:: lib types. For the stack-based matrix, we shall use std::array.

Note: coding style in this part is a bit pedestrian. That is deliberate, it helps to understand.


#include <array>
using std::array;

// matrix template (alias)
template< typename T, size_t width, size_t height >
using mtx_stack =  array<  
  array<T, width >
  , height
>; 

// matrix type
using mtxint_3_2 = mtx_stack<int,3,2>;

// matrix instance + initialization
constexpr mtxint_3_2 mx_1{
   array{1,2,3} ,
   array{4,5,6}
};

// row 0 compile time values
static_assert(  mx_1[0][0] == 1 );
static_assert(  mx_1[0][1] == 2 );
static_assert(  mx_1[0][2] == 3 );

// row 1 compile time values
static_assert(  mx_1[1][0] == 4 );
static_assert(  mx_1[1][1] == 5 );
static_assert(  mx_1[1][2] == 6 );

The above is roughly equivalent to int mx_1[2][3]. But here we use std::array, one very handy and simple std type, for generic stack-based 2D arrays of type T. And unlike native arrays, std::array instances can be moved, and can easily be used with standard templates and algorithms.

Heap-based standard solution

The heap is for much larger, run-time matrices. With the added issue of memory fragmentation. And strictly run-time usage.

At first thought, for the heap-based array I decided to use std::unique_ptr&lt;T[]&gt; (standard since C++11), basically a smart pointer to an array. Here is a basic, somewhat complex but workable solution.


#include <memory>
using std::unique_ptr;
/*
  Example -- height : 2, width: 3
  2 rows         3 columns
  +----+     +----+----+----+
  |  0 +---> |  0 |  1 | 2  |
  +----+     +----+----+----+
  |  1 +---> |  0 |  1 | 2  |
  +----+     +----+----+----+
*/
// heap array template (alias)
template<typename T>
using heap_rows = unique_ptr<T[]>;

// heap matrix template (alias)
template< typename T >
using mtx_heap = unique_ptr<
// array of rows
  heap_rows<T>[]
>;

// heap matrix type
using mtxh_int = mtx_heap<int>;

// creation has to be runtime
mtxh_int make_mtxh_int_3_2()
{
    using std::make_unique;

    // make the rows array
    auto rows_arr = make_unique< heap_rows<int>[] >(2 /* height of two rows */);

    // each element of the rows array points to an array of values
    const int width = 3;
    rows_arr[0] = make_unique< int[] >(width);
    rows_arr[1] = make_unique< int[] >(width);
    // unique_ptr can move
    return rows_arr;
}

static auto use_mtxh_int_3_2 = []() {
    mtxh_int mtxh_3_2 = make_mtxh_int_3_2();

    // row 0
    mtxh_3_2[0][0] = 1;
    mtxh_3_2[0][1] = 2;
    mtxh_3_2[0][2] = 3;
    // row 1
    mtxh_3_2[1][0] = 4;
    mtxh_3_2[1][1] = 5;
    mtxh_3_2[1][2] = 6;
    return true;
}();

First to note: I made the wrong choice here. std::vector&lt;T&gt; is a much more elaborate type, but crucially, it gives the whole mechanism the ability to be both copied and moved around. Also, the above can be expanded and made more involved; for example, we could extend the functionality so we can create a heap-based matrix of any size, or build a structure around this core, etc.

The Alternative

In any case, I am sure you will agree, this is not very simple looking code. To use it, you probably just have to believe that I know what I am doing and that this will all work in all sorts of applications you might have.

I somehow do not subscribe to that point of view. Standard C++, 17 and beyond, has rapidly increased in complexity; in my opinion, unwarranted complexity. The two solutions so far just look too complex to me.

Armed with the C++ core language “Lambda Expressions” feature, and two tiny “base” functions, I might be so bold as to present all you need to create and use standard C++ std lib 2D arrays in a very FP fashion.

All std lib usage is nicely hidden (aka encapsulated) and decoupled inside two tiny lambdas. Your code need not know or worry whether it is on the stack or on the heap; beyond, of course, you knowing the unique limitations of both kinds of memory.

Without further cogitations, here is the front of the mechanism:


template< typename T, size_t rows, size_t cols,	typename F >
/* potentially constexpr function */ 
inline constexpr 
/* return type is lambda*/
  auto mx(
/* 
  argument is a callable object that has to generate 
  memory block, big enough to store  T[rows x cols]
  made on stack or on heap
  result has to be movable
  F type has to be moveable and copyable
  and has to have indexing operator '[]'
*/
  F source_ 
)
{
/* return a lambda */
  return[
/* make a call to hand over the array memory 
   and make 'arry' member, available inside lambda
   note that we copy the result to 'arry' with '='
   yet another lambda feature
*/
     arry = source_() 
   ]
/*
  lambda is always to be called with row and col
  of the matrix stored in the member 'arry'
*/
  (size_t row_, size_t col_) 
/* this lambda is potentially compile time 
   and returned type is mutable reference 
   to type T */
   mutable->T& {
   assert(row_ < rows);
   assert(col_ < cols);
/*
we use a single contiguous memory block to
represent the 2D matrix;
to return a single element we use the
row-major formula below
*/
  return arry[row_ * cols + col_];
   };
} // mx

Please take your time going through the code above. It is really packed with a lot of modern C++, showing the “power of lambda”.

What surprises most readers is the capture statement of the lambda above: [ arry = source_() ]. That is legal C++. The lambda capture statement is how users pass the lambda object’s constructor arguments, and name the members to be used inside the lambda.

Thus, above we say: the result of the call source_() will be copied to the member arry upon constructing the lambda object. Watch carefully. The synopsis of the function object generated by the compiler to implement this lambda is approximately this:


// not legal C++
lambda __some_uid_123_ {
   size_t rows_{/* taken from the closure */};
   size_t cols_{/* taken from the closure */};
   // the array memory
   T arry_[]{};
   // constructor
   __some_uid_123_ ()
   {
      __copy_to( arry_, source_() ) ;
   }
   // this is what users see and use as the closure
   T & operator ()
   (size_t row_, size_t col_)
   {
     assert(row_ < rows_);
     assert(col_ < cols_);
     return arry_[row_ * cols_ + col_];
   }
} ;

The key point: when implemented, that lambda represents a so-called “local class”. Let us repeat once more: we return an instance of a local class when returning a lambda from inside a function.


template< typename T, size_t rows, size_t cols, typename F >
inline constexpr auto mx(F source_)
{
 // inner lambda returned
 return[arry = source_()]
  (size_t row_, size_t col_) /*constexpr*/ mutable->T&
 {
  assert(row_ < rows);
  assert(col_ < cols);
  return arry[row_ * cols + col_];
 };
} // mx

And this is the secret sauce. It gives us the greatest degree of encapsulation and decoupling. We use the inner lambda returned. It has access to the closure where it was born; most importantly, to the memory block given to the constructor of the closure where our matrix lambda was born.


// arry is the 1D array memory
// kept inside the closure
// we return a reference to
// one element
return arry[row_ * cols + col_];

Remember, we do return by reference. And the lambda is “mutable”, which is yet another secret sauce.

API Sampling

How do we use this in practice? Provided we all understand why it is done as it is. First, the inevitable macros, which we actually used at the beginning of this post without telling you.


// T is the type, R is rows, C is columns
// pass the function K as the argument to
// mx<T, R, C>
// K is one of the two array memory providers
#define dbj_mx_make(T, R, C, K) mx<T, R, C>(K<T, R * C>)

// use the first macro with the heap arr making function
#define dbj_mx_make_heap(T,R,C) dbj_mx_make(T, R, C, arr_heap)

// use the first macro and use the stack arr making function
#define dbj_mx_make_stack(T,R,C) dbj_mx_make(T, R, C, arr_stack)

As in the gist provided, for the above macros we need two tiny functions, arr_stack and arr_heap, to provide the array memory allocated on the stack or on the heap.


#include <array>
#include <vector>

// make a 1D array on the stack
template< typename T, size_t len_ >
inline constexpr auto arr_stack() { return std::array<T, len_>{}; }

// make a 1D array on the heap
template< typename T, size_t len_ >
inline std::vector<T> arr_heap() { return std::vector<T>(len_); }

Above, we have completely encapsulated the mechanism of making arrays, and thus completely decoupled the solution from that mechanism. Let us talk about this tiny FP API a bit more.

The requirements for the actual types used above are: an indexing (aka [ ]) operator, and the ability to be both moved and copied. Keep that in mind in case you want to replace them with something else you might fancy better.

Functional programming

The type of the lambda is irrelevant; its functionality is what matters. This is a single-function API for both 2D array kinds: stack and heap. Now we can write matrix manipulation functions that are completely decoupled from matrix creation issues.


// receive the matrix 'as a function' by value
inline auto changer = [](
    // pass by value
    auto matrix_, size_t row_, size_t col_, auto value_)
{
    // on the matrix given, change the cell required
    matrix_(row_, col_) = value_;
    // return, again by value
    return matrix_;
};

// print the matrix
inline auto printer = [](
    // pass by value
    auto matrix_, size_t rows_count_, size_t cols_count_)
    -> void {
    using namespace std;
    for (size_t row_ = 0; row_ < rows_count_; row_++) {
        cout << endl << "{";
        for (size_t col_ = 0; col_ < cols_count_; col_++) {
            // here we use the matrix, again through one function call
            cout << setw(3) << matrix_(row_, col_) << " ";
        }
        cout << "}" << endl;
    }
    cout << endl;
};

We simply have a matrix completely encapsulated in a function. The full test, again in an FP style, is at the bottom of the gist already mentioned.

This is one very small but useful utility, showing a lot of the good sides of a mixture of modern C++ and functional programming. All simply enabled by lambdas, aka closures.

Usability

Using this mechanism, we “carry around” our matrices. Wherever we can copy or move the lambda instance, we take along access to the 2D array kept inside it. I think you might have already noticed that.

This gives us the ability to pass these matrices by value and also return them by value. And it all “just works”, thanks to the lambda implementation and to a few key points of this design.

You will quickly find out that, to actually use this API, the FP style is the best fit, as in the examples above. Which does not mean the OOP style is excluded. It is only that this solution is not well suited for OOP, as it is not simple to keep a lambda as a member of a class.

Finally

Several measures can be added to stop abuse, but as it is, this utility can be used in very real, demanding projects. Also, quality alternative implementations of both std::array and std::vector can be found on the net; that is, in case your code is running in mission-critical environments.

So little C++ so much good!
