Saturday, December 20, 2014

C++11 for Math Vector Types

Something about code has rung true ever since I first heard the idea at school: each line of code has a maintenance cost.  Code rarely stays unchanged.

Enter C++11 and the common issue of dealing with vector types.  These include position (CGPoint in iOS, SCNVector3 in SceneKit, and so forth), size (CGSize), colour (four floats or bytes), etc.

What changes between these?  The number of elements, the type of the elements.

What remains constant?  The operations.  Dot product can be used for luminance just as it can be used for getting the angle between two vectors.  It can even be treated as an innate part of matrix multiplication, which we use for general transforms such as converting RGB to YUV.
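As a concrete sketch, using the LVector3 type and dot() from the sample code in the update below, Rec. 601 luma is nothing more than a dot product against the standard coefficients (the values here are purely illustrative):

// Rec. 601 luma as a dot product.
LVector3 rgb(0.5f, 0.25f, 0.75f);
LVector3 coeff(0.299f, 0.587f, 0.114f);
float luma = dot(rgb, coeff);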

In C++11 we can define something very nice: a template with two parameters, a type and a size.  A colour will typically have 3 or 4 components with either a uint8_t or a float as the type.  A position can have 2-4 components and is typically a float but can also be a double.
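The shape of that template is simply this (a minimal sketch; the full LVector sample is in the update below):

template<class T, int N>
struct Vector
{
    T data[N];
    // ... operations shared by every instantiation ...
};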

For the constructor, we can use variadic templates to ensure that 0-N components can be initialized at once.  For example, in an RGBA colour we could initialize just the first 3 components.
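With the LVector class from the update below, that partial initialization would look like this; the alpha component is simply left uninitialized:

LVector<uint8_t, 4> red(255, 0, 0);   // R, G, B set; A untouched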

Pushing this idea further, we can make a lot of code much simpler.  Consider parsing a range of memory containing an image.  We usually have a stride for x and a stride for y, which determine how many bytes to skip to reach the next pixel in x and the next pixel in y.  The x stride is typically 4 for 32-bit colour, and the y stride is the width of the image times 4.

Imagine we were to store the strides in a vector; then we could dot the strides with the desired pixel position to get the byte offset.  We would then have replaced what may appear to be a bunch of semi-random operations scattered all over the code with dot products.
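A minimal sketch of the idea, assuming the LVector template from the update below (the LIVector2 typedef and pixelOffset are mine, purely illustrative):

typedef LVector<int, 2> LIVector2;

// Byte offset of pixel (x, y), given per-axis strides in bytes.
int pixelOffset(const LIVector2 &stride, const LIVector2 &pos)
{
    return dot(stride, pos);
}

// For a 32-bit image: LIVector2 stride(4, width * 4);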

Yes, the example is contrived and would need some tweaking to be just as efficient - however, the point is that all the varied structures described above can be represented using a single structure with a common set of operations.

Consider: my vector for a colour is now defined as typedef Vector<uint8_t, 4> Colour;

For small hobbyist projects, this is an essential trick for getting a rich set of types with relatively little effort.

Update March 1st, 2015 -- Sample code:
#pragma once

#include <assert.h>
#include <cmath>
#include <initializer_list>
#include <stdint.h>
#include <stdlib.h>

template<class T, int N>
class LVector
{
private:
    typedef LVector<T, N> _type;
    T _d[N];

    // Utility so the constructor can take up to N elements.
    LVector(T*) {}
    template<typename ... Args>
    LVector(T *idx, T v, Args... args)
    : LVector(idx + 1, args...)
    {
        assert(idx < _d + N);
        idx[0] = v;
    }

    // Utility so swizzle can take up to N elements.
    template<int M>
    void _swizzle(LVector<T, M> &ref, const int i)
    { assert(i == M); /* Ensure that we were given all params. */ }
    template<int M, typename ... Args>
    void _swizzle(LVector<T, M> &ref, const int i, const int idx, Args... args)
    {
        ref[i] = (*this)[idx];
        _swizzle(ref, i + 1, args...);
    }

public:
    LVector() = default;
    LVector(const LVector<T, N>&) = default;
    template<typename ... Args>
    LVector(T v, Args... args)
    : LVector(_d + 1, args...)
    {
        static_assert(1 + sizeof...(Args) <= N, "Too many elements for size of vector");
        _d[0] = v;
    }

    _type operator+(const _type &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
            r._d[i] = _d[i] + a._d[i];
        return r;
    }

    // Swizzle is a common operation to extract vectors.
    template<int M, typename ... Args>
    LVector<T, M> swizzle(Args... args)
    {
        LVector<T, M> v;
        _swizzle(v, 0, args...);
        return v;
    }

    // Cast to bool: true if any component is non-zero (enables operators to work).
    operator bool() const
    {
        for (int i = 0; i < N; i++)
        { if (_d[i] != 0) return true; }
        return false;
    }

    // Comparators (we return masks as they may be multiplied + avoid type issues).
    _type operator >(const T &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
        { r._d[i] = (_d[i] > a) ? 1 : 0; }
        return r;
    }
    _type operator >=(const T &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
        { r._d[i] = (_d[i] >= a) ? 1 : 0; }
        return r;
    }
    _type operator >(const _type &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
        { r._d[i] = (_d[i] > a._d[i]) ? 1 : 0; }
        return r;
    }
    _type operator <(const T &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
        { r._d[i] = (_d[i] < a) ? 1 : 0; }
        return r;
    }
    _type operator <(const _type &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
        { r._d[i] = (_d[i] < a._d[i]) ? 1 : 0; }
        return r;
    }
    _type operator ==(const _type &a) const
    {
        _type r;
        for (int i = 0; i < N; i++)
        { r._d[i] = (_d[i] == a._d[i]) ? 1 : 0; }
        return r;
    }

    // Easy indexing.
    T &operator[](int i) { return _d[i]; }
    T operator[](int i) const { return _d[i]; }

    // C++11 utilities.
    static int size() { return N; }
    T* begin() { return _d; }
    T* end() { return _d + N; }
};


// Useful derived types
typedef LVector<uint8_t, 4> LColour;
typedef LVector<float, 2> LVector2;
typedef LVector<float, 3> LVector3;
typedef LVector<int, 3> LIVector3;


// Common offsets
enum
{
    kX = 0, kY = 1, kZ = 2, kW = 3,
    kR = 0, kG = 1, kB = 2, kA = 3
};


// Useful derived operations
template<class T, int N>
LVector<T, N> abs(const LVector<T, N> &v)
{
    LVector<T, N> s;
    for (int i = 0; i < N; i++)
        s[i] = std::abs(v[i]);
    return s;
}


template<class T, int N>
T dot(const LVector<T, N> &l, const LVector<T, N> &r)
{
    T s = T();
    for (int i = 0; i < N; i++)
        s += l[i] * r[i];
    return s;
}


template<class T, int N>
T max(const LVector<T, N> &l)
{
    T s = l[0];
    for (int i = 1; i < N; i++)
    {
        if (s < l[i])
            s = l[i];
    }
    return s;
}
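
To round things out, a small usage sketch assuming the header above (the values are purely illustrative):

LColour orange(255, 128, 0, 255);
LVector3 v(1.0f, 2.0f, 2.0f);
float len2 = dot(v, v);               // squared length: 9.0f
LVector2 xy = v.swizzle<2>(kX, kY);   // (1.0f, 2.0f)
bool any = (v > 1.5f);                // mask converts to bool: true if any component passed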

Sunday, December 14, 2014

Social Assumptions Regarding Blogs

Oftentimes the obvious just hits me.  Years later.  The obvious, this time, is the social encoding found within blogs.  There is a codified set of assumptions, based upon how people consume content, which determines what content creators can do.

Yes.  Painfully obvious, isn't it.  Also, nefarious.

My latest project on this platform has been a short story.  Each post continues from the previous.  I find the exercise quite entertaining: it forces me to consider different types of scenarios, and since my mind gets bored by the mundane and usual, I've allowed myself to come up with the atypical.  The latest example is a spiral escalator.

Blogger enforces that newer posts appear first.  Is there a problem with that?  Inherently, no.  Most blogs suit that template.  New stuff is awesome; old stuff just gets stuck behind.  For example, product reviews thrive on the new.  Events thrive on the new.  Even documenting technology or how to do something can thrive on the new.

Of course, you could argue that if something isn't new but is worthwhile, it will be linked to a million times over and Google will provide a link to it.  Sure.

Now, the core of the issue - everything is independent.  There is no prescribed reading order.  People are just supposed to jump in at any given point in time and be able to pick up on the information.  What if I want to describe something complex in a linear format over several posts?  Then I'd have to fight the digital system (as others have) to ensure that posts appear in the desired order, and that the front page would always show the first post.

Or, I could stop being lazy and include small summaries of the story with each post.  It's all about the person reading the material - after all, it's not as though they are starved for content/entertainment.  Even though I'd like whoever reads that blog (if anyone) to read it in order, I should make it convenient to read from anywhere.

That is, if I cared about readers.  To me it's a nice platform to simply write.

It's discouraging how I've turned around, thrown a royal meh, and passively accepted my fate as a person writing on this service (probably since it's free and I don't feel like moving it anywhere else).

Sunday, December 7, 2014

Review of the Lego Big Ben (350 piece) model

Armed with a 20% off coupon, I ended up buying this small set of bricks.  The White House was the first set I got in the series, and I thoroughly enjoyed building it.  The series itself tends to do clever things with the bricks in sets that aren't too big or expensive (e.g. the Parisian Cafe, which I'd love to have).

Yes, it is a small model.  Very thin, very small.  And for that size, it has a lot of bricks.  Why?  It's the small details, and the fact that there is very little, if any, empty space within the 3-brick-thick walls.

Most of the model uses bricks that I could find on my spaceships of yore - much of the detail comes from clever brick building.  And that is why I must write a blog post about this model.  I enjoyed following the instructions and building it, since it achieved so much detail with so few bricks.  Even though I could tell from the box that it couldn't have been built in many other ways, the person who translated the model into Lego did a great job.

It is the latest landmark to exist on my desk.  :)

Saturday, December 6, 2014

The Ire of Perpetual Change in Software

Software is becoming a special beast.  It is ever changing its face.

Just look at the applications that are "cloud" driven.  This is more an acceptance of the reality that application development has to follow the changes of the landscape (operating system, etc.) and that, feature-wise, many of these programs are more than complete.

Consider Office.  You have styles, bibliography building, rudimentary grammar correction, layout, indexing, cross-referencing and whatnot.  The same features are available in LaTeX, if you can stomach the scripting.  I would argue that since 2000 the software has been sufficiently feature-complete for my needs, and I have been relearning where to find the same features in reorganized menus ever since.

Windows is emblematic of the problem.  Click advanced properties on a file and you get something reminiscent of Windows 2000.

Arguably, OS X has had no intention of preserving the old.  Yes, there is a new way to launch applications.  There is an updated look.  Updated usability.  A moving target for every piece of software.

If utensils were made by software companies, then every two years we would have a new way to eat.  A new easier way to hold them.  A new easier set of foods.  Something much more fashionable.  All the time.

I have no issue with progress, but could we slow down the machine?  Have fewer software developers.  More stability.

Maybe we don't need to change the user interface of our digital utensils every year.  Maybe once a decade.  Or less.

Then again, new sells.  And in software, that can be expensive.  Is it worth the cost?