Here's the problem:

As efficiently as possible, generate all unique binary integer arrays of length 5. An example declaration of a single 'binary integer array' might look like this: `new int[]{0,1,0,0,1};`

Each array needs to be a new object (i.e. no overwriting of previous arrays).

## Establishing a baseline

```csharp
public IEnumerable<int[]> enumerateHandMasks()
{
    for (int i1 = 0; i1 < 2; i1++)
        for (int i2 = 0; i2 < 2; i2++)
            for (int i3 = 0; i3 < 2; i3++)
                for (int i4 = 0; i4 < 2; i4++)
                    for (int i5 = 0; i5 < 2; i5++)
                        yield return new int[] { i1, i2, i3, i4, i5 };
}
```


To test its speed we use this code:

```csharp
DateTime start = DateTime.Now;
for (int i = 0; i < 1000000; i++)
{
    foreach (int[] arr in enumerateHandMasks())
    {
        // this is to ensure that the body of the loop is doing something,
        // but something that is trivial; leaving this line out could result
        // in compile time optimisations that skip the enumerator entirely
        arr[0] = arr[1];
    }
}
Console.WriteLine(DateTime.Now.Subtract(start).ToString());
```

When we run this enumerator to completion one million times (generating 32 million arrays), it takes 1.3 seconds. All right, that's a good baseline, but it is so inelegant... maybe we can improve it.
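As an aside, the same harness can be timed with `Stopwatch`, which is generally more precise than subtracting `DateTime` values for intervals this short. A sketch (not the timing reported above), with the baseline enumerator inlined so it runs stand-alone:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Sketch: the baseline benchmark timed with Stopwatch instead of DateTime.
static IEnumerable<int[]> EnumerateHandMasks()
{
    for (int i1 = 0; i1 < 2; i1++)
        for (int i2 = 0; i2 < 2; i2++)
            for (int i3 = 0; i3 < 2; i3++)
                for (int i4 = 0; i4 < 2; i4++)
                    for (int i5 = 0; i5 < 2; i5++)
                        yield return new int[] { i1, i2, i3, i4, i5 };
}

Stopwatch sw = Stopwatch.StartNew();
for (int run = 0; run < 1000000; run++)
    foreach (int[] arr in EnumerateHandMasks())
        arr[0] = arr[1]; // trivial work so the loop body isn't optimised away
sw.Stop();
Console.WriteLine(sw.Elapsed);
```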

## A more elegant and adaptable version

```csharp
public IEnumerable<int[]> enumerateBinaryMasks(int length)
{
    for (int i = 0; i < Math.Pow(2, length); i++)
    {
        // each mask must be a new object, so allocate inside the loop
        int[] retArr = new int[length];
        // the body of this loop is a binary converter
        int remainder = i;
        int cnt = 0;
        do
        {
            retArr[(retArr.Length - 1) - cnt] = remainder % 2;
            remainder = remainder / 2;
            cnt++;
        } while (remainder > 0);
        yield return retArr;
    }
}
```


Time to execute one million runs (and generate 32 million arrays): 3.23 seconds. Damn... the resulting method is much more versatile, but it runs about two and a half times slower than the baseline. There must be a way to improve on our baseline's efficiency.
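Part of the cost is likely that `Math.Pow` returns a `double` and is re-evaluated on every loop test, and that `%` and `/` are dearer than bit operations. A hedged sketch of the same variable-length enumerator using bit shifts instead (`EnumerateBinaryMasksShift` is my name for it, not from the original):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: the loop bound is computed once via a shift, and each digit is
// extracted with (i >> bit) & 1 rather than repeated % and / operations.
static IEnumerable<int[]> EnumerateBinaryMasksShift(int length)
{
    int count = 1 << length; // 2^length, computed once instead of per iteration
    for (int i = 0; i < count; i++)
    {
        int[] retArr = new int[length]; // a fresh object for every mask
        for (int bit = 0; bit < length; bit++)
            retArr[length - 1 - bit] = (i >> bit) & 1;
        yield return retArr;
    }
}

// e.g. the mask for i = 5 at length 5 is 00101
Console.WriteLine(string.Join(",", EnumerateBinaryMasksShift(5).ElementAt(5)));
```

Whether this actually beats the baseline would need to be measured with the same harness; it mainly removes the floating-point bound and the division from the hot loop.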

## Pre-computing **all** the Enumerator's Codes

```csharp
public IEnumerable<int[]> enumerateHandMasksPreComputed()
{
    yield return new int[]{0,0,0,0,0};
    yield return new int[]{0,0,0,0,1};
    // ... the remaining 28 combinations ...
    yield return new int[]{1,1,1,1,0};
    yield return new int[]{1,1,1,1,1};
}
```

3.9 seconds! What? Despite being even simpler than our base version, this runs nearly three times slower. Maybe a hybrid of the two will do better.

## Hybrid

```csharp
public IEnumerable<Tuple<int[],int[],int[],int[]>> enumerateHandMasks_ThreadVersion()
{
    for (int i1 = 0; i1 < 2; i1++)
        for (int i2 = 0; i2 < 2; i2++)
            for (int i3 = 0; i3 < 2; i3++)
                yield return new Tuple<int[],int[],int[],int[]>(
                    new int[] { i1, i2, i3, 0, 0 },
                    new int[] { i1, i2, i3, 0, 1 },
                    new int[] { i1, i2, i3, 1, 0 },
                    new int[] { i1, i2, i3, 1, 1 });
}
```

We have a winner: we manage to generate all 32 million arrays in just 1.02 seconds. That's roughly a 20% improvement on our base case, **and** the resulting tuple is better suited to multi-threaded handling of the results (four threads, one per tuple element, could cut the downstream computation time by up to a factor of four).
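For illustration, here is one possible way to consume the tuples in parallel. This is a sketch only: `ProcessMask` is a hypothetical placeholder for whatever per-array work you actually need, and here it just counts set bits so the example is self-checking.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Sketch: each tuple of four masks is handed to a worker by Parallel.ForEach.
static IEnumerable<Tuple<int[],int[],int[],int[]>> EnumerateHandMasksThreadVersion()
{
    for (int i1 = 0; i1 < 2; i1++)
        for (int i2 = 0; i2 < 2; i2++)
            for (int i3 = 0; i3 < 2; i3++)
                yield return Tuple.Create(
                    new int[] { i1, i2, i3, 0, 0 },
                    new int[] { i1, i2, i3, 0, 1 },
                    new int[] { i1, i2, i3, 1, 0 },
                    new int[] { i1, i2, i3, 1, 1 });
}

int totalOnes = 0;
void ProcessMask(int[] mask) // hypothetical stand-in for real per-mask work
{
    int ones = 0;
    foreach (int b in mask) ones += b;
    Interlocked.Add(ref totalOnes, ones); // threads share the running total safely
}

Parallel.ForEach(EnumerateHandMasksThreadVersion(), quad =>
{
    ProcessMask(quad.Item1);
    ProcessMask(quad.Item2);
    ProcessMask(quad.Item3);
    ProcessMask(quad.Item4);
});
Console.WriteLine(totalOnes); // 80: each of the 5 positions is 1 in 16 of the 32 masks
```

Note that the speed-up from threading applies to the downstream work, not to the enumeration itself, which stays sequential.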

Suggestions or comments on possible ways to improve these results are always welcome.