Mastermind

an algorithm for the generalised version of the Mastermind game

Technical notes

This document is intended as a supplementary description of my solution to the third assignment of the module Introduction to Computing at Imperial College London.

When dealing with code, C++ is assumed and used unless stated otherwise. C++11 features are used.

The source code is accessible in my GitHub repository.

Typographical notes

  • Italics signify terms specific to this document and solution,
  • bold italics signify a definition of such a term,
  • bold is used for general highlighting, or small headlines.

Introduction

Mastermind rules

In the beginning, the code maker generates a valid code, which will be referred to as the codeword.

A valid code is an arbitrary sequence of $n$ non-negative integers (colors), each from the range $[0,k)$. Since in the original version of Mastermind the codes consist of “colors” instead of non-negative integers, in this document we will use the term color as a shorthand with a similar meaning. Also, in the original version the parameters are kept constant at $n=4, k=6$; this document, however, considers the general case.

Following this, the code solver can query a valid code of his choice and is given a response of two integers $b$ and $w$. These correspond to the black and white pegs in the original game.

The objective of the game is to guess the codeword in the least possible number of queries. A game is only considered finished after the solver has queried the correct codeword, even if he might have deduced it from his previous queries.

The objective of my solution

The objective of my solution is twofold: firstly, to guess the codeword in the least number of queries, but also to provide an algorithm suitable for large $n$ and $k$, with realistic computational complexities.

Formal definitions of $w$ and $b$

$b$ is always equal to the number of correctly guessed colors (i.e. correct colors in the correct positions). Formally: let the code maker’s codeword be $c_1 c_2 c_3 \dots c_n$ and the queried code $a_1 a_2 a_3 \dots a_n$. $b$ equals the number of indices $i$ such that $c_i = a_i$.

Commonly and somewhat vaguely, $w$ is defined as the number of incorrectly positioned colors in a query. Formally: let $C_i$ and $A_i$ be the number of occurrences of color $i$ in the secret code and the query code, respectively. Then:

$$ w = \biggl( {\sum_{i=0}^{k-1} \min(C_i, A_i)} \biggr) - b $$
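
For instance, for $n=4, k=6$, codeword 1 4 4 2 and query 4 1 4 4: only the third position matches, so $b=1$; the overlap sum is $\min(C_1,A_1) + \min(C_4,A_4) = \min(1,1) + \min(2,3) = 3$ (the other colors contribute zero), hence $w = 3 - 1 = 2$.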

The code maker

Let us begin with the code maker as this player needs a significantly simpler algorithm.

The approach for $b$ is self-evident. The calculation of $w$ is done by implementing the definition formula above. This algorithm can further be optimized by calculating color_count_codeword only once, at the beginning of the game.

// n and k are given, as above
int black = 0;
int white = 0;
std::vector<int> color_count_codeword(k, 0);
std::vector<int> color_count_query(k, 0);

for(int i = 0; i < n; i++){ // n - length of codeword
	if(codeword[i] == query[i]){
		black++;
	}
	color_count_codeword[ codeword[i] ]++;
	color_count_query[ query[i] ]++;
}
for(int j = 0; j < k; j++){ // k - number of colors
	white += std::min(color_count_query[j], color_count_codeword[j]);
	// std::min from <algorithm>, or an own implementation
}
white -= black;
/* done, return black and white */

Complexity of the code maker’s algorithms

The complexity of the two algorithms is:

  • Time complexity: $O(n+k)$
  • Space complexity: $O(k)$

One can hardly optimize these complexities. Also, it is not a goal of my solution to do so, as the code maker’s response is considered a rather externally given mechanism. It certainly could be optimised by omitting white pegs, as will be shown later, but again, this would be an interference with the external mechanism and the rules of the game.

The code solver

In the rest of this document we will predominantly speak in terms of the number of queries (query complexity) and will distinguish this term from time complexity and space complexity.

Review of strategies

The strategies found on the Internet and in research papers can be divided into two main approaches:

  • Approach for small numbers: $n,k \leq 10$
  • Approach for large numbers: $n,k > 10$

Approaches for small numbers aim to find an optimal or suboptimal strategy, and therefore often feature algorithms iterating through the whole space of possible codes, which has size $k^n$. Consequently, these algorithms only work for small numbers, hence my naming of this approach. Some examples are:

Knuth’s algorithm
Although a suboptimal strategy for $n=4, k=6$, the research paper provides some hints on how to generalize.

Player that never makes mistakes
An approach in which the solver only queries codes which are not ruled out by his previous attempts. The query generation may be random or deliberate. It is often combined with the minimax method, which makes the query generation process deliberate at the price of $O((k^n)^2)$ time complexity.

Approaches for large numbers are characterized by systematic guessing, often resulting in overheads for small numbers. A good example of this kind of approach is the algorithm proposed by Doerr, Benjamin, et al. in “Playing mastermind with many colors”.

The following two sections will go through the details and specific implementations of these two approaches. The proposed solutions are my own.

Approach for $n,k \leq 10$: Player that makes no mistakes

The chosen algorithm for small numbers is the pure player that makes no mistakes algorithm. A pseudocode description of the algorithm follows:

  1. Let $b(A, B)$ and $w(A, B)$ be the functions returning the $b$ and $w$ values for two codes $A, B$, and let $S$ denote the secret codeword generated by the maker.
  2. Create a set of codes $PossibleCodes$ and fill it with all the possible codes.
  3. Select (randomly) a code $C$ from $PossibleCodes$ and query its $b(C,S)$ and $w(C,S)$ from the code maker.
  4. Remove all codes $X$ from $PossibleCodes$ which do not satisfy $b(C,X)=b(C,S) \land w(C,X)=w(C,S)$.
  5. Repeat from step 3 until $b(C,S) = n$.

Generating a set of possible codes

The generation of the set of possible codes can be implemented in a particularly elegant way using recursion, as follows. The function requires a std::vector<int> of size $n$ to be supplied.

void generate_possible(
	int n,
	int k,
	std::set<std::vector<int>> & possible_codes,
	std::vector<int> & code,
	int depth = 0
	){
	for(int color = 0; color < k; color++){
		code[depth] = color;
		if(depth == n-1){
			possible_codes.insert(code);
		}
		else{
			generate_possible(n, k, possible_codes, code, depth+1);
		}
	}
}

Elimination of codes

The following segment of code is given the black_hits and white_hits values and removes all impossible codes from possible_codes. The evaluate function writes the $b, w$ values of two codes into the black_measured, white_measured variables.

int black_measured;
int white_measured;
for(std::set<std::vector<int>>::iterator code_it = possible_codes.begin();
	code_it != possible_codes.end();
	/* iterator is advanced inside the loop body */
	){
	evaluate(C, *code_it, black_measured, white_measured);
	if(black_hits != black_measured || white_hits != white_measured){
		code_it = possible_codes.erase(code_it); // C++11: erase returns the next iterator
	}
	else{
		++code_it;
	}
}
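
To tie the pieces together, here is a minimal driver sketch; the query() helper, which submits a code to the code maker and fills in black_hits and white_hits, is hypothetical, and step 3 is simplified to taking any remaining code instead of a random one:

// Driver sketch for the player that makes no mistakes.
std::set<std::vector<int>> possible_codes;
std::vector<int> code(n);
generate_possible(n, k, possible_codes, code);

int black_hits = 0, white_hits = 0;
while(black_hits != n){
	std::vector<int> C = *possible_codes.begin(); // any code still possible
	query(C, black_hits, white_hits);             // ask the code maker
	/* ...run the elimination loop from above... */
}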

Complexity and performance

The time complexity takes into account the complexity of iterating through a std::set, which, using “any kind of iterator”, is linear in the container size according to § 24.2.1 of the Standard for Programming Language C++11, and is therefore dominated by $O(k^n)$.

  • Time complexity: $O(k^n)$
  • Space complexity: $O(n \cdot k^n)$

Due to the demanding time complexity, this algorithm is unsuitable for large numbers of possible codes. As a general rule of thumb, $k^n < 100 \space 000$ can be used as a condition approximating the feasibility of the computation; my solution uses a similar condition. If the condition is exceeded, the program chooses the strategy described in the next section.
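
A minimal sketch of such a feasibility check (the function name and threshold are illustrative; the product is capped early so that computing $k^n$ cannot overflow):

// Sketch: decide whether the k^n code space is small enough
// for the no-mistakes player.
bool small_enough(int n, int k, long long limit = 100000){
	long long total = 1;
	for(int i = 0; i < n; i++){
		total *= k;
		if(total >= limit) return false; // too large, use binary search
	}
	return true;
}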

Approach for large numbers: Binary search

Stage 1: Monochromatic queries

The first stage is rather simple. The code solver submits $k$ queries: for each $i \in [0,k)$, one query consisting only of color $i$ (monochromatic).

Example

Let $n=4, k=6$ and let the secret codeword be:

1 4 4 2

Stage 1 would look like this:

0 0 0 0  Black: 0, White: 0 	// color 0 is present 0 times
1 1 1 1  Black: 1, White: 0 	// color 1 is present 1 time
2 2 2 2  Black: 1, White: 0 	// color 2 is present 1 time
3 3 3 3  Black: 0, White: 0 	// color 3 is present 0 times
4 4 4 4  Black: 2, White: 0 	// color 4 is present 2 times
5 5 5 5  Black: 0, White: 0 	// color 5 is present 0 times

From this information we can set up an array of color frequencies, which we will utilise in later stages.
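
A sketch of this stage, assuming a hypothetical helper query_black() that submits a code to the code maker and returns the black-peg count (note that for a monochromatic query $w$ is always 0 and $b$ equals the color’s frequency, as in the example above):

// Stage 1 sketch. query_black() is an assumed helper returning
// the black-peg response for a queried code.
std::vector<int> color_frequency(k, 0);
for(int color = 0; color < k; color++){
	std::vector<int> mono(n, color);            // monochromatic query
	color_frequency[color] = query_black(mono); // b = occurrences of color
}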

Complexity of Stage 1
  • Query complexity: $O(k)$
  • Complexity of generating each query:
    • Time complexity: $O(n)$
    • Space complexity: $O(n)$

Stage 2: Finding a dummy

We will consider as a dummy code any code that receives a response of zero black pegs. A trivial dummy might be a monochromatic sequence tried in stage 1. Since nothing guarantees that such a monochromatic dummy exists, this shortcut is implemented just as an optimisation.

To find a dummy with certainty, we query $n$ codes, the $i$-th ($i \in [0,n)$) having a single 1 at position $i$ and the remaining $n-1$ slots filled with 0s. Let $C_0$ be the number of 0s in the codeword and $c_i$ the $i$-th color of the codeword. Note that the all-zero query of stage 1 scores $b = C_0$; replacing position $i$ with a 1 raises $b$ by one if $c_i = 1$ and lowers it by one if $c_i = 0$. Hence, for each value of $i$, three cases are possible based on the number of black pegs $b_i$ received as the response to such a query:

  1. $C_0 < b_i \implies c_i = 1$
  2. $C_0 = b_i \implies c_i \neq 1, c_i \neq 0$
  3. $C_0 > b_i \implies c_i = 0$

Another optimization is implemented: if this stage is taken (i.e. no dummy was found in stage 1), we discover the positions of all 0s and 1s along the way, so the next stage does not have to deal with them.
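
A sketch of this stage and its optimization, under the same assumptions as the stage 1 sketch (query_black() is the hypothetical helper, and C0 is the frequency of color 0 learned in stage 1):

// Stage 2 sketch: classify every position as 0, 1, or neither,
// then build a guaranteed dummy.
const int C0 = color_frequency[0];
std::vector<int> known(n, -1);      // -1 = neither 0 nor 1 (case 2)
for(int i = 0; i < n; i++){
	std::vector<int> probe(n, 0);
	probe[i] = 1;                   // single 1 at position i
	int b_i = query_black(probe);
	if(b_i > C0)      known[i] = 1; // case 1
	else if(b_i < C0) known[i] = 0; // case 3
}
// A guaranteed dummy: a 1 wherever the codeword holds a 0, a 0 elsewhere.
std::vector<int> dummy(n, 0);
for(int i = 0; i < n; i++){
	if(known[i] == 0) dummy[i] = 1;
}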

Example

Let $n=4, k=6$ and let the secret codeword be:

3 1 0 5

Note that this stage would have been skipped in this case (color 2 is absent from the codeword, so 2 2 2 2 is a monochromatic dummy); we will nonetheless pretend that the optimization was not implemented. In stage 1 we already tested 0 0 0 0 with the result Black: 1, White: 0.

The queries of stage 2 would be:

1 0 0 0  Black: 1, White: 1 	// position 1 is neither 0 nor 1
0 1 0 0  Black: 2, White: 0 	// position 2 is 1
0 0 1 0  Black: 0, White: 2 	// position 3 is 0
0 0 0 1  Black: 1, White: 1 	// position 4 is neither 0 nor 1

The codeword is: X 1 0 X 	// X = unknown
A dummy code is: 0 0 1 0

Complexity of Stage 2
  • Query complexity: $O(n)$
  • Complexity of generating each query:
    • Time complexity: $O(n)$
    • Space complexity: $O(n)$

The dummy sequence is a profoundly useful tool. It enables us to query the number of occurrences of color $C$ in arbitrary ranges of positions. We can do this by submitting the following query:

D0 D1 C C C D5 D6 D7

where the D-s are the colors of the dummy code. Since the dummy positions contribute no black pegs, the response $b$ for this query equals the number of occurrences of color $C$ in the range of positions $2$, $3$ and $4$.
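
As a sketch, again assuming the hypothetical query_black() helper, counting a color in an arbitrary range of positions looks like this:

// Sketch: count occurrences of color c at positions [lo, hi].
// Dummy positions never match the codeword, so they add no black pegs.
int count_in_range(int c, int lo, int hi, const std::vector<int> & dummy){
	std::vector<int> q = dummy;
	for(int i = lo; i <= hi; i++){
		q[i] = c;
	}
	return query_black(q);          // b = occurrences of c in [lo, hi]
}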

Interval tree

Stage 3 consists of $k$ substages, one for each color $c \in [0,k)$. Each substage finds all the positions of color $c$. It does so using an interval tree as follows:

struct interval_node{
	int min;                // interval start (inclusive)
	int max;                // interval end (inclusive)
	int remaining;          // count of occurrences of color c
	                        // potentially present in this interval
	interval_node * left;   // interval [min, floor((min+max)/2)]
	interval_node * right;  // interval [floor((min+max)/2)+1, max]
	interval_node * parent; // pointer to parent, NULL for root
};

This data structure satisfies the following criteria:

  • Binary tree
  • Search tree (ordered)
  • Interval (segment) tree
  • Upwards and downwards traversable tree

The following conditions will be always true for this interval tree:

  1. In the beginning each interval has a remaining equal to the number of occurences of the given color.
  2. A interval A’s span should be the value equal to A.max-A.min+1
  3. An intervall shall be called accesible if all its ancestors have remaining > 0.
  4. After an interval A was querried then A‘a sibling’s remaining is set equal to A.parent->remaining - A.remaining.
  5. From the latter follows that if an interval was querried, it’s sibling is considered effectively querried, and both may be said to have been effectively querried.
  6. A interval B of positions is discovered (contains only color $c$) if B.remaining == B.max - B.min + 1. Then, B.remaining and all B’s parents’s remainings are reduced by B’s span.
  7. It follows that if all instances of color $c$ are discovered the root interval node’s remaining is 0 and the algorithm can proceed to the next substage.

Now only one last piece of information is missing: which interval should be queried next? Many implementations are possible here; the only important requirement is that the next interval must always be an accessible interval whose parent has already been effectively queried. Note that from the recursive nature of this statement it follows that all of a queried interval’s ancestors, excluding the root interval, have been queried. (Note, again, that the last note is not entirely true, as the root interval has “effectively” (now using this term in its original meaning) already been queried in stage 1! 😜)

After each query, the black-peg response has to be taught to the interval tree in accordance with rules 4, 6 (and 7).

The above mentioned rules are implemented in my program; for simplicity, their implementation won’t be shown here. For interested readers, the code is in my GitHub repository.
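
To convey the idea in code, the following is an equivalent recursive sketch of one substage, not the actual interval tree implementation from my program. It reuses the hypothetical count_in_range() helper from above; rules 4 and 6 appear as the derived sibling count and the full-interval base case:

// Sketch of one stage-3 substage for color c. remaining starts as the
// frequency from stage 1; solution[] collects the discovered codeword.
void find_positions(int c, int lo, int hi, int remaining,
		const std::vector<int> & dummy, std::vector<int> & solution){
	if(remaining == 0) return;          // color c absent in this interval
	if(remaining == hi - lo + 1){       // interval discovered (rule 6)
		for(int i = lo; i <= hi; i++){
			solution[i] = c;
		}
		return;
	}
	int mid = (lo + hi) / 2;
	int in_left = count_in_range(c, lo, mid, dummy); // one query
	find_positions(c, lo, mid, in_left, dummy, solution);
	// The sibling's count is derived, not queried (rule 4):
	find_positions(c, mid + 1, hi, remaining - in_left, dummy, solution);
}

A substage is then started as find_positions(c, 0, n-1, color_frequency[c], dummy, solution).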

Complexity of Stage 3
  • Query complexity of stage 3: $O(n \log_2 n)$
  • Time complexity of generating each query: $O(\log_2 n)$
  • Time complexity of generating the interval tree for each color: $O(n)$
  • Overall time complexity (including the code maker’s responses): $O(k \cdot n + n^2 \log_2 n)$

Complexity of the Binary search algorithm

A summary of the query complexity of each stage:

  • Stage 1: $O(k)$
  • Stage 2: $O(n)$
  • Stage 3: $O(n \log_2 n)$

The overall query complexity reduces to:

$$O(k + n \log_2 n)$$

with a total running time complexity (including the code maker’s responses) clearly dominated by stage 3:

$$O(k \cdot n + n^2 \log_2 n)$$

As we can see, $k \gg n$ would only have an effect by prolonging stage 1. Also, research literature on this topic devotes attention mainly to the “prominent” case where $k \approx n$, as otherwise the game has lots of unused colors, or, on the contrary, many repeated colors. The former results in the possible optimization of skipping stage 2; the latter is exploited by the interval tree. For these reasons, $k=n$ is assumed when comparing the complexities of algorithms of this kind.

Comparing with other systematic algorithms

This algorithm features a rather modest space complexity ($O(n+k)$, since we only need to maintain an $n$-sized binary interval tree and a constant number of arrays of size $n$ and $k$), compared to the algorithms in recent research papers, most prominently Doerr et al., which maintain a set of colors of size $k$ for each position ($O(k \cdot n)$)!

In my personal opinion, the achieved time and query complexity can be seen as a good result given the lightweight nature of this approach. On the other hand, the query complexity of Doerr et al. is slightly better: $O(n \log_2 \log_2 n)$, compared to my $O(n \log_2 n)$.

Optimizations and performance boosts

“Premature optimization is the root of all evil” — D. Knuth

A significant query complexity optimization could be achieved by restricting the binary search to positions which have not been determined yet. This would, however, require a very different interval tree, and possibly its re-building for each color. Therefore this optimization was not implemented.

The following two query complexity optimizations were implemented because they are very simple to realize:

  1. Finding all 0s and 1s during stage 2
  2. Skipping stage 2 if a monochromatic dummy is found in stage 1 (common for $n \lessapprox k$; see the sketch below)
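
The second optimization reduces to a lookup in the stage 1 frequency array; a minimal sketch, using the names from the earlier sketches:

// Any color absent from the codeword yields a monochromatic dummy,
// so stage 2 can be skipped entirely.
bool dummy_found = false;
std::vector<int> dummy;
for(int c = 0; c < k && !dummy_found; c++){
	if(color_frequency[c] == 0){
		dummy.assign(n, c); // the all-c code scores zero black pegs
		dummy_found = true;
	}
}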

Performance can be significantly boosted by omitting the printouts of the queried arrays (each printout costs $O(n)$ time). The difference is demonstrated below:

Binary search strategy on $n=100, k=100$, with printouts [gif]

Binary search strategy on $n=100, k=100$, without printouts [gif]

Performance and summary

The time has come to combine the two strategies. The following two graphs compare the average (32 samples) number of queries for $n,k \leq 20$:

Average [32] number of queries for the Player that makes no mistakes algorithm for $n,k \leq 20$ [png]

By careful inspection, one can see that even though for smaller $n,k$ the binary search method has large overheads in terms of queries, it can cover a substantially larger range of $n,k$.

Average [32] number of queries for the Binary search algorithm for $n,k \leq 20$ [png]

…indeed, the following graph shows the performance of the Binary search algorithm for $n,k \leq 100$. And that is not all: the binary search was tested up to $n,k = 10 \space 000$.

Average [32] number of queries for the Binary search algorithm for $n,k \leq 100$ [png]

A combination of the two algorithms (Player that makes no mistakes and Binary search) guarantees a running time of $<1$ s for $n,k \leq 100$! The following table provides an informative overview of running times depending on $n,k$.

$n,k \leq$        Running time   Notes
$6$               $5$ sec        no mistakes algorithm
$100$             $1$ sec        binary search algorithm
$1 \space 000$    $1$ min        binary search algorithm
$10 \space 000$   $5$ min        binary search algorithm

Notes on design patterns used

The mm_colver class structure vaguely implements the strategy design pattern by transferring the create_attempt() and learn() calls to the appropriate strategy call.

The Strategy_binary_search class implements a finite state pattern.

References

  1. Doerr, Benjamin, et al. “Playing mastermind with many colors.” Journal of the ACM (JACM) 63.5 (2016): 42.

  2. Working Draft, Standard for Programming Language C++11

  3. Knuth, Donald E. “The computer as master mind.” Journal of Recreational Mathematics 9.1 (1976): 1-6.

  4. Frans van Gool’s MasterMind algorithm

  5. Ville, Geoffroy. “An optimal mastermind (4, 7) strategy and more results in the expected case.” arXiv preprint arXiv:1305.1010 (2013).