This content originally appeared on DEV Community and was authored by Andrei Visoiu

Everyone has encountered backtracking at some point, perhaps unknowingly. As little kids navigating mazes for fun, whenever we faced a fork, say left or right, we would always pick one direction. The choice would not always prove correct, so we would go back, arrive at the crossroads again, choose the other way, and eventually reach the exit.

This is exactly the kind of work that backtracking algorithms do. Their aim is to find all (or some) solutions to certain computational problems, notably constraint satisfaction problems:

- **decision problems**, using it to find a feasible solution.

- **optimisation problems**, using it to find the best solution.

- **enumeration problems**, using it to find all the solutions.

Defined formally, backtracking is an algorithmic technique for solving problems recursively by building a solution incrementally. As soon as a partial candidate is found to violate a constraint, it is abandoned, and the algorithm backtracks to the previous partial solution and tries to derive other candidates from it.

## Implementing a basic backtracking problem

Let us now take one of the most basic backtracking problems. Let **N** be a natural number. Generate all the permutations of the numbers from 1 through **N** in lexicographic order.

For example, let **N** equal 3. Then, the solutions to our problem will be:

1 2 3

1 3 2

2 1 3

2 3 1

3 1 2

3 2 1

We can clearly see that this is an **enumeration problem**, asking us to generate *all* permutations of these numbers.

A straightforward recursive backtracking approach is fairly easy to come up with. At any point in time, our recursive function should know **how many numbers it has generated so far**, and **what those numbers are** (we can also call that a partial solution). Then, it should follow these steps:

1. If it has already generated **N** numbers, stop and print the solution.

2. Loop through all numbers from 1 to **N** in ascending order. If a certain number **i** is already added, skip it. If not, add it to the solution and make another recursive call.

We have two ways to check whether a number **i** was already added:

- loop through our partial solution and check if **i** is already there;

- use an additional boolean array, which will hold *true* at index **i** if the element has already been added, and *false* otherwise.

Let's look at a basic implementation using the second method to check for added numbers:
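Here is a minimal sketch of this approach, with the partial **solution** and the **appears** flags passed to every call (the function name `permute` and the `results` list, which merely collects the output, are my own additions):

```python
N = 3
results = []  # collects the finished permutations as they are printed

def permute(solution, appears):
    # Base case: N numbers generated, so print the full permutation.
    if len(solution) == N:
        print(*solution)
        results.append(solution[:])
        return
    # Try every number from 1 to N in ascending order.
    for i in range(1, N + 1):
        if not appears[i]:        # skip i if it is already in the solution
            solution.append(i)    # add i to the partial solution
            appears[i] = True
            permute(solution, appears)
            appears[i] = False    # undo both changes when backtracking
            solution.pop()

permute([], [False] * (N + 1))
```

Because the loop tries candidates in ascending order, the permutations come out in lexicographic order, exactly as in the example above.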

Note that, in this example, I passed the arrays to the recursive function at **every single call**.

However, when constricted by memory limits, another approach might be better suited to our needs.

Let's first talk about why this approach might become problematic in memory-limited scenarios.

## The call stack

Recursion uses something called a call stack to store information about the function calls. We can imagine this stack like a stack of boxes, one on top of the other.

Each recursive call pushes a new frame onto the stack **with all its parameters** and local variables. So a function call that carries arrays as arguments takes up more space than one with no arguments. And, as our number **N** increases, so does the depth of the call stack. For bigger and bigger values of **N** and tight memory limits, it might overflow.

**In Python, by default, the call stack limit is 1000**. This refers to the number of function calls nested "on top of one another", not to a memory limit. Although not recommended, due to the way Python handles recursion, the limit can be modified using the **setrecursionlimit()** function from the **sys** module.
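For illustration, the current limit can be inspected and raised like this:

```python
import sys

print(sys.getrecursionlimit())  # the default limit (1000 on CPython)
sys.setrecursionlimit(10_000)   # raise it; use with care, since very deep
                                # recursion can still crash the interpreter
print(sys.getrecursionlimit())
```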

We can, then, make our two arrays, **solution** and **appears**, global. As you can see, we already made the necessary resets when exiting recursion: in general, performing some operations on variables before the recursive call (in our case, updating the **appears** array and adding the number to our **solution** array) and performing their exact inverses after coming back from recursion leaves the variables as they were before. That makes sense, considering the way in which the call stack is managed, following a **Last In, First Out** approach: the last recursively called function is the first to finish execution, restoring all the variables to their pre-call states every time a frame is "popped" from the call stack.

Below is the modified snippet:
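A sketch of that modified version might look as follows: the arrays are moved out of the parameter list, and only the current depth **k** travels with each call (again, `permute` and the auxiliary `results` list are my own naming choices):

```python
N = 3
solution = [0] * N            # the partial solution, filled in place
appears = [False] * (N + 1)   # appears[i] is True while i is in use
results = []                  # collects the finished permutations

def permute(k):
    # Only the current depth k is passed; the arrays are globals.
    if k == N:
        print(*solution)
        results.append(solution[:])
        return
    for i in range(1, N + 1):
        if not appears[i]:
            solution[k] = i       # place i at position k
            appears[i] = True
            permute(k + 1)
            appears[i] = False    # the exact inverse, on the way back

permute(0)
```

Each stack frame now stores only the integer **k**, so the per-call footprint no longer grows with the size of the arrays.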

In Python, a function that *reassigns* a global variable must declare it with the keyword **global** before doing so; merely reading a global, or mutating a list in place as we do here, needs no declaration.

## A way to visualise backtracking

As you may remember if you have read my previous articles, we had a mini-series of articles running regarding Graph Theory. Well, we can transpose backtracking into a graph theory problem by considering that our search space is a directed graph. For example, let us consider our problem and suppose we have already reached depth **1** (i.e. we have already chosen the first number of our permutation).

The exact steps of our algorithm are showcased in the picture above. Considering that 1 has already been chosen, the algorithm will first try to place 1 again and dismiss that candidate. It will then choose 2, which works, and try to append 1, 2 and 3 in turn, of which only 3 is feasible.

Backtracking can thus be considered a depth-first search on the state graph of our problem.

I hope I have given you some insights into backtracking with this article. *The Magic of Computing* will be back next week with another interesting computational topic, but, until then, why don't you delve into some mathematical subjects, say... prime numbers? Or maybe the Fibonacci series is more to your liking.


*» Exploring backtracking*. Retrieved from https://www.scien.cx/2021/05/01/exploring-backtracking/.