Beyond the Local Maximum: Lessons from Hill Climbing

We all make countless decisions each day, probably far more than we realize: some estimates put the figure at around 35,000 over a typical day. Most of these happen subconsciously, without us even registering them as decisions, such as what to say next or when to take a sip of water. Others we make with slightly more awareness, for instance what to wear or eat that day. All of these decisions are heavily influenced by our priors, a concept from cognitive science that refers to the preconceived expectations, beliefs, and assumptions that shape how we interpret and process information.

For the vast majority of our daily decisions, we are not actively looking for the best possible outcome, but merely for one that is good enough. That’s why priors are so useful, allowing us to make quick and effortless decisions, drawing on our past experiences and accumulated knowledge. 

Other decisions carry bigger (potential) consequences, and we intuitively invest more thought in them, for instance what activities to plan for the weekend or where to travel on our next vacation.

And then there are some decisions where the best outcome is the only acceptable one, and we actively seek it. Choosing a career is one of them: we devote considerable time and other resources to arriving at the best possible decision.

I now want to introduce the hill climbing algorithm as a way to think about, and become aware of, these decision structures we encounter and apply daily. Hill climbing is a concept from optimization and AI research: an algorithm that seeks the best possible outcome (either a maximum or a minimum) by making incremental changes to a starting solution.

Imagine you are on a multi-day backcountry hiking trip in the Alps. Unfortunately, midway through the hike, you lose your map, and a heavy fog rolls in, reducing visibility to just a few meters around you.

In order to call for help with your phone, you need to climb the highest peak, because only there do you have a chance of getting phone reception.

With poor visibility, you can’t see the entire mountain or other peaks. You can only assess the elevation of the immediate surroundings (a few meters around you). To reach the highest point, you decide to follow a hill climbing strategy: you’ll climb in the direction that increases your altitude the most. 

As you hike uphill, you might reach a smaller peak (local maximum) that feels like the top of the mountain. Because visibility is poor, you might think you’ve reached the highest point. However, there could be a taller peak nearby that you can’t see or assess. If you rely solely on the hill climbing strategy, you’ll get stuck here, mistakenly believing you’re at the global maximum.
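To make the strategy concrete, here is a minimal sketch of greedy hill climbing on a hypothetical one-dimensional terrain. The terrain function, step size, and starting point are all invented for illustration; the terrain simply has a small peak near x = 1 and a taller one near x = 5.

```python
import math

def terrain(x):
    """Hypothetical elevation profile: a small peak near x = 1
    (a local maximum) and a taller peak near x = 5 (the global one)."""
    return 2 * math.exp(-(x - 1) ** 2) + 5 * math.exp(-((x - 5) ** 2) / 2)

def hill_climb(elevation, start, step=0.1, max_iters=1000):
    """Greedily move to whichever neighbour is higher; stop as soon as
    no neighbour improves on the current position (a local maximum)."""
    x = start
    for _ in range(max_iters):
        best_neighbour = max((x - step, x + step), key=elevation)
        if elevation(best_neighbour) <= elevation(x):
            return x  # stuck: neither direction goes uphill
        x = best_neighbour
    return x

# Starting in the "fog" at x = 0, the climb stops at the small peak
# near x = 1 and never discovers the taller peak near x = 5.
peak = hill_climb(terrain, start=0.0)
```

Starting at x = 0, the climber walks uphill, reaches the small peak, and stops there: the local-maximum trap from the hiking story.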

Finding the global maximum is akin to making the best possible decision, while a local maximum is a solution that, depending on what you are trying to achieve, may be good enough (for example, when weighing dinner options).

Our priors, in this context, correspond to a rough mental map of the terrain, where we can find local maxima in close proximity. 

The key insight of this model is that global optima are difficult to come by and even more difficult to recognize as such. For most decisions we make, it is sufficient to trust our gut (follow the prior) and go with the best immediately feasible option. But what should we do when we must find the global maximum and can't risk getting stuck on a local one? We need something smarter than our simple hill climbing algorithm.

A better strategy, which increases your chance of finding a global optimum, is simulated annealing. It works by starting with an initial solution and then exploring neighboring solutions, while occasionally accepting worse solutions to escape local optima. It gradually reduces the likelihood of accepting worse solutions as it progresses, which allows it to explore the solution space widely at first and then refine the best solution found as it cools down.

Simulated annealing with gradually decreasing temperature

In essence, we explore the terrain, even if that means occasionally losing some altitude. As we explore, we gradually narrow down our options by heading uphill again.
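A minimal sketch of this cooling schedule, again on a hypothetical one-dimensional terrain with a small local peak near x = 1 and a taller global peak near x = 5 (the terrain, starting temperature, and cooling rate are invented for illustration):

```python
import math
import random

def terrain(x):
    """Hypothetical elevation profile: a small peak near x = 1 and a
    taller peak near x = 5, separated by a valley."""
    return 2 * math.exp(-(x - 1) ** 2) + 5 * math.exp(-((x - 5) ** 2) / 2)

def simulated_annealing(elevation, start, t0=5.0, cooling=0.995,
                        step=0.5, iters=5000, seed=0):
    """Explore the terrain, occasionally stepping downhill; the chance
    of accepting a downhill move shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, temperature = start, t0
    best = x
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        delta = elevation(candidate) - elevation(x)
        # Uphill moves are always accepted; downhill moves are accepted
        # with probability exp(delta / temperature), which decays as the
        # temperature cools, so exploration narrows over time.
        if delta > 0 or rng.random() < math.exp(delta / temperature):
            x = candidate
        if elevation(x) > elevation(best):
            best = x  # remember the highest point visited so far
        temperature *= cooling
    return best
```

Because downhill moves are sometimes accepted while the temperature is high, the walker can cross the valley that traps plain hill climbing, and the cooling schedule then lets it settle on the tallest peak it has found.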

For simple everyday decisions, it is enough to choose the closest peak. For more important ones, we need to dedicate some time to exploration and recognize that the easiest option might not be the best. It also shows that consciously examining our priors every once in a while can be tremendously valuable.

This model helps me be more aware of the decisions I make, and mindful that solutions can sometimes be found in unexpected places.
