The Knapsack problem, according to Wikipedia, is one of the most studied problems in combinatorial optimization.
The general description of the knapsack problem is the following:
- Given a set of n items, where each item has an associated profit p_j and a corresponding weight w_j, perform a series of binary decisions to select a subset of items such that the total profit is maximized while the total weight is kept within a constraint.
The name stems from the problem faced by someone picking items to put in their knapsack, trying to fill the bag with the “best” items they can find. However, because a finite knapsack can only hold so much, they have to weigh their choices carefully so as not to exceed the maximum weight.
While some Leet Code style questions tend to be “removed” (to put it nicely) from the usefulness of everyday life, the Knapsack problem actually expresses a very real-world scenario.
Before delving further into the theory and practical applications, it is first important to understand exactly what is being expressed:
- I want to make a choice. x_j represents the binary choice: do I choose item j or not?
- If I do choose this item, I will be benefitting by precisely p_j.
- However, by choosing this item, I will be paying w_j for my choice.
- I would like to maximize my profit: SUM(p_j * x_j).
- All while respecting my total budget of W. Respecting my total budget means that SUM(w_j * x_j) <= W.
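To make the notation concrete, here is a small sketch in Python that checks every subset of items against the budget and keeps the most profitable one. This is a brute-force illustration of the model, not an efficient algorithm, and the function and variable names are my own:

```python
from itertools import combinations

def best_subset(profits, weights, W):
    """Brute-force 0/1 knapsack: try every subset, keep the best feasible one."""
    n = len(profits)
    best_profit, best_items = 0, ()
    for r in range(n + 1):
        for items in combinations(range(n), r):
            weight = sum(weights[j] for j in items)   # SUM(w_j * x_j)
            profit = sum(profits[j] for j in items)   # SUM(p_j * x_j)
            if weight <= W and profit > best_profit:  # respect the budget W
                best_profit, best_items = profit, items
    return best_profit, best_items
```

Since the subset of chosen items is exactly the assignment of the x_j variables, this enumerates every possible assignment, which is why it takes exponential time and why smarter approaches are needed.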
It is important to remember that at the end of the day, the knapsack problem is a way to express intent while respecting certain constraints.
With that being said, let us look at some of the variants of this problem.
- Bounded Knapsack Problem: In this variant, I am limited to choosing an item only a finite number of times.
- Unbounded Knapsack Problem: In this variant, I can freely choose an item as many times as I wish.
- Fractional Knapsack Problem: In this variant, an item is able to be used in partial quantities. As can be understood with some analysis, this problem does not require a knapsack approach, as an optimal decision can be made in a greedy fashion. Simply select the item with the highest profit-to-weight ratio p_j / w_j, take as much of it as fits, and repeat with the next-best item.
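The greedy approach for the fractional variant can be sketched in a few lines of Python (a sketch; the function name is my own):

```python
def fractional_knapsack(profits, weights, W):
    """Greedy for the fractional variant: take items by profit-per-weight ratio."""
    # Sort item indices by p_j / w_j, best ratio first.
    order = sorted(range(len(profits)),
                   key=lambda j: profits[j] / weights[j], reverse=True)
    total, remaining = 0.0, W
    for j in order:
        take = min(weights[j], remaining)          # take as much of item j as fits
        total += profits[j] * (take / weights[j])  # partial quantities are allowed
        remaining -= take
        if remaining == 0:
            break
    return total
```

The ability to take fractions is exactly what makes the greedy choice safe here; with binary decisions, a high-ratio item can crowd out a better combination, which is why the 0/1 variant needs dynamic programming.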
While there are other variants which extend this idea in quite a few different dimensions, the heart of the problem is expressed in the bounded and unbounded variants described above.
Application to Leet Code Problems
Coin Change 2: Unbounded Knapsack
One of the variations of the knapsack problem expressed earlier is the unbounded knapsack problem. This is specified by the condition in the problem statement that says that you have an infinite number of each coin.
In order to start looking for a solution to this problem, it is first important to remember what the core of the knapsack problem is. At its core, the knapsack problem is a framework for making choices. The choices that can be made are the following.
- I choose to use the coin to contribute to the desired total amount of money. In this situation, due to the unbounded nature of the objects, I allow myself to reuse the same denomination.
- I choose not to use the coin, thus reducing the problem to one where I use the other coins to make up the amount of money.
In this two-step breakdown of the problem, it is clear that the solution can be represented in terms of its subproblems. The process of defining this relationship between subproblems is known as building the Recurrence Relation (RR). Whenever we think of subproblems and recurrence relations, we should think of Dynamic Programming as a way to cache results to eliminate repeated work. This is because the essential methodology of arriving at a solution using a Recurrence Relation is to start at the trivial case, and build your way up to the full solution.
The manner in which we can go about this is by building up the solution along two dimensions:
- Amount: We start by analyzing the case where we want to generate coins that add up to an amount of 0. This is by definition possible, because we express the choice of not using a given coin, and we indeed arrive at an amount of 0. This is known as the Base Case, or a result that can be solved trivially.
- Coin types: We start by considering the set of subproblems that deal with using a single coin. For that single coin, we generate all the subproblem solutions, from 0 → total amount. Then, we consider a second coin, and how that improves the state of subproblems. After considering usage of all the coins, then the answer to the original problem is simply the solution of using all the coins to generate the original amount.
The general pseudocode would be as follows:
for each coin in coins:
    for each amount, from 0 up to the total amount:
        if the amount is 0:
            record there being 1 possible way (choose nothing)
        otherwise:
            # Decision 1: don't use the coin.
            ways = # of ways to form this amount excluding this coin
            # Decision 2: use the coin.
            if using the coin doesn't put the amount negative:
                ways += # of ways to make up this new, reduced amount
            record the summation of the # of ways via these two decisions
return the solution for using all coins to make up the total amount
And the Python code is as follows.
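One rendering of the pseudocode above in Python, following the 2-D table it describes (a sketch; the variable names are my own):

```python
from typing import List

def change(amount: int, coins: List[int]) -> int:
    """Count the ways to form `amount` using unlimited copies of each coin."""
    n = len(coins)
    # dp[i][a] = number of ways to form amount `a` using the first i coin types.
    dp = [[0] * (amount + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 1  # base case: one way (choose nothing) to form amount 0
    for i in range(1, n + 1):
        coin = coins[i - 1]
        for a in range(1, amount + 1):
            # Decision 1: don't use this coin.
            dp[i][a] = dp[i - 1][a]
            # Decision 2: use this coin. Staying in row i is what allows
            # the unbounded reuse of the same denomination.
            if a - coin >= 0:
                dp[i][a] += dp[i][a - coin]
    return dp[n][amount]
```

For example, change(5, [1, 2, 5]) counts the four combinations 5, 2+2+1, 2+1+1+1, and 1+1+1+1+1.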
As with any given solution, it is crucial to perform a complexity analysis on the algorithm.
We can see from a time analysis that the above solution runs two nested for loops, representing decision making along the two axes. Therefore, the time complexity is O(|coins| * |amount|).
Because this solution was structured around a recurrence relation, which builds upon previous subproblems, we are using a memoization table to reduce the runtime on duplicate subproblems. The table has one row per coin and one column per amount, so for the same reason as the above time analysis, the space complexity is O(|coins| * |amount|).
I am aware that there is a solution which uses only O(|amount|) space, due to the fact that the rows of the table can essentially be collapsed on top of one another: each row depends only on itself and the row directly above it.
This is the second (hopefully out of many) posts where I attempt to summarize my findings on certain programming concepts. If you would kindly submit a response to this article, it would help me out greatly.