Mathematical Optimization in the Personalized Meal Delivery Industry

Mathematical Optimization

Optimization is defined as the action of making the best or most effective use of a resource. Optimization problems arise in various aspects of businesses:

  • Supply chain — optimal placement of inventory in warehouses to reduce transportation costs
  • Production — efficient allocation of tasks to employees to maximize their time and abilities
  • Delivery — shortest-path routing for last-mile delivery of products

Sometimes, optimization is a core component of the product itself. Such is the case in the personalized meal delivery industry, where each meal is tailored to meet a customer’s macronutrient requirements and delivered to their doorstep. At first glance this may seem like a straightforward case of increasing proteins, carbohydrates (carbs), and fats until a customer’s requirement is met. However, in reality, proteins, carbs, and fats are not divorced from one another, and increasing one will disproportionately increase the others, making the task a lot more complicated.

Meal Balancing Problem

Let’s dig into it with a simple meal balancing example of a customer eating just one meal a day consisting of grilled chicken breast, boiled broccoli, and brown rice. The macronutrient information for each component along with the daily macronutrient requirement for a customer is given in Table 1.

Component                       Calories        Proteins    Carbs      Fats
Grilled Chicken Breast (1 g)    1.92 kcal       0.30 g      0.00 g     0.08 g
Boiled Broccoli (1 g)           0.45 kcal       0.02 g      0.07 g     0.01 g
Brown Rice (1 g)                1.13 kcal       0.02 g      0.24 g     0.01 g
Requirement (daily)             2,291.00 kcal   135.00 g    296.00 g   63.00 g

Table 1: The macronutrient information per gram for each component of a simple meal and a customer’s daily requirement.

As you can see, each meal component contains a mix of proteins, carbs, and fats, rather than just a single macronutrient. Meal components containing only a single macronutrient are very much a rarity. As such, a naive approach where each macronutrient is considered in isolation is not viable. For example, increasing the quantity of grilled chicken breast to meet the fat requirement would disproportionately increase the quantity of protein and overshoot the protein requirement.
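To make the overshoot concrete, here is a quick back-of-the-envelope check using the per-gram values from Table 1 (a small sketch, not part of the original example):

```python
# Per-gram values for grilled chicken breast, taken from Table 1
PROTEIN_PER_G = 0.30
FAT_PER_G = 0.08

# Naively size the chicken portion to meet the 63 g fat requirement alone
chicken_grams = 63 / FAT_PER_G                 # ~787.5 g of chicken

# The protein that portion drags along with it
protein_grams = chicken_grams * PROTEIN_PER_G  # ~236 g of protein

# Far beyond the 135 g protein requirement
print(f'{chicken_grams:.1f} g of chicken yields {protein_grams:.1f} g of protein')
```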

Analytical Solution

At this point you might dust off your 8th-grade math textbook, recognizing that this is essentially a system of linear equations, like so:

1.92𝑥 + 0.45𝑦 + 1.13𝑧 = 2291    Calories   (1)
0.30𝑥 + 0.02𝑦 + 0.02𝑧 =  135    Proteins   (2)
        0.07𝑦 + 0.24𝑧 =  296    Carbs      (3)
0.08𝑥 + 0.01𝑦 + 0.01𝑧 =   63    Fats       (4)

𝑥, 𝑦, and 𝑧 represent grams of grilled chicken breast, boiled broccoli, and brown rice respectively. The calories equation (1) is a linear combination of the other three equations, consistent with the standard calories per gram of each macronutrient (4 kcal for proteins and carbs, 9 kcal for fats), and can therefore be discarded:

Calories = 4 × Proteins + 4 × Carbs + 9 × Fats
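We can sanity-check this relationship against every row of Table 1 (a quick sketch with the table’s values hard-coded):

```python
# (proteins g, carbs g, fats g, calories kcal) per gram, from Table 1
rows = {
    'grilled chicken breast': (0.30, 0.00, 0.08, 1.92),
    'boiled broccoli':        (0.02, 0.07, 0.01, 0.45),
    'brown rice':             (0.02, 0.24, 0.01, 1.13),
    'daily requirement':      (135, 296, 63, 2291),
}

for name, (p, c, f, kcal) in rows.items():
    # 4 kcal per gram of proteins and carbs, 9 kcal per gram of fats
    assert abs(4*p + 4*c + 9*f - kcal) < 1e-9, name

print('Calories = 4 × Proteins + 4 × Carbs + 9 × Fats holds for every row')
```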

This leaves us with 3 variables and 3 equations, which means that the system may have a unique analytical solution. We can solve it via the substitution method by first expressing 𝑦 in terms of 𝑧 using the carbs equation (3):

0.07𝑦 + 0.24𝑧 = 296
7𝑦/100 + 24𝑧/100 = 296
𝑦 = 29600/7 – 24𝑧/7

Next, we can substitute for 𝑦 in the proteins equation (2) to express 𝑥 in terms of 𝑧:

0.30𝑥 + 0.02𝑦 + 0.02𝑧 = 135
3𝑥/10 + 2𝑦/100 + 2𝑧/100 = 135
3𝑥/10 + (2/100) × (29600/7 – 24𝑧/7) + 2𝑧/100 = 135
𝑥 = 3530/21 + 17𝑧/105

Then, we can substitute for both 𝑥 and 𝑦 in the fats equation (4) to solve for 𝑧:

0.08𝑥 + 0.01𝑦 + 0.01𝑧 = 63
8𝑥/100 + 𝑦/100 + 𝑧/100 = 63
8/100 × (3530/21 + 17𝑧/105) + (1/100) × (29600/7 – 24𝑧/7) + 𝑧/100 = 63
𝑧 = –10900/17

Finally, we can substitute for 𝑧 in our previously derived expressions to solve for 𝑥 and 𝑦:

𝑥 = 3530/21 + 17𝑧/105 = 3530/21 + (17/105) × (-10900/17) =      450/7
𝑦 = 29600/7 –   24𝑧/7 = 29600/7 –   (24/7) × (-10900/17) = 764800/119

Which yields the following unique solution:

𝑥 = 450/7 ≈   64
𝑦 = 764800/119 ≈ 6427
𝑧 = -10900/17 ≈ -641

This is approximately 64 g of grilled chicken breast, 6,427 g of boiled broccoli, and -641 g of brown rice. If you crunch the numbers, this yields exactly 2,291 kcal, 135 g of proteins, 296 g of carbs, and 63 g of fats. However, this is not a feasible solution — the quantity of a meal component cannot be negative. Given that the system of equations yielded just one solution, our only option is to look for the next best solution given our constraints, i.e. the optimal solution.
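The same unique solution can be reproduced mechanically, for example with exact rational arithmetic and Cramer’s rule (a verification sketch, not part of the derivation above):

```python
from fractions import Fraction as F

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def replace_col(m, col, vec):
    # Return a copy of m with column `col` replaced by vec
    return [row[:col] + [vec[r]] + row[col+1:] for r, row in enumerate(m)]

# Rows: proteins (2), carbs (3), fats (4); columns: x, y, z
A = [[F(30, 100), F(2, 100), F(2, 100)],
     [F(0),       F(7, 100), F(24, 100)],
     [F(8, 100),  F(1, 100), F(1, 100)]]
b = [F(135), F(296), F(63)]

d = det3(A)  # nonzero, so the system has a unique solution
x, y, z = (det3(replace_col(A, j, b)) / d for j in range(3))
print(x, y, z)  # 450/7 764800/119 -10900/17
```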

Optimization Problem Definition

Now that we have correctly identified the meal balancing problem as an optimization problem, it is imperative that we formalize it. Let’s begin with a definition of optimization in the context of mathematics — Mathematical optimization is the selection of the best possible values for a set of decision variables, subject to constraints, that maximize or minimize an objective function.

In the case of our meal balancing example, we need to make a decision on the quantities of each meal component in order to achieve our goal. As such, the quantity in grams of grilled chicken breast, boiled broccoli, and brown rice, represented by 𝑥, 𝑦, and 𝑧 respectively, will be our decision variables. 

Additionally, there are also some real-world constraints that need to be considered for each meal component: their quantity cannot be negative (simply impossible), and their quantity can only be incremented or decremented by a minimum amount (it is impractical to increase a portion of grilled chicken breast by say 0.1 g). For the sake of simplicity, let’s set the minimum increment/decrement amount to 10 g. Therefore, the decision variables are subject to the following constraints:

𝑥 > 0, 𝑥 ∈ {10, 20, 30, …}
𝑦 > 0, 𝑦 ∈ {10, 20, 30, …}
𝑧 > 0, 𝑧 ∈ {10, 20, 30, …}
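In code, these constraints boil down to a simple validity check on any candidate quantity (a small sketch; the 10 g step is the minimum increment chosen above):

```python
def is_feasible(grams: int) -> bool:
    # A portion must be positive and a multiple of the 10 g minimum increment
    return grams > 0 and grams % 10 == 0

assert is_feasible(350)
assert not is_feasible(0)    # quantities cannot be zero or negative
assert not is_feasible(355)  # not a multiple of 10 g
```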

Our objective is to meet the customer’s macronutrient requirements. In mathematical terms, this means minimizing the difference between the total of each macronutrient in the meal and its corresponding requirement, resulting in 3 objective functions:

𝑚𝑖𝑛𝑖𝑚𝑖𝑧𝑒    |𝑃𝑡 – 𝑃𝑟|
𝑚𝑖𝑛𝑖𝑚𝑖𝑧𝑒    |𝐶𝑡 – 𝐶𝑟|
𝑚𝑖𝑛𝑖𝑚𝑖𝑧𝑒    |𝐹𝑡 – 𝐹𝑟|

𝑃, 𝐶, and 𝐹 represent proteins, carbs, and fats respectively, and subscript 𝑡 and 𝑟 denote the total and required quantities. These 3 objective functions can be collapsed into a single objective function by assigning weights based on their relative importance and summing them up. Their contribution to calories can serve as an appropriate weight:

𝑚𝑖𝑛𝑖𝑚𝑖𝑧𝑒    4×|𝑃𝑡-𝑃𝑟| + 4×|𝐶𝑡-𝐶𝑟| + 9×|𝐹𝑡-𝐹𝑟|

Using Table 1, we can express the total and required quantities in terms of the decision variables 𝑥, 𝑦, and 𝑧:

𝑃𝑡 = 0.30𝑥 + 0.02𝑦 + 0.02𝑧, 𝑃𝑟 = 135
𝐶𝑡 = 0.07𝑦 + 0.24𝑧, 𝐶𝑟 = 296
𝐹𝑡 = 0.08𝑥 + 0.01𝑦 + 0.01𝑧, 𝐹𝑟 =  63

Substituting for these variables in our objective function yields:

𝑚𝑖𝑛𝑖𝑚𝑖𝑧𝑒    |1.20𝑥 + 0.08𝑦 + 0.08𝑧 – 540| + |0.28𝑦 + 0.96𝑧 – 1184| + |0.72𝑥 + 0.09𝑦 + 0.09𝑧 – 567|

Putting it all together, our meal balancing problem can be formalized as:

given       𝑥, 𝑦, and 𝑧
minimize    |1.20𝑥 + 0.08𝑦 + 0.08𝑧 – 540| + |0.28𝑦 + 0.96𝑧 – 1184| + |0.72𝑥 + 0.09𝑦 + 0.09𝑧 – 567|
subject to  𝑥 > 0, 𝑥 ∈ {10, 20, 30, …}
            𝑦 > 0, 𝑦 ∈ {10, 20, 30, …}
            𝑧 > 0, 𝑧 ∈ {10, 20, 30, …}
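The formalized objective translates directly into a function we can evaluate for any candidate solution (a sketch with the coefficients from Table 1 hard-coded):

```python
def objective(x, y, z):
    """Weighted sum of absolute macronutrient deviations (lower is better)."""
    prot_diff = abs(0.30*x + 0.02*y + 0.02*z - 135)
    carb_diff = abs(0.07*y + 0.24*z - 296)
    fat_diff = abs(0.08*x + 0.01*y + 0.01*z - 63)
    return 4*prot_diff + 4*carb_diff + 9*fat_diff

print(round(objective(100, 100, 100)))   # 1941
print(round(objective(350, 800, 1000)))  # 177
```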

Trial and Error

If we were to tackle this problem by hand, we would typically employ a trial and error approach: pick arbitrary starting values for the decision variables, evaluate the objective function, intelligently increment or decrement each decision variable accordingly, and repeat until the objective function is acceptably close to zero. This approach is illustrated in Table 2.

Iteration   𝑥 (g)   𝑦 (g)   𝑧 (g)   Calories     Proteins   Carbs   Fats   Objective
0           100     100     100     350 kcal     34 g       31 g    10 g   1,941
1           500     100     100     1,118 kcal   154 g      31 g    42 g   1,325
2           400     100     1,000   1,943 kcal   142 g      247 g   43 g   404
3           350     500     1,000   2,027 kcal   135 g      275 g   43 g   264
4           350     900     1,000   2,207 kcal   143 g      303 g   47 g   204
5           350     800     1,000   2,162 kcal   141 g      296 g   46 g   177

Table 2: The trial and error method for finding the optimal solution, recorded for 5 iterations.

After 5 iterations, we have reached a fairly acceptable solution of 350 g of grilled chicken breast, 800 g of boiled broccoli, and 1,000 g of brown rice. The comparison with the customer’s macronutrient requirement is shown in Table 3.

Approach          Calories       Proteins   Carbs     Fats     Objective   Time
Requirement       2,291.0 kcal   135.0 g    296.0 g   63.0 g   0.0
Trial and Error   2,162.0 kcal   141.0 g    296.0 g   46.0 g   177.0       ~5 mins

Table 3: The evaluation of the optimal solution produced by the trial and error approach.

However, this approach can be quite cumbersome — imagine repeating this process for thousands of customers! Furthermore, we do not know whether this is the optimal solution to the problem. To overcome this, we need to enlist the aid of a computer.

Brute Force Search

What we have done by hand in the trial and error approach is called a search algorithm. In computing, one of the simplest search algorithms is the brute force algorithm. In contrast to the trial and error approach, where only a small subset of the potential solutions is evaluated, the brute force algorithm systematically enumerates all possible combinations of values of the decision variables and evaluates the objective function for every single one. This guarantees that the optimal solution to the problem is found.

Continuing with our meal balancing example, we would have to set an upper bound for each meal component so that the search algorithm does not run indefinitely. Let’s set this upper bound to 10,000 g for each component:

0 < 𝑥 ≤ 10,000, 𝑥 ∈ {10, 20, 30, …, 10,000}
0 < 𝑦 ≤ 10,000, 𝑦 ∈ {10, 20, 30, …, 10,000}
0 < 𝑧 ≤ 10,000, 𝑧 ∈ {10, 20, 30, …, 10,000}

The code snippet in Figure 1 implements this brute force algorithm in Python.

# Candidate values for each component: 10 g to 10,000 g in 10 g steps
x_all = y_all = z_all = [i*10 for i in range(1, 1001)]
x_opt = y_opt = z_opt = 0
obj_opt = float('inf')

# Evaluate the objective function for every combination of x, y, and z
for x in x_all:
    for y in y_all:
        for z in z_all:
            prot_diff = abs(0.30*x + 0.02*y + 0.02*z - 135)
            carb_diff = abs(0.07*y + 0.24*z - 296)
            fat_diff = abs(0.08*x + 0.01*y + 0.01*z - 63)
            obj = 4*prot_diff + 4*carb_diff + 9*fat_diff
            if obj < obj_opt:  # keep track of the best solution so far
                obj_opt = obj
                x_opt = x
                y_opt = y
                z_opt = z

print('x =', x_opt)
print('y =', y_opt)
print('z =', z_opt)

Figure 1: The Python code snippet that implements the brute force algorithm.

After running for ~37 minutes, the algorithm produced the following optimal solution:

𝑥 =   170
𝑦 = 4,190
𝑧 =    10

Which is 170 g of grilled chicken breast, 4,190 g of boiled broccoli, and 10 g of brown rice, achieving an objective function value of just 68. Admittedly, this is a very odd meal composition, but for the sake of keeping things simple, we did not introduce any additional constraints. The comparison with the customer’s macronutrient requirement as well as the solution produced by the trial and error approach is shown in Table 4.

Approach          Calories       Proteins   Carbs     Fats     Objective   Time
Requirement       2,291.0 kcal   135.0 g    296.0 g   63.0 g   0.0
Trial and Error   2,162.0 kcal   141.0 g    296.0 g   46.0 g   177.0       ~5 mins
Brute Force       2,223.2 kcal   135.0 g    295.7 g   55.6 g   68.0        ~37 mins

Table 4: The evaluation of the optimal solution produced by the brute force approach.

The solution of the brute force approach is clearly superior to that of the trial and error approach. However, we should note that the algorithm took much longer to run. This is because the decision variables can each be one of 1,000 discrete values, which means that the brute force algorithm had to evaluate 1,000 x 1,000 x 1,000 = 1,000,000,000 (1 billion) potential solutions!

Consider the effect of introducing a lemon and herb sauce as a fourth component to the meal: assuming it can also take one of 1,000 discrete values, the brute force algorithm would now have to evaluate 1,000 x 1,000 x 1,000 x 1,000 = 1,000,000,000,000 (1 trillion) potential solutions! The number of potential solutions grows exponentially with the number of components, and adding a second or third meal into the mix will escalate it further. This is known as combinatorial explosion, and is the primary disadvantage of the brute force algorithm.
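The growth is easy to quantify: with 1,000 discrete values per component and n components, the brute force algorithm must evaluate 1,000^n candidates (a quick sketch):

```python
steps = 1000  # discrete values per component (10 g ... 10,000 g)

for n_components in (3, 4, 5):
    candidates = steps ** n_components
    print(f'{n_components} components -> {candidates:,} candidates to evaluate')
# 3 components -> 1,000,000,000 candidates to evaluate
# 4 components -> 1,000,000,000,000 candidates to evaluate
# 5 components -> 1,000,000,000,000,000 candidates to evaluate
```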

Greedy Search

To solve this issue of combinatorial explosion, we need to devise a more intelligent algorithm for selecting which potential solutions to evaluate, as opposed to systematically evaluating all of them. Essentially, we need to formalize the thought process that we used during the trial and error approach when deciding how to increment or decrement each decision variable following an evaluation of the objective function, i.e. which potential solution to evaluate next. In fact, quite a few such algorithms exist and are broadly summarized in Figure 2.

Exact Algorithms: These algorithms try to find the absolute best solution to a problem by meticulously searching through every possibility. Some examples include:

  • Brute Force: Trying every solution one by one to find the optimal solution.
  • Dynamic Programming: Breaking a big problem into smaller parts, and storing solutions to avoid redundant computations.
  • Branch and Bound: Systematically dividing the solution space into smaller subspaces to find the optimal solution.

Approximate Algorithms: These algorithms make quick yet possibly suboptimal decisions, with the aim of finding a pretty good solution quickly, even if it is not perfect. Some examples include:

  • Greedy Algorithms: Making the best decision at each step without worrying too much about future consequences.
  • Heuristic Algorithms: Using rules of thumb or intuition to make decisions that are usually good but not guaranteed to be the best.
  • Metaheuristic Algorithms: High-level strategies that guide the search for solutions in large or complex optimization problems.

Problem-Specific Algorithms: These are specialized algorithms tailored to specific types of problems. Some examples include:

  • Linear Programming: Solving problems with linear equations and inequalities to find the best outcome.
  • Integer Programming: Dealing with problems where variables have to be whole numbers, not just any numbers.
  • Network Flow Algorithms: Optimizing the flow of goods, information, or resources through interconnected nodes and edges in a network.

Figure 2: A brief overview of the algorithms used to solve optimization problems, categorized into three archetypes.

For the sake of brevity, let’s discuss just one of the most commonly used algorithms — the greedy algorithm. It is best illustrated with the use of a metaphor:

You are a mountain climber attempting to find the shortest path to reach the summit of a mountain. With no map or GPS to guide you, you look at the terrain immediately surrounding you to identify which direction leads you to a higher elevation the fastest, i.e. the direction with the steepest upward slope. You take a step in that direction, pause and reassess your immediate surroundings to identify the new direction of the steepest upward slope. You take another step in this new direction, and repeat this process until eventually you reach a point where there are only downward slopes in all directions. Congratulations, you have now reached the summit!

This is the essence of the greedy algorithm. The objective function is the difference in elevation between yourself and the summit, which needs to be minimized. The decision variables are the direction and size of each step you take, constrained by the surface of the mountain. At each point you only consider the effect of your immediate next step on the objective function, with no concern for the steps that follow, which is why it is called the greedy algorithm.
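Applied to the meal balancing problem, the metaphor translates into a simple hill-climbing loop: at every iteration, evaluate a 10 g increment and decrement for each variable, and commit to whichever single move improves the objective the most (a sketch of one possible greedy implementation, not the code used later in this article):

```python
def objective(x, y, z):
    # Weighted sum of absolute macronutrient deviations (lower is better)
    prot_diff = abs(0.30*x + 0.02*y + 0.02*z - 135)
    carb_diff = abs(0.07*y + 0.24*z - 296)
    fat_diff = abs(0.08*x + 0.01*y + 0.01*z - 63)
    return 4*prot_diff + 4*carb_diff + 9*fat_diff

def greedy_search(start=(100, 100, 100), step=10):
    current, best = start, objective(*start)
    while True:
        # All single-variable +/- step moves that keep quantities positive
        moves = []
        for i in range(3):
            for delta in (step, -step):
                candidate = list(current)
                candidate[i] += delta
                if candidate[i] >= step:
                    moves.append(tuple(candidate))
        # Take the steepest descent move; stop when no move improves
        best_move = min(moves, key=lambda m: objective(*m))
        if objective(*best_move) >= best:
            return current, best
        current, best = best_move, objective(*best_move)

solution, obj = greedy_search()
print('solution:', solution, 'objective:', round(obj, 1))
```

Because the loop only ever accepts improving moves, it terminates; on a nonlinear objective, however, it could stop at a local optimum rather than the global one.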

However, following this algorithm, you are not guaranteed to find the optimal solution. Continuing with our metaphor:

Proud of your accomplishment, you look around to soak in the beautiful view, only to notice the true summit looming in the distance. Your algorithm has failed you, and you have instead arrived at a false summit!

This is called a local optimum. Whilst a local optimum is a decent solution to the problem, it is not as good as the global optimum. Greedy algorithms in particular are susceptible to getting stuck in local optima because they only consider the immediate effect on the objective function, without considering long-term consequences.

However, this only occurs for nonlinear problems. Our mountain climbing metaphor is nonlinear because mountainous terrain rises and falls, forming multiple peaks and valleys. If instead the climber were climbing a pyramid, which has only a single peak, the greedy algorithm would always lead them to the global, and only, optimum.

Thankfully, our meal balancing problem is linear, so we can return to it and apply the greedy algorithm to get the optimal solution. This is implemented in the code snippet in Figure 3.

import pulp as pl

model = pl.LpProblem('blog', pl.LpMinimize)

# Decision variables in units of 10 g, so 1..1,000 maps to 10..10,000 g
x = pl.LpVariable('x', 1, 1000, pl.LpInteger)
y = pl.LpVariable('y', 1, 1000, pl.LpInteger)
z = pl.LpVariable('z', 1, 1000, pl.LpInteger)
# Auxiliary variables that linearize the absolute differences
prot_diff = pl.LpVariable('prot_diff', 0, None, pl.LpContinuous)
carb_diff = pl.LpVariable('carb_diff', 0, None, pl.LpContinuous)
fat_diff = pl.LpVariable('fat_diff', 0, None, pl.LpContinuous)

# |a - b| is modeled with two constraints: diff >= a - b and diff >= b - a
model += prot_diff >= (0.30*x + 0.02*y + 0.02*z)*10 - 135
model += prot_diff >= 135 - (0.30*x + 0.02*y + 0.02*z)*10
model += carb_diff >= (0.07*y + 0.24*z)*10 - 296
model += carb_diff >= 296 - (0.07*y + 0.24*z)*10
model += fat_diff >= (0.08*x + 0.01*y + 0.01*z)*10 - 63
model += fat_diff >= 63 - (0.08*x + 0.01*y + 0.01*z)*10

# The objective function
model += 4*prot_diff + 4*carb_diff + 9*fat_diff

model.solve()

for v in model.variables():
    if v.name in ['x', 'y', 'z']:
        print(v.name, '=', v.varValue*10)  # convert back to grams

Figure 3: The Python code snippet that implements the greedy algorithm with the help of the PuLP library.

Running the code snippet produces exactly the same solution as that of the brute force algorithm:

𝑥 =   170
𝑦 = 4,190
𝑧 =    10

Except this time, the solution was produced almost instantaneously instead of after ~37 minutes. The comparison with the customer’s macronutrient requirement as well as the solutions produced by the prior approaches is shown in Table 5.

Approach          Calories       Proteins   Carbs     Fats     Objective   Time
Requirement       2,291.0 kcal   135.0 g    296.0 g   63.0 g   0.0
Trial and Error   2,162.0 kcal   141.0 g    296.0 g   46.0 g   177.0       ~5 mins
Brute Force       2,223.2 kcal   135.0 g    295.7 g   55.6 g   68.0        ~37 mins
Greedy            2,223.2 kcal   135.0 g    295.7 g   55.6 g   68.0        ≪1 sec

Table 5: The evaluation of the optimal solution produced by the greedy approach.

Conclusion


Optimization plays a pivotal role in any business, especially in the personalized meal delivery industry where it is integral to the product itself. In the short term, a manual trial and error approach will offer a reasonable solution. However, as the business grows this approach becomes unsustainable, and a more methodical search algorithm should be considered. Fortunately, there exists an abundance of such algorithms with corresponding programming libraries that implement them. Selecting a suitable search algorithm for a particular use-case can result in substantial improvements in both accuracy and speed when compared to a manual trial and error approach.