In a previous question we're shopping for an ingredient which comes in standard pack sizes of 100g, 250g or 500g, priced at $0.40, $0.75 and $1.00 respectively.

Let:

PackSize = {100; 250; 500}

PackPrice = {0.40; 0.75; 1.00}

QuantityNeeded = {0; GCD(PackSize); 2 × GCD(PackSize); ... ; MaxQuantity} (ie counting up in steps of GCD(PackSize))

GCD = greatest common divisor, eg GCD({100; 250; 500}) = 50

We only need to define a table with intervals in these steps: every total we can actually buy is a sum of pack sizes, hence a multiple of their GCD, so the minimum cost can't change between consecutive multiples. I think this logic is right, please correct me if wrong.
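As a sanity check on the step size, the GCD can be folded over the pack sizes in one line (a Python sketch; `pack_sizes` is just my name for the set above):

```python
from functools import reduce
from math import gcd

pack_sizes = [100, 250, 500]
step = reduce(gcd, pack_sizes)  # gcd(gcd(100, 250), 500) = 50
```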

We then define f_mincost(QuantityNeeded) as the set of packs that meets the QuantityNeeded at minimum cost, eg:

f_mincost(1050) gives 1 x 100g + 2 x 500g at a total cost of $2.40

This problem is solved in the previous question by using dynamic programming to create a lookup table of values:

a) Quantity needed, eg 1050

b) Minimum cost, eg $2.40

c) Pack selection, eg 100, 500, 500
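To make the setup concrete, here is a sketch of the kind of DP table described in a)–c). The names and the exact formulation are my own, not from the previous question; I read f_mincost as "cheapest way to buy at least the quantity needed", so overshooting is allowed:

```python
from functools import reduce
from math import gcd

def min_cost_table(pack_sizes, pack_prices, max_quantity):
    """Table of (minimum cost, pack selection) for every quantity
    from 0 to max_quantity in steps of GCD(pack_sizes)."""
    step = reduce(gcd, pack_sizes)
    n = max_quantity // step
    # best[i] = cheapest way to cover a quantity of i * step grams
    best = [(0.0, [])] + [(float("inf"), [])] * n
    for i in range(1, n + 1):
        for size, price in zip(pack_sizes, pack_prices):
            # after buying this pack we still need (i*step - size)
            # grams, clamped at 0 because overshoot is fine
            j = max(0, i - size // step)
            cost, packs = best[j]
            if cost + price < best[i][0]:
                best[i] = (cost + price, packs + [size])
    return best, step

table, step = min_cost_table([100, 250, 500], [0.40, 0.75, 1.00], 1500)
cost, packs = table[1050 // step]  # the f_mincost(1050) example
```

If I've set it up right, `table[1050 // 50]` reproduces the worked example above: $2.40 for one 100g pack plus two 500g packs.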

Questions:

1. What MaxQuantity do we need to tabulate up to before the pattern starts repeating,

ie MaxQuantity = some function of PackSize

2. How can we define f_mincost(QuantityNeeded) as a function of this repeating pattern

eg what is f_mincost(6400) as a function of f_mincost(some value <= MaxQuantity)

This is probably straightforward, but after the previous question my head has been frazzled!

Thanks again