Implementation of Newsvendor model in point of sale application

Medium Priority
Last Modified: 2013-11-12
I have been charged with implementing a "suggested orders" facility in a point of sale software application for small retail operations, which looks at historical sales data and determines an optimal quantity of products to order.
One model that looks appropriate as a basis for such a facility is the newsvendor model:



The newsvendor model is quite generic and is apparently used by some ERP applications. Because it is a broad model with broad applicability, it makes a minimal set of assumptions about our requirements. To use it in anything but the simplest scenario (such as that of the idealised newsvendor, who has the same one-day sales period for every product, no lead time, and easily quantified overage and underage costs), we must work out how to supply its basic parameters; so even if it is the best model for me to use, I may not be using it appropriately. It is attractive because it is simple and lets the user make economically rational, return-maximising ordering decisions. People responsible for ordering goods tend to be irrationally averse to over-ordering, because the cost of overage is readily appreciable (surplus goods), but the opportunity cost of underage generally isn't, unless you issue rain checks or something similar. This problem is particularly pronounced in businesses that sell high-margin products, such as music stores, where the cost of underage is much greater than the cost of overage.

My questions are:

1.      Do you regard the newsvendor model to be appropriate for my needs? If not, please suggest an alternative.
2.      If so,  is my particular method of determining parameters for the model  sensible?

My needs are modest. The model must not be overly complicated to use or to implement, and it must not present the user with an excessive number of parameters (though it may query our database as needed for statistics such as the mean and standard deviation of sales over a given period, or other non-user-supplied parameters; we have access to unit profit, historical sales, and so on, which can be queried arbitrarily with SQL). There is a danger of creating a baroque, perfect model with many parameters that no one will actually use.

I assume at this point that you have familiarised yourself with the newsvendor model. If you want to manipulate some figures to see how it works, I suggest you use the Excel NORMINV (normal inverse distribution) function, as mentioned in the MIT open courseware PDF above.
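If you would rather experiment outside Excel, Python's standard library (3.8+) provides the same inverse-normal function; this is purely an illustration, not part of the application:

```python
from statistics import NormalDist

# Excel's NORMINV(p, mean, std_dev) corresponds to
# NormalDist(mean, std_dev).inv_cdf(p) in Python's standard library.
mean_demand, std_dev = 100.0, 20.0
p = 0.75  # the probability (critical fractile) to cover
q = NormalDist(mean_demand, std_dev).inv_cdf(p)
print(round(q, 2))  # roughly 113.49: order up to the 75th percentile of demand
```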

Here is how the application currently uses the newsvendor model; my implementation is a work in progress.

We must determine the number of units to order for a given future period.

If the period is seven days or fewer, we look at the last n comparable periods from earlier sales (n is user-specified; default 4). For example, if we are ordering to cover demand from next Monday to Thursday, we look at the sum of units sold from Monday to Thursday in each of the last n weeks. If the period is between 8 and 28 days, we similarly look at the same spans in the last n 28-day months. If the period is longer than 28 days, we cannot produce a suggested order at all.
We find the sum of items sold for each of those n periods.
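The comparable-period selection above, for the sub-week case, can be sketched as follows (`comparable_periods` is a hypothetical helper, not an existing function in the application):

```python
from datetime import date, timedelta

def comparable_periods(start, end, n=4):
    """Yield the same weekday span for each of the previous n weeks.
    Hypothetical helper for ordering periods shorter than a week."""
    for k in range(1, n + 1):
        step = timedelta(weeks=k)
        yield (start - step, end - step)

# Ordering to cover next Monday (2013-11-18) to Thursday (2013-11-21):
windows = list(comparable_periods(date(2013, 11, 18), date(2013, 11, 21)))
# windows[0] is the same Monday..Thursday span one week earlier
```

The sum of units sold in each window would then be fetched with one SQL query per window (or one grouped query).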

We then find, for each item, the mean and standard deviation of those n period sums. These will later be supplied, product by product, to our inverse normal distribution function.
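For example, with hypothetical period totals for one product (the kind of figures the SQL query would return), the mean and sample standard deviation come straight from the standard library:

```python
from statistics import mean, stdev

# Hypothetical unit totals for the last n = 4 comparable periods,
# e.g. from: SELECT SUM(qty) ... GROUP BY period
period_sums = [42, 55, 48, 51]

mu = mean(period_sums)      # estimated demand for the period
sigma = stdev(period_sums)  # sample standard deviation
print(mu, round(sigma, 2))
```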

We must then calculate the critical fractile (cost of a single unit of underage / (cost of a single unit of underage + cost of a single unit of overage)), which is passed to the inverse normal distribution function as the probability p. This is the part of my implementation that I have the least confidence in.
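The critical fractile itself is a one-liner; here it is with hypothetical figures (4.00 of profit lost per missed sale, 1.00 lost per leftover unit):

```python
def critical_fractile(cost_underage, cost_overage):
    """p = Cu / (Cu + Co): the probability handed to the inverse normal CDF."""
    return cost_underage / (cost_underage + cost_overage)

# Hypothetical figures: 4.00 lost per missed sale, 1.00 lost per leftover unit
p = critical_fractile(4.0, 1.0)
print(p)  # 0.8 -> stock to the 80th percentile of demand
```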

The cost of a single unit of underage is simply the lost unit profit. We won't model the hidden cost of customer dissatisfaction or anything like that; we simply assume that the customer didn't opt to buy an alternative, comparable product. I'm considering offering the user the option to specify a per-product probability that the customer picked an alternative, comparable product on which we made identical profit, so that we incur the lost profit only when the customer doesn't choose an alternative. If the product is one with strong brand loyalty, like cigarettes, this probability will be low, perhaps 0.2; if it is something with limited loyalty, like grocery items, it will be high, perhaps 0.8.

The cost of a single unit overage is a percentage of the unit cost price. The percentage is specified as a per-product parameter by the user when assembling a suggested order. If the item is perishable, this percentage is very high, 90%+. If the item isn't perishable (for example, cleaning products), then the only expense is opportunity cost, theft amortized over time, etc, so it will be very low. I call this parameter the percentage of value lost, or PVL.
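Putting the pieces together, here is a sketch of the whole calculation under the assumptions above (the `suggested_order` name and all figures are hypothetical; Cu is unit profit discounted by the chance of a substitute sale, Co is PVL times unit cost):

```python
from statistics import NormalDist

def suggested_order(mu, sigma, unit_profit, unit_cost, pvl, p_alternative=0.0):
    """Newsvendor order quantity under the assumptions in the text:
    Cu = profit lost only when the customer buys no substitute,
    Co = the fraction of cost price lost on a leftover unit (PVL)."""
    cu = unit_profit * (1.0 - p_alternative)
    co = unit_cost * pvl
    p = cu / (cu + co)                    # critical fractile
    q = NormalDist(mu, sigma).inv_cdf(p)  # optimal quantity, fractional
    return max(0, round(q))               # whole units, never negative

# Hypothetical product: mean demand 49, std dev 5.5, profit 4.00, cost 6.00,
# perishable (PVL 0.9), 20% chance the customer buys a substitute instead
qty = suggested_order(49, 5.5, 4.0, 6.0, pvl=0.9, p_alternative=0.2)
```

Note that when Cu is small relative to Co (as here, for a perishable item), the fractile falls below 0.5 and the suggested quantity lands below mean demand, which is the economically rational outcome.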

I have not yet considered lead time, or the fact that we will already have some items in stock. As I've said, my implementation is a work in progress.

Please critique this approach.


We use a process similar to your "last n comparable periods" to forecast demand for haircuts at walk-in hair salons and estimate staffing. We calculate a single value for a single service, and although we have some adjustable variables, there is almost no user interaction.

Admittedly, my client base is very non-technical, but I think the most important thing you said is, "There is a danger of creating a baroque, perfect model with many parameters that no one will actually use. "

These salons also keep a small product inventory, and we include some basic reorder settings for their product inventory, but find very few users take advantage of them.

"Small retail operations" rarely have enough man hours to devote to detailed analysis of sales data.  I believe your biggest challenge, by far, is creating a maintenance process that is simple enough that it will not overwhelm the small business user.


>>"Small retail operations" rarely have enough man hours to devote to detailed analysis of sales data.  I believe your biggest challenge, by far, is creating a maintenance process that is simple enough that it will not overwhelm the small business user.

True, but I'd like to give the user the choice. The percentage-of-value-lost parameter and (possibly) the probability-of-buying-an-alternative parameter aren't overly difficult to understand, and would typically be approximated on the spot, as the user assembles an order.

I just wish that there was some gold standard that I could emulate, but there doesn't seem to be.



>> I see it as more of a data management challenge, which I guess is an interface issue not a calculation issue.  If they have 10000 products and need to make decisions on every one, odds are the data entry will be incomplete.

Good point. At present, the user can add suggested products by product department, and can specify a default value for PVL in advance of adding each department (they may also sort by product department, by PVL, by product description, etc).  So, typically the user will supply a default PVL for each product department and tweak values for specific products if he feels the need.
