Short stories on Business Intelligence
Back in 2005, a time when smartphones didn't even exist, Michael Jackson was still alive, presidents were people whom you could admire and I had more hair, I was working for the well-known French retailer Carrefour.
My initial role was called Promo Analyst (every new recruit in the BEM department, roughly Business Intelligence, began there) and my main duties were to gather data from our tailored business intelligence software, point-of-purchase databases, AC Nielsen market surveys and a local market research company. Then I would load all those data sets into MS Access, run queries (lots of them), plug the results into the then 65,536-row MS Excel, write a few lines of Visual Basic to clean the data and, assuming the PC allowed it (a 1 GHz Pentium with 512 MB of RAM), manipulate the data to answer the same question every time: why didn't our promos do well in relation to our main competitor? Then I would make some charts in Excel (where else?) and put a PPT together for the C-level monthly meeting. I usually presented my own PPTs, which gave me a lot of experience receiving 'friendly fire'…
“Friendly fire - isn’t.” — Unknown
After months of following the usual procedure of measuring sales during the promo, I thought that it would be best if I compared sales before the promo, during the promo and after the promo. For me, it was simply the most common-sense way to do it…
And I was right: sometimes, only sometimes, common sense works. As the new approach showed, promos worked pretty well, especially TV ads! Sales were up between 10% and 100% during promo periods and usually stayed up for a short period after the promo ended, which was exactly what the Commercial Directors and Category Specialists had been disputing, promo after promo, punishment after punishment!
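The before/during/after comparison is simple arithmetic: measure each period against the pre-promo baseline rather than against a competitor. A minimal sketch in Python, with made-up weekly sales figures (none of the numbers below are from the original analysis), might look like this:

```python
def promo_lift(before, during, after):
    """Return the % change in average sales during and after a promo,
    relative to the pre-promo baseline."""
    baseline = sum(before) / len(before)
    lift_during = (sum(during) / len(during) - baseline) / baseline * 100
    lift_after = (sum(after) / len(after) - baseline) / baseline * 100
    return lift_during, lift_after

# Hypothetical weekly unit sales around a two-week promo
before = [100, 105, 98, 102]   # four weeks before the promo
during = [140, 155]            # the promo weeks
after  = [115, 108]            # two weeks after

d, a = promo_lift(before, during, after)
print(f"Lift during promo: {d:.1f}%")   # well above baseline
print(f"Lift after promo:  {a:.1f}%")   # still slightly elevated
```

The point of the sketch is the reference: the baseline is our own pre-promo sales, so the lift reflects the trend of our business, not how we fared against a bigger competitor.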
“Get your facts first, and then you can distort them as much as you please.” — Mark Twain
Source: xkcd.com
So, what was the issue? Why did the previous method fail to reflect the success of the promos? Simple: it used the wrong point of reference. By comparing our results with the sales of a much larger competitor, executives weren't assessing the trend of our own sales but comparing apples to oranges. And that's a fundamental flaw in the way the C-level of a top global retailer was measuring the business. But, after all, was it even their fault?
Morals
Executives don’t know the details behind the algorithms and analyses you are running. That’s not their job, it’s ours. So let’s not be lazy: check the algorithms periodically, check their accuracy and whether they are still pertinent, update and upgrade them, train and test them again. They are not set in stone, are they?
It’s the executives’ job to make decisions based on the information they rely on us to give them. Don’t be the provider of misleading information.
There’s no need to reinvent the wheel every time. Use your brain, exert common sense, ask lots of questions of the people who know more than you about the business, listen to them, ask again…