Marketing Analytics, Rich Data and Deep Learning

08 Dec 2022



By Ashoke Agarrwal


Marketing analytics is as old as marketing itself. In the early days, simple arithmetic and the magic of ratios drove it. For example, a rug shop owner in the bazaars of Istanbul would estimate his annual sales from his year-to-date sales as a ratio of last year’s. Or he would estimate price elasticity by running an experiment for a day or two and analysing the results, again with simple arithmetic.


Over the decades, as marketing evolved as both art and science, the depth and availability of data increased, and so did the sophistication of marketing analytics.


Marketing mix modelling using multivariate analysis became a vital activity in the marketing departments of large companies. The data fed into these models included sales by geography, advertising spend, pricing and SKU spreads, and measures of brand lift – awareness and consideration – across the company’s brands and the competition. Some of it was first-party data, collected and owned by the company. Second-party data – data provided by syndicated research studies such as retail and advertising audits – grew increasingly important over the years.


The sophistication of the statistical tools grew as well, taking in multivariate techniques such as Principal Component Analysis, Multivariate Analysis of Variance, and Hierarchical Cluster Analysis.


About two decades ago, the age of Big Data, Smartphones and Social Media dawned. And as the next decade sees the emergence of the age of AI, a new dimension to marketing analytics has begun to come into view.


Machine Learning, particularly Deep Learning, is different from statistical analysis.


A simple way to put the difference: statistical analysis aims at precise inference about the relationships between variables, while Deep Learning focuses on making accurate predictions.
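A toy example makes the distinction concrete. All numbers below are invented for illustration: an ordinary least-squares fit yields an interpretable coefficient – the "inference" – while a purely predictive model would be judged only on its held-out error.

```python
# Toy illustration (invented data): statistical analysis emphasises
# interpretable coefficients, while predictive modelling is judged on
# out-of-sample accuracy.

# Weekly price (x) and units sold (y) -- hypothetical numbers.
train = [(10, 200), (11, 185), (12, 172), (13, 158), (14, 145)]
holdout = [(15, 131), (16, 118)]

# Ordinary least squares for y = a + b*x, computed in closed form.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# Inference: the coefficient itself is the finding --
# "each unit of price change moves sales by about b units".
print(f"slope b = {b:.2f}")

# Prediction: judge the same model only by its error on held-out data.
mae = sum(abs((a + b * x) - y) for x, y in holdout) / len(holdout)
print(f"holdout MAE = {mae:.2f}")
```

The same fitted line supports both readings; the two traditions simply care about different outputs of it.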


Currently, the debate on using Deep Learning in Marketing Analytics is raging in academic circles.


The October 2019 issue of MIT Sloan Management Review published one such paper – “Is Deep Learning a Game Changer for Marketing Analytics?” by Glen Urban, Artem Timoshenko, Paramveer Dhillon, and John R. Hauser.


Urban et al. studied a data set of credit card choices covering 260,000 individuals, with demographic factors such as age, gender, household income, zip code and cards owned. The data set also included 132 attributes of the cards offered (APR, reward points – miles, cash etc. – and card fees – annual, transfer etc.). The study used three models to analyse the data:

:: A linear regression of choice as a function of user demographics and card attributes

:: A simple deep-learning model

:: A deep-learning model that added a consideration stage before the final purchase decision


The study found little difference in predictive accuracy between the three models – 70.5% for linear regression against 71.7% and 73.0% for the two Deep Learning models.
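The structural difference between the three models can be sketched in miniature. Everything below is invented for illustration – the feature names, weights and the 0.5 consideration cut-off are toy values, not the paper’s trained models: a linear scorer, a one-hidden-layer network, and the same network with a hard consideration gate applied before the final choice.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A toy applicant/card feature vector (invented values),
# e.g. income, fee sensitivity, reward appeal.
x = [0.5, -1.2, 0.8]

# Model 1 -- linear scoring of choice on the raw features.
w_lin = [0.4, 0.3, 0.6]
linear_score = sigmoid(sum(wi * xi for wi, xi in zip(w_lin, x)))

# Model 2 -- a simple one-hidden-layer network (fixed toy weights).
W1 = [[0.2, -0.5, 0.7], [0.1, 0.4, -0.3]]
w2 = [0.9, -0.6]
hidden = [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) for row in W1]
deep_score = sigmoid(sum(wi * hi for wi, hi in zip(w2, hidden)))

# Model 3 -- the same network preceded by a consideration stage:
# a card the consumer would never consider gets probability zero.
w_cons = [0.3, 0.2, 0.5]
considered = sigmoid(sum(wi * xi for wi, xi in zip(w_cons, x))) > 0.5
two_stage_score = deep_score if considered else 0.0

print(linear_score, deep_score, two_stage_score)
```

The gate is what the third model adds: extra structure reflecting how consumers actually shortlist cards, rather than extra raw capacity.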


Deep Learning is expensive to conduct in terms of both the expertise and the computing power it requires. Urban et al. concluded that statistical analysis is more cost-efficient when the data set is fully structured. They hypothesised that Deep Learning would pay off in analysing “rich” databases that include user-generated content such as Amazon reviews, Instagram posts, and Facebook posts and comments.


A study by Liu, Dzyabura and Mizik supports this hypothesis. The July/August 2020 issue of Marketing Science reports the study under the title “Visual Listening In: Extracting Brand Image Portrayed on Social Media”.


Liu et al. used a multi-image deep convolutional neural network – a form of Deep Learning – to predict the presence of perceptual brand attributes in the images consumers post online, for 56 brands in the apparel and beverages categories. The study checked the model’s predictions against those of human judges and found a good fit. The model, branded BrandImageNet, lets brand owners automatically monitor their brand’s portrayal on social media in real time, and thus better understand consumer perceptions of, and attitudes towards, their own and competitors’ brands.


Decades ago, Ogilvy launched the Magic Lantern, which used factor analysis to create a highly appreciated compendium of dos and don’ts for advertising in a particular category. The Magic Lantern applied multivariate tools such as factor analysis to a brand’s advertising and related the resulting factors to its market success. It is quite likely that the Magic Lantern team has since moved on to include social media imagery alongside advertising imagery, together with Deep Learning methods.


Deep Learning is also a valuable tool for analysing other aspects of user-generated content, as Timoshenko and Hauser report in their paper “Identifying Customer Needs from User-Generated Content”, published in the Jan/Feb 2019 issue of Marketing Science. The study worked on an extensive data set of 115,099 oral-care reviews posted on Amazon in the US between 1996 and 2014, randomly sampling 12,000 sentences split into an initial set of 8,000 and a second set of 4,000. A convolutional neural network was then used to filter out non-informative and repetitive content.

The study compared the customer needs identified through Deep Learning analysis of user-generated content (UGC) with those identified by professional researchers conducting industry-standard experiential interviews. In summary, UGC surfaced the vast majority of customer needs (97%), opportunities for product improvement (92%) and hidden opportunities (92%). In addition, the UGC-only method identified seven hidden opportunities, while the interview-only method identified two. As user-generated content explodes, Deep Learning methods are proving very useful in mining this expanding treasure trove to increase marketing efficiency and effectiveness.
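The filtering step can be sketched with a crude stand-in. The paper used a trained convolutional network; the word-overlap heuristic below is not their method, just a minimal illustration of the same idea – drop sentences that are too short to be informative or near-duplicates of ones already kept. The review sentences and the thresholds are invented.

```python
# Simplified stand-in for the filtering step (the original study used a
# trained convolutional neural network; this is a toy word-overlap
# heuristic). All sentences and thresholds are invented.

def words(s):
    return set(s.lower().split())

def jaccard(a, b):
    # Word-set overlap: |A ∩ B| / |A ∪ B|.
    return len(a & b) / len(a | b)

sentences = [
    "The bristles are too hard for my gums",
    "Bristles are way too hard for sensitive gums",   # near-duplicate
    "Great product",                                  # uninformative
    "Battery dies after two weeks of daily use",
]

kept = []
for s in sentences:
    too_short = len(words(s)) < 4                     # likely uninformative
    duplicate = any(jaccard(words(s), words(k)) > 0.4 for k in kept)
    if not too_short and not duplicate:
        kept.append(s)

print(kept)  # the two distinct, informative sentences survive
```

A trained network learns a far subtler notion of "informative" than word overlap, which is exactly why the filtering stage was worth a neural model at the paper’s scale of 115,099 reviews.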


And as the Internet of Things (IoT) develops through the age of AI, Deep Learning will play a more significant part in product design. A paper titled “Unsupervised Learning for Product Use Activity Recognition: An Exploratory Study of a ‘Chatty Device’” by Nemitari, Khanesar, Burnap and Branson, published in the journal Sensors, offers a fascinating insight into this area.


In conclusion, advanced statistical analysis will remain more cost-efficient and effective than Deep Learning for structured data. But in a world where “rich data” – unstructured, multimedia, user-generated content on brands and products – is exploding, Deep Learning will emerge as a valuable tool for increasing marketing efficiency and effectiveness. And as IoT matures, Deep Learning techniques will be needed to make effective use of the continuous chatter of embedded sensors.

