Do Viewers Care? Understanding the impact of ad creatives on TV viewing behavior

Yannet Interian, Kaustuv, Igor Naverniouk, P. J. Opalinski, Sundar Dorai-raj, and Dan Zigmond*
Google, Inc.

Abstract

Google aggregates data, collected and anonymized by the DISH Network L.L.C., describing the precise second-by-second tuning behavior for millions of television set-top boxes, covering millions of US households, for several thousand TV ad airings every day. From this raw material, Google has developed several metrics that can be used to gauge how appealing and relevant commercials appear to be to TV viewers. While myriad factors impact tuning during ads, we find a measurable effect attributable to the ad creative itself. Although this effect appears modest, it demonstrates that viewers do react differentially to TV advertising, and that these reactions can then be used to rank creatives by their apparent relevance to the viewing audience.

1 Why viewers tune away

Google has developed several metrics based on second-by-second tuning data collected from several million US television set-top boxes[1]. This paper focuses on the most promising of these, the percentage of initial audience retained (%IAR) during a commercial. This is calculated as the percentage of the TVs tuned to an ad when it began that then remained tuned throughout the ad airing[2]. The intuition behind this metric is that when an ad does not appeal to a certain audience, viewers will vote against it by changing the channel. By including only those viewers who were present when the commercial started, we hope to exclude some who may be channel surfing. However, even these initial viewers may tune away for other reasons. For example, a viewer may be finished watching the current program on one channel and looking for something else to watch.

* Please address correspondence to [email protected].

[1] These anonymous set-top box data were provided to Google under license by the DISH Network L.L.C., and Google gratefully acknowledges their assistance in making this work possible, and particularly Steve Lanning, their Vice President for Analytics, for his helpful feedback and support.

[2] We have also calculated these metrics based on households rather than televisions. The results are nearly identical because we find it is unusual for multiple TVs in the same household to be watching the same ad.
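To make the definition concrete, here is a minimal sketch of how %IAR might be computed for a single airing from second-by-second tuning records. The record layout (box_id, second, channel) and all names are our own illustrative assumptions, not the schema used in this work.

```python
from collections import defaultdict

def percent_iar(tuning_records, ad_channel, ad_start, ad_end):
    """Percentage of initial audience retained (%IAR) for one ad airing.

    tuning_records: iterable of (box_id, second, channel) tuples giving the
        channel each set-top box was tuned to at each second (hypothetical schema).
    ad_channel: channel on which the ad aired.
    ad_start, ad_end: first and last second of the airing (inclusive).
    """
    seconds_on_channel = defaultdict(set)   # seconds each box spent on the ad's channel
    initial_audience = set()
    for box_id, second, channel in tuning_records:
        if channel != ad_channel or not (ad_start <= second <= ad_end):
            continue
        if second == ad_start:
            initial_audience.add(box_id)    # tuned in when the ad began
        seconds_on_channel[box_id].add(second)

    if not initial_audience:
        return None  # no initial audience; %IAR is undefined

    ad_length = ad_end - ad_start + 1
    # Retained = present at the start and tuned for every second of the airing.
    retained = sum(
        1 for box in initial_audience
        if len(seconds_on_channel[box]) == ad_length
    )
    return 100.0 * retained / len(initial_audience)
```

A per-household variant, as mentioned in footnote [2], would simply key the same calculation on household IDs instead of box IDs.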


Figure 1: %IAR as a function of the minute of the hour the ad was aired.

The chart in figure 1, for example, shows %IAR values for hundreds of ad airings in June, based on the minute of the hour when the ad was aired. Although almost all airings have %IAR values above 80%, the vast majority of the lowest-scoring airings occur at minutes 28, 29, 30, 58, and 59. Because these are also typical program boundaries, we have two explanations for this phenomenon. First, many of the people tuning out at these minutes are doing so in search of new programs on other channels, not in response to a specific ad. Second, some of these low values may be attributable to DVR tuning, which also occurs largely at program boundaries. But even after removing DVR events, the program-boundary effect is still visible, suggesting that the first explanation also holds true.

Figure 2: Impact of initial audience size on %IAR.

Figure 2 shows the susceptibility of %IAR to the size of the initial audience: as the initial audience increases, the variance in %IAR decreases. Here we divided the airings into three equal-sized groups of "low", "medium", and "high" initial audience, and we show %IAR for evenings and mornings. Note that the variance decreases from the "low" initial-audience group to the "medium" and "high" groups. On the other hand, the variance for a given audience size (e.g., "high") does not change significantly across different dayparts.
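Both views are straightforward to reproduce from a per-airing summary table. The sketch below assumes a hypothetical table with columns iar, start_time, initial_audience, and daypart; the file and column names are ours, not the paper's.

```python
import pandas as pd

# Hypothetical per-airing table: one row per ad airing.
airings = pd.read_csv("airings.csv", parse_dates=["start_time"])

# Figure 1-style view: distribution of %IAR by minute of the hour.
airings["minute_of_hour"] = airings["start_time"].dt.minute
by_minute = airings.groupby("minute_of_hour")["iar"].describe()

# Figure 2-style view: split airings into equal-sized "low"/"medium"/"high"
# initial-audience groups and compare %IAR variance across groups and dayparts.
airings["audience_group"] = pd.qcut(
    airings["initial_audience"], 3, labels=["low", "medium", "high"]
)
iar_variance = airings.groupby(["audience_group", "daypart"])["iar"].var()
print(by_minute, iar_variance, sep="\n\n")
```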

Figure 3: Impact of the underlying network on %IAR.

Figure 3 shows that different networks tend to have different characteristic %IAR measurements. Viewers seem to watch some networks more passively than others. Sports networks, for example, often have low %IAR measures on their ads, while children's networks tend to have high measures. This could be a reflection of the different audiences who watch these networks (and their characteristic viewing behavior), or it could be that the content itself lends itself to different styles of viewing[3].

Figure 4: Impact of prior ad exposures on %IAR.

Prior exposure to a given ad also seems to affect %IAR, often in a somewhat non-intuitive way. Figure 4 plots the %IAR of several hundred different ads aired in the month of August. Each time an ad is aired, the audience is divided up into first-time viewers, second-time viewers, etc., based on their previous exposures. We then calculate a %IAR for each of these sub-audiences. Figure 4 shows the average %IAR across all these ads, separated into sub-audiences in this way. The pattern is striking: the more often viewers have seen an ad over the last month, the less likely they are to tune away[4].

Figure 5: Gender differences in %IAR.

We have recently begun exploring the ways demographics may also impact %IAR. For example, figure 5 plots the differing %IAR values calculated when looking only at households composed of a single female adult resident, a single male adult resident, and all households. Female households (pink triangles) consistently tune away less than male households (blue squares). The average %IAR across all households (gray circles) is generally somewhere in between these two, although for the ads with highest retention, both sets of single-adult households had lower-than-average audience retention[5]. We expect to find similar differences across viewers of differing ages and household incomes.

[3] It could also be that the ads shown on some networks are simply more engaging than other ads, although the effect is so consistent by network genre that we consider this the least likely explanation.

[4] To be clear, the causal direction here remains open to debate. It could be that prior viewership creates greater affinity for ads. But it could also be that more passive viewers are more likely to encounter ads multiple times. In other words, the cohort of single-exposure viewers may include many viewers who practice active ad avoidance, while the other cohorts contain fewer of these and so yield higher average retention.

[5] This may be due to the absence of children, by definition, in these households. As previously noted, we find that children's advertising appears to have especially high audience retention on average. This may be because children actually enjoy advertising more than adults, or, perhaps in some cases, because they cannot reach the remote control. We hope for the latter explanation.

2 Measuring the creative effect

Using many of the results above, we have built statistical models for %IAR using daypart, network, pod position, ad duration, precise time, and day of week. These models attempt to predict the %IAR for a specific airing without knowing which creative will be run. We can then compare the actual %IAR we observe to this prediction. Ads that perform as expected are "normal," while ads that consistently deviate can be considered "good" or "bad" depending on which side of the prediction they fall on. More precisely, we use the deviations from the model – the residuals – to rank creatives. We compute the fraction of the airings from a given creative that have residuals less than zero (underperforming airings), and then rank creatives using that fraction. A creative is deemed "bad" if at least 75% of its airings on a particular network are underperforming, and "good" if at least 75% of its airings are outperforming. We refer to these residuals as a "retention score" or, sometimes, a "quality score"[6].

[6] The term "ad quality" has a specific meaning in the context of Google's online advertising efforts that is generally associated with relevance: an ad with "high quality" is one that appears to be highly relevant to a given user in a given context. Although we sometimes use that term for historical reasons to describe our own work on television ads, we prefer the term "retention score," which describes more precisely what is being measured and avoids the judgmental connotations of "quality."
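A sketch of this modeling and ranking procedure is below. We use an ordinary least-squares fit of %IAR on the non-creative factors purely for illustration (the paper later refers to a logistic regression, and we omit the precise airing time for brevity); the file and column names (iar, daypart, network, pod_position, ad_duration, day_of_week, creative_id) are assumptions, not taken from the actual data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-airing table; one row per ad airing.
airings = pd.read_csv("airings.csv")

# Predict %IAR from non-creative factors only (no information about the creative).
model = smf.ols(
    "iar ~ C(daypart) + C(network) + C(pod_position)"
    " + C(ad_duration) + C(day_of_week)",
    data=airings,
).fit()

# Residual = observed %IAR minus the %IAR expected for that slot.
airings["residual"] = airings["iar"] - model.predict(airings)

# For each creative, the fraction of its airings that underperform the model.
frac_under = (
    airings.assign(under=airings["residual"] < 0)
    .groupby("creative_id")["under"]
    .mean()
)

def label(frac_underperforming):
    """Apply the 75% rule from the text (the paper applies it per network)."""
    if frac_underperforming >= 0.75:
        return "bad"
    if frac_underperforming <= 0.25:
        return "good"
    return "normal"

ranking = frac_under.sort_values().to_frame("frac_underperforming")
ranking["label"] = ranking["frac_underperforming"].map(label)
```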


Figure 6: Distribution of residuals per creative.

Figure 6 shows the distribution of residuals per creative, with the creatives sorted by the median of their residuals. The creatives marked in red (on the left) appear to be underperforming (based on the 75% standard given above), while the creatives marked in green (on the right) are outperforming (based on the same standard).

To ensure that this ranking is not an arbitrary artifact, we performed two cross-validation studies. First, we divided all the airings at random into two groups, A and B. In figure 7, we plot a point for every creative, showing its score across the airings in each group: the X axis is the fraction of that creative's group-A airings with residuals below zero, and the Y axis is the same fraction for its group-B airings. (We restricted the plot to creatives with at least 100 airings.) We see a strong correlation across the two random subsets.

Figure 7: Comparing ranks per creative for models based on two random subsets of all ad airings.

Second, in figure 8 we compare the ranking from month to month: the X axis is the fraction of residuals below zero for airings from June, and the Y axis is the same fraction for airings from July, with the residuals for each month calculated from training data for that month. Again the chart shows a clear correlation, suggesting that our creative ranking is stable.

Figure 8: Comparing ranks per creative for ad airings from two consecutive months.
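The split-half check in figure 7 can be sketched as follows. The input file is a hypothetical per-airing table that already carries the residuals from the model above; the 100-airing cutoff matches the restriction mentioned in the text, but the file name, column names, and random seed are illustrative.

```python
import numpy as np
import pandas as pd

# Hypothetical per-airing residuals: columns creative_id, residual, ...
airings = pd.read_csv("airings_with_residuals.csv")

# Randomly assign each airing to group A or B, then score every creative
# separately on each half: fraction of residuals below zero per group.
rng = np.random.default_rng(0)
airings["group"] = rng.choice(["A", "B"], size=len(airings))

half_scores = (
    airings.assign(under=airings["residual"] < 0)
    .groupby(["creative_id", "group"])["under"]
    .mean()
    .unstack("group")
    .dropna()
)

# Restrict to creatives with at least 100 airings, as in figure 7, and compare.
enough = airings.groupby("creative_id").size() >= 100
half_scores = half_scores[enough.reindex(half_scores.index, fill_value=False)]
print("split-half correlation:", half_scores["A"].corr(half_scores["B"]))
```

The month-to-month comparison in figure 8 follows the same pattern, with the two groups defined by airing month and the residuals computed from a model trained on each month's own airings.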

3 Retention scores and human evaluation

In order to understand further the meaning of the retention scores derived from the logistic regression described above, we conducted a simple survey of 78 Google employees. We asked each member of this admittedly unrepresentative sample to evaluate 20 television ads on a scale of 1 to 5, where 1 was "annoying" and 5 was "enjoyable." We chose these 20 test ads such that 10 of them had underperformed the model prediction in at least 75% of their airings (the so-called "bad ads"), and 10 of them had outperformed in at least 75% of their airings (the "good ads").

Table 1 summarizes the results. Ads that scored at least "somewhat engaging" (i.e., mean survey score greater than 3.5) averaged in the top 14th percentile of retention scores for all creatives. Ads that scored at the other end of the spectrum (mean less than 2.5) averaged in the 70th percentile. Ads with survey scores in between these two averaged in the 38th percentile.

Human evaluation                 Mean rank
At least "somewhat engaging"     14%
"Unremarkable"                   38%
At least "somewhat annoying"     70%

Table 1: Correlating retention score rankings with human evaluations.

Figure 9: Correlating retention score rankings with human evaluations.

Figure 9 gives another view of this data. Here the 20 ads are ranked according to their human evaluation, with the highest-scoring ads on top. The bars are colored according to which set of 10 they belonged to, with green ads coming from the group that outperformed the model and red ads coming from the group that underperformed. Although the correlation is far from perfect, we see fairly good separation of the "good" and "bad" ads, with the highest survey scores tending to go to the ads with the best retention scores.
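Figures like those in table 1 can be derived from the per-creative ranking above together with a table of survey responses. The bucket boundaries at 2.5 and 3.5 follow the thresholds quoted in the text; the file layouts and column names are assumptions of this sketch.

```python
import pandas as pd

# Hypothetical inputs: per-creative scores from the ranking sketch above,
# and one survey row per (ad, respondent) rating on a 1-5 scale.
scores = pd.read_csv("creative_scores.csv", index_col="creative_id")  # frac_underperforming
survey = pd.read_csv("survey.csv")                                    # creative_id, rating

mean_rating = survey.groupby("creative_id")["rating"].mean()

# Percentile rank of each creative's retention score (lower = better retention).
percentile = scores["frac_underperforming"].rank(pct=True) * 100

# Bucket surveyed ads by mean rating at the 2.5 and 3.5 thresholds quoted above.
buckets = pd.cut(
    mean_rating,
    bins=[1, 2.5, 3.5, 5],
    labels=["at least somewhat annoying", "unremarkable", "at least somewhat engaging"],
    include_lowest=True,
)
print(percentile.reindex(mean_rating.index).groupby(buckets).mean())
```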

4 Predictive tests of retention scores

If retention scores based on the residuals shown in figure 6 are measuring some intrinsic property of the ads, then it should be possible to predict future audience behavior based on them. To test this, we selected pairs of "good" and "bad" ads and then ran these back-to-back on seven different TV networks[7] on several days between December 2008 and February 2009, for a total of 66 distinct airings. Because for each airing the non-creative factors (e.g., time of day, day of week, network, etc.) were held essentially constant[8], we would expect ads with positive retention scores to retain more audience than ads with negative retention scores.

[7] The networks used were ABC Family, Bravo, Fine Living, Food Network, Home & Garden Television, The Learning Channel, and VH-1.

[8] We also alternated the order of the "good" and "bad" ads to neutralize any position bias.

Figure 10: Predicting future relative %IAR based on retention scores.

Figure 10 shows the results of these 66 airings. The Y axis gives the %IAR for the "good" ad, while the X axis gives the %IAR for the "bad" ad. (The color of the points indicates different networks on which the ads were run.) Points above the diagonal line are those in which the "good" ad retained more audience. This was the case for all 66 airings, demonstrating that retention scores calculated from our model residuals are strong predictors of future ad performance. These predictive tests represent the strongest evidence to date that our statistical models are able to isolate the impact of creatives on audience behavior, despite the significant noise introduced by non-creative factors.
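The tally behind figure 10 could be computed as in the sketch below. The file name and columns are hypothetical, and the sign test at the end is our own addition rather than an analysis reported in the paper.

```python
import pandas as pd
from scipy.stats import binomtest

# One row per back-to-back pairing of a "good" and a "bad" ad on the same break.
pairs = pd.read_csv("paired_airings.csv")   # columns: network, iar_good, iar_bad

wins = int((pairs["iar_good"] > pairs["iar_bad"]).sum())
print(f'"good" ad retained more audience in {wins} of {len(pairs)} pairings')

# Under the null hypothesis that retention scores carry no information,
# each pairing is a coin flip; a sign test quantifies how unlikely 66-for-66 is.
print("sign test p-value:", binomtest(wins, len(pairs), 0.5, alternative="greater").pvalue)
```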

5 Conclusions

Many factors influence the tuning behavior of TV audiences, making it difficult to understand the precise impact of a specific ad. However, by analyzing the tuning of millions of individuals across many thousands of ads, we can model these other factors, yield an estimate of the tuning attributable to a specific creative, and confirm that creatives themselves do influence audience viewing behavior. This retention score – the deviation from the expected behavior – can be used to rank ads by their appeal, and perhaps relevance, to viewers, and could ultimately allow us to target advertising to a receptive audience much more precisely.

In the long run, we hope these methods will inspire and encourage more relevant advertising on television. Advertisers can use retention scores to evaluate how campaigns are resonating with customers. Networks and other programmers can use these same scores to inform ad placement and pricing. Most importantly, viewers can continue voting their ad preferences with ordinary remote controls – and using these techniques, we can finally count their votes and use the results to create a more rewarding viewing experience.

