Measuring Advertising Quality on Television: Deriving Meaningful Metrics from Audience Retention Data

Dan Zigmond, Google, Inc. ([email protected])
Sundar Dorai-Raj, Google, Inc. ([email protected])
Yannet Interian, Google, Inc. ([email protected])
Igor Naverniouk, Google, Inc. ([email protected])

This article introduces a measure of television ad quality based on audience retention, using logistic regression techniques to normalize such scores against expected audience behavior. By adjusting for features such as time of day, network, recent user behavior, and household demographics, we are able to isolate ad quality from these extraneous factors. We introduce the model used in the current Google TV Ads product and two new competing models that show some improvement. We also devise metrics for calculating a model's predictive power and variance, allowing us to determine which of our models performs best. We conclude with discussions of retention score applications for advertisers to evaluate their ad strategies and as a potential aid in future ad pricing.

DOI: 10.2501/S0021849909091090

INTRODUCTION

In recent years, there has been an explosion of interest in collecting and analyzing television set-top box (STB) data, also called "return-path" data (Bachman, 2009). As U.S. television moves from analog to digital signals, digital STBs are increasingly common in American homes. Where these are attached to some sort of return path (as is the case in many homes subscribing to cable or satellite TV services), these data can be aggregated and licensed to companies wishing to measure television viewership.

Advances in distributed computing make it feasible to analyze these data on a massive scale. Whereas previous television measurement relied on panels consisting of thousands of households, data can now be collected and analyzed for millions of households. This holds the promise of providing accurate measurement for much (and perhaps all) of the niche TV content that eludes current panel-based methods in many countries.

In addition to using these data for raw audience measurement, it should be possible to make more qualitative judgments about the content—and specifically the advertising—on television. In much the same way that online advertisers frequently measure their success through user-response metrics such as click-through rate (CTR), conversion rate (Richardson, Dominowska, and Ragno, 2007), and bounce rate (Sculley, Malkin, Basu, and Bayardo, 2009), Google has been exploring how to use STB measurement to design equivalent measures for TV.

Past attempts to provide quality scores for TV ads have typically relied on smaller constructed panels and focused on programming with very large audiences. For example, for the 2009 Super Bowl, Nielsen published likeability and recall scores for the top ads (Nielsen Inc., 2009). The scores were computed using 11,466 surveys, and they reported on the five best-liked ads and the five most-recalled ads.

In this article, we define a rigorous measure of audience retention for TV ads that can be used to predict future audience response for a much larger range of ads. The primary challenge in designing such a measure is that many factors appear to impact STB tuning during ads, making it difficult to isolate the effect of the specific ad itself on the probability that a STB will tune away. We propose several ways of modeling such a probability. To the best of our knowledge, this is the first attempt to derive a measure of TV ad quality from large-scale STB data.

SECOND-BY-SECOND MEASUREMENT

Google aggregates data—collected and anonymized by DISH Network LLC—describing the precise second-by-second tuning behavior of television STBs in millions of U.S. households.[1] These data can be combined with detailed airing logs for thousands of TV ads to estimate second-by-second fluctuations in audience during TV commercials every day.

[1] These anonymous STB data were provided to Google under a license by the DISH Network LLC.

For example, audiences fluctuate during a typical commercial break on a major U.S. cable television network (as shown in Figure 1). The total estimated audience drops by approximately 5 percent soon after the ads begin at 8:19 am (shown by a hollow dot). Google inserted ads at approximately 1 minute into this break (shown by the shaded area), during which there was a slight net increase in the total audience. After Google's ads, the regular programming resumed, and the audience size gradually returned to nearly the prior levels within the first two minutes.

The lower plot shows the level of tuning activity across this same timeline. Tune-away events (solid line) peak at the start of the break, whereas tune-in (dashed line) is strongest once the programming resumes. Smaller peaks of tune-away events also occur at the start of the Google-inserted ads.

Figure 1: Pod Graph of STB Tune-In/Out Events on a Major Network
Note: The number of viewers drops roughly 5% after the advertising break starts (top plot, percentage of audience from 8:19 am to 8:22 am). The number of tune-out events (solid line; bottom plot) is strongly correlated with the beginning of the pod (i.e., the advertising break). Toward the end of the pod, we also see an increase in tune-in events (dashed line; bottom plot).

TUNING METRICS

These raw data can be used to create more refined metrics of audience retention, which in turn can be used to gauge how appealing and relevant commercials appear to be to TV viewers. One such measure is the percentage initial audience retained (IAR): how much of the audience that was tuned to an ad when it began airing remained tuned to the same channel when the ad completed.

In many respects, IAR is the inverse of online measures like CTR. For online ads, passivity is negative: advertisers want users to click through. This is somewhat reversed in television advertising, in which the primary action a user can take is a negative one: to change the channel. We see broad similarities, however, in the propensity of users to take action in response to both types of advertising (see Figure 2). In January 2009, the distribution of tune-away rates (1 − IAR) for 182,801 TV ads was broadly similar to the distribution of CTR for a comparable number of randomly selected paid search ads that also ran that month. Although the actions being taken are quite different in the two media, the two measures show a comparable range and variance.

Figure 2: Tune-Away Rate Distribution for TV Ads (182,801 Google TV ads with at least 1,000 impressions, January 2009)
Note: For most ads, roughly 1% to 3% of the viewers at the beginning of the ad tune away before the end of the ad.
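To make the computation concrete, the sketch below (not from the article) tabulates per-airing IAR and tune-away rate in R. The data frame `views`, with one row per STB and airing, and its logical columns are hypothetical stand-ins for the aggregated STB records described above.

```r
# Per-STB flags: present when the ad started, and still tuned when it ended
views$n_start    <- as.integer(views$present_at_start)
views$n_retained <- as.integer(views$present_at_start & views$retained_throughout)

# Aggregate to one row per ad airing
per_airing <- aggregate(cbind(n_start, n_retained) ~ airing_id,
                        data = views, FUN = sum)

per_airing$iar            <- per_airing$n_retained / per_airing$n_start
per_airing$tune_away_rate <- 1 - per_airing$iar
```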

THE BASIC MODEL

Tuning metrics, like IAR, can be useful in evaluating TV ads. We have found, however, that these metrics are highly influenced by extraneous factors such as the time of day, the day of the week, and the network on which the ads were aired. These are nuisance variables and make direct comparison of IAR scores very difficult. Rather than using these scores directly, we have developed a model for normalizing the scores relative to expected tuning behavior.

Definition

For each airing, we calculate the fraction of the initial audience retained during a commercial. This is calculated by taking the number of TVs tuned to an ad when it began that then remained tuned throughout the ad airing, divided by the number tuned at the beginning (see Equation 1).

When an ad does not appeal to a certain audience, those viewers will vote against it by changing the channel. By including only those viewers who were present when the commercial started, we hope to exclude some who may simply be channel surfing.

    IAR = \frac{\text{Audience that viewed the whole ad}}{\text{Audience at the beginning of the ad}}    (1)

We can interpret IAR as a probability, with 1 − IAR the probability of tuning out from an ad. However, as explained, raw, per-airing IAR values are difficult to work with because they are affected by the network, day part, and day of the week, among other factors. To isolate these factors from the creative (ad), we define the expected IAR of an airing (see Equation 2):

    \widehat{IAR} = E(IAR \mid \hat{\theta})    (2)

where \hat{\theta} is a vector of features extracted from an airing, which excludes any features that identify the creative itself; for example, hour of the day and TV network are included but not the specific campaign or customer. We then define the IAR residual to be a measure of the creative effect (see Equation 3):

    \text{IAR residual} = IAR - \widehat{IAR}    (3)

There are a number of ways to estimate (2), several of which will be discussed in this article.

Using Equation 3, we can define underperforming airings as the airings with IAR residual below the median. Now that we have a notion of underperforming airings, we can formally define the retention score (RS) for each creative as one minus the fraction of its airings that are underperforming (see Equation 4):

    RS = 1 - \frac{\text{Number of underperforming airings}}{\text{Total number of airings}}    (4)
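To make Equations 3 and 4 concrete, the following R sketch shows one way per-creative retention scores could be assembled once per-airing IAR and an expected IAR are available. The data frame `airings` and its column names (iar, iar_hat, creative_id) are hypothetical stand-ins, not code from the article.

```r
# Equation 3: residual = observed IAR - expected IAR for each airing
airings$residual <- airings$iar - airings$iar_hat

# An airing "underperforms" if its residual falls below the overall median
underperforming <- airings$residual < median(airings$residual)

# Equation 4: retention score = 1 - fraction of a creative's airings
# that underperform
retention_score <- tapply(underperforming, airings$creative_id,
                          function(u) 1 - mean(u))
```

By construction, roughly half of all airings are flagged as underperforming, so creatives whose airings rarely fall below the median earn scores near 1.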

The Basic Model

The basic model we currently use to predict expected IAR (\widehat{IAR}) is a logistic regression of the following form:

    \ln\left(\frac{IAR}{1 - IAR}\right) \sim \text{Network} + \text{Ad Duration} + \text{WeekDay} + \text{DayPart}    (5)

where IAR is given by (1) and each feature on the right-hand side is a collection of parameters. Here, "Network", "WeekDay", and "DayPart" are categorical variables, whereas "Ad Duration" is treated as numeric.

Parameter estimates for (5) are obtained using the glmnet package in R (Friedman, Hastie, and Tibshirani, 2009). The glmnet algorithm shrinks insignificant or correlated parameters to zero using an L1 penalty on the parameter estimates. This avoids the pitfalls of classic variable selection, such as stepwise regression.
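As a rough illustration of the fitting step described above (and not the authors' actual code), Equation 5 could be estimated with glmnet along the following lines. The data frame `airings` and its columns (n_start, n_retained, network, weekday, daypart, ad_duration) are assumptions made for the example.

```r
library(glmnet)

# Design matrix for the categorical and numeric main effects
X <- model.matrix(~ network + weekday + daypart + ad_duration,
                  data = airings)[, -1]  # drop the intercept column

# Two-column binomial response: (tuned away, retained throughout)
y <- cbind(airings$n_start - airings$n_retained, airings$n_retained)

# alpha = 1 requests the L1 (lasso) penalty; lambda is chosen by cross-validation
fit <- cv.glmnet(X, y, family = "binomial", alpha = 1)

# Expected IAR for each airing
airings$iar_hat <- as.vector(predict(fit, newx = X,
                                     s = "lambda.min", type = "response"))
```

The L1 penalty drives coefficients of weak or redundant features toward zero, which is the behavior the article relies on in place of stepwise selection.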

Retention Score and Viewer Satisfaction

To understand the qualitative meaning of retention scores, we conducted a simple survey of 78 Google employees. We asked each member of this admittedly unrepresentative sample to evaluate 20 television ads on a scale of 1 to 5, where 1 was "annoying" and 5 was "enjoyable." We chose these 20 test ads such that 10 of them were considered "bad" and the remaining 10 were considered "good" (see Table 1).

Table 1: Using Retention Scores to Categorize Ads as Either "Bad" or "Good"
  Ad Quality    Retention Score
  "Good"        >0.75
  "Bad"         <0.25
Note: These categories were matched empirically with a human evaluation survey.

Ads that scored at least "somewhat enjoyable" (i.e., mean survey score greater than 3.5) had an average retention score of 0.86 across all creatives (see Table 2). Ads that scored at the other end of the spectrum (mean less than 2.5) had an average retention score of 0.30. Ads with survey scores in between these two had an average retention score of 0.62. These results suggest our scoring algorithm and the categories defined in Table 1 correlate well with how a human being might rank an advertisement.

Table 2: Correlating Retention Score Rankings with Human Evaluations
  Human Evaluation                 Mean RS
  At least "somewhat engaging"     0.86
  "Unremarkable"                   0.62
  At least "somewhat annoying"     0.30
Note: Survey scores of 3.5 or above (or "somewhat engaging") received retention scores averaging 0.86, whereas survey scores of 2.5 or below (or "somewhat annoying") received retention scores averaging 0.30. These numbers match well with the categories defined in Table 1.

In another view of these data (see Figure 3), the 20 ads are ranked according to their human evaluation, with the highest-scoring ads on top. The bars are colored according to which set of 10 they belonged to, with gray ads coming from the group that consistently outperformed the model and black ads coming from the group that underperformed. Although the correlation is far from perfect, we see fairly good separation of the "good" and "bad" ads, with the highest survey scores tending to go to the ads with the best retention scores.

Figure 3: Correlating Retention Score Rankings with Human Evaluations
Note: The length of each bar represents the average of the survey respondents' scores (1 = "annoying" ... 5 = "enjoyable"). Light gray bars correspond to ads with "good" quality (audience retention higher than expected) and dark bars to ads with "bad" quality (audience retention lower than expected), as determined in Table 1. The ads shown, from highest to lowest survey score, are: Video Chat – Cute Kid, Trendy Jeans #1, Trendy Jeans #2, Child PSA, Fancy Car, Online Education, Social Networking, Personal Investigator, DIY Product, Hunting Gadget, Musical Instrument, Emergency Alert System, Singles Website, Music Download Website, Talent Agency, Lawyer #1, Household Cleaner, Travel Agency, Bank Loans, Lawyer #2. Though the correlation between retention scores and the human evaluation is not perfect, a prominent relationship is clearly visible in this small study.

Live Experiments and Model Validation

To test the validity of our model further, we ran several live experiments. In these experiments, we identified two ads: one with a high retention score and one with a low score. We then placed the two ads side by side, in a randomized order, on several networks. Placing ads in the same commercial break or pod ensured most other known extraneous features (e.g., time of day, network) were neutralized, so comparisons made between the ads would be fair (see Figure 4 for our first such experiment, conducted in 2008).

After running the ad pairs for about a week, we determined whether the retention scores were an accurate predictor of which ad would retain a larger percentage of the audience by observing how often the ad with the higher retention score had the larger IAR. In this case, the prediction was nearly perfect, with only one pair incorrectly ordered.

Figure 4: Results from a Live Experiment in 2008
Note: Each point represents the IAR of the good ad (y-axis) versus the IAR of the bad ad (x-axis). Only one of the pairs had the IAR of the bad ad greater than the IAR of the good ad. We randomized each pair to determine which ad comes first, so there is no pod-order bias.

The purpose of running these live experiments was to determine the accuracy of our retention score model. Ad pairs with little difference in retention score (e.g., <0.1) will be virtually indistinguishable in terms of relative audience retention. Conversely, pairs with large differences in retention score (e.g., >0.7) should almost always have higher audience retention associated with the ad with the higher score. To test our retention score's ability to sort a wider range of ads, we produced a plot that relates our predicted retention scores back to the raw data (see Figure 5)—a qualitative method of determining how well our retention scores actually sort creatives, both in the structured experiments described earlier and in ordinary airings. As expected, the difference in retention score is proportional to the likelihood of the higher-scoring ad retaining more audience.

Figure 5: Demonstrating the Predictive Power of Our Retention Score Model
Note: Each point (circle) represents the percentage of times that two ads within the same pod (ad break) agree with their respective retention scores, plotted against the difference in retention scores (RS_high − RS_low) within a pod. For example, of all ad pairs that have an approximately 0.2 difference in retention scores, roughly 70% of those pairs have the IAR of the lower-ranked ad smaller than the IAR of the higher-ranked ad. We superimposed our live experiments onto the plot (triangles), along with a logistic-regression trend line, to show a general agreement with the trend.

We currently have three competing models for obtaining retention scores. All three models use IAR as a response in a logistic regression. They differ, though, either in their lists of features or in the type of regularization used to prevent overfitting.

• Basic Model: Estimates IAR using network, weekday, daypart, and ad duration as main effects in a logistic regression model.
• User-Behavior Model: Same as the basic model but incorporates the behavior of the TV viewer 1 hour prior to an airing. More specifically, we count the number of tune-out events in the hour prior to the ad and whether there was a tune-out event in the previous 10 minutes or previous 1 minute before the ad airs. These additional tune-out measures attempt to separate active users (i.e., more likely to tune away) from passive ones.
• Demographics Model: Same as the basic model but splits households according to 113 demographic groups.

As noted in the Basic Model section previously, we use glmnet to obtain parameter estimates for the basic model. For the user-behavior model, we also apply the same algorithm. For the demographics model, however, we employ a slightly different type of regularization by using principal components logistic regression (PCLR) (Aguilera, Escabias, and Valderrama, 2006). PCLR allows for highly correlated parameters in the model, in this case demographic group and network.

The data we are using to compare the three models are from June 2009. For the sake of brevity, we limit ourselves to the 25 networks with the highest median viewership during that month. This leads to a dataset containing 38,302 ads from which we build our models.

User Behavior

For a typical ad, one to three percent of viewers present at the beginning of the ad tune out before the end of the ad (Interian et al., 2009). The User-Behavior Model adjusts IAR by splitting the audience base into active and passive groups. Our hypothesis is that active users are more likely to tune out from an ad they do not like, whereas passive users will watch anything regardless of the creative. In fact, active users typically have a much lower IAR than passive users (see Figure 6).

By adding to our model parameters that capture recent tuning behavior for every STB, we are able to predict more accurately when a viewer will tune out during an ad. The variance of active users is much higher than that of passive users, simply because we have observed IAR further from the upper bound of one (see Figure 6, right panel). This increased variance in the response improves our model and provides less noisy predictions of IAR.

Figure 6: Active Users (i.e., Viewers Who Changed Channels Within 10 Minutes Prior to the Ad) Are More Likely Than Passive Users to Tune Away from an Ad
Note: The left plot displays the aggregated IAR from 0 events prior to an ad (highest IAR) to 20 or more events in the hour before the ad (lowest IAR); each line represents one of the 25 networks used in this study. The right plot shows density functions of IAR for airings in June 2009, split by active (solid) and passive (dashed) users. The IAR for active users has a much larger variance because viewers in this group are more likely to change channels during an ad.
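As a small illustration of the recent-behavior features described above (names and data layout are hypothetical, not from the article), the counts for a single STB and airing might be derived as follows, given that STB's tune-out timestamps and the airing's start time in seconds:

```r
# Number of tune-out events in a window ending at the ad start
events_before <- function(tuneout_times, ad_start, window_sec) {
  sum(tuneout_times >= ad_start - window_sec & tuneout_times < ad_start)
}

tuneouts_last_hour <- events_before(tuneout_times, ad_start, 3600)
tuneout_last_10min <- events_before(tuneout_times, ad_start, 600) > 0
tuneout_last_1min  <- events_before(tuneout_times, ad_start, 60) > 0
```

These three quantities, computed per STB and airing, are the kind of covariates the user-behavior model adds to the basic feature set.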

Demographic Groups

Like the user-behavior model described earlier, we also believe different demographic groups react differently to ads. For example, in an IAR comparison for single men versus single women, almost regardless of the creative, women tend to tune away less than men (see Figure 7).

Figure 7: Average IAR for Creatives in June 2009
Note: The creatives (x-axis) are sorted from "worst" (low IAR) to "best" (high IAR) across all demographics. The IAR for single women with no children tends to be higher than the IAR for single men with no children, with very few exceptions. The IAR for all STBs (including single men and women) tends to fall between the two.

For our demographics model, we include gender of adults, presence of children, marital and cohabitation status, and age of oldest adult as additional features. These categories were identified by an internal data-mining project, which ranked demographic groups according to their relative impact on IAR. Other demographics, such as number of adults present and declared interest in sports TV, have promise for improving our model. The makeup of the included features is described in Table 3.

Table 3: Demographic Groups Measured for Each Household
  Gender   Kids      Married   Single    Age
  Male     Yes       Yes       Yes       18–24
  Female   No        No        No        25–34
  Both     Unknown   Unknown   Unknown   35–44
                                         45–54
                                         55–64
                                         65–74
                                         75+
Note: This table describes 113 possible groups, including groups where the demographic was not measured. Note that Single is not the opposite of Married; Single implies no other adult living in the household, so two cohabiting adults are both not Single and not Married.

We also have found that certain demographics are a partial proxy for network. For example, older adults watch more cable news networks, whereas households with children have higher viewership of children's networks. This observation suggests that network, one of the features in our Basic Model, might offer redundant information provided more succinctly by demographics. Including demographic information in the same feature list as network may, therefore, lead to over-parameterization of the model.

Redundancy in the network viewership and demographic groups leads to collinearities in our model formulation. Fitting a model with known correlations will lead to misleading parameter estimates (Myers, 1990). To overcome these problems, we use PCLR as an alternative to glmnet. With PCLR, we have more control over the model with respect to known correlations.

For the data discussed in this article, the demographics model contains 144 possible parameters, including the intercept, 112 parameters from the demographic groups, and the remaining parameters from network, daypart, and weekday differences. In PCLR, we drop the principal components with little variation, in this case the last 44 dimensions. This leaves us estimating only 100 parameters and thus greatly reduces the complexity of the model. All further comparisons of the demographics model in the next section are based on the first 100 principal components.
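A minimal sketch of principal components logistic regression in the spirit of Aguilera, Escabias, and Valderrama (2006) is shown below. It is an illustration under assumed column names (demo_group, network, daypart, weekday, n_start, n_retained), not the authors' implementation.

```r
# Feature matrix with the correlated demographic and schedule features
X <- model.matrix(~ demo_group + network + daypart + weekday,
                  data = airings)[, -1]

# Principal components of the (standardized) feature matrix
pc <- prcomp(X, center = TRUE, scale. = TRUE)
k  <- min(100, ncol(pc$x))   # keep only the leading components
scores <- pc$x[, 1:k]

# Ordinary logistic regression on the retained components
y <- cbind(airings$n_retained, airings$n_start - airings$n_retained)
pclr_fit <- glm(y ~ scores, family = binomial)
iar_hat  <- fitted(pclr_fit)
```

Dropping the trailing, low-variance components plays the role of regularization here, mirroring the reduction from 144 to 100 parameters described in the text.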

COMPARING MODELS

We have devised four metrics to describe the quality of the models described in the previous sections. Although these metrics tend to agree in ranking models, each measures a different and important aspect of a model's performance.

Dispersion

The dispersion parameter in logistic regression acts as a goodness-of-fit measure by comparing the variation in the data to the variation explained by the model (McCullagh and Nelder, 1989). The formula for dispersion is given by Equation 6:

    \hat{\sigma}^2 = \frac{1}{N - p} \sum_{i=1}^{N} \frac{(y_i - n_i \hat{y}_i)^2}{n_i \hat{y}_i (1 - \hat{y}_i)}    (6)

where N is the number of observations, p is the number of parameters fit in our model, y_i is the observed IAR, \hat{y}_i is the expected IAR from our model, and n_i is the number of viewers at the beginning of an ad. The closer Equation 6 is to 1, the better the fit.
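Read as a Pearson-style statistic, Equation 6 is straightforward to compute once a model is fit. In the sketch below (illustrative names only), y_count is taken to be the number of STBs retained through each airing, i.e., n times the observed IAR:

```r
# Dispersion statistic of Equation 6
dispersion <- function(y_count, yhat, n, p) {
  N <- length(y_count)
  sum((y_count - n * yhat)^2 / (n * yhat * (1 - yhat))) / (N - p)
}
```

Values far above 1 indicate more variation in the data than the fitted binomial model explains.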

Captured Variance

A reasonable model should minimize the variance within a creative while maximizing the variance between creatives. Using the residuals r given by (3), "captured variance" is given by Equation 7:

    \frac{E(\mathrm{Var}_c(r))}{\mathrm{Var}(E_c(r))}    (7)

where Var_c and E_c are the variance and expectation of residuals within a creative c. The expression in (7) should be small for better models. Or more specifically, a good model will have small residual variance within creatives (numerator) and a large residual variance between creatives (denominator).
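Equation 7 can be computed directly from the per-airing residuals, for example as in this sketch, where r is the vector of IAR residuals and creative is a factor identifying the creative for each airing (both hypothetical inputs):

```r
within_var  <- tapply(r, creative, var)          # Var_c(r) for each creative c
between_var <- var(tapply(r, creative, mean))    # Var(E_c(r)) across creatives
captured_variance <- mean(within_var, na.rm = TRUE) / between_var
```

Smaller values are better: residuals should be tight within a creative but spread out across creatives.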

Predictive Strength

Predictive strength compares models through their respective retention scores. Figure 8 shows the predictive strength for the basic model. In this plot, we see that as the differences in retention score increase, the respective ads agree more often in terms of IAR. So that comparisons of IAR are fair (i.e., extraneous variables are minimized), each ad pair considered is within a pod.

To compute the metric, we draw a curve through the scatter plot using logistic regression. From the fitted line, we determine the point on the y-axis that corresponds to the median of all retention score differences. The larger the predictive strength (i.e., the steeper the curve), the better the model is at sorting ads that are relatively close together in terms of retention scores.

Figure 8: Predictive Strength Is Computed from the Fitted Curve
Note: The curve is identical to Figure 5 (with the live experiments removed). We first use the curve to determine the median difference in retention scores of all ad pairs. From this difference, and using the logistic regression trend line, we estimate the percentage of ad pairs for which the aggregated IAR agrees with the retention score difference. For the basic model (pictured), the median difference is 0.17, and 73% of the ad pairs agree with the retention score ordering at that difference.
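A sketch of this computation, assuming a hypothetical data frame `pairs` of within-pod ad pairs with the retention-score difference and an indicator of whether the higher-scored ad also had the higher IAR:

```r
# Logistic curve of agreement as a function of retention-score difference
fit <- glm(iar_agrees ~ rs_diff, family = binomial, data = pairs)

# Predictive strength: fitted agreement rate at the median score difference
med <- median(pairs$rs_diff)
predictive_strength <- predict(fit, newdata = data.frame(rs_diff = med),
                               type = "response")
```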

Residual Permutation

For the last metric, we randomly reorder the residuals from our model and recalculate the retention scores according to (4). We then measure the area between the distributions of the new retention scores and the observed retention scores. The result is interpreted as the difference between determining scores using our model and selecting scores at random. The greater the difference, the better our model is at producing scores that do not look random (see Figure 9).

Figure 9: Empirical Distribution of Observed Retention Scores Versus the Scores Determined from Permuting the Model Residuals
Note: The plot shows the cumulative probability of the observed and permuted retention scores. The shaded area between the two distributions is our permutation metric, which should be large for better models. The distribution of retention scores shown in the figure is from the basic model.
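One way to compute the permutation metric is to compare the empirical cumulative distributions of the observed and permuted retention scores, as in this sketch. The data frame `scores` (one row per creative, with columns rs and rs_perm, the latter recomputed after shuffling the residuals) is hypothetical:

```r
grid   <- seq(0, 1, by = 0.001)
F_obs  <- ecdf(scores$rs)(grid)
F_perm <- ecdf(scores$rs_perm)(grid)

# Approximate area between the two distribution functions
permutation_metric <- sum(abs(F_obs - F_perm)) * 0.001
```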

Model Comparisons

Using the four defined metrics from the previous sections, we compute the relative differences between our three models (see Table 4). The user model is the best according to the metrics we have defined, followed by the demographics model and the basic model. The greatest improvements over the basic model thus far have been in dispersion, whereas predictive strength has only slightly improved.

Table 4: Comparison Metrics for Each Model
  Metric                Basic   User    Demographics
  Dispersion            41.8    3.2     7.5
  Captured variance     5.2     4.1     3.9
  Predictive strength   73%     75%     70%
  Permutation           43.9    53.6    50.8
Note: For three of the four metrics, the user-behavior model has the best value. The demographics model is first or second on three of the four comparisons. The basic model fares the worst among the three.

POSSIBLE APPLICATIONS

We have started using retention scores for a variety of applications at Google. These scores are made available to advertisers, who can use them to evaluate how well their campaigns are retaining audience. This may be a useful proxy for the relevance of their ads in specific settings.

For example, Figure 10 shows the retention scores for an automotive advertiser, compared with the average scores for other automotive companies advertising on television. Separate scores were calculated for each network on which this advertiser aired. We can see not only significant differences in the retention scores for these ads but also differences in the relative scores compared against the industry average. On the National Geographic Channel, for example, this advertiser's retention scores exceed those of the industry average by a significant margin. On Country Music Television, this advertiser's scores are lower than the industry average, although there is substantial overlap of the 90 percent confidence intervals. This sort of analysis can be used to suggest ad placements where viewers seem to be more receptive to a given ad.

Audience loss during an ad can also be treated as an economic externality, because it denies viewers to later advertisers and potentially annoys viewers. Taking this factor into account might yield a more efficient allocation of inventory to advertisers (Kempe and Wilbur, 2009), and might create a more enjoyable experience for TV viewers.

Figure 10: Retention Scores for Ads Run by an Auto Manufacturer Compared to Their Competitors' Ads
Note: Some networks have better scores than others, which provides important feedback to the advertisers. The length of each bar represents a 90% confidence interval on the score, with separate bars for the auto manufacturer and its industry competitors. The networks compared include USA Network, National Geographic Channel, TV Guide Network, ESPN, Do-It-Yourself Network, Investigation Discovery, ESPN2, A&E Network, NFL Network, History Channel, ESPN Classic, Biography Channel, TNT (Turner Network TV), Versus, Great American Country, Discovery, Travel Channel, TLC (The Learning Channel), Food Network, TBS (Turner Broadcasting System), Spike TV, and Country Music Television.

CONCLUSIONS AND FUTURE WORK

The availability of tuning data from millions of STBs, combined with advances in distributed computing that make analysis of such data commercially feasible, allows us to understand for the first time the factors that influence television tuning behavior. By analyzing the tuning behavior of millions of individuals across many thousands of ads, we can model specific factors and derive an estimate of the tuning attributable to a specific creative. This work confirms that creatives themselves do influence audience viewing behavior in a measurable way.

We have shown three possible models for estimating this creative effect. The resulting scores—the deviation of an ad's audience from the expected behavior—can be used to rank ads by their appeal, and perhaps relevance, to viewers and could ultimately allow us to target advertising to a receptive audience much more precisely. We have developed metrics for comparing the models themselves, which should help ensure a steady improvement as we continue experimenting with additional data and new statistical techniques. We hope in the future to incorporate data from additional television service operators and to apply similar techniques to other methods of video ad delivery. We would also like to expand the small internal survey we conducted into a more robust human evaluation of our scoring results.

In the long run, we hope this new style of metric will inspire and encourage better and more relevant advertising on television. Advertisers can use retention scores to evaluate how campaigns are resonating with customers. Networks and other programmers can use these same scores to inform ad placement and pricing. Most important, viewers can continue voting their ad preferences with ordinary remote controls—and using these statistical techniques, we can finally count their votes and use the results to create a more rewarding viewing experience.

Acknowledgements

The authors thank Dish Network for providing the raw data that made this work possible and in particular Steve Lanning, Vice President for Analytics, for his helpful feedback and support. They also thank P. J. Opalinski, who helped us obtain the data disc used in this paper. Finally, they thank Kaustuv, who inspired much of this work when he was part of the Google TV Ads team.

Dan Zigmond is manager of Google's TV Ad Effectiveness and Pricing group and the founder of the Google TV Ads engineering team. He holds a BA in computational neuroscience from the University of Pennsylvania.

Sundar Dorai-Raj is a senior quantitative analyst at Google. His areas of interest include linear models and statistical computing. He has a Ph.D. in statistics from Virginia Tech.

Yannet Interian is a quantitative analyst at Google specializing in data mining. She has a Ph.D. in applied mathematics from Cornell.

Igor Naverniouk is a software engineer at Google. His work includes distributed computing and machine learning. He has an MSc in computer science from the University of British Columbia.

References

Aguilera, A. M., M. Escabias, and M. J. Valderrama. "Using principal components for estimating logistic regression with high-dimensional multicollinear data." Computational Statistics & Data Analysis 50 (2006): 1905–24.

Bachman, K. "Cracking the Set-Top Box Code." 2009. [URL: http://www.mediaweek.com/mw/content_display/news/media-agencies-research/e3i8fb28a31928f66a5484f8ea330401421]

Friedman, J., T. Hastie, and R. Tibshirani. "glmnet: Lasso and elastic-net regularized generalized linear models. R package version 1.1-3." 2009. [URL: http://www-stat.stanford.edu/~hastie/Papers/glmnet.pdf]

Interian, Y., S. Dorai-Raj, I. Naverniouk, et al. "Ad quality on TV: Predicting television audience retention." In Proceedings of the Third International Workshop on Data Mining and Audience Intelligence for Advertising. Paris: Association for Computing Machinery, 2009.

Kempe, D., and W. C. Wilbur. "What can television networks learn from search engines? How to select, price and order ads to maximize advertiser welfare." 2009. [URL: http://ssrn.com/abstract=1423702]

McCullagh, P., and J. A. Nelder. Generalized Linear Models. London: Chapman and Hall, 1989.

Myers, R. H. Classical and Modern Regression with Applications, 2nd ed. Belmont, CA: Duxbury Press, 1990.

Nielsen Inc. "Nielsen Says Bud Light Lime and Godaddy.Com Are Most-Viewed Ads During Super Bowl XLIII." 2009. [URL: http://en-us.nielsen.com/main/news/news_releases/2009/February/nielsen_says_bud_light]

Richardson, M., E. Dominowska, and R. Ragno. "Predicting clicks: Estimating the click-through rate for new ads." In WWW '07: Proceedings of the 16th International Conference on World Wide Web. New York: Association for Computing Machinery, 2007.

Sculley, D., R. Malkin, S. Basu, and R. J. Bayardo. "Predicting bounce rates in sponsored search advertisements." In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Paris: Association for Computing Machinery, 2009.
