Advertising Statistics Suck

You are viewing an old revision of this post, from June 24, 2010 @ 21:18:38.

This post is a continuation of my previous post How to Value Advertising.

Specifically, it’s a reply to Andrew Eifler who posted the blog post I responded to. He raised this point:

On the subject of variables and, as you point out, there can be quite a few – i think one of the biggest issues is how we quantify presence on each media channel. Universally the units that are used are “GRPs” or Gross Rating Points which are the product of “Reach” and “Frequency” against your target audience. For advertising measurement to really progress we really need a new unit of measurement. The system of GRPs worked great when the only media options were TV, Print, and Radio – but in today’s world, with such a fragmented media landscape, there really needs to be a more fitting measure. Maybe something like “Persuasion units?” Interested to hear what you think about this.

Andrew Eifler

In general, I doubt I could come up with a decent replacement statistic, simply because the data is so poor. I agree, however, that the current statistics you use – GRPs, and also TRPs – are woefully bad.

Why GRPs Are Bad

The statistic Gross Rating Points (GRPs) is calculated by multiplying percentage reach by frequency (is that average frequency?). Now, this is all well and good if all you’re interested in is "impressions" as in "banner ad impressions." But the experience of NON-obnoxious (overlay) and NON-personalized banner ads should lead people to be VERY skeptical of the worth of impressions as something useful. Click-through rates on banner ads are what, 0.2-0.3%?

0.23% Average CTR

If "actions" (like clicking through is an action) on TV ads are similar – I wouldn’t be surprised – that’s very low. And what’s the conversion rate after that? 5%? 10%? Depressing, but I suppose it’s beside the point.

A more important observation is that (i) people will have variable marginal returns to seeing an ad repeatedly, and (ii) the distribution of view frequencies is highly unlikely to be normal across the population or segment. In the case of (i), I think the marginal returns are likely to resemble an S-curve as well; of course, if your ad is particularly irritating, the returns may turn negative after some further inflection point.
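As a purely hypothetical illustration of (i), here’s a plain logistic S-curve with invented parameters (not a fitted model): cumulative response rises slowly, accelerates, then saturates, so the marginal return of each additional exposure rises and then falls.

```python
import math

def response(n, ceiling=1.0, k=1.2, midpoint=3.5):
    """Hypothetical cumulative response after n exposures (logistic S-curve).
    All parameters are illustrative, not empirical."""
    return ceiling / (1 + math.exp(-k * (n - midpoint)))

# Marginal return of the n-th exposure: small at first, largest near the
# midpoint, then diminishing -- the S-curve shape described above.
marginal = [response(n) - response(n - 1) for n in range(1, 8)]
peak = max(range(len(marginal)), key=marginal.__getitem__) + 1
print(peak)  # 4 -- the exposure with the largest marginal effect
```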

I hope the current methodology takes (ii) into account; I would expect some members of the population to be far more likely to see an ad repeatedly. I suppose you can mitigate this by segmenting the population in the right way (e.g. segment by the number of times people have seen/are likely to see and/or respond to the advertising). Otherwise, you’re asking to be misled.
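To show why (ii) matters, here’s a toy comparison of two made-up audiences with identical average frequency – and hence identical GRPs – but wildly different effective exposure:

```python
# Two hypothetical audiences of 100 people with the SAME average frequency
# (3 views per person), but very different distributions of views.
even = [3] * 100                # everyone sees the ad exactly 3 times
skewed = [0] * 80 + [15] * 20   # a fifth of the audience sees it 15 times

# Same average frequency, and (reach x frequency) gives 300 GRPs either way:
assert sum(even) / len(even) == sum(skewed) / len(skewed) == 3.0

def share_effectively_reached(freqs, threshold=3):
    """Fraction of people who see the ad at least `threshold` times."""
    return sum(f >= threshold for f in freqs) / len(freqs)

print(share_effectively_reached(even))    # 1.0 -- everyone hits the threshold
print(share_effectively_reached(skewed))  # 0.2 -- only the heavy viewers do
```

The threshold of 3 is an invented "effective frequency"; the point is only that identical GRPs can describe completely different exposure patterns.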

The Problem of Advertising Statistics

However, there’s a deeper problem with such “industry standard” statistics: they do not measure the result; they measure the deliverable. That is, they do not measure what the customer cares about (increased sales); they measure what the advertising did. “We showed most of the people you care about this ad 3 times – and surveys (?) indicate that consumers remember the ad!”

Sure, I get that it’s the best you can do. You’re not selling increased sales; you’re selling something very specific. If the company gets increased sales, good for them; if not, well, they decided to do it in the first place. Hell, figuring out how to grow sales is (allegedly) why company executives are paid so much.

But, it would seem to me, part of empowering companies to make their own choices about how much to spend is giving them the information they care about – “How much did the advertising campaign increase my revenue and profit?” – not the deliverables (which are only really of interest internally, to the advertising company). Making the process transparent – letting the company buying advertising know what the advertising delivered to their market – may (i) seem like a good thing to do, and (ii) help justify fees to the customer, but it’s really not a very bright idea.

An Unfounded Extrapolation

The reason is simple: it reduces the profit margins of marketing companies. Oh, not in the short term – but in the long term. The selection pressures for marketing shift from making the most effective marketing to making the… well, less effective marketing. If you can make the process less effective, you get paid more. At best, efficiency will stop increasing.

The approach should be reversed. Marketing companies shouldn’t sell different product lines – “Yes, you can spend $3m on TV, $1.2m on radio, $1m on billboards, and $2.5m on online advertising, we can do that for you” – they should be selling increases in sales. Hell, if you really wanted to motivate an advertising company you’d get them some percentage share of the increased revenue attributed to advertising (though with provisions to prevent gaming).
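As a sketch of what such a compensation scheme might look like – entirely hypothetical numbers and provisions, not any agency’s actual model – you could pay the agency a share of incremental revenue, floored at zero and capped (one crude “provision to prevent gaming”):

```python
def agency_fee(baseline_revenue, campaign_revenue, share=0.10, cap=None):
    """Hypothetical incentive fee: a share of revenue lift attributed to the
    campaign.  Never pays for a decline; optionally capped."""
    lift = max(0.0, campaign_revenue - baseline_revenue)
    fee = share * lift
    return min(fee, cap) if cap is not None else fee

# 10% of a $2.5m revenue lift, under a $500k cap:
print(agency_fee(10_000_000, 12_500_000, share=0.10, cap=500_000))
```

Real provisions would have to handle attribution, seasonality, and baseline disputes – all the hard parts this toy skips.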

Sure, yes, I know I’m reaching a bit, I have no empirical evidence for such an allegation, and what we know about how (digital) advertising is changing things implies the opposite. The amount of innovation occurring in digital advertising (albeit, sometimes creepy innovation) is staggering. The allocation algorithms behind Google’s AdWords and AdSense programs are mathematically designed to produce an efficient outcome for all involved; the personalization possible with customer tracking (e.g. DoubleClick) is getting there; the shift to Actions rather than Impressions is coming fast; etc. I just don’t like statistics like the GRP (unless I’ve completely misunderstood it…).

Screw the Literature

Of course, it’s not like the (academic) literature is any better. I dipped into a couple of (mathematical) marketing journals earlier today while researching the (earlier) response; I lost most of my references, though, and as this is not an academic paper I’ll refrain from re-locating them. The upshot, though, is that modern mathematical and economic accounts of advertising assume that (i) you can segment your population well, and (ii) all segments are homogeneous (note that (ii) implies (i)). Is this accurate? I would be terribly surprised if the segmenting were that good.

Given these constraints, most models assume that (i) advertising contributes to some stock of “goodwill” among people, and (ii) in the absence of advertising, the goodwill a customer feels towards you or your product declines (which is a great way to justify advertising! Apparently it’s a well-established empirical pattern; I’d like to check that).
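The usual formalization of those two assumptions is a goodwill stock that decays each period and is replenished by ad spend (the Nerlove-Arrow model is the classic continuous-time version). A minimal discrete-time sketch, with purely illustrative parameters:

```python
def simulate_goodwill(ad_spend, decay=0.2, effectiveness=1.0, g0=0.0):
    """Discrete-time goodwill dynamics (Nerlove-Arrow style): each period a
    fraction `decay` of goodwill evaporates and advertising adds
    `effectiveness` units per unit of spend.  Parameters are illustrative."""
    g, path = g0, []
    for a in ad_spend:
        g = (1 - decay) * g + effectiveness * a
        path.append(g)
    return path

# Three periods of advertising build goodwill; three dark periods erode it:
path = simulate_goodwill([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(path)
```

Notice how the decay assumption does the rhetorical work: stop advertising and the model guarantees your goodwill shrinks.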

This is known as “inventing a variable which doesn’t really exist.” The scientific justification is that goodwill hypothetically relates to several variables – known and “latent” (which means “unobservable”) – all of which are correlated, such that goodwill serves as an “index” for those other variables. Actually, what they would probably tell you is that goodwill is a latent variable that can be distilled via structural equation modeling from some empirically observable variables – but it’s a distinction without a difference. Mostly.

Now, I suppose goodwill is a better thing to try and measure than GRP. However, it’s still (i) artificial, and (ii) has nothing to do with revenue or profit. Thus, the literature is little better.

However, one good point (which I had not considered) is that, given these assumptions, the approach you take depends on whether you are trying to maximize goodwill at some point in time t, or to maximize the integral of goodwill over the advertising campaign. The obvious example of the former is selling tickets to some event.
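A toy version of that distinction, using the same sort of goodwill-decay dynamics (invented numbers throughout): with a fixed budget, dumping spend just before the event wins on goodwill at the event date, while spreading it out wins on cumulative goodwill over the campaign.

```python
def goodwill_path(spend, decay=0.3):
    """Toy goodwill dynamics: decays each period, replenished by spend."""
    g, path = 0.0, []
    for a in spend:
        g = (1 - decay) * g + a
        path.append(g)
    return path

budget_spread = [1, 1, 1, 1]   # steady campaign
budget_late   = [0, 0, 0, 4]   # same total budget, dumped just before the event

spread, late = goodwill_path(budget_spread), goodwill_path(budget_late)

print(late[-1] > spread[-1])   # True: late spend maximizes goodwill AT the event
print(sum(spread) > sum(late)) # True: spreading maximizes cumulative goodwill
```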

Closing Thoughts

Measuring advertising is hard. So the tools you have are limited.

However, I think the statistics cited should have more to do with what the company buying advertising needs. That may be revenue and profit, of course – but it may be something else. I mentioned aspirational advertising in the last post; I have no idea how to measure that (except really crudely, with surveys and interviews).

And I think the statistics used should have less to do with impressions, unless you’re trying to improve effectiveness (“Our GRP is 250, but sales only increase 0.4%! Something needs to change!”). It’s certainly never something you should show the customer.

Statistics related to inputs are useless. GRP, and similar statistics, look pretty much at what you put into the campaign – just like college rankings look at what goes into the colleges (SAT scores, money, etc). They don’t measure outputs, e.g. how successful each college student is, or how much advertising increased sales. Why? Because it’s hard to measure.

But abandoning something just because it’s hard is no way to live; and adopting an inputs-based measurement process will do nothing but increase cost (like it’s done for the college industry).

Unless, of course, that’s what you want.


Also, I have to confess that the reply to this I wrote earlier was lost when my computer crashed… that’ll teach me to use something lacking autosave.



  • – yes – average frequency
    – Avg. CTRs for display banners are typically around .1% (lower for some industries).

    For the “Increase in sales” agency compensation model – see new model agency “Anomaly” http://www.anomaly.com/home.php

    Segmentation, as currently used by most ad agencies, has trended away from a useful mathematical tool for reaching your target audience. It’s now more of a sales tactic used by agencies to sell (and “brand”) their work for the client.

    Overall the thing I liked best about this post was your thought: “Statistics related to inputs are useless.” Very true. But they’re safe! If your success is tied to statistics related to inputs, your success is practically guaranteed.

    It all goes back to the risk equation in advertising. Marketing services companies by nature are very risk averse. Most still bill by the hour and force their clients to take on most of the risk when it comes to the dollars invested in the advertising. They also want their success metrics to be tied to variables that are controllable (e.g. GRPs) rather than variables that are uncertain (e.g. sales). Unlike almost every other industry where risk is managed and shared – in advertising it all trickles back down to the advertiser. I’m percolating thoughts on this one…

