# All Reach Is Not Created Equal

**DWR**

At Simulmedia, the Data Science team is tasked with forecasting the delivery of the TV ad inventory that is available to us. These forecasts are then fed as inputs to our optimization engine, which selects the inventory units that maximize target delivery in the most cost-effective way.

The ability to forecast effectively is key to our business; in order to reach the viewers we are targeting, we need to know when and where they are watching TV.

Instead of trying to predict the raw reach of each inventory unit, we look at the *probability* of reaching viewers within it. The reasoning is that raw reach is just a proxy for what we really want to know: how many people do we expect to view an ad that we purchase within an inventory unit? Raw reach can severely misrepresent this. As an example, take two dayparts, both with a reach of 1,000 viewers. In the first daypart, the 1,000 viewers tuned in for only 5 minutes each; in the second, they tuned in for 100 minutes each. When we purchase an inventory unit, one ad airs at some point within its time range, so the second daypart is clearly more valuable: an ad placed there is far more likely to reach a higher number of viewers. Instead of the raw reach metric, we have developed a metric called Duration Weighted Reach (DWR), which weights each viewer's contribution to reach by the time they tuned in to the inventory unit.
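A minimal sketch of this idea, assuming a simple formulation in which each viewer is weighted by the fraction of the inventory unit they watched (the function name, the 120-minute daypart length, and the formulation itself are illustrative assumptions, not our production model):

```python
def duration_weighted_reach(viewing_minutes, unit_minutes):
    # Weight each viewer by the fraction of the inventory unit they were
    # tuned in for, i.e. the probability that an ad airing at a random
    # minute of the unit would reach them. (Illustrative formulation.)
    return sum(min(m, unit_minutes) / unit_minutes for m in viewing_minutes)

# The example from the text: two dayparts, each reaching 1,000 viewers,
# with an assumed 120-minute daypart length.
short_sessions = [5] * 1000    # viewers tuned in for 5 minutes each
long_sessions = [100] * 1000   # viewers tuned in for 100 minutes each

print(duration_weighted_reach(short_sessions, 120))  # ~41.7
print(duration_weighted_reach(long_sessions, 120))   # ~833.3
```

Both dayparts have a raw reach of 1,000, but the duration weighting exposes how differently valuable they are to an advertiser.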

Ultimately, the forecasting outputs are fed as inputs into our optimization engine, so we can compare the expected campaign impressions after running the engine with raw reach versus with DWR as inputs. The graph above shows the increase in impressions for 4 different campaigns we tested. Not only does DWR provide a more accurate forecast, it also improves our campaign planning down the line: optimizing the selection of inventory units against DWR yields a higher number of expected impressions.

**AdDWR**

To drill down even deeper, we decided to focus on commercial minutes, i.e. minutes when ads actually aired. While it is interesting to see the overall reach of each inventory unit, we ultimately care only about viewing during the ad minutes, because these are the times when our ads will actually air, be viewed, and be measured.

To this end, we can compare the weighted reach of ad minutes against that of non-ad minutes. This ratio gives us an estimate of “tune-away”, i.e. the percentage of viewers who stop watching when the ads start airing. The metric is valuable for forecasting because it tells us how traditional reach numbers might overvalue or undervalue certain inventory units: a viewer is useless to an advertiser if they never actually see the ads that air.
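As a sketch, assuming tune-away is simply one minus the ratio of ad-minute weighted reach to non-ad-minute weighted reach (the helper and the numbers are hypothetical, not our production code):

```python
def tune_away(non_ad_reach, ad_reach):
    # Fraction of weighted reach lost when ads air: 1 minus the ratio of
    # ad-minute reach to non-ad-minute reach. (Assumed formulation.)
    return 1.0 - ad_reach / non_ad_reach

# Illustrative numbers only: 1,000 weighted viewers during non-ad minutes,
# 920 during ad minutes.
print(round(tune_away(non_ad_reach=1000.0, ad_reach=920.0), 4))  # 0.08
```

A value of 0.08 would mean roughly 8% of viewing disappears during ad breaks, the kind of correction a raw-reach forecast cannot capture.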

**Analysis**

First, it’s informative to look at the overall comparison between non-ad reach and ad reach. The graph below visualizes this tune-away. Each point represents a 15-minute interval: the y-value is the reach during ad minutes, and the x-value is the reach during non-ad minutes. The blue line has a slope of 1; points on this line indicate that ad and non-ad reach are equal for that interval. A point below the line means non-ad reach exceeds ad reach; a point above the line indicates the opposite.

It’s clear that most points lie below the line, as expected given tune-away: viewers tend to watch more non-ad minutes than ad minutes. There are some outliers where the non-ad minutes actually had lower reach than the ad minutes within the 15-minute interval. These outliers (points above the line) tend to fall toward the bottom left of the graph, which makes sense: we’d expect some intervals to show higher viewing during ad minutes than non-ad minutes when overall viewing is small.
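The above/below-the-line classification in the scatter plot amounts to a simple comparison per interval; a sketch with made-up numbers (not real viewing data):

```python
# Each 15-minute interval is a (non_ad_reach, ad_reach) pair.
# Numbers are illustrative, not real viewing data.
intervals = [(1000.0, 920.0), (800.0, 735.0), (40.0, 46.0), (600.0, 552.0)]

below = [p for p in intervals if p[1] < p[0]]  # tune-away: ad reach lower
above = [p for p in intervals if p[1] > p[0]]  # outliers above the y = x line

print(len(below), len(above))  # 3 1
```

Here the lone "above the line" interval is also the smallest one, matching the pattern described for the outliers.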

To break this down further, we can look at tune-away across different attributes. I define tune-away by the following equation:
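The original equation is an image; a plausible formulation consistent with the description above (weighted ad-minute reach relative to weighted non-ad-minute reach) would be:

$$\text{tune-away} = 1 - \frac{\mathrm{DWR}_{\text{ad minutes}}}{\mathrm{DWR}_{\text{non-ad minutes}}}$$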

In each graph below I’ve limited the range of the y-axis so that it’s easier to see changes.

Above, tune-away is shown across quarter hours. There are 96 quarter hours in a 24-hour period: 00:00 to 00:15 is quarter hour 0, 00:15 to 00:30 is quarter hour 1, and so on up to quarter hour 95, the interval from 23:45 to 00:00. I’ve drawn a line of best fit (using LOESS) to visualize the general trend. Interestingly, tune-away is smallest around 6AM and highest around 12AM.
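The quarter-hour indexing described above can be computed directly (the function name is ours, for illustration):

```python
def quarter_hour_index(hour, minute):
    # Map a time of day to its quarter-hour slot: 00:00-00:15 -> 0,
    # 00:15-00:30 -> 1, ..., 23:45-00:00 -> 95.
    return hour * 4 + minute // 15

print(quarter_hour_index(0, 0))    # 0
print(quarter_hour_index(6, 0))    # 24  (near the tune-away minimum)
print(quarter_hour_index(23, 45))  # 95
```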

Above is the same visualization, this time across the days of the week. Tune-away appears fairly consistent across the week; it is a bit smaller on Saturdays, although the difference is not large.

The density plot by network shows that most networks tend to have a tune-away of around 8%. There is also a nice bell curve, with a little bit of a tail towards 0. It’s important for us to take into account which networks have high tune-away and which have low tune-away: a more traditional forecast would tend to overestimate the impressions of spots on networks with high tune-away and underestimate the impressions of spots on networks with low tune-away.

The density plot by program looks very similar to the plot by network; the peak of the curve is again around 8%, although the tails are a bit longer in this plot. Again, it’s important to note which programs have high tune-away and which have low tune-away.

**Conclusion**

These data and plots show the importance of incorporating tune-away into our forecasting models. Without accounting for tune-away, we would overestimate the impressions we expect to be delivered when we purchase a specific inventory unit. Between incorporating tune-away and using a duration-based metric to estimate the probability of reaching each respondent, our model delivers a markedly more realistic forecast.