Wednesday, January 09, 2008

Rating the ratings: a study of TV audience measurement

Ratings are the central means of mass audience measurement for the television industry. This post will define what ratings mean, examine positive and negative aspects of their use, look at critical US research from the 1990s, and conclude with a discussion of the ratings transition debate in Australia earlier this decade.

Ratings provide a quantitative measure of how many homes or people are viewing a programme, an advertisement, a station or the medium itself. They are based on an audience snapshot using both geographical (multi-stage cluster) and characteristic (stratified) sampling techniques. Because of their feedback element, ratings largely control what is broadcast. The television industry uses audience ratings to justify the performance of its broadcasting service as well as the cost of advertising spots and sponsorship deals. The ratings approach is based on “exposure”, which measures a single audience behaviour: “open eyes facing a medium”. When counted and analysed, exposures allow the industry to predict audiences and pre-sell slots to advertisers. Ratings for a programme are compared against others at the same point in time to determine audience share. Ratings therefore create a manageable image of the public for television executives, and it is the apparently neutral form of numbers that invests them with so much power.
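To make the arithmetic concrete, here is a rough sketch in Python of how exposure counts become a rating and a share. The panel data, channel names and panel size are invented for illustration and do not reflect any provider's actual method:

```python
# A minimal sketch, assuming a tiny hypothetical "people meter" panel,
# of how exposure counts become a rating and a share.

panel = [
    (1, "Seven"), (2, "Nine"), (3, None), (4, "Nine"),
    (5, "Ten"),   (6, "Nine"), (7, None), (8, "Seven"),
]  # (household id, channel tuned during the slot; None = set switched off)

universe = len(panel)                                 # all metered homes
viewing = [ch for _, ch in panel if ch is not None]   # homes with the set on
nine_homes = viewing.count("Nine")

rating = nine_homes / universe * 100     # % of all homes exposed to the programme
share = nine_homes / len(viewing) * 100  # % of homes actually viewing at that time

print(f"Rating: {rating:.1f}  Share: {share:.1f}")
```

The rating counts exposure against every metered home; the share counts it only against homes where the set was on, which is why a programme can post a modest rating yet still "win" its time slot.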

But there are issues with this blunt approach. Because ratings drive advertising revenue, broadcasters tend to treat audiences as commodities on the basis of their viewing consumption. The economic system of commercial television depends on the extraction of surplus value from an exploited audience, so the pressure is on broadcasting decision makers to pander to mass markets in order to keep winning large portions of the audience share. Ratings are kept high by sticking to proven formulas, and risk taking is rare because radically different programming may shift audiences in the opposite direction to the one intended. This means that specialist interests such as the poor, the aged, the intelligentsia and children are not catered for by commercial broadcasters, who know they will earn more advertising revenue from mass audiences. As a result, innovative programming remains the preserve of publicly funded broadcasters, who are not as bound to ratings.

Audience ratings have also been criticised from a qualitative perspective. Because exposure is the only data recorded for ratings, the industry only cares about the numbers involved in tuning in, staying tuned, changing channels and turning off. No other audience behaviour is relevant. This means ratings do not capture whether a programme is interesting to its audience. It also means that problems of low ratings are generally solved by programming decisions rather than by audience research. Ratings, in effect, “take the side” of the broadcasters. They measure only whether the message is received, not whether it has been registered or internalised. Broadcasters are not interested in the “lived reality behind the ratings”; the only problem that matters to them is how to get the audience to tune in.

In the 1990s, researchers such as Eileen Meehan and Ien Ang began to criticise the way audiences were manipulated by the ratings. Meehan noted that intellectuals simply didn’t count in decision making because of their tiny numbers: TV programming reflected the “forced choice behaviours” of the masses. Ang argued the media didn’t want to know about their audience, merely to prove there was one. Ratings produce a ‘map’ of the audience which provides broadcasters and advertisers with neatly arranged and convenient information, allowing the industry to take decisions about the future with what Ang calls a sense of “provisional certainty”. There inevitably follows the streamlining of television output into formulaic genres, the plethora of spin-offs and the rigid placing of programmes into fixed time slots. In a competitive environment each competitor makes its product more like the others rather than taking a chance on producing something different. The result is a remorseless repetitiveness at the heart of the American TV schedule.

The Australian broadcasting industry has also endured controversy as a result of ratings issues. In the early 2000s the apparent certainty of measurement provided by ratings was undermined by a change in the ratings regime. In 2001, OzTAM (and its Italian supplier ATR) won a lucrative contract to supply Australian metropolitan TV ratings, replacing incumbent provider ACNielsen. Despite both parties using the same “people meter” technology during the six-month overlap period, major discrepancies emerged between the two providers’ data, leading to widespread unease in the $5 billion Australian TV industry. ATR boss Muir said the discrepancy arose because ratings were sample-based estimates and therefore subject to sampling and statistical error. But advertisers did not want to hear about sampling errors or about the “psychological makeup” that a ratings system cannot capture. What they wanted was certainty for their business decisions, and they demanded an unrealistic 100 per cent ratings accuracy. Eventually the two measurement systems came closer together, easing the fears of the advertisers. Nonetheless the controversy exposed the gulf between the questionable accuracy of ratings and the absolute faith put in the system by advertisers.
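Muir's point about sampling error can be illustrated with a back-of-the-envelope calculation. The figures below (a 30 per cent rating and a 3,000-home panel) are invented for the sake of the example, but they show why two independent panels can legitimately report different numbers:

```python
import math

# Hedged illustration only: the rating and panel size are assumed values,
# not OzTAM's or ACNielsen's actual figures.
true_rating = 0.30   # assumed proportion of homes watching
panel_size = 3000    # hypothetical number of metered homes

# Approximate 95% confidence interval for a proportion estimated from a sample
std_err = math.sqrt(true_rating * (1 - true_rating) / panel_size)
margin = 1.96 * std_err

print(f"Estimated rating: 30.0 +/- {margin * 100:.1f} rating points (95% CI)")
# Two independent panels of this size can differ by a few rating points
# without either of them being "wrong".
```

On these assumptions the margin is roughly 1.6 rating points either way, which is exactly the kind of gap the advertisers refused to accept.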
