MRP Alone Not Good Enough

This is the first instalment in a series on media relations measurement.

I’m going to say something that could be perceived as sacrilegious among Canadian media relations practitioners.

I’m not a fan of Media Relations Rating Points (MRP).

For those who don’t know, MRP is a uniquely Canadian innovation. It is a relatively simple and inexpensive system for measuring publicity.

Anyone can download a free Excel spreadsheet from www.mrpdata.com and, for a relatively inexpensive subscription fee, generate audience reach data supplied by News Canada.

At the end of your campaign, you insert the names of newspapers, magazines, blogs, radio stations and television stations that picked up your story. The basic spreadsheet also has cells available for tone (whether positive, neutral or negative) and five other potential criteria that media coverage can be scored against, such as exclusivity of the story, the use of a picture, or prominence in the publication or newscast.
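To make the mechanics concrete, here is a minimal sketch of how that kind of spreadsheet scoring works. The field names, point values and weighting below are hypothetical illustrations, not the official MRP methodology:

```python
# Hypothetical sketch of MRP-style clip scoring.
# NOT the official MRP methodology; the point values and
# criteria names here are illustrative assumptions only.

def score_clip(tone, criteria_met, criteria_possible=5):
    """Score one piece of coverage as a percentage of available points.

    tone: "positive", "neutral" or "negative" (positive/neutral earns a point)
    criteria_met: how many optional criteria the clip satisfied
        (e.g. exclusivity, use of a picture, prominence)
    criteria_possible: how many optional criteria were scored against
    """
    if criteria_met > criteria_possible:
        raise ValueError("criteria_met cannot exceed criteria_possible")
    tone_points = 1 if tone in ("positive", "neutral") else 0
    total_possible = 1 + criteria_possible
    return 100.0 * (tone_points + criteria_met) / total_possible

# A positive story that was exclusive and ran with a photo:
print(score_clip("positive", criteria_met=2))  # 50.0
```

Note that everything scored here is an output measure: it describes the coverage itself, not what any audience did as a result.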

My complaint is not about the tool. My concern is about how it’s being used. And, quite frankly, it’s breeding laziness among Canadian media relations practitioners in the way they evaluate the effectiveness of their communication programs.

During the past six months, I have judged some of the most prestigious awards programs in this country. I coordinated the media relations category for IABC’s Silver Leaf awards last fall. I participated as a judge in the media relations category of this year’s CPRS Toronto’s Achieving Communication Excellence (ACE) awards. This past weekend, I participated as a media relations judge in IABC/Toronto’s OVATION awards program.

I have been judging media relations entries at local, national and international levels since I coordinated the entire Silver Leaf program in 1992.

I have witnessed a distinct deterioration in the discipline of media relations measurement since MRP was first introduced. Increasing numbers of entries at all levels submit MRP “results” as their sole source of evaluation.

Honestly, that’s not good enough.

Our profession is about outcomes, not inputs. I have no qualms if your client is happy with MRP data as a sole source of measurement. As someone who has operated a successful business for the past 25 years, I understand the concept of giving clients what they want.

But if you’re asking your peers for evaluation in awards programs (or in portfolio submissions toward earning your ABC or APR designations), MRP alone isn’t good enough.

It’s not enough to say that 16,000,000 people may have been exposed to a message at a cost of one-third of a penny each. Did they get the message? And how did it influence their attitudes, opinions and behaviour?
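For perspective, the arithmetic behind a claim like that is trivial, which is part of why it tempts us. Using the hypothetical figures above:

```python
# The implied spend behind "16,000,000 people at one-third of a penny each".
# These figures are the illustrative ones from the paragraph above,
# not data from any real campaign.
impressions = 16_000_000
cost_per_contact = 0.01 / 3  # one-third of a penny, in dollars

implied_spend = impressions * cost_per_contact
print(f"${implied_spend:,.0f}")  # roughly $53,333
```

The number is easy to compute, which is exactly the problem: it says nothing about whether anyone received the message or acted on it.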

Did the program reinforce existing positive opinions? Did it encourage audiences to form opinions? Did it neutralize negative opinions? Did the media relations campaign move specifically identifiable audiences to action in ways that support the organization’s objectives? And how do you measure all of the above?

In my mind, finding answers to those questions separates a practitioner from a professional.

If you want to use MRP, fine. But please don’t try to convince a fellow professional that MRP alone is good enough.

Quite frankly, it isn’t.

“You’re Just Blowing Smoke”

This is the second instalment in a series on media relations measurement.

To help shed some light on what the state of the art in media relations measurement should be, I thought I’d turn to Wilma Mathews, ABC, a long-time colleague and friend, and author of Media Relations: A Practical Guide for Communicators. Wilma has been practicing media relations for … well, let’s just say quite a few years.

When it comes to media relations evaluation and measurement, Wilma says our industry is certainly better off than it was even five or ten years ago. For many years, media relations practitioners relied on the simplistic output measures of counting clips and adding up circulation.

From there, the process evolved into impressions, which, from her perspective, mean pretty much the same thing as circulation and viewing audience. Next, the advertising value equivalency (AVE) was born, which she points out is a term that’s not even listed in the Dictionary of Public Relations Measurement and Research.

“But over the years, as PR people, agencies and companies have gotten a little savvier, they’ve said that what we’re asking you as media people to do is sell a product, get people to come to an event, change their minds or vote for someone,” Wilma explains. “In short, we’re asking you to change behaviour of a certain audience. And that’s a little harder to do than counting clips.”

She believes the AVE was adopted as a matter of convenience (and I suspect she would say something similar about Media Relations Rating Points). It was a simple way to state some perceived value of media relations to management groups. But to her the AVE is a completely abstract number that has no correlation to any activity because advertising and media relations simply cannot be compared.

“You control everything about advertising,” she explains. “You control nothing about the editorial side of the media. But (the AVE) was a way to say to clients ‘if you had purchased advertising, it would have cost you X amount of dollars, and we prevented you from having to do that.’ And it sounded good at the outset.”

She makes a clear distinction between evaluation and measurement in media relations. “You can evaluate your media relations work and still not measure whether or not it worked,” she explains. “In other words, if a media relations practitioner wanted a positive story on the front page of the business section with a quote from their CEO — and they wanted it to appear before the product launch — if they got all of that it says their process worked. It says nothing about whether that helped sales.”

To her, measurement is the end outcome — from an attitudinal or behavioural perspective. Did people buy the product? Did they vote the way you wanted? Did they form an opinion or change their minds?

“If that didn’t happen and all you’ve got to show for it is advertising value equivalents or impressions,” she points out, “you’re just blowing smoke.”

Linking Objectives to Outcomes

This is the third instalment in a series on media relations measurement.

In this second part of my conversation with Wilma Mathews, ABC, I asked her where we needed to be as an industry when it comes to the strategic use of media relations.

How do we develop objectives for a media relations campaign? How do we evaluate whether we’ve achieved those objectives? In a perfect world, how should people approach those challenges?

Her advice was simple on the surface, but represents the complexity of media relations specifically, and organizational communication in general.

“People need to approach media relations by understanding what it is that your client needs to get done,” she says. “Too often, the client’s needs are misinterpreted to what we can do from a media standpoint, whether it has anything to actually do with solving the problem or not.”

She says that one of the challenges that many practitioners have with measurement is that they may start with a great objective — such as increasing the number of people who participate in a weekend run for cancer research from 10,000 to 12,000 — but their evaluation focuses only on the media clippings they generate. They forget to go back and count the number of people who actually participated in the run.
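Wilma’s run example can be stated as a simple outcome check. This is my own illustrative sketch, not anything from her book; the function name and the 11,500 figure are hypothetical:

```python
# Hypothetical outcome measurement for the charity-run example above:
# did participation move toward the target, regardless of how many
# clippings the campaign generated? The actual figure of 11,500 is
# an invented illustration.

def objective_attained(baseline, target, actual):
    """Return the fraction of the planned increase actually achieved."""
    return (actual - baseline) / (target - baseline)

print(objective_attained(10_000, 12_000, 11_500))  # 0.75
```

A clip count can be impressive while this number sits at zero, which is precisely the gap between evaluating process and measuring outcomes.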

This goes back to her belief that there is a clear distinction between evaluation and measurement in media relations. Counting the clippings is a form of evaluation around the process. Determining how many people participated in the run is a measurement of outcomes, and therefore success.

“You cannot claim success if you are not measuring the right thing,” she says. “And this slides over into the issue of ethics.”

Wilma believes that it is incredibly unethical to tell a client that a campaign was successful because it generated a million impressions when the objective was to get more people to participate in the food drive, vote for a candidate, or take some other action.

There are those who may try to counter her argument by saying that it was the client who wanted those media relations results — such as being a guest on certain television programs or being above the fold on the front page of the business section. Therefore, according to codes of ethics governing public relations (whether PRSA, IABC, CPRS or CIPR), the media relations practitioner has done his or her job.

“If that media plan is solely about getting the boss above the fold on the front page of the business section and nothing else, then that’s ok,” she replies. “The objectives may be that (the client) is looking for media support for the product launch, and (the media relations practitioner) will write an objective that says they want to generate 1.5 million impressions.

"You can get impressions. That’s the easy part. But those impressions may have no correlation to a bottom line.”

And without bottom line measurement, the job is less than half done.

A Case Study in Media Relations Success

This is the fourth instalment in a series on media relations measurement.

In this part of my conversation with Wilma Mathews, ABC, author of Media Relations: A Practical Guide for Communicators, she provided an example of a media relations initiative that demonstrates the importance of linking behavioural outcomes to media relations inputs.

A staff writer at Arizona State University received an assignment from the archaeology department to write a news release promoting an upcoming lecture: a local attorney, an amateur Egyptologist, was only the second person to go into an Egyptian tomb.

Wilma told me this writer often takes what many would consider to be an unusual approach to media relations. “She knows her media, so she never does follow up calls to the reporters she sends material to,” Wilma explained. “She knows whether they’re the right ones to get the release.”

The communicator got two hits from her release. One was a calendar listing in the local newspaper. The other came from a reporter who likes to write human interest stories.

“Without any prompting, the reporter turned this into a front-page story in the Sunday leisure section, including two color photographs over three-fourths of a page,” Wilma says. “A lecture that would normally bring in 25 people brought in almost 200.”

There is no AVE for this program. And the circulation numbers would be small by most media relations measurement standards, because there was only one newspaper’s circulation to include.

However, in many ways, this example represents the tried and true in media relations, and the importance of measurement over evaluation. To be successful, it’s important to understand the needs of reporters and only target those journalists or media outlets who would have an interest in your program, your product, your service or your candidate.

After going through that process, if your media list ends up being only five outlets — but they’re the right five outlets — you can achieve success with what would be considered to be an extremely low AVE, if any AVE at all.

Wilma pointed out that the Dictionary of Public Relations Measurement and Research defines impressions as “the number of people who might have had the opportunity to be exposed to a story that has appeared in the media.”

“It’s taken almost as a fact that if you have a million impressions there’s an assumption that a million people saw it and read it,” Wilma said. “You can make numbers do anything you want. But the real bottom line test is: Did your audience do what you intended them to do?

"You can have all the impressions in the world, but if nobody showed up for that dinner to raise money — and your job was to help improve attendance at that dinner — then you’re just not doing your job.”