When an advertisement is placed in a professional journal, marketers sometimes wonder if anyone will see it. If it is great, but no one notices, what’s the point? Will it break through the clutter of the other ads? Will the signal cut through the noise?

This article attempts to offer a clear, standardized way to tell. The Theory of Signal Detectability (TSD) suggests using the “signal-to-noise” ratio to determine True Recognition: the percentage of doctors who say they have seen an ad and really have seen it, minus the percentage who say they have seen it but really have not. The measure takes into account both accurate and false recognition: Hits and False Alarms.
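
As a minimal illustration of the arithmetic, the hypothetical example below (the figures are invented, not taken from the study) shows how the two percentages combine into a single score:

    # True Recognition = % Hits - % False Alarms (hypothetical figures)
    hits_pct = 80.0          # 80% of doctors who really saw the ad say they saw it
    false_alarms_pct = 20.0  # 20% of doctors who never saw it also say they saw it
    true_recognition = hits_pct - false_alarms_pct
    print(true_recognition)  # 60.0, i.e., a True Recognition score of 60%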

There are also other elements of print ads worth knowing. What does the advertisement communicate? Is its message extremely important? Is it believable? New and different? Is it persuasive? What tone or image does the advertisement convey?

Most useful would be a Norms base against which to set results, making it possible to determine whether an ad is above, below, or at Norm with respect to True Recognition and the measures of these other elements.

To address these questions, and to test the practicality of the Signal Strength technique, a study was conducted with 31 primary care physicians (PCPs, defined as family/general practitioners and internists). The objectives were to:

1. Measure True Recognition for four print journal test ads.

2. Obtain diagnostic rating and adjective checklist information for each.

3. Begin developing TSD Norms by including three “control” ads.

This method is intended to help discriminate between different advertisements and to help make an informed decision about the uses to which a given ad will be put.

The 31 PCPs were split into two groups, designated the Hit group (16 doctors) and the False Alarm group (15).

Each respondent viewed two lists of 30 different journal ads online. The lists were carefully constructed to allow for signal strength measurement. Ads were always presented in random order.

In the Hits group, the second list showed 22 items not seen before and seven items that were repeats from the first list.

Four of the seven ads were test ads; the other three ads were used as controls, to be included in future research so that test-retest reliability could be established.

In the False Alarm group, the same seven ads were in the second list. In fact, that list was exactly the same as the one the Hits group saw. But the False Alarm group had not previously seen any of these seven ads.

When viewing the second list, for each ad, all doctors were asked if they saw it in the first list or not (yes/no) and how certain they were of the answer (very, somewhat, not at all).
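
To make the tabulation concrete, the sketch below (hypothetical responses and function names, not the study’s actual code) shows how one repeated ad’s yes/no answers from the two groups would be turned into a Hit rate, a False Alarm rate, and a True Recognition score:

    def pct_yes(responses):
        """Percentage of respondents answering 'yes' (claiming to have seen the ad)."""
        return 100.0 * sum(1 for r in responses if r == "yes") / len(responses)

    # Hypothetical answers for one repeated ad.
    # The Hit group (16 PCPs) really did see it in the first list,
    # so their 'yes' answers count as Hits.
    hit_group = ["yes"] * 13 + ["no"] * 3

    # The False Alarm group (15 PCPs) never saw it,
    # so their 'yes' answers count as False Alarms.
    false_alarm_group = ["yes"] * 2 + ["no"] * 13

    hits = pct_yes(hit_group)                  # about 81% Hits
    false_alarms = pct_yes(false_alarm_group)  # about 13% False Alarms
    true_recognition = hits - false_alarms     # about 68% True Recognition

    print(f"Hits: {hits:.0f}%  False Alarms: {false_alarms:.0f}%  "
          f"True Recognition: {true_recognition:.0f}%")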

Following this task, the doctors were asked specific questions about each of the four test ads: They were asked to rate each on how important, believable, new and different, and persuasive the information presented was.

Additionally, doctors were asked to check any of a set of 10 adjectives which might apply to each test advertisement. Finally, they were asked about the main message of one test advertisement. Interviewing occurred April 2-4, 2007.

Findings: Signal Strength, True Recognition

The True Recognition percentages for the seven tested print ads, derived from the Hits and False Alarms, are shown in Fig. 1. The Norm is the simple average.

Surprisingly, given the low base size (N = 15), one of the ads differs from the Norm. The Tylenol ad was shown to have a very strong Signal, and is significantly (95% Confidence Level) above Norm.
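
The article does not specify which significance test was used, so the sketch below is only an assumed approach: a normal-approximation z-test of one observed proportion against a fixed norm, run on made-up figures rather than the study’s data. Since a True Recognition score is actually a difference of two percentages, a practitioner might reasonably choose a different test.

    import math

    def z_vs_norm(p_observed: float, p_norm: float, n: int) -> float:
        """Normal-approximation z statistic for one observed proportion
        versus a fixed norm; illustrative only."""
        standard_error = math.sqrt(p_norm * (1.0 - p_norm) / n)
        return (p_observed - p_norm) / standard_error

    # Hypothetical check: an 85% score against a 55% norm with 30 respondents.
    z = z_vs_norm(0.85, 0.55, 30)
    print(f"z = {z:.2f}; beyond +/-1.96, so significant at the 95% level: {abs(z) > 1.96}")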

The seven ads attained their Signal scores in different ways. Specifically, the high scoring ads had many Hits and no False Alarms. This suggests that they are distinctive, since no PCP said they had seen them when they had not.

In contrast, the low scoring ads had an unusually high percentage of False Alarms, suggesting they look like many other advertisements: a high percentage of physicians who had not seen them before said that they, in fact, had. For instance, the high False Alarms are the main reason the Centocor and Crestor ads are not truly being recognized.

Certainty

When the doctors were asked how sure they were about the Hits, the degree of certainty roughly paralleled the percent of Hits overall (Fig. 2). They were most certain about Tylenol (the strongest Signal) and least certain about Crestor (one of the weakest Signals).

In looking at the certainty around False Alarms (Fig. 3), two things are evident. One is that there is a lot less certainty. This makes sense, since these are errors: claims of seeing something not seen. The other observation is that the level of certainty rises slightly with the level of error, suggesting that those ads that look most like other ads truly confused some PCPs.

Attribute Ratings

These PCPs rated each of the four test ads on four attributes: Important, Believable, New and Different, Persuasive (Fig. 4).

  • Importance—Zyvox is considered far and away the most important ad. The AndroGel and Centocor ads were least important. Zyvox is above the Norm, while the two lowest ads are below it.
  • Believability—Zyvox and Zocor are both believable. Centocor, while not unbelievable, is relatively low on this measure. None of the ads vary from the Norm.
  • New and Different—The AndroGel ad is definitely the most new and different of the ads. Zocor is least, with Zyvox and Centocor falling in the middle.
  • Persuasive—The Zyvox ad is the most persuasive in the group, and the Centocor ad is considered to be relatively unpersuasive. The other two fall in the middle.

Adjective Checklist

The PCPs were also given the opportunity to check off adjectives that they thought applied to each of the Test ads.

The Zocor ad was the least involving and the most boring and ordinary. The Zyvox ad was the most involving and least boring of this set of four. The Centocor ad was most unique, being for enrollment in a clinical trial; the whimsical AndroGel ad was “nicest.”

Main Message

Since knowing the main takeaway is important when examining any ad, PCPs were asked about the main message for one of the four Test ads, Centocor. The ad shows a man drawing in the sand on a beach (what he is drawing is unclear), and requests that doctors call an 800 number to learn of investigator sites nearby. About half of these PCPs got the message. About three in 10 did not know what the ad’s message was at all.

Summary and Discussion

  • The Tylenol control ad, with a signal strength of 81%, is significantly above the norm of 57% (at a 95% Confidence Level). The other ads examined for signal strength are at norm, ranging from a high of 69% (AndroGel) to a low of 36% (Crestor).
  • Diagnostically, the ads with higher signal strengths had fewer False Alarms, suggesting that they are more distinctive.
  • Certainty concerning Hits generally paralleled the percent of Hits overall. Certainty concerning the False Alarms was much lower, and became slightly higher as False Alarms increased, suggesting that the ads with the highest False Alarms create a false sense of confidence that they had been seen before.
  • The test ad for Zyvox was the most Important, Believable, and Persuasive, likely because it is for a product that treats a serious condition. It was moderately New and Different.
  • The test ad for Centocor was the least Important, Believable, and Persuasive. It recruits for a Phase II clinical trial, which may explain the reaction. It is relatively low on “New and Different”, but not as low as Zocor on that measure.
  • The Zocor ad was the least involving and the most boring and ordinary. The Zyvox ad was the most involving and least boring of this set of four. The Centocor ad was most unique. About half were able to play back the main message of the Centocor ad. The whimsical AndroGel ad was the “nicest.”

Strengths and Limitations

The Signal Strength technique presents an opportunity to determine if a professional journal advertisement truly stands out or not, and provides some diagnostic information to help determine why.

It is new, so the normative data are currently limited, but time should resolve that issue. The most limiting aspect is that, for obvious reasons, only one journal advertisement for the same product can be tested in one test.

Therefore, to compare two or even three potential journal ads, two or three tests would be needed. The saving grace for this limitation is that, based on this test, a small base size does discriminate, so each test can be cost-effective.

Stephen J. Hellebusch is president, Hellebusch Research & Consulting, based in Cincinnati, OH.