
Evaluating Communication

Key elements in Communication Evaluation and Success Criteria

Setting up Success Criteria and subsequently evaluating your communication can help answer several questions:

Is the communication feasible and usable?

Is the communication process itself successful, e.g., was it disseminated as planned and did it reach the intended population?

Is the communication effective?

Each question can be answered in various ways, but it is important that expectations for each question are set before the communication starts. For each question we describe key elements and provide examples.

1
Is the communication feasible and usable?
When developing a communication, it is important to check with your target group whether the communication is feasible and usable.

Involving the target group
At a minimum, this means piloting your communication with some members of the target group.
At the other end of the spectrum, the communication may be developed iteratively in co-creation with the target group.

Irrespective of whether the communication channel is digital, written, video, oral, or in person, it should be tested whether:

  • it is understandable
  • the intended message comes across
  • it is liked
  • people know what to do
  • it is culturally sensitive (i.e., it does not offend people by violating cultural norms), and other questions that promote the use of the communication.

It is important that the communication will indeed be used by the target group. One way to check this is a “thinking aloud” procedure while members navigate a website or read a brochure: the instruction is to verbalize all thoughts while using the communication.

Another method is to interview members individually or in a group (e.g., focus groups) about their experiences with using the communication. One could also use eye trackers to see which part of a screen or text people focus on, use log data for a website to examine how people navigate, or use quantitative questionnaires to assess the target group’s opinion of the communication. Take care to involve members of the target group who vary on important dimensions that may affect their usability judgments, such as health literacy, digital experience, age, or culture.
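For the log-data option, even a short script can show how pilot testers move through a website and where they linger. Below is a minimal sketch in Python, assuming a hypothetical export file pilot_log.csv with columns user_id, page, and timestamp; the file and column names are illustrative, not part of any particular analytics tool.

```python
import pandas as pd

# Hypothetical export of page-view events from a pilot test:
# one row per page view, with columns user_id, page, timestamp.
log = pd.read_csv("pilot_log.csv", parse_dates=["timestamp"])
log = log.sort_values(["user_id", "timestamp"])

# Navigation paths: the ordered list of pages each tester visited.
paths = log.groupby("user_id")["page"].apply(list)
print(paths.head())

# Time on page: seconds until the same user's next page view
# (the last view of each user has no next view and stays empty).
next_view = log.groupby("user_id")["timestamp"].shift(-1)
log["seconds_on_page"] = (next_view - log["timestamp"]).dt.total_seconds()
print(log.groupby("page")["seconds_on_page"].median())

# Pages reached by few testers may point to navigation problems.
share_reached = log.groupby("page")["user_id"].nunique() / log["user_id"].nunique()
print(share_reached.sort_values())
```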

Look at the communication environment
A good starting point for your evaluation and success criteria is to look at what information already exists, and to see what other communicators do and which goals they use in their communication evaluations. You can also look into published papers on communication efforts to get an idea of benchmarks and possible goals.

For example, if you are about to embark on a campaign to increase people’s perceptions of the value for money (or value of effort) of doing self-care activities, you could consider what ratings existing communication efforts in other areas use as indicating good value for money or effort.

Online information about users and national information usage, such as that offered by Google Insights, is a particularly useful source for understanding the prominence of certain news topics over time in your particular country or region.
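If you export such interest-over-time data as a simple table (for example a CSV with a date and an interest score per week), a few lines are enough to see how the topic’s prominence develops; the file name, column names, and smoothing window below are assumptions for illustration.

```python
import pandas as pd

# Hypothetical CSV export of interest-over-time data for one topic:
# columns week (date) and interest (relative score).
trend = pd.read_csv("selfcare_topic_trend.csv", parse_dates=["week"])

# Smooth out short-term noise to see the longer-term trend.
trend["interest_smoothed"] = trend["interest"].rolling(window=4, center=True).mean()

# Average interest per year: is attention to the topic rising or fading?
print(trend.groupby(trend["week"].dt.year)["interest"].mean())
```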

 

2
Is the communication process successful?
A process evaluation usually focuses on two points: (a) how much of the communication reaches the intended target group and (b) whether the communication is delivered as intended (fidelity). A process evaluation tells you whether your communication was successfully delivered; the next step is to determine whether it was effective (question 3). However, if a communication is not effective, this could be because it did not reach the intended target group: if no one read your brochures on how to treat athlete’s foot, then no effect can be expected from your communication.
Some key process evaluation components are (Linnan & Steckler, 2002):

      • Reach: the proportion of the intended target group to whom the program is actually delivered
      • Dose delivered: the number of intended units of each communication component that is delivered
      • Dose received: the extent to which the target group engages with the communication
      • Fidelity: the extent to which the communication was delivered as intended

Some examples (a short computational sketch follows this list):

      • Reach: counting the number of brochures distributed, counting the number of visits to a website, or asking in a questionnaire whether one received the communication
      • Dose delivered: whether the target group read the brochure and watched the online video that was promoted in the brochure
      • Dose received: log data on whether all pages of a website were visited and time spent, or asking whether one read all parts of the brochure and how much one thought about the communication
      • Fidelity: a pharmacist did not distribute the whole communication, but only the simplified leaflet and not also the extensive brochure
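To make these components concrete, the sketch below turns simple counts into reach, dose delivered, and dose received figures; all numbers and variable names are invented for the example.

```python
# Invented counts; replace with your own process data.
target_group_size = 5000     # people the communication was intended to reach
received_brochure = 3200     # people to whom the brochure was actually delivered
website_visitors = 1400      # unique visitors to the promoted website
pages_on_website = 6         # pages in the online self-care guide
total_page_views = 5600      # page views summed over all visitors

# Reach: proportion of the intended target group that was actually reached.
reach = received_brochure / target_group_size

# Dose delivered: proportion of intended components that actually went out,
# e.g., 2 of 2 planned components (brochure and video link) were distributed.
dose_delivered = 2 / 2

# Dose received: how intensively visitors engaged with the website, here the
# average share of the website's pages seen per visitor.
avg_pages_per_visitor = total_page_views / website_visitors
dose_received = avg_pages_per_visitor / pages_on_website

print(f"Reach: {reach:.0%}")
print(f"Dose delivered: {dose_delivered:.0%}")
print(f"Dose received (average share of pages seen): {dose_received:.0%}")
```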

 

3
Is the communication effective?
Your communication is effective if it reached your goals in your target group. This implies that you should be able to assess change in your target group on instruments/indicators that measure your goals.

Change
One way to assess change is to use a pre-posttest design. This implies that you need to know how your target group scores on the instruments/indicators before the communication starts, and that you repeat this measurement at least once with the same instruments/indicators after a specified period. The choice of period depends on when you expect change in the instruments/indicators: changes in knowledge usually occur quite fast, whereas changes in behavior or health take more time.
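As a minimal sketch of such a pre-post comparison, the example below applies a paired t-test to knowledge scores measured before and after the communication in the same respondents; the scores are invented for illustration.

```python
from scipy import stats

# Invented knowledge scores (0-10) for the same eight respondents,
# measured before and after the communication.
pre = [4, 5, 3, 6, 5, 4, 6, 5]
post = [6, 7, 5, 7, 6, 6, 8, 6]

# Paired t-test: did knowledge change within persons?
t_stat, p_value = stats.ttest_rel(post, pre)

mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"Mean change: {mean_change:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```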

Another way to assess change is to use a control group that does not receive the communication. The control group should be very comparable to the communication group, so that differences between the two groups can be attributed to the communication and not to other differences between the groups. The allocation of target group members to the control or communication group should ideally be random and blinded, but in practice this is often not feasible. Combining a pre-post design with an intervention-control design is ideal, but also the most labor-intensive option.
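With a control group, one straightforward (though simplified) analysis is to compare the change scores of the two groups with an independent-samples t-test, as in the sketch below; the scores are again invented.

```python
from scipy import stats

# Invented change scores (post minus pre) per person in each group.
change_communication = [2, 1, 3, 2, 1, 2, 2, 3]
change_control = [0, 1, 0, -1, 1, 0, 1, 0]

# Independent-samples t-test on the change scores: is the change
# larger in the communication group than in the control group?
t_stat, p_value = stats.ttest_ind(change_communication, change_control)

extra_change = (sum(change_communication) / len(change_communication)
                - sum(change_control) / len(change_control))
print(f"Extra change in the communication group: {extra_change:.1f} points "
      f"(t = {t_stat:.2f}, p = {p_value:.3f})")
```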

Usually you will not be able to collect data from all persons in your target group. Therefore, you have to determine how you can approach a representative sample of your target group. The main point is to avoid selection bias, for instance by inviting only target group members who visited the website even though persons who did not visit were also exposed to the communication.
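If you have a list of the full target group (for example a member or patient register), drawing a simple random sample rather than relying on whoever happens to visit the website is one way to limit selection bias; the file name and sample size in the sketch below are assumptions.

```python
import csv
import random

# Hypothetical roster of the full target group, one person per row.
with open("target_group_roster.csv", newline="") as f:
    members = list(csv.DictReader(f))

# Draw a simple random sample instead of inviting only website visitors.
random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(members, k=min(200, len(members)))

print(f"Invited {len(sample)} of {len(members)} target group members")
```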

Instruments/indicators
Your instruments/indicators should measure your specific goals within your target group. Those goals could be determinants of behavior (e.g., awareness, comprehension, attitudes, self-efficacy, social norms), behavior itself (self-care or health care use), and the resulting health. They could also include measures at a more societal/political level, such as costs or savings.

Types of outcomes
Just as your communication should be targeted at specific goals within your target group, your evaluation outcomes should correspond to these goals, looking at shifts in awareness, comprehension, and attitudes, as well as behaviors.

If the behavioral goal is that more people use your online self-care guide, then discussions on social media might give you an idea about changes in attitudes, an increase in time spent reading, listening to, or viewing relevant information on your site might indicate a change in comprehension, and more users at the site in general might be a measure of increased awareness.
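As a rough illustration of tracking such site indicators over time, the sketch below splits hypothetical visit logs into the periods before and after a campaign launch; the file name, column names, and launch date are assumptions for the example.

```python
import pandas as pd

LAUNCH = pd.Timestamp("2024-03-01")  # assumed campaign launch date

# Hypothetical visit log: one row per visit, with columns
# user_id, timestamp, seconds_on_content.
visits = pd.read_csv("selfcare_site_visits.csv", parse_dates=["timestamp"])
visits["period"] = visits["timestamp"].ge(LAUNCH).map({False: "before", True: "after"})

summary = visits.groupby("period").agg(
    unique_users=("user_id", "nunique"),                     # rough awareness indicator
    avg_seconds_on_content=("seconds_on_content", "mean"),   # rough comprehension indicator
)
print(summary)
```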

But not all communication efforts can be measured online. Some might be measured by monitoring an increase in calls to your self-care helpdesk, the number of leaflets removed from stands, or the number of questions about specific topics at GP practices (so ask at least some of these as well); others by an increase in requests or questions at local pharmacies or other health care professionals. Taking a wider view of effects and evaluation therefore pays off and will help you show the different outcomes of your efforts.

Instruments at individual level
When possible, in-depth qualitative or quantitative data will also be very helpful for evaluation. This includes observation sessions to see actual use of tools and to observe interaction and uptake, and it can also include interview sessions or focus groups. In addition, you can use questionnaires for individual users, or combine logging data (for example by geography, provided the user data are not sensitive in terms of patient information) with targeted forms on the site.
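One way to combine such sources, assuming the log records and the on-site form responses share an anonymous session identifier, is a simple merge; the file names and columns below are hypothetical.

```python
import pandas as pd

# Hypothetical data keyed by an anonymous session_id (no patient-identifying data):
# site_log.csv has session_id, page, seconds; form_answers.csv has session_id, found_useful (1-5).
log_data = pd.read_csv("site_log.csv")
form_data = pd.read_csv("form_answers.csv")

# Summarise usage per session, then attach the form answer.
usage = (
    log_data.groupby("session_id")
    .agg(pages_seen=("page", "nunique"), total_seconds=("seconds", "sum"))
    .reset_index()
)
combined = usage.merge(form_data, on="session_id", how="inner")

# Do sessions with heavier use also rate the site as more useful?
print(combined[["pages_seen", "total_seconds", "found_useful"]].corr())
```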