By Scott Young
It may not be science, but there are research techniques to measure the effectiveness and leverage the results of package designs. Scott Young articulates the possibilities, the limits, and the best uses of this type of design research. Perhaps more valuable, he recommends specific steps managers can take to build collaborative and productive relationships among designers, experts in research, and decision makers in marketing and sales.
Over the past 10 years, perhaps the most important force influencing package design has been clients’ increasing demand for accountability—the confirmation of effectiveness. This demand has driven the need for consumer research in packaging design, and thus it has fundamentally changed the way decisions are made. However, it has also fueled marketers’ recognition of the value of effective packaging design, and so has led to greater respect for design professionals. In this article, I’ll discuss the double-edged sword of accountability and its impact on the evolving relationship between consumer research and packaging design. I’ll also look ahead to the potential implications of marketers’ accelerating efforts to document the connection between packaging design and product sales.
For many designers, it was not all that long ago that research was merely a synonym for “doing your homework” (that is, store visits, analysis of competitive packaging, and so on) at the beginning of a new project.
Consumers were rarely a part of the design development or selection process, as nearly all decisions were left to the collective judgment (and “gut feel”) of package designers and their clients. This approach was certainly easier for both the designers and brand managers involved, and it rested on the assumption that the connection between sales and packaging design was tenuous at best.
Today, it is universally acknowledged that packaging decisions can have a significant impact on sales. Accordingly, it is understood that marketers cannot be expected to make these decisions without some evidence of consumer acceptance. As a result, nearly every design professional has become familiar with the back rooms of focus-group facilities. In fact, despite some misgivings about “turning shoppers into art directors,” focus groups (now synonymous with “qualitative research”) have been the design industry’s answer to client demands for accountability.
For package designers, the relative appeal of qualitative research is obvious: It is hands-on (that is, designers can view and influence the research as it happens), and it provides a great deal of flexibility for gathering reactions to various design elements and executions. For this reason, it is an ideal diagnostic tool that can also provide clients with the reassurance that they are not confusing or offending customers. Therefore, it is not surprising that many larger design consultancies have embraced qualitative research to the point of developing their own research divisions.
From the marketer’s perspective, however, it has become increasingly clear that qualitative research is often not enough. Beyond the well-documented limitations of focus groups—they meet in a setting far removed from the shopping experience, and any interpretation of the findings is somewhat subjective—their most important drawback is their inability to provide the numerical evidence that marketers need to guide (and support/justify) their decisions. Therefore, marketers are turning to quantitative research—in which they gather feedback from hundreds of target customers via structured surveys and rating scales—to guide their final decision making. At the same time, however, they favor qualitative research earlier in the design development process, as a tool to provide initial direction and to “narrow down” a wide range of initial concepts.
For package designers, the use of survey research has required a more difficult transition than the use of focus groups. Of course, no one is fully comfortable having his/her work judged or tested, and quantitative research can easily come across as a “final exam” in which designers have little involvement and over which they have little control. Moreover, almost by definition, quantitative researchers speak a different language than design professionals, which makes it easy for them to come across as adversarial, particularly when they are transforming packaging designs into a series of numbers and data tables. To be sure, some researchers have provided solid grounds for concern through their misguided attempts to reduce packaging design to a mathematical equation (that is, “take the most-favored logo and put it with the highest-scoring color…”). These factors have all contributed to the familiar refrain of “research kills creativity.”
This assumption is unfortunate, because if it is applied correctly, survey research has the potential to serve as an enormous step forward (from focus groups) for designers and their clients.
Applied in this way, survey research can be a powerful tool to foster creativity, to improve or refine packaging designs, and to document the added value of effective design.
Today, marketers are driving another transition with profound implications for packaging design: They are looking for research that can predict the sales/revenue impact of packaging decisions. At face value, this is a logical request. After all, increasing sales revenue is the ultimate objective of virtually any marketing effort—and from a researcher’s perspective, it is not technically difficult to simulate the shopping experience and measure the sales impact of different packaging systems.
However, at a deeper level, the drive for sales measurement raises questions central to both packaging design and research.
Of course, the initial answer to these questions may be: Why can’t they be both? However, the reality is that there are contradictions involved. If it is accepted that packaging changes must drive sales increases, it is nearly inevitable that new design systems will be assessed on this single dimension (that is, projected revenue).
This mindset increases the likelihood that consumer research will evolve into a one-dimensional “score sheet” (as in “80 percent means we go ahead, 79 percent means we don’t”). What may get left behind is the diagnostic insight needed to understand why a packaging system is not working, and the willingness to refine concepts rather than discard them (thus throwing out the baby with the bathwater). We may end up encouraging packaging design focused on generating a short-term “bump” rather than on supporting a longer-term brand strategy, in much the way that testing systems based on respondents’ recall of advertising have led advertising agencies to develop executions that will “beat the system” rather than build brands.
In addition, the drive to predict outcomes rests on the assumption that survey research (that is, simulated shopping) can accurately gauge the sales impact of alternative packaging systems. However, this point is far from certain, regardless of how realistic and complex researchers can make their shopping experiences and sales models. True, a comprehensive packaging study can document differences among packaging design systems in virtually all dimensions affecting sales (visibility on shelf, aesthetic appeal, product perceptions, brand imagery, price expectation, preference versus competition, and so on). It should identify the “trade-ups” (from current packaging) that are likely to improve sales, uncover any risks associated with making a change, and project whether a new system is likely to have a positive or negative impact at retail. However, we’ve consistently seen that the linkage between improved packaging and increased sales is indirect, particularly for established brands.
Generally speaking, a new, more appealing, and/or visually effective packaging system is unlikely to immediately change the well-established shopping habits of people who do not buy the brand. In other words, Coke buyers are not likely to come running to a new Pepsi package, nor are Lady Speed Stick users likely to come running to a new Secret package. Instead, the impact is more subtle: A new design may drive nonusers to take a second look at the brand, shift their perceptions somewhat, and perhaps lead them to consider it as an acceptable alternative. On a future shopping trip, when Lady Speed Stick is out of stock or Secret is on promotion, the packaging change may well translate into an incremental sale or even a new loyal user. Unfortunately, this dynamic is nearly impossible to capture via a one-time observation of a person’s shopping trip. Of course, there will always be dramatic success stories, typically associated with “revolutionary” (and often structural) changes to the packaging of well-established brands. However, direct sales increases are not a reasonable expectation or action standard for every redesign effort, particularly for more evolutionary design changes that keep a brand contemporary and relevant. In these circumstances, it is reasonable to expect or even require positive “jumps” in certain key dimensions (such as shelf visibility, aesthetics, and/or product perceptions), while understanding that these may not immediately translate into measurably higher levels of purchase interest.
Overall, the trends and issues outlined in this article can be reduced to one critical challenge for packaging design professionals: How can designers meet their clients’ demands for accountability, without being trapped by unreasonable expectations—and into one-dimensional testing that is likely to limit their creativity?
Certainly, the answer lies in accepting the need for accountability, and working to influence the form and manner it takes. To this end, I can offer several suggestions for helping to ensure that research guides and supports creativity, and provides the diagnostic value needed to improve packaging design.
Proactively manage expectations and help set research action standards. Designers are often out of the loop when marketers and researchers are establishing the criteria (action standards) against which new packaging systems are to be assessed. As a result, they occasionally find that their design systems are rejected on the basis of very stringent or rigid criteria that may be inappropriate for a given project. It’s critical to set realistic expectations from the beginning, and to ensure that these expectations are translated into proper action standards. Perhaps most important, it’s critical to steer clients toward multiple measures of performance, rather than a single “magic measure” or a strict mathematical formula.
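To make the contrast concrete, a multi-measure action standard can be thought of as a checklist with per-measure diagnostics rather than a single pass/fail score. The sketch below is purely illustrative: the measure names and thresholds are invented, not drawn from any actual study.

```python
# Hypothetical illustration: judging a proposed package design against
# several action standards rather than one "magic measure".
# All measure names and threshold values are invented for this sketch.

def evaluate_design(scores, standards):
    """Return (recommendation, per-measure diagnostics) for a design.

    scores    -- measured results, e.g. {"shelf_visibility": 0.82, ...}
    standards -- minimum acceptable level for each measure
    """
    diagnostics = {}
    for measure, minimum in standards.items():
        result = scores.get(measure)
        diagnostics[measure] = {
            "score": result,
            "minimum": minimum,
            "meets_standard": result is not None and result >= minimum,
        }
    # A design that misses a standard is flagged for refinement, not
    # discarded outright -- the diagnostics show *why* it fell short.
    shortfalls = [m for m, d in diagnostics.items()
                  if not d["meets_standard"]]
    return ("refine" if shortfalls else "advance"), diagnostics

decision, detail = evaluate_design(
    scores={"shelf_visibility": 0.82, "aesthetic_appeal": 0.74,
            "brand_fit": 0.90},
    standards={"shelf_visibility": 0.75, "aesthetic_appeal": 0.80,
               "brand_fit": 0.70},
)
# decision == "refine"; detail shows aesthetic_appeal missed its minimum
```

The point of the structure is that a shortfall on one dimension produces a diagnostic conversation ("refine the aesthetics") instead of a one-number rejection.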
Partner with survey research agencies. It is unlikely that design agencies will be accepted in a quantitative or survey research role, given the inherent conflict of interest associated with evaluating one’s own work. However, they can and should take an active role in educating third-party research companies about packaging design issues and the objectives of specific design efforts (via design briefs, for instance). The reality is that packaging represents a low percentage of most research companies’ work, and there is a tendency to apply “advertising measures” (such as recall) to packaging issues. The more information a research agency receives, the more it can customize the research to address the issues critical to each study.
Work with clients to document results and return on investment (ROI). Perhaps most important, designers should drive efforts to document the sales impact of newly introduced packaging systems. This involves a great deal of coordination and cooperation with clients in order to gather sales data and trace it to packaging rollouts. However, the rewards can be significant; these efforts are likely to uncover many success stories that reinforce the value of effective packaging. Over time, they may also uncover trends across projects that help identify the packaging variables (graphics, package structure, dispensing method, and so forth) that most directly link to sales.
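As a minimal sketch of the ROI-documentation idea, the simplest starting point is comparing average weekly sales before and after a rollout. The figures and the rollout week below are entirely hypothetical, and a real analysis would of course control for seasonality, promotions, and distribution changes before attributing any lift to the packaging.

```python
# Hypothetical sketch: tracing sales data to a packaging rollout.
# The weekly sales figures and rollout week are invented for illustration.

def sales_lift(weekly_sales, rollout_week):
    """Percent change in average weekly sales after a packaging rollout.

    weekly_sales -- list of units sold per week, in chronological order
    rollout_week -- index of the first week the new package was on shelf
    """
    before = weekly_sales[:rollout_week]
    after = weekly_sales[rollout_week:]
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    return (avg_after - avg_before) / avg_before * 100

# Units shipped per week; the new package appears in week 5 (index 4).
weekly = [100, 104, 98, 102, 110, 115, 112, 118]
lift = sales_lift(weekly, rollout_week=4)
# lift is roughly +12.6 percent in this invented example
```

Even this naive calculation, repeated across many projects, begins to build the body of before/after evidence the article argues for.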
Use research proactively to identify the need for packaging changes. Finally, survey research can also be used to monitor the performance of current packaging and to identify situations in which packaging changes are advisable. This form of annual tracking or auditing addresses a long-standing concern—that packaging changes are usually driven by judgment (rather than data) and often come belatedly in an attempt to reverse market-share declines. More timely information can help identify instances in which packaging does not contribute to sales, and also help companies allocate packaging resources among their brands (deciding, for instance, which brands are really in need of a packaging update).
Without question, the march toward accountability in packaging design is irreversible, and it will almost inevitably involve the use of numerical data to measure effectiveness. As we all know, numbers can mislead. However, if they are gathered and used properly, numbers can also be a powerful common language for linking packaging design with marketing strategy—and ultimately for documenting the value of effective packaging. For if there is one rule within the marketing world, it is: That which is not measured is not fully valued.