A casual round of beers with friends can be a good learning opportunity. Someone may ask for feedback about their beer, or you’re all sampling a particular commercial beer for the first time—you focus your attention and gauge the experience. The caveat is that it’s a very subjective exercise. The brewer’s lack of confidence may push you to be overly critical, or what you’ve heard about a given brewery may set your opinion before you’ve even sipped the beer. A structured tasting, by contrast, distills that experience into something more objective by stripping away bias and undue influences.
Look at how a product marketing team runs a taste test. They don’t tell people exactly what they’re comparing; they simply prompt them for impressions. The same approach works for beer evaluation, whether the goal is assessing the results of a brewing experiment, training to become a beer judge, or even running a homebrew competition.
These different situations affect the specific steps you should follow, but there are some universal principles to consider.
Fundamentals
As with the marketing taste tests, the central tenet is to avoid bias and keep the focus on your subjects’ perception. A blind tasting, where the judges aren’t told exactly what they’re evaluating, removes many of the cues that could sway the results. If multiple beers are to be tasted, each should be served at the same temperature and in the same glassware, without any identifying details. The environment should be neutral and free of distractions.
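If you want a little help with the bookkeeping, a short script can handle the blinding for you. The sketch below is a minimal example in Python, using the common sensory-lab trick of random three-digit sample codes; the sample names are just placeholders for whatever you’re pouring.

```python
import random

def assign_blind_codes(samples, seed=None):
    """Give each sample a random three-digit code and return the serving
    order (what the judges see) plus the answer key (what the organizer
    keeps hidden until scoring)."""
    rng = random.Random(seed)
    codes = rng.sample(range(100, 1000), len(samples))
    key = {str(code): name for code, name in zip(codes, samples)}
    serving_order = list(key.keys())
    rng.shuffle(serving_order)
    return serving_order, key

order, key = assign_blind_codes(["Test batch", "Baseline", "Commercial example"])
print("Serve in this order:", order)
print("Answer key (organizer only):", key)
```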
It’s also best not to influence the subjects with excess information. It’s all right to communicate the goals—to rate or score beers relative to one another or to identify character differences—but avoid sharing expectations or describing how the beers may differ. For example, if you have a brewing experiment with multiple yeast strains, you’ll get more insight into the objective differences by not pointing the judges to only look for yeast character.
Tailoring the Protocol
The BJCP offers great guidance for running a competition, so I won’t cover that here. For the other situations, your process should highlight the factor you’re interested in. Ideally, you want to set up a direct comparison. Going back to our earlier examples, here’s how we might handle them.
Assessing an Experiment
A brewing experiment usually results in a set of beers to compare. You may have treated multiple mini-batches differently, or you may have your test batch and a baseline beer. There are a couple of different ways to pit them against one another. You could treat it like a mini-competition: get a panel of tasters, present the beers, and have them write down their observations and maybe score them. Their comments and results can help you evaluate the change you were investigating.
If you have a relatively small group of beers to contrast, you might handle it more like market research and present them to individual evaluators, discussing their impressions. One-on-one sessions allow you to change up the ordering and dive deeper into details without influencing other judges.
Another useful technique is to set up a triangle test, in which you present two identical beers and one that is different. The taster is asked to identify the odd sample and characterize the differences. Vary which beer, the test batch or the baseline, serves as the odd sample from run to run, and randomize the order of presentation. One advantage of this approach is that it gives you a sense of each judge’s sensitivity: if a judge can’t pick out the odd sample, you can discount the distinctions they report. On the other hand, if only a few of the judges can discern the unique beer, it may indicate that your change didn’t affect the perception of the beer. That might be a positive result if you were attempting a shortcut without compromising character. But if you were expecting a difference, it could mean your experiment needs rethinking: the change may have been too subtle, or the base beer may have overpowered it.
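Because a guessing taster has a one-in-three chance of picking the odd sample, a quick binomial check tells you whether the panel as a whole beat chance. Here’s a minimal sketch in Python; the judge counts in the example are made up for illustration.

```python
from math import comb

def triangle_p_value(n_judges, n_correct):
    """One-sided binomial p-value for a triangle test: the probability of
    getting at least this many correct picks if every judge were simply
    guessing (chance of a correct guess is 1/3)."""
    p = 1 / 3
    return sum(
        comb(n_judges, k) * p**k * (1 - p) ** (n_judges - k)
        for k in range(n_correct, n_judges + 1)
    )

# Example: 7 of 12 judges correctly pick the odd beer.
print(f"p = {triangle_p_value(12, 7):.3f}")  # roughly 0.07, borderline evidence of a difference
```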
Palate Training
If your intention is palate training, a structured tasting can be a good tool for learning off-flavors and elemental components. In this case, doctored beers are very effective. Select a mild base beer and adulterate one sample with a targeted off-flavor, such as diacetyl or chlorophenol. It’s even better if you dose two samples, one at the threshold of sensitivity and one well above it. The tasting then runs much like the triangle test: the subject is asked to identify the off-flavor, pick out which samples were doctored, and estimate the level of each dose.
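If you’re doctoring the samples yourself, the dose is simple dilution math: target concentration times sample volume divided by stock concentration. The sketch below is only an illustration; the 100 ppm diacetyl stock and the two target levels are assumed values, so check published threshold data or a commercial spiking kit’s instructions for whatever compound you’re actually using.

```python
def spike_volume_ml(stock_ppm, target_ppm, sample_ml):
    """Volume of stock solution to add so the sample hits the target
    concentration (C1*V1 = C2*V2, ignoring the tiny volume the spike
    itself adds to the glass)."""
    return target_ppm * sample_ml / stock_ppm

# Example: a 100 ppm diacetyl stock and 200 mL pours, with one sample
# near a typical threshold (~0.1 ppm) and one well above it (~0.5 ppm).
for target_ppm in (0.1, 0.5):
    ml = spike_volume_ml(stock_ppm=100, target_ppm=target_ppm, sample_ml=200)
    print(f"{target_ppm} ppm target: add {ml:.2f} mL of stock")
```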
Limitations
Keep in mind that palate sensitivity varies a lot from person to person, or even day to day for the same person. Also, the order in which the beers are tasted has an impact. Palate fatigue is one factor, but “anchoring” is another. The first beer almost always sets an expectation of intensity for the second. Multiple rounds of tastings with different judges and random order can help overcome these traps.
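One practical way to spread out those order effects is to rotate the serving sequence across the panel instead of leaving it entirely to chance. A minimal sketch, assuming generic sample labels:

```python
from itertools import permutations
from random import shuffle

def balanced_orders(samples, n_judges):
    """Cycle through every possible serving order so each sequence gets used
    roughly the same number of times across the panel, spreading anchoring
    and palate-fatigue effects around instead of stacking them on one order."""
    orders = list(permutations(samples))
    shuffle(orders)  # random starting point so the rotation isn't predictable
    return [list(orders[i % len(orders)]) for i in range(n_judges)]

for judge, order in enumerate(balanced_orders(["A", "B", "C"], 6), start=1):
    print(f"Judge {judge}: {order}")
```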