Sensory Evaluation of Canned Beer 👃👅🍺

My once-a-month rule counts if it’s within the confines of the next month, right? Anyway, I’ve had this post half written and outlined since I last posted here in early March. Quick Aud update (I’m sure you’ve all been waiting on pins and needles)! Since my last post I treated myself to a Sony mirrorless camera that I’ve largely used to shoot pictures of birds (check out my Instagram for some of them) and of my cats, and I’m excited to keep improving and get deep into this hobby. I already have three lenses and am considering getting more, plus a tripod. I also took a trip to Vermont with the TalkBeer crew to drink delicious beers and sing karaoke. I’m sure I’ll get a post about that trip together soon. I took quite a few pictures with the new camera :). I’ve also been on a reading kick that has been taking up most of my time. In the past month I've finished American Sour Beers, Wild Brews, and Brewing. Hoping to keep the momentum going, finally tackle my backlog of brewing books, and then buy more!

So, Why this Topic?

I’ve been in a bit of a creative rut when it comes to NTFO these past few months. As previously noted, it’s largely due to being busy at work. Life also just gets in the way and, most importantly, I don’t want this blog to feel like a chore. This is a passion project after all. What got me excited to write again, and to give this blog a second (...third?) life, was a project I did at work. In this post, I’m going to talk about the successes and failures of my experience designing a sensory evaluation of canned beer.

The Problem to be Solved and the Approach

At my current place of employment, we have three core brands that we package regularly on our 4-head canning line. The quality part of my job involves maintaining a cold and ambient/warm archive of all of our canned and bottled beer. This is done for a variety of reasons (internal investigation of customer complaints, keeping an eye on package integrity over time, etc.). For our core brands, I was also keeping an extra can once per month with the intent of doing a 6-month evaluation of beer in package.

Initially, I was collecting cans to have the option to do a vertical (tasting the same beer packaged at different times; more traditionally done with beers meant for cellaring/aging, typically separated by a year) to see how we and our canning line were doing. However, the topic of potentially adding a “best by” date came up in discussion and gave the archive an additional purpose.

After enough time, I had six cans of each core brand. I needed to develop a way to present the samples to a panel and come up with an objective to guide the tasting. Setting clear expectations for your sensory participants is an important prerequisite to gathering meaningful data. Keeping in mind previous discussions of adding a “best by” date to cans, I felt the ASBC Method of Analysis Sensory Ranking Test (Sensory Analysis - 11 [you’ll need an account to access this]) was the best fit. The method instructs you to present samples to participants in a random order; participants taste them and answer a question by ranking them.

The Method provides an example answer form, which I used to ask the main question, “Which sample do you like the best?”, and a secondary question aiming to build on the first question’s determinations: “Which sample do you feel bridges the gap between true to brand and out of specification?” The second question was an effort to find where the best by time might land. I presented this form along with six samples, served in random order (i.e., not from newest to oldest), each identified with a randomly generated three-digit code to keep the tasting unbiased. Each person received the same pour from each can (the first person got the first pour, the second person got the second pour, and so on) in an effort to have uniformity within each participant’s set of samples. My theory being that the first pours will be the clearest and the later pours less so, due to disturbed sediment at the bottom of the can. It’s important not to allow visual distinctions between samples that may influence an assessor; this particular evaluation is meant to rely on the other senses. Ideally an opaque container would be used, but I do not yet have those in my sensory arsenal. Obviously, if you were evaluating visual preference or ranking clarity, this would not apply.
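If I were to script this part of the setup, a minimal sketch in Python might look like the following. The sample names, panel size, and code range are hypothetical stand-ins, not details from the actual panel; and where the method simply calls for random presentation, this sketch randomizes the serving order per panelist (a single shared random order would work too).

```python
import random

# Hypothetical sample names and panel size -- swap in your own.
samples = ["month_1", "month_2", "month_3", "month_4", "month_5", "month_6"]
n_panelists = 8

# Give every can a unique, random three-digit code so the labels
# carry no information about the can's age.
codes = random.sample(range(100, 1000), k=len(samples))
blind_key = dict(zip(samples, codes))

# Each panelist gets their own randomized serving order so position
# effects don't systematically favor any one can.
for panelist in range(1, n_panelists + 1):
    order = random.sample(samples, k=len(samples))
    print(f"Panelist {panelist}: serve codes {[blind_key[s] for s in order]}")

# Hold onto blind_key until all forms are in -- it's the "answer key."
print("Answer key:", blind_key)
```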

Once everything was fully set up (all six samples, answer forms, and working pens), participants arrived to complete the evaluation. I gave a short explanation of what was in front of them, but relied on previous conversations about the tasting and on the answer form itself. Their favorite would receive a 1 and their least favorite a 6. Upon completion, I provided everyone with “answer keys” so they could see when each sample came from and whether they found a correlation in their rankings.

Data Collection and Evaluation

After everyone had finished, the group had a conversation about the beers and compared results with one another. This was more informal, to see if we had general agreement about the favorites, the least favorites, and where we felt the beer was starting to fall off (the can that “bridged the gap”). Something interesting that came up was that the freshest can was not always the favorite. It seems that for some brands, a bit of time in the can improved how the beer tasted. Fresh cans, in some cases, had an aggressive “green” bitterness. This opened the important discussion of how to brew a more balanced beer so it tastes its best freshly packaged.

After our informal chats, I took the answer forms and compiled the data. Following the ASBC Method, I determined rank sums and rank means. In the outline of this experiment, the sample with the smallest rank sum can be assumed to be the favorite, and the one with the highest the least favorite. I also calculated rank means as another way to look at the data and to break ties. Finally, I compiled the data from the second question to see if there was majority agreement on a general time frame.
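To make the tabulation concrete, here’s a small Python sketch of the rank sum / rank mean math. The ballots below are made-up placeholders, not the panel’s actual responses.

```python
# Each row is one panelist's ballot: a rank for every can,
# 1 = favorite, 6 = least favorite. Numbers are invented placeholders.
cans = ["month_1", "month_2", "month_3", "month_4", "month_5", "month_6"]
ballots = [
    [3, 1, 2, 4, 5, 6],
    [2, 1, 3, 5, 4, 6],
    [1, 2, 4, 3, 6, 5],
    [2, 3, 1, 4, 5, 6],
]

for i, can in enumerate(cans):
    ranks = [ballot[i] for ballot in ballots]
    rank_sum = sum(ranks)
    rank_mean = rank_sum / len(ranks)
    print(f"{can}: rank sum = {rank_sum}, rank mean = {rank_mean:.2f}")

# Under this design, the smallest rank sum marks the panel favorite.
```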

The next steps for this data set will involve comparing results to total package oxygen data from the canning day (to see, for instance, if there's a relationship between preferred cans and low oxygen) and to the hopping schedule for that particular batch (to see if there's a relationship between how much and when hops were added and rank). Additionally, one brand had two ties for both rank sums and rank means; I’ll need to spend some time there to determine the true ranking and whether the differences are significant. When I sent the initially compiled data out to the participants, I ranked the fresher can higher in the event of a tie, which is almost certainly a simplification.
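For the significance question, a nonparametric test like Friedman’s (which ranked data lends itself to) and a rank correlation against the oxygen numbers seem like reasonable starting points. Nothing below is from the actual data set; the ranks and TPO readings are invented, and scipy is just one convenient tool for the job.

```python
from scipy.stats import friedmanchisquare, spearmanr

# Columns of the (invented) ballot table above: all panelists' ranks
# for each can, oldest pull to freshest.
month_1 = [3, 2, 1, 2]
month_2 = [1, 1, 2, 3]
month_3 = [2, 3, 4, 1]
month_4 = [4, 5, 3, 4]
month_5 = [5, 4, 6, 5]
month_6 = [6, 6, 5, 6]

# Friedman test: do the cans differ at all, beyond chance?
stat, p = friedmanchisquare(month_1, month_2, month_3, month_4, month_5, month_6)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# Hypothetical total package oxygen readings (ppb) per canning day,
# checked against mean rank for a monotonic relationship.
mean_ranks = [2.0, 1.75, 2.5, 4.0, 5.0, 5.75]
tpo_ppb = [45, 40, 60, 85, 110, 120]
rho, p = spearmanr(mean_ranks, tpo_ppb)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```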

What Did and Did Not Work

I believe rank sums are an effective way to look at the cans, answer the question of preference, and stimulate a discussion of how the product is doing in cans over time. It also got us tasting and talking about the beer, which can be surprisingly tricky as the brewery gets busier, more tanks are installed, and new projects are undertaken. Tasting the beer sometimes takes a back seat. Contrary to popular belief, brewery employees don’t drink all day.

As for what didn't work... it wasn’t until I was preparing these panels that I realized I should’ve collected an extra can from every run, not just one per month, so I could've had more options. Unfortunately, the cans were not evenly spaced in time. Admittedly, we only can as often as we need to, which makes an even collection difficult.

I could have done a better job of laying out expectations. For instance, the group was a bit chatty, which is frowned upon in the sensory world, as it can influence the evaluations of other assessors and produce poor data. I also could have better prepared the assessors to combat sensory fatigue by recommending a plan of attack for the exercise. Some assessors nearly finished all six 4-oz samples in their entirety, which can certainly overwhelm the palate.

Finally, it wasn’t until after the fact that I realized how ineffective the second question was. Some assessors disregarded it completely, and regardless, the question was too vague. Perhaps the best approach would have been to discuss it with each assessor in person afterwards, or to include a more straightforward question, for example: “Which samples do you feel fall within a ‘best by’ window and which do not?” I need to put more thought and research into this particular portion.

PHEW! Did you make it?

As I continue in this industry and this particular field, I keep being drawn to sensory analysis. My interests in statistics and science are beautifully married in this discipline. My favorite thing about beer is the flavors it can evoke, and I truly enjoy reflecting on and picking them out whenever I’m enjoying one. One of my overarching goals at my current job is to create a robust sensory program, and this was just a first stab at it. I want to make sensory as ingrained in the day as filling kegs and cleaning tanks! Another step I’ve taken is having our team taste the beer after it has been carbonated, but before it goes into any packages. I told them, “Whoever is ‘tapping’ the tank for the first time should be tasting that beer.” It’s a small step, but an important one. This will also hopefully catch any problems and/or off flavors before the beer is packaged and sent out into the market. It also simply gets everyone tasting and thinking about the product!

TL;DR: Aud designed a sensory evaluation of cans. It had both successes and failures. Sensory science is dope.

Comments/Concerns/Questions? Lemme know!
