As we near the end of 2020, I would like to look back at the “Avocado case” one more time. This small project was something I worked on in 2019, and it gained great traction with many of those who saw it. In all honesty, it surprised me big time. I sure liked the idea we came up with as a team, but I never imagined that months later, people would continue to reach out to me, asking me to tell them more about “the Avocado case” they had heard about.

The Avocado case, huh? 

Now, before we dive into the how and why of the case, it might be good to tell you what this case was all about, since more people do not know about it than do.

The Avocado case was originally meant to be a proof of concept (POC). There were multiple reasons for us to build this POC, but first and foremost, we were looking for a way to learn and evangelise about the possibilities of Microsoft’s Dynamics 365 and Azure platforms, more specifically their options around Intelligent Cloud and Intelligent Edge. Imagine this as a concept car, the ones that you see at car shows now and then. Chances are small they will ever reach the production stage, but boy do they look cool!

One Avocado In Thousands 

A POC. Good. Now, what is it? Yep, we’ll get there, but let’s first take a closer look at the challenge we were trying to address, from a customer perspective. I honestly believe that technology makes sense if applied properly. The Avocado case focuses on avocados. Their quality, to be precise, with ripeness being one of the primary drivers of the product’s quality. Avocados should also be handled properly, since they are both fragile and expensive. Many avocado trading companies therefore have sorting and quality processes in place to determine the quality of the products. That includes determining the ripeness of the products and identifying the damaged ones. This is a process that still involves manual labour (hence the cost of your ready-to-eat avocado). Imagine if we could leverage the power of the cloud to do that.

Let’s get to it 

The first thing we needed was data. We needed lots of avocado pictures, in all different sorts and sizes, since that would allow us to improve the quality of the data model. We had lots of beautiful produce. We also had damaged produce: spotted avocados, bruised ones, ones that got damaged during handling. Closed ones, open ones, with or without the core, with or without stickers, one per photo, multiple per photo. We even included other types of fruit, to make sure our model would not recognise pears and oranges as avocados.

Next, our data set was uploaded to the cloud and we started our manual labour: labelling the photos. Yep, that’s manual. We had to go in and tell the cloud that this particular photo was an avocado, this one a pear and this one an orange. We had to label them with ripeness states, possible damages, whether the avocado seed was visible or not (to prevent confusion with severely damaged produce) and many more. Once we had fed and trained the model, it was time to test it.
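For readers curious what that labelling-and-training loop looks like in practice: the post doesn’t name the exact service we used, but Azure’s Custom Vision service is a natural fit, so here is a minimal sketch assuming that service and its Python SDK. The project name, tags, keys and file paths are all placeholders for illustration, not taken from the actual POC.

```python
# A minimal sketch of the labelling-and-training loop, assuming the
# Azure Custom Vision service. Keys, names and paths are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# One project; "Multilabel" because a photo can be both "avocado" and "ripe".
project = trainer.create_project("Avocado QA", classification_type="Multilabel")
tags = {name: trainer.create_tag(project.id, name)
        for name in ["avocado", "pear", "orange", "ripe", "bruised", "seed-visible"]}

# "Telling the cloud" what each photo shows: upload it with its labels.
with open("photos/avocado_001.jpg", "rb") as image:
    trainer.create_images_from_data(
        project.id, image.read(),
        tag_ids=[tags["avocado"].id, tags["ripe"].id])

# Kick off training; a real script would poll until the iteration completes.
iteration = trainer.train_project(project.id)
```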

We again started to upload pictures, ones the model had not seen before, and we asked the model to judge them based on what we had taught it about fruit, and avocados in particular. This resulted in values. More data. The model was (for example) 75% sure this was an avocado and 63% sure it was ripe. Plus, it was 78% certain it had bruises. Oh, it did that per individual photo. Cool huh? Mwah. Yes, we were excited about the way we made it work, but kinda disappointed at the same time.
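Under the same Custom Vision assumption as the training sketch above, asking the trained model to judge an unseen photo and getting those per-tag confidence values back might look like this; the published iteration name and file path are again made up for the example.

```python
# Scoring an unseen photo against the trained model (same assumptions
# as the training sketch; names and keys are placeholders).
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"})
predictor = CustomVisionPredictionClient(ENDPOINT, credentials)

with open("photos/unseen_042.jpg", "rb") as image:
    results = predictor.classify_image(
        "<project-id>", "avocado-qa-v1", image.read())

# One confidence value per tag, per photo: the "more data" of the story.
for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.0%}")
# e.g.  avocado: 75%  /  ripe: 63%  /  bruised: 78%
```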

We had to take this a level higher

Tying all Avocados Together

We did that in two ways. First, we integrated this avocado-ranking process into a business process. We tied it to a customer complaint, allowed a user to initiate a “cloud test” straight from their email client (where complaints tend to arrive) and built in approvals to make it work for a fruit trader. Very cool, but nobody gets very excited about the ability to press a button in your email client. That is why I came up with something else.

This needed to become physical

Next, we ordered a plain conveyor belt, 25 plastic avocados, two baskets, a dummy camera and a 24-inch monitor. We fixed the camera over the conveyor belt and created a BI dashboard that showed a handful of metrics: avocados counted, ripe avocados, damaged avocados and several others. The metrics got “live” updates from the camera through a few scripts running in the background, which we screen-recorded, as we needed it to run automatically and on repeat. And that’s when it clicked. People now got it. They walked up to the conveyor belt, saw the avocados, looked at the monitor and were like: “oh, that’s interesting”. While the belt was out at Microsoft Netherlands’ Innovation Alley, people started sharing photos of it on LinkedIn and via email. Never had we received that much feedback on a marketing initiative over such a long period.
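For the curious: those dashboard metrics are really just the per-photo confidence values from earlier, folded into running counts. A minimal sketch of that aggregation step, with a threshold and tag names that are purely illustrative:

```python
# Hypothetical aggregation of per-photo confidences into dashboard counts.
# The 0.5 threshold and tag names are illustrative, not from the real setup.
from dataclasses import dataclass

@dataclass
class BeltMetrics:
    counted: int = 0   # avocados seen on the belt
    ripe: int = 0
    damaged: int = 0

def update(metrics: BeltMetrics, confidences: dict[str, float],
           threshold: float = 0.5) -> None:
    """Fold one photo's tag confidences into the running counts."""
    if confidences.get("avocado", 0.0) >= threshold:
        metrics.counted += 1
        if confidences.get("ripe", 0.0) >= threshold:
            metrics.ripe += 1
        if confidences.get("bruised", 0.0) >= threshold:
            metrics.damaged += 1

metrics = BeltMetrics()
update(metrics, {"avocado": 0.75, "ripe": 0.63, "bruised": 0.78})
print(metrics)  # BeltMetrics(counted=1, ripe=1, damaged=1)
```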

Stick with the physical 

Which brings me to where I wanted to take this story. Turning this from a digital POC into a physical setup for people to watch, touch and experience was what we needed to make this tick. Despite the conveyor belt being a fraction of the size of the ones used in food manufacturing companies, nothing more was required to get people imagining how this would look in real life.

Does that mean our minds are biased towards the physical over the digital? Do we need that “assistance” to better understand things? What does that mean for marketing and product marketing in a digital world? Should marketeers be striving to bring physical experiences into the digital world? What does that mean for the future of mixed and augmented reality?

I do not own the answers to those questions, since the answers are not defined by me but by the people who get to see and experience what I do as a marketeer. I do know, though, that I was intrigued by what happened here, and I will look to reuse this experience in future projects!