Digital Lab #2: Machine Learning

This post shows the output of the second Base Digital Lab. The Digital Lab is a recurring internal experimental workshop at Base in which a designer and a developer are paired for two days. The goal is to explore new web technologies, and what they could mean for brands tomorrow. Want to know more? Check our introduction to the Digital Lab.

Far from the image popularized by sci-fi, with its swarms of killer robots and evil digital overminds, artificial intelligence (AI), and Machine Learning in particular, has come a long way and is now more accessible and available for experimentation than ever.

However, it’s pretty hard for newcomers to grasp the limitations and the skill set needed to make it work. Even the full range of the technology’s capabilities remains blurry. So we decided to explore it: the field is vast but creative, and we were confident we could connect our clients’ needs with our work.

A little bit of context

Over the last few years, tech giants have gradually made their Machine Learning tools and frameworks open source, creating a whole new playground for developers, creatives and corporations.

Starting with the bigger players, we can already list a variety of invisible but effective day-to-day Machine Learning use cases. Netflix personalizes suggestions and artwork based on users’ tastes. Google Photos automatically sorts and categorizes images based on their content. Spotify suggests playlists and songs based on users’ listening habits. And of course, voice assistants like Alexa, Siri or Google Assistant are built entirely on AI.

The cultural world has also started teaming up with Machine Learning experts to enhance museum collections, improve the visitor experience and create engaging visits that spark audiences’ curiosity. For example, Google created a tool that links hand-drawn doodles to artworks in its digital art collection. The more detailed the drawing, the more precise the result:

Tate Britain also created a brilliant experiment called Recognition, an artificial intelligence program that compared artworks with up-to-the-minute photojournalism.

And of course, the community has used these new tools to create its own interpretations of the technology :-p

Finding a starting point

Luckily for us, Google recently released an updated version of its noob-friendly Machine Learning tool: Teachable Machine. It’s built with TensorFlow.js, the JavaScript side of the company’s Machine Learning ecosystem.

Teachable Machine offers three different learning models: image recognition with MobileNet, pose recognition with PoseNet and sound recognition with SpeechCommands18w. Daniel Shiffman produced a great introductory series about the tool on YouTube.

It is incredibly simple to use and thus accessible to designers and creatives, which allowed us to iterate on our ideas very quickly. As a reminder, the Base Digital Lab can only take up a maximum of two days of our time, and we wanted examples relevant enough to showcase.
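
For the curious, here is roughly what using an exported Teachable Machine model looks like in the browser. This is a minimal sketch built on Google’s @teachablemachine/image wrapper; the model URL is a placeholder standing in for your own export.

```ts
// Minimal sketch: loading an exported Teachable Machine image model in the
// browser and classifying a video frame. MODEL_DIR is a placeholder URL.
import * as tmImage from '@teachablemachine/image';

const MODEL_DIR = 'https://example.com/my-model/'; // hypothetical export location

async function classifyFrame(video: HTMLVideoElement): Promise<void> {
  // A Teachable Machine export ships a model.json plus a metadata.json
  const model = await tmImage.load(
    MODEL_DIR + 'model.json',
    MODEL_DIR + 'metadata.json'
  );

  // predict() returns one { className, probability } entry per trained class
  const predictions = await model.predict(video);
  for (const p of predictions) {
    console.log(`${p.className}: ${p.probability.toFixed(2)}`);
  }
}
```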

The process

After a quick brainstorm to gather achievable experiment ideas, the time came to train our models. Our first attempts led to some memorable moments.

ING, Voting using hand gestures

Last year, Base had the opportunity to collaborate with ING to create and design an exhibition at the ING Art Center in Brussels. As part of the overall work, we created a web app (PWA) that also serves as the audio guide for the exhibition.

One of the key features of the app is a voting system, letting users like or dislike artworks in order to open up a debate. Within the exhibition, the app works through QR codes, displayed on the artwork labels.

To go further, we tried to envision an interactive installation that would be more engaging for visitors. We removed the interface and brought in interactivity through simple hand gestures. As you can see in the video below, the webcam detects a thumbs up or a thumbs down and votes Love or Hate accordingly!
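
For the technically inclined, the voting logic boils down to a classification loop like the sketch below. It is a hypothetical reconstruction, assuming a model trained with three classes ('thumbs-up', 'thumbs-down' and 'idle') and a castVote() hook into the app; the confidence threshold is a guess.

```ts
// Hypothetical sketch of the gesture-voting loop, using the webcam helper
// from @teachablemachine/image. Class names and threshold are assumptions.
import * as tmImage from '@teachablemachine/image';

function castVote(vote: 'love' | 'hate'): void {
  console.log(`Voted: ${vote}`); // hypothetical hook into the voting app
}

async function startVoting(model: tmImage.CustomMobileNet): Promise<void> {
  const webcam = new tmImage.Webcam(400, 400, true); // width, height, mirrored
  await webcam.setup(); // asks for camera permission
  await webcam.play();

  async function loop(): Promise<void> {
    webcam.update(); // grab the latest frame
    const predictions = await model.predict(webcam.canvas);
    const best = predictions.reduce((a, b) =>
      a.probability > b.probability ? a : b
    );
    // Only vote when the model is reasonably sure
    if (best.probability > 0.9) {
      if (best.className === 'thumbs-up') castVote('love');
      if (best.className === 'thumbs-down') castVote('hate');
    }
    window.requestAnimationFrame(loop);
  }
  window.requestAnimationFrame(loop);
}
```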

Try the experiment

Bozar, Mapping body poses to artworks

Over the years, Base has had the privilege of working with Bozar in Brussels. As their latest show was dedicated to Keith Haring, we took the opportunity to play around with it. For the exhibition, we imagined an interactive installation that matches visitors’ poses with the artist’s artworks.

Here we used pose recognition and fueled our model with rather… flexible data.

And some of our team members quickly got hooked on the game.

In the video below you can see that the webcam detects my pose, then shows an artwork that matches the pose.
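
Under the hood this is a two-step process: PoseNet first extracts body keypoints from the webcam image, and the classifier trained in Teachable Machine then matches those keypoints to a class. A rough sketch, assuming one trained class per artwork (the 'artwork-N' names are made up):

```ts
// Sketch of the pose-matching step with the @teachablemachine/pose wrapper.
// Assumes the model was trained with one class per Haring artwork.
import * as tmPose from '@teachablemachine/pose';

async function matchPose(
  model: tmPose.CustomPoseNet,
  webcamCanvas: HTMLCanvasElement
): Promise<string> {
  // Step 1: PoseNet turns the frame into a set of body keypoints
  const { posenetOutput } = await model.estimatePose(webcamCanvas);

  // Step 2: the custom classifier scores the keypoints against each class
  const predictions = await model.predict(posenetOutput);
  const best = predictions.reduce((a, b) =>
    a.probability > b.probability ? a : b
  );
  return best.className; // e.g. 'artwork-3' → display the matching artwork
}
```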

Try the experiment yourself

Maison Dandoy, Matching products to drawings

Maison Dandoy is a famous Brussels biscuit brand, a long-time client of Base and a great subject for experimentation, as its tone is already fresh and playful. We loved Quick, Draw! by Google and tried to create a similar experience for the Maison Dandoy shops.

The experiment is aimed at kids visiting the shops, in the form of a contest: they are asked to draw their favorite cookie, and the system then tries to match these doodles with actual cookies and gives feedback.

To train the image recognition model, we had to produce a variety of sketches as references for the drawings that kids would later make.

Then we fed our model the data and interpreted the output to create the guessing-game behavior. In the video below, you can see the system find the Maison Dandoy biscuits that match the doodle drawn in front of the webcam.
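
The matching itself is plain image classification applied to the drawing canvas, along these lines (a sketch; the biscuit class names are invented for illustration):

```ts
// Sketch: classifying a doodle drawn on a <canvas>, assuming an image model
// trained with one class per biscuit (names below are made up).
import * as tmImage from '@teachablemachine/image';

async function guessBiscuit(
  model: tmImage.CustomMobileNet,
  drawingCanvas: HTMLCanvasElement
): Promise<void> {
  const predictions = await model.predict(drawingCanvas);

  // Sort so the most likely biscuit comes first
  predictions.sort((a, b) => b.probability - a.probability);
  const best = predictions[0];
  console.log(`Looks like a ${best.className}!`); // feedback shown to the kid
}
```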

Try the experiment yourself!

Delphine had some downtime while I was coding, so she went even further by designing custom Dandoy biscuit packaging printed with customers’ doodles, emphasizing the uniqueness of the experience.

Mitsulift, Animating a logo based on speech

Mitsulift is an elevator company that Base branded a few years ago. The notion of going up and down is inherent to the brand’s identity and iconography, so it seemed almost natural to enhance the logo by letting it move in response to users’ voices.

In the video below, you will see that the logo moves up when I say "Up" and moves back down when I say "Down".
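
Audio projects exported from Teachable Machine run on TensorFlow’s speech-commands model. The listening loop could look something like this sketch; the 'up' and 'down' labels match our training classes, while moveLogo() stands in for the actual animation code:

```ts
// Sketch of the voice-driven logo, using @tensorflow-models/speech-commands
// (the model behind Teachable Machine audio projects). modelDir is a placeholder.
import * as speechCommands from '@tensorflow-models/speech-commands';

function moveLogo(direction: 'up' | 'down'): void {
  console.log(`Logo goes ${direction}`); // hypothetical animation hook
}

async function listenForDirections(modelDir: string): Promise<void> {
  const recognizer = speechCommands.create(
    'BROWSER_FFT',            // use the browser's native FFT for features
    undefined,                // no base vocabulary: load the custom model
    modelDir + 'model.json',
    modelDir + 'metadata.json'
  );
  await recognizer.ensureModelLoaded();

  const labels = recognizer.wordLabels(); // e.g. ['Background Noise', 'down', 'up']
  recognizer.listen(
    async (result) => {
      const scores = result.scores as Float32Array;
      const top = labels[scores.indexOf(Math.max(...scores))];
      if (top === 'up') moveLogo('up');
      if (top === 'down') moveLogo('down');
    },
    { probabilityThreshold: 0.85, overlapFactor: 0.5 }
  );
}
```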

Try the experiment yourself

What did we learn?

Well… A lot. More than we could have imagined.

First, we learned that Machine Learning is easy to use, or at least easy to experiment with. We only covered three models in this article, the ones available through Teachable Machine. But if you wish to go further, ml5.js is a more exhaustive library, also built on top of TensorFlow.js, that gives access to many more models.
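
To give a taste, a stock MobileNet classifier in ml5.js fits in a few lines. A sketch based on the ml5 0.x API, assuming the page loads the library through a script tag:

```ts
// Sketch: image classification with ml5.js (v0.x API), loaded via a
// <script> tag, hence the global declaration.
declare const ml5: any;

const classifier = ml5.imageClassifier('MobileNet', () => {
  const img = document.querySelector('img');
  if (!img) return;
  classifier.classify(img, (error: Error | null, results: any[]) => {
    if (error) return console.error(error);
    // results arrive as [{ label, confidence }, ...], most confident first
    console.log(results[0].label, results[0].confidence);
  });
});
```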

We surprised ourselves with the quality of output we managed to produce in only two days. This would have been unachievable a few years ago.

We also realized that Machine Learning can be pretty low-tech: the only hardware we used to produce these experiments was a laptop equipped with a webcam.

Machine Learning is accessible through open-source tools and frameworks (unlike the topic of our previous Digital Lab, Instagram filters, which requires users to go through the Facebook ecosystem). And since it runs in the browser, it is easy to share, opening up very interesting opportunities for brands.

Well, this was a lot of fun. On to the next digital lab!

Shout-out to David Horsler, an intern at the time of the Lab, who was a great help.