![E74A3BF9-EE59-4564-A697-DCCE24D54853.jpeg](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1587086190991-3QLI7X7INRGOR55S2W3U/E74A3BF9-EE59-4564-A697-DCCE24D54853.jpeg)
Sounds and Models: A Machine Learning Journey
Google Cloud
Experience Design // Interactive Design
![Leveraging the power of Machine Learning can be a challenging concept for (potential) customers to wrap their heads around. In partnership with Google Cloud, our team created an accessible, yet rich experience that shows how easily a machine learning model can be trained and deployed.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599438940872-KABKH1N9YN15V6NGLWW9/IMG_1318.jpg)
Leveraging the power of Machine Learning can be a challenging concept for (potential) customers to wrap their heads around. In partnership with Google Cloud, our team created an accessible yet rich experience that shows how easily a machine learning model can be trained and deployed—all through the metaphor of making music. Our work spanned the entire experience: strategy, user experience definition, 3D design, visual design, development, and fabrication.
![At the first station, attendees gather (create) data by plucking strings on this large-scale experiential instrument. As they create sounds, the sound waves are converted into spectrograms, yielding a visual file that can be easily understood by image recognition tools.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599445987295-JB3PUS001XTQILHTM92K/station_one.gif)
At the first station, attendees gather (create) data by plucking strings on this large-scale experiential instrument. As they create sounds, the sound waves are converted into spectrograms, yielding a visual file that can be easily understood by image recognition tools.
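The exhibit's actual audio pipeline isn't shown here, but the core idea—turning a plucked sound into a spectrogram image that an image-recognition model can classify—can be sketched with SciPy (an assumption; the project may have used different tooling, and all names below are illustrative):

```python
import numpy as np
from scipy import signal

def sound_to_spectrogram(samples, sample_rate=22050):
    """Convert a 1-D audio signal into a 2-D spectrogram (frequency x time)."""
    freqs, times, spec = signal.spectrogram(samples, fs=sample_rate, nperseg=256)
    # Log-scale the power so quieter harmonics stay visible in the image.
    return 10 * np.log10(spec + 1e-10)

# A one-second 440 Hz "plucked string" with exponential decay, as a stand-in
# for a real recording from the instrument.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
pluck = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
image = sound_to_spectrogram(pluck, sr)  # 2-D array, ready to save as an image
```

Rendering this array as a picture yields the "visual file" the caption describes: each instrument's harmonics form a distinctive pattern that image-recognition tools can learn.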
![Each string represents a unique instrument, and connects above and below to the surrounding architecture. Depending on where a string is touched, a different note is played.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450092803-XAOXYJ4EE8H1AT80P4OY/Station1-01-InstrumentPLayer.png)
Each string represents a unique instrument, and connects above and below to the surrounding architecture. Depending on where a string is touched, a different note is played.
![Different states of the sound conversion station.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450076693-2EX99CJR06LRXVKHKSM2/station_one.jpg)
Different states of the sound conversion station.
![At the second station—leveraging the data they created at the previous station—attendees train a Machine Learning model.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599438963746-WM0ZCWOCE5YW25LH9P6X/IMG_1512.jpg)
At the second station—leveraging the data they created at the previous station—attendees train a Machine Learning model.
![They can choose to do this automatically with AutoML, or manually, by adjusting hyperparameters through tactile sliders on a playful mixing board.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599438938482-5AXKJDSH4FLEKY2SVYL0/IMG_1451.jpg)
They can choose to do this automatically with AutoML, or manually, by adjusting hyperparameters through tactile sliders on a playful mixing board.
![Hyperparameters are adjusted by moving the sliders on the mixing board.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599443410941-37DK23QI92JX3CMLFYI6/Station2-00-Jupyter-00-02.png)
Hyperparameters are adjusted by moving the sliders on the mixing board.
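The exhibit's real parameter set isn't documented here, but the mixing-board interaction amounts to mapping raw slider positions onto hyperparameter ranges. A minimal sketch, with hypothetical parameters and ranges chosen for illustration:

```python
import math

# Hypothetical hyperparameter ranges; the exhibit's actual parameters are unknown.
SLIDER_RANGES = {
    "learning_rate": (1e-4, 1e-1),  # mapped on a log scale
    "batch_size":    (8, 128),      # mapped linearly, rounded to an int
    "dropout":       (0.0, 0.5),    # mapped linearly
}

def sliders_to_hyperparams(positions):
    """Map raw slider positions in [0, 1] to concrete hyperparameter values."""
    lo, hi = SLIDER_RANGES["learning_rate"]
    log_lo, log_hi = math.log10(lo), math.log10(hi)
    lr = 10 ** (log_lo + positions["learning_rate"] * (log_hi - log_lo))

    b_lo, b_hi = SLIDER_RANGES["batch_size"]
    batch = int(round(b_lo + positions["batch_size"] * (b_hi - b_lo)))

    d_lo, d_hi = SLIDER_RANGES["dropout"]
    dropout = d_lo + positions["dropout"] * (d_hi - d_lo)

    return {"learning_rate": lr, "batch_size": batch, "dropout": dropout}

params = sliders_to_hyperparams(
    {"learning_rate": 0.5, "batch_size": 1.0, "dropout": 0.0}
)
```

A log scale for the learning rate is a common choice because useful values span several orders of magnitude; a linear slider would waste most of its travel on values that barely differ.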
![At the third station, attendees test the effectiveness of their model by composing and playing a song on the large-scale music box. Detents on each slider provide tactile feedback for an enhanced note-composing experience.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599445361999-ITNML4EZFJFQOB9OEONP/IMG_1444.jpg)
At the third station, attendees test the effectiveness of their model by composing and playing a song on the large-scale music box. Detents on each slider provide tactile feedback for an enhanced note-composing experience.
![Attendees choose which model to use: the one they created, the AutoML model, or one of many others.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450860234-RGTNT6U6OKPE5RC31TQ0/Station3-01-Instructions.png)
Attendees choose which model to use: the one they created, the AutoML model, or one of many others.
![A simple animation guides attendees through instrument selection.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450861607-KZAI8GL5SCP3VWOLRPNH/Station3-02-MoveSliders.png)
A simple animation guides attendees through instrument selection.
![Each dot corresponds to a tactile detent on the physical slider. Depending on where the slider is positioned, a different note from that instrument is played.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450681875-W0V8BC981BP78VF5CS8M/MoveTheSlider-20190312.gif)
Each dot corresponds to a tactile detent on the physical slider. Depending on where the slider is positioned, a different note from that instrument is played.
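The detent-to-note mapping can be sketched as a simple quantization: snap the slider's continuous position to the nearest detent and look up that detent's note. The note names below are hypothetical; the exhibit's actual scale isn't documented here:

```python
# Hypothetical pentatonic scale, one note per detent.
NOTES = ["C4", "D4", "E4", "G4", "A4"]

def slider_to_note(position, notes=NOTES):
    """Snap a slider position in [0, 1] to the nearest detent and return its note."""
    index = min(round(position * (len(notes) - 1)), len(notes) - 1)
    return notes[index]
```

Quantizing to detents means attendees can't land "between" notes, which keeps every composition musical regardless of how precisely they position the sliders.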
![Attendees spin the crank on the large-scale music box to play their notes.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599439603959-3KYKKQ3I3MYAUW0D9SIP/station_three.gif)
Attendees spin the crank on the large-scale music box to play their notes.
![The sound plays audibly and is captured on-screen in real time.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450862026-EBNWOMGY0CTUDPQEDFHS/Station3-04-Generating.png)
The sound plays audibly and is captured on-screen in real time.
![Real-time predictions come back as they’re ready, so a simple animation builds to show this progress.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599450262453-HR5FRHGATFLD4LTCCU5F/GeneratingPredictions.gif)
Real-time predictions come back as they’re ready, so a simple animation builds to show this progress.
![Leveraging the trained model, predictions for each instrument are made and presented against the actual instruments, showing how accurate the model was.](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1599445651908-A6IMRD19LPK95L1OMJ7Z/Testing_predictions_sm.gif)
Leveraging the trained model, predictions for each instrument are made and presented against the actual instruments, showing how accurate the model was.
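The scoring shown on screen boils down to a per-note comparison of predicted versus actual instruments. A minimal sketch, with made-up example data standing in for the exhibit's real predictions:

```python
def score_model(predictions, actual):
    """Compare predicted and actual instruments note by note; return matches and accuracy."""
    matches = [p == a for p, a in zip(predictions, actual)]
    accuracy = sum(matches) / len(matches)
    return matches, accuracy

# Illustrative data only; not from the exhibit.
actual = ["piano", "guitar", "drum", "violin"]
predicted = ["piano", "guitar", "drum", "cello"]
matches, acc = score_model(predicted, actual)  # acc == 0.75
```

Presenting the per-note matches alongside the overall accuracy, as the station does, makes the model's behavior concrete: attendees can see exactly which of their instruments the model confused.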
![E74A3BF9-EE59-4564-A697-DCCE24D54853.jpeg](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1587086167136-UY2OU9TH0A31YNA4WJ7G/E74A3BF9-EE59-4564-A697-DCCE24D54853.jpeg)
![](https://images.squarespace-cdn.com/content/v1/57f69f0d2994ca5463a503cf/1587085930248-N93LS8Q8WN3TZPE95XBV/image-asset.jpeg)
A full demonstration of all three stations.
Project Team
Jamie Barlow, Thomas Ryun, Ryan Greenhalgh, Marcus Guttenplan, Justin Lui, Jai Sayaka, James Feser, Mike Roth, Tyler Adamson
Created at Sparks