David Erickson discusses how Spotify's Discover Weekly algorithm works during this segment of Episode 207 of the Beyond Social Media Show: [Link]
How can Spotify’s Discover Weekly playlist feature be so accurate in recommending songs you’d like?
A weekly playlist of 30 recommended songs
The article is fairly technical, but it essentially boils down to three recommendation models that Spotify combines:
1. Collaborative Filtering
Think of how Netflix uses its star-rating system to recommend movies to similar users.
Instead of the explicit feedback of a rating system, Spotify uses implicit feedback: specifically, the stream counts of the tracks we listen to, plus additional streaming signals such as whether a user saved the track to their own playlist or visited the artist's page after listening.
If you and I like three of the same songs, collaborative filtering results in recommending my fourth song to you and your fourth song to me. Spotify does this using massive data sets.
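The "your fourth song for my fourth song" idea can be sketched in a few lines. This is a minimal illustration, not Spotify's implementation (which factorizes a huge play-count matrix); the users, songs, and play counts below are invented.

```python
from math import sqrt

# Hypothetical implicit-feedback data: play counts per user (names invented).
plays = {
    "you": {"Song A": 12, "Song B": 5, "Song C": 8, "Song D": 3},
    "me":  {"Song A": 9,  "Song B": 7, "Song C": 4, "Song E": 6},
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[s] * v[s] for s in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target, other):
    """Songs the other user played that the target hasn't heard,
    ranked by the other user's play counts weighted by user similarity."""
    sim = cosine(plays[target], plays[other])
    unseen = {s: c * sim for s, c in plays[other].items() if s not in plays[target]}
    return sorted(unseen, key=unseen.get, reverse=True)

print(recommend("you", "me"))  # → ['Song E']
print(recommend("me", "you"))  # → ['Song D']
```

Because "you" and "me" overlap on three songs, each user's unheard fourth song surfaces as the other's recommendation. At Spotify's scale the same logic runs over matrices with millions of users and tracks.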
2. Natural Language Processing
This model uses track metadata, news articles, blogs, and other text from around the internet.
“Spotify crawls the web constantly looking for blog posts and other written texts about music, and figures out what people are saying about specific artists and songs — what adjectives and language is frequently used about those songs, and which other artists and songs are also discussed alongside them.”
“Each artist and song had thousands of daily-changing top terms. Each term had a weight associated, which reveals how important the description is (roughly, the probability that someone will describe music as that term.)”
“The NLP model uses these terms and weights to create a vector representation of the song that can be used to determine if two pieces of music are similar.”
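The term-and-weight vectors described above lend themselves to a simple cosine-similarity sketch. The songs, terms, and weights below are invented for illustration; Spotify's actual vectors have thousands of daily-changing terms per track.

```python
from math import sqrt

# Hypothetical weighted "top terms" for three songs (terms and weights invented).
song_terms = {
    "Track 1": {"dreamy": 0.9, "lo-fi": 0.7, "mellow": 0.5},
    "Track 2": {"dreamy": 0.8, "mellow": 0.6, "ambient": 0.4},
    "Track 3": {"aggressive": 0.9, "metal": 0.8},
}

def similarity(a, b):
    """Cosine similarity between two term-weight vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = sqrt(sum(w * w for w in a.values())) * sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

s12 = similarity(song_terms["Track 1"], song_terms["Track 2"])
s13 = similarity(song_terms["Track 1"], song_terms["Track 3"])
print(s12 > s13)  # → True: Tracks 1 and 2 share terms, so they score as more similar
```

Two songs described with the same adjectives end up close together in this vector space; songs with no vocabulary in common score zero.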
3. Audio Models
“Including a third model further improves the accuracy of this amazing recommendation service. But actually, this model serves a secondary purpose, too: Unlike the first two model types, raw audio models take into account new songs.”
Spotify also analyzes the raw audio waveforms themselves, using convolutional neural networks: “Convolutional neural networks are the same technology behind facial recognition. In Spotify’s case, they’ve been modified for use on audio data instead of pixels.”
“After processing, the neural network spits out an understanding of the song, including characteristics like estimated time signature, key, mode, tempo, and loudness.”
Understanding how Spotify’s recommendation system works, and what kind of data it depends upon, can help:
1) musicians optimize their content for discovery, and
2) marketers prepare for optimizing the next wave of content, which will be audio.
The article mentioned in this segment:
How Spotify’s Discover Weekly Knows You So Well
by Sophia Ciocca
[Link]
Subscribe to the eStrategy YouTube channel: [Link]
FOLLOW DAVID: [Link]
SUBSCRIBE To Free e-Strategy Newsletter: [Link]
MARKETING PODCAST: [Link]
MARKETING INSIGHT: [Link]
MARKETING STATS & TRENDS: [Link]
MARKETING VIDEOS: [Link]
LIKE: [Link]
FOLLOW: [Link]