Facebook AI explains how Instagram Explore works and gives a glimpse of its technical nature.
It also reveals that over half of Instagram users visit Explore every month. Find out how the platform surfaces the most relevant content for users from an enormous pool of candidates.
They use an AI system built around a three-part ranking funnel, equipped with lightweight modeling techniques, tools that enable high-velocity experimentation, and more.
Here are a few key elements of Explore.
IGQL is a domain-specific language optimized for retrieving candidates in recommender systems. Its execution is optimized in C++.
The language was developed to provide the right level of abstraction and to assemble all the algorithms in one place.
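The summary does not show actual IGQL syntax, but the core idea is a chainable query that retrieves, filters, and ranks candidates in one expression. As an illustration only, a fluent candidate-retrieval query of this kind can be sketched in Python (the class, method names, and data here are all hypothetical, not IGQL's real API):

```python
# Toy sketch of an IGQL-style fluent candidate-retrieval query.
# All names and the API are hypothetical illustrations.
class CandidateQuery:
    def __init__(self, seed_accounts):
        self.accounts = list(seed_accounts)   # seed accounts the user engaged with
        self.media = []

    def similar_accounts(self, neighbors):
        # expand each seed account into topically similar accounts (toy lookup)
        self.accounts = [a for s in self.accounts for a in neighbors.get(s, [s])]
        return self

    def recent_media(self, media_by_account, per_account=2):
        # fetch up to `per_account` media IDs per candidate account
        self.media = [m for a in self.accounts
                      for m in media_by_account.get(a, [])[:per_account]]
        return self

    def rank(self, score):
        # order candidates by a (toy) relevance score, best first
        return sorted(self.media, key=score, reverse=True)

neighbors = {"cooking_acct": ["baking_acct", "grill_acct"]}
media = {"baking_acct": ["m1", "m2", "m3"], "grill_acct": ["m4"]}
scores = {"m1": 0.9, "m2": 0.4, "m3": 0.7, "m4": 0.8}

ranked = (CandidateQuery(["cooking_acct"])
          .similar_accounts(neighbors)
          .recent_media(media, per_account=2)
          .rank(score=lambda m: scores[m]))
print(ranked)  # → ['m1', 'm4', 'm2']
```

The chained form keeps each retrieval and ranking stage readable and composable, which matches the stated goal of assembling all the algorithms in one place.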
To create a relevant content inventory for Explore, Instagram built a pipeline that relies on account-level information rather than media-level information.
This choice tackles the challenge of handling users' large and continually evolving variety of interests. By building account embeddings, they can identify which accounts are topically similar to each other.
They infer account embeddings using ig2vec, a word2vec-like embedding framework. Typically, the word2vec embedding framework learns a representation of a word based on its context across sentences in the training corpus. Ig2vec treats account IDs that a user interacts with — e.g., a person likes media from an account — as a sequence of words in a sentence.
By applying the same techniques from word2vec, they predict the accounts a person is likely to interact with in a given session within the Instagram app.
If an individual interacts with a sequence of accounts in the same session, it’s more likely to be topically coherent compared with a random sequence of accounts from the diverse range of Instagram accounts. This helps identify topically similar accounts.
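Once account embeddings are learned, finding topically similar accounts reduces to a nearest-neighbor lookup in embedding space. A minimal sketch, with hand-written toy vectors standing in for embeddings an ig2vec-style model would learn:

```python
import math

# Toy sketch: nearest-neighbor retrieval over account embeddings.
# The vectors here are hand-written illustrations; in practice they would
# be learned by an ig2vec/word2vec-style model from interaction sequences.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest_accounts(query, embeddings, k=2):
    # rank all other accounts by cosine similarity to the query account
    others = [(acct, cosine(embeddings[query], vec))
              for acct, vec in embeddings.items() if acct != query]
    others.sort(key=lambda t: t[1], reverse=True)
    return [acct for acct, _ in others[:k]]

embeddings = {
    "pasta_blog":  [0.9, 0.1, 0.0],
    "pizza_fans":  [0.8, 0.2, 0.1],
    "sneaker_hub": [0.0, 0.1, 0.9],
}
print(nearest_accounts("pasta_blog", embeddings, k=1))  # → ['pizza_fans']
```

Accounts that co-occur in the same sessions end up close in this space, so a few seed accounts a user engaged with can be expanded into a much larger, topically coherent candidate pool.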
The ranking distillation model preselects candidates before using more complex ranking models.
This model is used to maximize the number of media pieces evaluated, since the more posts evaluated, the better the chances of compiling a highly personalized inventory for a user.
The system records the input candidates with their features, as well as the outputs from the more complex ranking models.
The distillation model is then trained on this recorded data with a limited set of features and a simpler neural network model structure to replicate the results. Its objective function is to optimize for NDCG ranking (a measure of ranking quality) loss over the main ranking model’s output.
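NDCG compares a predicted ordering against the ideal ordering, discounting gains logarithmically by position. A minimal sketch of the metric itself, with relevance labels standing in for the teacher (main) ranking model's output:

```python
import math

# Minimal sketch of NDCG (normalized discounted cumulative gain).
# The relevance grades here are toy values; in the distillation setup,
# they would come from the main ranking model's output.
def dcg(relevances):
    # gain at rank i is discounted by log2(i + 2), i.e. log2 of 1-based rank + 1
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(predicted_order, relevance):
    gains = [relevance[item] for item in predicted_order]
    ideal = sorted(relevance.values(), reverse=True)
    return dcg(gains) / dcg(ideal)

relevance = {"post_a": 3, "post_b": 2, "post_c": 0}
print(ndcg(["post_a", "post_b", "post_c"], relevance))  # perfect order → 1.0
print(ndcg(["post_c", "post_b", "post_a"], relevance))  # imperfect order → lower
```

Training the small model to maximize NDCG against the teacher's ordering is what lets it cheaply mimic the expensive model's ranking behavior.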
They use the top-ranked posts from the distillation model as the ranking candidates for the later-stage high-performance ranking models.
Setting up the distillation model's mimicry behavior reduces the need to tune multiple parameters and maintain multiple models across different ranking stages.
Leveraging this technique, they evaluate a bigger set of media to find the most relevant media on every ranking request while keeping the computational resources under control.
Through these foundational blocks, the system generates potential candidates, passes them through a three-stage ranking infrastructure, and factors in users' reactions, such as the negative action "See Fewer Posts Like This."
To balance users' new interests with their existing ones, they add a heuristic rule to the value model that boosts the diversity of content.
They downrank posts from the same author or the same seed account by adding a penalty factor, so users don't see multiple posts from the same person or seed account in Explore. The penalty increases the further down the ranked batch additional posts from the same author appear.
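The down-ranking heuristic can be sketched as a growing multiplicative penalty applied to each repeated author in a ranked batch (the penalty value 0.7 below is illustrative, not Instagram's actual factor):

```python
# Toy sketch of the author-diversity heuristic: each additional post from
# an already-seen author gets an increasing multiplicative score penalty.
# The 0.7 penalty factor is an illustrative assumption.
def diversify(ranked_posts, penalty=0.7):
    # ranked_posts: list of (post_id, author, score), best first
    seen = {}          # author -> number of posts already placed
    rescored = []
    for post_id, author, score in ranked_posts:
        n = seen.get(author, 0)
        rescored.append((post_id, author, score * (penalty ** n)))
        seen[author] = n + 1
    # re-rank by the penalized scores
    return sorted(rescored, key=lambda t: t[2], reverse=True)

batch = [("p1", "alice", 0.95), ("p2", "alice", 0.90), ("p3", "bob", 0.70)]
print([post for post, _, _ in diversify(batch)])  # → ['p1', 'p3', 'p2']
```

Without the penalty, alice's two high-scoring posts would appear back to back; with it, bob's post is promoted between them, which is exactly the diversity behavior described.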