What are Twiddlers in Google’s ranking systems? And how can we use this knowledge to rank better?

Twiddlers are C++ objects in Google’s ranking algorithms that help Google reorder search results. They run inside Superroot, Google’s ranking framework.

I came across this Google doc about Twiddlers while reading Mario Fischer’s excellent breakdown of what we can learn about how ranking works, based on the DOJ vs. Google trial and the recently discovered API files, which contain many attributes that may be used in ranking.

Here is the Google doc. We have to remember that it was written in 2018, so there are probably some new Twiddlers in use by now, but there is still a lot we can learn:

Twiddler Quick Start Guide – Internal Google documentation

What do twiddlers do?

Twiddlers are essentially algorithms that can adjust the order of Google search results after the initial ranking is created. They provide recommendations (so-called twiddles) for re-ranking websites based on various signals.

In 2018, there were hundreds of twiddlers, each trying to optimize specific signals. Given that search technology has evolved since then, there are likely a whole host of other Twiddlers that Google uses today.

Twiddlers can be used to improve specific results, filter results to increase diversity, remove duplicates and spam, and more.
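To make the idea concrete, here is a minimal sketch of a re-ranking framework that applies “twiddles” (boost/demote recommendations) to an already-ranked result list. Every name and number here is illustrative – Superroot’s actual API is not public:

```python
def apply_twiddlers(results, twiddlers):
    """results: list of (url, score) pairs, already ranked.
    twiddlers: callables returning {url: score_delta} recommendations.
    Returns the results re-sorted after all deltas are applied."""
    adjusted = {url: score for url, score in results}
    for twiddler in twiddlers:
        for url, delta in twiddler(results).items():
            adjusted[url] += delta
    return sorted(adjusted.items(), key=lambda kv: kv[1], reverse=True)

def spam_demotion_twiddler(results):
    # Demote anything flagged as spam (purely illustrative signal).
    return {url: -5.0 for url, _ in results if "spam" in url}

ranked = [("example.com/a", 3.0), ("spam.example/x", 2.9), ("example.com/b", 2.0)]
reranked = apply_twiddlers(ranked, [spam_demotion_twiddler])
```

The key design point the leaked document describes: twiddlers don’t rewrite the base ranking, they layer adjustments on top of it, which is what the score-delta dictionary models here.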

Twiddler

From Google Twiddler Quick Start Guide

Pandu Nayak told us in the DOJ vs. Google trial that Google’s AI system RankBrain re-ranks the top 20–30 results. I assume that twiddlers play a big role here.

RankBrain reranks the top 20-30 results

From Pandu Nayak’s DOJ vs. Google testimony

Types of Twiddlers

There are two types of twiddlers:

Predoc twiddlers. These twiddlers support extensive customization and work on a base set of already-ranked search results, without detailed information such as SERP snippets or other data about the URLs themselves. They can be used to promote specific pages or filter the results.

For example, the YouTubeDensityTwiddler can promote a channel’s main page if several of its videos match the search query well.
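A sketch of that density rule might look like this. The field names, the threshold of two videos, and the boost value are all my assumptions – the real YouTubeDensityTwiddler’s internals are not documented:

```python
from collections import Counter

def channel_density_twiddler(results, boost=1.5):
    """If several videos from one channel match the query, recommend a
    boost for that channel's main page (YouTubeDensityTwiddler-style
    sketch; all field names and values are assumptions)."""
    video_counts = Counter(r["channel"] for r in results if r["type"] == "video")
    return {
        r["url"]: boost
        for r in results
        if r["type"] == "channel" and video_counts[r["channel"]] >= 2
    }

serp = [
    {"url": "yt.example/v1", "type": "video", "channel": "acme"},
    {"url": "yt.example/v2", "type": "video", "channel": "acme"},
    {"url": "yt.example/c/acme", "type": "channel", "channel": "acme"},
]
twiddles = channel_density_twiddler(serp)
```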

Lazy twiddlers. These twiddlers work with more comprehensive information, including snippets and other important data about a document. The document describes lazy twiddlers as using “docinfo”. While it doesn’t say exactly what information is included, I expect it covers everything Google stores about a page. For example, the page title, publication date, author information, and structured data extracted from the page could all be part of the docinfo.

I’m speculating here, but it wouldn’t surprise me if any of the following could be used by lazy twiddlers:

  • Entity recognition
  • Page headings
  • PageRank
  • Anchor text information
  • NavBoost user interaction signals: clicks, long clicks, bad clicks, and good clicks
  • Content blocks (see the end of this article for more information)
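As a purely speculative sketch of the idea, a lazy twiddler might read per-document info (here just the title from a hypothetical docinfo record) and recommend a small boost when the title covers every query term. Nothing below comes from the leaked document; it only illustrates the speculation above:

```python
def title_match_twiddler(results, query_terms):
    """Speculative lazy-twiddler sketch: reads "docinfo" (here just the
    title) and recommends a small boost when the title contains every
    query term. Field names and the 0.5 boost are illustrative."""
    twiddles = {}
    for r in results:
        title_words = set(r["docinfo"]["title"].lower().split())
        if set(query_terms) <= title_words:
            twiddles[r["url"]] = 0.5
    return twiddles

docs = [
    {"url": "a", "docinfo": {"title": "Green widgets buying guide"}},
    {"url": "b", "docinfo": {"title": "Blue gadgets"}},
]
twiddles = title_match_twiddler(docs, ["green", "widgets"])
```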

Why twiddlers are important

Some twiddlers are used to boost results scored using traditional information retrieval (IR) algorithms.

Twiddlers can boost some results

Twiddlers can be used to create a category and then apply actions to results within that category, such as:

  • Limiting the number of results from a category. For example, the BlogCategorizer Twiddler can categorize blog posts to ensure that not too many appear in the SERPs.
  • Ensuring that results from a category are only displayed below a certain ranking position. For example, the BadURLsCategorizer Twiddler can ensure that pages marked as demoted do not appear at the top of the results. The min_position setting allows results in a category to be pushed down to the second page. I assume that websites affected by SpamBrain are also hit by this twiddler. If you find that you can’t crack page 1 no matter what you do, you may be demoted by spam-filter twiddlers.
  • Identifying whether a page is an official page and making sure it ranks high. The OfficialPageTwiddler can ensure that an official site ranks at the top of the results.
  • Ensuring that specific pages are displayed in a specific order. For example, the SetRelativeOrder Twiddler is designed to ensure that the original version of a YouTube video always ranks higher than duplicates.
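The min_position idea can be sketched as a simple reordering rule: demoted results may not appear above a given position. The real mechanics are not public; this only illustrates the behavior the document describes (positions are 0-indexed here):

```python
def enforce_min_position(results, is_demoted, min_position):
    """BadURLsCategorizer-style sketch: results flagged by is_demoted
    may not appear above min_position. Non-demoted results fill the top
    slots; everything else keeps its original relative order."""
    head = [r for r in results if not is_demoted(r)][:min_position]
    tail = [r for r in results if r not in head]
    return head + tail

serp = ["example.com/a", "spam.example/x", "example.com/b", "example.com/c"]
reordered = enforce_min_position(serp, lambda url: "spam" in url, 3)
```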

Twiddlers can also be used to remove or filter pages from the results. The EmptySnippetFilter Twiddler filters out results that don’t have a snippet. The DMCA filter hides results for which Google has received a DMCA notice.

Twiddlers can also be used to annotate results. For example, the SocialLikesAnnotator annotates social results with the number of likes they received. That’s interesting!
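Both behaviors are easy to picture as simple passes over the result list. The sketch below mimics an EmptySnippetFilter-style drop and a SocialLikesAnnotator-style annotation; the data shapes and the likes_lookup source are assumptions:

```python
def empty_snippet_filter(results):
    """EmptySnippetFilter-style sketch: drop results with no snippet."""
    return [r for r in results if r.get("snippet")]

def social_likes_annotator(results, likes_lookup):
    """SocialLikesAnnotator-style sketch: attach a like count so a SERP
    renderer could display it. likes_lookup is an assumed data source."""
    return [dict(r, likes=likes_lookup.get(r["url"], 0)) for r in results]

serp = [
    {"url": "a", "snippet": "A useful page"},
    {"url": "b", "snippet": ""},  # no snippet -> filtered out
]
filtered = empty_snippet_filter(serp)
annotated = social_likes_annotator(filtered, {"a": 42})
```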

How can SEOs use this information?

This is all fascinating information, but what can we do with knowledge of Twiddlers to improve our SEO strategy? Here are a few thoughts:

Meta descriptions are important

The information used by twiddlers includes the snippet displayed in search results. This is usually your meta description, although Google may rewrite snippets for the SERPs. From this document, it appears that the information in your meta description could be used in some way by Twiddlers to reorder results. However, this is controversial, since John Mueller has specifically said that meta descriptions are not used in ranking. It’s possible that what matters is not the meta description itself, but rather the relevant information Google pulls from your page to use as a snippet (which is sometimes your meta description). Even if I’m wrong, the snippet displayed in search results can influence user click behavior, which we believe is important to the NavBoost system.

Here’s how I adjust my SEO strategy: I pay more attention to how enticing my clients’ search snippets are compared to my competitors’. I focus on writing meta descriptions that help a user decide that this page likely contains the answer to their query. This way I understand what is important to user intent and design my meta description based on that. I monitor CTR in GSC and work on writing snippets that get more people to click.

Diverse results are important. Be original.

There are twiddlers that filter out similar results. We know that originality is important to Google because it is the first item in the self-assessment questions for creating helpful content.

Original content is important to Google

When analyzing SERPs for keyword rankings, it’s interesting to note that each result tends to offer something different to the searcher. I often use the analogy of a librarian. If a searcher says, “Looking for resources to help me learn more about green widgets,” the librarian might say, “Okay, here’s a site from a brand known as a leader in this space, and here’s a page from a company people buy from, and here’s a blog post with an excellent buying guide… oh, and here’s this resource – from your website….” What will the librarian say about your resource?

Robot Librarian (Crafted with Flux in Grok)

I have often noticed that there are only one, or perhaps a few, results in the top 10 that are informational in nature. When I try to rank an informational site, my competition isn’t the full top 10. Rather, my goal is to be a better or more helpful resource than the ranking sites that serve a similar intent to mine.

If you haven’t read the information gain patent yet, I would highly recommend it.

Here’s how I adjust my SEO strategy: I put myself in the shoes of a searcher when studying the SERPs Google chooses. If my client cannot be the official resource for a query, we need to think about how we can clearly be a better choice than the similar sites that Google ranks.

Put yourself in the shoes of a searcher

Freshness is important

Have you read Google’s patent on freshness-based ranking? Their systems determine whether users tend to prefer up-to-date information on the topic they are searching for. This likely involves twiddlers, which can re-rank results based on recency.

Several of the attributes listed in the recently discovered API files relate to freshness. These attributes are likely part of the document information that lazy twiddlers use.

  • Creation date
  • lastUpdateTimestamp – Used for freshness tracking
  • encodedNewsAnchorData – scores calculated based on the quality and timeliness of news topics

Here’s how I adjust my SEO strategy: I pay attention to the results in the SERPs to gauge whether Google favors fresh information. For most topics, users are likely to prefer up-to-date information. Even when fresh content isn’t dominating the SERPs, I’ve started experimenting with updating older posts to add new, relevant information. I’m careful not to add new information just for ranking reasons; it has to be something readers find genuinely helpful.
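To show how a recency signal could feed a re-ranking recommendation, here is a speculative freshness boost using exponential decay on a last-updated date. Neither the formula nor the half-life comes from Google’s documentation; it merely sketches how an attribute like lastUpdateTimestamp could be consumed:

```python
import datetime as dt

def freshness_twiddler(results, today, half_life_days=180):
    """Speculative recency boost: the recommendation halves every
    half_life_days since the document was last updated. All values
    and field names are assumptions for illustration only."""
    return {
        r["url"]: 0.5 ** ((today - r["last_updated"]).days / half_life_days)
        for r in results
    }

today = dt.date(2024, 6, 1)
docs = [
    {"url": "fresh", "last_updated": dt.date(2024, 5, 1)},
    {"url": "stale", "last_updated": dt.date(2020, 5, 1)},
]
boosts = freshness_twiddler(docs, today)
```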

Google warns against faking freshness in its helpful content documentation.

don't fake freshness

On a related note: Added “chunks” as attributes

I just noticed that there is an updated version of the list of attributes that may be used in search – version 0.6.0 includes new attributes related to “chunks”, which are sections of content. I suspect this information can be used in a variety of ways in ranking – perhaps by twiddlers, but more likely by vector search algorithms that help determine which content is likely to be relevant and helpful.
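Chunk-level retrieval is often implemented by scoring a page on its best-matching chunk. Here is a toy illustration with cosine similarity over tiny hand-made vectors; real systems use high-dimensional model embeddings, and nothing here reflects Google’s internals:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_chunk_score(query_vec, chunk_vecs):
    """Score a page by its best-matching chunk – a common pattern in
    chunk-based vector retrieval (illustrative toy embeddings only)."""
    return max(cosine(query_vec, c) for c in chunk_vecs)

query = [1.0, 0.0]
page_chunks = [[0.0, 1.0], [0.8, 0.6]]  # second chunk is closer to the query
score = best_chunk_score(query, page_chunks)
```

The takeaway for content: one highly relevant section can carry a page in this model, which is why clear, self-contained sections matter.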

Chunks as attributes

I’m currently experimenting with a few clients to better understand user intent and optimize content so that it performs well for both vector search and searchers with specific intent. If you’re interested in becoming a client so I can test my theories on your content too, contact me.
