What’s Wrong with SEO Ranking Factors Surveys?

By Bryson Meunier, Natural Search Associate Director, Content Solutions

As much as I generally enjoy the content at SEOMoz, and applaud the idea of surveying the industry to test the validity of supposed ranking factors, I know that any survey of this type is only going to be so helpful in a brand’s SEO efforts. Here are four reasons why you should take any ranking factors survey with a big grain of salt.

Ranking Factors are Variables

Everyone knows that links are important, and that search engines commonly use various link signals to determine relevance. This is reflected in the most recent SEOMoz survey, where links comprise 4 of the top 5 most important ranking factors with scores of “very high importance”. What’s not reflected in the survey is that for certain queries it is entirely possible to reach the top 5 results with no links, or with weaker links than the other results in the SERP (Search Engine Results Page), simply by controlling on-page factors like keywords in the title tag and keywords in the domain.

Because of this variance by SERP, as optimizers we need to take into account, and control within reason, every area the engines look to in determining relevance. The SEOMoz survey is helpful in prioritizing ranking factors based on the collective understanding of the SEO experts Rand Fishkin knows, but there are occasions when spending time optimizing for the top 5 ranking factors will not lead to placement in the top 5 organic results. This doesn’t mean that the top 5 ranking factors cited aren’t important, just that they’re only 5 of many variables determining relevance for any given query, and should be weighed on a case-by-case basis rather than prescribed by a generic survey.
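To make the “variables” point concrete, here is a toy model of how weighted signals might combine per query. The signal names and weights are purely illustrative assumptions on my part, not anything the engines have disclosed; the point is only that a page strong on on-page factors can edge out a page strong on links for a particular query.

```python
# Toy model: a page's score is a weighted sum of several relevance signals.
# The signals and weights below are illustrative assumptions, not any search
# engine's actual algorithm.

SIGNAL_WEIGHTS = {
    "external_anchor_text": 0.30,  # link-based
    "link_popularity": 0.25,       # link-based
    "keyword_in_title": 0.25,      # on-page
    "keyword_in_domain": 0.20,     # on-page
}

def score(page_signals):
    """Weighted sum of 0-1 signal strengths for one page."""
    return sum(SIGNAL_WEIGHTS[name] * strength
               for name, strength in page_signals.items())

# Page A: strong links, weak on-page targeting for this query.
page_a = {"external_anchor_text": 0.9, "link_popularity": 0.9,
          "keyword_in_title": 0.1, "keyword_in_domain": 0.0}

# Page B: few links, but the exact keyword in its title tag and domain.
page_b = {"external_anchor_text": 0.2, "link_popularity": 0.1,
          "keyword_in_title": 1.0, "keyword_in_domain": 1.0}

# Under these weights, B outranks A despite far weaker links.
print(score(page_a), score(page_b))
```

Change the weights and the outcome flips, which is exactly why optimizing only the survey’s top 5 factors does not guarantee a top 5 result on every SERP.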

SEO Experts Often Drink Our Own Kool-Aid (or Link Juice?)

The nofollow-based PageRank sculpting deactivation announcement earlier this year demonstrated how little certain SEO experts test the wisdom they impart to their co-workers and peers. In the Chicago SEO meetup group that I organize, a number of the more vocal participants swore to the effectiveness of nofollow-based PageRank sculpting in spring of this year, even citing particular examples of area SEOs who had supposedly used it to their benefit. Yet, as Matt Cutts explained, nofollow-based PageRank sculpting has not been an effective tactic since Google deactivated it, prior to June 2008. Something else was clearly at play in whatever benefits these consultants were seeing on their sites, but they did not yet know that PageRank sculpting had been deactivated, and they were not testing rigorously to determine the actual cause. Instead, they were listening to known experts like Michael Gray, who at that time was saying that nofollow-based PageRank sculpting could be valuable for driving traffic and sales to a web site.
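For readers unfamiliar with the tactic: sculpting meant adding rel="nofollow" to links you didn’t want to pass PageRank (log-in pages, privacy policies, and the like). A minimal sketch of auditing a page’s followed versus nofollowed links, using only Python’s standard library and a hypothetical HTML sample, looks like this:

```python
# Sketch: count followed vs. nofollowed links on a page -- the split that
# PageRank sculpting tried to manipulate. The sample HTML is hypothetical.
from html.parser import HTMLParser

class NofollowCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followed = 0
        self.nofollowed = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        rel = dict(attrs).get("rel") or ""
        if "nofollow" in rel.split():
            self.nofollowed += 1
        else:
            self.followed += 1

sample = """
<a href="/products">Products</a>
<a href="/privacy" rel="nofollow">Privacy policy</a>
<a href="/login" rel="nofollow">Log in</a>
"""

counter = NofollowCounter()
counter.feed(sample)
print(counter.followed, counter.nofollowed)  # 1 2
```

The markup is easy to audit; what the consultants above never audited was whether changing that split actually moved rankings, which is the testing gap this section is about.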

Ranking factors surveys suffer from the same groupthink principle. The collective wisdom of known experts in the field is valuable, but how many of them are testing their assumptions, and how many of them are parroting the assumptions of someone else?

SEO Experts are not One Size Fits All

In the list of contributors I see link building specialists; SEOs who specialize in small business, social media, usability or local search; SEOs who optimize content on behalf of the search engines (but aren’t search engineers); and a few SEOs who work for major holding companies optimizing enterprise-level content. Each contributor’s opinion reflects his or her experience optimizing web-based content, and if a contributor specializes in link building, that opinion is likely to skew heavily toward link-based factors, because that’s what this person does and knows. There’s nothing wrong with this bias, provided there’s a balance of expertise types in the contributor list. If there’s an imbalance, it will likely skew the results toward the specialties of the experts surveyed, and unless a conscious effort is made to combat it, any given ranking factors study is likely to suffer from it.

More than One Ranking Algorithm

Specializing in mobile search, I can tell you that SEOMoz’s list will only be so valuable in optimizing content for mobile search results, as none of the contributors specialize in mobile search. Which is fine, because this list concerns only desktop web content. They do have one local search contributor in David Mihm, but they acknowledge in the footer that his Local Search Ranking Factors survey is the better resource if you’re looking to optimize local content for Google Maps. This is because different types of search have different ranking algorithms, and the SEOMoz survey is primarily concerned with desktop web search.

In 2005, when the first SEOMoz ranking factors study was released, this was probably acceptable: blended or universal search hadn’t yet been introduced, and optimizing for desktop web search was about all that was necessary for visibility in search engines. Four years later, desktop search results often include news results, product results, blog results, mobile results, videos, images or local search results, each of which has a different ranking algorithm with different signals for determining relevance. For example, links are important for mobile search, but probably not to the extent that they are in desktop search; and the number one factor in the Local Search Ranking Factors survey is not “keyword-focused anchor text from external links” but “local business listing address in city of search”. In 2009, if you’re talking about ranking factors, you need to specify which ranking algorithm you mean, as many algorithms determine even the desktop results.

I’m not trying to pick on SEOMoz here. Their ranking factors survey is the best in our industry, and they deserve credit for pioneering the form. What’s more, ranking factors surveys are great for getting SEO recommendations implemented: they lend credibility to changes that cost time or resources, and non-SEOs often find consensus among experts to be persuasive evidence that the changes they’re making to their content will have some positive impact and aren’t just SEO voodoo. However, the limitations of these surveys need to be recognized if our industry is ever going to take the idea to the next level, and it seems to me that four points make them only so useful to SEOs today: ranking factors as variables, the absence of peer-reviewed and verifiable opinions, the imbalance of expertise types among contributing SEO subject matter experts, and the multiplicity of ranking algorithms.


Copyright © 2008 Resolution Media, Inc. All rights reserved.