Tuesday, October 15, 2019

Content Accuracy Factors You Should Optimize For

Content accuracy has been a recurring topic in the SEO community, especially now that Google has revealed it as a ranking factor. It has been the talk of the town since Gary Illyes of Google said so at PubCon, a well-known SEO conference. Content accuracy is especially important for YMYL sites because, according to Google, they “go to great lengths to surface reputable and trustworthy sources.”

This is a huge thing, but I must admit it is pretty ironic, since back in September Danny Sullivan of Google said that it is not treated as a ranking factor. However, they may be talking about different facets of content accuracy as a factor for site performance in the SERPs. So let me ask you: do you think you are optimizing well for content accuracy? Here is a quick cheat guide for you!

Creating the Perfect Content Accuracy Recipe

What you must know about accuracy in digital marketing is that it is not a popularity contest. A piece of content garnering thousands of views does not mean that it is automatically accurate for the user searching for it. We cannot pinpoint the exact elements that Google regards as the standard for Content Accuracy, but I suggest that you strategize according to these metrics:


Correctness

When we measure content based on correctness, it goes beyond making sure that you do not have grammatical errors and vocabulary lapses. It is also the extent to which you would go to prove that your content is reliable enough to be credited as a source of information.

Think about it in an elementary-school sense: if students would not use your content as a source for their homework, do not expect users actively searching for answers to treat you any more favorably. Correctness is a great metric of accuracy because it validates your content as something you can promote freely without worrying about user reception and negative feedback.

Credibility and Authority

Another revelation Google made at PubCon is that they do not keep track of E-A-T scores. Gary Illyes also made it clear that E-A-T and YMYL are concepts rather than standards they keep tabs on. With that said, it is still important to establish authority, especially in championing accuracy in your content. One way to know if users see you as an authority in a particular niche is through links: if many people link to you, it solidifies your reputation for that particular content. Credibility and authority go hand in hand, especially if you optimize well for topical relevance. Speaking of topical relevance, this is a signal that Google’s systems can rely on to rank content, so be mindful of this as well.


Google cannot tell exactly whether content is accurate, according to Danny Sullivan. Instead, they align their signals to find topical relevance and authority. Verifying accuracy is not something the system can easily accomplish. Be that as it may, you should still optimize your content to be as objective as possible. In other words, the content you publish should, as much as possible, be impartial and not explicitly biased.

danny sullivan content accuracy


Author reputation, in accordance with E-A-T, has always been part of the discussion for most SEOs. Users today can be turned away by content published by an unreliable source. They can easily question who the writer or content creator is, and this will hurt your SEO efforts in the long run. User reaction is a major factor in content accuracy, so if the feedback you receive includes doubts, you’re probably on the wrong track.

Key Takeaway

Good content has great recall. This is a great goal for content accuracy because who wouldn’t want people to read and share their hard work, right? I have been writing for many years now, and making sure that I share accurate content is second nature to me. This is a quality SEOs should mirror, since it will be a sound investment in the long run.

Google stating content accuracy as a ranking factor has validated its importance even more. More than keeping a steady quota of content on your site, that content should be high-quality enough to be promoted and shared with other people. This is how these metrics come into play: by helping you become a trusted and reputable source of information for users.

Content Accuracy Factors You Should Optimize For was originally posted by Video And Blog Marketing

Google's Robots.txt Parser is Misbehaving

The newly-released open source robots.txt parser is not, as Google claims, the same as the production Googlebot parsing code. In addition, we have found cases where each of the official resources disagrees with the others. As a result, there is currently no way of knowing how the real Googlebot treats robots.txt instructions. Read on for example robots.txt files that are treated differently by Googlebot and by the open source parser.

Googlers: if you’re reading this, please help us clarify for the industry how Googlebot really interprets robots.txt.

Google recently released an open source robots.txt parser that they claimed is “production code”. This was very much needed because, as they said in the announcement blog post, “for 25 years, the Robots Exclusion Protocol (REP) was only a de-facto standard”.

Before they released it, we might have thought that the substantial documentation from Google and the online checking tool from Google amounted to a reasonable ability to know how Googlebot would treat robots.txt directives in the wild.

Since the release of the open source parser, we have found situations where each of the three sources (documentation, online checker, open source parser) behaves differently from the others (see below for more on each of these situations):

Table of misbehaving robots.txt parser

This might all be just about OK if, as claimed, the open source parser is now the authoritative answer. I guess we could rely on the community to build correct documentation from the authoritative source, and to build tools from the open source code. The online checker is part of the old Google Search Console and is clearly not being actively maintained, and documentation can be wrong or fall out of date. But to change the rules without an announcement, in an area of extreme importance, is dangerous for Google, in my opinion.

The existence of robots exclusion protocols is central to their ability to cache the entire public web without copyright concerns. In most situations, this kind of mass copying would require opt-in permission - it’s only the public interest in the existence of web search engines that allows them to work on an assumption of default permission with an opt-out. That opt-out is crucial, however, and Google is in very dangerous territory if they are not respecting robots.txt directives.

It gets worse though. The open source parser is not the same as the production Googlebot robots.txt parsing code. Specifically, in the third case above, where the open source parser disagrees with the documentation and with the online checker, real Googlebot behaves as we would previously have expected (in other words, it agrees with the documentation and online checker, and disagrees with the open source parser). You can read more below about the specifics.

The open source parser is missing Google-specific features

Even if you don’t know C++ (as I don’t), you can see from the comments in the code that there are a range of places where the open source parser contains Google-specific features or differences from the specification they are trying to create. The line linked above - line 330 of robots.cc - is one of a number of changes that make Googlebot more forgiving, in this case so that it works even if the colon is missing from a “User-agent:” statement.

Given these enhancements, it’s reasonable to believe that Google has, in fact, open-sourced their production parsing code rather than a sanitised specification-compliant version that they extend for their own purposes. In addition, they have said officially that they have retired a number of enhancements that are not supported by the draft specification.

Take the code, their official announcements, and additional statements such as Gary Illyes confirming at Pubcon that it’s production code, and we might think it reasonable to believe Google on this occasion:

That would be a mistake.

If you use the open source tool to build tests for your robots.txt file, you could easily find yourself getting incorrect results. The biggest problem we have found so far is the way it treats googlebot-image and googlebot-news directives (and rules targeting other sub-googlebots, as well as other non-googlebot bots from Google like Adsbot) differently from the way the real Googlebot does.

Worked example with googlebot-image

In the absence of directives specifically targeting googlebot-image, the image bot is supposed to follow regular Googlebot directives. This is what the documentation says. It’s how the old online checker works. And it’s what happens in the wild. But it’s not how the open source parser behaves:

googlebot-image misbehaving
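If you want to poke at this class of question yourself, Python’s standard library ships its own REP parser. Be careful what you conclude from it, though: it is neither Google’s open source parser nor Googlebot. In this sketch (the file and URLs are invented), it happens to apply the googlebot group to googlebot-image, but only because it matches user agents by loose substring, not because it implements the documented Googlebot fallback:

```python
from urllib import robotparser

# The worked example's shape: a googlebot group and no googlebot-image group.
ROBOTS_TXT = """\
User-agent: googlebot
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# googlebot is blocked from /private/ as expected.
print(rp.can_fetch("googlebot", "https://example.com/private/page"))

# The stdlib parser matches group names by substring, so the "googlebot"
# group also applies to "googlebot-image" here. That coincidentally mirrors
# the documented fallback, but for a different reason - don't rely on this
# tool to predict what Google's parsers do.
print(rp.can_fetch("googlebot-image", "https://example.com/private/page"))

# Paths outside the disallowed prefix stay fetchable.
print(rp.can_fetch("googlebot", "https://example.com/public/page"))
```

The point of the exercise is that every parser embeds its own interpretation of the ambiguous cases, which is exactly why the disagreements described above matter.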

Unfortunately, we can’t fall back on either the documentation or the old online checker as they both have errors too:

The online checker has errors

googlebot/1.2 is equivalent to googlebot user-agent

Now, it’s quite hard to work out exactly what this part of the documentation means (reviewing the specification and parser, it seems to mean that only letters, underscores, and hyphens are allowed in user-agent names in robots.txt directives, and anything that comes after a disallowed character is ignored).

But the example is easy to understand: googlebot/1.2 should be treated as equivalent to googlebot.

That’s what the documentation says. It’s also how it’s treated by the new open source parser (and, I believe, how the real Googlebot works). But it’s not how the online robots.txt checker works:

Google Search Console robots.txt checker is wrong
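For concreteness, this is the shape of the group in question (a sketch with an invented path):

```
User-agent: googlebot/1.2
Disallow: /archive/
```

Per the documentation (and, it appears, the open source parser), everything from the disallowed “/” character onwards should be ignored, so this group ought to apply to googlebot; the old online checker does not treat it that way.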

The documentation differs from reality too

Unfortunately, we can’t even try to build our own parser after reading the documentation carefully because there are places where it differs from the online checker and the new open source parser (and, I believe, production Googlebot).

For example:

user agent matches the first most specific rule

There are some examples in the documentation to make it clear that the “most-specific” part refers to the fact that if your robots.txt file disallows /foo, but explicitly allows /foo/bar, then /foo/bar (and anything contained in that, such as /foo/bar/baz) will be allowed.

But note the “first” in there. This means, to my understanding, that if we allow and disallow the exact same path, then the first directive should be the one that is obeyed:

Search Console is correct in this instance

But it turns out the order doesn’t matter:

Search Console doesn't match the documentation
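To make the tie case concrete, here is a sketch (invented path) where the documentation’s “first most specific rule” wording would matter:

```
User-agent: googlebot
Disallow: /page
Allow: /page
```

Read literally, the Disallow should win here because it comes first, and swapping the two lines should flip the verdict. In practice, the checker gives the same answer regardless of which rule comes first.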

In summary, we have no way of knowing what real Googlebot will do in every situation

All the sources disagree

And we know that real Googlebot can’t agree with all of them (and have tested one of these areas in the wild):

And actual Googlebot behaves differently to all of them

What should happen now?

Well, we (and you) need to do some more testing to figure out how Googlebot behaves in the real world. But the biggest thing I’m hoping for is some change and clarity from Google. So if you’re reading this, googlers:

  1. Please give us the real story on the differences between the newly-open-sourced parser and production Googlebot code
  2. Given the proven differences between the old Search Console online checker and real Googlebot behaviour, please remove confusion by deprecating the old checker and building a compliant new one into the new Search Console


Friday, October 11, 2019

Google Adds Video Reports in Search Console

Videos are rapidly changing the way people search. The thing is, it’s not just about YouTube anymore. Google recently added a new report in Search Console for video content that allows webmasters to see how their own videos perform in search results, along with errors in their structured data markup.

Currently, there are three places videos appear on Google: in the main search results, in the Videos tab, and on Google Discover.

Video Performance Report

In Google Search Console, you can see how your videos perform by going to the Performance Report and clicking Search Appearance. If you want more detail, you can also check the keywords your videos appeared for and the specific pages that appeared. This data is really useful, as it tells you the exact keywords to optimize your video content for.

Video Enhancements

Structured data is not a ranking factor, but it can enhance your videos in the search results to make them more appealing to searchers. If you have video content on your website that is marked up with the Video structured data, you will start to see errors, warnings, and pages with valid structured data.

When this feature rolled out, Google sent out thousands of emails to webmasters informing them of errors in their Videos markup. We also received an email regarding this error.

When I checked the errors in Search Console, it was a little confusing, because the pages with errors contained links to YouTube videos that I had merely linked to. I don’t own any of those videos, and it’s impossible for me to give them proper structured data markup. Since the feature is new, I think this is one thing Google overlooked.

How Google Crawls Video Content

Now that we have this report integrated into Google Search Console, there are even more reasons for webmasters to properly mark up their video content. In the Search Console Guidelines, Google mentions three ways they extract video content from websites:

  • Google can crawl the video if it is in a supported video encoding. Google can pull up the thumbnail and a preview. Google can also extract some limited meaning from the audio and video file.
  • Google can extract data through the webpage’s text and meta tags the video is in.
  • If present, Google uses the VideoObject structured data markup of the page or a video sitemap.

Also, Google requires two things for videos to appear in the search results:

  • A thumbnail image
  • Direct link to the video file

Google highly recommends the use of structured data. They mentioned that structured data is best for pages they already know about and are indexing. The best way to go about it is to have a video sitemap file, submit it in Search Console, and mark up your pages.
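As a sketch of the sitemap step, a minimal video sitemap entry might look like the following. The URLs and titles are invented placeholders; the tags follow the sitemap-video namespace:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>https://example.com/videos/some-video</loc>
    <video:video>
      <video:thumbnail_loc>https://example.com/thumbs/some-video.jpg</video:thumbnail_loc>
      <video:title>Title of Video Here</video:title>
      <video:description>Full description of Video Here</video:description>
      <video:content_loc>https://example.com/videos/some-video.mp4</video:content_loc>
    </video:video>
  </url>
</urlset>
```

Submit the file in Search Console’s Sitemaps report so Google can discover the video pages alongside your regular URLs.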

Sample Video Structured Data Markup

If you want to mark up your video content, here’s the code for a standard video rich result (the thumbnail URL below is a placeholder):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Title of Video Here",
  "description": "Full description of Video Here",
  "thumbnailUrl": [
    "https://example.com/thumbnail.jpg"
  ],
  "uploadDate": "2019-10-10T08:00:00+08:00",
  "duration": "PT10M30S",
  "contentUrl": "https://videofilelink.mp4",
  "embedUrl": "https://seo-hacker.com",
  "interactionCount": "700"
}
</script>

You could further optimize this structured data markup by adding Video Carousel markup if you have a page with a full gallery of videos or by adding Video Segments markup so users can see a preview of your video in the search results. If you want a full list of video rich snippets, check out Google’s Video Markup Guide here.

Key Takeaway

Videos are changing the way webmasters create content. They keep users engaged and increase dwell time. Currently, video snippets are dominated by YouTube videos, and I’m interested to see whether this update will help webmasters who publish videos on their own websites get those videos into the search results and draw more clicks. Hopefully, this update encourages more webmasters to use their own platforms when uploading videos they create, and we see more diversified video search results.


Thursday, October 10, 2019

Emotional Intelligence at Work: Why I’m Trying to be Less Empathetic

If you aren’t yet familiar with the concept of Emotional Intelligence (also known as emotional quotient or EQ), it’s probably time to take note. In his book Emotional Intelligence, NY Times science writer Daniel Goleman argued that it wasn’t, as previously thought, IQ that guaranteed business success, but EQ. He defined the four characteristics of EQ as being:

  1. good at understanding your own emotions (self-awareness)
  2. good at managing your emotions (self-management)
  3. empathetic to the emotional drives of other people (social awareness)
  4. good at handling other people’s emotions (social skills)

As the idea of emotional intelligence takes hold in the workplace, we’re increasingly likely to hear conversations about the core components described above. The one I hear, and see, people get most hung up on is empathy.

What exactly is empathy?

A seemingly common misunderstanding of the term lies in the difference between empathy and sympathy, with people often confusing the two. In a talk on the Power of Vulnerability, researcher Brené Brown does a great job at explaining the difference. She says: “Empathy fuels connection while sympathy drives disconnection. Empathy is I’m feeling with you. Sympathy, I’m feeling for you.” 

Brown goes on to explain that: “When we empathize, we don’t see that person as unlucky or someone who made poor choices in life, but rather a flawed individual like us. In other words, you put yourself in their position and try to connect by unearthing your similar experiences.”

Empathy fuels connection while sympathy drives disconnection. Empathy is I’m feeling with you. Sympathy, I’m feeling for you.

Brené Brown

And you’re saying that’s bad?!

I’m saying it’s more nuanced than we first think. I’m saying that you can be kind and compassionate and accept someone’s truth even if you don’t have shared experiences to draw upon.

In cases where you are able to feel another person’s feelings, it can be super useful in helping the two of you connect. At work, this could look like a shared frustration, for example. We’ve all been there: moaning about a pain-in-the-ass point of contact, or sharing your feelings about some critical feedback. When someone relates to your experience, it can help you feel validated at a time when you’re struggling to validate your own feelings. Or maybe it just makes you feel seen or understood. Either way, it feels good. There’s a huge amount of relief in knowing someone else feels the same way as you. That you’re not alone, even at work. We’re all human after all.

That sounds entirely positive, so why are you trying to be less empathetic?

I’m not saying that people should be less empathetic per se. But I am saying that sometimes empathy is not necessarily possible. Sometimes empathy as we initially understand it (as trying to tap into similar feelings) can do more harm than good and, in these cases, we should replace it with something else.

Consider a conversation with a colleague who’s upset. Their experiences, and lens through which they view the world, are completely different from yours. Maybe they’re a different race, gender, sexuality to you. If you’ve never been subjected to, say, racism or homophobia, then you might need to accept that you can’t actually feel how that feels. You may not have any similar experiences to recall.

And that’s where the definition of empathy as ‘unearthing your similar experiences’ falls down for me.

What happens when you try to empathise but can’t?

I’ve both seen this happen, and experienced it first hand. 

I’m a 33-year-old woman in leadership. Sexism is a thing I’ve come up against again and again over the course of my career. And there have been times where I have felt worse as a result of sharing my experiences. Ironically, because those I’ve confided in have, with the purest intentions, tried to empathise. 

How does that happen? 

Well, it’s usually when the person I confide in cannot possibly understand how I feel in that moment. If you are a man, for example, how are you supposed to conjure up, in your imagination, a lifetime of being subjected to sexism in order to feel how I feel at that specific point in time? You might have an idea. But you won’t be able to really feel it. You don’t have similar experiences to call upon.

When we get hung up on the idea of being empathetic we might question someone’s experience in order to try to understand or relate. Can you picture a time where you’ve felt questioned in this way? For me it might look something like this (hypothetical) scenario:

Me: “I feel like that client wouldn’t listen to me because I’m a woman. He listened to you when you made the same point.”

Male colleague: “Really? Because you’re a woman? Maybe, but that guy’s talked over me before too, so I know it feels crappy.”

Despite the fact that my colleague is trying to empathise - by explaining that he’s felt the same at times - it doesn’t feel like I’ve been related to, which is what we’re told empathy is ‘supposed’ to achieve. In fact, I could be left feeling misunderstood and doubting my own experience. Having experienced the same or similar words or actions isn’t the same as having been made to feel a particular way because of a lifetime of context.

So what do I do instead?

What I am absolutely not saying is that we should disregard experiences we can’t relate to. Quite the opposite actually. Instead of getting hung up on whether you can or can’t relate, be open to the possibility that you may not be able to, and recognise that all experiences should be given due consideration regardless.

In other words, just accept that the other person’s experience is their truth. If you’re busy trying to recall similar experiences (especially if you don’t have any), you’re in your head and, despite your best intentions, you’re actually getting further away from connecting to the other person’s experience. 

In the above scenario, accepting might sound like my colleague saying something like: “Really? Have you experienced that kind of thing before? That must really suck.” 

Note that this doesn’t mean you necessarily agree with their point of view. You just accept it as their truth. 

If you can do this, you can still find a way to fuel a connection by moving outside of your own experience and replacing empathy with compassion. If you can’t get to that compassionate place, then you’re in danger of writing them and/or their experience off altogether.

Rarely can a response make something better. What makes something better is a connection.

Brené Brown

I’m also not saying never try to understand other people’s perspectives. Doing that will help you be more compassionate - just don’t do it out loud, or in the moment at which someone is confiding in you. And don’t write people off if there is no way for you to relate to them. Their experience is valid, and worthy of compassion, whether you ‘get it’ or not. 

As Brené Brown says: “Rarely can a response make something better. What makes something better is a connection.” So, if you can’t connect with empathy, then it might be better to find a way to connect with compassion instead.


Tuesday, October 8, 2019

Google Chrome Will Start Blocking HTTP Resources in HTTPS Pages

Cover Photo - Google Chrome Will Start Blocking HTTP Subresources in all Pages

Google has always been pushing for a safer and more secure search experience for the users. The best example of this is when they pushed the migration to https for websites that cared about their rankings. I fully support a safer and more secure environment in search since it builds trust and improves our brand’s reputation with the users.

A few days ago, Google published a post on their security blog containing massive news: resources that load over insecure HTTP inside HTTPS pages (otherwise known as mixed content) will be blocked by Chrome.

What is Mixed Content?

Mixed content is an instance where the initial HTML of the page is loaded through a secure HTTPS connection, but other resources inside the page are loaded through an insecure HTTP connection. 
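In markup, the definition above looks like this (a sketch with invented URLs):

```
<!-- Page served from https://example.com/ -->
<html>
  <head>
    <!-- Secure: same scheme as the page itself -->
    <link rel="stylesheet" href="https://example.com/styles.css">
  </head>
  <body>
    <!-- Mixed content: an insecure subresource on a secure page -->
    <img src="http://example.com/photo.jpg">
  </body>
</html>
```

The page loads over HTTPS, but the image inside it does not, which is exactly the combination Chrome is moving to block.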

These resources may include videos, images, scripts, etc. The term “mixed content” comes from the fact that both HTTP and HTTPS content are loaded on the same page, even though the initial request was made over a secure HTTPS connection. Today’s browsers already display a warning to let users know when a page or site isn’t secure. But once Google Chrome has fully rolled out this update, it will also trigger a warning in the address bar. This is what it will look like:

not secure website screenshot

By default, browsers do not block images, audio, or video from loading, but scripts and iframes are blocked. To see this, go to the site settings section of your browser and check which resource types are blocked.

Site Settings Screenshot

According to Google, not blocking the resources will threaten a user’s security and privacy since people with ill intentions can tamper with the insecure resource that you’ve used. Additionally, mixed content leads to a confusing browser security UX because the page is displayed as somewhere between secure and insecure. 

Of course, there are problems with blocking the images, videos, etc. for users and us as webmasters. 

Google will be releasing this over successive versions of Google Chrome, starting with Chrome 79, which rolls out in December 2019, followed by Chrome 80 in January 2020.

Key Takeaway

This is a great move by Google, because the last time they enforced something like this was when they pushed the migration to HTTPS. Webmasters and SEOs should be ready for this change, since Google Chrome is one of the leading browsers in the market, used by millions of people around the world.

Some of the ideas I’m thinking of to avoid having the images used in our pages blocked are:

  • Do a full crawl audit of the site. Screaming Frog and Netpeak Spider can definitely do the job of finding HTTP resources in your pages. Once you have the full list, remove or replace those resources.
  • Instead of embedding an image from another site, ask the webmaster whether you can use their image with proper citation. This will not only help you avoid getting blocked by Chrome but will also let you build connections with other webmasters.
  • The hardest choice would be to make your own version of the image. This will be especially hard if you don’t have a resident graphic designer on your team.
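As a lightweight complement to a full crawl, a small script can flag insecure subresource references in a single page’s HTML. This is a minimal sketch using Python’s standard library; the sets of tags and attributes treated as subresources, and the sample page, are my own assumptions:

```python
from html.parser import HTMLParser

# Assumptions about which tags/attributes count as subresources. Plain
# <a href> links are navigation, not mixed content, so anchors are excluded.
RESOURCE_TAGS = {"img", "script", "iframe", "audio", "video", "source",
                 "link", "object", "embed"}
RESOURCE_ATTRS = {"src", "href", "data", "poster"}

class MixedContentFinder(HTMLParser):
    """Collects http:// subresource URLs referenced from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        if tag not in RESOURCE_TAGS:
            return
        for name, value in attrs:
            if name in RESOURCE_ATTRS and value and value.startswith("http://"):
                self.insecure.append((tag, value))

# Invented sample page: one secure stylesheet, two insecure subresources.
page = """
<html><head><link rel="stylesheet" href="https://example.com/safe.css"></head>
<body>
<img src="http://example.com/insecure.jpg">
<script src="http://example.com/insecure.js"></script>
</body></html>
"""

finder = MixedContentFinder()
finder.feed(page)
for tag, url in finder.insecure:
    print(f"mixed content: <{tag}> loads {url}")
```

Run it against the rendered HTML of a page (not just the template), since scripts and plugins can inject insecure references too.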

This is another move by Google that will require us to adapt and change the finer details of our strategies. Do you have any ideas on how we can approach this? Comment down below!


Thursday, October 3, 2019

GoogleBot User Agent’s Update Rolls Out in December

Cover Photo - Google Updates GoogleBot's User Agent

GoogleBot optimization can be tricky, especially since you cannot control how the spider perceives your site. With Google announcing that they are going to update GoogleBot’s user agent come December, this is also a call for us SEOs to rethink how we optimize for Google’s crawlers.

This update is a great signal of how Google values freshness, because it ensures that user-agent strings reflect newer browser versions. If they thought up a system for this, what more for content and user experience, right? So for those engaging in unethical practices like user-agent sniffing, I suggest sticking to white-hat practices and reaping the results from them.

What is a User Agent?

For the non-technical person, “user agent” can sound like an alien term, but people use one every day as they explore the web. A user agent is the software that acts on the user’s behalf when communicating with the web, identified by a string it sends with each request. Ultimately, you are part of that chain of communication if you are an SEO, because it is good practice to optimize for user agents, but not to the point that you exploit them to turn them in your favor.

There are many user agent types, but we will focus on the area that matters to SEO. User agents are put to work when a browser, or a crawler, fetches and loads a website. In this case, GoogleBot is the one doing this, and it is mainly responsible for retrieving content from sites in accordance with what users request from the web.

Simply put, the user agent helps turn user behavior and actions into requests. The user agent also takes the device, type of network, and search engine into account so the user experience can be properly customized depending on intent.

How is the update going to change the way we optimize for crawlers?

Googlebot’s user agent strings will be periodically updated to match Chrome updates, which means they will be on par with the browser version users are currently running.

This is what Googlebot user agents look like today:

googlebot user agents today

And this is what it will look like after the update:

future googlebot user agent

Notice the placeholder in the user agent strings, “W.X.Y.Z”? That placeholder will be substituted with the current Chrome version. As Google puts it, “instead of W.X.Y.Z, the user agent string will show something similar to 76.0.3809.100.” They also said that the version number will be updated on a regular basis.

How is this going to affect us? For now, Google says don’t fret. They expressed confidence that most websites will not be affected by this slight change. If you are optimizing in accordance with Google’s guidelines and recommendations, then you don’t have anything to worry about. However, they stated that if you look for a specific user agent string, you may be affected by the update.
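If you do need to detect Googlebot, the robust approach is to match the product token rather than the whole string, so the check survives the periodic Chrome-version updates. A minimal sketch; the sample strings follow the formats shown in Google’s announcement, with an example version filled in for the W.X.Y.Z placeholder:

```python
import re

# Old-style desktop Googlebot user agent (static).
OLD_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

# New "evergreen" style: the Chrome version changes over time.
NEW_UA = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
          "Googlebot/2.1; +http://www.google.com/bot.html) "
          "Chrome/76.0.3809.100 Safari/537.36")

def is_googlebot(user_agent: str) -> bool:
    # Match the "Googlebot/x.y" product token, not the full string, so the
    # check keeps working as the Chrome version in the string changes.
    return re.search(r"\bGooglebot/\d+\.\d+", user_agent) is not None

print(is_googlebot(OLD_UA))
print(is_googlebot(NEW_UA))
print(is_googlebot("Mozilla/5.0 (Windows NT 10.0) Chrome/76.0 Safari/537.36"))
```

Bear in mind that anyone can send this string, so for anything security-sensitive, verify the crawler with a reverse DNS lookup on the requesting IP rather than trusting the user agent alone.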

What are the ways to optimize better for GoogleBot?

It would be better to use feature detection instead of obsessing over detecting the user agent of your visitors. Google is kind enough to provide access to tools that can help us do this, like the Webmaster Tools, which help in optimizing your site.

Googlebot optimization is as simple as being vigilant about fixing errors on your site. It goes without saying that you shouldn’t over-optimize, and that includes browser sniffing. Optimizing according to the web browser a visitor happens to use becomes lazy work in the long run, because it means you are not holistic in your approach to optimizing sites.

The web is continually progressing which means that as webmasters, we have to think quickly on our feet on how to keep up with the software and algorithm updates. To do that, here are some ways that can help you succeed in Googlebot optimization.

Fix Crawl Errors

Do not stress yourself out guessing which errors affect your site. To find out whether your site complies with crawler guidelines, read up on useful information and maximize the tools at your disposal. You can see your site’s crawl performance in the Coverage report in Search Console, which lists crawl issues.

crawl errors

Looking at the way Google sees your site and alerts you on what you should fix within it is a surefire way to optimize for crawlers, not just for Google but for any search engine as well.

Do a Log File Analysis

This is also a great way to optimize for crawlers, because downloading the log files from your server lets you analyze how GoogleBot perceives your site. A log file analysis can help you understand your site’s strengths in terms of content and crawl budget, and reassure you that users are visiting the right pages. Specifically, these pages should be relevant both to the user and to the purpose of your site.

Most SEOs do not use log file analysis to improve their sites, but with the update Google is rolling out, I think it is high time this became a standard practice in the industry.

There are many tools at your disposal that can tell you whether a hit on your website came from a bot or a user, which can help you generate relevant content. With this, you can see how search engine crawlers behave on your site and what information they deem necessary to store.
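As a rough sketch of what this looks like in practice, the snippet below parses server logs in the common Apache/Nginx "combined" format (the sample lines are fabricated for illustration) and counts which paths Googlebot requested:

```python
import re
from collections import Counter

# Matches one line of the Apache/Nginx "combined" log format:
# ip - - [time] "METHOD /path HTTP/x" status bytes "referer" "user-agent"
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"$'
)

def googlebot_hits(lines):
    """Count Googlebot requests per path."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and "Googlebot" in m.group("agent"):
            hits[m.group("path")] += 1
    return hits

sample = [
    '66.249.66.1 - - [01/Oct/2019:10:00:00 +0000] "GET /blog/ HTTP/1.1" 200 5120 '
    '"-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.7 - - [01/Oct/2019:10:00:05 +0000] "GET /blog/ HTTP/1.1" 200 5120 '
    '"-" "Mozilla/5.0 (Windows NT 10.0) Chrome/77.0"',
]
print(googlebot_hits(sample))  # Counter({'/blog/': 1})
```

From counts like these you can see which sections of your site are eating crawl budget and which important pages the crawler is ignoring.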

Optimize Sitemaps

It has been mentioned time and again that sitemaps do not put you at the front of the crawl priority queue, but they do the job of showing you which of your content is indexed. A clean sitemap can do wonders for your site because it also improves user navigation.

The Sitemaps report in Search Console can help you test whether your sitemap is benefiting your site or putting it in jeopardy. Start optimizing your sitemaps and your site will improve.
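For illustration, a minimal sitemap can even be generated with nothing but the Python standard library; the URL and date below are placeholders:

```python
import xml.etree.ElementTree as ET

# The standard sitemaps.org namespace required for valid sitemaps.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Build a minimal XML sitemap from (loc, lastmod) pairs."""
    ET.register_namespace("", NS)  # serialize as the default namespace
    urlset = ET.Element(f"{{{NS}}}urlset")
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = loc
        ET.SubElement(url, f"{{{NS}}}lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap([("https://example.com/", "2019-10-01")])
print(sitemap_xml)
```

Whatever generates your sitemap, the point is the same: keep it limited to canonical, indexable URLs so the report in Search Console reflects the pages you actually want crawled.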

Utilize the Inspect URL Feature

If you are particularly meticulous about how your site’s content is doing, then inspecting specific URLs does the trick of showing you where you can improve.

The URL inspection feature gives you insight into a particular page on your site that needs improvement, which helps you sustain your Googlebot optimization efforts.

Sometimes it is as simple as finding no errors for that URL, but other times there are particular issues you have to deal with head-on in order to fix them.

Key Takeaway

With the Googlebot update comes another way to help us SEOs bring better user experience to site visitors. What you should also take note of are the common issues that Google has seen while evaluating the change in the user agent:

  • Pages that present an error message instead of normal page contents. For example, a page may assume Googlebot is a user with an ad-blocker, and accidentally prevent it from accessing page contents.
  • Pages that redirect to a roboted or noindex document.

To see whether your site is affected by the change, you can load your webpage in your browser using the new Googlebot user agent; you can override the user agent in Chrome’s developer tools. Comment down below whether your site is one of those affected.
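Outside the browser, you can approximate the same check from a script. The sketch below builds a request that identifies itself with the new evergreen-style Googlebot user agent (the Chrome version number is a placeholder); passing `req` to `urllib.request.urlopen` would perform the actual fetch against your own site:

```python
import urllib.request

# Illustrative evergreen Googlebot UA string; the Chrome version is a placeholder.
GOOGLEBOT_UA = (
    "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
    "Googlebot/2.1; +http://www.google.com/bot.html) Chrome/80.0.3987.92 Safari/537.36"
)

# Build a request that presents the Googlebot user agent.
# urllib.request.urlopen(req) would fetch the page as this UA.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": GOOGLEBOT_UA},
)
print(req.get_header("User-agent"))
```

If the response you get this way differs from what a normal browser sees (an error page, a redirect to a noindexed URL), your site is likely doing the kind of user-agent sniffing the update will expose.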

GoogleBot User Agent’s Update Rolls Out in December was originally posted by Video And Blog Marketing

Tuesday, October 1, 2019

Screaming Frog vs Netpeak Spider: Who’s the Superior Crawler?

SEO crawlers are must-have tools for every SEO professional or website owner. You need to have at least one of them in your toolbox but for us at SEO-Hacker, we have two: Screaming Frog and Netpeak Spider. 

Without SEO crawlers you’ll probably go crazy manually checking each and every page on your website for SEO issues. That is why it’s an absolute must that you have a tool that you can rely on.

Today, I’m going to compare two of the best SEO crawlers: Screaming Frog and Netpeak Spider. I use both of these tools when I do on-page optimizations for my clients and I absolutely love both of them. But today, these two will be rivals for this blog post. 

In the Left Corner: Screaming Frog


Screaming Frog is one of the leading SEO crawlers in the industry and is familiar to many SEOs. It is a powerful crawler that can handle small to large websites with ease. It gathers key on-site SEO factors that allow SEOs to find and fix issues easily.

Its main features allow you to find 404 errors on your website, find missing and duplicate page titles and headings, and visualize your overall site architecture. Screaming Frog also allows you to integrate Google Analytics, Google Search Console, Ahrefs, and Majestic data for deeper analysis. You could try Screaming Frog here.

In the Right Corner: Netpeak Spider

Similar to its competitor, Netpeak Spider is an SEO tool that helps you run a comprehensive technical audit of your website quickly and easily. It checks 50+ on-page SEO parameters and can identify 60+ types of issues on your website. It also gives you a good idea of how much PageRank your pages have, so you can improve your overall linking structure. It is great for experienced SEO specialists. You could try Netpeak Spider here.

Round 1: Pricing and Free Trial

Both tools offer free trials. The difference is that Netpeak Spider offers a 14-day free trial of all of its features, while Screaming Frog offers free use for a lifetime but limits you to crawling 500 URLs per website and withholds some crawl data.

For pricing, Screaming Frog is 20 dollars cheaper than Netpeak Spider: a 1-year license of Screaming Frog costs 149 euros, which is about 162 dollars, while Netpeak Spider costs 182 dollars. Screaming Frog only offers a 1-year license of the tool, while Netpeak Spider has monthly, 3-month, and 6-month pricing plans. If you have 2 or more users, you have to buy one license for each.

Round 2: Crawl Processing

There are two types of SEO crawlers: desktop and cloud-based. Screaming Frog and Netpeak Spider are both desktop crawlers, which means you install them on your computer and they consume your computer’s memory and CPU whenever you crawl a website.

If you have a decent computer, crawling small to medium-sized websites with these two tools is no problem at all. However, if you’re doing an SEO audit for a large website, you might have a problem: both of these tools will eat up your RAM. On the flip side, the faster your computer, the faster these tools can crawl your websites.

Netpeak Spider usually takes more time than Screaming Frog, but it also tends to crawl more pages.

Round 3: Features

Both tools are actually similar in features. They crawl your website and analyze page titles and metadata, look for redirect chains, broken links, duplicate content, and many more. 

Netpeak Spider specifically checks 54 parameters and can spot 62 optimization issues on a website. I especially like that it analyzes internal PageRank and identifies dead ends: pages that have incoming but no outgoing links. It gives me a good idea of how much link juice a specific page has.
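To make the "dead end" idea concrete, here is a small sketch (the link graph is made up for illustration) that finds pages which receive internal links but link out to nothing, so the link juice flowing into them goes nowhere:

```python
# Internal link graph: page -> list of pages it links to.
def dead_ends(graph):
    """Return pages that receive links but have no outgoing links."""
    linked_to = {dst for targets in graph.values() for dst in targets}
    return sorted(
        page for page in linked_to
        if not graph.get(page)  # no outgoing links recorded for this page
    )

site = {
    "/": ["/blog/", "/about/"],
    "/blog/": ["/", "/blog/post-1/"],
    "/about/": ["/"],
    "/blog/post-1/": [],  # receives links but links out to nothing
}
print(dead_ends(site))  # ['/blog/post-1/']
```

Tools like Netpeak Spider do essentially this at scale, then run a PageRank-style calculation over the same graph to estimate how link equity is distributed internally.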

For Screaming Frog, I specifically like that it could also identify pages that have structured data and identify if there are issues with the structured data on those pages.

Round 4: Interface

This is where the fun begins.

The interface of Screaming Frog is simple. New users will not have a hard time navigating through the tool. In the right window, it will give you an overview of the analysis of the SEO elements on your website.

It will show you information on the number of internal and external links on your site, duplicate and missing tags, duplicate and missing subheadings, pages with hreflangs, etc.

If you click on an SEO element, it will show you all the URLs that fall under that category in the left window. Screaming Frog provides information on every URL on your website, and it can even show how your pages would look on the search results page.

While similar in functionalities, Netpeak Spider is different in many ways when it comes to interface. The dashboard will give you a quick summary of the number of URLs crawled by the software and charts that show you an overall look at your website.

Once you go to the results page, you can immediately identify the URLs with issues because they are color-coded: red means a page has critical errors, while yellow indicates non-urgent issues.

On the right window, you could see a list of parameters that Netpeak Spider checks. You could select which parameters you want to see if you only need to optimize specific parameters.

Final Verdict

Coming into this fight, Netpeak Spider is the underdog because Screaming Frog is the more popular tool. In my opinion, there is no clear winner here. Both of these tools have their own ways of presenting data, but they essentially do the same thing: provide a technical website audit.

One is just as useful as the other, and I highly recommend having at least one of these tools in your SEO toolbox. Have you used one of them? Let me know in the comments which one you prefer.

Screaming Frog vs Netpeak Spider: Who’s the Superior Crawler? was originally posted by Video And Blog Marketing