As a website owner, you cannot afford to ignore Google if you want to do well online. Google dominates the search engine market by a huge margin, so if you want to steer your online journey in the right direction, it is vital to follow the rules Google lays down. Staying on the right side of Google is in your own interest.
If you follow the right practices, you can rank well on Google. However, if you break the rules, Google can penalize you.
That is why it is crucial to know what is right and what you need to avoid. Duplicate content is one topic that causes a lot of confusion, and many myths surround it. It is vital to know the truth about duplicate content and how Google perceives it.
Many website owners fear duplicate content as much as they fear spammy links, and many believe that Google penalizes websites for it.
Let us, therefore, take you through what you need to know about duplicate content, the myths, and the reality.
Google tried to allay these fears back in 2008, when Susan Moskwa wrote on the Google Webmaster Central Blog: “Let’s put this to bed once and for all, folks: There’s no such thing as a “duplicate content penalty.” At least, not in the way most people mean when they say that. You can help your fellow webmasters by not perpetuating the myth of duplicate content penalties!”
So, what is duplicate content?
According to Google, “duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Mostly, this is not deceptive in origin.”
The way Google handles duplicate content is what leads people to believe that Google penalizes it. What actually happens is that the duplicates get filtered out of the search results.
Content that appears on the internet in more than one place is known as duplicate content. ‘One place’ in this context means a unique website address or URL, so if the same content appears at more than one web address, you have duplicate content.
So, what are the problems that duplicate content can cause?
Why does duplicate content matter?
Search engines can face three main problems due to duplicate content:
- It can become difficult for search engines to know which version to include in and which to exclude from their indices.
- Search engines get confused about whether to direct the link metrics to one page or to divide them among the multiple versions.
- Search engines also don’t know which version to rank for a search query.
So, what is the percentage of duplicate content on the internet?
According to a study by Raven Tools, 29% of pages had duplicate content.
How does Google perceive duplicate content?
Let us take you through what Google thinks about duplicate content:
- Google has clarified several times that it does not penalize websites for duplicate content.
- When people search on Google, they want diversity in the results. Users don’t want to see the same article over and over again. That is why Google consolidates duplicate content and shows only one version.
- Google has algorithms in place to keep duplicate content from hurting webmasters. The algorithms group the different versions of duplicate content and display the best URL in the group. They also consolidate various signals, such as links, from the pages within the group to the one being displayed. Google says that if website owners cannot sort out a duplication issue themselves, it will solve the problem at its end.
- Duplicate content can lead to adverse action from the search engines only if it is used to manipulate search results.
- The worst that can happen after filtering is that your best page will not get displayed in search results.
- Google tries to determine the original source of the content and display that version to its users.
- If somebody deliberately duplicates your content without your permission, you can report it to Google and request its removal.
- It is advisable not to block duplicate content, because if the search engine cannot crawl all the versions, it cannot consolidate their signals.
Causes of duplicate content
There are several causes of duplicate content, and some of these are:
- www vs non-www
- HTTP vs HTTPS
- Session IDs
- URL parameters used for sorting and tracking
- Order of parameters
- Comment pagination
- Printer-friendly pages
- Index pages
- Trailing slashes
- Scrapers and content syndication
- Country or language versions
- Developing or hosting environments
How to solve the duplicate content issue?
Here are some ways to solve the duplicate content issue:
Use canonical tags – A canonical tag (the rel=canonical link element) consolidates signals and helps search engines pick your preferred version. Using a canonical tag helps prevent the problems caused when the same content appears on multiple URLs.
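As a sketch, a canonical tag is a single line in the head of each duplicate page, pointing at the URL you want indexed (the example.com URLs below are placeholders):

```html
<!-- On every duplicate or parameterized version of the page, e.g.
     https://www.example.com/shoes?sort=price, point search engines
     at the preferred URL: -->
<link rel="canonical" href="https://www.example.com/shoes" />
```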
Use a 301 redirect – A 301 is a permanent redirect that passes most of the link equity, or ranking power, to the destination page. It prevents the alternate versions of a page from being displayed.
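For example, here is a minimal sketch of redirecting the www hostname to the non-www version with a 301, assuming an Apache server with mod_rewrite enabled (example.com is a placeholder):

```apache
# .htaccess: permanently redirect www.example.com to example.com,
# so only one version of each URL can be crawled and indexed.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
```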
Let Google know how to handle URL parameters – You can tell Google which URL parameters your site uses, so it does not have to figure out on its own what each parameter does. This makes it easier for the search engine to find the original content.
Use rel=”alternate” – You can use this to consolidate different versions of a page, such as a mobile version or country- and language-specific pages. Hreflang tags are a technical solution for websites that serve similar content in several languages; they are used to display the correct language page in the search results. Google has clarified that sorting out hreflang will not improve your rankings, but it allows Google to display the correct version for a given search, because Google identifies the alternate versions and consolidates the signals across them.
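A minimal sketch of hreflang annotations for an English and a French version of the same page (all URLs are placeholders); note that each language version should list all the alternates, including itself:

```html
<!-- In the <head> of both language versions of the page: -->
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
<link rel="alternate" hreflang="fr" href="https://example.com/fr/page" />
<!-- Optional fallback for users whose language has no dedicated version: -->
<link rel="alternate" hreflang="x-default" href="https://example.com/en/page" />
```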
You can also use rel=”prev” and rel=”next” for pagination issues, though Google has since stated that it no longer uses this markup for indexing.
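A sketch of what that pagination markup looks like on page 2 of a paginated series (the URLs are placeholders):

```html
<!-- In the <head> of page 2 of a paginated series: -->
<link rel="prev" href="https://example.com/articles?page=1" />
<link rel="next" href="https://example.com/articles?page=3" />
```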
Use content syndication, but always follow the best practices.
Scraping and spam can also cause issues. Even so, you should not disallow duplicates in robots.txt, should not nofollow or noindex them, and should not point canonicals from pages targeting longer-tail queries to broader overview pages. Instead, use the signals described above for your specific issue, so that search engines know how you want your content to be treated.
Though duplicate content does not lead to any official penalty, it can still affect your search engine rankings. When several pieces of appreciably similar content exist in more than one location, it becomes difficult for search engines to display the correct version for a specific search query.
If you want to improve your rankings on Google, you need to have a robust SEO strategy. You can get in touch with vStacks Infotech for a smart SEO strategy. We know what it takes to improve your rankings.