Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.
Google keeps marking different pages as duplicates
-
My website has many pages like this:
mywebsite/company1/valuation
mywebsite/company2/valuation
mywebsite/company3/valuation
mywebsite/company4/valuation
...
These pages describe the valuation of each company.
These pages were never identical, but initially I included a few generic paragraphs (what is valuation, what is a valuation model, and so on) on every page, so parts of their content were identical.
Google marked many of these pages as duplicates in Google Search Console, so I modified their content: I removed the generic paragraphs and added information unique to each company. As a result, the pages are now very different from each other, with little overlapping content.
It has been more than a month since I made these changes, and Google still marks the majority of these pages as duplicates, even though it has already crawled the modified versions. Is there anything else I can do in this situation?
Thanks
-
Google may mark different pages as duplicates if they contain very similar or identical content. This can happen because of duplicate metadata, URL parameters, or syndicated content. To address it, ensure each page has unique and valuable content, use canonical tags where appropriate, and make sure parameterised URLs resolve to a single preferred version (note that Google Search Console's URL Parameters tool was retired in 2022, so canonical tags are the main lever here).
-
Yes, there are a few other things you can do if Google is still marking your pages as duplicates after you have modified them to be unique:
-
Check your canonical tags. Canonical tags tell Google which version of a page is the preferred one to index. If your canonical tags are in place and point to the correct pages, Google should eventually recognize that the pages are not duplicates.
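As a minimal sketch, each valuation page can declare itself as its own canonical URL (the URLs below follow the pattern in the question and are placeholders):

```html
<!-- In the <head> of mywebsite.com/company1/valuation -->
<link rel="canonical" href="https://mywebsite.com/company1/valuation" />
```

A self-referencing canonical like this helps Google consolidate any parameterised or alternate versions of the page onto the one URL you want indexed.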
-
Send consistent signals for URL parameters. Google Search Console used to offer a URL Parameters tool for telling Google which parameters to treat as unique and which to ignore, but it was retired in 2022. If you have pages with similar content under different URL parameters (such as different product categories or sorting options), rely on canonical tags and consistent internal linking to indicate the preferred URL instead.
-
Request a recrawl of your pages. In Google Search Console, use the URL Inspection tool and click "Request Indexing" for the affected URLs. Once Google has recrawled them, it can see the new, modified versions of your pages.
If you have done all of the above and Google is still marking your pages as duplicates, give it time (the "Duplicate" status in Search Console can lag well behind a recrawl) or ask in the Google Search Central Help Community.
-
If Google is marking different pages on your website as duplicates, it can negatively impact your website's search engine rankings. Here are some common reasons why Google may be doing this and steps you can take to address the issue:
Duplicate Content: Google's algorithms are designed to filter out duplicate content from search results. Ensure that your website does not have identical or near-identical content on multiple pages. Each page should offer unique and valuable content to users.
URL Parameters: If your website uses URL parameters for sorting, filtering, or tracking purposes, Google may interpret these variations as duplicate content. Use canonical tags or the URL parameter tool in Google Search Console to specify which version of the URL you want to be indexed.
Pagination: For websites with paginated content (e.g., product listings, blog archives), you can add rel="next" and rel="prev" links to indicate the sequence of pages. Note that Google stated in 2019 that it no longer uses these as an indexing signal, though other search engines may; what matters most is linking clearly between the pages in the series so crawlers can discover them all.
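A minimal sketch of those pagination hints, for a middle page in a series (URLs are placeholders; as noted, Google no longer uses these as an indexing signal):

```html
<!-- In the <head> of example.com/blog/page/2 -->
<link rel="prev" href="https://example.com/blog/page/1" />
<link rel="next" href="https://example.com/blog/page/3" />
```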
www vs. non-www: Make sure you have a preferred domain (e.g., www.example.com or example.com) and set up 301 redirects to the preferred version. Google may treat www and non-www versions as separate pages with duplicate content.
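One way to enforce a preferred www hostname is a 301 redirect at the server level. A sketch for Apache with mod_rewrite enabled (domain is a placeholder; nginx and other servers have their own equivalents):

```apacheconf
# .htaccess: 301-redirect the bare domain to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```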
HTTP vs. HTTPS: Ensure that your website uses secure HTTPS. Google may view HTTP and HTTPS versions of the same page as duplicates. Implement 301 redirects from HTTP to HTTPS to resolve this.
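The HTTP-to-HTTPS redirect can be handled the same way. A sketch for Apache with mod_rewrite enabled:

```apacheconf
# .htaccess: 301-redirect any HTTP request to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```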
Mobile and Desktop Versions: If you serve separate mobile URLs (e.g., m.example.com), use rel="alternate" on the desktop page and rel="canonical" on the mobile page to specify the relationship between the two versions. (A responsive design avoids this issue entirely, since desktop and mobile share one URL.)
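For a separate-mobile-URL setup, the annotations pair up like this (URLs and the media query are placeholders):

```html
<!-- On the desktop page, example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page" />

<!-- On the mobile page, m.example.com/page -->
<link rel="canonical" href="https://example.com/page" />
```

The rel="canonical" on the mobile page is what stops the two versions from being treated as duplicates of each other.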
Thin or Low-Quality Content: Pages with little or low-quality content may be flagged as duplicates. Improve the content on such pages to provide unique value to users.
Canonical Tags: Implement canonical tags correctly to indicate the preferred version of a page when there are multiple versions with similar content.
XML Sitemap: Ensure that your XML sitemap is up-to-date and accurately reflects your website's structure. Submit it to Google Search Console.
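Listing only the canonical URL of each page in the sitemap reinforces which versions you want indexed. A minimal sketch using the URL pattern from the question (lastmod values are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://mywebsite.com/company1/valuation</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://mywebsite.com/company2/valuation</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```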
Avoid Scraped Content: Ensure that your content is original and not scraped or copied from other websites. Google penalizes sites with duplicate or plagiarized content.
Check for Technical Errors: Use Google Search Console to check for crawl errors or other technical issues that might be causing duplicate content problems.
Structured Data: Ensure that your structured data (schema markup) is correctly implemented on your pages. Incorrectly structured data can confuse search engines.
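Structured data is usually added as a JSON-LD block in the page head. A sketch for one of the valuation pages, assuming an Article type fits the content (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Company1 Valuation",
  "datePublished": "2024-01-15",
  "author": { "@type": "Organization", "name": "Example Inc." }
}
</script>
```

Google's Rich Results Test can confirm the markup parses correctly before you rely on it.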
Regularly monitor Google Search Console for any duplicate content issues and take prompt action to address them. It's essential to provide unique and valuable content to your website visitors while ensuring that search engines can correctly index and rank your pages.