Organic search traffic dropped 40% - what am I missing?
-
Have a client (an ecommerce site with 1,000+ pages) who recently switched to OpenCart from another cart. Their organic search traffic (from Google, Yahoo, and Bing) dropped roughly 40%. Unfortunately, we weren't involved with the site before the switch, so we can only rely on the Wayback Machine to compare the previous site to the present one.
I've checked all the common causes of traffic drops and so far I mostly know what's probably not causing the issue. Any suggestions?
- Some URLs stayed the same and the rest 301 redirect (note that many pages returned 404s until a couple of weeks after the switch, when the client implemented more 301 redirects; see the redirect-check sketch at the end of this post)
- They've got an XML sitemap and are well-indexed.
- The traffic drops hit pretty much across the site; they are not specific to a few pages.
- The traffic drops are not specific to any one country or language.
- Traffic drops hit mobile, tablet, and desktop
- I've done a full site crawl; only one 404 page and no other significant issues.
- The site crawl didn't find any pages blocked by nofollow, noindex, or robots.txt.
- Canonical URLs are good
- Site has about 20K pages indexed
- They have some bad backlinks, but I don't think it's backlink-related because Google, Yahoo, and Bing have all dropped.
- I'm comparing on-page optimization for select pages before and after, and not finding a lot of differences.
- It does appear that they implemented Schema.org when they launched the new site.
- Page load speed is good
I feel there must be a pretty basic issue here for Google, Yahoo, and Bing to all drop off, but so far I haven't found it. What am I missing?
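For reference, here's a minimal sketch of how I've been spot-checking the redirects (the URLs below are placeholders, not the client's real paths):

```python
# Spot-check that a sample of old URLs return a single 301 to a live page.
# The old_urls list is a placeholder; swap in paths from the old sitemap
# or the Wayback Machine.
import requests

old_urls = [
    "https://www.example.com/old-category/old-product",
    "https://www.example.com/old-category/",
]

for url in old_urls:
    # Don't follow redirects, so we can see the first status code returned.
    first = requests.get(url, allow_redirects=False, timeout=10)
    if first.status_code != 301:
        print(f"{url} -> {first.status_code} (expected 301)")
        continue
    # Follow the chain and confirm the final destination actually resolves.
    final = requests.get(url, allow_redirects=True, timeout=10)
    print(f"{url} -> 301 -> {final.url} ({final.status_code})")
```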
-
Hi Adam,
Not to point out something that is likely well taken care of, but did the GA / analytics code populate across the site?
Also, is there any heavy JavaScript on the site, especially above the analytics code, that might prevent the analytics code from loading properly? We had this happen with a client a few years ago. We built custom analytics for that client (they did not want to run GA). The client placed our code in the footer and a slow-loading CRO script in the header. The CRO script took so long to load that visitors had often clicked away from their landing page before our code had a chance to record the visit, since JavaScript generally executes in the order it appears on the page. We had them move our little piece of code up to the top of the page and the problem was solved (in the meantime, we had been missing about 20,000 visits each week!).
I'm just wondering whether this is a tracking issue, since all search traffic, not just Google's, has been affected. It would be quite rare for one issue to have the same effect at the same time on both Bing's and Google's algorithms; they're similar, but they're not identical, and Bing generally takes longer than Google to respond to changes.
Any chance you have raw server logs to compare analytics stats to?
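If you do have logs, a rough sketch like this (assuming a standard combined-format Apache/nginx access log; the filename is a placeholder) will tally daily hits that arrived with a search-engine referrer, so you can line them up against what analytics reports:

```python
# Count daily hits that arrived with a search-engine referrer from a
# combined-format access log, to compare against the analytics numbers.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<day>[^:]+):[^\]]+\] "[^"]*" \d+ \S+ "(?P<referrer>[^"]*)"'
)
SEARCH_ENGINES = ("google.", "bing.", "yahoo.")

daily_counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LOG_LINE.match(line)
        if not match:
            continue
        referrer = match.group("referrer").lower()
        if any(engine in referrer for engine in SEARCH_ENGINES):
            daily_counts[match.group("day")] += 1  # day looks like "12/Dec/2024"

for day, hits in daily_counts.items():
    print(day, hits)
```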
-
I don't see anything that I would think would trigger that. Let me PM you the URL.
-
Did the layout of the header area change significantly? If, for instance, the header went from a tenth of the above-the-fold area to a third of it, that could run the entire site afoul of Google's "top heavy" page layout algorithm.
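A rough way to eyeball that, assuming the site wraps its header in a header, #header, or .header element (adjust the selector for the real markup), is to measure how much of a typical first viewport the header occupies. A sketch using Playwright:

```python
# Estimate what share of the first viewport the site header occupies.
# The URL and the CSS selector are assumptions; swap in the real ones.
from playwright.sync_api import sync_playwright

VIEWPORT = {"width": 1366, "height": 768}
URL = "https://www.example.com/"  # placeholder

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport=VIEWPORT)
    page.goto(URL, wait_until="networkidle")
    header_height = page.evaluate(
        """() => {
            const el = document.querySelector('header, #header, .header');
            return el ? el.getBoundingClientRect().height : null;
        }"""
    )
    browser.close()

if header_height is None:
    print("No header element matched that selector.")
else:
    print(f"Header fills {header_height / VIEWPORT['height']:.0%} of the first viewport.")
```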
-
Thanks for the suggestions!
-
The homepage, category, and product pages have all lost traffic.
-
So far, I haven't found any noteworthy changes in content.
-
I've been wondering if this might be part of the issue.
-
I've reviewed Majestic link data, and only see a few deleted backlinks, so I'm thinking it's not a backlink issue.
-
Thanks for the suggestion. So far the only significant difference in optimization I've found has been that they added Schema.org markup.
-
Possibilities:
- The layout of the product pages for the new shopping cart is pissing off Panda. If that's the case, the traffic to the home page shouldn't have changed much, but the product pages will have dropped.
- Panda now sees the pages in general as having less content than before. Perhaps images are no longer being loaded in a way Google can see (where they were before), something like that, and Panda now treats the entire site as less rich in content.
- It often seems to take Google a month or so to "settle out" all of the link juice flows when you do a bunch of redirects, introduce new URLs, etc. I would expect that the link juice calculation is iterative, which would be why it takes a number of passes of the PageRank calculation for entirely new URLs to "get" all the link juice they should have (see the sketch at the end of this post).
- Their backlinks were moderately dependent upon a set of link networks, and those link networks have shut down all their sites (so that neither Google nor Bing still sees the links from them).
Those are the ideas that come to mind so far.
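To illustrate the iterative point in the third idea, here's a toy power-iteration PageRank on a made-up four-page graph (not Google's actual implementation, purely illustrative): the score of the redirect target only settles after several passes, which is roughly why freshly redirected URLs can take a while to pick up their full equity.

```python
# Toy power-iteration PageRank on a tiny, made-up link graph. It only
# shows that scores settle gradually over several passes, not instantly.
DAMPING = 0.85

def pagerank(links, iterations=15):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for step in range(iterations):
        new_rank = {page: (1 - DAMPING) / n for page in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages  # dangling pages spread their rank evenly
            share = DAMPING * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
        print(f"pass {step + 1}: new-product score = {rank['new-product']:.4f}")
    return rank

# "old-product" now only passes equity on to its redirect target.
pagerank({
    "home": ["category", "old-product"],
    "category": ["new-product", "home"],
    "old-product": ["new-product"],
    "new-product": ["home"],
})
```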
-
Did the new cart generate product pages that were optimized differently from the old cart's? (Assuming cart-generated product pages were used.)
Related Questions
-
How do internal search results get indexed by Google?
Hi all,
Most of the URLs that are created by using the internal search function of a website / web shop shouldn't be indexed, since they create duplicate content or waste crawl budget. The standard way to go is to 'noindex, follow' these pages, or sometimes to use robots.txt to disallow crawling of these pages.
The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, and it can't be found by navigating the website, so how can search engines index these URLs that were generated by using an internal search function?
Second question: let's say somebody embeds a link on his website pointing to a URL from your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed. This means Google won't even crawl those pages. Is it possible then that the link that was used on another website will show an empty page after a while, since Google doesn't even crawl this page? Thanks for your thoughts guys.
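For illustration, the 'noindex, follow' approach described above can also be sent as an HTTP response header on the search routes. A minimal sketch, assuming a Flask app purely as an example (not the poster's actual stack):

```python
# Minimal sketch: mark internal search result pages "noindex, follow"
# via the X-Robots-Tag response header. Flask and the /search route are
# assumptions, not the original poster's actual stack.
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def internal_search():
    query = request.args.get("q", "")
    return f"Results for {query}"  # stand-in for the real results template

@app.after_request
def keep_search_out_of_the_index(response):
    # Header equivalent of <meta name="robots" content="noindex, follow">.
    if request.path.startswith("/search"):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```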
Intermediate & Advanced SEO | Mat_C
-
Organic Traffic Drop of 90% After Domain Migration
We moved our domain, http://www.nyc-officespace-leader.com, on April 4th; it was migrated to https://www.metro-manhattan.com. Google Search Console continues to show about 420 URLs indexed for the old "NYC" domain. That number has not dropped in Search Console, and I don't understand why Google has not de-indexed the old site.
For the new "Metro" domain, only 114 pages are shown as valid, and 390 URLs appear as "crawled - currently not indexed". Our search traffic has dropped from about 85 visits a day to 12 per day. Please note that the migrated content is identical (nothing at all changed) and all redirects were implemented properly. Also, at the time of the migration we filed a disavow for about 200 spammy links; the disavow file was submitted for the old domain and the new one as well. Any ideas as to how to troubleshoot this would be much appreciated! This has not been very good for business.
Intermediate & Advanced SEO | Kingalan1
-
How to get local search volumes?
Hi Guys, I want to get search volumes for "carpet cleaning" for certain areas in Sydney, Australia. I'm using this process:
- Choose 'Search for new keyword and ad group ideas'.
- Enter the main keywords for your product / service.
- Remove any default country targeting.
- Specify your chosen location(s) by targeting specific cities / regions.
- Click 'Get ideas'.
The problem is that none of the areas, even popular ones (like North Sydney, Surry Hills, Newtown, Manly), are appearing, and the Google keyword tool returns no matches. Are there any other tools or sources of data I can use to get accurate search volumes for these areas? Any recommendations would be very much appreciated. Cheers
Intermediate & Advanced SEO | wozniak65
-
How to target for misspelled Brand name searches
Hi to all the SEO experts here, I am working on SEO for my 4-month-old website. For example, it's 'abz.com'. We like the brand name 'abz' for the business and we are able to rank well for the keyword 'abz'. However, we would also like to target the keyword 'abc'. There are two reasons for that:
- 'abc' is an actual word, so there is a possibility that our users may type 'abc' instead of 'abz' to reach us.
- For 'abc', the top result is 'abct.us', which is a site of an adult nature, and our website doesn't feature in the results at all. This is hitting us hard in terms of brand visibility.
So the questions are:
1. How do we feature in the results for a keyword search of 'abc'? Would the following approach work: buying an available domain such as 'abc.co.in', using it to feature in 'abc' results, and 301 redirecting it to 'abz.com'? Or having 'abc' in the page meta (title and description)? The latter is hard for us, since we would need to rethink our taglines and copy.
2. If we search for 'abz', Google says "Did you mean abc". Is there a way to prevent this suggestion?
It would be helpful to have some more ideas for this problem.
Intermediate & Advanced SEO | manasag
-
Crawled page count in Search Console
Hi Guys, I'm working on a project (premium-hookahs.nl) where I have stumbled upon a situation I can't address. Attached is a screenshot of the crawled pages in Search Console.
History: due to technical difficulties, this webshop didn't always noindex filter pages, resulting in thousands of duplicated pages. In reality this webshop has fewer than 1,000 individual pages. At this point we took the following steps to resolve this:
- Noindex filter pages.
- Exclude those filter pages in Search Console and robots.txt.
- Canonical the filter pages to the relevant category pages.
This, however, didn't result in Google crawling fewer pages. Although the implementation wasn't always sound (technical problems during updates), I'm sure this setup has been the same for the last two weeks. Personally I expected a drop in crawled pages, but they are still sky high. I can't imagine Google visits this site 40 times a day.
To complicate the situation: we're running an experiment to gain positions on around 250 long-tail searches. A few filters will be indexed (size, color, number of hoses, and flavors) and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, H1s, and texts are unique as well.
Questions:
- Excluding pages in robots.txt should result in Google not crawling those pages, right?
- Is this number of crawled pages normal for a website with around 1,000 unique pages?
- What am I missing?
Intermediate & Advanced SEO | Bob_van_Biezen
-
Redirect Search Results to Category Pages
I am planning to redirect the search results to their matching category pages to avoid having two indexed pages with essentially the same content. For example, http://www.example.com/search/?kw=sunglasses will be redirected to http://www.example.com/category/sunglasses/. Is this a good idea? What are the possible negative effects if I go this route? Thanks.
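A rough sketch of that redirect, assuming a Flask-style app and that the keyword maps cleanly onto a category slug (both assumptions, not the poster's actual stack):

```python
# Rough sketch: 301-redirect an internal search URL to its matching
# category page. Flask and the category slugs are placeholders.
from flask import Flask, redirect, request

app = Flask(__name__)
KNOWN_CATEGORIES = {"sunglasses", "watches"}  # placeholder category slugs

@app.route("/search/")
def search_to_category():
    keyword = request.args.get("kw", "").strip().lower()
    if keyword in KNOWN_CATEGORIES:
        # Permanent redirect so the category page consolidates the signals.
        return redirect(f"/category/{keyword}/", code=301)
    return "No matching category", 404  # or fall back to normal search results
```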
Intermediate & Advanced SEO | WizardOfMoz
-
Page position dropped on Google
Hey Guys, my web designer recommended this forum, the reason being that my Google position has dropped from page 1 to page 10 in the last week. The site is weloveschoolsigns.co.uk, but our main business site is textstyles.co.uk; the school signs are a product of Text Styles.
I have been told by my SEO company that because I changed the school logo to the Text Styles logo, Google has penalised me for it and dropped us from page 1 for numerous keywords to page 10 or more. They have also said that duplicate content within the school site (http://www.weloveschoolsigns.co.uk/school-signs-made-easy/) has also contributed to the drop in positions (this content is not on the Text Styles site). Lastly, they said that having the same telephone number is a definite no-no. They said that I have been penalised because Google sees the above as trying to monopolise the market.
I don't know if all this is true, as the SEO is way above my head, but they have quoted me £1,250 to repair all the errors, when the site only cost £750. They have also mentioned that because of the above changes, the main Text Styles site will also be punished. Any thoughts on this matter would be much appreciated, as I don't know whether to pay them to crack on or accept the new positions. Either way I'm very confused. Thanks, Thomas
Intermediate & Advanced SEO | TextStylesUK
-
Why is Google Displaying this image in the search results?
Hi, I'm looking for advice on how to remove or change a particular image Google is displaying in the search results. I have attached a screenshot. At first look, I assumed the image would be related to, and appear on, the dealer's Google+ Local page: https://plus.google.com/118099386834104087122/about?hl=en But there are no photos there. The image seems to be coming from the website. Is there a way to stop Google from displaying this image, or to make them display a totally different image? Thanks, Chris
Intermediate & Advanced SEO | Mattcarter08