Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
NoIndexing Massive Pages all at once: Good or bad?
-
If you have a site with a few thousand high-quality, authoritative pages and tens of thousands of search-results and tag pages with thin content, and you noindex,follow all of the thin pages at once, will Google see this as a good or a bad thing?
I am only trying to do what the Google guidelines suggest, but since I have so many pages indexed on my site, will throwing the noindex tag on the ~80% of pages with thin content negatively impact my site?
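For reference, the directive being discussed is a single page-level tag (or the equivalent HTTP response header), applied to each thin page:

```html
<!-- In the <head> of each thin page: drop the page from the
     index, but keep crawling and following its links -->
<meta name="robots" content="noindex,follow">
```

The same directive can be sent as an `X-Robots-Tag: noindex, follow` HTTP header, which is handy when the pages are generated by a template you can't easily edit.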
-
If you're not currently suffering any ill effects, I probably would ease into it, just because any large-scale change can theoretically cause Google to re-evaluate a site. In general, though, getting these results pages and tag pages out of the index is probably a good thing.
Just a warning that this almost never goes as planned, and it can take months to fully kick in. Google takes its sweet time de-indexing pages. You might want to start with the tag pages, where a straight NOINDEX is probably a solid bet. After that, you could try rel=prev/next on the search pagination and/or canonical tags on the search filters. That would keep your core search pages indexed, but get rid of the really thin stuff. There's no one-size-fits-all solution, but taking it in stages and using a couple of different methods targeted to the specific type of content may be a good bet.
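The staged tactics above might look like this in markup (the URLs are hypothetical; note that Google has since retired rel=prev/next as an indexing signal, so treat that part as historical and the canonical as the durable piece of the sketch):

```html
<!-- On a filtered variant of a search page: consolidate signals
     to the unfiltered version instead of indexing each filter -->
<link rel="canonical" href="https://example.com/search/widgets/">

<!-- On paginated search results (historical rel=prev/next markup) -->
<link rel="prev" href="https://example.com/search/widgets/?page=1">
<link rel="next" href="https://example.com/search/widgets/?page=3">
```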
Whatever you do, log everything and track the impact daily. The more you know, the better off you'll be if anything goes wrong.
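In the spirit of "log everything", a minimal sketch of a rollout check: a small stdlib-only helper (not from the thread; the function names and URLs are made up for illustration) that, given each page's fetched HTML, reports whether the robots meta tag actually carries noindex, so you can verify a bulk change reached every template. It assumes the tag is written with `name` before `content`, as most CMSs emit it.

```python
import re

# Matches <meta name="robots" content="..."> and captures the content
# value. Assumes name appears before content in the tag.
META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def has_noindex(html: str) -> bool:
    """Return True if the page's robots meta tag contains 'noindex'."""
    match = META_ROBOTS.search(html)
    return bool(match) and "noindex" in match.group(1).lower()

def audit(pages: dict[str, str]) -> dict[str, bool]:
    """Map each URL to whether its fetched HTML carries a noindex directive."""
    return {url: has_noindex(html) for url, html in pages.items()}
```

Run it daily against a sample of thin URLs and a sample of the pages you want to keep indexed, and diff the results; a keeper page showing `True` is the kind of mistake you want to catch on day one.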
-
At the moment you are in Google's index but not really following the Google guidelines as far as the thin content is concerned... once you apply the rule you will be much closer to the guidelines, which simply means Google will love you more... so no big problems!
You might see some small ups and downs in traffic, but it should settle within days!
-
It may take a while if the pages you are deindexing are not crawled very often by Google. You just have to sit back and wait a bit.
Two other points.
Look in your Analytics. If you deindex all those pages, how much traffic do they bring in to start with? If it is only 5% of your traffic, then expect to lose about that much.
One correction on the use of robots.txt vs the meta tag: robots.txt stops Google from crawling, but will not remove pages from the SERPs. A noindex meta tag on the page will get them removed. Use the latter and you will be happier.
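To make that distinction concrete, here is an illustrative contrast (the /search/ path is hypothetical):

```
# robots.txt - stops crawling, but URLs that are already indexed
# can linger in the SERPs (often as URL-only listings):
User-agent: *
Disallow: /search/

# On-page meta tag - the page stays crawlable, so Google sees the
# directive and drops the page from the index:
<meta name="robots" content="noindex,follow">

# Caveat: if robots.txt blocks a URL, Google can never re-crawl it
# to see the noindex tag, so don't combine the two on the same URLs.
```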
-
As far as Google crawling and de-indexing all of the pages with the noindex tag goes, is that a time-consuming process before all of the pages are removed?
-
No negative impacts here as far as penalties or otherwise. Just make sure it's really what you want to do. If the page would ever be searched for by a user then keep it indexed regardless of how thin you worry the content might be. Or beef it up.
Also consider using your robots.txt file instead of having to add that tag to all these pages (though note that blocking crawling will not, by itself, remove already-indexed pages from the SERPs)...
-my two cents.