Robots.txt & URL removal vs. noindex, follow?
-
When de-indexing pages from Google, what are the pros & cons of each of the two options below?

1. Use robots.txt & request URL removal from Google Webmaster Tools
2. Use the noindex, follow meta tag on all doctor profile pages:
- Keep the URLs in the sitemap file so that Google will recrawl them and find the noindex meta tag
- Make sure that they're not disallowed by the robots.txt file
-
Great, comprehensive answer from Ryan as ever.
Nothing more to see here, folks.
Move along now.
Move along.
-
The preferred option would be the noindex, follow tag.
The robots.txt file is a choice of last resort. The best robots.txt file for a site is an empty one (i.e. no disallows). Robots.txt is a tool to be used when other options are either not available or the effort involved is deemed too great.
If you use robots.txt and URL removal from Google, that will work: the page will get de-indexed, but Google will never crawl that page again and therefore will not follow any of the links on it. You are blocking their crawler, so your site will not be crawled as thoroughly. Pages can be missed, a lower percentage of your pages will be indexed (this mainly applies to larger sites), and any link juice that flows to the blocked pages will lose its value. Any anchor text or other link value on those pages will be lost as well.
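As a rough sketch of that approach, and assuming the doctor profile pages live under a hypothetical /doctors/ path, the robots.txt block would look something like this:

User-agent: *
Disallow: /doctors/

Once a URL matches that prefix, Googlebot stops requesting it entirely, which is exactly why the links on those pages stop passing value.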
If you use the "noindex, follow" tag, those pages will still be crawled, they will continue to contribute value to your site, and their links will continue to pass value to their target URLs, many of which will be your site's internal pages.
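For clarity, a minimal sketch of that tag as it would sit in the <head> of each page you want de-indexed:

<!-- keep the page out of search results, but still follow and pass value through its links -->
<meta name="robots" content="noindex, follow">

Googlebot can only read this tag if the page is not disallowed in robots.txt, which is why the two approaches should not be combined.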
A final point: the URL removal tool in Google WMT will remove the page from Google, but it won't affect Yahoo, Bing, or other search engines and directories.
Related Questions
-
My URL disappeared from Google but Search Console shows it as indexed. This URL has been indexed for more than a year. Please help!
A super weird problem that I haven't been able to solve for the last 5 hours. One of my URLs, https://www.dcacar.com/lax-car-service.html, has been indexed for more than a year and also has an AMP version. A few hours ago I realized that it had disappeared from the SERPs. We were ranking on page 1 for several key terms. When I perform the search "site:dcacar.com", the URL is nowhere to be found across all 5 pages of results, but when I check Google Search Console it shows as indexed. I requested indexing again but nothing changed. All other 50 or so URLs are not affected at all; this is the only URL that has gone missing. Can someone solve this mystery for me, please? Thanks a lot in advance.
Intermediate & Advanced SEO | | Davit19850 -
What do you add to your robots.txt on your ecommerce sites?
We're looking at expanding our robots.txt; we currently don't have the ability to noindex/nofollow. We're thinking about adding the following: checkout and basket, then possibly price, theme, sort-by and other misc filters. What do you include?
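As a rough sketch only, and assuming hypothetical /checkout/ and /basket/ paths plus sort and price query parameters (every platform structures these differently), directives like these might cover the sections mentioned:

User-agent: *
Disallow: /checkout/
Disallow: /basket/
Disallow: /*?sort=
Disallow: /*?price=

The * wildcard in the last two rules is supported by Googlebot and Bingbot, though not by every crawler.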
Intermediate & Advanced SEO | | ThomasHarvey0 -
Robots.txt - Do I block bots from crawling the non-www version if I use www.site.com?
My site is set up at http://www.site.com and I have the non-www version redirected to www in the .htaccess file. My question is: what should my robots.txt file look like for the non-www site? Do you block robots from crawling the site like this, or do you leave it blank?
User-agent: *
Disallow: /
Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
Intermediate & Advanced SEO | | morg454540 -
Product or Shop in URL
What do you think is better for SEO and for sales? I am using WooCommerce for a health products website: websitename.com/product/keyword OR websitename.com/shop/keyword
Intermediate & Advanced SEO | | MasonBaker0 -
Robots.txt, does it need preceding directory structure?
Do you need the entire preceding path in robots.txt for it to match? E.g. I know that if I add Disallow: /fish to robots.txt it will block: /fish
Intermediate & Advanced SEO | | Milian
/fish.html
/fish/salmon.html
/fishheads
/fishheads/yummy.html
/fish.php?id=anything

But would it block?:

en/fish
en/fish.html
en/fish/salmon.html
en/fishheads
en/fishheads/yummy.html
en/fish.php?id=anything (examples taken from the Robots.txt Specifications)

I'm hoping it actually won't match; that way, writing this particular robots.txt will be much easier! Basically, I'm wanting to block many URLs that have BTS- in them, such as:

http://www.example.com/BTS-something
http://www.example.com/BTS-somethingelse
http://www.example.com/BTS-thingybob

But I have other pages that I do not want blocked, in subfolders that also have BTS- in them, such as:

http://www.example.com/somesubfolder/BTS-thingy
http://www.example.com/anothersubfolder/BTS-otherthingy

Thanks for listening
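As a sketch of the prefix matching being asked about, using the hypothetical BTS- example above: robots.txt rules match from the beginning of the URL path, so the rule below should block http://www.example.com/BTS-something but not http://www.example.com/somesubfolder/BTS-thingy.

User-agent: *
Disallow: /BTS-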
Yoast & rel=canonical for paginated WordPress URLs
Hello, our WordPress blog at http://www.jobs.ca/career-resources has a rel=canonical issue since we added pagination to the front page and category pages. We're using Yoast, and it's incorrectly applying a rel-canonical tag referencing page 1 on page 2, 3, etc. This is a known misuse of the rel-canonical tag (per Google's Webmaster Blog - http://googlewebmastercentral.blogspot.ca/2013/04/5-common-mistakes-with-relcanonical.html, which says rel-canonical should be replaced with rel-prev and rel-next for page 2, 3, etc.). We don't see anywhere in Yoast's options to correct this behaviour for page 2, 3, etc. Yoast allows you to override a page's canonical URL; otherwise it automatically uses the WordPress permalink. My question is, does anyone know how to configure Yoast to properly replace rel-canonical tags with rel-prev and rel-next for paginated URLs, or do I need to look at another plugin or customize the behavior directly in my child theme code? This issue was brought up here as well: http://moz.com/community/q/canonical-help, but the only response did not relate to Yoast. (We're using WordPress 3.6.1 and Yoast "WordPress SEO" 1.4.18)
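For reference, a sketch of the pagination markup the cited Google post describes, shown for a hypothetical page 2 of the archive (the /page/2/ URL pattern is an assumption based on WordPress defaults, not taken from the thread):

<link rel="canonical" href="http://www.jobs.ca/career-resources/page/2/" />
<link rel="prev" href="http://www.jobs.ca/career-resources/" />
<link rel="next" href="http://www.jobs.ca/career-resources/page/3/" />

The point is that page 2's canonical references page 2 itself rather than page 1, while rel-prev and rel-next describe its position in the series.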
Intermediate & Advanced SEO | | aactive0 -
Meta NoIndex tag and Robots Disallow
Hi all, I hope you can spend some time to answer the first of a few questions from me 🙂 We are running a Magento site, and its layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process of tackling the issue, I disallowed in robots.txt anything in the query string that was not a p parameter (allowed for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html search and a few duplicates came up along with the original, with
Intermediate & Advanced SEO | | bjs2010
"There is no information about this page because it is blocked by robots.txt" So I had added in Meta Noindex, follow on all these duplicates also but I guess it wasnt being read because of Robots.txt. So coming to my question. Did robots.txt block access to these pages? If so, were these already in the index and after disallowing it with robots, Googlebot could not read Meta No index? Does Meta Noindex Follow on pages actually help Googlebot decide to remove these pages from index? I thought Robots would stop and prevent indexation? But I've read this:
"Noindex is a funny thing, it actually doesn’t mean “You can’t index this”, it means “You can’t show this in search results”. Robots.txt disallow means “You can’t index this” but it doesn’t mean “You can’t show it in the search results”. I'm a bit confused about how to use these in both preventing duplicate content in the first place and then helping to address dupe content once it's already in the index. Thanks! B0 -
Multiple stores & domains vs. One unified store (SEO pros / cons for E-Commerce)
Our company runs a number of individual online shops, each specialised in particular products but all in the same genre of goods overall, with a specific and relevant domain name for each shop. At the moment the sites are separate and not interlinked, i.e. completely separate brands. An analogy could be something like clothing accessories (we are not in the clothing business): scarves.com and silkties.com (our field is more niche than this). We are about to launch a related site (e.g. handbags.com), in the same field again but without precisely overlapping products. We will produce this site on a newer, more flexible e-commerce platform, so now is a good time to consider whether we want to place all our sites together on one e-commerce system on the backend. Essentially, we need to know what the pros and cons would be of the various options facing us and how SEO ranking is affected by the three possibilities. Option 1: continue with separate sites, each with its own domain. Option 2: have multiple sites, each on their own domain, but on the same e-commerce system and visibly linked together for the customer (with unified checkout) – on the top of each site could be a menu bar linking to each site: [Scarves.com] – [SilkTies.com] – [Handbags.com]. The main question here is whether the multiple domains are mutually beneficial, particularly considering how close to target keywords the individual domains are. If mutually beneficial, how does it compare to option 3? Option 3: Having recently acquired a domain name (e.g. accessories.com) which would cover the whole category together, we are presented with a third option: making one site selling all of these products in different categories. Our main concern here would be losing the ability to target marketing specifically, and losing the benefit of domains containing the keywords people are more likely to be searching for (e.g. 'silk tie' rather than 'accessories'). Is it worth taking the hit of losing these specific targeted domain names for the advantage of increased combined inbound links?
Intermediate & Advanced SEO | | Colage0