Canonical issues using Screaming Frog and other tools?
-
In the Directives tab within Screaming Frog, can anyone tell me what the difference is between "canonicalised", "canonical", and "no canonical"? They're found in the filter box. I see the data but am not sure how to interpret it. Which of these would I check to find canonical issues within a website? Are there any other easy ways to identify canonical issues?
-
Hello
I spotted this thread and was just about to reply, but Dirk has answered it all perfectly. Thanks Dirk!
Under the 'Reports' menu there's also a 'Canonical Errors' report, which will show canonicals with various technical issues: those that are blocked by robots.txt, have no response, or return a 3XX redirect or a 4XX or 5XX error (essentially anything other than a 200 'OK' response). It will also show any URLs discovered only via a canonical that are not linked to internally from the site's own link structure (the 'unlinked' column will be 'true' for these).
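For anyone who wants to reproduce that check outside of Screaming Frog, here's a rough Python sketch (my own illustration, not part of the report itself; it assumes the third-party requests package and uses hypothetical example URLs) that fetches each canonical target and flags anything other than a 200 'OK':

```python
# Rough sketch: flag canonical targets that don't return a 200 "OK",
# mirroring the checks in the Canonical Errors report described above.
# Assumes the third-party "requests" package (pip install requests).
import requests

# Hypothetical canonical targets pulled from a crawl, for illustration only.
canonical_targets = [
    "https://example.com/page-a",
    "https://example.com/old-page",
]

for url in canonical_targets:
    try:
        # Don't follow redirects: a 3XX response on a canonical target
        # is itself one of the errors the report surfaces.
        response = requests.get(url, allow_redirects=False, timeout=10)
        if response.status_code != 200:
            print(f"{url} -> {response.status_code} (canonical error)")
    except requests.RequestException as exc:
        print(f"{url} -> no response ({exc})")
```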
Hope that helps anyway.
Cheers!
Dan
-
Hi,
The difference between them:
-
canonical: the URL has a canonical URL, which can be self-referencing (canonical URL = URL) or not.
-
canonicalised: the URL has a canonical URL that is not self-referencing (canonical URL ≠ URL).
-
no canonical: quite obvious - the URL has no canonical at all.
Potential issues could be: URLs that you would like to have a canonical but don't, or URLs that are canonicalised but don't have the right canonical URL. You can use the lists (both canonicalised and no canonical) from Screaming Frog to check them, but it's up to you to judge whether each canonical is OK or not (no automated tool can guess what your intentions are).
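To make the three buckets concrete, here is a minimal Python sketch (my own illustration, not a Screaming Frog feature; it assumes the third-party requests and beautifulsoup4 packages and a hypothetical example URL) that classifies a single URL the same way the Directives filters do:

```python
# Minimal sketch: classify a URL as "canonical", "canonicalised" or
# "no canonical", mirroring the three Directives filters described above.
# Assumes the third-party "requests" and "beautifulsoup4" packages.
import requests
from bs4 import BeautifulSoup

def classify_canonical(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("link", rel="canonical")
    if tag is None or not tag.get("href"):
        return "no canonical"   # the URL has no canonical at all
    # Trailing-slash normalisation is a simplification for this sketch.
    if tag["href"].rstrip("/") == url.rstrip("/"):
        return "canonical"      # self-referencing (canonical URL = URL)
    return "canonicalised"      # points elsewhere (canonical URL ≠ URL)

# Hypothetical example URL for illustration:
print(classify_canonical("https://example.com/page"))
```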
Typical mistakes with canonicals: all URLs have the same canonical URL (such as the homepage), or have canonical URLs that do not exist. You could also check this with Screaming Frog using the 'respect canonicals' setting - this way only the canonical URLs will be shown in the listing. Also keep in mind that canonical URLs are merely a friendly request to Google to index the canonical rather than the normal URL - Google is not obliged to comply (see https://support.google.com/webmasters/answer/139066?hl=en, which says: "the search results will be more likely to show users that URL structure. (Note: We attempt to respect this, but cannot guarantee this in all cases.)").
Dirk
-
Related Questions
-
Does using a canonical with ?utm_source=gmb cause any issues?
All of our URLs in Google My Business are tagged with ?utm_source=gmb. This way, when people click on one within a Google Maps listing, knowledge graph, etc., we know the visit came from there. I'm assuming that using a canonical on all ?utm_source pages (we have others, including some in the index) won't cause any problems with this, correct? Since they're not technically traditional organic SERPs? Dumb question, I know, but better safe than sorry. Thanks.
Technical SEO | Alces
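For what it's worth, the intended behaviour is easy to illustrate: the canonical on each tagged page should point at the URL with the tracking parameters removed. A small Python sketch (purely illustrative, standard library only, with a hypothetical URL) that derives that clean target:

```python
# Illustrative sketch: derive the canonical target for a UTM-tagged URL
# by stripping utm_* tracking parameters (standard library only).
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def canonical_target(url: str) -> str:
    parts = urlsplit(url)
    # Keep every query parameter except the utm_* tracking ones.
    kept = [(k, v) for k, v in parse_qsl(parts.query) if not k.startswith("utm_")]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

# The canonical on the tagged page should point at the clean URL:
print(canonical_target("https://example.com/store?utm_source=gmb"))
# -> https://example.com/store
```
-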
Indexing Issue of Dynamic Pages
Hi All, I have a query for which I am struggling to find the answer. I am unable to retrieve a URL using a "site:" query on the Google SERP. However, when I enter the direct URL or use an "info:" query, a snippet appears. I am not able to understand why Google is not showing the URL for a "site:" query. Is the page indexed or not? Or is it soon going to be deindexed? Secondly, I would like to mention that this is a dynamic URL. The index file which we use to generate this URL is not available to Googlebot. For instance, there are two different URLs: http://www.abc.com/browse/ is the parent page, and http://www.abc.com/browse/?q=123 is the URL generated at run time using the browse index file. Google cannot crawl the index file of the browse page, as it cannot run independently until a value is passed in the parameter, so it is not indexed by Google. Earlier the dynamic URLs were indexed and showed up in Google for a "site:" query, but now they do not. Can anyone help me understand what is happening here? Please advise. Thanks
Technical SEO | SameerBhatia
-
Google serp pagination issue
We are a local real estate company and have landing pages for different communities and cities around our area that display the most recent listings. For example, www.mysite.com/wa/tumwater is our landing page for Tumwater homes for sale. Google has indexed most of our landing pages, but for whatever reason it is displaying page 2, 3, 4, etc. instead of page 1. Our Roy, WA landing page is another example: www.mysite.com/wa/roy had recently been showing up on page 1 of Google for "Roy WA homes for sale", but now we are much further down, and www.mysite.com/wa/roy?start=80 (page 5) is the only page in the SERPs. (Coincidentally, we no longer have five pages' worth of listings for this city, so this link now redirects to www.mysite.com/wa/roy.) We haven't made any major recent changes to the site. Any help would be much appreciated! You can see what my site is in the attached image; I just don't want this post to show up when someone googles the actual name of the business 🙂
Technical SEO | summithomes
-
Can you use Screaming Frog to find all instances of relative or absolute linking?
My client wants to pull every instance of an absolute URL on their site so that they can update them for an upcoming migration to HTTPS (the majority of the site uses relative linking). Is there a way to use the extraction tool in Screaming Frog to crawl one page at a time and extract every occurrence of href="http://"? I have gone back and forth between using an XPath extractor and a regex and have had no luck with either. Ex. XPath: //*[starts-with(@href, "http://")][1] Ex. Regex: href=\"//
Technical SEO | Merkle-Impaqt
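For a quick sanity check outside Screaming Frog, a regex along these lines does work in Python. A small illustrative sketch (hypothetical URL; an HTML parser is more robust than regex for production use) that pulls every absolute http:// href from a page's source:

```python
# Illustrative sketch: find every absolute http:// href in a page's HTML,
# the same pattern the custom extraction above is trying to match.
import re
import urllib.request

# Hypothetical URL for illustration.
html = urllib.request.urlopen("https://example.com/").read().decode("utf-8", "replace")

# Match href="http://..." (single or double quotes) and capture the URL.
absolute_links = re.findall(r'href=["\'](http://[^"\']+)["\']', html)

for link in absolute_links:
    print(link)
```
-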
Sitelinks Issue - Different Languages
Hey folks, We run different ccTLDs for revolveclothing.com (revolveclothing.es, revolveclothing.com.br, etc.) and they all have their own WMT/Google Search Console profiles with their own hreflang tags, etc. The problem is this: https://www.google.fr/#q=revolve+clothing When you look at the sitelinks, you'll see that one of them (the sales page) happens to be in Portuguese on the French site. Can anyone investigate and see why?
Technical SEO | ggpaul562
-
Rel=canonical Weebly
My problem is with my website, as it says I have duplicate page titles and content because of /index.html. The duplicate content is due to the fact that my homepage is www.seacandytackle.com but it is also www.seacandytackle.com/index.html, because I use Weebly. How can I use the tag to fix this? It won't let me do a 301 redirect because it is a home page. Also, it says that I have duplicate page content between http://www.seacandytackle.com/index.html and http://www.seacandytackle.comhttp://www.seacandytackle.com, but I don't recall having any page that looks like http://www.seacandytackle.comhttp://www.seacandytackle.com from Weebly. How can I fix this issue as well? Thank you for any help. Step-by-step implementation would be particularly helpful in using the rel= tags to fix these duplicate issues.
Technical SEO | SeaCandyTackle
-
Exclude status codes in Screaming Frog
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is that the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration. The site has approximately 190,000 pages according to the results of a Google site: command. The site architecture is almost completely flat. Limiting the search by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root. There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters. There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs. Does anyone know how to exclude files using status codes? I know that would help. If it helps, the site is kodylighting.com. Thanks in advance for any guidance you can provide.
Technical SEO | DonnaDuncan
-
Trailing Slashes in URL: Use a Canonical URL or a 301 Redirect?
I was thinking of using 301 redirects from trailing slashes to no trailing slashes for my URLs. E.g. www.url.com/page1/ 301 redirects to www.url.com/page1. I already have a redirect from non-www to www. Just wondering, in my case, would it be best to continue using .htaccess for the trailing slash redirect, or just go with canonical URLs?
Technical SEO | upick-162391
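The question assumes an Apache .htaccess setup; purely as an illustration of the redirect logic (not a recommendation over .htaccess), the same 301 behaviour in a hypothetical Python/Flask app would look like this:

```python
# Hypothetical sketch: 301-redirect trailing-slash URLs to their no-slash
# equivalents in a Python/Flask app. The thread itself is about .htaccess;
# this just illustrates the redirect logic. Assumes the "flask" package.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def strip_trailing_slash():
    path = request.path
    if path != "/" and path.endswith("/"):
        # 301 = permanent, so search engines consolidate signals on one URL.
        return redirect(path.rstrip("/"), code=301)

@app.route("/page1")
def page1():
    return "Hello from /page1"

if __name__ == "__main__":
    app.run()
```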