Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
How to force Wordpress to remove trailing slashes?
-
I've searched around quite a bit for a solution here, but I can't find anything. I apologize if this is too technical for the forum.
I have a Wordpress site hosted on Nginx by WP Engine. Currently it resolves requests to URLs either with or without a trailing slash.
So, both of these URLs are functional:
<code>mysite.com/single-post</code>
and
<code>mysite.com/single-post/</code>
I would like to remove the trailing slash from all posts, forcing <code>mysite.com/single-post/</code> to redirect to <code>mysite.com/single-post</code>. I created a redirect rule on the server:
<code>^/(.*)/$ -> /$1</code>
and this worked well for end users, but it rendered the admin panel inaccessible. Somewhere, WordPress is adding a trailing slash back onto the URL <code>mysite.com/wp-admin</code>, resulting in a redirect loop. I can't see anything obvious in .htaccess. Where is the rule that adds the trailing slash back to 'wp-admin' established?
Thanks very much
-
WP Engine runs on a base server using Apache with an Nginx proxy, so you can use WP Engine's built-in 301 redirect system, or you can simply add a redirect to the .htaccess file. If you would like to use a tool to generate the rules, I recommend this one; another alternative is to ask WP Engine to make the change for you: https://www.aleydasolis.com/htaccess-redirects-generator/non-slash-vs-slash-urls/

**Apache**

**Slash to Non-Slash URLs** (https://example.com/page/ to https://example.com/page). Just copy this into your .htaccess:

```
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
</IfModule>
```

**Non-Slash to Slash URLs** (https://example.com/page to https://example.com/page/):

```
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [L,R=301]
</IfModule>
```

**Using Nginx to remove the trailing slash** (https://example.com/page/ to https://example.com/page)

As you can see, there is only one tiny difference between those two URLs, and it's the trailing slash at the end. To avoid duplicate content, you can remove the trailing slash in Nginx. Place this inside the server {} block of your virtual host file:

```
rewrite ^/(.*)/$ /$1 permanent;
```

Full example:

```
server {
    listen 80;
    server_name www.mysite.com;
    rewrite ^/(.*)/$ /$1 permanent;
}
```

All done: Nginx will now remove all those trailing slashes.

**Using Nginx to add the trailing slash** (https://example.com/page to https://example.com/page/)

Place this inside the server {} block of your virtual host file:

```
rewrite ^(.*[^/])$ $1/ permanent;
```

Full example:

```
server {
    listen 80;
    server_name www.mysite.com;
    rewrite ^(.*[^/])$ $1/ permanent;
}
```

From now on, Nginx should add the trailing slash automatically to every URL.

* https://www.scalescale.com/tips/nginx/add-trailing-slash-nginx/
* https://www.scalescale.com/tips/nginx/nginx-remove-trailing-slash/

I hope this helps,
Tom
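PS: for the wp-admin redirect loop described in the question, one option is to exclude the admin path from the rewrite. This is only a sketch (the lookahead regex is my own assumption, not something from WP Engine), and since WP Engine doesn't normally expose the Nginx config directly, their support would need to apply something equivalent:

```
server {
    listen 80;
    server_name www.mysite.com;

    # Strip trailing slashes everywhere except /wp-admin, which WordPress
    # redirects back to the slashed form and would otherwise loop
    rewrite ^/(?!wp-admin)(.*)/$ /$1 permanent;
}
```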
-
Hi John, sorry, I've been on leave so I haven't checked back on the forums.
Glad it looks like it's working for you. I don't think the comments do anything except mark where WordPress has begun writing to the .htaccess file (I don't run WordPress so I can't be sure). Normally comments do nothing but flag something useful to the reader.
I can try to break down the code a little for you, but my .htaccess knowledge isn't fantastic, so this is by no means complete.
First line: RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond = this says only apply the rule below if...
%{REQUEST_FILENAME} !-d = ...the requested path is NOT a directory.
Second line: RewriteRule ^(.*)/$ /$1 [L,R=301]
I believe this bit captures the URL up to the final / and then rewrites (301-redirects) to that captured version. The combination of the two must be why it doesn't affect your WordPress admin directory, since /wp-admin is a real directory on the server. I know this code can break if your install is inside a subdirectory (as discussed in the Stack Overflow link), but they have provided a solution for that in the same topic. I would test it on your live website to make completely sure it works, as it may behave slightly differently from your local install. Have a backup ready just in case it doesn't.
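Putting the two lines together, here's a commented sketch of how that block usually sits in .htaccess; placing it above the # BEGIN WordPress section is my assumption rather than something I've verified on WP Engine:

```
<IfModule mod_rewrite.c>
RewriteEngine On
# Skip anything that maps to a real directory on disk (e.g. /wp-admin),
# so the admin area keeps the trailing slash WordPress expects
RewriteCond %{REQUEST_FILENAME} !-d
# Everything else ending in a slash gets a 301 to the slashless URL
RewriteRule ^(.*)/$ /$1 [L,R=301]
</IfModule>
```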
Make sure you check every url including
Homepage
Pages
Posts
Category Pages
Sub Category Pages
Post Pages
Any images or files
to make sure it is working as expected on all of them.
-
Thanks so much, ATP! It looks like writing the condition into .htaccess does the trick, at least for my local install. Is this because the commands located within the BEGIN WordPress / END WordPress comments only apply to URLs outside of the WP admin area?
Thanks again, ATP—that was a very thorough and helpful response!
-
Hi John
Did ATP's solution help you out? Let us know if we can look into this further!
-
Hi John,
I asked something similar myself, but I'm on the Magento platform. That shouldn't matter, as the solution wasn't platform specific; it just involved editing the .htaccess file. If you're up for editing your .htaccess file, it could be of some use. The topic URL is below, and it contains multiple solutions for removing the trailing slash, along with the debugging process we went through along the way (courtesy of Andy and Dirk). Hopefully it's of some use to you.
https://moz.com/community/q/cms-pages-multiple-urls
SUMMARY:
If you know how to edit your .htaccess and you're ready to dive straight in, this code should do it:
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
If you want the explanations and walk-through, please see the original topic, as editing your .htaccess badly can cause all sorts of errors.
Edit: I realised I was probably a tiny bit lazy and should probably have included this link, which is the original Stack Overflow post I was sent, with instructions on how to edit your .htaccess file.
http://stackoverflow.com/questions/21417263/htaccess-add-remove-trailing-slash-from-url
Dirk's answer later in that post offers guidance on applying it to certain parameters, which should prove helpful if you're still having loop problems with the admin page.
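If you do still see a loop on the admin page, here's a sketch of a variant with an explicit exclusion for /wp-admin (untested on WP Engine's stack, so treat it as a starting point rather than a drop-in fix):

```
<IfModule mod_rewrite.c>
RewriteEngine On
# Never rewrite the admin area, which WordPress canonicalises with a slash
RewriteCond %{REQUEST_URI} !^/wp-admin
# Skip real directories as well
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
</IfModule>
```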
-
Thanks for the replies, Donna, Martijn. I am running Yoast and considered adding the trailing slash, but:
-Most of the inbound links we have are to URLs with no slash
-The slash style seems a little dated in general; few sites use trailing slashes these days.
I'd really love to just figure out how to solve the issue a little closer to the root.
-
Hi John,
It seems obvious, but why not go for adding a trailing slash to every URL instead of removing it? That would at least solve your issue.
-
Are you using the Yoast SEO plugin? There is a setting under Advanced > Permalinks that forces a trailing slash onto URLs. I'd try looking at that first.