
How to make search bots not crawl a deleted page?

Currently we are using Kentico CMS for our web site, and we used to have a page called pages/page1.aspx. We removed that page, but every day the Google, Bing and Yahoo search robots try to read it. Because the page doesn't exist, the CMS throws the following error (in the log):

Event URL:  /pages/page1.aspx
URL referrer:   
User agent:     Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)

Message: The file '/pages/page1.aspx' does not exist.
Stack Trace:
at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath)
// and the rest of the stacktrace

When we get too many of these errors the whole site crashes (we have to clear the .NET temp files and restart the app pool). Basically I can go to a page that doesn't exist, hit refresh many times and take the site down. Extremely bad. However, first things first: how can I get the bots to stop trying to access this page?

Thanks in advance.


If it's just a single page, or a few pages that are causing this, modify robots.txt to tell the legitimate search engines not to check it.
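
For example, a minimal robots.txt at the site root could look like the sketch below (assuming the only problem URL is the one from the log):

User-agent: *
# Ask well-behaved crawlers to skip the removed page
Disallow: /pages/page1.aspx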

I'd also check what HTTP response you're sending when the page is not found. You might be sending something that causes the spider to think it should keep checking. Instead of a 404, maybe you should try permanently redirecting (301) to your home page.
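
For instance, on IIS 7 or later a permanent redirect for just that URL can be configured in web.config; this is a minimal sketch, assuming the path from the log and that the home page is an acceptable target:

<location path="pages/page1.aspx">
  <system.webServer>
    <!-- 301 Moved Permanently: well-behaved crawlers should drop the old URL -->
    <httpRedirect enabled="true" destination="/" httpResponseStatus="Permanent" />
  </system.webServer>
</location>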

Finally, WTF? I'd talk to the Kentico folks about this bug.


I think that you have a configuration error. While a robots.txt file would hopefully correct this issue, bots can choose to ignore that file.

A better solution would be to set up your error pages correctly. What happens when you go to a page that doesn't exist? It sounds like your system is showing a yellow screen, which is an unhandled exception bubbling all the way up to the user. I would check your error page configuration so that users (and robots) get a proper 404 error page. I'm guessing that when Yahoo and the others see that 404 response, they will stop trying to index the page.
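
A minimal sketch of the ASP.NET side in web.config (NotFound.aspx is an illustrative name, not an existing Kentico page):

<system.web>
  <!-- ResponseRewrite serves the error page in place instead of issuing a 302 redirect -->
  <customErrors mode="On" redirectMode="ResponseRewrite">
    <error statusCode="404" redirect="~/NotFound.aspx" />
  </customErrors>
</system.web>

The error page itself should also set Response.StatusCode = 404 in its code-behind; crawlers generally keep retrying URLs that come back as 200 or 5xx and only drop ones that consistently return 404 (or 410 Gone).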


Have you tried using a robots.txt file?
