I've been getting a lot of errors sent to my e-mail due to web crawlers hitting parts of my site without any request data, and I was wondering what the best way is to handle web crawlers in Django? Should I issue a redirect when I come across an empty QueryDict?
You could consider implementing a robots.txt to disallow crawlers from accessing areas of your site that are intended for humans only, such as forms.
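For example, a minimal robots.txt (the paths here are placeholders; substitute the URLs of your own form pages) served from the site root would look like:

```
User-agent: *
Disallow: /accounts/
Disallow: /search/
```

Well-behaved crawlers fetch this file first and skip the disallowed paths, so your form views never see them.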
I think your views should work with any request; at the least, return a page with a message like "Incorrect request". A 500 is ugly. Are you sure that regular users never open the page without any request data? The get() method of QueryDict can help with default values.
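As a sketch of that idea (the view and parameter names are made up for illustration): indexing into request.GET raises MultiValueDictKeyError, which surfaces as a 500, while get() with a default lets you fail gracefully:

```python
from django.http import HttpResponseBadRequest
from django.shortcuts import render

def search(request):
    # request.GET["q"] raises MultiValueDictKeyError (-> 500) if "q" is
    # missing; .get() returns a default instead, so crawler hits that
    # carry no query string degrade gracefully.
    query = request.GET.get("q", "")
    if not query:
        return HttpResponseBadRequest("Incorrect request")
    return render(request, "search_results.html", {"query": query})
```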
Well-behaved crawlers should only make GET requests; forms that change state should use anything but GET.
Ruby on Rails uses the CRUD mapping:
Create -> POST,
Read -> GET,
Update -> PUT,
Delete -> DELETE
Only requests that just read information, without side effects, should be GET requests; a sketch of how to enforce that in Django follows.
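One way to enforce this (a sketch; the view name is hypothetical) is Django's require_POST decorator, which answers any GET from a crawler with 405 Method Not Allowed instead of letting the view blow up with a 500:

```python
from django.http import HttpResponse
from django.views.decorators.http import require_POST

@require_POST  # GETs (e.g. from crawlers) receive 405 Method Not Allowed
def delete_item(request):
    item_id = request.POST.get("item_id")
    if item_id is None:
        return HttpResponse("Incorrect request", status=400)
    # ... perform the deletion here ...
    return HttpResponse("Deleted")
```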