
how to check if my website is being accessed using a crawler?

How to check if a certain page is being accessed from a crawler or a script that fires continuous requests? I need to make sure that the site is only being accessed from a web browser. Thanks.


This question is a great place to start: Detecting 'stealth' web-crawlers

Original post:

It would take a bit of engineering to build a solution for this.

I can think of three things to look for right off the bat:

One, the user agent. If the spider is Google or Bing or any other legitimate crawler, it will identify itself.

Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if it claims to be IE, use JavaScript to check for an ActiveX object.

Three, take note of what it accesses and how regularly. If the content takes the average human X seconds to view, you can use that as a starting point for deciding whether it is humanly possible to consume the data that fast. This is tricky: you will most likely have to rely on cookies, since an IP can be shared by multiple users. A sketch combining points one and three follows.
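As a rough sketch of points one and three (the class name, bot list, and thresholds below are illustrative assumptions, not a vetted implementation), in an ASP.NET application you could check the User-Agent against known spiders and count requests per session cookie inside a short window:

using System;
using System.Web;
using System.Web.SessionState;

public static class BotDetector
{
    // Point one: well-behaved spiders identify themselves in the
    // User-Agent header. This list is illustrative, not exhaustive.
    private static readonly string[] KnownBots =
        { "googlebot", "bingbot", "slurp", "baiduspider" };

    public static bool IsKnownBot(HttpRequest request)
    {
        string ua = (request.UserAgent ?? string.Empty).ToLowerInvariant();
        foreach (string bot in KnownBots)
            if (ua.Contains(bot))
                return true;
        return false;
    }

    // Point three: count requests per session (cookie-backed) inside a
    // sliding window. The thresholds are arbitrary example values.
    public static bool IsTooFast(HttpSessionState session,
                                 int maxRequests = 10,
                                 int windowSeconds = 5)
    {
        DateTime now = DateTime.UtcNow;
        DateTime? windowStart = session["rateWindowStart"] as DateTime?;
        int count = (session["rateCount"] as int?) ?? 0;

        if (windowStart == null ||
            (now - windowStart.Value).TotalSeconds > windowSeconds)
        {
            // Start a new window for this visitor.
            session["rateWindowStart"] = now;
            session["rateCount"] = 1;
            return false;
        }

        session["rateCount"] = count + 1;
        return count + 1 > maxRequests;
    }
}

// Usage, e.g. in Page_Load:
// if (!BotDetector.IsKnownBot(Request) && BotDetector.IsTooFast(Session))
// {
//     Response.StatusCode = 429; // Too Many Requests
//     Response.End();
// }

Note that anything cookie- or session-based only works against clients that keep cookies; a scraper that discards them will look like a stream of new visitors, which is exactly why this is tricky.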


You can use the robots.txt file to block access to crawlers, or you can use JavaScript to detect the browser agent and switch based on that. If I understood correctly, the first option is more appropriate, so:

User-agent: *
Disallow: /

Save that as robots.txt at the site root. Note that robots.txt is advisory: well-behaved crawlers will stay away, but a malicious script is free to ignore it.


I had a similar issue in my web application: I created some bulky data in the database for each user that browsed the site, and crawlers were causing loads of useless data to be created. However, I didn't want to deny access to crawlers, because I wanted my site indexed and found; I just wanted to avoid creating useless data and to reduce the time taken to crawl.

I solved the problem in the following ways:

  • First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (since 2.0), which indicates whether the browser is a search engine web crawler. You can access it from anywhere in the code:

    • ASP.NET C# code behind:

      bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
    • ASP.NET HTML:

      Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
    • ASP.NET Javascript:

      <script type="text/javascript">  
      var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>  
      </script>

    The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but it may be useful in your case.

  • After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking on a button. Some crawlers do process JavaScript, and they would obviously trigger the onclick event of a button element, but not of a non-interactive element such as a div. The following is the HTML / JavaScript code I used in my web application www.so-much-to-do.com to implement this feature; a possible server-side counterpart is sketched after it:

    <div
      class="all rndCorner"
      style="cursor:pointer;border-width:3px;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
      onclick="$TodoApp.$AddSampleTree()">
      Please click here to create your own set of sample tasks to do
    </div>

    This approach has been working impeccably so far, although crawlers could be made even cleverer, maybe after reading this article :D
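As a possible server-side counterpart to that trick (a sketch only: the handler name, the isHuman session flag, and the idea of calling it from the div's onclick are my assumptions, not part of the original app), an ASP.NET generic handler could record that the click happened, so later requests can treat the session as human:

using System.Web;
using System.Web.SessionState;

// Hypothetical endpoint invoked (e.g. via XMLHttpRequest) from the
// div's onclick handler. Crawlers rarely fire onclick on a
// non-interactive element, so reaching this code is weak evidence
// of a real user. Wire it up as an .ashx generic handler.
public class MarkHumanHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Flag the session; other pages can check this flag before
        // doing expensive per-user work such as creating sample data.
        context.Session["isHuman"] = true;
        context.Response.ContentType = "text/plain";
        context.Response.Write("ok");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

// Elsewhere in the app:
// bool confirmedHuman =
//     (HttpContext.Current.Session["isHuman"] as bool?) ?? false;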
