How many users are sufficient to create a heavy load on a web application?

I have a web application which has been suffering from high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian.

After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance degrades drastically, up to total failure. The biggest load comes from authenticated users, whose activities cause inserts/updates/deletes on the database. I think MySQL is the bottleneck. Is it normal to slow down with this number of users?

EDIT: I forgot to mention that I did some rough profiling. I ran top and htop, and they showed that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce the load. The administrators said there were about 200 active MySQL connections at that moment.

The worst part is that we need to solve this ASAP; I can't do a deep profiling analysis or code refactoring, so I'm considering two options:

  • my tables are MyISAM; I've heard MyISAM uses table-level locking, which is very slow under concurrent writes. Is that right? Could I change the tables to InnoDB without worry? (See the conversion sketch below.)
  • what if I move MySQL to a dedicated machine with a lot of RAM?
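
For the first option, here is a minimal sketch of the conversion, assuming a mysqli connection; the credentials and database name are placeholders, not from the question. ALTER TABLE rewrites each table, so back up first and run it in a maintenance window:

<?php
// Sketch: switch every MyISAM table in this database to InnoDB.
// Connection details below are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'drupal');
$tables = $db->query("SHOW TABLE STATUS WHERE Engine = 'MyISAM'");
while ($row = $tables->fetch_assoc()) {
  // ALTER TABLE copies the whole table; expect per-table downtime.
  $db->query('ALTER TABLE `' . $row['Name'] . '` ENGINE=InnoDB');
}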


It's impossible to give a specific number of users that would cause a slowdown in an arbitrary application; it depends entirely on what the application does and how it's run. There are a number of things to look at.

  1. Profile. Run your application through a profiler (for example, Xdebug on the PHP side) with as much real-world usage as possible. The best approach is an automated test or series of tests that exercise all layers, so the profiling session is as repeatable as possible. Even if you profile the application under lower load, you can identify bottlenecks and improve them. Profile both your application code and your SQL queries.

  2. Bottlenecks. Profiling will tell you which code and/or queries take the most time, and fixing those will help a lot, but you also want to look for architectural bottlenecks you can avoid. Are users waiting on writes they don't really need to wait on? Can you use a producer/consumer queue to buffer non-critical writes, so the app responds to users faster and flushes the data to the database lazily (see the queue sketch after this list)? Are there long-running requests waiting on outside resources that would benefit from asynchronous processing?

  3. Caching. Are there requests or data that can be cached? Even where caching is not the bottleneck fix, reducing load on the server as much as possible helps. In particular, if you have a lot of database contention and can cache commonly used data in the application, you avoid some database round-trips (a caching sketch also follows this list).

  4. Memory data. Look at how your application uses the database and ask whether anything doesn't really need to live there. If so, moving that data into in-memory data structures (or even an in-memory database) can vastly improve performance. This isn't often possible, but when it is, the benefit is huge.
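
For point 2, a minimal sketch of the deferred-write idea in Drupal-5-style PHP. The activity_buffer and activity_stats tables, the aid column, and both function names are hypothetical, chosen for illustration:

<?php
// Producer, called during the request: one cheap INSERT, nothing else.
function log_activity_lazily($uid, $action) {
  db_query("INSERT INTO {activity_buffer} (uid, action, created) VALUES (%d, '%s', %d)",
           $uid, $action, time());
}

// Consumer, run from hook_cron(): aggregate the buffer, then purge it.
function flush_activity_buffer() {
  // Record a high-water mark so rows inserted mid-flush survive.
  $max = db_result(db_query("SELECT MAX(aid) FROM {activity_buffer}"));
  db_query("INSERT INTO {activity_stats} (uid, actions)
            SELECT uid, COUNT(*) FROM {activity_buffer} WHERE aid <= %d GROUP BY uid
            ON DUPLICATE KEY UPDATE actions = actions + VALUES(actions)", $max);
  db_query("DELETE FROM {activity_buffer} WHERE aid <= %d", $max);
}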
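
And for point 3, a sketch of application-side caching with APC's user cache; get_popular_nodes() stands in for any expensive, read-mostly query:

<?php
// Serve a hot query result from shared memory; hit the db only on a miss.
function get_popular_nodes_cached() {
  $data = apc_fetch('popular_nodes', $hit);
  if (!$hit) {
    $data = get_popular_nodes();            // the expensive database query
    apc_store('popular_nodes', $data, 300); // cache it for five minutes
  }
  return $data;
}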


There are two important numbers to keep in mind:

  1. Degradation point: the number of users at which the application slows down.
  2. Breaking point: the number of users at which the application crashes.

To determine these values, load-test your application while increasing the number of users: for instance, start with one simulated user and add another every minute until the application stops responding. It is important to measure memory and CPU usage so you can correlate them with the number of active users in the test.
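
A minimal sketch of such a ramp using PHP's curl_multi API; the URL and the step sizes are placeholders, and a real test should hit representative pages (ideally with authenticated sessions):

<?php
// Fire increasing batches of simultaneous GET requests and time each batch.
$url = 'http://example.com/';
foreach (array(10, 25, 50, 100, 200) as $users) {
  $mh = curl_multi_init();
  $handles = array();
  for ($i = 0; $i < $users; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
  }
  $start = microtime(true);
  do {
    curl_multi_exec($mh, $running);  // drive all transfers
    curl_multi_select($mh, 0.1);     // avoid busy-waiting
  } while ($running > 0);
  printf("%3d simultaneous users: %.2f s\n", $users, microtime(true) - $start);
  foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
  }
  curl_multi_close($mh);
}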

Your comments indicate you have found the degradation point, and you believe the database is the point of contention. Two MySQL startup parameters can help you validate that assumption:
--log-slow-queries
--log-queries-not-using-indexes

Monitor your processes with ps to determine which ones consume the most memory and CPU; that tells you which parts of your architecture are using the most resources. Another good supporting input for your analysis is vmstat output, sampled perhaps every 60 seconds (vmstat 60).

In short: start your ps and vmstat monitors, stress the application while increasing the number of users, and when it slows down, stop the monitors and graph per-process CPU and memory together with the active user count at each moment. From that you can determine whether your problem is CPU or memory; once you know which, the top ten processes for that resource are your contention candidates. Finally, review the MySQL logs to see where you can add indexes and whether some of the slow queries can be rewritten.


Quick notes on how to fix your problem

Suggested

  • Disable the statistics module (it writes to the database on every page view)

  • Remove any non-critical modules

Prerequisite

  • Install APC, or a similar opcode cache

  • Enable Drupal caching, but not the aggressive mode (see the settings.php sketch below)
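
A sketch of those settings in settings.php, assuming Drupal 5/6 semantics; the same switches are available on the admin performance page:

<?php
// settings.php overrides: normal (non-aggressive) page caching for
// anonymous users plus a minimum cache lifetime. Aggressive mode skips
// module bootstrap hooks and can break modules, hence the warning above.
$conf['cache'] = CACHE_NORMAL;   // 1 = normal page cache
$conf['cache_lifetime'] = 300;   // minimum cache lifetime, in seconds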

Option 1

  • Memcache + Memcache API: install the Memcache API module. This offloads the database by handling sessions for authenticated users (the underlying mechanism is sketched below).
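
The module does the Drupal integration, but the underlying mechanism looks like this in plain PHP, assuming the pecl memcache extension and a memcached daemon on the default port:

<?php
// Store PHP sessions in memcached instead of MySQL: with the pecl
// memcache extension, two ini settings before session_start() suffice.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://127.0.0.1:11211');
session_start();
$_SESSION['uid'] = 42;  // this write now goes to memcached, not the db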

Option 2

  • Cacherouter: install Cacherouter. It redirects cache gets/sets from the database to the backend of your choice (Memcache, or the filesystem if you run out of memory).

  • Cache for authenticated users: install Authcache. It works well together with Cacherouter, but it is only for 6.x. It also requires some redesign (there was a project called EasyAuthCache which might be handy).

  • Upgrade to 6.x: the 6.x branch has several handy modules that offload the database through caching (and they would help a lot in this case). I suspect your slow selects come from Views.
