I'm writing an ASP.NET application. When a user views page XXX.aspx, a lot of calls are made to a back-end application which provides data and business logic. I have an API from that back-end application which I use to call methods that execute business logic and get data.
My problem is as follows: if a user hits F5 repeatedly (or even just holds it down), several web requests will be executed at the same time. This in turn forces me to open several connections to the back-end application. Opening a connection is expensive, so I have implemented a caching mechanism so that a user session gets a connection and sticks to that connection until it's returned to a connection pool after about 15 seconds. If I activate the caching mechanism, the connection towards the back-end app crashes when I hit F5 fast. This happens because all of the requests are processed at the same time and therefore try to use the same connection simultaneously. This is not "legal". Fair enough :) I have added a sleep function so that a connection will only be used by a single request at any given time. But again this is slow, and if I hit F5 20 times I will have to wait about 15-20 seconds before the "last" response is shown. I don't want to process all these requests. I have tried holding down F5 at other ASP.NET applications on the web and I notice that some have this problem, others don't.
This must be a quite common problem, but I cannot find any good information about it. Are there any settings in ASP.NET that will cancel all requests prior to the latest, or do I have to implement my own system for this? Are there any best practices for this kind of situation?
To provide a more common, valid example: let's say that a page request does 5 selects against a SQL Server and a user hits F5 20 times super fast. Executing 5*20 selects is what I want to avoid. Probably 10 of the selects will be executed anyway, since it takes some time to hit F5 like this, but once there is a build-up of requests, only the last one should be executed.
Thanks in advance!
Update/additional info:
- The content "cannot" be cached. Also, by default, any caching is/should be done in the back-end system, making a distributed cache possible and also making the cache available to other applications running on top of the back-end system.
- Each session gets its own connection to the back-end.
- The page is "fast", loading in 750-1000 ms. (But I'm a lot faster at pushing F5...)
If no better solution appears, I will create a version number in the session object or similar. Before doing any call towards the back-end, the current request's version number will be compared against the version number in session. Only requests that match the latest version number will be executed. For my AJAX pages this check will be skipped so that concurrent AJAX calls are possible. It might also be that I will resort to some kind of very short-lived caching in ASP.NET, but that opens a whole new world of issues. This is, after all, a problem that will occur seldom, and it's furthermore impossible to prevent completely, due to the fact that I'm designing the ASP.NET solution to be more or less stateless. But if there are 2 or even more web servers, there will still be a benefit in the "version" system.
Thanks so far for good suggestions and interesting opinions!
Given the restrictions you're imposing, I would do something like this to serialize backend access for a given user:
Connection AcquireConnection (string user)
{
    // Locking on the user string relies on string interning handing every
    // request the same instance for a given user name.
    lock (user) {
        Cache cache = HttpRuntime.Cache;
        string key = user + "@@@ConnectionInUse";
        // Use 'while', not 'if': another request may grab the connection
        // between the Pulse and this thread reacquiring the lock.
        while (cache [key] != null) {
            Monitor.Wait (user); // TODO: consider a timeout and check the return value!
        }
        cache [key] = key;
        return OpenConnection ();
    }
}

void ReleaseConnection (string user)
{
    lock (user) {
        Cache cache = HttpRuntime.Cache;
        string key = user + "@@@ConnectionInUse";
        cache.Remove (key);
        Monitor.Pulse (user);
    }
}
Then in the code of your page:
// This will block until there's no other connection in use for 'user'
Connection cnc = AcquireConnection (user);
try {
    // Do your thing here
} finally {
    // This will wake up the next request in Monitor.Wait()
    ReleaseConnection (user);
}
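One caveat on the design: lock (user) only serializes requests because string interning hands every request the same instance for a given user name, and it can collide with unrelated code that locks the same literal. A possible refinement (my sketch, not part of the scheme above) is to keep a dedicated gate object per user:

using System.Collections.Generic;

static readonly object mapLock = new object();
static readonly Dictionary<string, object> userGates = new Dictionary<string, object>();

static object GetUserGate(string user)
{
    // Hand out one stable gate object per user; use this object for
    // lock/Wait/Pulse instead of the user string itself.
    lock (mapLock)
    {
        object gate;
        if (!userGates.TryGetValue(user, out gate))
            userGates[user] = gate = new object();
        return gate;
    }
}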
I'm not sure if this will work, but I just thought: try to use Response.IsClientConnected before your expensive operation.
I'm just wondering whether, on refreshing that page, ASP.NET knows the previous request doesn't need to be completed. That doesn't solve your immediate problem, but it poses a small workaround.
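A minimal sketch of what I mean, assuming the expensive work happens in Page_Load (CallBackend is a hypothetical stand-in for your real back-end call):

protected void Page_Load(object sender, EventArgs e)
{
    // If the browser that issued this request is already gone (e.g. because
    // F5 replaced it), skip the expensive back-end work entirely.
    if (!Response.IsClientConnected)
        return;

    var data = CallBackend(); // hypothetical expensive operation
    // ... render data ...
}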
Could you not just cache the result of the database query in your back-end application (for the same time that you already cache the connection) and, if an identical request comes in, return the cached result without re-querying your database?
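Something along these lines, assuming a GetDataFromDatabase method and using the ASP.NET cache just for illustration (in your setup the same idea would live in the back-end system):

using System;
using System.Web;
using System.Web.Caching;

public static object GetDataCached(string queryKey)
{
    Cache cache = HttpRuntime.Cache;
    object result = cache[queryKey];
    if (result == null)
    {
        result = GetDataFromDatabase(queryKey); // hypothetical query
        // Keep the result for 15 seconds, matching the connection cache.
        cache.Insert(queryKey, result, null,
                     DateTime.UtcNow.AddSeconds(15),
                     Cache.NoSlidingExpiration);
    }
    return result;
}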
I think you need to use lazily initialized values for calls to your back-end services. If any client requests information A, you block the client and call the back-end services in a separate background thread. All subsequent requests will be blocked by the same lazy value. When the information is available, the lazy value is stored in the cache and all subsequent requests will get it from there.
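A rough sketch with .NET 4's Lazy&lt;T&gt; (FetchFromBackend is a hypothetical stand-in for the real service call):

using System;
using System.Collections.Concurrent;

static readonly ConcurrentDictionary<string, Lazy<object>> pending =
    new ConcurrentDictionary<string, Lazy<object>>();

public static object GetInformation(string key)
{
    // All concurrent requests for the same key share one Lazy, so the
    // back-end is called once and the other callers block until it's done.
    var lazy = pending.GetOrAdd(key,
        k => new Lazy<object>(() => FetchFromBackend(k)));
    // Eviction/expiry of completed entries is left out of this sketch.
    return lazy.Value;
}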
This has got nothing to do with the database. You need to handle the behaviour where the user presses F5 rigorously.
One thing you can do is create a session key that keeps track of the client request. No further requests from the same client are served for the page. Once the process is completed, you release that key and start accepting new requests.
But what if he tries from another window? In that case the request needs to be served, since it is not made rigorously.
I think it is not possible to fix the problem completely, and maybe this needs to be taken care of at the IIS level and not by ASP.NET.
It sounds like you're almost there - but I would do more to discourage the use of F5.
As you are already tracking users through the use of session to limit their requests to one connection, you should try encouraging them to be a bit more patient. There are two ways you could do this:
1 Break New Requests
Add a session variable such as InProgress that you set to true as you start performing your select commands, and set to false when you've finished.
You can then check this value before starting the actions - if it's already InProgress, bail out, inform the user they've hit F5 too quickly, and ask them to please wait before trying again.
Obviously you should warn the user at the start that this is a long running process, and about the repercussions of their actions - although we all know users don't read warnings.
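As a rough sketch (RunSelects is a hypothetical stand-in for your real work):

protected void Page_Load(object sender, EventArgs e)
{
    // Bail out early if this session already has a request in flight.
    if (Session["InProgress"] is bool && (bool)Session["InProgress"])
    {
        Response.Write("You hit F5 too quickly - please wait and try again.");
        Response.End();
    }

    Session["InProgress"] = true;
    try
    {
        RunSelects(); // the expensive select commands
    }
    finally
    {
        Session["InProgress"] = false; // always clear the flag
    }
}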
2 Provide more feedback to the users
You need to look at why your users are pressing F5 so much - usually it's frustration due to a lack of response from the web page, as people are becoming used to the AJAXy way of doing things. An "in progress" animation informing them that stuff is happening, with a "please wait, don't hit F5" type message, usually works. You'd need to look into using something like an UpdatePanel or some other JS library to help there.
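For example, something like this markup (control IDs are illustrative) shows a "please wait" message while the panel updates:

<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="ContentPanel" runat="server">
    <ContentTemplate>
        <%-- the long-running content goes here --%>
    </ContentTemplate>
</asp:UpdatePanel>
<asp:UpdateProgress ID="Progress1" runat="server"
                    AssociatedUpdatePanelID="ContentPanel">
    <ProgressTemplate>
        Working on it... please don't hit F5.
    </ProgressTemplate>
</asp:UpdateProgress>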
On pages where I have a lot of dynamic content I still dump the output into cache for at least 15 seconds, so that if repeated refresh requests hit, I don't have to hit the backend every time. Even if your backend caches it for you, the 15-second buffer means that you only get dinged at most 4 times a minute for content.
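In Web Forms that kind of short-lived buffer can be as simple as a page-level directive (though with per-user content you would need VaryByCustom or similar):

<%-- Serve the rendered output from cache for 15 seconds. --%>
<%@ OutputCache Duration="15" VaryByParam="*" %>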
The next thing you do is implement a blocking scheme, which could be as simple as Session["IsLoading"] = true, and not making a new backend connection unless that value is nonexistent or false. If it is in a loading state, the page returns a simple "leave-me-alone-I'm-working" message that also causes it to auto-refresh after 2 or 3 seconds.
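A sketch of that "leave-me-alone" response, using the HTTP Refresh header for the auto-retry:

if (Session["IsLoading"] != null && (bool)Session["IsLoading"])
{
    // Ask the browser to re-request the page in 3 seconds, then stop here.
    Response.AddHeader("Refresh", "3");
    Response.Write("Still working - this page will retry automatically.");
    Response.End();
}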
I solved this issue late yesterday evening by using an approach somewhat similar to the one given by Gonzalo, but with an extra feature that ends "obsolete" requests. Now I'm able to hold down F5 for as long as I want and still get a valid response within 1 second after releasing F5 :) :) And so far I have not seen the "Please do not refresh so fast!" message on the client side.
I created this class
public class RequestTracker
{
    private Hashtable hash;
    private Hashtable hashSync;

    public RequestTracker()
    {
        hash = new Hashtable();
        hashSync = Hashtable.Synchronized(hash); // thread-safe wrapper
    }

    // Bump the request counter for the current session and return it.
    public int UpdateRequestId()
    {
        if (CwGlobal.SessionIdExists)
        {
            int newRequestId = hashSync.ContainsKey(CwGlobal.SessionId) ? (int)hashSync[CwGlobal.SessionId] + 1 : 1;
            hashSync[CwGlobal.SessionId] = newRequestId;
            return newRequestId;
        }
        return 0;
    }

    // Return the most recently issued request id for the current session.
    public int CurrentRequestId()
    {
        if (CwGlobal.SessionIdExists && hashSync.ContainsKey(CwGlobal.SessionId))
            return (int)hashSync[CwGlobal.SessionId];
        return 0;
    }
}
And I use it like this; together with the added lock() it works flawlessly:
public void ValidateRequest()
{
    if (!this.UseRequestTracker || requestTracker == null)
        return;

    // A newer request has arrived for this session; abandon this one.
    if (RequestId < requestTracker.CurrentRequestId())
    {
        httpContext.Response.Write("Please do not refresh so fast!"); // TODO: add automatic refresh to this page
        httpContext.Response.End();
    }
}

public string Foo()
{
    ValidateRequest();
    string ret;
    using (var apiConn = apiPool.getConn())
    {
        // Serialize access so concurrent requests never use the same
        // connection simultaneously.
        lock (apiConn)
        {
            ret = (string)apiConn.Call("bar");
        }
    }
    return ret;
}
There is some more code to it, like adding the RequestTracker in Global.asax and also making sure that RequestId is set when the page loads, etc.
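For completeness, a sketch of that wiring (everything beyond the RequestTracker/UpdateRequestId/RequestId names from the code above is my assumption about the setup):

// Global.asax.cs
public class Global : System.Web.HttpApplication
{
    public static RequestTracker Tracker;

    protected void Application_Start(object sender, EventArgs e)
    {
        Tracker = new RequestTracker(); // one tracker for the whole app
    }
}

// In the page (or a common base page): stamp the request with a fresh id
// before any back-end calls, so ValidateRequest has something to compare.
protected void Page_Load(object sender, EventArgs e)
{
    RequestId = Global.Tracker.UpdateRequestId();
}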