How to use a memory cache in a concurrency critical context

Consider the following two methods, written in pseudo code, that fetch a complex data structure and update it, respectively:

getData(id) {
   if(isInCache(id)) return getFromCache(id)         // already in cache?
   data = fetchComplexDataStructureFromDatabase(id)  // time consuming!
   setCache(id, data)                                // update cache
   return data
}

updateData(id, data) {
   storeDataStructureInDatabase(id, data)
   clearCache(id)
}

In the above implementation there is a concurrency problem, and we might end up with outdated data in the cache: consider two parallel executions running getData() and updateData(), respectively. If the first execution fetches from the cache exactly between the other execution's calls to storeDataStructureInDatabase() and clearCache(), it will get an outdated version of the data. How would you get around this concurrency problem?
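
For concreteness, here is a minimal Java sketch of the pseudocode above. The Data record, the Database interface, and the ConcurrentHashMap-backed cache are assumptions made purely for illustration; the sketch deliberately keeps the same race:

import java.util.concurrent.ConcurrentHashMap;

record Data(String payload) {}                        // stand-in for the complex structure

interface Database {                                  // hypothetical database access layer
   Data fetchComplexDataStructure(String id);         // time consuming!
   void storeDataStructure(String id, Data data);
}

class DataService {
   private final Database db;
   private final ConcurrentHashMap<String, Data> cache = new ConcurrentHashMap<>();

   DataService(Database db) { this.db = db; }

   Data getData(String id) {
      Data cached = cache.get(id);                    // already in cache?
      if (cached != null) return cached;
      Data data = db.fetchComplexDataStructure(id);   // may read soon-to-be-stale data
      cache.put(id, data);                            // update cache: the racy step
      return data;
   }

   void updateData(String id, Data data) {
      db.storeDataStructure(id, data);
      cache.remove(id);                               // a concurrent getData() can still
   }                                                  // repopulate the cache with old data
}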

I considered the following solution, where the cache is invalidated just before data is committed:

storeDataStructureInDatabase(id, data) {
   executeSql("UPDATE table1 SET...")
   executeSql("UPDATE table2 SET...")
   executeSql("UPDATE table3 SET...")
   clearCache(id)
   executeSql("COMMIT")
}

But then again: if one execution runs getData() between the other execution's clearCache() and COMMIT, it will miss the cache, read the old data from the database (the update is not yet committed), and put that outdated data back into the cache. Problem not solved.
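
Concretely, here is one interleaving of that variant in which the cache still ends up stale:

Execution A: updateData(id, d)           Execution B: getData(id)
--------------------------------------   -----------------------------------------
UPDATE table1, table2, table3 ...
clearCache(id)
                                         isInCache(id) -> false
                                         fetchComplexDataStructureFromDatabase(id)
                                           -> reads the OLD data (A not committed)
                                         setCache(id, oldData)
COMMIT
                                         cache now holds outdated data indefinitely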


With the caching way of thinking, you cannot completely prevent retrieving outdated data.

For example, suppose someone sends an HTTP request (if your application is a web application) that will eventually render the cache invalid. When should we consider the cache invalid? When the POST request starts? When the request reaches your server? When your controller code starts running? No. The cache becomes invalid only when the database transaction ends; not even when the transaction starts, only at the end, in the COMMIT phase. And any process working with the previous data has very little chance of noticing that the data has changed. In a web application, what about HTML pages already showing the outdated data in a browser: do you want to flush those pages too?

But let's assume your parallel processes are not just serving the web, and are real concurrency-critical parallel jobs.

One problem is that your cache is not handled by the database server, so it sits outside the transaction's COMMIT/ROLLBACK. You cannot clear the cache first and rebuild it if you roll back. So you can only clear and rebuild the cache after the transaction has committed.
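
Sketched with plain JDBC (the executeUpdate() statements mirror the elided pseudocode above, and clearCache() is the same assumed helper), the only safe ordering is commit first, then clear:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

void storeDataStructureInDatabase(Connection con, String id, Data data) throws SQLException {
   con.setAutoCommit(false);
   try (Statement st = con.createStatement()) {
      st.executeUpdate("UPDATE table1 SET ...");
      st.executeUpdate("UPDATE table2 SET ...");
      st.executeUpdate("UPDATE table3 SET ...");
      con.commit();          // new data is now visible to other transactions
      clearCache(id);        // clear only once the new data is committed
   } catch (SQLException e) {
      con.rollback();        // on rollback the cache still holds valid (old) data
      throw e;
   }
}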

And that leaves open the possibility of getting an outdated version from the cache if a get comes between the database COMMIT and the cache-clear instruction. So:

  • Is it really important that you never see an outdated version of the cache? Say a parallel process committed a change a few milliseconds before you retrieved the (now old) version; you work with it for maybe 40 ms and then build a final report on it, without noticing that the cache was flushed 15 ms before the end of the work. If your process's output cannot contain any outdated data, then you have to check data validity before emitting it (that is, recheck at the end of the work that all the data used by the process is still valid).
  • If you don't want to recheck data validity, your process should take a lock (a semaphore?) when it starts and release it only at the end of the work: you are serializing your work (a minimal per-id locking sketch follows this list). Databases can speed serialization up by running transactions at pseudo-serializable isolation levels and aborting a transaction when a concurrent change makes that pseudo-serialization hazardous. But here you are not working only with a database, so you have to do the serialization on your own side.
  • Process serialization is slow, but you could try to do the same as the database: run jobs in parallel and invalidate any running job when the data is altered (that is, have something that detects your cache clear, then kills and reruns all affected parallel jobs, which implies having something that masters all the parallel jobs).
  • Or simply accept that you can get slightly stale, just-invalidated data. If we are talking about a web application, by the time your response travels over TCP/IP to the client's browser it may already be invalid.
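
If you go the lock route from the second bullet, a minimal per-id locking sketch could look like the following. The lock-per-id striping is my own assumption (a single global lock would also work, but it serializes everything), and it only serializes within one process, not across machines:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

class PerIdSerializer {
   // one read/write lock per data id: writers on different ids don't block each other
   private final ConcurrentHashMap<String, ReadWriteLock> locks = new ConcurrentHashMap<>();

   private ReadWriteLock lockFor(String id) {
      return locks.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
   }

   <T> T read(String id, Supplier<T> body) {          // wrap getData(id) in body
      Lock l = lockFor(id).readLock();
      l.lock();
      try { return body.get(); } finally { l.unlock(); }
   }

   void write(String id, Runnable body) {             // wrap storeDataStructureInDatabase()
      Lock l = lockFor(id).writeLock();               // plus clearCache(id) in body
      l.lock();
      try { body.run(); } finally { l.unlock(); }
   }
}

Because the write lock covers both the COMMIT and the clearCache() call, no reader can observe the window between them; the price is that all reads of an id stall while that id is being written.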

Chances are that you will accept working with outdated cache data. The only really important point is that if you cannot trust your cache data for something truly critical, then you shouldn't use a cache for it; if you are manipulating accounting data, for example. The only way to get a true serialization of parallel tasks is:

  • In the writing process: do all the important reads (the ones that feed the writes) and all the writes in one transaction with a high isolation level (level 4, SERIALIZABLE) and with all the necessary row locks. That is hard to do even when working only with a database, and it is quite impossible once you add an external cache for read operations.
  • In the parallel read processes: do what you want (read from the external cache), as long as the read data will not be used for write operations. If some of the read data will later be used for a write, its validity has to be checked inside the write transaction (so in the writing process). Why not add a timestamp or version watermark to the data, so that when it comes back for a write operation you can tell whether it is still valid (a sketch of such a check follows this list)?
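
One concrete way to implement that watermark is an optimistic version check in the write transaction; the version column and SQL below are my own illustration (as is the bare method form), and clearCache() is the assumed helper from the question. The write succeeds only if the row still carries the version that was read earlier:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

boolean updateIfStillValid(Connection con, String id, long versionReadEarlier, String newPayload)
      throws SQLException {
   con.setAutoCommit(false);
   try (PreparedStatement ps = con.prepareStatement(
         "UPDATE table1 SET payload = ?, version = version + 1 WHERE id = ? AND version = ?")) {
      ps.setString(1, newPayload);
      ps.setString(2, id);
      ps.setLong(3, versionReadEarlier);
      if (ps.executeUpdate() == 1) {       // the version we read is still current
         con.commit();
         clearCache(id);                   // as before: clear only after COMMIT
         return true;
      }
      con.rollback();                      // someone wrote first: re-read fresh data and retry
      return false;
   }
}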