
Message passing vs locking

开发者 https://www.devze.com 2023-03-29 11:07 Source: web

What exactly is the difference between message passing concurrency schemes and lock-based concurrency schemes, in terms of performance? A thread that is waiting on a lock blocks, so other threads can run. As a result, I don't see how message-passing can be faster than lock-based concurrency.

Edit: Specifically, I'm discussing a message-passing approach like in Erlang, compared to a shared-data approach using locks (or atomic operations).


As some others have suggested ("apples and oranges"), I see these two techniques as orthogonal. The underlying assumption here seems to be that one will choose one or the other: we'll either use locking and shared resources or we'll use message passing, and that one renders the other unnecessary, or perhaps the other is even unavailable.

Much like, say, a metacircular evaluator, it's not obvious which are the real primitives here. For instance, in order to implement message passing, you're probably going to need atomic CAS and particular memory visibility semantics, or maybe some locking and shared state. One can implement atomic operations in terms of locks, or one can implement locks in terms of atomic operations (as Java does in its java.util.concurrent.locks types).
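To make the layering concrete, here is a minimal sketch (in Python, with hypothetical names) of message passing built out of the "lower" primitives the paragraph mentions: a lock, a condition variable, and shared mutable state. The `Channel` class is illustrative, not any library's actual API.

```python
import threading
from collections import deque

class Channel:
    """A minimal unbounded channel: message passing implemented on top
    of a lock, a condition variable, and shared mutable state (a deque)."""

    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()
        self._ready = threading.Condition(self._lock)

    def send(self, msg):
        # Mutate the shared deque under the lock, then wake a receiver.
        with self._lock:
            self._items.append(msg)
            self._ready.notify()

    def recv(self):
        # Block until a message is available, then take it.
        with self._ready:
            while not self._items:
                self._ready.wait()
            return self._items.popleft()
```

The point is not that one must write channels this way, only that the "higher" abstraction is routinely built from the "lower" ones.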

Likewise, though admittedly a stretch, one could implement locking with message passing. Asking which one performs better doesn't make much sense in general, because that's really more a question about which are built in terms of which. Most likely, the one that's at the lower level can be driven better by a capable programmer than the one built on top—as has been the case with manual transmission cars until recently (quite a debate there too).
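The reverse layering — the "stretch" of building a lock from message passing — can also be sketched. Below, a hypothetical server thread owns the mutual-exclusion right and grants it to one requester at a time purely by exchanging messages on queues; the names `lock_server`, `acquire`, and `release` are illustrative.

```python
import queue
import threading

def lock_server(requests):
    """Grant mutual exclusion one requester at a time, using only messages.
    Each request is a reply queue; sending on it grants the 'lock', and
    the holder releases by sending anything back on the handle it got."""
    while True:
        reply_q = requests.get()
        if reply_q is None:        # shutdown message
            return
        done = queue.Queue()
        reply_q.put(done)          # grant: the requester now holds the lock
        done.get()                 # wait for the release message

def acquire(requests):
    reply = queue.Queue()
    requests.put(reply)            # send a request message
    return reply.get()             # receive the release handle (blocks)

def release(done):
    done.put(None)                 # send the release message
```

Only one thread at a time holds the handle, so the critical section is protected without any explicit lock object.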

Usually the message-passing approach is lauded not for better performance, but rather for safety and convenience, and it's usually sold by denying the programmer control of locking and shared resources. As a result, it bets against programmer capability; if the programmer can't acquire a lock, he can't do it poorly and slow the program down. Much like a debate concerning manual memory management and garbage collection, some will claim to be "good drivers," making the most of manual control; others—especially those implementing and promoting use of a garbage collector—will claim that in the aggregate, the collector can do a better job than "not-so-good drivers" can with manual management.

There's no absolute answer. The difference here will lie with the skill level of the programmers, not with the tools they may wield.


IMHO, message passing is not exactly a concurrency scheme. It is basically a form of inter-process communication (IPC), an alternative to shared objects. Erlang simply favors message passing over shared objects.

Cons of shared objects (pros of message passing):

  • The state of mutable/shared objects is harder to reason about in a context where multiple threads run concurrently.
  • Synchronizing on a shared object leads to algorithms that are inherently neither wait-free nor lock-free.
  • In a multiprocessor system, a shared object can be duplicated across processor caches. Even with compare-and-swap-based algorithms that don't require locks, a lot of processor cycles may be spent sending cache-coherence messages between the processors.
  • A system built on message-passing semantics is inherently more scalable. Since messages are sent asynchronously, the sender is not required to block until the receiver acts on the message.

Pros of shared objects (cons of message passing):

  • Some algorithms tend to be much simpler.
  • A message-passing system that requires resources to be locked will eventually degenerate into a shared-object system. This is sometimes apparent in Erlang when programmers start using ets tables etc. to store shared state.
  • If algorithms are wait-free, you will see improved performance and a reduced memory footprint, since there is much less object allocation in the form of new messages.
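The two styles contrasted above can be put side by side in a small Python sketch (the class and actor names are hypothetical): a counter protected by a lock, versus a counter "actor" that owns its state and is driven only by messages from a mailbox.

```python
import queue
import threading

class LockedCounter:
    """Shared-object style: state guarded by a lock."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

def counter_actor(mailbox):
    """Message-passing style: the actor alone owns `value` and
    processes 'inc' / 'get' / 'stop' messages from its mailbox."""
    value = 0
    while True:
        msg, reply = mailbox.get()
        if msg == "inc":
            value += 1
        elif msg == "get":
            reply.put(value)       # send the current value back
        elif msg == "stop":
            return
```

Both arrive at the same answer; the difference is where the state lives and who is allowed to touch it.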


Using message passing when all you wish to do is locking is wrong. In those cases, use locking. However, message passing gives you much more than just locking - as its name suggests, it allows you to pass messages, i.e. data, between threads or processes.


Message passing (with immutable messages) is easier to get right. With locking and shared mutable state it's very hard to avoid concurrency bugs.

As for performance, it's best that you measure it yourself. Every system is different: what are the workload characteristics, are operations dependent on the results of other operations or are they completely or mostly independent (which would allow massive parallelism), is latency or throughput more important, how many machines are there, etc. Locking might be faster, or message passing might, or something completely different. If an approach like LMAX's fits the problem at hand, it might outperform both. (I would categorize the LMAX architecture as message passing, though it's very different from actor-based message passing.)


Message passing doesn't use shared memory, which means it doesn't need locks: each thread (or process) can only load and store its own memory, and the way they communicate with each other is by sending and receiving messages.
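This no-shared-memory model can be sketched with Python's `multiprocessing` module (the `squarer` worker and `run` driver are hypothetical names): parent and child are separate processes, each touching only its own memory, and all communication goes through send and receive on queues.

```python
import multiprocessing as mp

def squarer(inbox, outbox):
    """Worker process: shares no memory with the parent. It only
    receives numbers, computes, and sends the results back."""
    for n in iter(inbox.get, None):   # None is the stop message
        outbox.put(n * n)

def run():
    inbox, outbox = mp.Queue(), mp.Queue()
    worker = mp.Process(target=squarer, args=(inbox, outbox))
    worker.start()
    for n in (1, 2, 3):
        inbox.put(n)                              # send
    results = [outbox.get() for _ in range(3)]    # receive
    inbox.put(None)                               # ask the worker to exit
    worker.join()
    return results
```

No locks appear anywhere: the queues carry copies of the data, so neither side ever reads memory the other is writing.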

