Ben: I think you're overselling here. TM algorithms are subtle, just as much as traditional synchronization techniques are, and the resulting bugs can be equally inscrutable.
In any case you generally need a working traditional variant anyway as a fallback for transaction collisions, so there's no free lunch. STM has largely failed to catch on for these reasons. HTM in Haswell looks promising, but more so from a performance perspective than an ease-of-programming one.

As my programming projects are far more limited by available programmer time than by execution speed, I'm more interested in ease-of-programming technologies than performance technologies.
STM systems that provide the safety guarantees I'm talking about actually exist and are usable by me right now.

@AndyRoss: What do you mean by "failed in the market"?
It has never really been in the market in the first place. It is fairly widely used in a few languages (Clojure and Haskell have it built in), but more generally it has not failed, because it has not really been attempted at a large scale yet. If that constitutes "failed in the market", then I'd like to hear your case for HTM, because that's even less used today. An STM system implemented on top of C is going to be fairly useless, yes, but that doesn't mean "all STM implementations in all languages ever are doomed to fail".
STM can be implemented as a lock-free algorithm or it can use locking.
There are two types of locking schemes: encounter-time and commit-time locking. In encounter-time locking (Ennals, Saha, and Harris), memory writes are done by first temporarily acquiring a lock for a given location, writing the value directly, and logging it in the undo log.
Commit-time locking locks memory locations only during the commit phase. A commit-time scheme implemented by Dice, Shalev, and Shavit uses a global version clock: every transaction starts by reading the current value of the clock and storing it as its read-version.
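For illustration, the read-version bookkeeping can be sketched in a few lines of Haskell. This is a deliberately stripped-down sketch, not the algorithm of Dice, Shalev, and Shavit: the names VRef, beginTx, readValid, and commitWrite are invented here, and the per-location locking, write buffering, and abort/retry logic that a real implementation needs are all omitted.

    import Data.IORef

    type Version = Int

    -- A shared location: its value plus the clock value of its last committed write.
    data VRef a = VRef { vVal :: IORef a, vVer :: IORef Version }

    -- The global version clock shared by all transactions.
    type Clock = IORef Version

    -- Begin a transaction: remember the current clock value as our read-version.
    beginTx :: Clock -> IO Version
    beginTx = readIORef

    -- A read is consistent if the location has not been committed to
    -- since our read-version was taken.
    readValid :: Version -> VRef a -> IO Bool
    readValid readVersion ref = do
      v <- readIORef (vVer ref)
      pure (v <= readVersion)

    -- Commit one buffered write: advance the clock and stamp the location.
    -- (A real implementation does this under a per-location lock.)
    commitWrite :: Clock -> VRef a -> a -> IO ()
    commitWrite clock ref x = do
      writeVersion <- atomicModifyIORef' clock (\v -> (v + 1, v + 1))
      writeIORef (vVal ref) x
      writeIORef (vVer ref) writeVersion

At commit time a transaction would validate every location in its read set with readValid against its stored read-version, and abort if any check fails.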
In the dining-philosophers demo, First and Fourth took forks 1, 2, 4, and 5. Second and Third could not eat with only fork 3 between them, and Fifth could not take a single fork. Fourth and First finished eating and put down their forks; forks 3, 4, 5, and 1 were then taken by Third and Fifth. Second is a really unlucky guy: Third and Fifth finished their meal, leaving Second as the only philosopher still dining, after he had been starving for some milliseconds while waiting for the others.
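The behaviour described above is easy to reproduce with GHC's STM. The following is a hypothetical sketch rather than the demo's actual code (the Fork representation, names, and timings are assumptions); because each philosopher grabs both neighbouring forks in a single transaction, a philosopher either gets both forks or waits, so the classic hold-one-fork deadlock cannot occur:

    import Control.Concurrent
    import Control.Concurrent.STM
    import Control.Monad (forM_, replicateM)

    -- A fork is a TVar flag: True means the fork is on the table.
    type Fork = TVar Bool

    -- Take one fork, blocking (via retry inside 'check') until it is free.
    takeFork :: Fork -> STM ()
    takeFork fork = do
      free <- readTVar fork
      check free
      writeTVar fork False

    putFork :: Fork -> STM ()
    putFork fork = writeTVar fork True

    -- Acquire both neighbouring forks in one transaction, eat, then release them.
    philosopher :: String -> Fork -> Fork -> IO ()
    philosopher name left right = do
      atomically (takeFork left >> takeFork right)
      putStrLn (name ++ " is eating")
      threadDelay 100000                       -- eat for 100 ms
      atomically (putFork left >> putFork right)
      putStrLn (name ++ " put down the forks")

    main :: IO ()
    main = do
      forks <- replicateM 5 (newTVarIO True)
      done  <- newEmptyMVar
      let names = ["First", "Second", "Third", "Fourth", "Fifth"]
      forM_ (zip3 names forks (tail forks ++ [head forks])) $ \(n, l, r) ->
        forkIO (philosopher n l r >> putMVar done ())
      forM_ names (\_ -> takeMVar done)        -- wait for all five philosophers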
In practice, STM systems suffer a performance hit compared to fine-grained lock-based systems on small numbers of processors (1 to 4, depending on the application).
This is due primarily to the overhead associated with maintaining the log and the time spent committing transactions. Even in this case, performance is typically no worse than twice as slow as a fine-grained lock-based implementation. Theoretically, the worst-case space and time complexity of n concurrent transactions is O(n). Actual needs depend on implementation details (one can make transactions fail early enough to avoid overhead), but there will also be cases, albeit rare, where lock-based algorithms have better time complexity than software transactional memory.
In addition to its performance benefits,[citation needed] STM greatly simplifies conceptual understanding of multithreaded programs and helps make programs more maintainable by working in harmony with existing high-level abstractions such as objects and modules.
Lock-based programming has a number of well-known problems that frequently arise in practice, such as deadlock, livelock, and priority inversion. In contrast, the concept of a memory transaction is much simpler, because each transaction can be viewed in isolation as a single-threaded computation.
Deadlock and livelock are either prevented entirely or handled by an external transaction manager; the programmer need hardly worry about them. Priority inversion can still be an issue, but high-priority transactions can abort conflicting lower-priority transactions that have not already committed. On the other hand, operations with irreversible side effects, such as most I/O, cannot be performed inside a transaction; such limitations are typically overcome in practice by creating buffers that queue up the irreversible operations and perform them at a later time, outside of any transaction.
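One common shape for such a buffer, sketched here in Haskell with GHC's STM (processNext and the message queue are illustrative assumptions): the transaction only decides what should happen and returns the irreversible action, which the caller performs after the commit.

    import Control.Concurrent.STM

    -- Pop the next pending message inside the transaction, but return the
    -- irreversible effect (here, printing) to be run after the commit.
    processNext :: TVar [String] -> STM (IO ())
    processNext pending = do
      msgs <- readTVar pending
      case msgs of
        []       -> return (return ())                      -- nothing queued
        (m:rest) -> do
          writeTVar pending rest
          return (putStrLn ("sending: " ++ m))

    main :: IO ()
    main = do
      queue  <- newTVarIO ["hello", "world"]
      effect <- atomically (processNext queue)   -- the transaction commits first...
      effect                                     -- ...then the irreversible part runs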
In Haskell, this limitation is enforced at compile time by the type system. Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy described an STM system built on Concurrent Haskell that enables arbitrary atomic operations to be composed into larger atomic operations, a useful concept impossible with lock-based programming.
To quote the authors: Perhaps the most fundamental objection [...] is that lock-based programs do not compose: correct fragments may fail when combined. For example, consider a hash table with thread-safe insert and delete operations. Now suppose that we want to delete one item A from table t1, and insert it into table t2; but the intermediate state (in which neither table contains the item) must not be visible to other threads.
Unless the implementor of the hash table anticipates this need, there is simply no way to satisfy this requirement. With STM, this problem is simple to solve: simply wrapping two operations in a transaction makes the combined operation atomic. The only sticking point is that it is unclear to the caller, who is unaware of the implementation details of the component methods, when it should attempt to re-execute the transaction if it fails.
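A hedged sketch of that composition in Haskell, using TVar-held maps in place of the hash table described by the authors (Table, moveItem, and the map representation are assumptions made for illustration):

    import           Control.Concurrent.STM
    import qualified Data.Map.Strict as Map

    type Table k v = TVar (Map.Map k v)

    -- Delete the key from t1 and insert it into t2 as one atomic step, so no
    -- other thread can observe the state in which neither table contains it.
    moveItem :: Ord k => k -> Table k v -> Table k v -> STM ()
    moveItem key t1 t2 = do
      m1 <- readTVar t1
      case Map.lookup key m1 of
        Nothing -> return ()                      -- nothing to move
        Just v  -> do
          writeTVar t1 (Map.delete key m1)
          modifyTVar' t2 (Map.insert key v)

A caller runs atomically (moveItem "A" t1 t2) and needs no cooperation from the table's implementor; the open question of when to re-execute the combined transaction if it fails is exactly the sticking point just described.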
In response, the authors proposed a retry command which uses the transaction log generated by the failed transaction to determine which memory cells it read, and automatically retries the transaction when one of these cells is modified, based on the logic that the transaction will not behave differently until at least one such value is changed.
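A minimal sketch of retry with GHC's STM primitives (the counter-based takeUnit is an invented example, not the one from the paper):

    import Control.Concurrent.STM

    -- Consume one unit of a shared counter, blocking while it is zero.
    -- 'retry' abandons the attempt; the runtime re-runs the transaction only
    -- after some TVar it read (here, the counter) has been written by another thread.
    takeUnit :: TVar Int -> STM ()
    takeUnit counter = do
      n <- readTVar counter
      if n > 0
        then writeTVar counter (n - 1)
        else retry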
The authors also proposed a mechanism for composition of alternatives, the orElse function. It runs one transaction and, if that transaction does a retry, runs a second one. If both retry, it tries them both again as soon as a relevant change is made. It also simplifies programming interfaces, for example by providing a simple mechanism to convert between blocking and nonblocking operations.
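For instance, a blocking operation can be converted into a nonblocking one by composing it with an alternative that simply returns. A sketch, assuming a one-place buffer held in a TVar (takeBox and tryTakeBox are illustrative names):

    import Control.Concurrent.STM

    -- Blocking take: waits (via retry) until the box holds a value.
    takeBox :: TVar (Maybe a) -> STM a
    takeBox box = do
      contents <- readTVar box
      case contents of
        Nothing -> retry
        Just x  -> do
          writeTVar box Nothing
          return x

    -- Nonblocking variant: if takeBox would block (retry), orElse falls
    -- through to the alternative and we return Nothing instead of waiting.
    tryTakeBox :: TVar (Maybe a) -> STM (Maybe a)
    tryTakeBox box = (Just <$> takeBox box) `orElse` return Nothing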
This scheme has been implemented in the Glasgow Haskell Compiler. The conceptual simplicity of STMs enables them to be exposed to the programmer using relatively simple language syntax. In its simplest form, this is just an "atomic block", a block of code which logically occurs at a single instant:
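A sketch of such a block in Haskell, where atomically plays the role of the atomic block; the doubly linked list of TVars, the Node type, and insertAfter are assumptions made for illustration:

    import Control.Concurrent.STM

    -- A doubly linked list node whose links are transactional variables.
    data Node a = Node
      { value :: a
      , prev  :: TVar (Maybe (Node a))
      , next  :: TVar (Maybe (Node a))
      }

    -- Link a new value in immediately after 'node'. Run with 'atomically',
    -- so the whole update logically occurs at a single instant.
    insertAfter :: Node a -> a -> STM (Node a)
    insertAfter node x = do
      successor <- readTVar (next node)
      newPrev   <- newTVar (Just node)
      newNext   <- newTVar successor
      let newNode = Node x newPrev newNext
      writeTVar (next node) (Just newNode)
      case successor of
        Just s  -> writeTVar (prev s) (Just newNode)
        Nothing -> return ()
      return newNode

A caller performs the whole insertion with atomically (insertAfter node x), and no other thread can observe a half-linked list.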
When the end of the block is reached, the transaction is committed if possible, or else aborted and retried. (This is simply a conceptual example, not correct code; for instance, it behaves incorrectly if node has already been removed from the list when the transaction runs.) Conditional critical regions (CCRs) also permit a guard condition, which enables a transaction to wait until it has work to do:
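A sketch of such a guard using GHC's check, which retries the transaction unless the condition holds (the account TVar and withdraw are illustrative assumptions):

    import Control.Concurrent.STM

    -- Withdraw only when sufficient funds are available; otherwise the
    -- transaction waits until another thread changes the balance.
    withdraw :: TVar Int -> Int -> STM ()
    withdraw account amount = do
      balance <- readTVar account
      check (balance >= amount)            -- the guard condition
      writeTVar account (balance - amount)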
If the condition is not satisfied, the transaction manager will wait until another transaction has made a commit that affects the condition before retrying. This loose coupling between producers and consumers enhances modularity compared to explicit signaling between threads.
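For example, the call to retry may depend on values read earlier in the same transaction rather than appearing only in an up-front guard. A hedged sketch (nextJob, the assignment variable, and the two queues are invented for illustration):

    import Control.Concurrent.STM

    -- First read which queue this worker is assigned to, and only then,
    -- part-way through the transaction, decide whether to wait.
    nextJob :: TVar Int -> TVar [String] -> TVar [String] -> STM String
    nextJob assignment lowQ highQ = do
      which <- readTVar assignment
      let queue = if which == 0 then lowQ else highQ
      jobs <- readTVar queue
      case jobs of
        []     -> retry                  -- retry chosen late, after two reads
        (j:js) -> do
          writeTVar queue js
          return j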
This ability to retry dynamically late in the transaction simplifies the programming model and opens up new possibilities. One issue is how exceptions behave when they propagate outside of transactions. In "Composable Memory Transactions",[6] the authors decided that this should abort the transaction, since exceptions normally indicate unexpected errors in Concurrent Haskell, but that the exception could retain information allocated by and read during the transaction for diagnostic purposes.
They stress that other design decisions may be reasonable in other settings.
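In GHC this behaviour is directly observable: an exception that escapes atomically discards the transaction's writes as it propagates. A small sketch (the account variable is an illustrative assumption):

    import Control.Concurrent.STM
    import Control.Exception

    main :: IO ()
    main = do
      account <- newTVarIO (100 :: Int)
      result <- try (atomically (writeTVar account 0
                                   >> throwSTM (userError "validation failed")))
      case result of
        Left e   -> putStrLn ("transaction aborted: " ++ show (e :: IOException))
        Right () -> putStrLn "unexpected success"
      readTVarIO account >>= print   -- prints 100: the write was discarded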