Description
Avoiding the time and work penalty that asynchrony imposes on synchronous algorithms has traditionally meant designing specifically asynchronous algorithms. To date, the design of such asynchronous algorithms has been ad hoc and unintuitive. We show how many algorithms, designed and analyzed assuming synchrony, can be easily and systematically converted so that the same work and time bounds are maintained under arbitrary asynchrony. Known lower bounds show that for some problems the same work and time bounds cannot be maintained. This paper shows, however, that in far more cases than hitherto thought possible, asynchrony induces no time or work penalty.
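As a rough, generic illustration only (not the construction developed in this paper), the sketch below shows the flavor of an asynchrony-tolerant design: every cell of a shared array is claimed exactly once via a lock-protected test-and-set, so the final result is correct no matter how slowly or unevenly the threads run. All names and parameters here (try_claim, NUM_WORKERS, and so on) are hypothetical.

```python
# Illustrative sketch: a parallel array-fill whose correctness does not
# depend on the threads proceeding in lock step. Any thread may claim any
# unfinished cell, so an arbitrarily slow thread cannot leave work undone.
import threading

NUM_WORKERS = 4
N = 1000

array = [None] * N          # shared output
claimed = [False] * N       # has some thread already taken cell i?
claim_lock = threading.Lock()

def try_claim(i):
    """Atomically claim cell i; returns True for exactly one caller."""
    with claim_lock:
        if claimed[i]:
            return False
        claimed[i] = True
        return True

def worker():
    # Sweep the whole array: a fast thread picks up the cells a slow
    # thread has not yet reached, so progress never waits on a straggler.
    for i in range(N):
        if try_claim(i):
            array[i] = i * i   # the "useful work" for cell i

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert all(array[i] == i * i for i in range(N))
print("all cells filled correctly under arbitrary thread speeds")
```

Note that this naive sweep has every thread read all N claim flags, inflating the total work beyond that of the synchronous algorithm; avoiding exactly this kind of work blow-up, systematically rather than problem by problem, is the guarantee at issue here.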
We suggest a radically new approach to the problem of cache coherence. We show how appropriate architectural support motivates the design of algorithms that are immune to cache incoherence.