Transactional Memory (TM) is a powerful paradigm for manipulating shared data in concurrent applications. It avoids the drawbacks of coarse-grained locking schemes, namely the potentially excessive limitation of concurrency, while providing synchronization transparency to programmers, who simply embed the code blocks accessing shared data within transactions. On the downside, excessive transaction aborts may arise in scenarios with non-negligible volumes of conflicting data accesses, which can significantly impair performance. TM therefore needs methods that enable applications to run with the maximum degree of transaction concurrency that still avoids thrashing. In this article, we focus on Software TM (STM) implementations and present a machine learning-based approach that dynamically selects the best-suited number of threads to keep alive during specific phases of the execution of STM applications, depending on (variations of) the shared-data access pattern. Our approach provides two key contributions: (i) the identification of a well-suited set of features for instantiating a reliable neural network-based performance model and (ii) the introduction of mechanisms that reduce the run-time overhead of sampling these features. We integrated a real implementation of our machine learning-based thread-parallelism regulation approach within the TinySTM open-source package and present experimental data, based on the STAMP benchmark suite, showing the effectiveness of the proposed thread-parallelism regulation policy in optimizing transaction throughput.
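To illustrate the general idea described in the abstract, the following Python sketch shows how a neural-network-based performance model could be queried to pick a thread count: workload features sampled over an observation window, together with a candidate concurrency level, are fed to a small feed-forward network predicting transaction throughput, and the candidate maximizing the prediction is selected. This is only an illustrative sketch under assumptions, not the authors' implementation (which is in C inside TinySTM); the class `ThroughputModel`, the function `select_thread_count`, the network shape, and the example feature set are all hypothetical.

```python
# Illustrative sketch only (hypothetical names/features, not the paper's code):
# a tiny feed-forward network mapping sampled workload features plus a
# candidate thread count to a predicted transaction throughput, and a helper
# that picks the thread count with the highest predicted throughput.
import numpy as np

class ThroughputModel:
    """One-hidden-layer network: [features, n_threads] -> predicted throughput."""
    def __init__(self, n_features, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        # Weights would normally be trained on profiling samples; here they
        # are random placeholders so the sketch stays self-contained.
        self.W1 = rng.normal(scale=0.1, size=(n_features + 1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=hidden)
        self.b2 = 0.0

    def predict(self, features, n_threads):
        x = np.append(features, n_threads)          # input: features + thread count
        h = np.tanh(x @ self.W1 + self.b1)          # hidden layer
        return float(h @ self.w2 + self.b2)         # scalar throughput estimate

def select_thread_count(model, features, max_threads):
    """Return the concurrency level with the highest predicted throughput."""
    return max(range(1, max_threads + 1),
               key=lambda k: model.predict(features, k))

# Example: hypothetical features sampled over the last window, e.g.
# (mean read-set size, mean write-set size, abort rate, mean tx duration in s).
model = ThroughputModel(n_features=4)
features = np.array([25.0, 6.0, 0.18, 1.4e-5])
print("suggested threads:", select_thread_count(model, features, max_threads=16))
```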
Publication details
2017, Journal of Parallel and Distributed Computing, vol. 109, pp. 208-229
Machine learning-based thread-parallelism regulation in software transactional memory (01a Journal article)
Rughetti Diego, Di Sanzo Pierangelo, Ciciani Bruno, Quaglia Francesco