Implicitly parallel scripting as a practical and massively scalable programming model for high-performance computing

Bibliographic Details
Author / Creator: Armstrong, Tim
Imprint: Ann Arbor: ProQuest Dissertations & Theses, 2015
Description: 1 electronic resource (198 pages)
Language: English
Format: E-Resource Dissertation
Local Note: School code: 0330
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/10773045
Other authors / contributors: University of Chicago (degree granting institution)
ISBN: 9781321876574
Notes: Advisor: Ian T. Foster. Committee members: John Reppy; Anne Rogers.
This item must not be sold to any third party vendors.
Dissertation Abstracts International, Volume: 76-11(E), Section: B.
Summary: In recent years, large-scale computation has become an indispensable tool in many fields. Programming models and languages play a central enabling role: by abstracting the computational capabilities of a network of computers, they let programmers construct applications without confronting the full complexity of a distributed system. This dissertation is motivated by the limitations of current programming models for high-performance computing in addressing emerging problems, including programmability for non-expert parallel programmers, abstraction of heterogeneous compute resources, composition of heterogeneous task types into unified applications, and fault tolerance.
We demonstrate that a high-level programming language built on top of a data-driven task parallelism execution model can feasibly address many of these problems for many applications while remaining expressive and efficient. We make several technical contributions towards this goal: formal semantics for a deterministic execution model based on lattice data types and task parallelism, a scalable distributed runtime system implementing the execution model, and an optimizing compiler for the Swift programming language that targets this runtime system. In combination, these contributions enable applications with common patterns of parallelism to be simply and concisely expressed in Swift, yet run efficiently on distributed-memory clusters with many thousands of compute cores.
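The execution model summarized above, data-driven task parallelism, means that a task becomes eligible to run as soon as the data it consumes are available. As a rough analogy only (not the dissertation's Swift language or its distributed runtime, and with hypothetical task functions f and g), the same dependency-driven pattern can be sketched in Python using futures:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tasks standing in for user-defined Swift functions.
def f(i):
    return i * i

def g(x):
    return x + 1

with ThreadPoolExecutor() as pool:
    # Independent tasks: every f(i) may execute in parallel.
    xs = [pool.submit(f, i) for i in range(4)]
    # Each g consumes one f result; calling .result() expresses
    # the data dependency that gates when g can run.
    ys = [pool.submit(g, x.result()) for x in xs]
    print([y.result() for y in ys])  # -> [1, 2, 5, 10]
```

In Swift itself these dependencies are implicit in ordinary variable use, and the runtime, rather than the programmer, tracks data availability across a distributed-memory cluster.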