# ARGO OS
| Argo | |
|---|---|
| PI | Pete Beckman (ANL) |
| Chief Scientist | Marc Snir (ANL) |
| Website | http://argo-osr.org/ |
Disruptive new computing technologies, such as 3D memory, ultra-low-power cores, and embedded network controllers, are changing the scientific computing landscape. For the next few years, novel designs will flourish as new technologies are explored. Furthermore, changing workflows and programming environments are making new demands on the low-level system software. As noted by DOE workshops and reports, today's operating system and runtime (OS/R) software cannot be incrementally extended and grown into an exascale solution. A new approach is required.
Argo is a project to develop a new exascale Operating System and Runtime Software (OS/R) stack specifically designed to support extreme-scale scientific computation. Argo is built on a new, agile, modular architecture that supports both global optimization and local control. It aims to efficiently leverage new chip and interconnect technologies while addressing the new modalities, programming environments, and workflows expected at exascale. It is designed from the ground up to run future HPC applications at extreme scales.
Argo will be developed over the course of three years and will result in an open-source prototype system that is vendor neutral and runs on several architectures. Four key innovations create the foundation of this project: a new node OS/R that supports OS specialization, a lightweight runtime system for massive concurrency, a global view that supports cross-cutting verticals of power and fault management, and a backplane that allows resource managers and optimizers to communicate with and control the platform.
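As a rough illustration of the backplane idea, the sketch below shows a toy publish/subscribe interface through which resource managers and optimizers could exchange platform events. All names here (`bp_subscribe`, `bp_publish`, the `node/power` topic) are hypothetical and are not Argo's actual API; this is only a minimal sketch of the communication pattern described above.

```c
/*
 * Hypothetical sketch of a backplane-style publish/subscribe interface.
 * Names and topics are illustrative only; they do not describe the real
 * Argo backplane API.
 */
#include <stdio.h>
#include <string.h>

#define BP_MAX_SUBS 8

/* An event carries a topic string and a numeric payload (e.g., watts). */
struct bp_event {
    const char *topic;
    double value;
};

typedef void (*bp_handler)(const struct bp_event *ev);

struct bp_sub {
    const char *topic;
    bp_handler  handler;
};

static struct bp_sub subs[BP_MAX_SUBS];
static int nsubs;

/* A resource manager registers interest in a topic. */
static int bp_subscribe(const char *topic, bp_handler h)
{
    if (nsubs >= BP_MAX_SUBS)
        return -1;
    subs[nsubs].topic = topic;
    subs[nsubs].handler = h;
    nsubs++;
    return 0;
}

/* A monitor or optimizer publishes an event; matching handlers run. */
static void bp_publish(const struct bp_event *ev)
{
    for (int i = 0; i < nsubs; i++)
        if (strcmp(subs[i].topic, ev->topic) == 0)
            subs[i].handler(ev);
}

/* Example subscriber: a power manager reacting to a node power reading. */
static void on_node_power(const struct bp_event *ev)
{
    printf("power manager saw %s = %.1f W\n", ev->topic, ev->value);
}

int main(void)
{
    bp_subscribe("node/power", on_node_power);

    struct bp_event ev = { .topic = "node/power", .value = 212.5 };
    bp_publish(&ev);
    return 0;
}
```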
**An OS/R with Multiple Views**: Our design supports hierarchical views of the entire exascale system. The global view enables Argo to combine live performance data, active control interfaces, and machine-learning techniques to dynamically manage power across the entire system, respond to faults, or tune application performance. Only with a whole-system perspective can power budget goals be reached and cascading failures halted to avoid a system crash. At the other end of the spectrum is the local view. For scalability, compute nodes must have a measure of autonomy to manage and optimize massive intranode parallelism, schedule low-latency messages on embedded network adapters, and adapt to new memory technologies. Bringing together these multiple perspectives, and the corresponding software components operating within our hierarchical view, is our strategy for addressing the four key exascale challenges: power, parallelism, memory hierarchy, and resilience.
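To make the global/local split concrete, the following minimal sketch shows hierarchical power budgeting: a global step divides a machine-wide cap among nodes in proportion to their demand, and a local step on each node clamps its own draw to the assigned share. The node count, wattages, and proportional policy are assumptions made up for illustration, and no real hardware capping interface is invoked; this is not Argo's actual power-management algorithm.

```c
/*
 * Minimal sketch of hierarchical power budgeting: a "global view" splits a
 * machine-wide cap across nodes, and each node's "local view" enforces its
 * share. Numbers and policy are illustrative; real enforcement would rely
 * on per-node hardware power-capping mechanisms, which are not modeled.
 */
#include <stdio.h>

#define NNODES 4

/* Global step: divide the total cap among nodes in proportion to demand. */
static void distribute_budget(const double demand[], double cap[],
                              int n, double total_cap)
{
    double total_demand = 0.0;
    for (int i = 0; i < n; i++)
        total_demand += demand[i];
    for (int i = 0; i < n; i++)
        cap[i] = total_cap * demand[i] / total_demand;
}

/* Local step: each node autonomously clamps its draw to its assigned cap. */
static double enforce_cap(double requested_watts, double cap_watts)
{
    return requested_watts > cap_watts ? cap_watts : requested_watts;
}

int main(void)
{
    double demand[NNODES] = { 180.0, 250.0, 120.0, 300.0 }; /* requested W */
    double cap[NNODES];
    double total_cap = 600.0;                               /* machine cap */

    distribute_budget(demand, cap, NNODES, total_cap);

    for (int i = 0; i < NNODES; i++)
        printf("node %d: demand %.0f W, cap %.1f W, granted %.1f W\n",
               i, demand[i], cap[i], enforce_cap(demand[i], cap[i]));
    return 0;
}
```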
## Team Members
- Argonne National Laboratory: Pete Beckman, Marc Snir, Pavan Balaji, Rinku Gupta, Kamil Iskra, Franck Cappello, Rajeev Thakur, Kazutomo Yoshii
- Boston University: Jonathan Appavoo, Orran Krieger
- Lawrence Livermore National Laboratory: Maya Gokhale, Edgar Leon, Barry Rountree, Martin Schulz, Brian Van Essen
- Pacific Northwest National Laboratory: Sriram Krishnamoorthy, Roberto Gioiosa
- University of Chicago: Henry Hoffmann
- University of Illinois Urbana-Champaign: Laxmikant Kale, Eric Bohm, Ramprasad Venkataraman
- University of Oregon: Allen Malony, Sameer Shende, Kevin Huck
- University of Tennessee Knoxville: Jack Dongarra, George Bosilca, Thomas Herault