DEGAS
From Modelado Foundation
{{Infobox project
| title = DEGAS
| image = [[File:Your-team-logo.png|180px]]
| imagecaption =
| team-members = LBNL, Rice U., UC Berkeley, UT Austin, LLNL, NCSU
| pi = Katherine Yelick (LBNL)
| co-pi = Vivek Sarkar (Rice U.), James Demmel (UC Berkeley), Mattan Erez (UT Austin), Dan Quinlan (LLNL)
| website = team website
}}
== Team Members ==
* Lawrence Berkeley National Laboratory (LBNL)
* Rice University
* University of California, Berkeley
* University of Texas at Austin
* Lawrence Livermore National Laboratory (LLNL)
* North Carolina State University (NCSU)
== Mission ==
'''Mission Statement:''' To ensure the broad success of exascale systems through a unified programming model that is productive, scalable, portable, and interoperable, and that meets the unique exascale demands of energy efficiency and resilience.
== Goals & Objectives ==
* '''Scalability:''' Billion-way concurrency system-wide, thousand-way on chip with new architectures
* '''Programmability:''' Convenient programming through a global address space and high-level abstractions for parallelism, data movement, and resilience
* '''Performance Portability:''' Ensure applications can be moved across diverse machines using implicit (automatic) compiler optimizations and runtime adaptation
* '''Resilience:''' Integrated language support for capturing state and recovering from faults
* '''Energy Efficiency:''' Avoid communication, which will dominate energy costs, and adapt to performance heterogeneity due to system-level energy management
* '''Interoperability:''' Encourage use of new languages and features through incremental adoption
== Roadmap ==
'''''Any Roadmap to be included?'''''

== Impact ==
'''''Any Impact to be included?'''''
== Programming Models ==

=== Two Distinct Parallel Programming Questions ===
* What is the parallel control model?
* What is the model for sharing/communication?
=== Applications Drive New Programming Models ===
* Message Passing Programming
** Divide the domain into pieces
** Compute on one piece and exchange boundary data
** '''MPI and many libraries'''
* Global Address Space Programming
** Each thread starts computing on its own data
** Grab whatever data is needed, whenever it is needed
** '''UPC, CAF, X10, Chapel, Fortress, Titanium, GlobalArrays''' (the two styles are contrasted in the sketch below)
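The fragment below is a minimal sketch of this contrast in UPC, the PGAS dialect of C named above. It is illustrative only: the block size <code>N</code> and the ghost-cell exchange pattern are assumptions for the example, not DEGAS code. Each thread owns one block of a globally addressable array and reads its left neighbor's boundary element with an ordinary array expression, where the message-passing style would require a matched send/receive pair.
<syntaxhighlight lang="c">
/* Sketch: global address space vs. message passing (UPC, a PGAS dialect of C).
   Compile with a UPC compiler, e.g. Berkeley UPC: upcc -o ghost ghost.c */
#include <upc.h>
#include <stdio.h>

#define N 4  /* illustrative: local elements per thread */

/* One globally addressable array, blocked N elements per thread. */
shared [N] double u[N * THREADS];

int main(void) {
    int i;

    /* Each thread initializes the block it has affinity to. */
    for (i = 0; i < N; i++)
        u[MYTHREAD * N + i] = (double)MYTHREAD;

    upc_barrier;  /* make all writes visible before anyone reads */

    /* Global-address-space style: read the left neighbor's last element
       directly; the runtime moves the data. In MPI this would be a
       matched MPI_Send/MPI_Recv pair on the two threads involved. */
    double left_ghost = (MYTHREAD > 0) ? u[MYTHREAD * N - 1] : 0.0;

    printf("thread %d of %d: left ghost = %g\n", MYTHREAD, THREADS, left_ghost);
    return 0;
}
</syntaxhighlight>
The data movement is the same in both styles; what changes is that remote data is named through the shared address space rather than through explicit messages.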
=== Hierarchical Programming Model ===
[[File:DEGAS-Heirarchical-PM.png|right]]
* Goal: Programmability of exascale applications while providing scalability, locality, energy efficiency, resilience, and portability
** ''Implicit constructs:'' parallel multidimensional loops, global distributed data structures, adaptation for performance heterogeneity (see the UPC sketch after this list)
** ''Explicit constructs:'' asynchronous tasks, phaser synchronization, locality
* Built on the scalability, performance, and asynchrony of PGAS models
** Language experience from UPC, Habanero-C, Co-Array Fortran, and Titanium
* Both intra- and inter-node; the focus is on the node model
* Languages demonstrate the DEGAS programming model
** ''Habanero-UPC:'' Habanero's intra-node model with UPC's inter-node model
** ''Hierarchical Co-Array Fortran (CAF):'' CAF for on-chip scaling and more
** ''Exploration of high-level languages:'' e.g., Python extended with H-PGAS
* Language-independent H-PGAS features:
** Hierarchical distributed arrays, asynchronous tasks, and compiler specialization for hybrid (task/loop) parallelism and heterogeneity
** Semantic guarantees for deadlock avoidance, determinism, etc.
** Asynchronous collectives, function shipping, and hierarchical places
** End-to-end support for asynchrony (messaging, tasking, bandwidth utilization through concurrency)
** Early concept exploration for applications and benchmarks
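As a concrete taste of the implicit constructs and the end-to-end asynchrony listed above, the following UPC sketch combines an affinity-driven parallel loop over a distributed array with a split-phase barrier that lets purely local work overlap synchronization. It is illustrative only: the arrays, sizes, and loop body are assumptions for the example, not DEGAS deliverables.
<syntaxhighlight lang="c">
/* Sketch: an implicit parallel loop plus split-phase synchronization in UPC. */
#include <upc.h>

#define N 1024  /* illustrative elements per thread */

/* Global distributed arrays; the default layout spreads elements cyclically. */
shared double a[N * THREADS], b[N * THREADS];

int main(void) {
    int i;

    /* Implicit construct: upc_forall runs each iteration on the thread
       with affinity to a[i], so updates stay local -- no explicit messages. */
    upc_forall (i = 0; i < N * THREADS; i++; &a[i])
        a[i] = 2.0 * b[i];

    /* Split-phase barrier: notify now, wait later, with independent
       local work in between -- a simple form of end-to-end asynchrony. */
    upc_notify;
    /* ... purely local computation can overlap here ... */
    upc_wait;

    return 0;
}
</syntaxhighlight>
Roughly speaking, the phaser synchronization and asynchronous tasks drawn from Habanero generalize this notify/wait split from whole-program barriers to point-to-point and task-level synchronization.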
=== Communication-Avoiding Compilers ===

== Software Stack ==