Runtime Systems

From Modelado Foundation
=== Component Interfaces ===
{| class="wikitable" style="float:right; margin-left: 10px; width: 30%"
|
* What interfaces are needed to support hybrid programming models?
|}
Interfaces may be distributed/centralized

==== *** HOMEWORK - Sanjay, Vivek, Thomas, Costin, Ron (composability), Vijay, Milind *** ====
* '''Refine major components and their definition'''
* '''For each component, describe its interfaces'''


=== Runtime Systems Vision ===
# …

=== *** HOMEWORK - Thomas, Wilf, Ron, Costin *** ===
* '''Refine the milestones for P0, P1, and P2'''


== Plan for Writing the Report ==
{| class="wikitable" style="float:right; margin-left: 10px; width: 30%"
| Discussion: what is the main message to be conveyed? What if what we are proposing as a research agenda doesn’t get done? What if we don’t have the means to push research to an open-source community framework?
|}
*** Recommendations regarding workshops
* Schedule.
** First draft: '''October 17, 2014 (target 10 pages)'''
** Present at X-Stack meeting and collect feedback from X-Stack community
** Coordination calls in June and July.
** Final draft: '''Dec 19, 2014'''


=== *** HOMEWORK - ALL *** ===
* Send Sonia proposed changes to the outline and volunteer for report section by '''September 15'''
* Assignments to be decided on the call, week of '''Sept 15'''

Latest revision as of 17:48, September 15, 2014

Sonia Sachs held the first Runtime Systems Summit on April 9, 2014. The contents of the meeting are below, to be revised with updates by the attendees.

The presentation that Sonia emailed on 4/11 is Runtime Systems Summit April 9 2014 - v4.pptx. Jim Stone's notes are Runtime_Systems_Summit_Stone_notes.pdf.

Sonia is planning a Runtime System Workshop in early 2015.

Note: *** HOMEWORK *** assignments designated by Sonia with deadlines as specified. Please make updates directly to this wiki. Contact Scott Lewis if you need assistance.

Report Repository information: We will use a private repository on the MF github organization: https://github.com/ModeladoFoundation/doe-reports Remember that you must login at https://github.com/login in order to access this repository.

Please create your github account at https://github.com/join and give your account username to Scott Lewis. Scott will add you for access to the repository.


Exascale Runtime Systems Summit Plan and Outcomes

Summit Goals

Discuss current challenges in Exascale runtime systems: we would like to articulate these challenges in a form that is commonly accepted by the community. We want to discuss how to leverage recent reports and community input to create a complete set of challenges, and to review the current state of the art for dealing with them.

Develop a set of questions that must be answered in this area. An initial set of questions is included in this document.

Generate a roadmap for a unified runtime systems architecture for Exascale systems that has broad community acceptance and that leverages current investments: not unified runtime software for Exascale.

To be clear: this summit is not an opportunity for participants to promote their current research agenda, but to take a fresh look into the future needs for Exascale runtime systems. The goal for this summit is to develop a unified runtime architecture with common components, not a single unified runtime software.


Summit rule: Participants should not promote their current research agenda

  • Generate a roadmap for achieving a unified runtime systems architecture for Exascale systems
    • Reach consensus on the top six challenges and solutions for them.
    • Agree on a comprehensive set of questions that must be answered in order to achieve such architecture
    • Current known answers to posed questions
  • Generate a roadmap for a research program on runtime systems
    • Consistent with achieving a unified runtime systems architecture
    • Discuss future workshop
  • Prepare for writing a report

Plan to create the Unified Runtime Systems Architecture Roadmap

We need to leverage current investments in runtime systems: OCR, HPX, ARTS, SEEC, the GVR runtime, and runtimes to support advanced/extended MPI and Global Arrays.

  • Agree on top six (6) challenges and solutions (1 hour)
    • Strawman set of challenges: slide 3
  • For each challenge: discuss the current state of the art and how the challenge is addressed in the existing runtime systems to be leveraged (1-2 hours)
  • Agree on a set of top questions to be answered (1 hour)
    • Strawman set of questions: slide 4
  • For each question: discuss currently known answers and how existing runtime systems answer it (1-2 hours)
  • Vision (1-2 hours)
    • What are the major components?
    • Programming interfaces and interfaces to the OS
    • How do we measure success?

Strawman set of challenges

For the different execution models, key abstractions need to be identified and jointly supported by the runtime system, compilers, and hardware architecture.

A large number of lightweight tasks and their coordination will need runtime support that is capable of dealing with system heterogeneity and with end-to-end asynchrony.

Locality-aware, dynamic task scheduling will need runtime support so that it is possible to continuously optimize when code or data should be moved.

Task coordination/synchronization primitives that are best suited to support exascale systems need to be identified.

Load imbalances created by a large number of sources of non-uniform execution rates will require runtime support for dynamic load balancing.

  • Key abstractions need to be identified
    • and jointly supported by the runtime system, compilers, and hardware architecture.
  • Runtime support for lightweight tasks and their coordination
    • capable of dealing with system heterogeneity and with end-to-end asynchrony.
  • Runtime support for locality-aware, dynamic task scheduling
    • Enabling continuously optimizing code or data movement.
  • Need for task coordination and synchronization primitives
  • Runtime support for dynamic load balancing
    • To deal with load imbalances created by a large number of sources for non-uniform execution rates


  • What are the currently known key abstractions? Which key abstractions are currently supported by the runtime systems to be leveraged?
  • For each of these challenges: What is the current state-of-the-art on such runtime support? How is this done in runtime systems to be leveraged?

Strawman Set of Questions

  • Runtime system software architecture
    • What would the principal components be? What are the semantics of these components? What is the role of the different execution models?
    • What are the mechanisms for managing processes/threads/tasks and data?
    • What policies and/or mechanisms will your runtime use to schedule code and place data?
    • How does the runtime dynamically adapt the schedule and placement so that metrics of code-data affinity, power consumption, migration cost and resiliency are improved?
    • How does the runtime manage resources (compute, memory, power, bandwidth) to meet a power, energy and performance objective?
    • How does the runtime scale?
    • What is the role of a global address space or a global name space?
    • What programming models will be supported by the runtime architecture?
    • What OS support should be assumed?
  • Community buy-in:
    • How do we achieve community buy-in to an envisioned runtime architecture and semantics?
    • We need a process to continuously evaluate and refine the envisioned runtime architecture and semantics while keeping focus on achieving an Exascale runtime system.
    • What should this process be?

Current Runtime Investments

ASCR has made a number of investments in runtime system software research for Exascale. In the 2012 X-Stack program [2], projects include application-driven runtime systems support. Research in this area is concerned with maximizing concurrency efficiency, properly dealing with asynchrony of computation and communication, exploiting data locality, minimizing data movement, managing faults, supporting heterogeneous computing elements, providing sound semantics for programmability, supporting novel programming models, and delivering an efficient execution environment to application developers. A number of runtime systems are currently being pursued: OCR, HPX, ARTS, SEEC, the GVR runtime [2], and runtimes to support advanced/extended MPI and Global Arrays [3].

Research on system-driven runtime systems, supported by the 2013 OS/R Program [4], is concerned with mechanisms, including semantics, of “common runtime services,” described in the OS/R report [5]. Examples of such mechanisms are thread management, low-level communication services, and resource management. Tight interaction among the different runtime service components has been identified as essential for dealing with the challenges of resilience, asynchronous computation, and locality of computation. A number of systems-driven runtime approaches are currently being pursued in the ARGO, HOBBES, and X-ARCC projects. Research on applications-driven and systems-driven runtime systems is addressing the challenges of resilience, power, hierarchical memory management, unprecedented parallelism, heterogeneity of hardware resources, and locality and affinity management. DOE ASCR has insisted that a focus on self-aware, dynamic systems should guide most of the research solutions in these two categories. DOE ASCR has also strongly recommended that close coordination be established among the various runtime system research projects. However, community-wide involvement in defining a runtime architecture for Exascale computing remains elusive.

  • 2012 X-Stack program [2]: application-driven runtime systems support:
    • maximizing concurrency efficiency,
    • dealing with asynchrony of computation and communication,
    • exploiting data locality,
    • minimizing data movement,
    • managing faults,
    • support for heterogeneous computing elements,
    • semantics for programmability,
    • support for novel programming models
  • Runtime systems to be leveraged
    • OCR, HPX, ARTS, SEEC, the GVR runtime, and runtimes to support advanced/extended MPI and Global Arrays
  • Mapping important questions to projects:
Remember to say that the table of questions will be extended to include these three projects.


  • 2013 OS/R Program [4]: systems-driven mechanisms described in the OS/R report:
    • thread management,
    • low-level communication services,
    • resource management,
    • different runtime service components tightly connected to deal with challenges:
      • resilience,
      • asynchronous computations,
      • and locality of computation.
  • Runtime systems approaches to be leveraged in ARGO, HOBBES, and X-ARCC projects.
  • Mapping important questions to projects:
    • Not yet available
      • To be completed after upcoming OS/R semi-annual review

Summit Outcome: Top Challenges

  • Growing gap between communication and computation
  • Scalability & Starvation: dominant parameters to optimize, critical path management
  • Locality and data movement: need terminology for inter and intra
  • Power is critical
  • Overhead
  • Resilience: exacerbated by scalability and power problems
  • Load balancing: contention, hot spots,
  • Heterogeneity: performance irregularities, static and dynamic, heterogeneity in storage/memory
  • In-situ data analysis and mgmt: new dimension of interoperability
  • Exploitation of runtime information (introspection), feedback control of performance data, managing performance data
  • Resource allocation
  • Scheduling & workflow orchestration
  • Complexity/optimization/tuning
  • Portability
  • Synchronization: event-driven, mutual exclusion, barriers, phasers
  • Computing everywhere
  • Name space: both data and computation, includes location management
  • Support tools
  • Support for migratable computational units
  • Hardware support, tight-coupling
  • Expose some of runtime elements to system managers

Summit Outcome: Top Challenge Classes

We need to separate Problems from Solutions

Runtime services: external view, tuning knobs, quality of service metrics (locality is a prime one)
Internal runtime architecture: question on global optimization of runtime services. Need to time and energy optimize.
A runtime system serves one application, with introspection.
OS understands the system and the workload.

  1. Locality and data movement: need terminology for inter and intra, dynamic decisions, handling variability, conflict of optimizing locality, data movement and costs of dynamic scheduling- questions of policy. Synchronization: event-driven, mutual exclusion, barriers, phasers. Overhead. Growing gap between communication and computation
  2. Resilience: exacerbated by scalability and power problems.
  3. Variability. Static and Dynamic. Power management. Load balancing: contention, hot spots. Exploitation of runtime information (introspection), feedback control of performance data, managing performance data
  4. Heterogeneity: performance irregularities, static and dynamic, heterogeneity in storage/memory. Computing everywhere.
  5. Scalability & Starvation: dominant parameters to optimize, critical path management. Name space: both data and computation, includes location management. Complexity/optimization/tuning
  6. Portability and interoperability. In-situ data analysis and mgmt: new dimension of interoperability: runtime systems composability.
  7. Resource allocation. Scheduling & workflow orchestration. Cross jobs (apps) scheduling: OS role. Focus scheduling for one job. Support for migratable computational units. Hardware support, tight-coupling. Expose some of runtime elements to system managers
  8. Usability. Support tools.

Summit Outcome: Challenge Problems

  • For each challenge problem, we want to give examples in the context of challenge problems
    • Vivek suggested one multi-physics challenge problem.

***HOMEWORK - ALL***

  • Identify and describe challenge problems

Summit Outcome: Key Abstractions

  • Unit of computation
    • attributes: locality, synchronization, resilience, critical path
  • Naming: data, computation, objects that combine both (active objects)
  • Global side-effects: programming model abstraction?
  • Execution Model
  • Machine Model, Resources: memory, computation, storage, network, …
  • Locality and affinity, hierarchy
  • Control State: collective of info distributed across the global system that determines the next state of the machine. Distributed snapshot of the system. Logical abstraction, how to reason about the system.
  • Enclave
  • Scheduler: local scheduler of a single execution stream
  • Execution Stream: something that has hardware associated with it
  • Communication data transfer
  • Concurrency patterns, synchronization
  • Resilience, detection, fault model
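To make the "unit of computation" abstraction above more concrete, a minimal sketch of such a unit carrying the listed attributes (locality, synchronization, resilience, critical path) might look like the following. This is purely illustrative; all names are invented and do not correspond to any project's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "unit of computation" with the attributes
# discussed above. Every name here is invented for illustration.

@dataclass
class UnitOfComputation:
    name: str                          # entry in the global name space
    locality_hint: str = "any"         # preferred locality/affinity domain
    resilience: str = "none"           # e.g. "none", "replay", "replicate"
    on_critical_path: bool = False     # hint for the scheduler
    depends_on: list = field(default_factory=list)  # event-oriented synchronization

    def ready(self, completed: set) -> bool:
        """A unit becomes runnable once all of its input events have fired."""
        return all(dep in completed for dep in self.depends_on)

# Example: a task that waits on two producer tasks.
t = UnitOfComputation("smooth", locality_hint="node0", depends_on=["fx", "fy"])
print(t.ready({"fx"}))        # False: "fy" has not fired yet
print(t.ready({"fx", "fy"}))  # True
```

The point of the sketch is that the attributes are declarative hints attached to the unit, which the runtime (not the application) interprets when scheduling.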

Summit Outcome: Runtime Services

Runtime Services

Runtime services: external view, tuning knobs, quality of service metrics (locality is a prime one)
  • Schedule and execute threads/tasks/work unit, including code generation
  • Resource allocation (give me resources dynamically, as needed, release resources): including networks. heterogeneity
  • Introspection services: info about power, performance, heterogeneity. Variability.
  • Creation, translation, isolation, security, release: name space, virtualization
  • Communication of data and code, including synchronization (event-oriented). Migration services (move work, move data) are not separate from the communication services; they are composed with them.
  • Concurrency control: isolation, atomics (it gets into scheduling?)
  • Location and Affinity/Locality services: map to some things that are mentioned above. Provides information and does binding.
  • Express error checking/detection and recovery. Allows resilience properties to be specified for computation, data, and hardware resources.
  • Load balancing. Scheduling.
  • OS requests services from the runtime: give me back resources that I gave you, tell the runtime to degrade gracefully or shut down
  • Services can make requests to other services, e.g., tools
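The "external view, tuning knobs, quality of service metrics" framing above can be sketched as a tiny service shell. This is a hypothetical illustration only; the class and knob names are invented, not part of any of the runtimes discussed:

```python
# Hypothetical sketch of the external view of a runtime service:
# tuning knobs plus quality-of-service metrics, with locality as the
# prime metric. All names are invented for illustration.

class RuntimeService:
    def __init__(self, name):
        self.name = name
        self.knobs = {}      # externally visible tuning knobs
        self.metrics = {}    # quality-of-service metrics reported outward

    def set_knob(self, knob, value):
        self.knobs[knob] = value

    def qos(self, metric):
        return self.metrics.get(metric)

# A scheduling service might expose a placement knob and report a
# locality QoS metric (e.g. fraction of tasks run near their data).
sched = RuntimeService("scheduling")
sched.set_knob("prefer_local_placement", True)
sched.metrics["locality"] = 0.92
print(sched.qos("locality"))  # 0.92
```

The design choice the sketch reflects: callers see only knobs and metrics, while the service's internal policy stays hidden, which is what lets services be relocated or composed without changing the application-facing contract.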

Service Attributes

  • How will the service be provided?
  • Expected resilience
  • Expected resources usage
  • Persistence of memory
  • Locality attributes

Key abstractions and their definitions

Wilf, Vivek, Kathy, Vijay

For the different execution models, key abstractions need to be identified and jointly supported by the runtime system, compilers, and hardware architecture.

A large number of lightweight tasks and their coordination will need runtime support that is capable of dealing with system heterogeneity and with end-to-end asynchrony. Locality-aware, dynamic task scheduling will need runtime support so that it is possible to continuously optimize when code or data should be moved. Task coordination/synchronization primitives that are best suited to support exascale systems need to be identified. Load imbalances created by a large number of sources of non-uniform execution rates will require runtime support for dynamic load balancing.

We talked about breaking the runtime into a set of services that need to be available to the applications codes both explicitly and implicitly. These services can be placed anywhere in the system and accessible to any node in the system either directly or through communication with a node that supports that service.

Runtime Services

All of the services can have a runtime, just-in-time, or ahead-of-time component. We could write up what these behaviors are under each service.

  • Scheduling and Execution: Working with the introspection service, this service finds a resource where a task can most efficiently be scheduled. Working with the code preparation service, it ensures the task is appropriately linked and optimized, and then schedules the task. On request, the scheduling service can be asked to re-schedule a failed or non-responsive task.
  • I/O Operations: Accepts I/O messages and outputs them appropriately
  • File System Manipulation: Accepts storage messages and completes them appropriately
  • Communications: Communication services
  • Code preparation: This service takes code, intermediate or binary, and prepares it for execution. This preparation can include compilation (JIT or ahead of time), aggregation (inlining, etc.), linking, and optimization
  • Memory Management: Conducts memory allocation, management and garbage collection tasks
  • Introspection and error detection: Interfaces to the hardware and firmware services to provide current node, system and execution state.
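The cooperation described above, where scheduling consults introspection for a placement and code preparation for a ready-to-run task, can be sketched as follows. This is a toy illustration under invented names, not the design of any actual runtime:

```python
# Hypothetical sketch of service cooperation: the scheduling service asks
# introspection for the least-loaded resource, asks code preparation to
# ready the task, and can re-schedule on request. Invented names only.

class Introspection:
    def __init__(self, load):
        self.load = load                   # e.g. {"node0": 0.9, "node1": 0.2}
    def least_loaded(self):
        return min(self.load, key=self.load.get)

class CodePreparation:
    def prepare(self, task):
        # Stand-in for JIT/ahead-of-time compilation, linking, optimization.
        return f"{task}:linked+optimized"

class Scheduler:
    def __init__(self, introspection, prep):
        self.introspection = introspection
        self.prep = prep
    def schedule(self, task):
        node = self.introspection.least_loaded()
        binary = self.prep.prepare(task)
        return (binary, node)
    def reschedule(self, task):
        # On request, place a failed or non-responsive task again.
        return self.schedule(task)

sched = Scheduler(Introspection({"node0": 0.9, "node1": 0.2}), CodePreparation())
print(sched.schedule("stencil"))  # ('stencil:linked+optimized', 'node1')
```

Note that the scheduler owns no load data and no compiler of its own; it composes the other two services, which is exactly the "services can make requests to other services" behavior listed earlier.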

Matrix Services

Services
Service                            MPI+X   OpenMP   OpenCL   CHARM++
Scheduling and Execution           OS      OS       OS       OS
I/O Operations                     OS      OS       OS       OS
File System Manipulation           OS      OS       OS       OS
Communications                     YES     YES      YES      YES
Code preparation                   OS      OS       OS       OS
Memory Management                  NO      NO       NO       NO
Introspection and error detection  NO      NO       NO       NO

Deadlines

  • This is a first draft
  • Final draft including comments/suggestions from summit participants: April 30

Summit Outcome: Community Buy-In

  • Presentation of solutions by Wilf (slides 15- 20)
  • Discussions of the presented ideas
  • More questions than we had time for:
    • We will post Wilf’s presentation in the xstack wiki
    • Wilf will present these again at the X-Stack PI meeting
    • Summit participants are encouraged to send Wilf and me comments/questions/suggestions
    • We will encourage X-Stack meeting participants to give us comments/questions/suggestions
  • Ecosystem Creation: How do we achieve community buy-in to an envisioned runtime architecture and semantics?
  • Process: We need a process to continuously evaluate and refine the envisioned runtime architecture and semantics while keeping focus on achieving an Exascale runtime system. What should this process be?

Ecosystem Creation

  • Establish an open, transparent environment where the solution is not pre-determined
  • Provide an organic process for community decision-making, ensuring that the best solution wins
  • Avoid a single player or clique dominating
  • Lower the barrier to participation by providing stable, reliable releases of candidate solutions to a broad audience

Process

  • Build an independent, open-source foundation that ensures the different projects can be continuously available, evolved, and supported.
  • The different projects will evolve based on the contributions made. As solutions demonstrate their superiority, they will attract more contributions as well as consensus.
  • The community will organically migrate to the superior solution.
  • DOE can continuously view progress and help fund projects to cover any critical shortfalls.

Who is in the community?

  1. Exascale Computing Research Community (us)
  2. High Performance Computing User Community (current users)
  3. Academic Community (future users)
  4. Application Development Community (scientists and engineers)
  5. Software Development Community
  6. Hardware Vendors

Community Services

  • Project Team Infrastructure - e.g. source code control, tooling, debuggers, collaboration/communication
  • Release Engineering
  • Technical Support
  • IP management
  • Education, instruction and training
  • Community Development

Build on Experience: Community 2.0

Learn from the best: Eclipse, Apache, Mozilla

  • Building the community/ecosystem is top priority
  • Support multiple projects and give them autonomy
  • Support Regular Community Interaction
  • Long-term commitment to quality through education and process

Avoid the pitfalls

  • Commercial control of the purse strings leads to community breakdown
  • Get to community support quickly and maintain community control

Summit Outcomes

Comprehensive Set of Questions

Runtime system software architecture

  • What are the major services provided by this architecture?
  • What is the strategy that the runtime system has to embody? What is the role of the different execution models?
  • What would the principal components be? What are the semantics of these components?
  • What are the mechanisms for managing and scheduling units of computation and data?
  • How does the runtime dynamically adapt the schedule and placement so that metrics of code-data affinity, power consumption, migration cost and resiliency are improved?
  • How does the runtime manage resources (compute, memory, power, bandwidth) to meet a power, energy and performance objective? How are resources exposed?
  • How does the runtime scale? How does the runtime ensure its scalability?
  • What is the role of name/address spaces? Are they global or not? What is their scope?
  • What programming models will be supported by the runtime architecture?
  • How will composability be enabled?
  • What OS support is assumed? What can the OS ask/expect from runtimes?
  • What can compilers ask/expect of runtimes? What can runtimes ask/expect of compilers?
  • Just-in-time compilation: what runtime support is needed?
  • What hardware support is assumed, can be exploited, or could be helpful? What is the machine model assumed?
  • What is the cost model assumed (energy, performance, resilience)?
  • How does the runtime system enable use of application or system information for resilience? In general, how does the runtime system use information?
  • What do tools expect from runtimes, and what do runtimes expect from tools?

Runtime Systems Major Components

  • Unit of computation manager and scheduling
  • Name service for everything that one wants to virtualize. Address allocation/translation.
  • Data distribution and redistribution
  • Locality management
  • Power management
  • Communication interfaces and/or infrastructure.
  • Network I/O
  • Active storage (compute in storage). I/O, locality
  • Load balancers
  • Location managers
  • Prefetcher for explicit memory management
  • Lightweight migratable threads
  • Introspection management. Monitoring/tools interface
  • Reliable data store. I/O is embedded here.
  • Global termination detection
  • Event and synchronization framework
  • Failure detection
  • Failure recovery
  • Adaptive controller
  • Interoperability (with in-situ analysis, visualization, etc.)
  • Composability manager
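To make the component list above more concrete, the following is a purely illustrative Python sketch of three of the components: a unit-of-computation scheduler, a name service for virtualized objects, and an introspection/monitoring interface. Every class and method name here is hypothetical; this is a toy model for discussion, not a proposed design, and a real exascale runtime would of course implement these components very differently.

```python
import collections

class Task:
    """A hypothetical unit of computation, carrying a locality hint
    that a smarter scheduler could use for placement."""
    def __init__(self, fn, locality_hint=None):
        self.fn = fn
        self.locality_hint = locality_hint  # e.g. a preferred NUMA domain

class MiniRuntime:
    """Toy stand-in for three components from the list above."""
    def __init__(self):
        self._queue = collections.deque()  # unit-of-computation scheduling
        self._names = {}                   # name service: name -> object
        self._tasks_run = 0                # introspection counter

    def register(self, name, obj):
        # Name service: callers use virtual names, not raw addresses,
        # so the runtime is free to migrate or redistribute data.
        self._names[name] = obj

    def lookup(self, name):
        return self._names[name]

    def spawn(self, task):
        # A FIFO queue stands in for a locality- and power-aware scheduler.
        self._queue.append(task)

    def run_to_completion(self):
        # Draining a single queue is a trivial form of global
        # termination detection; distributed detection is far harder.
        while self._queue:
            task = self._queue.popleft()
            task.fn()
            self._tasks_run += 1

    def stats(self):
        # Introspection/monitoring interface for tools.
        return {"tasks_run": self._tasks_run, "queued": len(self._queue)}

results = []
rt = MiniRuntime()
rt.register("out", results)
rt.spawn(Task(lambda: rt.lookup("out").append(1)))
rt.spawn(Task(lambda: rt.lookup("out").append(2)))
rt.run_to_completion()
print(rt.stats())  # {'tasks_run': 2, 'queued': 0}
```

Even at this toy scale, the sketch shows why the components need well-defined interfaces to each other: the scheduler, the name service, and the introspection layer all touch shared state that a real runtime must coordinate across nodes.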

Component Interfaces

  • What interfaces are needed to support hybrid programming models?
  • What interfaces are needed to coordinate across multiple runtime systems software that may concurrently run on a system?
  • What interfaces to compilers are needed?
  • What interfaces to OS are needed?

Interfaces should:

  • Support hybrid programming models
  • Interface to compilers and OS
  • Ensure progress guarantees (formal methods)

Interfaces may be distributed/centralized
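One of the interface questions above, coordination across multiple runtime systems running concurrently on a system, can be illustrated with a small sketch. The `ResourceBroker` below is a hypothetical centralized arbiter through which co-resident runtimes negotiate cores; as noted above, such an interface could equally be distributed, and the names and best-effort policy here are assumptions for illustration only.

```python
class ResourceBroker:
    """Hypothetical centralized interface through which runtimes
    sharing a node negotiate compute resources (cores).
    A real design could be distributed and could arbitrate memory,
    power, and bandwidth as well."""
    def __init__(self, total_cores):
        self.free = total_cores
        self.grants = {}  # runtime_id -> cores currently held

    def request(self, runtime_id, cores):
        # Best-effort grant: give whatever is available, up to the ask.
        granted = min(cores, self.free)
        self.free -= granted
        self.grants[runtime_id] = self.grants.get(runtime_id, 0) + granted
        return granted

    def release(self, runtime_id, cores):
        # A runtime returns cores it no longer needs.
        cores = min(cores, self.grants.get(runtime_id, 0))
        self.grants[runtime_id] -= cores
        self.free += cores

broker = ResourceBroker(total_cores=16)
print(broker.request("openmp", 12))  # 12 granted
print(broker.request("mpi", 8))      # only 4 cores left -> 4 granted
broker.release("openmp", 6)
print(broker.request("mpi", 8))      # 6 freed cores -> 6 granted
```

The interesting design questions start exactly where this sketch stops: preemption versus voluntary release, fairness across runtimes, and whether grants carry quality-of-service guarantees.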

*** HOMEWORK - Sanjay, Vivek, Thomas, Costin, Ron (composability), Vijay, Milind ***

  • Refine major components and their definition
  • For each component, describe its interfaces


Runtime Systems Vision

Inputs:

  • Enable efficient exploitation of exascale hardware resources by applications.
  • Address the identified exascale challenges.
  • Exploit runtime information not available to compilers or programs.
  • Respond to asynchrony.
  • Deliver the services identified.

First cut:

Enable efficient application execution on exascale hardware with runtime systems that address the need for massive hierarchical concurrency, data movement minimization, failure tolerance, adaptation to performance variability, and management of energy and system resources.

*** HOMEWORK - Kathy ***

Send us the vision statement, with input from: Sanjay, Vivek, Thomas, Costin, Ron (composability), Vijay, Milind

How Do We Measure Success

Proposed high-level criteria

  • Efficiency, scalability, productivity
  • Reliability, power management
  • Move from static control to dynamic control; introspection
  • Move programming burden from programmer to system
  • Heterogeneity
  • Strong scaling and greater generality
  • SLOWER: starvation, latency, overhead, waiting, energy, resilience.
  • How well is energy conserved?
  • How do we measure runtime ability to handle heterogeneity and variability?
  • How do we measure resilience?
  • How do we measure the ability to handle load imbalances?
  • How do we measure scalability?

Micro benchmarks and mini-apps

How to measure? Metrics

  • Time, work, energy
  • Idleness
  • Combined task scheduling and communication metric
  • Flexibility of the system (doing new things quickly)
  • Overhead: not orthogonal to starvation; imposes a lower bound on the thread granularity that can be exploited, which reduces concurrency. Sanjay can explain how.
  • Need to engage performance tools.
  • What is the key bottleneck?
  • Ease of programming/productivity metric: what could that be?

Plan to create Roadmap for Runtime Research

New runtime research

  • runtime mechanisms to extract parallelism
  • Proof that dynamic adaptive runtime systems are (or are not) needed due to variability: simulation modeling? When can we get this done? Need trends; need to know bounds.
  • Metrics crosscutting with existing and new research
  • Programming interfaces for programmer engagement
  • Composability management
  • Integration with IO, network, storage
  • Debugging, debugging, debugging
  • Compute everywhere: adds challenges for debugging
  • Distributed algorithms for scheduling that scales
  • Workflow usage models
  • Improved micro-benchmarks, mini-apps that exploit runtime systems attributes
  • Dynamic, interactive steering
  • Energy consumption/ power management
  • Make the machine more useable by sys admin
  • Learning runtime with observations
  • How to deal with variability
  • Interoperability of runtime with workflow and job scheduler and in-situ analytics: models of use

Integration into an open-source community runtime: Modelado

Testing/validation for ASCR/NNSA apps running at scale

Major milestones and time-line

  • Should follow the hardware timeline
  • P0: petascale node by 2017
  • P1: exascale node by 2019
  • P2: exascale cabinet prototype by 2022

Requirement

  • demonstrate the benefits of dynamic adaptive runtimes for regular apps (2014-2015)

P0

  1. evaluation of proxy apps with different runtimes, exercising composability
  2. identifying hardware dependencies and pruning the list
  3. demonstrate that runtimes can scale up to petascale
  4. intermediate representation identified/specified
  5. Models and evaluation methodologies
  6. Model for compute everywhere
  7. Model for debuggability
  8. demonstrate in a multi-node context
  9. demonstrate explicit management of memory, or the other way around, if it can be done by 2017

P1

  1. evaluation of larger proxy apps with different runtimes, exercising composability
  2. refining hardware dependencies list
  3. demonstrate benefits of intermediate representation
  4. demonstrate runtime mechanisms to extract parallelism on exascale context
  5. demonstrate that runtimes can scale up to exascale
  6. validation of Models and evaluation methodologies
  7. validation of Model for compute everywhere
  8. validation of Model for debuggability
  9. demonstrate in a multi-node context
  10. demonstrate explicit management of memory, or the other way around

P2

  1. evaluation of apps running at scale with different runtimes
  2. refining hardware dependencies list
  3. demonstrate benefits of intermediate representation
  4. demonstrate runtime mechanisms to extract parallelism in an exascale context
  5. demonstrate that runtimes can scale up to exascale
  6. validation of Models and evaluation methodologies at scale
  7. validation of Model for compute everywhere at scale
  8. validation of Model for debuggability at scale

*** HOMEWORK - Thomas, Wilf, Ron, Costin ***

  • Refine the milestones for P0, P1, and P2


Plan for Writing the Report

Discussion: what is the main message to be conveyed? What if the research agenda we are proposing doesn’t get done? What if we don’t have the means to push research into an open-source community framework?
  • Proposed outline
    • Top Challenges
    • Comprehensive set of questions to be answered
    • State-of-the-art: How are the challenges and questions addressed in existing runtime systems that we want to leverage?
    • Towards a Unified Runtime Systems Architecture
      • Components
      • Interfaces
    • Conclusion
      • Recommendations towards jointly evolving vision of unified runtime systems architecture
      • Recommendations on the roadmap to Runtime Systems Research
      • Recommendations regarding workshops
  • Schedule:
    • First draft: October 17, 2014 (target 10 pages)
    • Present at X-Stack meeting and collect feedback from X-Stack community
    • Coordination calls in June and July.
    • Final draft: Dec 19, 2014

*** HOMEWORK - ALL ***

  • Send Sonia proposed changes to the outline and volunteer for report section by September 15
  • Assignments to be decided on the call, week of Sept 15