Communications

Project PIs:
XPRESS: Ron Brightwell
TG: Shekhar Borkar
DEGAS: Katherine Yelick
D-TEC: Daniel Quinlan
DynAX: Guang Gao
X-TUNE: Mary Hall
GVR: Andrew Chien
CORVETTE: Koushik Sen
SLEEC: Milind Kulkarni
PIPER: Martin Schulz

Questions:

What are the communication "primitives" that you expect to emphasize within your project (e.g. two-sided vs. one-sided, collectives, topologies, groups)? Do we need to define extensions to the traditional application-level interfaces, which now emphasize only data transfers and collective operations? Do we need atomics and remote invocation interfaces, or should these be provided ad hoc by clients?
XPRESS The communication primitive is based on the "parcel" protocol, an expanded form of active messages that operates within a global address space distributed across "localities" (approximately nodes). Logical destinations are hierarchical global names; actions include instantiation of threads and ParalleX processes (spanning multiple localities), data movement, compound atomic operations, and OS calls. Continuations determine follow-on actions, and the payload conveys data operands and block data for moves. Parcels are an integral component of the semantics of the ParalleX execution model, extending the synchronous semantics of local processing on a locality (node) symmetrically into the domain of asynchronous distributed processing.
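A minimal sketch of the kind of descriptor such a parcel implies, based only on the description above; the field names and types are illustrative, not the actual XPRESS interface:

    /* Minimal sketch of a parcel descriptor, based only on the description
     * above; field names and types are illustrative, not the XPRESS API. */
    #include <stddef.h>
    #include <stdint.h>

    typedef uint64_t gaddr_t;                 /* hierarchical global name */

    typedef struct parcel {
        gaddr_t  target;                      /* logical destination in the global address space */
        void   (*action)(void *args);         /* thread/process instantiation, data move,
                                                 compound atomic operation, or OS call */
        void   (*continuation)(void *result); /* follow-on action once the action completes */
        size_t   payload_size;
        uint8_t  payload[];                   /* data operands or block data for moves */
    } parcel_t;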
TG
DEGAS The GASNet-EX communications library provides Active Messages (AM), one-sided data-movement and collectives as its primary communications primitives. Secondary primitives, such as atomics, may be emulated via AM or implemented through native hardware when available. The programming models in the DEGAS project use these primitives for many purposes. The Active Message primitives support the asynchronous remote invocation operations present in the Habanero and UPC++ efforts, while atomics will provide efficient point-to-point synchronization.
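The point above that atomics "may be emulated via AM" can be illustrated with a small sketch; the am_request/am_reply calls below are hypothetical stand-ins, not the actual GASNet-EX signatures:

    /* Sketch: remote fetch-and-add emulated via an Active Message handler.
     * am_request()/am_reply() are hypothetical stand-ins for the AM layer. */
    #include <stdatomic.h>
    #include <stdint.h>

    typedef void (*am_handler_t)(int src, void *arg0, uint64_t arg1);

    extern void am_request(int dest, am_handler_t h, void *arg0, uint64_t arg1); /* hypothetical */
    extern void am_reply(int src, uint64_t value);                               /* hypothetical */

    static void fadd_handler(int src, void *addr, uint64_t delta)
    {
        /* runs on the node that owns the target word */
        uint64_t old = atomic_fetch_add((_Atomic uint64_t *)addr, delta);
        am_reply(src, old);          /* send the previous value back to the requester */
    }

    void remote_fetch_add(int dest, void *remote_addr, uint64_t delta)
    {
        am_request(dest, fadd_handler, remote_addr, delta);
    }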
D-TEC The primary focus is computation via asynchronous tasks. The primary communication primitive is (reliably delivered) fire-and-forget active messages. Higher-level behavior (finish, at) is synthesized by the APGAS runtime on top of the active-message primitive. However, for performance at scale we recognize the importance of additional primitives: both non-blocking collectives and one-sided asynchronous RDMAs for point-to-point bulk data transfer.
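As an illustration of how finish/at can be synthesized over fire-and-forget active messages, here is a simplified sketch; send_active_message and the single global finish scope are assumptions, not the APGAS runtime's real interface, and the return of the completion message from the remote side is omitted:

    /* Sketch: synthesizing "at" (remote async task) and "finish" (termination
     * detection) on top of reliably delivered fire-and-forget active messages.
     * send_active_message() is a hypothetical stand-in for the transport. */
    #include <stdatomic.h>

    typedef struct finish { atomic_int pending; } finish_t;

    extern void send_active_message(int place, void (*fn)(void *), void *args); /* hypothetical */

    static finish_t *current_finish;     /* simplified: one finish scope at a time */

    void finish_begin(finish_t *f)
    {
        atomic_store(&f->pending, 0);
        current_finish = f;
    }

    void task_done(void *f)              /* runs back at the spawning place */
    {
        atomic_fetch_sub(&((finish_t *)f)->pending, 1);
    }

    void at_async(int place, void (*body)(void *), void *args)
    {
        atomic_fetch_add(&current_finish->pending, 1);   /* register the child task */
        send_active_message(place, body, args);          /* fire-and-forget launch */
        /* the remote side must send task_done back home when body completes (omitted) */
    }

    void finish_wait(finish_t *f)
    {
        while (atomic_load(&f->pending) != 0)
            ;   /* a real runtime would yield to the scheduler instead of spinning */
    }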
DynAX SWARM uses split-phase asynchronous communications operations with separate setup and callback phases, with a common co-/subroutine-call protocol on top of this. Method calls can easily be relocated by SWARM, and within this infrastructure we additionally provide for split-phase transfers and collective operations. Non-blocking one-sided communication is the only real need, though secondary features (such as remote atomics) might be beneficial.
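A rough sketch of a split-phase transfer in this style, assuming a hypothetical net_get_async call rather than SWARM's actual API:

    /* Sketch: a split-phase one-sided get with separate setup and callback
     * phases, per the description above. net_get_async() is hypothetical. */
    #include <stddef.h>

    typedef void (*codelet_t)(void *ctx);

    /* hypothetical: start a one-sided transfer, then schedule 'done' when it lands */
    extern void net_get_async(void *dst, int src_node, const void *src, size_t n,
                              codelet_t done, void *ctx);

    static void consume(void *ctx)          /* callback phase: runs once the data is local */
    {
        double *block = ctx;
        /* ... compute on block ... */
        (void)block;
    }

    void fetch_and_compute(double *local_buf, int owner, const double *remote, size_t n)
    {
        /* setup phase: initiate the transfer and return immediately;
           the runtime invokes consume() once the data has arrived */
        net_get_async(local_buf, owner, remote, n * sizeof(double), consume, local_buf);
    }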
X-TUNE X-TUNE is primarily focused on introducing thread-level parallelism and synchronization, and is relying on programmers or other tools to manage communication. Support in X-TUNE could be used to perform autotuning of communication primitives, but this is beyond the scope of our activities.
GVR
CORVETTE
SLEEC N/A
PIPER Communication will be out of band and needs to be isolated, with an emphasis on streaming communication.
Traditional communication libraries (e.g. MPI or GASNet) have been developed without tight integration with the computation "model". What is your strategy for integrating communication and computation to address the needs of non-SPMD execution?
XPRESS It is important for performance portability that most communication-related code comprise invariants that will always hold. Aggregation, routing, order sensitivity, time to arrival, and error management should be transparent to the user. Destinations should, in most cases, be relative to the placement of first-class objects to support adaptive placement and routing. Scheduling should tolerate the asynchrony and uncertainty of message delivery without forfeiting performance, assuming sufficient parallelism.
TG Our runtime can rely on existing threading frameworks (we use pthreads, for example), but we do not strictly need them; we use the existing threading framework only to emulate a computing resource.
DEGAS DEGAS is extending the GASNet APIs to produce GASNet-EX. GASNet-EX is designed to support the computation model rather than dictate it. Unlike the current GASNet, GASNet-EX allows (but does not require) treatment of threads as first-class entities (as in the MPI endpoints proposal), allowing efficient mapping of non-SPMD execution models, e.g. Habanero, that are impractical or inefficient today.
D-TEC It is not clear to us that tight integration of communication libraries and the computation model is needed to support non-SPMD execution. The X10/APGAS runtime supports non-SPMD execution at scale while maintaining a fairly strict separation between the communication layer (X10RT) and the computational model. X10RT provides basic active message facilities, but all higher-level computational model concerns are handled above the X10RT layer of the runtime. However, there are certainly opportunities to optimize some of the small "control" messages sent over the X10RT transport by the APGAS runtime layer by off-loading pieces of runtime logic into message handlers that could run directly within the network software/hardware. Pushing this function down simply requires the network layer to allow execution of user-provided handlers, not a true integration of the computation model into the communication library.
DynAX SWARM's codelet model can be used to effect split-phase co-/subroutine calls, whether or not there are networking features present. Applications can control the routing of these calls and their associated data explicitly, but this is not necessary unless higher-level partitioning schemes are being used. This scheme allows any computation encompassed by one or more codelets to be relocated, allowing both data and computation to be relocated transparently and enabling scaling from a single hardware thread to thousands of nodes without rewriting the application.
X-TUNE
GVR
CORVETTE
SLEEC Computation methods should express their dependences so that SLEEC's runtime(s) can manage communication within a heterogeneous node.
PIPER N/A
What type of optimizations should be transparently provided by a communication layer and what should be delegated to compilers or application developers? What is the primary performance metric for your runtime?
XPRESS Time to solution of application workload, with minimum energy cost within that scope.
TG
DEGAS Communications libraries and neighboring runtime layers should be responsible only for dynamic optimization of communication. Examples of such optimizations include: aggregation of messages with the same destination, scheduling multiple links, and injection control for congestion avoidance. Compilers or application developers should be responsible for static optimizations such as communication avoidance, hot-spot elimination, etc.

Primary metrics for communications runtime include latency of short messages, bandwidth of large messages, and communication/computation overlap opportunity during long-latency operations. Reduction of energy is a metric for the stack as a whole and may be more dependent on avoiding communication than on optimizing it (see also energy-related question).
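As an illustration of the first transparent optimization listed above (aggregation of messages with the same destination), a simplified per-destination coalescing buffer might look like the following; transport_send, MAX_NODES, the buffer size, and the flush policy are all assumptions:

    /* Sketch: dynamic aggregation of small messages with the same destination.
     * transport_send() and the sizing constants are hypothetical. Messages
     * larger than the buffer are assumed to bypass aggregation entirely. */
    #include <stddef.h>
    #include <string.h>

    #define MAX_NODES      1024
    #define COALESCE_BYTES 4096

    extern void transport_send(int dest, const void *buf, size_t n);  /* hypothetical */

    struct agg_buf { char data[COALESCE_BYTES]; size_t used; };
    static struct agg_buf outbox[MAX_NODES];       /* one coalescing buffer per destination */

    void agg_flush(int dest)
    {
        if (outbox[dest].used) {
            transport_send(dest, outbox[dest].data, outbox[dest].used);
            outbox[dest].used = 0;
        }
    }

    void agg_send(int dest, const void *msg, size_t n)
    {
        if (outbox[dest].used + n > COALESCE_BYTES)
            agg_flush(dest);                       /* buffer full: push it out */
        memcpy(outbox[dest].data + outbox[dest].used, msg, n);
        outbox[dest].used += n;
        /* a real layer would also flush on a timer to bound the added latency */
    }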

D-TEC Under the assumption that the communication layer is not tightly integrated with the computation model, the scope of transparent optimization seems limited to optimizing the flow of traffic within the network. The communication layer could also provide performance diagnostics and control points to higher levels of the runtime to enable them to optimize communication behavior. Optimizations need to be planned/managed at a level of the stack that has sufficient scope to make good decisions.
DynAX A communications layer should ideally be able to load-balance both work and data without application involvement, using optional application-provided placement hints to assist in the process. Compilers should deal more with transforming higher-level language features like data types and method calls into SWARM constructs, and although compilers may generate hints, this will likely have to be the responsibility of the developer and tuner. The primary (external) metrics used are time to application completion and energy cost.
X-TUNE
GVR
CORVETTE
SLEEC N/A
PIPER Unclear.
What is your strategy towards resilient communication libraries?
XPRESS To first order, the runtime system assumes correct operation of the communication libraries being pursued, Portals-4 and the experimental Photon communication fabric. Under NNSA PSAAP-2, the micro-checkpoint Compute-Validate-Commit cycle will detect errors, including those due to communication failures.
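A schematic of such a micro-checkpoint Compute-Validate-Commit cycle; all of the hooks below are hypothetical placeholders, not the XPRESS API:

    /* Schematic Compute-Validate-Commit loop with micro-checkpoints; the
     * compute/validate/commit/rollback hooks are illustrative placeholders. */
    #include <stdbool.h>

    typedef struct state state_t;                   /* opaque application/runtime state */

    extern void micro_checkpoint(const state_t *s); /* save a small, recent snapshot */
    extern void compute_step(state_t *s);           /* speculative computation */
    extern bool validate(const state_t *s);         /* detect errors, incl. comm failures */
    extern void commit(state_t *s);                 /* make the step's results durable */
    extern void rollback(state_t *s);               /* restore the last snapshot */

    void cvc_loop(state_t *s, int steps)
    {
        for (int i = 0; i < steps; i++) {
            micro_checkpoint(s);        /* bound the work that may need to be replayed */
            compute_step(s);
            if (validate(s)) {
                commit(s);
            } else {
                rollback(s);            /* error detected: discard and retry the step */
                i--;
            }
        }
    }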
TG
DEGAS DEGAS is pursuing a hybrid approach to resilience which consists of both backward recovery (rollback-recovery of state via checkpoint/restart) and forward recovery (recompute or re-communicate faulty state via Containment Domains), working together in the same run. The ability of Containment Domains to isolate faults and to perform most recovery locally is ideal for most "soft errors", while the use of rollback-recovery is appropriate to hard node crashes. The combination of the two not only reduces the frequency of checkpoints required to provide effective protection, but also limits the type of errors that an application programmer must tolerate. Further, our approach allows the scope of rollback-recovery to be limited to subsets of the nodes and, in some cases, only the faulty nodes need to perform recovery.

The communications library supports each resilience mechanism in appropriate ways. For rollback-recovery, GASNet-EX must include a mechanism to capture a consistent state, a significantly more challenging problem with one-sided communication than in a message-passing system, especially if one does not wish to quiesce all application communications for a consistent checkpoint. For Containment Domains, GASNet-EX must run through communications failures by reacting (not aborting), by notifying other runtime components, by enabling these components to take appropriate actions, and by preventing resource leaks associated with (for instance) now-unreachable peers.
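For concreteness, the local re-execution pattern of a containment domain could look roughly like the sketch below; the cd_* names are illustrative placeholders, not the actual Containment Domains API:

    /* Simplified sketch of the containment-domain pattern: preserve inputs,
     * run the body, detect faults, and recover locally by re-execution. */
    #include <stdbool.h>
    #include <stddef.h>

    extern void cd_preserve(const void *data, size_t n);  /* hypothetical: save inputs */
    extern void cd_restore(void *data, size_t n);         /* hypothetical: restore inputs */
    extern bool cd_detect(void);                          /* hypothetical: fault check */

    void run_in_containment_domain(void (*body)(void *), void *data, size_t n)
    {
        cd_preserve(data, n);           /* keep enough state to re-execute locally */
        for (;;) {
            body(data);
            if (!cd_detect())
                break;                  /* no fault detected: the domain completes */
            cd_restore(data, n);        /* soft error: restore inputs and re-execute,
                                           so the fault never escapes this domain */
        }
    }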

D-TEC This is not an area of research for D-TEC. We are assuming the low-level communication libraries will (at least as visible to our layer) operate correctly or report faults when they do not. Any faults reported by the underlying communication library will be reflected up to higher levels of the runtime stack.
DynAX The DynAX project will focus on resilience in year three. As such, it is not yet clear what SWARM's reliability requirements of the communication layer will be. We expect error recovery to occur at the level of the application kernel, using containment domains to restart a failed tile operation, algorithmic iteration, or application kernel when an error is detected. The success of this will hinge on whether errors can be reliably detected and reliably acted upon.
X-TUNE N/A
GVR
CORVETTE
SLEEC N/A
PIPER Ability to drop and reroute around failed processes.
In what ways can a communication layer help with power and energy optimizations?
XPRESS Energy waste on unused channels needs to be prevented. Delays due to contention for hotspots need to be mitigated through dynamic routing. Information on message traffic, granularity, and power needs to be provided to OSR.
TG
DEGAS If/when applications become blocked waiting for communications to complete, one should consider energy-aware mechanisms for blocking. Other than that, most mechanisms for energy reduction with respect to communication are also effective for reducing time-to-solution and are likely to be studied in that context (where the metric is easier to measure).
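One common shape for such energy-aware blocking is to spin briefly and then yield to the OS; a small sketch under that assumption (the spin limit and the use of sched_yield are illustrative):

    /* Sketch: spin briefly for low latency, then yield so an idle core can
     * drop into a lower-power state. Real code would block on a NIC or OS
     * event rather than calling sched_yield(). */
    #include <sched.h>
    #include <stdatomic.h>

    #define SPIN_LIMIT 1000

    void wait_for_completion(atomic_int *done)
    {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (atomic_load(done))
                return;                 /* completed while spinning: lowest latency */
        }
        while (!atomic_load(done))
            sched_yield();              /* give up the core; allows DVFS / sleep states */
    }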
D-TEC
DynAX Data transfers will occupy a large portion of the energy budget, so minimizing the need for data movement will greatly improve energy consumption. This can be done by ensuring that, whenever possible, work is forwarded to the data and not vice versa. SWARM's codelet model makes this quasi-transparent to the programmer, although the runtime itself must perform the work-forwarding and any data relocations that are needed. Hints from the compiler or application programmer/tuner can assist the runtime in this and further decrease energy consumption.

As part of PEDAL (Power Efficient Data Abstraction Layer), we are also developing an additional software layer that encapsulates data composites. This process assigns specific layouts, transformations, and operators to the composites that can be used advantageously to reduce power and energy costs. A similar process will be applicable to resiliency as well.
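A minimal sketch of the "forward work to the data" decision described above; locale_of, my_locale, and spawn_remote are hypothetical placeholders, not SWARM calls:

    /* Sketch: forward the work to the data rather than the data to the work. */
    typedef void (*codelet_t)(void *data);

    extern int  locale_of(const void *data);                        /* hypothetical: who owns the data? */
    extern int  my_locale(void);                                    /* hypothetical */
    extern void spawn_remote(int locale, codelet_t c, void *data);  /* hypothetical: ship the small closure */

    void schedule_near_data(codelet_t c, void *data)
    {
        int owner = locale_of(data);
        if (owner == my_locale())
            c(data);                      /* data already local: run in place */
        else
            spawn_remote(owner, c, data); /* move a small codelet instead of a large
                                             data block, saving transfer energy */
    }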

X-TUNE N/A
GVR
CORVETTE
SLEEC N/A
PIPER N/A
Congestion management and flow control mechanisms are of particular concern at very large scale. How much can we rely on "vendor" mechanisms and how much do we need to address in higher level layers?
XPRESS Vendor systems can help with redundant paths and dynamic routing. Runtime system data and task placement can attempt to maximize locality for reduced message traffic contention.
TG
DEGAS As others have observed, the first line of defense against congestion is intelligent placement of tasks and their data. This is the domain of the tasking runtime and the application author.

Ideally, the vendor systems would provide some degree of congestion management. This would use information not necessarily available to the communications runtime, e.g. static information about their network, dynamic information about application traffic, and traffic from other jobs. However, compilers and runtime components with "macro" communications behaviors, i.e. collectives or other structured communications, could potentially inform the communications layer about near-future communications, where this information can be used to build congestion-avoiding communications schedules. These scheduling approaches can be greatly enhanced if the vendors expose information about current network conditions, particularly for networks where multiple jobs share the same links.

D-TEC Higher levels should focus on task & data placement to increase locality and reduce redundant communication. Placement should also be aware of network topology and optimize towards keeping frequently communicating tasks in the same "neighborhood" when possible. Micro-optimization of routing, congestion management, and flow control are probably most effectively handled by system/vendor mechanisms since it may require detailed understanding of the internals of network software/hardware and the available dynamic information.
DynAX Vendor mechanisms are helpful, but not necessary, for ensuring the correctness and timeliness of low-level data transfers. SWARM itself uses a higher-level flow-control mechanism based on codelet completion (i.e., using callbacks as a basis for issuing ACKs). SWARM also performs load-balancing on work and data to help minimize congestion and contention.
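A simple way to picture callback-based flow control of this kind is a credit counter that completion callbacks replenish; the sketch below is illustrative, not SWARM's actual mechanism:

    /* Sketch: flow control built on completion callbacks. Each in-flight
     * message consumes a credit; the completion callback (acting as the ACK)
     * returns it. Names and the credit limit are illustrative. */
    #include <sched.h>
    #include <stdatomic.h>
    #include <stddef.h>

    #define MAX_IN_FLIGHT 64

    static atomic_int credits = MAX_IN_FLIGHT;

    extern void net_send(int dest, const void *buf, size_t n,
                         void (*on_complete)(void *), void *ctx);   /* hypothetical */

    static void return_credit(void *ctx)
    {
        (void)ctx;
        atomic_fetch_add(&credits, 1);    /* remote completion observed: release a credit */
    }

    void send_with_flow_control(int dest, const void *buf, size_t n)
    {
        while (atomic_load(&credits) == 0)
            sched_yield();                /* back off instead of flooding the fabric */
        atomic_fetch_sub(&credits, 1);
        net_send(dest, buf, n, return_credit, NULL);
    }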
X-TUNE N/A
GVR
CORVETTE
SLEEC N/A
PIPER N/A