Resilience
From Modelado Foundation
Sonia requested that Andrew Chien initiate this page. For comments, please contact Andrew Chien.
QUESTIONS | XPRESS | TG X-Stack | DEGAS | D-TEC | DynAX | X-TUNE | GVR | CORVETTE | SLEEC | PIPER |
---|---|---|---|---|---|---|---|---|---|---|
PI | Ron Brightwell | Shekhar Borkar | Katherine Yelick | Daniel Quinlan | Guang Gao | Mary Hall | Andrew Chien | Koushik Sen | Milind Kulkarni | Martin Schulz |

Describe your approach to resilience and its dependence on other programming, runtime, or resilience technologies (i.e., uses lower-level mechanisms from hardware or lower-level software, depends on higher-level management, creates new mechanisms). | (EXPRESS) | (TG) | (DEGAS)
Our approach to resilience comprises three principal technologies. First, Containment Domains (CDs) are an application-facing resilience technology. Second, we introduce error-handling and recovery routines into the communications runtime (GASNet-EX) and its clients, e.g. UPC or CAF, to handle errors and return the system to a consistent global state after an error. Third, we provide low-level state preservation, via node-level checkpoints, for the application and runtime to complement their native error-handling routines. Effective use of CDs requires application and library writers to write application-level error detection, state preservation, and fault-recovery schemes. For GASNet-EX, we would like mechanisms to identify failed nodes. Although we expect to implement timeout-based failure detection, we would like system-specific RAS functions to provide explicit notification of failures and consensus on such failures, as we expect vendor-provided RAS mechanisms to be more efficient, more accurate, and more responsive than a generic timeout-based mechanism. Finally, our resilience schemes depend on fast durable storage for lightweight state preservation. We are designing schemes that use local or peer storage (disk or memory) for high-volume (size) state preservation, and in-memory storage for high-traffic (I/O) state preservation; this requires the hardware to be present. Since our I/O bottleneck is largely sequential writes with little or no reuse, almost any form of persistent storage technology is suitable. |
(D-TEC) | (DynAX) The DynAX project will focus on resilience in year three. The general approach will be to integrate Containment Domains into the runtime and adapt proxy applications to use them. This will depend on the application developer (or a smart compiler) identifying key points in the control flow where resumption can occur after a failure, and identifying the set of data which must be saved and restored in order to do so. It also relies on some mechanism to determine when a failure has occurred and which tasks were affected. This will vary from one system and type of failure to the next; examples include a hardware ECC failure notification, a software data-verification step that detects invalid output, or even a simple timeout. Finally, it will require a way for the user to provide criteria for how often to save resilience information; the runtime may choose to snapshot at every opportunity it is given, or only at a subset of those opportunities, to meet some power/performance goal (see the sketch below). | (X-TUNE) | (GVR) | (CORVETTE) | SLEEC | (PIPER) |
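
To make that pattern concrete, here is a minimal sketch of a containment-domain-style task under the assumptions above. The names (Domain, preserve, restore, output_is_valid) are hypothetical placeholders rather than the real CD or runtime API; the sketch only shows the shape described in the answers: preserve the data needed for resumption on entry, verify the output on exit, and re-execute the domain from the preserved state when a failure is detected.

```cpp
#include <cstdio>
#include <cmath>
#include <vector>

// Hypothetical containment-domain sketch: preserve inputs on entry,
// verify outputs on exit, and re-execute the domain on failure.
struct Domain {
    std::vector<double> preserved;           // data needed to resume
    void preserve(const std::vector<double>& v) { preserved = v; }
    const std::vector<double>& restore() const { return preserved; }
};

// Stand-in for application work; a real task might instead trip an ECC
// notification or fail a timeout.
static std::vector<double> do_work(const std::vector<double>& in) {
    std::vector<double> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) out[i] = std::sqrt(in[i]);
    return out;
}

// Software verification step (one of the detection mechanisms mentioned
// above); here we simply check for NaNs in the output.
static bool output_is_valid(const std::vector<double>& out) {
    for (double x : out) if (std::isnan(x)) return false;
    return true;
}

int main() {
    std::vector<double> data = {1.0, 4.0, 9.0};
    Domain cd;
    cd.preserve(data);                        // enter domain: snapshot inputs

    std::vector<double> result;
    const int max_retries = 3;
    for (int attempt = 0; attempt < max_retries; ++attempt) {
        result = do_work(cd.restore());       // (re)execute from preserved state
        if (output_is_valid(result)) break;   // exit domain: verification passed
        std::printf("failure detected, re-executing domain (attempt %d)\n", attempt + 1);
    }
    std::printf("result[0] = %.1f\n", result[0]);
    return 0;
}
```

In a real runtime the verification step would be replaced by whichever detection mechanism applies (ECC notification, data verification, or timeout), and the snapshot frequency would be throttled according to the user's power/performance criteria.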

One challenging problem for exascale systems is that projections of soft error rates and hardware lifetimes (wearout) span a wide range, from a modest increase over current systems to as much as a 100-fold increase. How does your system scale in resilience to ensure effective exascale capabilities on both the varied systems that are likely to exist and varied operating points (power, error rate)? | (EXPRESS) | (TG) | (DEGAS)
Our resilience technologies provide tremendous flexibility in handling faults. In our hybrid user-level and system-level resilience scheme, CDs provide lightweight error recovery that enables the application to isolate the effects of faults to specific portions of the code, thus localizing error recovery. With nested CDs, an inner CD can decide how to handle an error or propagate it to a parent CD; if no CD handles the error locally, we use a global rollback to hide the fault (see the escalation sketch below). With this approach, the use of local CDs for isolated recovery limits the global restart rate. |
(D-TEC) | (DynAX) In a recursive tree hierarchy of containment domains, each level will have successively finer-grained resilience, with fewer affected tasks and less redundant computation if the domain restarts due to a failure. Containment domains are isolated from each other, so no extra communication or synchronization is necessary between them. The three dependencies mentioned above (identification of containment domains, identification of failures, and the cost function) should be equally applicable to any exascale system. The only hardware dependence we anticipate is the hardware's ability to identify failures as they occur, and the details of how those failures are reported. The system can be tuned to act more or less conservatively by ignoring some containment domains and enforcing others. The most conservative approach is to back up the necessary data for, and check the results of, every containment domain in the application. The least conservative case is to act only on the outermost containment domain, i.e. simply retain the program image and the parameters it was launched with, and if a failure occurs anywhere in the application, the entire run is discarded and restarted from the beginning. In practice the configuration usually falls somewhere between those two extremes: the runtime can pick and choose which containment domains to act on, based on cost/benefit analysis (discussed further in the question below on application semantics information). | (X-TUNE) | (GVR) | (CORVETTE) | N/A | (PIPER) |
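
To illustrate the escalation path described in the DEGAS answer above, the following sketch (hypothetical names, not the actual CD runtime) shows an error being recovered by the innermost containment domain able to handle it, propagated to the parent otherwise, and resolved by a global rollback only when no enclosing domain handles it.

```cpp
#include <cstdio>
#include <functional>
#include <stdexcept>

// Hypothetical nested containment-domain sketch: each domain runs its body,
// and on error either recovers locally or rethrows so the parent domain
// (and ultimately the global-rollback handler) deals with it.
struct DetectedError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

void run_domain(const char* name, bool can_recover_locally,
                const std::function<void()>& body) {
    try {
        body();
    } catch (const DetectedError& e) {
        if (can_recover_locally) {
            std::printf("%s: recovering locally from '%s'\n", name, e.what());
            // ...restore preserved state and re-execute the body here...
        } else {
            std::printf("%s: cannot recover, escalating to parent\n", name);
            throw;  // propagate to the enclosing containment domain
        }
    }
}

int main() {
    try {
        run_domain("outer CD", /*can_recover_locally=*/false, [] {
            run_domain("inner CD", /*can_recover_locally=*/true, [] {
                throw DetectedError("soft error in inner task");
            });
            run_domain("sibling CD", /*can_recover_locally=*/false, [] {
                throw DetectedError("unrecoverable node failure");
            });
        });
    } catch (const DetectedError&) {
        std::printf("no CD handled the error: falling back to global rollback\n");
    }
    return 0;
}
```

The "more conservative / less conservative" tuning described in the DynAX answer corresponds to choosing how many of these domains actually preserve state and check results, rather than passing errors straight to the outermost level.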

What opportunities are there to improve resilience, or the efficiency of resilience, by exporting/exploiting runtime or application semantics information in your system? | (EXPRESS) | (TG) | (DEGAS)
In DEGAS, applications express the semantic information required for resilience through the CD API; that is the point of CDs. The CD hierarchy, i.e. its boundary and nesting structure, describes the communication dependencies of the application in a way that is otherwise not visible to the underlying runtime or communication systems. In contrast, a transparent checkpointing scheme must discover the dependency structure of an application by tracking (or preventing) receive events to construct a valid recovery line. The CD structure makes such schemes unnecessary, as the recovery line is described by the CD hierarchy. For applications that do not use CDs, we fall back to transparent checkpointing. Here, we rely on the runtime to discover communication dependencies, advance the recovery line, and perform rollback propagation as required. One clear opportunity to improve the efficiency of resilience is to exploit the one-sided semantics of UPC to track rollback dependencies. We are starting by calculating message (receive) dependencies inside the runtime, but we are interested in tracking memory (read/write) dependencies as a possible optimization. A second area for improved efficiency is dynamically adjusting the checkpoint interval (recovery-line advance) according to job size at runtime. An application running on 100K nodes needs to checkpoint only half as often as one running on 400K nodes; similarly for error rates, by tolerating 3/4 of the errors, checkpoint overhead is halved (see the worked example below). Such considerations suggest that static (compile-time) resilience policies should be supplemented with runtime policies that take into account the number of nodes, failure (restart) rates, checkpoint times, memory size, etc. to ensure good efficiency. |
(D-TEC) | (DynAX) The runtime decides whether or not to act on each individual containment domain. It chooses which domains to act upon based on tuning parameters from the user, the data size, and the estimated amount of parallel computation involved in completing the work associated with a containment domain. The data size is easy to determine at runtime, and the execution information can be provided by runtime analysis of previous tasks or by hints from the application developer or a smart compiler. Using that information, and resilience tuning parameters provided by the user, the runtime can do a simple cost/benefit analysis to determine which containment domains are the best ones to act upon to achieve the user's performance/resilience goal. | (X-TUNE) | (GVR) | (CORVETTE) | N/A | (PIPER) |
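
The checkpoint-interval scaling asserted in the DEGAS answer above can be checked against the standard Young/Daly first-order model for the optimal checkpoint interval; this formula is a textbook approximation used here only for illustration, not something taken from the project documents.

```latex
% Young/Daly approximation for the optimal checkpoint interval
% (\delta = cost of one checkpoint, M = system mean time between failures,
%  N = node count, \lambda = per-node rate of errors that force a rollback):
\tau_{\mathrm{opt}} \approx \sqrt{2\,\delta\,M},
\qquad M \propto \frac{1}{N\,\lambda}

% Quartering the node count (400K -> 100K nodes), or tolerating 3/4 of all
% errors so that only 1/4 force a rollback, quadruples M and therefore:
\tau_{\mathrm{opt}}' = \sqrt{2\,\delta\,(4M)} = 2\,\tau_{\mathrm{opt}},
\qquad
\frac{\delta}{\tau_{\mathrm{opt}}'} = \frac{1}{2}\cdot\frac{\delta}{\tau_{\mathrm{opt}}}
\quad\text{(checkpoint overhead halved).}
```

This matches the figures in the answer: a 100K-node job checkpoints half as often as a 400K-node job, and tolerating 3/4 of the errors roughly halves the checkpoint overhead.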

What capabilities provided by resilience researchers (software or hardware) could have a significant impact on the capabilities or efficiency of resilience? | (EXPRESS) | (TG) | (DEGAS)
Fast, durable storage is a key technology for increasing the efficiency of resilience. In DEGAS we are interested in both bulk storage and logging storage. The requirements for these differ slightly: we use logging for small, high-frequency updates, possibly as much as one log entry per message, whereas we use bulk storage for large, infrequent updates, since these are used for checkpoints (see the storage sketch below). We are exploring non-inhibitory consistency algorithms, i.e. algorithms that allow messages to remain outstanding while a recovery line is advanced. One challenge we face right now is that it is difficult to order RDMA operations with respect to other message operations on a given channel. In the worst case, we are required to completely drain network channels and globally terminate communications in order to establish global consistency, i.e. agreement on which messages have been sent and received. A hardware capability that may prove valuable here is a network-level point-to-point fence operation. A similar issue on InfiniBand networks is that the current InfiniBand driver APIs require us to fully tear down network connections in order to put the hardware into a known physical state with respect to the running application process, i.e. we need to shut down the NIC to ensure that it does not modify process memory while we determine local process state. A lightweight method of disabling NIC transfers (mainly RDMA) would eliminate this teardown requirement. |
(D-TEC) | (DynAX) If the hardware can reliably detect and report soft failures (such as errors in floating-point instructions), this would avoid running extra application code to check the results, which can be rather costly. Reliable, slower memory for storing snapshot data would also be beneficial. This memory would be written with snapshot data every time a containment domain was entered, but only read while recovering from a failure, so the cost of the write is much more important than the cost of the read. | (X-TUNE) | (GVR) | (CORVETTE) | N/A | (PIPER) |
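
As a small illustration of the I/O pattern behind these requirements (many tiny, high-frequency log appends plus large, infrequent, sequential checkpoint writes with little or no reuse), here is a self-contained sketch. The file names and record layout are invented for the example; this is not the DEGAS or GASNet-EX storage layer.

```cpp
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

// Hypothetical illustration of the two storage channels described above:
// a high-frequency log of tiny per-message records, and an infrequent
// bulk checkpoint written as one large sequential stream.
struct LogRecord {            // one small entry per message
    uint64_t msg_id;
    uint64_t byte_count;
};

int main() {
    // Logging storage: many small appends (high IOPS, tiny payloads).
    std::ofstream log("message.log", std::ios::binary | std::ios::app);
    for (uint64_t i = 0; i < 1000; ++i) {
        LogRecord r{i, 4096};
        log.write(reinterpret_cast<const char*>(&r), sizeof(r));
    }
    log.flush();

    // Bulk storage: one large, sequential, write-once checkpoint image,
    // written on CD entry and read back only during recovery.
    std::vector<double> state(1 << 20, 1.0);   // stand-in for preserved state
    std::ofstream ckpt("checkpoint.img", std::ios::binary | std::ios::trunc);
    ckpt.write(reinterpret_cast<const char*>(state.data()),
               static_cast<std::streamsize>(state.size() * sizeof(double)));
    ckpt.flush();

    std::printf("wrote %zu log records and a %zu-byte checkpoint\n",
                static_cast<size_t>(1000), state.size() * sizeof(double));
    return 0;
}
```

Because both channels are dominated by sequential writes that are rarely re-read, almost any persistent storage technology (local disk, peer memory, or NVM) can serve, as the DEGAS answer to the first question notes.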