
Tolerating SEU Faults in the Raw Architecture

Karandeep Singh#, Adnan Agbaria+*, Dong-In Kang#, and Matthew French#
# USC Information Sciences Institute, Arlington, VA, USA
Email: {karan, dkang, mfrench}@isi.edu
+ IBM Haifa Research Lab, Mount Carmel, Haifa 31905, Israel
Email: adnan@il.ibm.com
* Work performed at USC/ISI

Abstract
This paper describes software fault tolerance techniques to mitigate SEU faults in the Raw architecture, a single-chip parallel tiled computing architecture. The fault tolerance techniques we use are efficient checkpointing and rollback of processor state, breakpointing, selective replication of code, and selective duplication of tiles. Our fault tolerance techniques can be implemented fully in software, without any changes to the architecture; they are transparent to the user and designed to fulfill the run-time performance and throughput requirements of the system. We illustrate these techniques by mitigating faults in a matrix multiply kernel mapped onto Raw. The proposed techniques are also applicable to other tiled architectures, and to parallel systems in general.

1. Introduction
Multi-core tiled computing architectures are becoming increasingly popular because of their good performance, throughput, and power efficiency. Multiple cores also provide an inherent redundancy that enables better fault tolerance and recovery. Raw (developed at the Massachusetts Institute of Technology) [Taylor02] is a general-purpose tiled parallel processing architecture with a compiler-programmable interconnection network. The current Raw chip has 16 processor tiles, which can scale to a larger number of tiles on future chips. The interconnection network extends off-chip, allowing large fabrics of Raw chips to be built.

To mitigate faults on Raw, we periodically checkpoint the processor state, which is stored in off-chip stable storage. The state to be checkpointed includes the data caches, data in memory, registers, and the states of the on-chip networks. Two-level asynchronous and synchronous checkpoints are taken to improve performance. Selective replication of code and selective duplication of tiles are used for fault/failure detection. Breakpoints are inserted in the code after every replication/duplication for comparing the corresponding states and detecting faults. Our analysis shows that our techniques mitigate a high percentage of SEU (single event upset) faults in Raw, with no VLSI or architecture modifications.

2. Raw Architecture
The Raw architecture consists of 16 processing tiles connected in a mesh using two types of high-performance pipelined networks: static and dynamic. The Raw chip is implemented in IBM's 180 nm, 1.8 V, 6-layer CMOS 7SF (SA-27E) copper process and has a peak throughput of 6.8 GFLOPS at 425 MHz. Each tile has two processors: a MIPS-style compute processor with 32 KB each of data and instruction cache, and a switch processor with 64 KB of instruction cache. The compute processor has an 8-stage, in-order, single-issue, MIPS-style processing pipeline and a 4-stage, single-precision, pipelined FPU. Two of the interconnecting networks are static 2-D point-to-point mesh networks, which are optimized to route single-word quantities of data (without any headers); these routes are determined at compile time. The other two networks are dynamically routed: the general dynamic network is used for data transfer among tiles, while the memory dynamic network is used to access off-chip memory [Taylor02]. A block diagram of the Raw architecture is shown in Figure 1.

Raw's exposed ISA allows parallel applications to exploit all of the chip's resources, including gates, wires, and pins, and it performs very well across a large class of stream and embedded computing applications [Taylor04]. Compared to a 180 nm Pentium-III using commodity PC memory system components, Raw performs within a factor of 2x for sequential applications with a very low degree of ILP, about 2x to 9x better for higher levels of ILP, and 10x to 100x better when highly parallel applications are hand-coded and optimized [Taylor04].

Figure 1: Raw architecture (source [Taylor04])

3. Fault Tolerance Techniques
In this work we assume that an SEU fault [NVSR02] can happen at any time. We would like to mitigate SEUs with minimal modification to the Raw architecture and, preferably, without any hardware modification. We therefore adopt software fault tolerance techniques that are (1) transparent to the user and (2) efficient enough to meet performance requirements. Mainly, we use a combination of the following techniques for fault detection and tolerance: selective replication, selective duplication, checkpoint/restart, breakpoints, and TMR.

A breakpoint (BP) is used in Raw for error detection. The compiler (or user) inserts BPs in the program's code so that errors can be detected. The location of the BPs in the program depends on the applied fault detection techniques; for example, as shown later, we insert BPs after code duplication and after any selective replication. Note that a BP is used only to help us detect faults.

In selective replication (SR) [GS96], the compiler and/or user selects some code to be replicated. As a result, during execution, the selected code runs simultaneously on two different tiles in Raw. Due to replication, some synchronization may be required to ensure total ordering in message delivery [Birman97]. Although replication is a well-known technique for providing fault tolerance, we use SR for failure detection in Raw. Specifically, we trace the states of the two replicas, and BPs are inserted after SR to detect any SEU that may have happened in one of the replicas.
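As an illustration, the following is a minimal C sketch of the BP comparison after SR. The paper does not spell out the comparison code, so raw_send(), raw_recv(), and recover_from_checkpoint() are hypothetical helpers standing in for Raw's network operations and for rollback to the last checkpoint.

    #include <stdint.h>

    /* Hypothetical stand-ins for Raw network operations and recovery;
     * they are illustrative, not part of any real Raw API. */
    extern void raw_send(int tile, uint32_t word);
    extern uint32_t raw_recv(int tile);
    extern void recover_from_checkpoint(void);

    /* Compiler-inserted BP after a selectively replicated region: the
     * tile exchanges a signature (e.g., a checksum) of its traced state
     * with its replica and recovers if the signatures disagree. */
    void bp_check_replica(int replica_tile, uint32_t state_sig)
    {
        raw_send(replica_tile, state_sig);
        uint32_t replica_sig = raw_recv(replica_tile);
        if (replica_sig != state_sig) {
            recover_from_checkpoint();  /* an SEU hit one of the replicas */
        }
    }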

In selective duplication (SD), the compiler and/or user selects some instructions to be duplicated in the code. After any code duplication, a BP is inserted to detect any SEU in the instruction. In other words, we use BPs to check for possible SEUs after both code duplication and replication. SR is more expensive than SD in terms of the resources it consumes and the cost of comparing the states of the replicas during a BP. Therefore, we use SD wherever it is able to detect SEUs, since it needs no synchronization with other tiles. Although SD cannot detect an SEU in the networks, it can detect one in a register. We use a heuristic function, based on a cost/effectiveness tradeoff, to determine whether a given piece of code should be replicated or duplicated.
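Analogously, here is a minimal C sketch of SD on a single multiply instruction, under the same assumptions as the SR sketch above (recover_from_checkpoint() remains a hypothetical helper).

    /* Selective duplication of one multiply: the instruction runs twice
     * on the same tile and a BP compares the results, detecting an SEU
     * in a register or the datapath without inter-tile synchronization. */
    extern void recover_from_checkpoint(void);

    int sd_mul(int a, int b)
    {
        volatile int a2 = a, b2 = b;  /* keep the duplicate from being optimized away */
        int r1 = a * b;               /* original instruction */
        int r2 = a2 * b2;             /* compiler-inserted duplicate */
        if (r1 != r2) {               /* BP: compare the duplicated results */
            recover_from_checkpoint();
        }
        return r1;
    }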

The network buffers

The registers

The log file

Logs are saved during the execution

Log file

Figure 2: Checkpointing technique in Raw In our approach, to achieve an efficient C/R mechanism, we applied a new application-based incremental checkpointing. With this approach, during checkpoint, each tile saves only the following: all the registers, the state of the networks, and a log file. This log file is created during the execution and is flushed at every checkpoint. The log file implements our incremental checkpointing technique. Instead of using page faults or copy-on-write as presented in [Plank97], we use a new technique that involves the compiler to identify all the modified data structures between two consecutive checkpoints. The compiler inserts the log calls. By calling a special Log call, the modified data structures are logged in the log file. Specifically, for every data structure a, we log a after its last modify and before the next checkpoint. Using compiler-based analysis, we can identify all the data structures that need to be logged between every two consecutive checkpoint calls [Muchnick97]. Figure 3 presents an example of code in which the compiler inserts Log calls for logging the modified data structures.
    Chkpt()    // Checkpoint #i
    a = f(..)
    ...
    b = g(..)
    ...
    a = ...    // New modification for a
    ...
    a = ...    // Last modification for a
    Log(a)
    ...
    Chkpt()    // Checkpoint #i+1

Figure 3: An example of using Log and Chkpt calls in the code
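As a rough illustration of how Log() and Chkpt() might be realized, here is a minimal, self-contained C sketch. The buffer size, the file standing in for off-chip stable storage, and the signatures of both calls are our assumptions, not the paper's actual implementation.

    #include <stdio.h>
    #include <string.h>

    #define LOG_CAPACITY 4096            /* assumed log size, for illustration */

    static unsigned char log_buf[LOG_CAPACITY];  /* tile-local log file */
    static unsigned log_len = 0;

    /* Log(): append one modified data structure to the log. */
    void Log(const void *data, unsigned size)
    {
        if (log_len + size <= LOG_CAPACITY) {
            memcpy(log_buf + log_len, data, size);
            log_len += size;
        }
    }

    /* Chkpt(): flush the log to stable storage (a file stands in for
     * off-chip stable storage here) and start a fresh log epoch. On
     * real Raw this would also save registers and network state. */
    void Chkpt(void)
    {
        FILE *f = fopen("checkpoint.log", "ab");
        if (f != NULL) {
            fwrite(log_buf, 1, log_len, f);
            fclose(f);
        }
        log_len = 0;
    }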

Usually, the Raw architecture runs multi-task applications in which every tile runs a task, and the tasks communicate with each other using the on-chip networks. The problem of checkpoint and restart is more complicated in distributed settings (such as Raw), where an application is composed of several processes, each possibly running on a different computer. In order to restart such an application, one has to choose a collection of checkpoints, one from each process, that corresponds to a consistent distributed application state. A distributed application state is not consistent if it represents a situation in which some message m has been received by some process, but the event of sending m is not in the checkpoint collection. A collection of checkpoints that corresponds to a consistent distributed state forms a recovery line. As presented in Figure 4, if a failure occurs when no collection of checkpoints taken during the execution forms a recovery line, the application has to be restarted from the beginning, regardless of the number of checkpoints already taken. The domino effect was identified in [Randell75] as the reason a recovery line may not exist in an execution with checkpoints, indicating that a recovery line is not guaranteed unless special care is taken.

To ensure recovery lines, in this work we adopt the d-BC (d-Bounded Cycle) distributed checkpointing protocol presented in [AAFV04]. This is a two-level checkpointing protocol. In Raw, we allow every tile to take its checkpoint independently; we denote this local checkpoint by CL1. However, to avoid the domino effect, we force all the tiles to coordinate their checkpoints to create a consistent global checkpoint, which we denote by CL2. Since CL1 is local and does not require synchronization, it is cheaper than CL2 in terms of size and overhead, and it occurs more frequently than CL2.
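The paper does not give the d-BC protocol's code; the following C sketch only illustrates the CL1/CL2 interplay under the assumption that every D-th checkpoint is promoted to a coordinated global one. D and barrier_all_tiles() are illustrative, not part of the protocol's actual interface.

    #define D 8   /* assumed CL1-to-CL2 ratio, chosen for illustration only */

    extern void Chkpt(void);              /* local CL1 checkpoint (sketched above) */
    extern void barrier_all_tiles(void);  /* hypothetical global synchronization */

    static unsigned long chkpt_count = 0;

    void checkpoint_step(void)
    {
        if (++chkpt_count % D == 0) {
            /* CL2: coordinate all tiles so the resulting set of
             * checkpoints forms a recovery line (no domino effect). */
            barrier_all_tiles();
            Chkpt();
            barrier_all_tiles();
        } else {
            Chkpt();  /* CL1: independent, cheap in size and overhead */
        }
    }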

Figure 4: Inconsistent checkpoints due to message exchanges (tiles T1 and T2 exchange a message; T2 checkpoints after receiving the message, but T1 fails before the send event is checkpointed, so the two checkpoints do not form a recovery line)

Figure 5 presents a running example of our fault tolerance techniques in Raw, including SR, SD, BPs, and C/R. In this example, tile T7 implements SR for T1; similarly, T5 implements SR for T2. Notice that after every SR there are BPs to check for possible errors.

Notice also that the BPs and checkpoints are inserted into the code during compilation of the application, so each tile is aware of the techniques it applies during execution.

Figure 5: An example that shows our fault tolerance techniques in Raw (tile T7 acts as a replica of T1 and tile T5 as a replica of T2; the figure marks SD regions, join points, BPs, local CL1 checkpoints, CL2 checkpoints, and global, consistent checkpoints)

4. Analysis and a Sample Application
We define reliability as the percentage of time that an application can run without resetting the system on an SEU. We derive the reliability analytically, using the area information of the processor and the possible effects of an SEU fault on each area. Here, we present an implementation of fault-tolerant (FT) matrix multiplication on the Raw processor and derive the reliability of the implementation. The estimated areas of the functional components of a tile in the Raw processor are shown in Figure 6. A tile consists of a tile processor, estimated to take 60.1% of the tile's area, and a switch processor, estimated to take the remaining 39.9%.

Component         Area (%)   Error (%)   Result (%)
Icache            14.75      A*          3.042
Icache Ctrl        0.05      100         0.05
Dcache            14.75      0           0
Dcache Ctrl        0.05      50          0.025
FPU               15         0           0
ALU               15         0           0
Fetch Unit         0.05      100         0.05
GPR                0.20      0           0
SPR                0.20      50          0.1
Event Counters     0.05      0           0
Total             60.1                   3.267

(a) Tile processor

Component         Area (%)   Error (%)   Result (%)
Icache            26         B**         6.5
Icache Ctrl        0.15      100         0.15
Switch Processor  12         0           0
SN FIFO Data       0.20      100         0.2
SN FIFO Ctrl       0.05      100         0.05
DN FIFO Data       0.50      10          0.05
DN FIFO Ctrl       0.25      25          0.063
MN FIFO Data       0.50      0           0
MN FIFO Ctrl       0.25      0           0
Total             39.9                   7.013

(b) Switch processor

Figure 6: Estimated area information of a Raw tile and its fault susceptibility when FT Matrix Multiplication runs on it. (*: 50% of the cache is prone to SEUs at any given time; of that area, 33% is assumed to be occupied by instructions and 66% by operands, and 5% of the instruction area and 25% of the operand area is assumed to be critical. **: About 50% of the cache is assumed to be filled, and 50% of that is assumed to be critical.)

An implementation of FT Matrix Multiplication using our fault tolerance techniques is shown in Figure 7. The base algorithm multiplies input matrices A and B and produces the result matrix C in a streaming fashion. Columns of matrix B are read by the tiles in the upper row (tiles 1 and 2 in Figure 7). Rows of matrix A are read by the tiles in the leftmost column (tiles 4, 8, and 12). The result matrix C is collected by the tiles in the rightmost column, except the tile in the upper row (tiles 7, 11, and 15). Computations are done by the remaining tiles (tiles 5, 6, 9, 10, 13, and 14). A different FT technique is applied to each part of the algorithm and mapping. Temporal Triple Modular Redundancy (TMR) is applied to the input and output parts, which makes correction of a single SEU on the inputs possible; the overhead of temporal TMR at the input nodes is justified by the longer computation time of the computation nodes. Code duplication and a local checkpoint-and-rollback technique are used for the computation nodes. Since the inputs are protected by software TMR, an SEU on a computation node is well confined within that node, and the local checkpoint-and-rollback technique can tolerate it. System monitoring processes are replicated on tiles 0 and 3, which are not used by the base algorithm, for better protection.
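To make the temporal TMR idea concrete, here is a minimal C sketch of majority voting over three temporally separated copies of an input word; read_input_word() is a hypothetical stand-in for the node's actual input routine, which the paper does not specify.

    #include <stdint.h>

    extern uint32_t read_input_word(void);  /* hypothetical input routine */

    /* Read the same input word three times at different instants and
     * majority-vote bitwise, so a single SEU affecting one temporal
     * copy is corrected rather than merely detected. */
    uint32_t tmr_read(void)
    {
        uint32_t a = read_input_word();
        uint32_t b = read_input_word();
        uint32_t c = read_input_word();
        return (a & b) | (b & c) | (a & c);  /* bitwise 2-of-3 majority */
    }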

(a) Mapping (4 x 4 grid of tile numbers):

     0   1   2   3
     4   5   6   7
     8   9  10  11
    12  13  14  15

(b) Used techniques:

Function           Node                   FT Technique
Column Input       1, 2                   Temporal TMR
Row Input          4, 8, 12               Temporal TMR
Computation        5, 6, 9, 10, 13, 14    Code Duplication
Result Output      7, 11, 15              Temporal TMR
System Monitoring  0, 3                   Code Replication

Figure 7: Mapping of FT Matrix Multiplication on the Raw processor and the techniques used

The reliability of FT Matrix Multiplication on Raw is estimated using the hardware area information shown in Figure 6. The effect of an SEU on each functional component is estimated conservatively; for example, an SEU on the instruction cache control logic is always (100%) assumed to lead to a system reset. The percentage of system resets when an SEU occurs on a functional component is given in the columns labeled "Error (%)" in Figure 6 (a) and (b), and the overall percentage of system resets due to an SEU on a functional component is given in the columns labeled "Result (%)". Based on these assumptions and the estimates of the hardware, the reliability of FT Matrix Multiplication is 89.72%. The performance of FT Matrix Multiplication is determined by the performance of the computation nodes. Since code duplication is used for the computation algorithm, we expect the performance in terms of FLOPS to be slightly less than half that of the base algorithm.
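As a consistency check, the reliability figure follows directly from the totals in Figure 6: the probability that a uniformly located SEU forces a system reset is the sum of the "Result" totals for the tile processor and the switch processor, and the reliability is its complement:

\[
R = 100\% - (3.267\% + 7.013\%) = 100\% - 10.28\% = 89.72\%
\]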

5. Conclusion
We present software fault tolerance techniques to mitigate SEU faults in the Raw architecture. We describe these fault detection and mitigation techniques, which can be implemented entirely in software and do not require any hardware changes to the architecture. We demonstrate the techniques on a sample application (matrix multiplication) and present analytical evaluations of the performance and reliability of our techniques.

6. Acknowledgements
Effort sponsored through the Department of the Interior National Business Center under grant number NBCH1050022. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.

References
[AAFV04] A. Agbaria, H. Attiya, R. Friedman, and R. Vitenberg. Quantifying Rollback Propagation in Distributed Checkpointing. Journal of Parallel and Distributed Computing, 64(3):370-384, March 2004.

[Birman97] K. P. Birman. Building Secure and Reliable Network Applications. Manning Publications and Prentice Hall, December 1996.

[EAWJ02] E. N. Elnozahy, L. Alvisi, Y. M. Wang, and D. B. Johnson. A Survey of Rollback-Recovery Protocols in Message-Passing Systems. ACM Computing Surveys, 34(3):375-408, September 2002.

[GS96] R. Guerraoui and A. Schiper. Fault-Tolerance by Replication in Distributed Systems. In Reliable Software Technologies - Ada-Europe'96, pp. 38-57, 1996.

[Muchnick97] S. S. Muchnick. Advanced Compiler Design and Implementation. Morgan Kaufmann Publishers, 1997.

[NVSR02] B. Nicolescu, R. Velazco, M. Sonza Reorda, M. Rebaudengo, and M. Violante. A Software Fault Tolerance Method for Safety-Critical Systems: Effectiveness and Drawbacks. In Proceedings of the 15th IEEE Symposium on Integrated Circuits and Systems Design, pp. 101-106, Porto Alegre, Brazil, September 2002.

[Plank97] J. S. Plank. An Overview of Checkpointing in Uniprocessor and Distributed Systems, Focusing on Implementation and Performance. Technical Report UT-CS-97-372, Department of Computer Science, University of Tennessee, July 1997.

[Randell75] B. Randell. System Structure for Software Fault Tolerance. IEEE Transactions on Software Engineering, SE-1:220-232, June 1975.

[Taylor02] M. B. Taylor, J. Kim, J. Miller, D. Wentzlaff, F. Ghodrat, B. Greenwald, H. Hoffmann, P. Johnson, J.-W. Lee, W. Lee, A. Ma, A. Saraf, M. Seneski, N. Shnidman, V. Strumpen, M. Frank, S. Amarasinghe, and A. Agarwal. The Raw Microprocessor: A Computational Fabric for Software Circuits and General Purpose Programs. IEEE Micro, March/April 2002.

[Taylor04] M. B. Taylor, W. Lee, J. Miller, D. Wentzlaff, I. Bratt, B. Greenwald, H. Hoffmann, P. Johnson, J. Kim, J. Psota, A. Saraf, N. Shnidman, V. Strumpen, M. Frank, S. Amarasinghe, and A. Agarwal. Evaluation of the Raw Microprocessor: An Exposed-Wire-Delay Architecture for ILP and Streams. In Proceedings of the International Symposium on Computer Architecture, June 2004.

