A novel emulation-simulation framework is presented for studying the low error rate performance of capacity-approaching low-density parity-check (LDPC) codes decoded using a message-passing algorithm. High-throughput hardware emulation uncovers the combinatorial error structures that underpin error floors. The captured errors are analyzed in a functionally equivalent software simulation to illuminate the effects of wordlength, quantization, and algorithm design, thereby extending theoretical insights to practical use.
The emulation-simulation framework further allows the algorithm and implementation to be iteratively refined to improve the error-floor performance of message-passing decoders. A dual quantization scheme is first introduced to reduce the degradation of soft decoding. Then, a reweighted message-passing algorithm is proposed to eliminate the local minima caused by the remaining dominant errors. The improved algorithm is realized in a simple post-processor that augments the message-passing decoder to achieve near maximum-likelihood decoding performance. The results are demonstrated by the design of a 5.35 mm^2, 65 nm CMOS chip that realizes a grouped parallel architecture to optimize area and power efficiency by aggressively scaling down the interconnect overhead. The 47.7 Gb/s LDPC decoder operates without an error floor down to a bit error rate of 10^-14.
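To make the techniques above concrete, the following is a minimal sketch of a reweighted min-sum check-node update with fixed-point saturation, in the spirit of the quantized, reweighted message passing described here. It is not the paper's exact algorithm: the function names, the reweighting factor of 0.8, and the 4-bit uniform quantization parameters are all illustrative assumptions.

```python
# Hedged illustration (not the paper's exact algorithm): a min-sum
# check-node update with a reweighting factor and fixed-point saturation.
# The 4-bit quantizer and the weight value are assumed for illustration.

def quantize(x, n_bits=4, step=0.5):
    """Uniform fixed-point quantization with saturation at the word limits."""
    max_level = (2 ** (n_bits - 1) - 1) * step
    q = round(x / step) * step
    return max(-max_level, min(max_level, q))

def check_node_update(incoming, weight=0.8):
    """Reweighted min-sum: each outgoing message is the scaled minimum
    magnitude of the other incoming messages, with the product of their
    signs, then quantized to the fixed-point wordlength."""
    outgoing = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        mag = min(abs(m) for m in others)
        outgoing.append(quantize(sign * weight * mag))
    return outgoing

msgs = [1.2, -0.4, 3.0, -2.1]
print(check_node_update(msgs))  # each output excludes its own edge
```

Down-weighting the check-node output (weight < 1) is one known way to perturb the decoder away from the local minima that trap standard min-sum decoding, which is the role the reweighted algorithm plays in the post-processor described above.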
The iterative emulation-simulation framework and systematic architectural exploration can be extended to other complex systems, thereby enabling the joint optimization of algorithm, architecture, and implementation.