Abstract:
In this thesis, we study all the problematic graphical structures that play important roles in the error floor performance and error correction capability of LDPC codes: leafless elementary trapping sets (LETSs), elementary trapping sets (ETSs), non-elementary trapping sets (NETSs), stopping sets, and codewords. First, we complement the layered superset (LSS) characterization by demonstrating how the remaining LETS structures can be characterized. We then propose a new characterization for the LETSs of variable-regular LDPC codes. Whereas the LSS-based characterization relies on a single expansion technique, the new characterization involves two additional expansion techniques, which mitigate the search efficiency problem from which the LSS-based characterization/search suffers.
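To fix notation, an (a, b) trapping set is a set of a variable nodes whose induced subgraph contains b odd-degree (unsatisfied) check nodes; it is elementary if every induced check node has degree one or two, and leafless if every variable node joins at least two degree-2 check nodes. The sketch below illustrates these standard definitions only; the function name and NumPy representation are ours, not the thesis's code.

    import numpy as np

    def classify_trapping_set(H, var_set):
        """Return (a, b, elementary, leafless) for a set of variable nodes
        in the Tanner graph of the binary parity-check matrix H."""
        S = sorted(var_set)
        a = len(S)
        check_deg = H[:, S].sum(axis=1)            # induced degree of each check
        touched = check_deg > 0
        b = int(np.sum(check_deg % 2 == 1))        # odd-degree (unsatisfied) checks
        elementary = bool(np.all(check_deg[touched] <= 2))
        deg2 = check_deg == 2                      # satisfied checks of degree 2
        leafless = elementary and all(H[deg2, v].sum() >= 2 for v in S)
        return a, b, elementary, leafless

    # Example: for a (7,4) Hamming code, {0, 1, 2} is a (3, 0) LETS,
    # i.e. the support of a weight-3 codeword.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    print(classify_trapping_set(H, {0, 1, 2}))     # -> (3, 0, True, True)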
Moreover, we generalize the proposed approach from variable-regular to irregular LDPC codes, explaining how the characterization of LETS structures in variable-regular graphs can be used to characterize the LETS structures of irregular graphs. We also propose a graph-based approach to find all ETSs.
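As a rough picture of what expansion-based search means, the following deliberately naive single-node expansion (not the expansion techniques developed in the thesis) grows candidate sets outward through the check nodes they touch and retains every LETS found within a target (a, b) range. It reuses numpy as np and classify_trapping_set from the sketch above.

    def enumerate_lets_naive(H, a_max, b_max):
        """Breadth-first growth of variable-node sets, one adjacent node at
        a time.  Exponential in general -- shown only to convey the idea of
        expansion; practical searches prune using the characterization."""
        n = H.shape[1]
        frontier = [frozenset([v]) for v in range(n)]
        seen, found = set(frontier), set()
        while frontier:
            nxt = []
            for S in frontier:
                if len(S) >= a_max:
                    continue
                # Candidate extensions: variables sharing a check with S.
                checks = np.flatnonzero(H[:, sorted(S)].sum(axis=1))
                neighbors = set(np.flatnonzero(H[checks].sum(axis=0))) - S
                for v in neighbors:
                    T = S | {int(v)}
                    if T in seen:
                        continue
                    seen.add(T)
                    a, b, _, leafless = classify_trapping_set(H, T)
                    if leafless and b <= b_max:
                        found.add((a, b, tuple(sorted(T))))
                    nxt.append(T)
            frontier = nxt
        return found

In practice one would seed the expansion from short cycles, the simplest LETS structures, rather than from single variables, and prune the frontier aggressively; the exhaustive growth above is only for exposition.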
In addition, we derive a lower bound on the size of the smallest ETSs and NETSs in variable-regular LDPC codes. The bound shows that the smallest possible NETS is, in general, larger than an ETS with the same b value, which provides a theoretical justification for why NETSs are often not among the most harmful trapping sets. We also propose an efficient search algorithm that lists the NETSs within a range of interest. Furthermore, we derive tight lower and upper bounds on the minimum distance and stopping distance of LDPC codes. The bounds, established through a combination of analytical results and search techniques, apply to both regular and irregular LDPC codes over a wide range of rates and block lengths. The search algorithms for codewords and stopping sets build on the ETS and NETS search algorithms developed in this thesis. Extensive simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithms; in particular, they are significantly faster than existing search algorithms in the literature.
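The link that lets codeword and stopping-set searches build on trapping-set searches is definitional: the support of a nonzero codeword is exactly an (a, 0) trapping set (every check node has even induced degree), and a stopping set is a variable-node set inducing no degree-1 check. The exhaustive sketch below, feasible only for toy codes (the thesis's search-based bounds are what make realistic block lengths tractable), computes both distances directly from these definitions.

    import numpy as np
    from itertools import combinations

    def min_and_stopping_distance(H):
        """Exhaustive minimum distance and stopping distance of the code
        with binary parity-check matrix H.  Toy-sized inputs only."""
        n = H.shape[1]
        d_min = d_stop = None
        for a in range(1, n + 1):
            for S in combinations(range(n), a):
                deg = H[:, list(S)].sum(axis=1)    # induced check degrees
                if d_min is None and np.all(deg % 2 == 0):
                    d_min = a                      # all checks satisfied: codeword
                if d_stop is None and np.all((deg == 0) | (deg >= 2)):
                    d_stop = a                     # no degree-1 check: stopping set
            if d_min is not None and d_stop is not None:
                break
        return d_min, d_stop

    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    print(min_and_stopping_distance(H))            # -> (3, 3) for this Hamming code

Since every codeword support is itself a stopping set, the stopping distance never exceeds the minimum distance, which is why the two searches can share machinery.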