CHARACTERIZATION OF CODES OF IDEALS OF THE POLYNOMIAL RING

The study of ideals in algebraic number systems has contributed immensely to preserving the notion of unique factorization in rings of algebraic integers and to proving Fermat's Last Theorem. Recent research has revealed that ideals in Noetherian rings are closed under polynomial addition and multiplication. This property has been used to characterize the polynomial ring F_2[x] mod (x^n - 1) for error control. In this research we generate ideals of the polynomial ring using the GAP software and characterize the polycodewords using Shannon's code region and Manin's bound.


Error Control Coding
The modern approach to error control coding in digital communication systems was started by Shannon [1], Golay [2] and Hamming [3]. By mathematically defining the entropy of an information source and the capacity of a communication channel, Shannon [1] showed that reliable communication over a noisy channel is possible provided that the source's entropy is lower than the channel's capacity. He did not explicitly state how channel capacity could be practically reached, only that it was attainable. Hamming [3] and Golay [2] developed the first practical error control schemes. According to Wicker [4], this Hamming code had some undesirable properties: first, it was not efficient, requiring three check bits for every four data bits; second, it could only correct a single error within the block.
Golay [2] addressed these shortcomings by generalizing the construction of the Hamming code. In the process he discovered two codes. The first is the binary Golay code, which groups data into blocks of twelve bits and then calculates eleven check bits; the associated decoding algorithm is capable of correcting up to three errors in the 23-bit code word. The second is the ternary Golay code, which operates on ternary, rather than binary, numbers; this code protects blocks of six ternary symbols with five ternary check symbols and can correct two errors in the resulting eleven-symbol code word. The general techniques for developing Hamming and Golay codes were the same: they involved grouping q-ary symbols into blocks of k and then adding n - k check symbols to produce an n-symbol code word. The resulting code can correct t errors and has a code rate k/n. A code of this type is called a block code and is referred to as an (n, k) code. Hamming and Golay codes are linear, since the modulo-q sum of any two code words is itself a code word. According to Wicker [4], it is the binary Golay code which provided error control during the Jupiter fly-by of Voyager 1 in 1979.
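The (n, k) block-code parameters just described can be made concrete with a small sketch. The classic (7,4) Hamming code used below, and the generator matrix and helper function, are an assumed textbook example for illustration, not codes from the cited works.

```python
# Sketch: the (7, 4) Hamming block code, illustrating the (n, k)
# block-code parameters. Assumed textbook example.

# Generator matrix G in systematic form (4 data bits -> 7-bit codeword).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(data_bits):
    """Encode k = 4 data bits into an n = 7 codeword (modulo-2 arithmetic)."""
    return [sum(d * g for d, g in zip(data_bits, col)) % 2
            for col in zip(*G)]

# The code is linear: the modulo-2 sum of two codewords is a codeword.
c1 = encode([1, 0, 1, 1])
c2 = encode([0, 1, 1, 0])
c_sum = [(a + b) % 2 for a, b in zip(c1, c2)]
assert c_sum == encode([1, 1, 0, 1])   # 1011 XOR 0110 = 1101

print("rate k/n =", 4 / 7)
```

The assertion checks the linearity property stated above: encoding the XOR of two data words gives the XOR of their codewords.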
The next main class of linear block codes to be discovered were the Reed-Muller codes, first described by Muller [5] in the context of Boolean logic design. These codes were superior to the Golay codes since they allowed more flexibility in the size of the code word and the number of correctable errors per code word. They were followed by the BCH codes, for which, according to Wicker [4], the number of correctable errors per code word could be greater still. BCH codes were extended to the nonbinary case (q > 2) by Reed and Solomon [9]. Reed-Solomon (RS) codes constituted a major advancement because their nonbinary nature allows for protection against bursts of errors. However, it was not until Berlekamp [10] introduced an efficient decoding algorithm that RS codes began to find practical applications. In his paper on the application of error control to communication, Berlekamp [10] noted that RS codes have found extensive applications in such systems as Compact Disc (CD) players, Digital Video Disc (DVD) players, and Cellular Digital Packet Data (CDPD). Lin et al. [11] identified several fundamental drawbacks of block codes. First, due to their frame-oriented nature, the entire code word must be received before decoding can be completed; this introduces an intolerable latency into the system, particularly for large block lengths. A second drawback is that block codes require precise frame synchronization. A third is that most algebraic decoders for block codes work with hard-bit decisions rather than with the unquantized, or "soft", outputs of the demodulator. With hard-decision decoding, typical for block codes, the output of the channel is taken to be binary, while with soft-decision decoding the channel output is continuous-valued.
According to Lin et al. [11], in order to achieve the performance bounds predicted by Shannon [1], a continuous-valued channel output is required. While block codes can achieve impressive performance, they are typically not very power efficient and therefore exhibit poor performance when the signal-to-noise ratio is low. The poor performance of block codes at low signal-to-noise ratios is not a function of the code itself, but of the suboptimality of hard-decision decoding.
Elias [12] introduced convolution codes to overcome the drawbacks of block codes. Instead of segmenting data into distinct blocks, convolution codes add redundancy to a continuous stream of input data by using a linear shift register. Each set of n output bits is a linear combination of the current set of k input bits and the m bits stored in the shift register. The total number of bits that each output depends on is called the constraint length. The rate of the convolution encoder is the number of data bits k taken in by the encoder in one coding interval, divided by the number of code bits n output during the same interval. Just as the data is continuously encoded, it can also be continuously decoded.
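The shift-register encoding just described can be sketched as follows. The rate-1/2, constraint-length-3 encoder with the common octal generators (7, 5) is a standard textbook example assumed for illustration, not an encoder from the cited works.

```python
# Sketch of a rate-1/2 convolutional encoder with constraint length 3
# (generators 7 and 5 in octal). Assumed textbook example.

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """For each input bit, emit n = 2 output bits, each a modulo-2 linear
    combination of the current bit and the m = 2 bits in the shift register."""
    state = [0, 0]                # shift-register memory (m = 2)
    out = []
    for b in bits:
        window = [b] + state      # constraint length = 1 + m = 3
        out.append(sum(x * g for x, g in zip(window, g1)) % 2)
        out.append(sum(x * g for x, g in zip(window, g2)) % 2)
        state = [b] + state[:-1]  # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Note that the encoder emits output continuously, one pair of code bits per input bit, with no frame boundary, matching the contrast with block codes drawn above.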
Convolution codes have been used by several deep space exploration missions such as Voyager and Pioneer. According to Odenwalder [13], a subclass of convolution codes has become a standard for commercial satellite communication applications. Berlekamp [10] noted that all of the second generation digital cellular standards incorporate convolution coding.
The major weakness of convolution codes is their susceptibility to burst errors. Convolution codes have properties that are complementary to those of Reed-Solomon codes [9]: while convolution codes are susceptible to burst errors, RS codes handle burst errors quite well, Wicker [4]. Ungerboeck [14] discovered Trellis Coded Modulation (TCM), which uses convolution codes and multidimensional signal constellations to achieve reliable communications over band-limited channels. TCM enabled telephone modems to break the 9600 bits per second (bps) barrier, and today all high-speed modems use TCM. It is also used for satellite communication applications, Wicker [4]. TCM comes remarkably close to achieving Shannon's promise of reliable communications at channel capacity, and is now used for channels with high signal-to-noise ratio that require high bandwidth efficiency.

Berrou and Glavieux [15] discovered Turbo codes. The performance of Turbo codes has helped narrow the gap between practical coding systems and Shannon's theoretical limit. A turbo code is the parallel concatenation of two or more component codes. In its original form, the constituent codes were from a subclass of convolution codes. Due to the presence of the interleaver, optimal (maximum-likelihood) decoding of turbo codes is complicated and therefore impractical. It is the decoding method that gives turbo codes their name, since the feedback action of the decoder resembles that of a turbo-charged engine. Turbo codes approach the capacity limit much more closely than any of the other codes.

Shannon's model [1] was developed using error coding techniques based on algebraic coding theory. According to his theorem, "Given a communication channel of capacity C and any code rate R less than C, there exists a code of block length n bits and rate R that can be transmitted over the channel with an arbitrarily small probability of error."
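Shannon's rate-below-capacity condition can be made concrete for the binary symmetric channel (BSC), a standard channel model assumed here for illustration; the helper function names are ours.

```python
# Sketch: capacity of a binary symmetric channel with crossover
# probability p is C = 1 - H(p), where H is the binary entropy function.
# Standard model assumed for illustration of Shannon's theorem.
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the BSC, in bits per channel use."""
    return 1.0 - binary_entropy(p)

# Per Shannon's theorem, a rate-1/2 code can in principle achieve an
# arbitrarily small error probability whenever 1/2 < C(p).
p = 0.05
print(f"C({p}) = {bsc_capacity(p):.4f}")   # about 0.7136
assert 0.5 < bsc_capacity(p)
```

At p = 0.5 the channel output is independent of the input and the capacity drops to zero, so no code rate is achievable, which is the boundary case of the theorem quoted above.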
Theoretically, we should be able to devise a coding scheme achieving any error rate for a particular communication channel, but no one has been able to develop a block code that fully satisfies Shannon's Theorem. While many previous results on polynomial effectiveness have been published, no previous work has attempted a complete screening of all possible polynomials for error control. According to Castagnoli [7], a polynomial's effectiveness is evaluated by computing the weights of that polynomial. A critical measure of polynomial effectiveness for general-purpose computing is the Hamming distance (HD). Each undetectable error pattern is itself a codeword; this means that determining the minimum HD for a polynomial is equivalent to determining the lowest nonzero weight for that polynomial. Furthermore, the weights of a polynomial give the number of undetectable errors for the corresponding numbers of bit errors.
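The relation between weights and the minimum HD can be sketched as follows. The small generator polynomial x^3 + x + 1, the 4-bit data length, and the helper functions are illustrative assumptions of ours, not polynomials evaluated in the cited works; polynomials are encoded as integer bitmasks over GF(2).

```python
# Sketch: for a CRC-style generator polynomial, every undetectable error
# pattern is itself a codeword, so the minimum HD equals the lowest
# nonzero codeword weight. Shown for g(x) = x^3 + x + 1 (bitmask 0b1011)
# with 4 data bits -- an assumed toy example.

def crc_remainder(msg, poly, poly_deg):
    """Remainder of msg(x) * x^poly_deg divided by poly(x) over GF(2)."""
    reg = msg << poly_deg
    for i in range(msg.bit_length() + poly_deg - 1, poly_deg - 1, -1):
        if reg & (1 << i):
            reg ^= poly << (i - poly_deg)
    return reg

def weights(data_bits, poly, poly_deg):
    """Weight distribution: counts[w] = number of codewords of weight w."""
    n = data_bits + poly_deg
    counts = [0] * (n + 1)
    for msg in range(1 << data_bits):
        codeword = (msg << poly_deg) | crc_remainder(msg, poly, poly_deg)
        counts[bin(codeword).count("1")] += 1
    return counts

counts = weights(data_bits=4, poly=0b1011, poly_deg=3)
min_hd = next(w for w, c in enumerate(counts) if w > 0 and c > 0)
print("weight distribution:", counts, "minimum HD:", min_hd)
```

For this polynomial the lowest nonzero weight, and hence the minimum HD, is 3; the counts at each weight give the number of undetectable error patterns with that many bit errors, exactly as described above.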
The candidate polynomials considered in this paper are ideals of the polynomial ring. Castagnoli et al. [7] utilized Fujiwara's [16] techniques to evaluate the weights of polynomials that had been carefully selected based on prime factorization characteristics. Lin and Costello [17] conjectured that there must be techniques for error control coding that could provide the best code. Alderson [18] introduced one such technique, using geometric constructions on optimal optical orthogonal codes.
Koopman [19] provided a standard for describing previous work and expected results. He recommended a shorthand notation for the factorization of a polynomial in which each entry gives the degree of an irreducible factor. Thus "{1,5,29}" represents the set of all polynomials whose irreducible factorization has factors of degrees 1, 5 and 29.
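This factor-degree notation can be computed mechanically. The following naive trial-division sketch over GF(2), with polynomials encoded as integer bitmasks (an implementation choice of ours), illustrates the idea on a toy polynomial rather than on the large polynomials discussed in the cited work.

```python
# Sketch: Koopman-style factor-degree notation {d1, d2, ...} for a
# polynomial over GF(2), computed by naive trial division.
# Polynomials are integer bitmasks: bit i is the coefficient of x^i.

def gf2_divmod(a, b):
    """Quotient and remainder of a(x) divided by b(x) over GF(2)."""
    q = 0
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift
    return q, a

def factor_degrees(p):
    """Sorted degrees of the irreducible factors of p(x) over GF(2)."""
    degrees = []
    while p.bit_length() - 1 >= 1:
        # The smallest nontrivial divisor found by trial division is
        # necessarily irreducible.
        for d in range(2, p + 1):
            q, r = gf2_divmod(p, d)
            if r == 0:
                degrees.append(d.bit_length() - 1)
                p = q
                break
    return sorted(degrees)

# (x + 1)(x^2 + x + 1) = x^3 + 1  ->  bitmask 0b1001, degree set {1, 2}
print(factor_degrees(0b1001))   # -> [1, 2]
```

Trial division is exponential in the degree, so this is only a demonstration of the notation; practical screening of degree-32 polynomials requires proper factorization algorithms.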
Prange [6] showed that, under polynomial addition, the polynomial rendering of a cyclic code is an ideal of some ring. This correspondence opened the way for the application of algebra to cyclic codes. Fujiwara et al. [16] developed cyclic codes based upon polynomials over finite fields.
Projective geometry and Shannon's Theorem have been used to determine and characterize the required types of ideals. Geometrical constructions of the code region have also been used to define a region which can provide these codes. Principal ideals, each generated by a single element of the polynomial ring, were found very useful in this study. This research was primarily a determination and characterization of principal ideals of the polynomial ring which provide codes that satisfy Shannon's Theorem.
Charles [20] improved on Prange's [6] work to show that polynomial addition and multiplication of cyclic codes are closed in polynomial rings. His work can also be used to confirm that the polynomial rendering of a cyclic code is an ideal of the polynomial ring.
According to Peterson and Weldon [21], a code can be useful for computer applications only if it is expressed in binary form or is easily convertible into binary symbols. To be used for error detection, a given polynomial code must have both a generator polynomial and a check polynomial.
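The relationship between the generator and check polynomials can be sketched as follows, assuming the standard construction in which g(x) divides x^n - 1 and the check polynomial is h(x) = (x^n - 1)/g(x). The n = 7 example and helper functions are ours, for illustration only; polynomials are integer bitmasks over GF(2), where x^n - 1 = x^n + 1.

```python
# Sketch: generator polynomial g(x) and check polynomial h(x) of a
# length-n cyclic code satisfy g(x)h(x) = x^n - 1, hence g*h = 0 in
# F_2[x]/(x^n - 1). Assumed small example: n = 7, g = x^3 + x + 1.

def gf2_mul(a, b):
    """Carry-less product of two GF(2) polynomials (int bitmasks)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, b):
    """Remainder of a(x) divided by b(x) over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

n = 7
g = 0b1011                     # generator g = x^3 + x + 1
x_n_1 = (1 << n) | 1           # x^7 + 1 (= x^7 - 1 over GF(2))

assert gf2_mod(x_n_1, g) == 0          # g divides x^n - 1
h = 0b10111                    # check polynomial h = (x^7+1)/g = x^4+x^2+x+1
assert gf2_mul(g, h) == x_n_1          # g * h = x^n - 1
assert gf2_mod(gf2_mul(g, h), x_n_1) == 0   # so g*h = 0 in the quotient ring
```

A received word r(x) is then accepted exactly when g(x) divides it, equivalently when r(x)h(x) vanishes modulo x^n - 1, which is what makes h(x) usable for checking.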
Over the years the desire to reconcile efficiency and reliability of various code vectors has motivated researchers in this area of study. An exhaustive search for codes of ideals of polynomial rings has not been done. According to Castagnoli et al. [7], there might be other forms of polynomials not yet explored which provide similarly useful error detection and error correction capabilities. The Internet Engineering Task Force (IETF) [22] filtered cyclic redundancy check (CRC) codes within the 32-bit code region for greater HD. It singled out the class of polynomials {1,3,28} with HD = 6 as the best polynomial for preserving message length while detecting errors. This was, however, a CRC code and could not be used for error correction.
To date, regrettably, no block code has been developed that precisely meets Shannon's [1] promise of reconciling efficiency and reliability.

Definition 2.1
A nonempty subset I of a ring R shall be called an ideal of R if and only if: (i) a - b is in I for all a, b in I; and (ii) ra and ar are in I for all r in R and a in I.

Theorem 2.2
A code C of length n over F_q, with code polynomials viewed in R_n = F_q[x]/(x^n - 1), is cyclic if and only if C satisfies the following two conditions: (i) a(x) + b(x) is in C for all a(x), b(x) in C; (ii) r(x)a(x) is in C for all a(x) in C and all r(x) in R_n.

Proof. Suppose C is cyclic. Condition (i) holds since C is linear. Multiplication of a code polynomial by x corresponds to a right cyclic shift of the corresponding codeword. Since C is a cyclic code it contains the cyclic shifts of all its codewords, so for any r(x) = r_0 + r_1 x + ... + r_{n-1} x^{n-1} the product r(x)a(x) is a sum of scalar multiples of cyclic shifts of a(x); since each summand is in C, so is the product.
Therefore, condition (ii) also holds. So if C is a cyclic code, then conditions (i) and (ii) hold.
Conversely, suppose that conditions (i) and (ii) hold. Taking r(x) to be a scalar, the conditions imply that C is a linear code. Then, taking r(x) = x, condition (ii) implies that C is closed under cyclic shifts, so C is a cyclic code. Hence, if conditions (i) and (ii) hold, then C is a cyclic code.
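The key step of the proof, that multiplication by x is a right cyclic shift in F_q[x]/(x^n - 1), can be checked directly. The coefficient-list representation and the small q = 2, n = 7 example below are assumed for illustration.

```python
# Sketch: multiplying a code polynomial by x and reducing modulo
# x^n - 1 is a right cyclic shift of the coefficient vector.
# Codewords are lists (c0, c1, ..., c_{n-1}); assumed small example.

def mul_by_x_mod(word):
    """Multiply c(x) = c0 + c1 x + ... by x, reduced mod x^n - 1."""
    # x * c(x) has terms c0 x, ..., c_{n-1} x^n, and x^n = 1 in the
    # quotient ring, so the top coefficient wraps to the constant term.
    return [word[-1]] + word[:-1]

c = [1, 0, 1, 1, 0, 0, 0]     # c(x) = 1 + x^2 + x^3, n = 7
print(mul_by_x_mod(c))        # -> [0, 1, 0, 1, 1, 0, 0], i.e. x + x^3 + x^4
```

Iterating this map n times returns the original word, which is exactly the cyclic-shift action used in the proof.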

Proposition 2.3
A polynomial code C of length n over F_q is cyclic if and only if the set of its code polynomials is an ideal of R_n = F_q[x]/(x^n - 1).

Proof. If C is cyclic, then its code polynomials are closed under addition and under multiplication by any element of R_n, which is precisely the defining property of an ideal. On the other hand, suppose that I is an ideal in R_n. Then its elements are polynomials of degree less than n and, by Definition 2.1, they are closed under addition and under multiplication by every element of R_n, in particular by x, which effects a cyclic shift; hence the corresponding code is cyclic.
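The proposition can be checked exhaustively on a small example. The sketch below enumerates the principal ideal generated by g = x^3 + x + 1 in F_2[x]/(x^7 - 1) and verifies closure under addition and under cyclic shifts; the example and helper functions are ours (the paper's ideals are generated with GAP, which is not used here), with polynomials as integer bitmasks.

```python
# Sketch: the principal ideal <g(x)> in F_2[x]/(x^n - 1) is a cyclic
# code, per Proposition 2.3. Assumed example: n = 7, g = x^3 + x + 1.
from itertools import product

n, g = 7, 0b1011                                  # g divides x^7 + 1

def gf2_mul_mod(a, b, n):
    """a(x) * b(x) reduced modulo x^n - 1 over GF(2) (int bitmasks)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    mod = (1 << n) | 1                            # x^n + 1
    while r.bit_length() > n:
        r ^= mod << (r.bit_length() - 1 - n)
    return r

# The ideal <g> = { r(x) g(x) : r(x) in the ring }.
ideal = {gf2_mul_mod(g, r, n) for r in range(1 << n)}
assert len(ideal) == 2 ** 4                       # 2^k codewords, k = n - deg g

for a, b in product(ideal, repeat=2):
    assert a ^ b in ideal                         # closed under addition
for a in ideal:
    shifted = ((a << 1) | (a >> (n - 1))) & ((1 << n) - 1)
    assert shifted in ideal                       # closed under cyclic shift
print("ideal size:", len(ideal))
```

The 16 elements of the ideal are exactly the codewords of a cyclic (7, 4) code, in line with the correspondence stated in the proposition.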

Proposition 2.4 [23]
A code C of length n over F_q can detect t errors if and only if its minimum Hamming distance satisfies d(C) >= t + 1.
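The proposition can be verified exhaustively for a small cyclic code: no error pattern of weight at most t = d - 1 can carry one codeword onto another. The (7, 4) code generated by x^3 + x + 1, with minimum distance 3, is an assumed example; the helpers are ours.

```python
# Sketch verifying Proposition 2.4 on a toy code: a code with minimum
# distance d detects all patterns of t = d - 1 errors. Checked
# exhaustively for the cyclic code with n = 7, g = x^3 + x + 1.
from itertools import combinations

n, g = 7, 0b1011

def gf2_mod(a, b):
    """Remainder of a(x) divided by b(x) over GF(2) (int bitmasks)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

# Codewords = polynomial multiples of g(x) of degree < n.
code = {c for c in range(1 << n) if gf2_mod(c, g) == 0}
d_min = min(bin(c).count("1") for c in code if c != 0)
t = d_min - 1

# Every nonzero error of weight <= t moves each codeword off the code,
# so every such error pattern is detected.
for c in code:
    for w in range(1, t + 1):
        for pos in combinations(range(n), w):
            e = sum(1 << i for i in pos)
            assert (c ^ e) not in code
print("d_min =", d_min, "detects up to t =", t, "errors")
```

An error of weight d, by contrast, can equal the difference of two codewords and would then go undetected, which is why the bound d >= t + 1 is tight.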