Monday, 29 October 2012


Huffman Encoder/Decoder using Xilinx ISE

This article explains in detail the steps involved in the FPGA implementation of a Huffman encoder/decoder using the Xilinx ISE software. First, the essential background theory is explained, and then the actual FPGA implementation of the encoder/decoder is presented. The VHDL code for the Huffman encoder/decoder is also provided. Readers who need Xilinx ISE can visit this blog's posts to download Xilinx ISE v14.6 or Xilinx ISE v14.3.

1. Huffman Coding Algorithm

Huffman coding is used to code values statistically according to their probability of occurrence. Short code words are assigned to highly probable values and long code words to less probable values. Huffman coding is used in MPEG-2 to further compress the bit stream. This article describes how Huffman coding is done in MPEG-2 and its implementation.

1.1 Algorithm
1. Reduce the source S to S1, and continue on to S2, S3, ... until we reach a source with two symbols (or one for which it is easy to design a compact code).
2. Assign a compact code to the final reduced source. For a two-symbol source the trivial code is 0 and 1.
3. Backtrack to the original source, assigning a compact code to each jth reduced source along the way. The compact code assigned to the original source is the binary Huffman code.


1.2  Steps

1. Adding the two least probable symbols of an example five-symbol source A-E gives 25%. The new symbol is F.
2. Adding the two least probable symbols gives 45%. The new symbol is G.
3. Adding the two least probable symbols gives 55%. The new symbol is H.
4. Write "0" and "1" on each branch of the summation arrows. These binary values are called branch binaries.
5. For each letter in each column, copy the binary numbers from the column on the right, starting from the rightmost column (e.g., in column three, G gets the value "1" from the G in column four). For summation branches, append the binary from the right-hand column to the left of each branch binary. For A and B in column three, append "0" from H in column four to the left of the branch binaries. This makes A "00" and B "01".

Completing step 5 gives the binary values for each letter: A is "00", B is "01", C is "11", D is "100", and E is "101". The inputs with the highest probabilities are represented by code words of length two, whereas the lowest-probability inputs are represented by code words of length three.
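As a sketch, the reduction steps above can be reproduced with a small Python snippet. The exact probabilities are not stated above, so the snippet assumes A = 30%, B = 25%, C = 20%, D = 15%, E = 10%, which is one assignment consistent with the 25%/45%/55% merge sums:

```python
import heapq
from itertools import count

def huffman_code_lengths(probs):
    """Build a Huffman code by repeatedly merging the two least probable
    nodes, mirroring the source-reduction steps above. Returns each
    symbol's code length."""
    tie = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tie), {sym: 0}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        # Every symbol inside the merged node sits one level deeper
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

# Assumed probabilities, consistent with the merge sums above:
probs = {'A': 0.30, 'B': 0.25, 'C': 0.20, 'D': 0.15, 'E': 0.10}
lengths = huffman_code_lengths(probs)
print(lengths)  # A, B, C get 2-bit codes; D and E get 3-bit codes
```

Note that the exact bit patterns (e.g. whether C is "11" or "10") depend on how ties between equal probabilities are broken, but the code lengths, and hence the average code length, match the worked example.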

2. Block Diagram & Architecture of Huffman Coding

2.1 Encoding
 The following block diagram shows the key processing steps of encoding.


 Processing Steps

1. Application of DCT to the video/image stream

Source image samples are grouped into 8x8 blocks, shifted from unsigned integers to signed integers, and input to the DCT. The following equation is the idealized mathematical definition of the 8x8 DCT:

F(u,v) = (1/4) C(u) C(v) SUM(x=0..7) SUM(y=0..7) f(x,y) cos[(2x+1)u*pi/16] cos[(2y+1)v*pi/16]

where C(k) = 1/sqrt(2) for k = 0 and C(k) = 1 otherwise.
Each 8x8 block of source image samples is effectively a 64-point discrete signal which is a function of the two spatial dimensions x and y. The DCT takes such a signal as its input and decomposes it into the 64 unique two-dimensional "spatial frequencies" which comprise the input signal's "spectrum." The output of the DCT is the set of 64 basis-signal amplitudes (DCT coefficients) whose values can be regarded as the relative amount of each 2D spatial frequency contained in the 64-point input signal.

The DCT coefficients are divided into the "DC coefficient" and the "AC coefficients". The DC coefficient is the coefficient with zero frequency in both dimensions; the AC coefficients are the remaining 63 coefficients with non-zero frequencies. The DCT step concentrates most of the signal in the lower spatial frequencies. In other words, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded.

Because the DCT equations contain transcendental functions, no physical implementation can compute them with perfect accuracy. Moreover, JPEG does not specify a unique DCT algorithm in its proposed standard. This makes compliance more difficult to confirm, since two compliant encoders (or decoders) generally will not produce identical outputs given identical inputs.
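The defining equation can be evaluated directly. The following Python snippet is a naive reference implementation of the 8x8 forward DCT, useful for checking a hardware design against, not the optimized datapath one would synthesize:

```python
import math

def dct_8x8(block):
    """Naive 8x8 forward DCT, computed straight from the definition.
    `block` is an 8x8 list of (level-shifted) sample values."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A flat (constant) block has all of its energy in the DC coefficient:
flat = [[10] * 8 for _ in range(8)]
coeffs = dct_8x8(flat)
print(round(coeffs[0][0], 3))  # DC = 80.0; every AC coefficient is ~0
```

This O(64^2) form is for verification only; practical implementations use separable row/column transforms or fast DCT factorizations.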

2. Quantization

To achieve further compression, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element quantization table, which is specified by the application. The purpose of quantization is to discard information which is not visually significant. Because quantization is a many-to-one mapping, it is fundamentally lossy; moreover, it is the principal source of lossiness in a DCT-based encoder. Quantization is defined as division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer:

FQ(u,v) = round( F(u,v) / Q(u,v) )

The output value is thus normalized by the quantizer step size.

Each step size of quantization ideally should be chosen as the perceptual threshold to compress the image as much as possible without visible artifacts. It is also a function of the source image characteristics, display characteristics and viewing distance.
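A minimal Python sketch of this quantize/dequantize pair is below. The uniform table with step size 16 is purely hypothetical, real tables vary per coefficient according to perceptual thresholds as described above:

```python
def quantize(coeffs, qtable):
    """Divide each DCT coefficient by its step size and round to the
    nearest integer, as in the quantization equation above."""
    return [[round(coeffs[u][v] / qtable[u][v]) for v in range(8)]
            for u in range(8)]

def dequantize(q_coeffs, qtable):
    """Multiply back by the step size (used on the decoder side)."""
    return [[q_coeffs[u][v] * qtable[u][v] for v in range(8)]
            for u in range(8)]

# Hypothetical uniform table: step size 16 for every coefficient
qtable = [[16] * 8 for _ in range(8)]
coeffs = [[float(17 * (u + v)) for v in range(8)] for u in range(8)]

q = quantize(coeffs, qtable)
back = dequantize(q, qtable)
# Round-trip error is bounded by half a step size per coefficient:
err = max(abs(back[u][v] - coeffs[u][v]) for u in range(8) for v in range(8))
print(err <= 8.0)  # True
```

The bounded round-trip error illustrates why the step size controls the quality/compression trade-off: larger steps map more coefficients to zero at the cost of larger reconstruction error.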

3. Entropy Encoding
The final processing step of the encoder is entropy coding. Before entropy coding, there are a few processing steps for the quantized coefficients. Note that the DC coefficient is treated separately from the 63 AC coefficients. The DC coefficient is a measure of the average value of the 64 image samples. Because there is usually strong correlation between the DC coefficients of adjacent 8x8 blocks, the quantized DC coefficient is encoded as the difference from the DC term of the previous block in the encoding order, a scheme called Differential Pulse Code Modulation (DPCM):
                     DiffDC(i) = DC(i) - DC(i-1)

where i is the block index and DC(0) = 0.

DPCM can usually achieve further compression due to the smaller range of the coefficient values. The remaining AC coefficients are ordered into the "zig-zag" sequence, which helps to facilitate entropy coding by placing low-frequency coefficients before high-frequency coefficients.
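The DPCM rule and the zig-zag ordering can both be sketched in a few lines of Python (an illustrative model, not the VHDL implementation):

```python
def zigzag_order(n=8):
    """Index pairs (u, v) of an n x n block in zig-zag order.
    Coefficients on the same anti-diagonal share u + v; alternate
    diagonals are traversed in opposite directions."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def dpcm_encode(dc_values):
    """DiffDC(i) = DC(i) - DC(i-1), with DC(0) taken as 0."""
    prev, diffs = 0, []
    for dc in dc_values:
        diffs.append(dc - prev)
        prev = dc
    return diffs

order = zigzag_order()
print(order[:4])                      # [(0, 0), (0, 1), (1, 0), (2, 0)]
print(dpcm_encode([12, 15, 14, 14]))  # [12, 3, -1, 0]
```

The small DPCM differences (3, -1, 0 versus the raw values 12-15) show where the extra compression comes from; the zig-zag list starts at the DC position (0, 0) and walks toward the high-frequency corner.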
 
Now the outputs of DPCM and "zig-zag" scanning can be encoded separately by entropy coding, which encodes the quantized DCT coefficients more compactly based on their statistical characteristics.
Entropy coding can be considered a two-step process. The first step converts the zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. The second step converts the symbols to a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols depend on both the DCT-based mode of operation and the entropy coding method.
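As a simplified sketch of step one, the following Python snippet turns a zig-zag AC sequence into (zero-run, coefficient) symbol pairs. This is a reduced model: JPEG's actual intermediate symbols also carry a SIZE category and special run-of-16 markers, which are omitted here:

```python
def ac_symbols(zz_coeffs):
    """Convert a zig-zag-ordered AC sequence into (zero-run, value)
    pairs, with (0, 0) appended as an end-of-block marker.
    Simplified model of JPEG's intermediate-symbol step."""
    symbols, run = [], 0
    for c in zz_coeffs:
        if c == 0:
            run += 1          # count zeros preceding the next nonzero
        else:
            symbols.append((run, c))
            run = 0
    symbols.append((0, 0))    # end-of-block: only zeros remain
    return symbols

print(ac_symbols([5, 0, 0, -2, 0, 0, 0, 1, 0, 0]))
# [(0, 5), (2, -2), (3, 1), (0, 0)]
```

Because the zig-zag scan pushes the (mostly zero) high-frequency coefficients to the end, long zero runs collapse into a single end-of-block symbol, which is where much of the compaction happens.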

2.2 Decoding

The following block diagram shows the decoding processing steps:

1. Entropy Decoding

Similar to entropy coding, entropy decoding can also be considered a two-step process.
The first step converts the input bit stream into the intermediate symbols. The second step converts the intermediate symbols into the quantized DCT coefficients. In fact, the output of the second step consists of the DC differences (the output of DPCM) and the AC coefficients after the zig-zag scan. The DC difference is therefore decoded into the quantized DC coefficient, and the AC coefficients are reordered into their original order.

2. De-Quantization

The next step is to de-quantize the output of entropy decoding, returning the result to a representation appropriate for input to the IDCT. The equation is as follows:

F'(u,v) = FQ(u,v) * Q(u,v)
3. Inverse Discrete Cosine Transform (IDCT)

The last step of the decoder is the IDCT. It takes the 64 quantized DCT coefficients and
reconstructs a 64-point output image signal by summing the basis signals. JPEG does not specify a unique IDCT algorithm in its standard either. The mathematical definition of the 8x8 IDCT is as follows:

f(x,y) = (1/4) SUM(u=0..7) SUM(v=0..7) C(u) C(v) F(u,v) cos[(2x+1)u*pi/16] cos[(2y+1)v*pi/16]

where C(k) = 1/sqrt(2) for k = 0 and C(k) = 1 otherwise.
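The reconstruction-by-summing-basis-signals view can be checked with a naive Python reference implementation of the IDCT (again for verification, not the synthesized datapath):

```python
import math

def idct_8x8(coeffs):
    """Naive 8x8 inverse DCT: each output sample is the weighted sum
    of the 64 cosine basis signals, as in the definition above."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for x in range(8):
        for y in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    s += (c(u) * c(v) * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[x][y] = 0.25 * s
    return out

# A lone DC coefficient of 80 reconstructs a flat block of 10s,
# the inverse of the forward-DCT example in the encoding section:
coeffs = [[0.0] * 8 for _ in range(8)]
coeffs[0][0] = 80.0
block = idct_8x8(coeffs)
print(round(block[3][5], 3))  # 10.0
```

Since the DC basis signal is constant, a block whose only nonzero coefficient is the DC term comes back as a uniform block, which is a convenient first test vector for an FPGA IDCT core.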
