H.261

H.261 is a 1990 ITU video coding standard originally designed for transmission over ISDN lines, on which data rates are multiples of 64 kbit/s. The coding algorithm was designed to operate at bit rates between 40 kbit/s and 2 Mbit/s. The standard supports CIF and QCIF video frames with luma resolutions of 352x288 and 176x144 respectively (and 4:2:0 sampling with chroma resolutions of 176x144 and 88x72, respectively). It also has a backward-compatible mechanism for sending still-picture graphics with 704x576 luma resolution, which was added in a later revision around 1994.
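Under 4:2:0 sampling, the chroma resolutions above follow directly from halving the luma dimensions, and each format divides evenly into 16x16 macroblocks. A small sketch (plain Python, illustrative only):

```python
# Frame geometry for the two H.261 picture formats (luma width x height).
FORMATS = {"CIF": (352, 288), "QCIF": (176, 144)}

for name, (w, h) in FORMATS.items():
    cw, ch = w // 2, h // 2      # 4:2:0: Cb/Cr halved in both dimensions
    mbs = (w // 16) * (h // 16)  # 16x16 luma macroblocks per frame
    print(f"{name}: luma {w}x{h}, chroma {cw}x{ch}, {mbs} macroblocks")
```

This gives 396 macroblocks per CIF frame and 99 per QCIF frame.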

H.261 was the first practical digital video coding standard. The H.261 design was a pioneering effort, and all subsequent international video coding standards (MPEG-1, MPEG-2/H.262, H.263, and even H.264) have been based closely on its design. Additionally, the methods used by the H.261 development committee (led by Sakae Okubo) to collaboratively develop the standard have remained the basic operating process for subsequent standardization work in the field. The coding algorithm uses a hybrid of motion-compensated inter-picture prediction and spatial transform coding with scalar quantization, zig-zag scanning, and entropy coding.

The basic processing unit of the design is called a macroblock. Each macroblock consists of a 16x16 array of luma samples and two corresponding 8x8 arrays of chroma samples using 4:2:0 sampling and a YCbCr color space.
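As a quick back-of-the-envelope check (nothing here beyond the numbers stated above):

```python
# Sample budget for one macroblock under 4:2:0 YCbCr sampling.
luma_samples = 16 * 16      # one 16x16 Y array, coded as four 8x8 blocks
chroma_samples = 2 * 8 * 8  # one 8x8 Cb array and one 8x8 Cr array
total = luma_samples + chroma_samples
print(total)  # 384 samples per macroblock
```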

The inter-picture prediction removes temporal redundancy, with motion vectors used to help the codec compensate for motion. Transform coding using an 8x8 discrete cosine transform (DCT) removes the spatial redundancy. Scalar quantization is then applied to round the transform coefficients to the appropriate precision, and the quantized transform coefficients are zig-zag scanned and entropy coded (using a Run-Level variable-length code) to remove statistical redundancy.
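The transform, quantization, scan, and run-level stages can be sketched as below. This is an illustrative toy, not the normative H.261 process: the DCT is a naive orthonormal implementation, the quantizer is a flat divide-and-round rather than H.261's actual quantizer, and the (run, level) pairs stand in for the real variable-length code tables.

```python
import math

def dct_8x8(block):
    """Naive orthonormal 8x8 2-D DCT-II (illustrative, O(n^4))."""
    c = lambda k: math.sqrt(0.5) if k == 0 else 1.0
    return [[0.25 * c(u) * c(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for x in range(8) for y in range(8))
             for v in range(8)] for u in range(8)]

def quantize(coeffs, step):
    """Flat scalar quantization: round each coefficient to a multiple of step."""
    return [[round(c / step) for c in row] for row in coeffs]

def zigzag(block):
    """Read an 8x8 block in zig-zag order (low frequencies first)."""
    order = sorted(((u, v) for u in range(8) for v in range(8)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[u][v] for u, v in order]

def run_level(scan):
    """(run-of-zeros, level) pairs for the nonzero coefficients."""
    pairs, run = [], 0
    for c in scan:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs

# A flat block quantizes to a single nonzero DC coefficient.
flat = [[100] * 8 for _ in range(8)]
pairs = run_level(zigzag(quantize(dct_8x8(flat), step=8)))
print(pairs)  # [(0, 100)] -- DC = 8 * 100 = 800, quantized by 8
```

In a real encoder this pipeline is applied to the prediction residual (the difference between the current block and its motion-compensated prediction), not to raw samples.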

The H.261 standard actually only specifies how to decode the video. Encoder designers were left free to design their own encoding algorithms, as long as their output was constrained properly to allow it to be decoded by any decoder made according to the standard. Encoders are also left free to perform any pre-processing they want on their input video, and decoders are allowed to perform any post-processing they want on their decoded video prior to display. One effective post-processing technique that became a key element of the best H.261-based systems is called deblocking filtering. This reduces the appearance of annoying block-shaped artifacts caused by the block-based motion compensation and spatial transform parts of the design. Indeed, blocking artifacts are probably a familiar phenomenon to almost everyone who has watched digital video. Deblocking filtering has since become an integral part of the most recent standard, H.264 (although even when using H.264, additional post-processing is still allowed and can enhance visual quality if performed well).
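As a rough illustration of the idea (not the filter from any actual standard; real deblocking filters, such as H.264's in-loop filter, adapt their strength to coding modes and local gradients), a minimal one-dimensional smoothing across block boundaries might look like:

```python
def deblock_1d(samples, block_size=8):
    """Toy deblocking: soften the two samples straddling each block
    boundary in a 1-D row of pixel values. Illustrative only."""
    out = list(samples)
    for edge in range(block_size, len(out), block_size):
        a, b = out[edge - 1], out[edge]
        delta = (b - a) // 4            # pull both sides toward the edge mean
        out[edge - 1] = a + delta
        out[edge] = b - delta
    return out

# A hard step at the 8-sample block boundary gets smoothed.
row = [100] * 8 + [140] * 8
print(deblock_1d(row)[6:10])  # [100, 110, 130, 140]
```

A production filter would also avoid smoothing across genuine image edges, which this toy version cannot distinguish from coding artifacts.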

The refinements introduced in later standardization efforts have resulted in significant improvements in compression capability relative to the H.261 design. This has resulted in H.261 becoming essentially obsolete, although it is still used as a backward-compatibility mode in some video conferencing systems and for some types of internet video. However, H.261 remains a major historical milestone in the development of the field of video coding.

src: http://en.wikipedia.org/wiki/H.261
Created by: muppetmaster, Last modification: Sun 06 of Mar, 2005 (12:14 UTC)