Sony Ericsson K610im Cell Phone User Manual


 
Formatting video for local playback
Resolution
Both visual and audible information are naturally analogue. To be stored digitally, this analogue information must be sampled, and the number of samples dictates the resolution of the digital copy: more samples give higher resolution and better definition. For the purposes of this document, only four digital sample resolutions are relevant. The first is the visual resolution of the video stream, measured in pixels as width by height and comparable to the resolution of your computer display. A higher pixel resolution permits more detailed images. The visual resolution also dictates the aspect ratio of the video, the ratio of width to height, typically 16:9 for widescreen video and 4:3 for standard video.
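As an illustration, the short Python sketch below reduces a pixel resolution to its aspect ratio; the 320x240 and 1280x720 sizes are example values chosen for demonstration, not settings taken from this manual.

    # Reduce a pixel resolution (width x height) to its aspect ratio.
    from math import gcd

    def aspect_ratio(width, height):
        d = gcd(width, height)
        return f"{width // d}:{height // d}"

    print(aspect_ratio(320, 240))    # 4:3  (a common standard-definition size)
    print(aspect_ratio(1280, 720))   # 16:9 (a common widescreen size)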
The second is the temporal resolution of the video stream: the frame rate, measured in frames per second (fps). It affects the perceived smoothness of motion in the video. With the aid of motion blur and similar effects that exploit how our eyes perceive movement, 24 fps is the minimum needed to produce fluid motion in any scene, and it is the standard for feature films. Slightly lower frame rates, down to around 20 fps, still look smooth for many scenes; below that, motion appears increasingly staggered to the viewer.
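As a small worked example (the 30-second clip length is an assumed value), the sketch below shows how the frame rate translates into the total number of frames in a clip and the time each frame remains on screen.

    # Total frames in a clip and the interval between frames at a given rate.
    def frame_stats(fps, duration_s):
        total_frames = fps * duration_s
        frame_interval_ms = 1000 / fps
        return total_frames, frame_interval_ms

    for fps in (15, 20, 24):
        frames, interval = frame_stats(fps, 30)
        print(f"{fps} fps: {frames} frames, {interval:.1f} ms per frame")
    # 15 fps: 450 frames, 66.7 ms per frame
    # 20 fps: 600 frames, 50.0 ms per frame
    # 24 fps: 720 frames, 41.7 ms per frame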
The third resolution of importance is the temporal resolution of the audio stream, termed the sample rate and measured in kilohertz (kHz). It governs the maximum sound frequency (that is, pitch) the digital audio stream can contain: to reproduce a sound in a digital copy, the sample rate must be at least double the frequency of that sound. 44.1 kHz and 48 kHz are the main sample rate standards in multimedia video, covering frequencies beyond the upper audible limit of about 20 kHz.
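The doubling rule above can be expressed directly; the sketch below simply doubles the highest frequency to be reproduced, confirming that 44.1 kHz and 48 kHz both exceed the 40 kHz required to cover the audible range.

    # Minimum sample rate needed to reproduce a given frequency (the doubling rule).
    def min_sample_rate(max_frequency_hz):
        return 2 * max_frequency_hz

    upper_audible_hz = 20_000                  # approximate upper limit of human hearing
    print(min_sample_rate(upper_audible_hz))   # 40000 Hz, so 44.1 kHz and 48 kHz both suffice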
Finally, there is the directional resolution of the audio stream, measured by the number of channels present in the audio. A single channel is a mono recording, while two channels afford stereo sound. Increasing the channel count further allows fully directional surround sound; for example, 5.1 surround sound uses six channels (five full-range channels plus a low-frequency effects channel) in a digital audio stream.
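As an illustrative calculation (the 16-bit sample depth below is an assumption typical of CD-quality audio, not a value from this manual), the uncompressed data rate of an audio stream grows in direct proportion to the number of channels.

    # Uncompressed (PCM) audio data rate for a given channel count.
    def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels):
        return sample_rate_hz * bits_per_sample * channels / 1000

    print(pcm_bitrate_kbps(44_100, 16, 1))   # mono:   705.6 kbps
    print(pcm_bitrate_kbps(44_100, 16, 2))   # stereo: 1411.2 kbps
    print(pcm_bitrate_kbps(44_100, 16, 6))   # 5.1:    4233.6 kbps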
Compression
To reduce storage requirements and improve portability, the data streams in digital video are usually compressed. Each stream is passed through a compression algorithm, referred to as a codec (COmpressor/DECompressor). Separate codecs are used for the video and audio streams, as each has quite different characteristics.
The act of compressing a stream is known as encoding. To play back a compressed stream, it must be decoded, so the playback hardware must support decoding of the compression codec used. Converting an already compressed stream from one codec to another is termed transcoding.
Compression is usually 'lossy', meaning that some data is lost and the compressed stream is not a bit-perfect copy of the original. Whether the effect of this lost data is perceptible on playback depends greatly on the type and amount of compression used. Data loss from high compression levels can introduce perceptible noise and other distortions into a stream; these noticeable effects are termed compression artefacts.
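As a toy illustration only (this is not how any real codec works), coarsely quantising sample values shows where lossy compression loses detail and why heavier compression produces more visible or audible artefacts.

    # Toy 'lossy compression': store values on a coarse grid, then reconstruct.
    original = [12, 47, 95, 130, 201, 255]

    step = 32                                        # a coarser step means more compression and more loss
    compressed = [round(v / step) for v in original] # what gets stored
    reconstructed = [v * step for v in compressed]   # what playback recovers

    loss = [o - r for o, r in zip(original, reconstructed)]
    print(reconstructed)   # [0, 32, 96, 128, 192, 256]
    print(loss)            # [12, 15, -1, 2, 9, -1]  (the detail that was discarded)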
The amount of compression is set by the amount of data assigned to the stream, typically measured in kilobits per second (kbps) and referred to as the bit rate (the rate at which bits are assigned). Compression codecs often have sub-features that extend or improve their performance, and it is important to note that some playback devices do not support certain features of a codec. To make the compatibility of these features easier to manage and understand, most codecs have profiles and/or levels describing which features and parameters can be used.
For example, MPEG-4 video can be encoded in the Simple or Advanced Simple profile at levels 0 to 5, with Simple profile level 1 used for video on mobile phones and Advanced Simple profile levels used for most web video content. Another example is WMV video, where compatibility depends on the codec version, such as WMV8, WMV9, or WMV9.1.
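Returning to bit rate, a rough file-size estimate is useful when judging what will fit in a phone's available storage. The 384 kbps video, 96 kbps audio, and 10-minute duration below are assumed example values, not settings recommended by this manual.

    # Approximate file size: (video + audio bit rate) x duration, converted to megabytes.
    def file_size_mb(video_kbps, audio_kbps, duration_s):
        total_kilobits = (video_kbps + audio_kbps) * duration_s
        return total_kilobits / 8 / 1000   # kilobits -> kilobytes -> megabytes

    print(f"{file_size_mb(384, 96, 10 * 60):.1f} MB")   # a 10-minute clip at 384 + 96 kbps is about 36.0 MB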