
Appendix A - Filter and Data Rate Control Details

Adaptive Noise Reduction | Deinterlacing | Using the Static Mask | Adaptive Data Rate Control

Adaptive Noise Reduction

The Adaptive Noise Reduction filter can make a dramatic improvement in your final compressed movies. It selectively blurs flat fields and removes stray pixels but does not alter edge detail. This produces an image that compresses better, but does not look "fuzzy."

Media Cleaner has a few default settings, but also lets you customize how the Adaptive Noise Reduction filter is applied. To set this feature manually and effectively, it helps to have a basic understanding of what the filter's options are doing.

How It Works
The Adaptive Noise Reduction filter looks at each pixel in your image, and compares it to the surrounding pixels. The "flat field" option looks for very small differences between pixels and their surroundings. The "stray pixels" option looks for large differences between single pixels and their surroundings. Anything that doesn't fall into the "stray pixels" or "flat field" noise category is considered normal and left alone.

The following examples illustrate how the filter identifies and deals with various types of pixels. In each case, the center pixel of a 3x3 neighborhood is the one being analyzed relative to the eight pixels surrounding it.



Normal Pixels

The center pixel is compared to the surrounding pixels and found to be within an acceptable difference - it is similar enough not to be a stray, and different enough not to be flat field noise. A normal pixel is left unaltered.


Flat Field Noise

BEFORE - the center pixel is found to be extremely close to the surrounding pixels but not identical.


AFTER - The center pixel is replaced with an average (mean) of all of the surrounding pixels to make it as similar as possible.


Stray Pixels

BEFORE - the center pixel is found to be significantly different from the surrounding pixels.


AFTER - the center pixel is replaced with the "most typical" (median) pixel from the surrounding pixels.
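For readers who prefer code, the classification logic described above can be summarized in a short sketch. This is not Media Cleaner's actual algorithm: the percentage-difference measure, the function name, and the parameter defaults are assumptions made for illustration. The two thresholds correspond to the Custom option described in the next section, and the iterations parameter to the Iterations option further below.

    import numpy as np

    def adaptive_noise_reduction(frame, flat_field_pct=5, stray_pct=95, iterations=1):
        """Sketch of the adaptive noise reduction described above.

        frame is a 2-D greyscale array with values 0-255. The thresholds are
        percentages, as in the Custom option; they are assumed defaults here.
        """
        for _ in range(iterations):
            out = frame.astype(np.float64)
            h, w = frame.shape
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    block = frame[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
                    neighbors = np.delete(block.flatten(), 4)  # the 8 surrounding pixels
                    center = float(frame[y, x])
                    # One plausible "percentage difference" between a pixel and
                    # its surroundings; the real filter may measure this differently.
                    diff_pct = abs(center - neighbors.mean()) / 255.0 * 100.0
                    if diff_pct >= stray_pct:
                        out[y, x] = np.median(neighbors)  # stray pixel: use the median
                    elif diff_pct <= flat_field_pct:
                        out[y, x] = neighbors.mean()      # flat field noise: use the mean
                    # otherwise the pixel is "normal" and left unaltered
            frame = out
        return frame.astype(np.uint8)

Border pixels are skipped in this sketch simply because they do not have eight neighbors.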




Setting the Custom Option

You may select what percentage difference between the center pixel and the reference pixels will cause a pixel to be treated as a "stray pixel" or as "flat field" noise. This lets you "fine-tune" what pixels are being modified and what is being left untouched. The number you enter is a percentage. Any pixel that does not fall into either percentage set for "stray pixel" or "flat field" noise will be considered normal, and left unmodified.

For example, if you set the "stray pixel" difference to 95, any pixel that differs from the surrounding pixels by 95% or more is treated as "stray." If you set the "flat field" noise difference to 5, any pixel that differs from the surrounding pixels by 5% or less is considered "flat field" noise. With these two settings, any pixel that differs by more than 5% but less than 95% is "normal" and left unmodified.

NOTE: If you set the "stray pixel" difference to 0, all the pixels in the image will be considered "stray" and have a median filter applied to them. Likewise, if you set the "flat field" noise difference to 100, all the pixels will be considered "flat field" noise, and will have the mean filter applied to them.

Generally, you don't want to set both difference settings to the same number. For example, if you set "stray pixels" to 50 and "flat field" noise to 50, you would effectively be telling the filter that there are no normal pixels at all in the image: everything up to 50% different would be treated as "flat field" noise, everything at least 50% different would be treated as "stray pixels," and nothing would be treated as "normal."
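To make these boundaries concrete, the snippet below (using the same hypothetical percentage-difference measure as the sketch earlier) classifies a few difference values against the two thresholds, including the 50/50 case just described:

    def classify(diff_pct, flat_field_pct, stray_pct):
        """Classify a pixel by its percentage difference from its surroundings."""
        if diff_pct >= stray_pct:
            return "stray pixel (median applied)"
        if diff_pct <= flat_field_pct:
            return "flat field noise (mean applied)"
        return "normal (left unmodified)"

    # The 95 / 5 example from the text: only the extremes are touched.
    for d in (2, 50, 97):
        print(d, "->", classify(d, flat_field_pct=5, stray_pct=95))

    # Setting both thresholds to 50 leaves no "normal" pixels at all.
    for d in (10, 50, 90):
        print(d, "->", classify(d, flat_field_pct=50, stray_pct=50))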




Iterations

You can specify how many times the Adaptive Noise Reduction filter is applied with the "Iterations" option. For most movies, one application is sufficient. For very noisy source material, or for low data rate movies, multiple applications may be preferable. More than five applications usually creates odd visual artifacts and significantly slows down processing.




When the Noise Reduction is Applied

You can apply the Adaptive Noise Reduction filter before and/or after the video has been scaled to the size you specify in the Video tab.

Generally speaking, applying the filter after the scale makes the noise reduction most effective. However, if you have stray pixels in your source, you should apply the filter before the scale to more effectively remove them. (Stray pixels get averaged into the surrounding pixels during the scaling, and are thus harder to remove after the scale).
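Here is a small numeric illustration of why stray pixels are harder to remove after the scale. The values and the simple 2x2 box-average scaler are made up for this example; they are not Media Cleaner's actual scaling code:

    import numpy as np

    # A flat 4x4 patch with one bright stray pixel.
    patch = np.full((4, 4), 20.0)
    patch[1, 1] = 220.0

    # Before scaling, the stray stands out clearly against its neighbors.
    print("before scale:", patch[1, 1] - patch[0, 0])    # 200.0

    # A 2x2 box-average downscale blends the stray into its neighbors.
    scaled = patch.reshape(2, 2, 2, 2).mean(axis=(1, 3))
    print("after scale: ", scaled[0, 0] - scaled[1, 1])  # 50.0, much less distinct

After scaling, the stray has been smeared across a whole block, so a stray-pixel threshold that caught it easily before the scale may no longer trigger.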

As with many filter details, experimentation using the Dynamic Preview as a reference is usually helpful.




Tips for Filter Use

Different source material often has different noise characteristics. The default settings will normally give you good results, but careful tuning of the parameters to your specific video may give better results. Experimentation is the key to setting these parameters. It's usually best to make fairly large changes at first and then back off once you "overshoot", rather than slowly "inching up" on your target. The Dynamic Preview is very useful in determining how well your settings are working.




Blur vs. Adaptive Noise Reduction

There is definitely some overlap in uses of the blur filter and Adaptive Noise Reduction. Both are useful to minimize noise, but generally the Adaptive Noise Reduction is preferable because it does not produce "fuzzy" images.

If you set both a blur and the Adaptive Noise Reduction, the blur will be applied after the Noise Reduction.

Experimentation is the key to determining the right level and mix of video filters.




Deinterlacing

What is interlacing?
"Interlacing" is an artifact that arises from the fact that each NTSC video frame consists of two images known as "fields." Each field is only half of the image. Half of the alternating field lines contain image data, and the other half are blank.

Every 1/60th of a second, the television draws one of the two fields, alternating between them. For example, during the first 1/60th of a second, the even lines might contain the image and the odd lines would be black; during the next 1/60th of a second, the odd lines would contain the image and the even lines would be black. Our eyes put the two alternating fields together to create 30 whole frames per second, so on a television screen we don't normally notice the interlaced nature of the display.

Interlacing was originally designed to compensate for the fact that early televisions had a hard time redrawing the whole screen fast enough and tended to "flicker". Interlacing solved the flicker problem by requiring the TV to only draw half as much data at a time. In comparison, computer monitors are non-interlaced, which means they draw the whole image with each refresh of the screen.

Most full-screen capture cards translate the analog 60-field-per-second interlaced video signal into a 30-frame-per-second non-interlaced image. They do this by combining ("interleaving") the even and odd fields to create a single image for each frame. Changing from an interlaced to a non-interlaced image makes the interlacing artifacts more noticeable, as explained below.
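As a rough sketch of this weaving step (the function name and the assumption of two half-height field arrays are made up for illustration, not a capture card API):

    import numpy as np

    def weave_fields(even_field, odd_field):
        """Interleave two half-height fields into one full-height frame.

        even_field supplies lines 0, 2, 4, ... and odd_field lines 1, 3, 5, ...
        Because the two fields were captured 1/60th of a second apart, moving
        objects show comb-like interlace lines in the combined frame.
        """
        h, w = even_field.shape
        frame = np.empty((h * 2, w), dtype=even_field.dtype)
        frame[0::2] = even_field
        frame[1::2] = odd_field
        return frame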




Why is interlacing a problem?
The problem arises from the fact that the two fields that make one video frame have slight differences between them because they were taken 1/60th of a second apart. Areas that have a large amount of movement often become separated into alternating lines. This looks similar to a motion blur, except that the even and odd lines are apparent.

Interlacing artifacts look strange on a non-interlaced computer monitor, so you should normally remove them with the "Deinterlace" feature. The deinterlace feature has various modes to handle the slightly different interlacing artifacts created by your movie's original source, either video or film.




When to Use Blend, Odd, or Even Deinterlacing

Video Source

If your source was shot on video, and is relatively static, you generally should use "Even" or "Odd", as this gives you a sharper image. If your video has lots of fast motion, the "Blend" option may give you a better result by maintaining the "motion blur" effect of the original interlacing. You should experiment to find the setting you like best.

NOTE: Due to some math tricks, "Accurate" scaling is faster with the "Even" or "Odd" options than with the "Blend" option.

Film Source
If your source was originally shot on film, and then translated to video, you should use "Blend" to minimize the effect of the 3/2 pulldown that was introduced during conversion.

Introducing a "pulldown" is the process by which new frames are created to compensate for the different frame rates between video and film. Since film is 24 fps and NTSC video is about 30 fps, 6 extra frames are added per second to make the final frame rate a full 30 fps. For NTSC video, this is called a 3/2 pulldown because during the transfer of film to video, alternating film frames are recorded to two and then three video fields. The diagram below illustrates this conversion.

In video, the new pulldown frames have strong interlace lines in areas of rapid movement. The effect is similar to normal interlacing, only more pronounced. On an interlaced television screen this effect is not objectionable. However, on a non-interlaced computer monitor this effect looks strange, and should be removed. "Blend" blurs the interlace lines to create a combined image that usually looks acceptable in the final movie.

NOTE: Depending on your final frame rate, using the "Odd" or "Even" option may create duplicate frames. This is because some of the pulldown frames are half of the previous frame and half of the next frame; removing one field can leave the pulldown frame identical to the previous or the next frame, so you may get two identical frames in a row. With NTSC video, this may happen if your final frame rate is higher than 15 fps.
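A minimal sketch of the three modes discussed in this section, under the assumption that "Even" and "Odd" keep one field and duplicate its lines while "Blend" replaces both fields with their average; Media Cleaner's exact implementation may differ:

    import numpy as np

    def deinterlace(frame, mode="blend"):
        """Sketch of Blend / Even / Odd deinterlacing on a full woven frame.

        Assumes the frame has an even number of lines.
        """
        even = frame[0::2].astype(np.float64)   # lines 0, 2, 4, ...
        odd = frame[1::2].astype(np.float64)    # lines 1, 3, 5, ...
        out = np.empty_like(frame, dtype=np.float64)
        if mode == "even":          # keep the even field, duplicate its lines
            out[0::2] = even
            out[1::2] = even
        elif mode == "odd":         # keep the odd field, duplicate its lines
            out[0::2] = odd
            out[1::2] = odd
        else:                       # "blend": average the two fields together,
            blended = (even + odd) / 2.0        # keeping a motion-blur look
            out[0::2] = blended
            out[1::2] = blended
        return out.astype(frame.dtype)

With a static shot, "even" or "odd" gives the sharper single-field result described above; with fast motion, "blend" keeps the softer, motion-blurred look.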




Dealing with "Video Stutter"

If you use the "Odd" or "Even" deinterlacing mode and get a slight "stutter" in the video, you may be seeing frame duplication as described above. To double-check for frame duplication, step through the video one frame at a time with the arrow keys to look for identical frames.

If you notice duplicated frames, try using the "Blend" deinterlacing option to see if it produces better results.




Using the Static Mask

(formerly the "Talking Heads filter")

The Static Mask is an option that lets you define static zones in the video; within those zones, the first frame is composited over every subsequent frame of the movie. This helps eliminate video noise in areas that shouldn't change, and therefore improves compression.


Why use it?
People often ask why they should use the Static Mask on an area if the area wasn't changing in the first place. The answer is that even static parts of the video are actually changing to some degree due to video noise. Depending on how noisy the signal is, this updating of supposedly "unchanging" areas can be very significant. The Static Mask keeps pixels in the defined areas from changing at all, which greatly improves the codec's ability to compress the overall image.

For example, if you had a newscaster in front of a static background, that background should be exactly the same between each frame and should take good advantage of temporal compression. However, unless you use a blue screen and then composite in a digital still behind the newscaster, the background has some random pixels in it due to the noise inherent in all video systems. Higher quality video cameras usually have less noise, which is why Betacam-SP compresses better than VHS.

To remove this noise, the Static Mask lets you define the areas that should be exactly the same throughout the movie, and then composites the first frame into this zone to remove the pixel noise.
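Conceptually, the compositing works something like the sketch below. This is an illustration rather than Media Cleaner's code; the mask is assumed to be a boolean array in which True marks the static zones:

    import numpy as np

    def apply_static_mask(frames, static_mask):
        """Composite the first frame into the masked zones of every frame.

        frames: a sequence of arrays, all the same shape.
        static_mask: boolean array, True where the image should never change.
        """
        reference = frames[0]
        output = []
        for frame in frames:
            composited = frame.copy()
            # In the static zones every frame gets the first frame's pixels,
            # so the codec sees no change at all there from frame to frame.
            composited[static_mask] = reference[static_mask]
            output.append(composited)
        return output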

NOTE: The Static Mask works best with "talking heads" type of video (a person talking into the camera, usually in front of a backdrop) which is why it used to be called the "Talking Heads filter". Unless you want unusual effects, you should only use the filter with movies that have no camera movement and areas that are identical throughout the entire clip.




Making the Mask Image

To use the filter, you must create a greyscale mask image. It is recommended that you do this in an image editing program, such as Adobe Photoshop.

To help align the mask with the movie, copy a sample frame from the movie and paste it into a new file in your image software. If your program supports layers, as Adobe Photoshop does, simply create a new layer and paint the areas that you don't want to change with a black brush. Save only the mask layer as a PICT file, and then use this image for the Static Mask.


Static Mask Tips

Don't make the mask too close to the changing subject - if the subject moves into the static zone, they will be "cut off." Also, a feathered edge between the zones makes the transition less noticeable than a hard edge.

Media Cleaner will open any PICT file (color, greyscale, 1-bit) but will dither it to 1-bit. If you set a "mild" blur in the Video tab, the sharp edges and dithering of the mask will be minimized.

You can also use the Static Mask for compositing effects such as watermarking and placing frames around movies. Please check out the Tips section on our WWW site for more ideas on how to use the Static Mask. Just select "Terran's WWW site" from the Internet menu, and click on the "Tips & Info" button.




If the Image Still Changes After the Static Mask

The Static Mask preprocesses the image prior to compression; the selected codec does the actual compression of your image. Many codecs, including Cinepak, generate new keyframes in such a fashion that even areas that are identical throughout the whole movie may change on the keyframe. When using these codecs, you may experience some pixel movement in the static areas during the keyframes.

There is no perfect solution to this, since the effect occurs within the codec, and is not controllable by Media Cleaner. To minimize this pixel movement, you can make your keyframes farther apart, or turn them off entirely in the Compress tab.

NOTE: Minimizing or removing the keyframes will normally create a movie that works fine when played from start to end. However, random access will be very slow. Some experimentation and testing is often useful in dealing with this behavior.




Adaptive Data Rate Control

In Media Cleaner Pro 2.0, the adaptive data rate control works with the following codecs: Video, Animation, and Photo-JPEG. These codecs have rarely been used for WWW and CD-ROM projects because they aren't data rate limited. Media Cleaner's data rate control now makes these previously "unlimited" codecs useful for multimedia projects. Please see "Codec Central" in the Media Cleaner Internet menu for more details of how these codecs may be applicable to certain projects.

Unfortunately, the current version of Cinepak has some issues that block Media Cleaner's adaptive data rate control; we are working to address these in a future release. Terran is also working closely with the new codec vendors to ensure that adaptive data rate control will work properly with their new products when they are released in mid- to late 1997.




How Adaptive Data Rate Control Works

The adaptive mode uses a two-pass approach to control the data rate of your movie. First, it analyzes the whole movie to determine the best data rate model for it, taking into account the actual video, the preprocessing filters, and the compression parameters. Second, the movie is compressed to the specifications created in the analysis, using recompression of frames to meet the correct data rates. Since Media Cleaner is aware of what the QuickTime buffers can handle at any time, some parts of the movie may have a higher or lower data rate than specified.

Because the movie is analyzed first and each frame is then compressed to a particular size, this process is significantly slower than the "Basic" method of compression. However, for difficult-to-compress movies, it often produces better results.
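In outline, a two-pass scheme of this kind looks like the sketch below. The analyze, compress, and size_of callables are hypothetical stand-ins for work done inside Media Cleaner and the selected codec, and the quality-reduction loop is just one simple way to recompress a frame down to its budget:

    def two_pass_compress(frames, target_rate, analyze, compress, size_of):
        """Sketch of two-pass adaptive data rate control.

        analyze(frames, target_rate) -> per-frame byte budgets (the data rate
            model), higher for hard frames and lower for easy ones, averaging
            out to the requested rate
        compress(frame, quality)     -> compressed frame data at a given quality
        size_of(data)                -> size of the compressed data in bytes
        """
        # Pass 1: analyze the whole movie to build the per-frame budget.
        budgets = analyze(frames, target_rate)

        # Pass 2: compress each frame, recompressing at lower quality until it
        # fits its budget. (The real scheme also allows short overshoots,
        # because the QuickTime playback buffers can absorb brief spikes.)
        movie = []
        for frame, budget in zip(frames, budgets):
            quality = 1.0
            data = compress(frame, quality)
            while size_of(data) > budget and quality > 0.05:
                quality *= 0.8            # try again at a lower quality
                data = compress(frame, quality)
            movie.append(data)
        return movie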




When to Use It

The improvement in image quality is most noticeable in movies with mixed easy- and hard-to-compress sections, as described below. In movies that have a constant level of action, or are very static, you may not notice a significant improvement with adaptive data rate control compared to the other forms.

If you have a Cinepak movie that consistently compresses with too low a data rate in the "Basic" data rate mode, the Video codec with adaptive data rate control may be a better option.




Are those "spikes" supposed to be there?

If you analyze files that were created with Adaptive data rate control, they may have large "data spikes" that significantly exceed the average data rate you specified. Many analysis programs will mark these movies as "unplayable".

This is normal and expected. The adaptive data rate control takes into account the buffers in QuickTime to allow for these spikes; the final movie will play properly.

This type of intentional spike may make many developers skeptical. For a long time, everyone has operated under the assumption that a flat data rate is good - the flatter the better. In most cases, this assumption is wrong.

The trick is to make sure the data rate never exceeds what the computer can handle at any given time, taking into account the way the buffers handle spikes. Prior to Media Cleaner Pro 2.0, there were no programs capable of doing this; Media Cleaner now offers developers access to a new model of data rate control.




Why is a "flat" data rate bad and controlled "spikes" good?

Flat data rates are bad because they "waste" precious data on static parts of the movie that don't need it, and don't save this data for the tough parts of the movie that could benefit from a temporary increase in data rate.

Let's look at an example of a scene in which a newscaster is sitting in front of a static background, then cuts to a hand-held camera shot of the story he was narrating, and then back to the newsroom. The first scene of the newscaster in front of an unchanging backdrop is easy to compress, because there is little change. Maybe only 100 KBps is required to get good image quality. The next scene of the hand-held camera shot is really hard, with massive changes between each frame. Perhaps 250 KBps is needed to compress this scene well. Finally, the shot of the newscaster back in the newsroom is easy, again needing only 100 KBps.

Adaptive data rate control analyzes this whole sequence first and then determines that it should "save" the data from the two easy scenes for the one hard scene in the middle. Since the hard scene is short, the adaptive data rate control may actually be able to use the whole 250 KBps for a very short time, and then fall back to a much lower data rate for the final static scene. The result: good image quality throughout the movie, even in short, hard-to-compress sections.

A flat data rate model, on the other hand, unintelligently forces all three scenes to be a fixed rate, perhaps 180 KBps. For the easy scenes, the additional data doesn't really improve the image because these sections are already good enough at 100 KBps. However, for the hand-held scene, only 180 KBps is available. The result: poor image quality in the hard parts, and minimal improvements in the easy parts.
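The arithmetic behind this comparison is easy to check. The scene lengths below are invented to keep the numbers simple:

    # Hypothetical scene lengths (seconds) and the data rates (KBps) each scene
    # needs to look good, from the newscaster example above.
    scenes = [("newsroom", 20, 100), ("hand-held", 5, 250), ("newsroom", 20, 100)]

    total_seconds = sum(length for _, length, _ in scenes)
    needed_kb = sum(length * rate for _, length, rate in scenes)

    # The average rate an adaptive allocation needs to deliver all three scenes:
    print(needed_kb / total_seconds)    # about 116.7 KBps

    # A flat 180 KBps stream spends far more data in total...
    print(180 * total_seconds)          # 8100 KB
    print(needed_kb)                    # 5250 KB for the adaptive allocation
    # ...yet still starves the hand-held scene, giving it 180 KBps where it
    # needs 250 KBps to look good.

Even though the flat stream uses more data overall, it wastes that data on the easy newsroom scenes and cannot lend any of it to the hard hand-held scene, which is exactly the problem the adaptive allocation avoids.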
