PRIMER.DOC for IMPROCES(c).
Copyright John Wagner 1991-93. All rights reserved.
========================================================================
An Image Processing and VGA Primer
This article may only be distributed as part of the IMPROCES, Image
Processing Software package by John Wagner. IMPROCES is used for the
examples and it is assumed the reader has a copy of the program. This
article may not be reproduced in any manner without prior permission
from the author.
( 1) THE IMAGE AND THE SCREEN:
Images are represented as a series of points on a surface of varying
intensity. For example, on a monochrome or black and white photograph,
the points on the image are represented with varying shades of gray. On
a computer screen, these points are called pixels. Pixels on the screen
are mapped into a two-dimensional coordinate system that starts at the
top-left corner with the coordinate 0,0. The coordinates in the X
direction refer to pixels going in the right (horizontal) direction and
coordinates in the Y direction refer to pixels going in the down
(vertical) direction. The coordinates X,Y are used to define a specific
pixel on the screen. When a computer image is said to have a resolution
of 320x200x256, it means that the image has an X width of 320 pixels, a
Y length of 200 pixels and contains 256 colors.
Diagram 1: Pixel Mapping on 320x200 video screen:
0,0 319,0
X-------->
Y
|
|
V
0,199 319,199
( 2) COLOR:
A shade of gray is defined as having equal levels of Red, Green and Blue
(RGB). A color image, however, is represented as having points that are
represented by varying levels of RGB. NOTE: The color model of RGB is
just one way of representing color. Conveniently it is also the model
that computers use when representing colors on a screen. For that
reason, it will be the model we will use here!
When splitting up a color into its RGB components, it is common to use a
number between 0 and 1 (or percentage of total color) to represent the
color's RGB intensities. R=0, G=0, B=0 (usually shown as 0,0,0) would be
the absence of all color (black) and 1,1,1 would be full intensity for
all colors (white). .5,0,0 would be half red, while 0,.25,0 would
be one quarter green. Using this model, an infinite quantity of colors are
possible. Unfortunately, personal computers of this day and age are not
capable of handling an infinite quantity of colors and must use
approximations when dealing with color.
( 3) THE VGA:
With the advent of the VGA video subsystem for IBM PCs and compatibles,
Image Processing is now available to the home PC graphics enthusiast. It
will help to understand the limitations of the hardware we are working
with. The VGA can display 256 colors at one time out of a possible
262,144. The number 262,144 comes from the limitations of the VGA
hardware itself. The VGA (like many common graphics subsystems)
represents its 256 colors in a lookup table of RGB values so that the
memory where the video display is mapped need only keep track of one
number (the lookup value) instead of three (see diagram 2). The VGA
lookup table allows for 64 (0 to 63) levels of RGB for each color. This
means that 64 to the 3rd power (64^3 = 262,144) different colors are
possible. Because only 64 levels of RGB are possible, it is only
possible to represent 64 shades of gray with a VGA. Remember that a
shade of gray is defined as equal levels of RGB. There is also a
disparity in the common usage of values from 0 to 1 to define a level of
RGB. However, this problem is easily solved by dividing the VGA lookup
table RGB number by 64 to get its proper percentage of the total color
(for example, 32/64=.5 or 50% of total color).
Diagram 2: Sample Color Lookup Table (LUT) for VGA
Color R G B
1 0 23 56
2 34 24 45
3 23 12 43
....
254 13 32 43
255 12 63 12
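The conversion from a VGA DAC level (0 to 63) to the 0..1 scale described
above can be sketched as follows. The divisor 64 follows the article's
own convention (32/64 = .5); dividing by 63 would instead map the maximum
level exactly to 1.0.

```python
def vga_to_fraction(level, divisor=64):
    """Convert a VGA DAC level (0-63) to a 0..1 color fraction,
    using the divide-by-64 convention from the text."""
    if not 0 <= level <= 63:
        raise ValueError("VGA DAC levels run from 0 to 63")
    return level / divisor

print(vga_to_fraction(32))  # 0.5, half intensity, matching the example
print(vga_to_fraction(63))  # just under 1.0 under the 64 convention
```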
Another important aspect of the VGA is that it only allows up to 256
colors to be displayed at one time. There are now Super VGA cards that
allow you to work in display resolutions up to 1024x768 with 256 colors.
These Super VGA cards have more memory on board so they can handle the
higher resolutions. It is important to understand that the amount of
memory on the video board will determine the highest resolution it can
handle.
Video Memory is bitmapped. In a 256 color mode, one pixel requires one
byte of memory, as a byte can hold a value from 0 to 255. VGA video
memory begins at the hexadecimal address A0000000. The VGA video memory
area is 64K in length. Because of the length of the VGA memory area,
the maximum resolution a standard VGA card can achieve is 320x200x256.
This is because 320 pixels times 200 pixels, at one byte per pixel,
equals 64,000 bytes (62.5K) and nearly fills the video memory area.
Diagram 3: Memory Address Allocation of Pixels
pixel 0,0───┐ 0,1 0,2
│ │ │
Memory Address A0000000 ┘ │ │
A0000001 ───┘ │
A0000002 ────────┘
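The mapping in Diagram 3 is a small calculation: rows are stored one
after another, one byte per pixel, so a pixel's offset is y * 320 + x.
This sketch uses the base address exactly as the text writes it.

```python
VGA_BASE = 0xA0000000  # base address as written in the text above

def pixel_address(x, y, width=320):
    """Address of pixel (x, y) in 320x200x256 mode: one byte per
    pixel, row after row, so offset = y * width + x."""
    return VGA_BASE + y * width + x

print(hex(pixel_address(0, 0)))  # 0xa0000000, pixel 0,0
print(hex(pixel_address(2, 0)))  # 0xa0000002, matching Diagram 3
print(pixel_address(319, 199) - VGA_BASE + 1)  # 64000 bytes in all
```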
But wait a second, how can we get video modes up to 1024x768? If you
want a resolution of 1024x768x1byte (256 colors), you require 1024x768
bytes of video memory, 786,432 bytes, or 768K. Because the architecture
of the PC only allows for a maximum of 64K of VGA memory, a Super VGA
card maintains its own pool of memory that is displayed to the screen
and swaps memory in and out of the 64K "proper" VGA video memory address
space so that programs can write to it.
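The bank arithmetic implied above can be sketched like this. It shows
only the bank and offset calculation for a 1024x768, one-byte-per-pixel
mode; the actual bank-switching registers vary from card to card.

```python
BANK_SIZE = 64 * 1024  # the 64K VGA window described in the text

def bank_and_offset(x, y, width=1024):
    """Which 64K bank, and which offset within that bank, holds
    pixel (x, y) in a 1024x768 256-color mode."""
    linear = y * width + x
    return linear // BANK_SIZE, linear % BANK_SIZE

print(bank_and_offset(0, 0))       # (0, 0), the first pixel
print(bank_and_offset(1023, 767))  # last pixel: 786,432 bytes = 12 banks
```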
( 4) SOME INFORMATION ABOUT IMPROCES:
IMPROCES image processing functions all work in a defined area called
the WORK AREA. The default work area starts at the top-left corner of
the screen and ends at pixel 196 in the X (horizontal) direction and at
pixel 165 in the Y (vertical) direction. These aren't magical numbers.
A friend of mine lent me a CCD device to capture grayscale images and
process them with IMPROCES. The size of the image of the CCD device
output was, you guessed it, 196x165. You can change the WORK AREA
dimensions by selecting WORK AREA from the ENHANCE pull-down menu and
then defining a new rectangular area for IMPROCES to use.
( 5) THE HISTOGRAM:
With all of that out of the way, let's examine a few things we can learn
from an image without actually modifying it. A tool that is commonly
used to determine the overall contrast of an image is the Histogram. A
histogram is a measure of the distribution of values in a set. As you
probably know, histograms are not unique to image
processing.
The histogram takes a count of all the values on the image and displays
them graphically. By a count, I mean how many pixels contain each color,
for LUT (LookUp Table) entries 0 through 255. The count for a
color is called the BIN of that color.
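The counting described above can be sketched in a few lines. The image
here is a made-up run of pixel values, each one a LUT entry from 0 to
255.

```python
def histogram(pixels):
    """Count how many pixels hold each LUT value 0-255 (the BINs)."""
    bins = [0] * 256
    for p in pixels:
        bins[p] += 1
    return bins

image = [0, 10, 10, 255, 10]  # a made-up five-pixel image
bins = histogram(image)
print(bins[10], bins[255])    # 3 1: three pixels of color 10, one of 255
```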
To display a histogram with IMPROCES, select AREA HISTO from the ENHANCE
pull-down menu. The histogram is displayed from the left to the right
starting at color 0 and working over a column at a time to color 255.
The BINs are displayed as lines going upward. You can move the mouse
to a desired BIN and click on that column to get the exact count for
that column, which is shown in the lower right corner. You can also
press 'S' to save the histogram to an ASCII file which you can examine
later.
Assuming that the image is a grayscale image, the histogram shows us
the overall contrast of the image by how much of the grayscale is
covered by the image. A high contrast image will cover most of the
grayscale while a low contrast image will only cover a small portion of
the grayscale.
Diagram 4: Examples of Histograms:
High Contrast Image:
-100 x6
│││ -
│ ││││ -
││││││││ -
│││││││││ -
│││││││││││ -
└┴┴┴┴┴┴┴┴┴┘ -0
0 255
Low Contrast Image:
-100 x8
│││ -
││││ -
││││ -
││││ -
││││ -
────┴┴┴┴─── -0
0 255
( 6) CONTRAST ENHANCEMENT:
Contrast enhancement is one of the easiest to understand of the image
processing functions. In IMPROCES, contrast stretching will only work
properly on images with a grayscale palette. Future versions of
IMPROCES will probably allow for contrast stretching of color images.
Contrast Stretching will take a portion of the grayscale and stretch it
so that it covers a wider portion of the grayscale. To do this, you
must first define an area of the grayscale that you would like to
stretch. IMPROCES provides three ways to do this. All of the methods
use two variables, one called Low_CLIP (L_CLIP) and one called the
High_CLIP (H_CLIP). Depending on which method you use, the variables
will be used in different ways.
When using the CNTR STRTCH method, the first BIN working up from 0 that
contains more pixels than the value of L_CLIP will become the color 0
(black), and any BINs below that value are set to 0 as well. The first
BIN working down from 255 that contains more pixels than the value of
H_CLIP will become the value 255 (white) and any BINs above that will
become 255 as well. All of the BINs in between will be remapped between
0 and 255 by a ratio of where they were with respect to the original LOW
and HIGH CLIP values.
Take the original low contrast image:
Low Contrast Image:
-100 x8
│││ -
││││ -
││││ -
││││ -
││││ -
────┴┴┴┴─── -0
0 255
L_CLIP and H_CLIP are both set to 30 so the L_CLIP and H_CLIP will
hit at these points:
-100 x8
│││ -
││││ -
││││ -
││││ -
││││ -
────┴┴┴┴─── -0
0 | | 255
L_CLIP H_CLIP
These BINs will now be reset to 0 and 255 and the BINs in
between are set in respect to their original locations to the
L_CLIP and H_CLIP BINs:
Contrast Stretched Image:
-100 x8
│ │ │ -
│ │ │ │ -
│ │ │ │ -
│ │ │ │ -
│ │ │ │ -
└──┴───┴──┘ -0
0 255
L_CLIP H_CLIP
The resulting image will have its contrast stretched across the
entire grayscale, resulting in a higher contrast image.
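The CNTR STRTCH method just described can be sketched as below. This is
a sketch under the stated assumptions, not the program's actual code:
the first BIN up from 0 with more pixels than L_CLIP becomes 0, the
first BIN down from 255 with more pixels than H_CLIP becomes 255, and
everything in between is remapped linearly.

```python
def contrast_stretch(pixels, l_clip, h_clip):
    """Sketch of CNTR STRTCH: find the clip BINs from the histogram,
    then remap every pixel linearly between them."""
    bins = [0] * 256
    for p in pixels:
        bins[p] += 1
    low = next(v for v in range(256) if bins[v] > l_clip)
    high = next(v for v in range(255, -1, -1) if bins[v] > h_clip)

    def remap(p):
        if p <= low:
            return 0
        if p >= high:
            return 255
        return round((p - low) * 255 / (high - low))

    return [remap(p) for p in pixels]

# A narrow band of grays (low contrast) spreads across the full range.
out = contrast_stretch([100] * 40 + [110] * 40 + [120] * 40, 30, 30)
print(sorted(set(out)))  # [0, 128, 255]
```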
The standard CNTR STRTCH works well for a lot of images. The problem is,
rarely will an image be spread so evenly across the grayscale to begin
with. A lot of times there will be spikes in the histogram at either end
that you might want to remove.
Histogram with spikes:
-100 x6
│ │ │ -
││ │││ │ -
││ ││││ │ -
│││││││││ -
│││││││││ -
─┴┴┴┴┴┴┴┴┴─ -0
0 255
Using the standard contrast stretch, you would not be able to get over
these spikes if you wanted to stretch the middle of the histogram.
IMPROCES provides the CNTR VSTCH for this purpose. This method uses the
variables L_CLIP and H_CLIP to pick which BIN you want to be the L_CLIP
and H_CLIP values, without regard to their counts. The only thing to
remember is not to set the L_CLIP higher than the H_CLIP, otherwise the
program will give you an error message. Besides the difference in the
way the program uses the variables, VSTCH works identically to CNTR STRTCH.
CNTR LSTRCH works the same as VSTCH in the respect that the variables
are used to pick the L_CLIP and H_CLIP values. The difference is that
the BINs below the L_CLIP are not set to 0, and the BINs above the
H_CLIP value are not set to 255; both are left alone. Only the BINs
between L_CLIP and H_CLIP are stretched between 0 and 255.
( 7) CONVOLUTION:
Convolution is another common method of processing an image. It
determines a new value for each pixel by evaluating the values of the
neighboring pixels. The main points of convolution can be explained
rather easily:
A matrix, called a kernel, is defined with certain values. The
kernel is then passed over the image from left to right, top to
bottom, with the value of the center pixel being replaced with
the sum of the products of the kernel values and the pixels
under them.
The dimensions of the kernel must be odd numbers so that there will be a
center position to represent each target pixel. IMPROCES uses a 3x3
kernel:
┌───┬───┬───┐
├───┼───┼───┤
├───┼───┼───┤
└───┴───┴───┘
Values are assigned to the kernel depending on the required convolution:
Sharpening kernel:
-1 -1 -1
-1 9 -1
-1 -1 -1
The kernel is passed over the image from left to right, top to bottom.
The kernel values are multiplied, point by point with the pixels in the
3x3 section of the image under it. The products are then summed and the
middle pixel in the image under the kernel is then replaced with the new
value.
Sharpening kernel:
-1 -1 -1
-1 9 -1
-1 -1 -1
Example of area of image being processed:
23 34 25
23 43 21
23 43 43
Product of values:
-23 -34 -25
-23 387 -21
-23 -43 -43
Sum of products:
152
Changed area of image:
23 34 25
23 152 21
23 43 43
The new value of the center pixel is then written to the screen. You
should note that the output from the previous operation is not used for
input in the next operation. The input and output image must be treated
separately. IMPROCES does this operation in place by using two rotating
three-line buffers. The resulting image is said to have convolved from
the original.
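The pass described above can be sketched as follows. It reproduces the
worked example: reads come from the original image, writes go to a copy,
and the edge pixels are left alone. A real implementation would also
clamp results to the 0-255 range.

```python
def convolve3x3(image, kernel):
    """Pass a 3x3 kernel over an image (list of rows). The center
    pixel is replaced with the sum of products of kernel values and
    the pixels under them; edge pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]   # output kept separate from input
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = 0
            for ky in range(3):
                for kx in range(3):
                    total += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = total
    return out

sharpen = [[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]
area = [[23, 34, 25], [23, 43, 21], [23, 43, 43]]
print(convolve3x3(area, sharpen)[1][1])  # 152, as in the worked example
```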
( 7)(a) LAPLACIAN KERNEL:
The example shows a sharpening kernel. A sharpening kernel is actually
a Laplacian kernel with the original image added back in. The Laplacian
is an edge detecting kernel, so when you detect the edges of the image
and then add the original image back in, you sharpen the edges, thereby
improving the sharpness of the image. A Laplacian kernel looks like so:
Laplacian kernel:
-1 -1 -1
-1 8 -1
-1 -1 -1
You will notice that the center value for the Laplacian is an 8 and the
sum of the kernel is 0. What this does to an image is to make areas
that have no real features and are close to being a continuous tone
disappear and leave features that have a lot of contrast with their
neighbors. This is the function of an edge detector. If the kernel has
a sum of 0, it will enhance the edges of an image in a certain
direction. In the case of the Laplacian, the enhancement will take
place in all directions. Here is why this will happen:
If you have a 3x3 area of the image in which pixels are equal to
the same value, for example 200, it will be uniform in color and
contain no edges. When the Laplacian is passed over this
section, the result of the convolution will be 0, or the absence
of an edge. If you have an area that looks like:
100 100 100
200 200 200
250 250 250
The result of the convolution with the Laplacian would be 150,
and there would be an edge present.
Take for example a run of pixels that looks like this: 0 150
200, that area obviously contains an edge. On a graph it would
look like this:
/|255
/ |200
/ |150
0-----|
If you used a 1x3 kernel with the values -1 0 1, the middle
value would become 200 ( (0 x -1 = 0) + (150 x 0 = 0) +
(200 x 1 = 200) ) and show the presence of the edge. If the
values of the run were 100 100 100, the area would be uniform
and the value of the middle pixel would become 0 and accurately
depict the uniform area.
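The 1x3 arithmetic above can be checked directly:

```python
def edge_1x3(run):
    """Apply the 1x3 kernel -1 0 1 to a three-pixel run."""
    return -1 * run[0] + 0 * run[1] + 1 * run[2]

print(edge_1x3([0, 150, 200]))    # 200: an edge is present
print(edge_1x3([100, 100, 100]))  # 0: uniform area, no edge
```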
( 7)(b) HORIZONTAL KERNEL:
A horizontal kernel looks like this:
Horizontal kernel:
-1 -1 -1
0 0 0
1 1 1
The horizontal kernel will only detect edges in the horizontal direction
due to the direction of the kernel.
( 7)(c) VERTICAL KERNEL:
A vertical kernel looks like this:
Vertical kernel:
-1 0 1
-1 0 1
-1 0 1
The vertical kernel will only detect edges in the vertical direction due
to the direction of the kernel.
There are other methods of edge detection available that IMPROCES does
not implement at the present time. In fact, there are new methods and
filters being invented every day. Future versions of IMPROCES will
support convolving an area with a separate filter for each direction and
other forms of edge detection.
( 7)(d) THE BOOST FUNCTION:
There is also a variable called BOOST that is used when using the
convolution filters. BOOST is used to increase or decrease the amount
of the filter that is applied to the original image. What happens is
the pixels under the kernel are first multiplied by the corresponding
kernel value and then are multiplied by the BOOST value. A BOOST value
of less than 1.0 will lessen the effect of the filter, while a value
greater than 1.0 will increase the effect.
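A sketch of BOOST as the paragraph describes it, assuming BOOST simply
scales every kernel product (which is the same as scaling the summed
result):

```python
def boosted_sum(area, kernel, boost=1.0):
    """Multiply each pixel by its kernel value and by BOOST, then
    sum, as the text describes."""
    total = 0
    for ky in range(3):
        for kx in range(3):
            total += boost * kernel[ky][kx] * area[ky][kx]
    return total

sharpen = [[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]
area = [[23, 34, 25], [23, 43, 21], [23, 43, 43]]
print(boosted_sum(area, sharpen, 1.0))  # 152.0, the unboosted result
print(boosted_sum(area, sharpen, 0.5))  # 76.0, half the effect
```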
IMPROCES includes a Custom Filter function that lets you define the
kernel to use. You will also note that as of version 3.0 of IMPROCES,
there are separate functions for grayscale images and for color images.
The grayscale functions work a lot faster. The color functions must
convolve the RGB attributes of each pixel and then search the palette for
the proper color to replace the pixel with. The color process will rarely
find an exact match for the color that gets produced from the
convolution, but it will find the closest possible match and use it. The
gray functions need only convolve the LUT value of the pixels and use the
result to get an exact match.
( 7)(e) AVERAGE AND MEDIAN:
Two other filters that are included are the Average and Median filters.
Both of these use a 3x3 matrix as before, but only the values in the
image are used.
Average will find the average (mean) value of the pixels under the
matrix and replace the center pixel with that value. Median will find
the middle value used and use that. Both functions work the same for
grayscale and color images.
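Both replacement values can be sketched from a flattened 3x3
neighborhood; this reuses the sample area from the convolution example
above.

```python
import statistics

def mean_and_median_3x3(area):
    """Flatten a 3x3 neighborhood and return the two replacement
    values for the center pixel: the mean (Average filter) and the
    middle value (Median filter)."""
    values = [v for row in area for v in row]
    return sum(values) // len(values), statistics.median(values)

area = [[23, 34, 25], [23, 43, 21], [23, 43, 43]]
print(mean_and_median_3x3(area))  # (30, 25)
```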
An important note for all of the functions that use a matrix or a kernel
is that the pixels on the edges (top, right, left, bottom) of the work
area will not be changed by the functions. This is because these pixels
do not have enough neighbors to be used in processing. In IMPROCES
these pixels are simply left alone.
( 8) CONCLUSION:
Image Processing is used in many fields. With the price of personal
computers equipped with VGA and SVGA hardware dropping like a rock, and
with software like IMPROCES available for only $30, Image Processing is
now available to the masses. It is exciting, useful, and most of all
fun.
I hope this little primer has aroused your interest in Image Processing
and that it helps make some things that IMPROCES can do a little
clearer. If for some reason you obtained this article without a copy of
IMPROCES, the latest version of the program can be downloaded from the
Dust Devil BBS in Las Vegas, Nevada, (702)796-7134. I can also be
reached at the Dust Devil BBS if you have any questions about IMPROCES.
John Wagner
[PRIMER.DOC revised November 1992]