Since the basic operation for smoothing an image is addition, the opposite operation will result in sharpening the image. The sharpening effect is more subtle than smoothing, but more common and more useful. Nearly every image published, especially in monochrome publications, must be sharpened to some extent. For an example of a sharpened image, see Figure 7.9. Sharpening an image consists of highlighting the edges of the objects in it, which are the very same pixels blurred by the previous algorithm. Edges are areas of an image with sharp changes in intensity between adjacent pixels. The smoothing algorithm smoothed out these areas; now we want to pronounce them.

In a smooth area of an image, the difference between two adjacent pixels will be zero or a very small number. If the pixels are on an edge, the difference between two adjacent pixels will be a large value (perhaps negative). This is an area of the image with some degree of detail that can be sharpened. If the difference is zero, the two pixels are nearly identical, which means that there’s nothing to sharpen there. This is called a “flat” area of the image. (Think of an image with a constant background. There’s no detail to bring out on the background.)

The difference between adjacent pixels isolates the areas with detail and completely flattens out the smooth areas. The question now is how to bring out the detail without leveling the rest of the image. How about adding the difference to the original pixel? Where the image is flat, the difference is negligible, and the processed pixel will be practically the same as the original one. If the difference is significant, the processed pixel will be the original plus a value that’s proportional to the magnitude of the detail. The sharpening algorithm can be expressed as follows:

If you simply add the difference to the original pixel, the algorithm brings out too much detail. You usually add a fraction of the difference; a 50% factor is common.

The variables Dx and Dy express the distances between the two pixels being subtracted. You can subtract adjacent pixels on the same row, adjacent pixels in the same column, or diagonally adjacent pixels, which is what I did in this subroutine. Besides adding the difference to the original pixel value, this subroutine must check the result for validity. The result of the calculations may exceed the valid value range for a Color value, which is 0 to 255. That’s why you must clip the value if it falls outside the valid range.
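The book’s subroutine is written in Visual Basic; the following is a minimal Python sketch of the same idea, assuming a grayscale image stored as a list of rows (the names `sharpen`, `factor`, `dx`, and `dy` are illustrative, not taken from the book’s listing):

```python
def sharpen(pixels, factor=0.5, dx=1, dy=1):
    """Add a fraction of the difference from the diagonally adjacent pixel,
    clipping the result to the valid Color range of 0..255."""
    height, width = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]          # leave border pixels unchanged
    for y in range(dy, height):
        for x in range(dx, width):
            diff = pixels[y][x] - pixels[y - dy][x - dx]
            value = pixels[y][x] + int(factor * diff)
            out[y][x] = max(0, min(255, value))   # clip out-of-range results
    return out
```

In a flat region the difference is zero and the pixel passes through unchanged; on an edge, half the difference is added, exaggerating the intensity step.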

**Embossing Images**

To sharpen an image, we add the difference between adjacent pixels to the pixel value. What do you think would happen to a processed image if you took the difference between adjacent pixels only? The flat areas of the image would be totally leveled, and only the edges would remain visible. The result would be an image like the image on the right in Figure 7.10. This effect clearly sharpens the edges and flattens the smooth areas of the image. By doing so, it gives the image depth. The processed image looks as if it’s raised and illuminated from the right side. This effect is known as emboss or bas relief.

The actual algorithm is based on the difference between adjacent pixels. For most of the image, however, the difference between adjacent pixels is a small number, and the image will turn black. The Emboss algorithm adds a constant to the difference to bring some brightness to areas of the image that would otherwise be dark. The algorithm can be expressed as follows:

As usual, you can take the difference between adjacent pixels in the same row, adjacent pixels in the same column, or diagonally adjacent pixels. The code that implements the Emboss filter in the Image application uses differences in the X and Y directions (set the values of the variables Dx or Dy to 0 to take the difference in one direction only). The Emboss filter’s code is shown next.

The variables Dx and Dy determine the location of the pixel being subtracted from the one being processed. Notice that the pixel being subtracted is behind and above the current pixel. If you set the Dx and Dy variables to -1, the result is similar, but the processed image looks engraved rather than embossed.
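Again, the book’s listing is in Visual Basic; this is a hedged Python sketch of the Emboss step as described, with illustrative names and the constant 128 added to the difference:

```python
def emboss(pixels, dx=1, dy=1, bias=128):
    """Replace each pixel with its difference from the pixel dy rows above
    and dx columns behind, plus a brightness constant, clipped to 0..255."""
    height, width = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]          # border pixels left as-is
    for y in range(dy, height):
        for x in range(dx, width):
            diff = pixels[y][x] - pixels[y - dy][x - dx]
            out[y][x] = max(0, min(255, diff + bias))
    return out
```

Flat regions collapse to a uniform gray of 128, while edges stand out brighter or darker than the background, producing the raised, side-lit look.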

**Diffusing Images**

The Diffuse special effect is different from the previous ones, in the sense that it’s not based on the sums or the differences of pixel values. The Diffuse special effect uses the Rnd() function to introduce some randomness to the image and give it a painterly look, as demonstrated in Figure 7.11. This time we won’t manipulate the values of the pixels. Instead, the current pixel will assume the value of another one, selected randomly in its 5 x 5 neighborhood with the help of the Rnd() function.
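In Python the same idea can be sketched with the standard `random` module standing in for VB’s Rnd() function (the function name and the clamping at the image borders are assumptions, not the book’s code):

```python
import random

def diffuse(pixels, seed=None):
    """Replace each pixel with one chosen at random from its 5x5 neighborhood,
    clamping neighbor coordinates at the image borders."""
    rng = random.Random(seed)
    height, width = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(height):
        for x in range(width):
            ny = min(max(y + rng.randint(-2, 2), 0), height - 1)
            nx = min(max(x + rng.randint(-2, 2), 0), width - 1)
            out[y][x] = pixels[ny][nx]
    return out
```

Because pixel values are moved rather than computed, no clipping is needed; every output value already existed somewhere in the source image.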

**Solarizing Images**

The last special effect in the Image application is based on a photographic technique, and it’s called solarization. Figure 7.12 shows an example of a solarized image. Part of the image is unprocessed and part of it is inverted. You can use many rules to decide which pixels to invert, and you should experiment with this algorithm to get the best possible results for a given image.

The algorithm as implemented in the Image application inverts the basic color components whose values are less than 128. If a pixel value is (58, 199, 130), then only its Red component will be inverted, while a pixel with a value (32, 99, 110) will be inverted completely. The code behind the Solarize menu command follows.

Notice the If statements that invert the Color values. The second comparison operator is not really needed because a Color value can’t exceed 255. However, you can set it to another smaller value to experiment with the algorithm (in other words, invert a Color value in a specific range).
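The per-pixel rule can be sketched in Python as follows (a sketch, not the book’s VB listing; the `threshold` parameter stands in for the comparison limits you might experiment with):

```python
def solarize(pixel, threshold=128):
    """Invert each color component whose value is below the threshold.
    The pixel is an (r, g, b) tuple with components in 0..255."""
    return tuple(255 - c if c < threshold else c for c in pixel)
```

With the default threshold of 128, the pixel (58, 199, 130) from the text becomes (197, 199, 130): only the Red component is inverted.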

**Implementing Custom Filters**

The last operation on the Process menu is a versatile technique for implementing many filters. The Custom Filter command leads you to another Form, shown in Figure 7.13. You can use this Form to specify a 3 x 3 or 5 x 5 block over which the calculations will be performed. Imagine that this block is centered over the current pixel. The coefficients in each cell of this block are multiplied by the underlying pixel values, and all the products are added together. Let’s call this sum SP (sum of products). The sum of the products is then divided by the Divide factor, and finally the Bias is added to the result. The code that processes an image with a custom filter, as specified in the Custom Filter window, is shown next.

The subroutine reads the values of the various controls on Form2 (the filter’s Form) and uses them to process the image as described. The custom filter is the slowest one, but it’s quite flexible.
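The sum-of-products calculation can be sketched in Python like this (an illustrative sketch of the technique, not the Form2 subroutine; names such as `apply_custom_filter` and `coeffs` are assumptions):

```python
def apply_custom_filter(pixels, coeffs, divide=1, bias=0):
    """Center a square coefficient block (3x3 or 5x5) over each pixel of a
    grayscale image, compute the sum of products SP, divide, add the bias,
    and clip to 0..255. Border pixels are left unchanged."""
    n = len(coeffs)                       # block size: 3 or 5
    half = n // 2
    height, width = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(half, height - half):
        for x in range(half, width - half):
            sp = 0                        # SP: sum of products
            for i in range(n):
                for j in range(n):
                    sp += coeffs[i][j] * pixels[y + i - half][x + j - half]
            out[y][x] = max(0, min(255, sp // divide + bias))
    return out
```

Setting all nine coefficients of a 3 x 3 block to 1 with Divide = 9 and Bias = 0 reproduces the smoothing average discussed earlier.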

To understand how this filter works, let’s implement the smoothing algorithm as a custom filter. The smoothing algorithm adds the values of the current pixel and its eight neighbors and divides the result by 9. If you set all the coefficients in the filter to 1, the sum of the products will be the sum of all pixel values under the filter’s block. Multiplying each pixel by 1 won’t change their values, so the sum of the products will be the same as the sum of the pixel values. To calculate the average, you must divide by 9, so set the Divide field on the Custom Filter Form to 9. The Bias field should be 0. If you apply this custom filter to the image, it will have the same effect on the image as the smoothing algorithm. The values of all nine pixels under the block are added, their sum is divided by 9, and the result, which is the average of the pixels under consideration, is assigned to the center pixel of the block. The same process is repeated for the next pixel on the same row, and so on, until the filter is applied to every pixel of the image.

Let’s look at one more example of the Custom Filter command, this time one that uses the Bias field. The Emboss algorithm replaces each pixel with its difference from the one on the previous row and column and then adds the bias 128 so that the embossed image won’t be too dark. To implement the Emboss algorithm as a Custom Filter, set the coefficients

The pixel to the right of the current pixel is subtracted from the current pixel, and the bias 128 is added to the result, which is exactly what the actual algorithm does.
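Based on that description, the Emboss settings map onto the Custom Filter Form roughly as follows. The exact coefficient entries aren’t reproduced in this excerpt, so this block is an assumption inferred from the text, sketched in Python over a single 3 x 3 neighborhood:

```python
# Hypothetical Custom Filter entries for Emboss: the current pixel times 1,
# the pixel to its right times -1, all other cells 0, Divide 1, Bias 128.
EMBOSS_COEFFS = [[0, 0,  0],
                 [0, 1, -1],
                 [0, 0,  0]]

def apply_block(window, coeffs=EMBOSS_COEFFS, divide=1, bias=128):
    """Compute the sum of products over one 3x3 neighborhood, then apply
    the Divide factor and Bias, clipping to 0..255."""
    sp = sum(coeffs[i][j] * window[i][j] for i in range(3) for j in range(3))
    return max(0, min(255, sp // divide + bias))
```

A flat neighborhood yields SP = 0 and an output of 128 (medium gray), matching the embossed look of the smooth areas.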
