Properties and Applications of the 2D Fourier Transform

The 2D Fourier transform is a powerful tool because it reveals the frequencies of periodic patterns in an image. Its applications range from ridge enhancement to pattern removal, some of which will be discussed later. First, I will discuss the basic properties of the Fourier transform.

As discussed in the previous blog, a feature that is large in image space becomes small when transformed to Fourier space, and a small feature becomes large. This property is called anamorphism. For the 2D Fourier transform, anamorphism works independently along the x-axis and y-axis. To demonstrate it, I generated four images: a tall rectangular aperture, a wide rectangular aperture, two symmetric dots, and two symmetric dots with a wider separation.
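The activity itself was done in Scilab, but the inverse scaling can be sketched in Python with NumPy. The image size and aperture dimensions below are arbitrary choices of mine; the width of the transform's central lobe along each axis shows the effect:

```python
import numpy as np

N = 128
tall = np.zeros((N, N))
# tall rectangular aperture: 80 px along y (rows), 10 px along x (cols)
tall[N//2 - 40:N//2 + 40, N//2 - 5:N//2 + 5] = 1.0

F = np.abs(np.fft.fftshift(np.fft.fft2(tall)))
row = F[N//2, :]  # central profile along the kx axis
col = F[:, N//2]  # central profile along the ky axis

# width of the central lobe: pixels above half of the peak value
lobe_x = int((row > 0.5 * row.max()).sum())
lobe_y = int((col > 0.5 * col.max()).sum())

# anamorphism: the aperture is narrow in x, so its transform is wide
# along kx; it is long in y, so the transform is narrow along ky
assert lobe_x > lobe_y
```

Transposing the aperture swaps the two lobe widths, which is exactly the independent x/y behavior described above.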

Figure 1. Images generated to demonstrate anamorphism

For the tall rectangle (Figure 1 top left), we expect a wide rectangle with a repeating slit pattern on each axis. For the wide rectangle (Figure 1 top right), we expect a tall rectangle, also with a repeating slit pattern on each axis. For the symmetric dots (Figure 1 bottom left), we expect a sinusoidal pattern similar to that of a corrugated roof. For the symmetric dots with wider separation (Figure 1 bottom right), we also expect a sinusoidal pattern, but with a shorter wavelength. To check these predictions, we take the Fourier transform of each image:

Figure 2. Fourier transforms of the images in Figure 1

As expected, the results match the predictions. Anamorphism is demonstrated by the two rectangles (Figure 1 top): a dimension that is long in image space appears short in the Fourier plane and vice versa. As for the symmetric dots, we saw in the previous blog that two symmetric peaks are the Fourier transform of a sinusoid. Since taking the Fourier transform of something in the Fourier plane just returns it to the image plane, two symmetric dots transform into a sinusoidal pattern. The same applies to the second pair of dots (Figure 1 bottom right); their pattern appears thinner because the more widely separated dots represent a higher frequency and thus a shorter wavelength. Anamorphism is also observed for the dots: when the dots are closer to the center (a smaller gap between them), the resulting sinusoid has a lower frequency and thus a longer wavelength, and vice versa.

Now we discuss the rotation property of the Fourier transform. To demonstrate it, I generated four sinusoids at different rotation angles (0°, 30°, 45°, 90°), as seen in Figure 3.

Figure 3. Images used to demonstrate rotation

For the rotation property, I predict that the transform will be rotated in the same direction and by the same angle as the rotation in image space. To check this, we again take the Fourier transforms of the images:

Figure 4. Fourier transforms of the images in Figure 3

As expected, the Fourier transforms are rotated by the same angle and in the same direction, as can be seen for all images in Figure 4. This happens because in Fourier space, the peaks appear along the axis on which the periodic pattern (the sinusoid) runs. If the image is periodic along the x-axis, the peaks appear on the x-axis. When the image is rotated, the axis along which the sinusoid runs rotates with it, so the peaks likewise appear on a rotated axis.
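The rotation property can also be verified numerically. Here is a Python/NumPy sketch (the frequency of 16 cycles per image width is my arbitrary stand-in, not the value used in the activity): the brightest spectral peak should sit on an axis rotated by the same angle as the sinusoid.

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N] - N // 2
f = 16  # cycles across the image; an arbitrary choice

peak_angles = []
for theta_deg in (0.0, 30.0, 45.0, 90.0):
    t = np.radians(theta_deg)
    # sinusoid running along an axis rotated by theta
    img = np.sin(2 * np.pi * f * (x * np.cos(t) + y * np.sin(t)) / N)
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    r, c = np.unravel_index(np.argmax(F), F.shape)
    kx, ky = c - N // 2, r - N // 2
    peak_angles.append(np.degrees(np.arctan2(abs(ky), abs(kx))))

# the spectral peak rotates with the sinusoid (within a small
# tolerance, since off-grid frequencies land on the nearest bin)
assert all(abs(a - b) < 2.0 for a, b in zip(peak_angles, (0, 30, 45, 90)))
```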

Next, I will discuss another property of the Fourier transform: combination. I created a pattern containing two sinusoids, one running in the x-direction and the other in the y-direction, as seen in Figure 5. I created four such patterns, each with a different frequency combination.

Figure 5. Images with two types of sinusoid and with different frequency combinations

The images above were generated by multiplying two sinusoids (one running in the x-direction and the other in the y-direction), each image with a different frequency combination. The Fourier transform of a product of two functions is the convolution of their individual Fourier transforms. Since the Fourier transform of a sinusoid is a pair of Dirac delta peaks, convolving one pair of peaks with another yields four peaks, located at the points with the corresponding x and y frequencies. The frequencies in Figure 5 are: 1 Hz in x, 1 Hz in y (top left); 0.5 Hz in x, 0.5 Hz in y (top right); 1 Hz in x, 0.5 Hz in y (bottom left); and 0.5 Hz in x, 1 Hz in y (bottom right).
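The four-peak prediction is easy to check numerically. In this Python/NumPy sketch, the frequencies (8 and 4 cycles per image) are arbitrary stand-ins for the ones used in the activity:

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N]
fx, fy = 8, 4  # cycles per image width/height; arbitrary stand-ins
img = np.sin(2 * np.pi * fx * x / N) * np.sin(2 * np.pi * fy * y / N)

F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
peaks = F > 0.5 * F.max()

# the product of two sinusoids transforms into exactly four peaks,
# one per quadrant, at (+-fx, +-fy)
assert peaks.sum() == 4
rows, cols = np.nonzero(peaks)
assert set(zip(abs(cols - N // 2), abs(rows - N // 2))) == {(fx, fy)}
```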

Figure 6. Fourier transforms of the images in Figure 5

The results match the predictions, and the peaks are located at the correct points in the Fourier plane. Note that the results in Figure 6 are indeed the convolutions of the individual Fourier transforms of the sinusoids at their different frequencies. To push the rotation discussion further, I created a new image with three different sinusoids: one along the x-axis, another along the y-axis, and the last along an axis rotated 45°. As discussed above, since the Fourier transform of a product is the convolution of the individual transforms, convolving three pairs of Dirac delta peaks (2 × 2 × 2) should result in eight peaks. The prediction agrees with the result shown below in Figure 7.

Figure 7. The image generated to demonstrate combinations and rotations (left) and its Fourier transform (right)

For the next part, I will show some common patterns observed in many images and discuss how the Fourier transform can deal with them. I already discussed symmetric dots above and how their Fourier transform is a sinusoid. This time, I will discuss symmetric circles, symmetric squares, and symmetric Gaussian curves.

Figure 8. Symmetric circles with different distance between them (88, 68, 48, 28 units)

Since we have learned about convolution above, we know that these images are just a convolution of a circle with a pair of Dirac delta peaks. The transform of a convolution is the product of the individual transforms of each function. We also know that the transform of a circle is an Airy disk pattern and the transform of a pair of Dirac delta peaks is a sinusoid. So for the transforms of these images, we should expect an Airy disk pattern with a sinusoidally varying intensity. The four images differ only in separation. For a pair of Dirac delta peaks, when the peaks are closer to the center, the wavelength of the resulting sinusoid is longer, and vice versa; we expect the same for the circles.
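This factorization can be verified directly: the spectrum of two shifted circles equals the spectrum of one circle multiplied by a cosine along the separation axis. A Python/NumPy sketch (radius and separation are my arbitrary choices, not the activity's values):

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N] - N // 2
circle = ((x**2 + y**2) <= 10**2).astype(float)  # radius 10, arbitrary
d = 40  # center-to-center separation of the two circles

pair = np.roll(circle, d // 2, axis=1) + np.roll(circle, -d // 2, axis=1)
F_pair = np.fft.fftshift(np.fft.fft2(pair))
F_one = np.fft.fftshift(np.fft.fft2(circle))

# two delta peaks separated by d transform into 2*cos(2*pi*kx*(d/2)/N),
# so the pair's spectrum is the single circle's (Airy-like) spectrum
# modulated by that sinusoid along kx
modulation = 2.0 * np.cos(2.0 * np.pi * x * (d / 2) / N)
assert np.allclose(np.abs(F_pair), np.abs(F_one * modulation))
```

A larger `d` makes the cosine oscillate faster, which is the shorter-wavelength fringe pattern predicted above.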

Figure 9. Fourier transform of the images in Figure 8

It may not be obvious, but the sinusoids in each image have different frequencies, which agrees with the predicted results. The same pattern is observed for the symmetric squares and the symmetric Gaussian curves.

Figure 10. Symmetric squares and their Fourier transform (top) and symmetric Gaussian curves and their Fourier transform (bottom)

Another common pattern observed in many images is the dot pattern, either random dots or an array of dots. I generated four images: one with a random dot pattern and three with evenly spaced dots of different spacings.

Figure 11. The dot patterns generated

The Fourier transforms of the images in Figure 11 are shown in Figure 12.

Figure 12. Fourier transform of the images in Figure 11

In the first image (top left), we notice that the Fourier transform of randomly placed dots looks like a pattern of sinusoidal waves joined together. Since there are 10 peaks, each with a counterpart peak on the opposite side (with respect to the center), there should be at least 10 sinusoids joined together in Figure 12 (top left). For the other three images, peaks are also observed in the Fourier transform. We also notice anamorphism here: as the gap between dots becomes larger in image space, the gap between peaks becomes smaller in the Fourier plane. This part of the activity is meant to build familiarity with the common patterns found in images so that they can later be removed by filtering.

Now that we are done discussing the properties of the Fourier transform, we move on to its applications. It is important to note that most of the applications discussed here involve convolution.

The first application is ridge enhancement. In fingerprint images, blotches sometimes occur and the ridge lines may appear unclear. To get a better image than the raw fingerprint itself, we use filtering to remove unnecessary frequencies.

Figure 13. Image of my own fingerprint

The first and most important step is to take the Fourier transform of the fingerprint image and take its logarithm (because of the wide range of values). The resulting image is shown in Figure 14.

Figure 14. Fourier transform of the image in Figure 13

We can see that the Fourier transform of my fingerprint shows a bright central point with two faint rings around it. The points on these rings correspond to the frequencies of the ridges in the image. Since it is difficult to make a filter manually, I automated its creation using a threshold value of 0.45. Since the normalized transform values range only from 0 to 1, everything below 0.45 is zeroed out and everything at or above it is set to 1. The generated filter is shown in Figure 15.

Figure 15. The generated filter with a threshold of 0.45
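The thresholding step can be sketched in Python/NumPy (the activity used Scilab; the ridge-like test image and its frequency below are hypothetical stand-ins for the real fingerprint):

```python
import numpy as np

def make_threshold_filter(img, threshold=0.45):
    """Binary frequency filter: 1 where the normalized log-magnitude
    spectrum is at or above the threshold, 0 elsewhere."""
    F = np.fft.fftshift(np.fft.fft2(img))
    mag = np.log(1.0 + np.abs(F))
    mag /= mag.max()  # normalize to the 0-1 range
    return (mag >= threshold).astype(float)

# quick check on a synthetic ridge-like image (10 cycles along x)
N = 128
y, x = np.mgrid[0:N, 0:N]
ridges = 0.5 + 0.5 * np.sin(2 * np.pi * 10 * x / N)
filt = make_threshold_filter(ridges, 0.45)

# the filter keeps only the DC term and the two ridge-frequency peaks
assert filt.sum() == 3.0
```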

By multiplying the filter with the transform of the fingerprint (which amounts to a convolution in image space), we get the enhanced image shown in Figure 16. Some of the blotches disappear and gaps in the ridges are filled.

Figure 16. Original fingerprint (left) with the filtered image (right)

The changes may not be very noticeable, but they are there if you look closely. A better filter would also produce better results.

Another application discussed here is pattern removal. From what we know about the Fourier transform, we can already spot periodic patterns with it, and if we can spot them, we can also remove them by filtering. The next image is a photograph captured by the NASA Lunar Orbiter.

Figure 17. Photograph with observed repeating vertical line pattern (captured by the NASA Lunar Orbiter)

First, I took the Fourier transform of the image in grayscale, again using a logarithmic scale because of the large range of values. Figure 18 shows the Fourier transform of the image in Figure 17.

Figure 18. Fourier transform of the image in Figure 17

The vertical lines behave like a sinusoidal pattern (and the same holds for horizontal lines), so I created a filter that eliminates the peaks on the x- and y-axes. The created filter is shown in Figure 19.

Figure 19. Filter created to eliminate the vertical and horizontal lines in Figure 17

We note that the center of the filter is hollow. This is because the central peak represents the bias of the image and also carries the important very-low-frequency information. The resulting image after multiplying the filter with the Fourier transform of the original image is shown in Figure 20.

Figure 20. Original image in grayscale (left) and the filtered image (right)
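The whole removal pipeline (mask the offending peaks, invert the transform) fits in a few lines. This Python/NumPy sketch uses a synthetic scene with an added vertical-line pattern in place of the Lunar Orbiter photo; the frequencies are my arbitrary choices:

```python
import numpy as np

def apply_filter(img, filt):
    """Zero out selected frequencies: multiply the centered spectrum
    by a binary filter, then transform back to image space."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * filt)))

N = 128
y, x = np.mgrid[0:N, 0:N]
clean = 0.5 + 0.25 * np.sin(2 * np.pi * 2 * x / N)  # smooth scene stand-in
lines = 0.5 * np.sin(2 * np.pi * 32 * x / N)        # vertical-line pattern
noisy = clean + lines

filt = np.ones((N, N))
filt[N//2, N//2 + 32] = 0.0  # the pattern's two spectral peaks;
filt[N//2, N//2 - 32] = 0.0  # the untouched center keeps the bias

restored = apply_filter(noisy, filt)
# the line pattern is removed while the scene is left intact
assert np.allclose(restored, clean, atol=1e-9)
```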

The result is very significant: most of the vertical and horizontal lines disappear in the second image. To spice things up, I applied the Fourier transform to the individual red, green, and blue channels of the image to retain the color. The result can be found in Figure 21.

Figure 21. RGB version of the images in Figure 20

The last image to be cleaned is an oil painting from the UP Vargas Museum Collection. In this image, we see the canvas weave, which looks like small dots scattered almost evenly over the image.

Figure 22. Oil painting with a canvas weave pattern

I converted the image to grayscale and took its Fourier transform. The resulting image is shown in Figure 23.

Figure 23. Fourier transform of the image in Figure 22

In Figure 23, we notice white peaks symmetric about the center of the image. We also notice that the canvas weave in Figure 22 looks similar to the dot array in Figure 11 (top right), so to eliminate the weave we must filter the image in the Fourier plane and remove white peaks like those in Figure 12 (top right). From Figure 23, I therefore created a filter that blocks only the noticeable white peaks.

Figure 24. Filter used to partially eliminate the pattern in Figure 22

The resultant image is shown in Figure 25. The dots are not removed completely because some of the remaining peaks blend into the background of the Fourier plane, which makes them hard to spot and filter.

Figure 25. The original image (left) vs the filtered image (right) in grayscale

Of course, I wanted to spice things up again, so I separated the colored image into its red, green, and blue channels, took the Fourier transform of each channel, and multiplied each by the same filter in Figure 24. Recombining the three channels gives the result shown in Figure 26. The colors now seem clearer, and some of the dots of the canvas weave pattern have disappeared thanks to the filtering.

Figure 26. RGB version of the images in Figure 25

Also, to view the removed pattern, I inverted the filter (0’s become 1’s and 1’s become 0’s). The resulting images are shown in Figures 27 and 28.

Figure 27. Grayscale image of the removed pattern
Figure 28. Colored image of the removed pattern

As observed in Figure 28, the reason some colors disappeared after filtering is that those colors were part of the removed pattern.

Review:

First of all, I would like to thank everyone who helped me in finishing this activity. I would like to thank Ms. Crizia Alcantara, whose blog on the same topic is where I confirmed and compared my results. I would also like to thank my classmate Ralph Aguinaldo for helping me think of ways to create the filter. Lastly, I would like to thank Ma’am Jing for guiding me throughout this activity and for confirming my results.

THANK YOU!

Next, I would like to express myself. I super enjoyed this activity, especially the applications part. I was shocked at first that I could actually perform pattern elimination myself; I never thought I would do this in my life. And for performing parts E and F with RGB colors, I give myself a 12 out of 10.


Appendix:

[Scilab code screenshots 1 to 10]

Fourier Transform Model of Image Formation

This activity is about the Fourier transform and its applications in imaging. I used Scilab to perform the Fourier transforms on the images. The activity is composed of four parts: discrete FFT familiarization, convolution, template matching using correlation, and edge detection.

For the first part of this activity, I generated certain shapes: a circle, the letter “A”, a sinusoid along the x-direction (a corrugated roof), a simulated double slit, a square aperture, and a 2D Gaussian bell curve.

Figure 1. Circle generated from Scilab
Figure 2. The letter “A” made from MS Paint
Figure 3. Sinusoid along x-direction generated from Scilab
Figure 4. Simulated double slit generated from Scilab
Figure 5. Square aperture generated from Scilab
Figure 6. Gaussian bell curve generated from Scilab

After generating these images, I applied the 2D Fourier transform to each of them. Note that the result of the Fourier transform is complex, with real and imaginary parts. Taking the absolute value of the FFT output gives its modulus.
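The activity used Scilab's fft routines; the same steps look like this in Python/NumPy (the circle radius is an arbitrary choice of mine):

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N] - N // 2
circle = (x**2 + y**2 <= 16**2).astype(float)

F = np.fft.fft2(circle)              # complex: real + imaginary parts
modulus = np.abs(F)                  # the quantity shown in the figures
centered = np.fft.fftshift(modulus)  # move the DC term to the center

# sanity check: the DC (zero-frequency) term equals the pixel sum
assert np.isclose(modulus[0, 0], circle.sum())
```

Without the `fftshift`, the quadrants of the spectrum appear swapped, with the zero frequency at the corners instead of the center.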

Figure 7. FFT of circle
Figure 8. FFT of the letter “A”
Figure 9. FFT of the sinusoid along the x-direction
Figure 10. FFT of the double slit
Figure 11. FFT of the square aperture
Figure 12. FFT of the Gaussian curve

The physical interpretation of the Fourier transform in optics is a lens, set up as shown below:

Figure 13. The physical interpretation of Fourier Transform Retrieved from: cns-alumni.bu.edu

The lens above takes the Fourier transform of the input image, and the resulting image at the screen is the Fourier image. Note that the input image must be at a distance f from the lens, where f is the focal length, and likewise the resulting image forms at the same distance f on the other side. Rays fan out from the input image, and because the image is at the focus, the output rays emerge parallel to each other, producing the Fourier image on the screen [1]. This will make sense later in this activity.

Conversely, it works with input parallel rays:

Figure 14. Parallel rays concentrate after passing through the lens Retrieved from: cns-alumni.bu.edu

Basically, the first part is just familiarization with the Fourier transforms of common patterns. A circular aperture produces a transform of a central disk with rings around it. As seen in Figures 13 and 14, if the image is small, the transform will be big, and if the image is big, the transform will be small. This also applies to the letter “A”: a bigger letter leads to a smaller transform.

For the sinusoid, the Fourier transform shows peaks at the frequencies present in the sinusoid, just as in the 1D Fourier transform of a sine wave. The lower the frequency, the closer its peak is to the central peak. The central peak in Figure 9 comes from the bias I applied to the sinusoid.

For the double slit, the transform shows a pattern similar to that of Young’s double-slit experiment. The diffraction pattern depends on the slit width and the slit separation.

The square aperture is similar to the circular aperture: several ring-like patterns are observed, but they are incomplete because of the corners of the square. Again, a bigger square leads to a smaller transform and vice versa.

Lastly, the Gaussian bell curve is somewhat similar to the circular aperture in that another circular shape appears in the Fourier transform. The difference is that the transform of the bell curve has no rings around it, because the Gaussian falls off smoothly and never drops abruptly to zero. A unique property of the Gaussian bell curve is that its transform is also a Gaussian bell curve.

For each of these Fourier transforms, performing another Fourier transform returns the original image, rotated by 180°.

The second part of this activity deals with convolution. I created a 128×128 image of the letters “VIP” as shown below.

VIP
Figure 15. The letters “VIP”

I also generated a circular aperture and applied fftshift() to it. I then took the Fourier transform of the “VIP” image, multiplied the two resulting arrays, and performed a second Fourier transform on the product. The results were:

Figure 16. The convolved image with the radius of circle equal to 0.1
Figure 17. The convolved image with the radius of circle equal to 0.4
Figure 18. The convolved image with the radius of circle equal to 0.7
Figure 19. The convolved image with the radius of circle equal to 1

The difference between these images is their resolution: a smaller circular aperture leads to a lower-resolution image and a bigger aperture to a higher-resolution one.
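The same aperture-to-resolution relation can be sketched in Python/NumPy; here a plain bright rectangle stands in for the "VIP" image, and the aperture radii match the ones used above:

```python
import numpy as np

def circular_lowpass(img, radius_frac):
    """Multiply the spectrum by a centered circular aperture of relative
    radius radius_frac (1.0 = half the image width), then invert."""
    N = img.shape[0]
    y, x = np.mgrid[0:N, 0:N] - N // 2
    aperture = (x**2 + y**2 <= (radius_frac * N / 2)**2).astype(float)
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.fft.fftshift(aperture)))

N = 128
text = np.zeros((N, N))
text[56:72, 40:88] = 1.0  # a crude stand-in for the "VIP" letters

# smaller aperture -> lower resolution -> larger deviation from the original
errors = [np.sum((circular_lowpass(text, r) - text) ** 2)
          for r in (0.1, 0.4, 0.7)]
assert errors[0] > errors[1] > errors[2]
```

The squared error is exactly the energy of the frequencies the aperture cuts off, so it must shrink as the aperture grows.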

The third part of the activity is about template matching using correlation. I created a 128×128 image of the statement “THE RAIN IN SPAIN STAYS MAINLY IN THE PLAIN” using Paint, and a separate 128×128 image of the letter “A” in the same font and size as the A’s in the statement, placed at the middle of the image.

Figure 20. The image of the statement mentioned above
Figure 21. The letter “A” to be used in template matching

I took the Fourier transforms of both images and multiplied the Fourier transform of “A” by the conjugate of the Fourier transform of the statement. Performing another Fourier transform on the product gives:

Figure 22. The resultant image after performing the second Fourier transform

Notice that the resultant image is somewhat similar to the original image but very blurry. Also, five very bright points can be seen at what appear to be the locations of the letter “A”s. This technique is called correlation. By applying a threshold in the code, I can eliminate the rest of the image so that only the five white points remain.

Figure 23. Filtered image of Figure 22
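The FFT-based correlation step can be sketched in Python/NumPy. The boxes below are hypothetical stand-ins for the letters; the sanity check uses the fact that correlating an image with itself must peak at zero lag (the center, after shifting):

```python
import numpy as np

def correlate(img, template):
    """Correlation via the FFT: multiply the transform of the template
    by the conjugate of the transform of the image, then invert."""
    F_img = np.fft.fft2(img)
    F_tpl = np.fft.fft2(template)
    return np.abs(np.fft.fftshift(np.fft.ifft2(F_tpl * np.conj(F_img))))

N = 128
img = np.zeros((N, N))
img[50:60, 30:40] = 1.0  # two identical "letters" at different spots
img[90:100, 80:90] = 1.0

corr = correlate(img, img)
peak = np.unravel_index(np.argmax(corr), corr.shape)
# the autocorrelation peaks at zero lag, i.e. the center
assert peak == (N // 2, N // 2)
```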

The last part of this activity is about edge detection using correlation. I used the same image as in Figure 15, and generated three 128×128 arrays, each with a different 3×3 pattern in the middle: a horizontal pattern, a vertical pattern, and a dot pattern. I repeated the procedure from the third part of the activity, with the letters “VIP” taking the place of the statement and each pattern taking the place of the letter “A”.

Figure 24. Resultant image for the horizontal pattern
Figure 25. Resultant image for the vertical pattern
Figure 26. Resultant image for the dot pattern

This technique scans the whole image for regions similar to the given pattern, as observed in part 3. The dot pattern can be used for edge detection of arbitrary shapes, as its performance in Figure 26 shows.
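Why a zero-sum dot kernel picks out edges can be shown with FFT-based convolution in Python/NumPy (the filled square is a hypothetical stand-in for "VIP"; the kernel values are a common zero-sum choice, not necessarily the exact ones used in the activity):

```python
import numpy as np

N = 128
img = np.zeros((N, N))
img[40:80, 40:80] = 1.0  # a filled square standing in for "VIP"

# "dot" pattern: a 3x3 kernel whose entries sum to zero
K = np.array([[-1., -1., -1.],
              [-1.,  8., -1.],
              [-1., -1., -1.]])
kern = np.zeros((N, N))
kern[:3, :3] = K
kern = np.roll(kern, (-1, -1), axis=(0, 1))  # center the kernel on (0, 0)

# convolution via the FFT
resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))

# flat regions cancel to zero; only edges give a strong response
assert abs(resp[60, 60]) < 1e-8   # deep interior of the square
assert abs(resp[40, 60]) > 1.0    # on the top edge
```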

But this activity will not end without me trying something new. I learned that the correlation technique lets us scan the whole image for a given pattern, so I tried it on the famous game “Where’s Waldo?”. I made a program dedicated to it, but unfortunately the colored version of the game did not return good results, so I only did it with a black-and-white version. First, I found Waldo in the image, isolated him, and created a new file of the same size as the original with only Waldo in the middle against a black background.

Figure 27. Where’s Waldo?
Figure 28. There’s Waldo!

I used correlation to find Waldo and the resulting image is:

Figure 29. Resultant image of “Where’s Waldo?”

Although it is faint, a white peak can be seen inside the red circle. Mapping it back to the original “Where’s Waldo?” image, it is exactly where Waldo is located. I can say that this technique really is helpful.

References:

[1] S. Lehar. An Intuitive Explanation of Fourier Theory. Retrieved: September 15, 2015 from http://cns-alumni.bu.edu/~slehar/fourier/fourier.html

Review:

First of all, I would like to thank everyone who helped me in finishing this activity. I thank Ms. Eloisa Ventura for guiding me to get the correct results for this activity. I would also like to thank Ma’am Jing for the additional guidance and for confirming everything.


Secondly, I enjoyed this activity because, for me, this is brand-new information. Writing the code was also fun because I am interested in programming. If I were to rate myself on this activity, I would give myself a 10 + 2, so a 12, for experimenting on the popular game “Where’s Waldo?”.


Appendix:

[Scilab code screenshots: the codes I used for activity 5]

Length and Area Estimation

Now here we are again in another exciting activity in Applied Physics 186. YEY!

AAAAAAAAAH!

This time, the activity is about length and area estimation. It is composed of three parts: simulated shape area, Google Maps area, and ImageJ measurements.

Do not worry, it will not be that hard, and since I learned Scilab in the last activity, this activity will be a piece of cake!


To start, I used past code (see Activity 3) to generate two kinds of shapes: a square and a circle.

Figure 1. Centered Square
Figure 2. Centered Circle

For the two images, the inside of the shape is white while the background is black. This is important because when using the edge function, the boundary must go from black to white; if any other colors are present in the image, an error occurs. Note that there are different types of edge functions in Scilab (e.g. canny, log, sobel, etc.). Each type outputs a somewhat different edge image, but I will not discuss them here. For this activity, I chose the canny type because it outputs the area closest to the theoretical value. The output of the canny edge function is:

Figure 3. Result of the edge function on the square
Figure 4. Result of the edge function on the circle

After applying the edge function, I scanned the whole image for a point inside the shape such that no radial line from it intersects the edge more than once.

Figure 5. An example of multiple intersections Retrieved from: http://www.engram9.info

The best point to choose for these two shapes is their respective centers. To get the center along the y-axis, I added the topmost and bottommost points and divided by two; for the x-axis, I added the rightmost and leftmost points and again divided by two. From here on, all angle measurements are taken from this point.

Before solving for the angles, I used the find function to scan the whole image and return the coordinates of the white pixels (value 1), which gives the edge pixel coordinates. I then computed each edge pixel’s angle with respect to the chosen center and sorted the pixels by increasing angle. With this, we can apply Green’s theorem to solve for the area inside each shape.

To find the area, I cut it into small triangular slices. Each pixel coordinate is a point on the edge of the area; we take two adjacent points, (x1, y1) and (x2, y2), and find the area of the triangle they form with the center.

Figure 6. The triangle formed using the center and the two points. Retrieved from: Activity 4 Lecture and Laboratory Manual

The figure above shows an example of the triangle formed by two adjacent points. Summing the areas of all such triangles over the edge pixels gives the total area of the shape. Green’s theorem states that the area of each triangle is:

Equation 1. Green’s theorem: A_i = (1/2)(x_i y_(i+1) − x_(i+1) y_i)

From the square and circle examples, I got a total area of 1.0024859 square units for the square and 1.1308316 square units for the circle. Theoretically, the areas should be 1.0 and 1.13097 square units respectively, giving percent errors of 0.25% and 0.012%. These errors are small enough to safely assume that Green’s theorem is usable for a given area, with the caveat that the chosen center must “see” every edge point directly (a star-shaped region) for the angle sorting to trace the boundary correctly; a more irregular shape can be broken into smaller pieces so Green’s theorem can still be used.
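The angle-sorting and Green's-theorem steps can be cross-checked outside Scilab with a short Python/NumPy sketch (the circle radius and point count are arbitrary choices of mine; the shuffling simulates the unordered pixel list returned by an edge detector):

```python
import numpy as np

def green_area(px, py):
    """Area from unordered boundary points: sort by angle about the
    centroid, then apply Green's theorem (the shoelace formula)."""
    cx, cy = px.mean(), py.mean()
    order = np.argsort(np.arctan2(py - cy, px - cx))
    x, y = px[order], py[order]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

# check against a circle of radius 50 sampled at shuffled boundary points
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
rng = np.random.default_rng(0)
rng.shuffle(t)
px, py = 50.0 * np.cos(t), 50.0 * np.sin(t)

area = green_area(px, py)
# within 1% of the true area pi * r^2
assert abs(area - np.pi * 50.0**2) / (np.pi * 50.0**2) < 0.01
```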

The code used for this activity is shown below. Just replace the argument of imread with the filename of the black-and-white image:

Figure 7. The code for this activity

Now that we have verified that Green’s theorem is usable, we take an area from Google Maps and, using Scilab, estimate the area of the chosen spot. I chose the state of Colorado in the United States of America because it has a relatively easy shape but is large enough that the scale is in hundreds of thousands of square kilometers.

Figure 8. The state of Colorado in the United States. Retrieved from: Google maps

The first step is to edit the photo so that the area to be measured is white and the background is black.

Figure 9. The edited state of Colorado

From the image above, I used the canny edge function to isolate the edge pixels.

Figure 10. The edge pixels of the state of Colorado

We then choose a central point and sort the edge pixels by their angle from the center in increasing order. After sorting, we use Green’s theorem to get the area; in this case, I multiplied the acquired area by the scaling factor (pixels to kilometers) to express it in square kilometers. The calculated area (from Scilab) is 275449.76 square kilometers; from Wikipedia, the area of Colorado is 269837 square kilometers, giving a percent error of 2.08%, which is relatively small. The error may be attributed to inaccurate editing in Paint (from colored to black and white): some areas outside the state may have been mistakenly painted white and thus counted as part of the state. Since the scale is so large, each pixel corresponds to a few kilometers, so a small mistake can lead to a large change in the computed area.

For the last part of this activity, we are to use the program ImageJ by the National Institutes of Health, US. I scanned a Gerry’s Grill membership card together with a protractor.

Figure 11. Gerry’s grill membership card together with a protractor

I used 1 cm on the protractor’s scale to calibrate the lengths. With the lengths calibrated, I measured the area of the card in ImageJ, and I also measured it by hand using the protractor. From ImageJ, I got an area of 45.609 square centimeters; by hand, 45.9 square centimeters, a percent error of 0.63%. The error can be attributed to the curved corners of the card; it is not easy to measure the area of a card with curved corners.

Review:

First of all, I enjoyed doing this activity, even though I took a mini vacation for one week (from Thursday to Wednesday). Sorry, ma’am =)

But during that mini vacation, in my free time, I did this activity and was amazed by the results. Anyway, I finished the activity, but I have not been able to think of ways to put “above and beyond” into my report, so I will just give myself a grade of 10.


Also, I would like to give credit to those who helped me in this activity. I would like to thank Jesli Santiago and Martin Bartolome for helping me with my code for Green’s theorem. I would also like to thank the state of Colorado for letting me use their state in my report. Thank you, America!
