
Cannot explain hole in mosaic image

Joined: 2009-08-17

I have a pretty simple geographic image generator that I am having problems with.

Users request a certain size image (say 512x512) over a certain number of latitude/longitude degrees (say 10 degrees x 10 degrees). From that I calculate the lat/lon degrees per pixel: 10 / 512 = 0.01953125 degrees per pixel in my final image.
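As a sanity check of that arithmetic, here is a tiny sketch (plain Java, method name is mine, not from the poster's code): the resolution is the degree span divided by the pixel count.

```java
public class Resolution {
    // degrees spanned by the request divided by the pixel size of the output
    static double degreesPerPixel(double degrees, int pixels) {
        return degrees / pixels;
    }

    public static void main(String[] args) {
        // 10 degrees across 512 pixels
        System.out.println(degreesPerPixel(10.0, 512)); // 0.01953125
    }
}
```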

I then take all the images I need to create the mosaic, scale them down to that lat/lon degrees per pixel and translate to the correct location on the final image.

However I am getting 'holes' in the final image and I am not sure why. Looking at my images' width/height and minX/minY, I cannot explain the hole.

Here is a rough example. I have two images that when mosaiced together should produce an image that is 514x514.

left image is 330(width)x514(height), minY = 0, minX = -1
right image is 185(width)x514(height), minY = 0, minX = 329

So when I mosaic starting at 0,0 and going to 514,514, the left image should cover from 0,0 out to 329x514, and the right image, starting at 329,0, should cover from 329 to 514. However I end up with a hole running vertically down the image at x = 329.

0            329           514
------------- x -----------
------------- x -----------
------1------ x -----2-----
------------- x -----------
------------- x -----------

I cannot explain this because the left image should go to pixel 329 and the right image should start at 329.
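A quick check of the reported bounds (numbers copied straight from the post) confirms this: in whole pixels the two tiles abut exactly, so whatever opens the gap must be happening at the sub-pixel level (rounding during the scale/translate, or interpolation at the tile edges), not in the stated geometry. A toy sketch, not JAI code:

```java
public class Bounds {
    // Does the left tile's exclusive right edge meet the right tile's left edge?
    static boolean tilesAbut(int leftMinX, int leftWidth, int rightMinX) {
        return leftMinX + leftWidth == rightMinX;
    }

    public static void main(String[] args) {
        // left tile: minX = -1, width 330 -> covers [-1, 329)
        // right tile: minX = 329, width 185 -> covers [329, 514)
        System.out.println(tilesAbut(-1, 330, 329)); // true: no gap in integer bounds
    }
}
```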

I have the JAI docs here, but I cannot find any reason why this is happening.

I tried changing the interpolation on the translation to something like linear or cubic but that did not help.

Does anyone have any idea what is going on?


Joined: 2007-11-14

I am not sure whether the following explains your problem, but it is worth considering.
Whenever you transform an image from one set of coordinates to another there is an important point to bear in mind (at least for any transform more complicated than a translation by an integer number of pixels).
If you loop over the pixels of the source image to work out where they should end up in the target image, there are two problems. First, the target position will not generally be an integer pixel position, so you have to distribute proportions of the brightness to each of the surrounding target pixels (a kind of inverse interpolation, but read/add/write rather than just reading each neighbouring pixel). Second, and more importantly, you are not guaranteed to cover all the pixels in the target image, so there will be some black holes.
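You can see the second problem with a toy forward-mapping demo (plain Java, nothing to do with JAI's internals): map each source pixel of a small image through a non-integer scale, rounding to the nearest destination pixel, and count the destination pixels that no source pixel ever lands on.

```java
public class ForwardHoles {
    // Forward-map every source pixel through the scale and mark where it lands;
    // return how many destination pixels were never written (the "holes").
    static int countHoles(int srcSize, double scale) {
        int dstSize = (int) Math.floor(srcSize * scale);
        boolean[][] covered = new boolean[dstSize][dstSize];
        for (int y = 0; y < srcSize; y++) {
            for (int x = 0; x < srcSize; x++) {
                int dx = (int) Math.round(x * scale);
                int dy = (int) Math.round(y * scale);
                if (dx < dstSize && dy < dstSize) covered[dy][dx] = true;
            }
        }
        int holes = 0;
        for (boolean[] row : covered)
            for (boolean c : row) if (!c) holes++;
        return holes;
    }

    public static void main(String[] args) {
        // 10x10 source scaled by 1.37: many destination pixels are never written
        System.out.println(countHoles(10, 1.37));
    }
}
```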
Therefore it is important to do the transform the opposite way round: loop through all the pixels of the target image, and for each one work out where the data should have come from. That is generally a non-integer pixel position, so you read an interpolated value from the surrounding source pixels. This way you can be sure not to leave any black holes. There may be some black edge pixels if the inverse transform does not map exactly onto the source image, but that would be correct.
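The inverse approach described above can be sketched like this (again a toy, not JAI; nearest-neighbour sampling for brevity, where bilinear would interpolate the four surrounding source pixels):

```java
public class InverseMap {
    // Scale an image by looping over DESTINATION pixels and inverse-mapping
    // each one back into the source, so every destination pixel gets a value.
    static int[][] scale(int[][] src, double scale) {
        int srcH = src.length, srcW = src[0].length;
        int dstH = (int) Math.floor(srcH * scale);
        int dstW = (int) Math.floor(srcW * scale);
        int[][] dst = new int[dstH][dstW];
        for (int y = 0; y < dstH; y++) {
            for (int x = 0; x < dstW; x++) {
                // inverse transform: where in the source does this pixel come from?
                int sx = Math.min(srcW - 1, (int) Math.round(x / scale));
                int sy = Math.min(srcH - 1, (int) Math.round(y / scale));
                dst[y][x] = src[sy][sx];   // never a hole: every pixel is written
            }
        }
        return dst;
    }
}
```

Scaling a uniform image this way produces a uniform result with no unwritten pixels, in contrast to the forward loop.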
I always program such transforms myself rather than relying on other APIs to have got it right.