Panorama test sequence
My result on the test sequence is almost identical to the provided solution image. The overall process is as follows:
1. Warp each image into spherical coordinates (see WarpSphericalField).
2. Detect the features in each warped image. (I used the Features.exe provided in Homework 1.)
3. Match features between images. (I again used the Features.exe provided in Homework 1 to do the feature matching, with the ratio test.)
4. Align every pair of neighboring images using RANSAC.
5. Blend the aligned images to create the final panorama.
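Step 1, the spherical warp, can be sketched as an inverse mapping: for each pixel of the output we compute its longitude/latitude, turn that into a unit view direction, and project back onto the original image plane. This is only a minimal sketch, assuming a known focal length `f` in pixels and nearest-neighbour sampling; the homework's actual WarpSphericalField also handles radial distortion, which is omitted here.

```python
import numpy as np

def warp_spherical(image, f):
    """Warp a planar image into spherical coordinates by inverse mapping.

    Sketch only: assumes focal length `f` in pixels, nearest-neighbour
    sampling, and no radial distortion correction.
    """
    h, w = image.shape[:2]
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    theta = (xs - xc) / f            # longitude of each output pixel
    phi = (ys - yc) / f              # latitude of each output pixel
    # Unit view direction for (theta, phi), then project back onto the
    # original image plane at depth z = 1.
    xhat = np.sin(theta) * np.cos(phi)
    yhat = np.sin(phi)
    zhat = np.cos(theta) * np.cos(phi)
    u = np.round(f * xhat / zhat + xc).astype(int)
    v = np.round(f * yhat / zhat + yc).astype(int)
    valid = (zhat > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[ys[valid], xs[valid]] = image[v[valid], u[valid]]
    return out
```

Near the image center the warp is close to the identity, which is a quick sanity check when debugging.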
Sequence taken by hand
What Worked Well:
The first photo below was taken in my Microsoft office room; I took 6 vertical images and combined them. The second photo was taken at my home and is composed of 7 vertical images. I found that when the photos are taken in places without much lighting variation, feathered blending with blendwidth = imageWidth/2 works well.
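The blending step can be illustrated with a 1-D feathering sketch: each image contributes with a weight that ramps linearly to zero across the overlap, and the accumulated pixels are normalised by the total weight. This is a simplified grayscale version under my own assumptions (two horizontally aligned float strips, `overlap` playing the role of the blendwidth), not the homework's exact code.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Feather-blend two horizontally aligned grayscale strips.

    Sketch only: `left` and `right` are float arrays of equal height,
    overlapping by `overlap` columns (the "blendwidth").
    """
    h, wl = left.shape
    _, wr = right.shape
    out_w = wl + wr - overlap
    out = np.zeros((h, out_w))
    weight = np.zeros(out_w)
    # Linear alpha ramps: left fades out, right fades in over the overlap.
    alpha_l = np.ones(wl)
    alpha_l[wl - overlap:] = np.linspace(1.0, 0.0, overlap)
    alpha_r = np.ones(wr)
    alpha_r[:overlap] = np.linspace(0.0, 1.0, overlap)
    # Accumulate weighted pixels, then normalise by total weight.
    out[:, :wl] += left * alpha_l
    weight[:wl] += alpha_l
    out[:, out_w - wr:] += right * alpha_r
    weight[out_w - wr:] += alpha_r
    return out / np.maximum(weight, 1e-8)
```

A wide blendwidth like imageWidth/2 hides small exposure differences, which matches the observation above that it works best when lighting is consistent across shots.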
What Didn’t Work Well:
1. As I held the Windows Phone by hand, it was really hard to keep it absolutely vertical while rotating between shots. The obvious weakness of the homework implementation is that the transform estimated between neighboring images accounts only for translation, so any camera rotation about a different axis causes problems. I also saw some alignment issues because of this.
2. Ghosting is still an issue when I take pictures in a very small room (e.g. my second photo below), because the distortion is more obvious when the scene is close to the camera.
3. I tried using my own Features.exe implemented in Homework 1. It works well for the test sequence, but under low illumination (e.g. indoors) and image rotation it performed worse than the provided Features.exe and output some strange matching pictures. Thus I discarded this approach.
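The translation-only alignment criticised in point 1 above is easy to see in a sketch of the RANSAC step: each hypothesis is a single (dx, dy) taken from one match, so nothing in the model can absorb rotation. This is a minimal illustration under my own assumptions; `matches`, the parameter names, and the refinement by averaging are illustrative, not the homework's interface.

```python
import random

def ransac_translation(matches, n_iters=500, inlier_thresh=4.0, seed=0):
    """Estimate a pure translation between matched keypoints with RANSAC.

    Sketch only: `matches` is a list of ((x1, y1), (x2, y2)) pairs.
    The motion model is translation-only, as in the writeup.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        # One match fully determines a translation hypothesis.
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if (m[1][0] - m[0][0] - dx) ** 2
                    + (m[1][1] - m[0][1] - dy) ** 2 <= inlier_thresh ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine: average the translation over all inliers.
    dx = sum(m[1][0] - m[0][0] for m in best_inliers) / len(best_inliers)
    dy = sum(m[1][1] - m[0][1] for m in best_inliers) / len(best_inliers)
    return dx, dy, best_inliers
```

Because the hypothesis has only two degrees of freedom, any residual camera roll shows up as a vertical drift that no choice of (dx, dy) can cancel, which is exactly the hand-held failure mode described above.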