Clouds are not normally a boon for image-processing algorithms: their shadows can distort objects in a scene, making them difficult for software to recognise. However, researchers at Washington University in St Louis, Missouri, are making shadows work for them, using them to create a depth map of a scene from a single camera.

Depth maps record the geography of a 3D landscape and represent it in 2D, for uses such as surveillance and atmospheric monitoring. They are usually created using lasers, because adjacent pixels in a camera image do not equate to adjacent geographic points: one pixel might fall on the outline of a hill in the near distance, while an adjoining one belongs to a far more distant landmark.
Enter the clouds: the shadows they cast can hint at real-world geography, the researchers say. By comparing a series of images and recording the times at which passing shadows change each pixel's colour, they can estimate the distance between the ground points that any two pixels represent. If the wind speed is known, the scene can be reconstructed at the correct scale, something that is otherwise very difficult from a single camera viewpoint.

Compared with laser-created maps, the average positional error in the cloud-derived map was just 2 per cent. The work is to be presented at the Computer Vision and Pattern Recognition conference in San Francisco.
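The article does not spell out the team's implementation, but the core idea, timing when a drifting shadow edge reaches each pixel and converting that delay to a ground distance via the wind speed, can be sketched roughly as follows. This is a minimal illustration, not the researchers' method: the cross-correlation approach, the function names, and the wind_speed and frame_interval parameters are all assumptions made for the example.

```python
import numpy as np

def shadow_lag(series_a, series_b):
    """Frame lag between shadow arrivals at two pixels, taken from the
    peak of the cross-correlation of their brightness time series."""
    a = series_a - series_a.mean()
    b = series_b - series_b.mean()
    corr = np.correlate(a, b, mode="full")
    return np.argmax(corr) - (len(b) - 1)  # shift so zero lag is centred

def pairwise_distance(frames, px_a, px_b, wind_speed, frame_interval):
    """Rough along-wind ground distance (metres) between the world points
    seen by two pixels, assuming shadows drift with the wind at a constant,
    known speed.  frames is a (T, H, W) stack from a fixed camera."""
    a = frames[:, px_a[0], px_a[1]].astype(float)
    b = frames[:, px_b[0], px_b[1]].astype(float)
    lag = shadow_lag(a, b)                         # delay in frames
    return abs(lag) * frame_interval * wind_speed  # s * m/s -> metres

if __name__ == "__main__":
    # Synthetic check: a shadow edge sweeps across a 1 x 64 pixel "scene"
    # at one pixel per frame, so pixel 40 darkens 30 frames after pixel 10.
    T, W = 100, 64
    frames = np.ones((T, 1, W))
    for t in range(T):
        frames[t, 0, :min(t, W)] = 0.3             # shadowed region grows
    d = pairwise_distance(frames, (0, 10), (0, 40),
                          wind_speed=5.0,          # m/s, assumed measured
                          frame_interval=1.0)      # seconds between frames
    print(f"estimated separation: {d:.0f} m")      # 30 frames * 5 m/s = 150 m
```

In the synthetic test, the shadow edge reaches the second pixel 30 frames after the first, so at an assumed 5 m/s wind the two points come out roughly 150 metres apart; a real system would also need the wind direction, and many pixel pairs, to assemble a full depth map.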
More information:
http://www.newscientist.com/article/mg20627655.500-clouds-add-depth-to-computer-landscapes.html