In a recently published patent application, Samsung disclosed an invention that puts shallow depth-of-field capabilities into tiny, point-and-shoot cameras. Samsung gets there by putting dual lenses in a single camera and using that second lens in a rather novel way.
While the primary lens is capturing the full-resolution image, a secondary lens and sensor captures another image with the sole purpose of evaluating the relative distances of areas in the image. (In some configurations, the camera may actually be a 3D camera, while in other configurations, it may truly be a secondary lens and sensor.) Then, the camera merges the data with the primary image to create a depth map and applies a graduated blur based on this depth map.
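To make the idea concrete, here is a rough sketch of a depth-map-driven graduated blur. This is my own illustration of the general technique, not Samsung's actual algorithm; the function name, the banded-blur approach, and all parameters are assumptions. It takes an image plus a per-pixel depth map and blends several Gaussian blur strengths, keeping pixels near the chosen focal depth sharp and blurring pixels progressively more the farther they sit from it.

```python
# Illustrative sketch only -- not Samsung's patented method.
import numpy as np
from scipy.ndimage import gaussian_filter

def graduated_blur(image, depth, focal_depth, max_sigma=6.0, bands=4):
    """Blend several blur strengths according to |depth - focal_depth|.

    image: 2-D float array (grayscale, for simplicity)
    depth: 2-D float array in [0, 1], larger = farther away
    focal_depth: the depth value that should stay sharp
    """
    # Normalized distance of each pixel from the focal plane.
    dist = np.clip(np.abs(depth - focal_depth), 0.0, 1.0)
    out = np.zeros_like(image, dtype=float)
    weight_sum = np.zeros_like(image, dtype=float)
    for b in range(bands):
        sigma = max_sigma * b / (bands - 1)  # 0 .. max_sigma
        blurred = gaussian_filter(image.astype(float), sigma)
        # Triangular weights: each pixel draws mostly from the band
        # whose blur strength matches its distance from focus.
        center = b / (bands - 1)
        w = np.maximum(0.0, 1.0 - np.abs(dist - center) * (bands - 1))
        out += w * blurred
        weight_sum += w
    return out / np.maximum(weight_sum, 1e-8)
```

In-camera implementations would presumably use a per-pixel variable-radius blur rather than discrete bands, but the banded blend above captures the core effect: the depth map, not the user, decides how much bokeh each region gets.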
This doesn’t appear to be too dissimilar from what programs like Alien Skin’s Bokeh 2 or Topaz Lens Effects do. In fact, the depth map in the Samsung patent reminds me a lot of what you create inside Lens Effects – the difference being that you build the depth map manually in Lens Effects, whereas Samsung’s camera would generate it automatically from its distance calculations.
This is a feature that I think makes sense for a point-and-shoot camera. As noted in my review, I was very happy with the results I could quickly generate from Bokeh 2; however, most folks using compact cameras don’t want to fool with Photoshop to add bokeh to otherwise simple snapshots.
But if you could get the same simulated bokeh without any extra work for the user, then I think consumers would be more than pleased with the results. If Samsung plays its cards right in terms of operation and price point, this could be a very solid feature for its point-and-shoot cameras.
While there are other compact cameras out there that offer some sort of synthetic bokeh effect, I don’t know that any of them are quite as technically precise and elegant as what Samsung is proposing in this patent.
What do you think about Samsung’s proposed solution to the depth-of-field problem in tiny-sensored cameras?