Written by: J. Gerry Purdy
10/31/2012 

We all know that images captured with smartphones are “OK” but not great. They fall short of dedicated point-and-shoot and digital SLR cameras, but they capture the moment even if the quality isn’t all you’d like it to be. We have traded fidelity for convenience. Most of us – including me – still carry a dedicated point-and-shoot camera to social events and on vacations to get photos that are a whole lot better than what a smartphone can capture. That’s all about to change. Here’s why.

 
There are a number of reasons why the camera in a smartphone can’t capture great images:
  • Small size of the lens – the lens in a smartphone is tiny. It doesn’t let in much light, and the light it does let in is spread thinly across the image sensor.
  • Shallow depth of the lens – the large lenses used on digital SLRs have the physical depth to provide optical zoom and the ability to focus within a very narrow range.
  • Small image capture chips – smartphone image sensors and processing are limited compared with larger digital cameras, which have bigger sensors and dedicated chips to process images.
  • Low power flash – smartphones have a small LED flash to help in low-light or ‘faces in shadows’ situations, versus the large (and sometimes multiple) flash units available for larger cameras.
  • Inability to do optical zoom – you can’t really do optical zoom in a smartphone camera. There’s no depth or room to install a large lens inside the case, although some innovations have added larger lenses as external accessories.
Wouldn’t it be great if it were possible to take really great 3D images and full HD video with a smartphone camera? Until recently, that seemed like a fantasy – something that could only be done with digital effects in the movies.

The smartphone form factor is thin, the image capture chips are small, the flash is low power and there’s no ability to do optical zoom. It seems like an impossible environment in which to take really great digital images or HD video. I had concluded that you simply couldn’t get a camera in a smartphone to take anywhere near as good an image or video as you could with a dedicated, larger digital camera.

The first inkling that I might be wrong came from Nokia, first with its 41MP PureView camera and then with the 8MP PureView technology in the new Lumia 920 that can capture good images in low light. But these were still just extensions of what a smartphone camera can already do – take an image using a single small lens.

In order to achieve a real breakthrough in image capture in small devices like smartphones, something truly innovative and, most likely, radical had to be invented. The founders of Pelican Imaging started with the idea of using an array of low resolution photoplanes coupled to an array of small lenses and using the overlapping information to create astounding images and videos.

The core intellectual property (IP) starts with an array of 16 inexpensive, mass-produced, accurately aligned cameras. The array creates 16 images – each one slightly different from the others, since each captures the scene from a slightly different angle. Because the input images all come from lens and pixel technologies that are mature and low cost, the yield of the solution should be very good. You can see the 16 ‘similar’ images captured by the 16 lenses in Figure 2.
Because each image is taken from a slightly different angle, the processing logic can determine how far objects are from the camera, providing a depth map of the scene – so every capture is effectively a 3D image. And the 16 slightly different versions of the photo allow Pelican’s proprietary (and patent-pending) software to synthesize a higher-resolution image. Thus, Pelican Imaging can produce far better images than can be made with a traditional high-megapixel single lens and image sensor chip.
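
To make the depth-map idea concrete, here is a minimal sketch of the standard relationship that multi-camera depth estimation relies on – not Pelican’s proprietary algorithm, and with made-up numbers: the more an object shifts between two neighboring sub-images (its disparity), the closer it is to the camera.

    # Minimal sketch of depth from disparity between two neighboring
    # sub-cameras. The baseline and focal length are hypothetical
    # example values, not Pelican Imaging specifications.

    def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
        # Classic pinhole relation: depth = focal_length * baseline / disparity
        if disparity_px <= 0:
            return float("inf")  # no measurable shift: effectively at infinity
        return focal_length_px * baseline_mm / disparity_px

    # Example: a hypothetical 2 mm spacing between adjacent lenses and a
    # focal length equivalent to 1200 pixels.
    for d in (1, 5, 20, 80):  # disparity in pixels
        print(f"disparity {d:2d} px -> depth {depth_from_disparity(d, 1200, 2.0):.0f} mm")

With 16 views instead of two, there are many such camera pairs to combine, which is what makes the resulting depth map usable across the whole scene.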

We’re on the verge of a renaissance in smartphone camera imaging: a shift from pouring ever more investment into capturing better static (unchangeable) images to a ‘new age’ in which the camera starts with a number of lower-quality images and then processes them on the smartphone’s CPU to produce amazing dynamic (changeable on the fly), high-quality images that weren’t possible before.

Here’s one of the really cool capabilities of having the array of 16 images: because the array contains depth information for the whole scene, you can refocus the photo on any part of it after the fact. One person might focus on someone in the front of a photo, while someone else might refocus the same shot on someone in the back. This is shown in Figure 3.

It’s even possible to focus on one person and dynamically ‘blur out’ the background – the look of a professional photographer using very expensive lenses with a wide aperture (low f-stop number).
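
As an illustration of how after-the-fact refocusing can work once a depth map exists – a simplified stand-in, not Pelican’s actual pipeline – the sketch below blurs each pixel of a hypothetical grayscale image in proportion to how far its depth is from the chosen focal plane, using Python with NumPy and SciPy.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def synthetic_refocus(image, depth_map, focus_depth, max_sigma=6.0):
        # Pixels at focus_depth stay sharp; everything else gets
        # progressively more blur the farther its depth is from that plane.
        error = np.abs(depth_map - focus_depth)
        error = error / (error.max() + 1e-9)   # normalize to [0, 1]

        # Precompute a few blur levels and pick one per pixel.
        levels = 4
        blurred = [gaussian_filter(image, sigma=max_sigma * i / (levels - 1))
                   for i in range(levels)]
        pick = np.clip(np.rint(error * (levels - 1)).astype(int), 0, levels - 1)

        out = np.empty_like(image)
        for i in range(levels):
            out[pick == i] = blurred[i][pick == i]
        return out

Setting focus_depth to the subject’s depth and letting the rest blur gives exactly the shallow depth-of-field look described above.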

Another very cool capability of a 16-camera array with depth is the ability to estimate fairly accurately how far away any object is from the camera. Look at Figure 4.

Pelican’s processing technology also improves each of the individual raw sub-array images it captures. All of us have taken photos where part of the frame is blurred, where the lighting is wrong against the background, or where the piece we actually want to ‘blow up’ and use is only a small part of the photo.

Take a look at Figure 5, which shows a tiny portion of an overall image from a few of the Pelican sub-array cameras. On the left is a blow-up of that tiny region, which is clearly somewhat out of focus and grainy. Now look at the image on the right, created after applying Pelican Imaging’s proprietary processing. Using the camera array to increase resolving power to sub-pixel accuracy – called super resolution – is one of their core competencies. It’s very impressive.
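
To give a feel for how super resolution can work, here is a naive ‘shift-and-add’ sketch: if each low-resolution frame is offset from the others by a known sub-pixel amount, its samples can be scattered onto a finer grid and averaged. The shifts are assumed to be known here; estimating them, and handling the parallax between sub-cameras, is where the real engineering lies, so treat this only as an illustration of the principle rather than Pelican’s method.

    import numpy as np

    def shift_and_add(frames, shifts, scale=2):
        # frames: list of HxW low-resolution images
        # shifts: list of (dy, dx) sub-pixel offsets per frame, in
        #         low-resolution pixels (assumed already estimated)
        # scale:  upsampling factor of the output grid
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        hits = np.zeros_like(acc)

        for frame, (dy, dx) in zip(frames, shifts):
            # Place each low-res sample onto its nearest high-res grid cell.
            ys = np.rint((np.arange(h) + dy) * scale).astype(int) % (h * scale)
            xs = np.rint((np.arange(w) + dx) * scale).astype(int) % (w * scale)
            acc[np.ix_(ys, xs)] += frame
            hits[np.ix_(ys, xs)] += 1.0

        # Average where samples landed; untouched cells stay zero in this toy.
        return acc / np.maximum(hits, 1.0)

A real pipeline would also fill in the cells no sample lands on and sharpen the result, but the sub-pixel offsets between the sub-cameras are what make the extra detail recoverable at all.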
And, finally, getting an acceptable image in low light is very difficult. Take a look at Figure 6, taken in about 3 lux of light – basically at night with only ambient light.
Improving images taken in low light is getting a lot of attention. Nokia recently demonstrated similar low-light improvements with its PureView technology. It appears to me that if Nokia combined Pelican Imaging’s technology with some of its own enhancement capabilities, the results would be even better.

What’s the future of smartphone cameras and image processing? The current technology roadmap would forecast higher-performance lenses, more megapixels and continually improving image processing – the same way dedicated digital cameras are improved. But that increases the cost of the device, which runs directly counter to the industry’s pressure to provide more capability at the same cost and, in the developing world, to lower costs so that smartphones reach more people.

As the cost of the technology comes down, smartphone and mobile device manufacturers can increase the number of inexpensive lenses, enabling larger arrays – perhaps up to 8x8 at some point – along with improved image capture chips and software that continue to push the processing onto the smartphone’s CPU.

The processing requirements of these larger arrays will increase correspondingly, but given the ongoing growth in smartphone CPU performance (much as we have seen in PCs over the past 20+ years), the camera function and the resulting images become a problem of computation rather than of more expensive components. Also realize that we’ll likely see improvements in every aspect of array cameras and software – not only the size of the lens array, but the resolution of each image in the array and the processing applied to those images.
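
As a rough back-of-envelope – the per-camera resolution here is a made-up placeholder, not a Pelican figure – the raw data to be processed grows with the square of the array’s side:

    # Hypothetical back-of-envelope: raw samples per shot vs. array size.
    per_camera_mp = 0.75              # assumed sub-camera resolution, megapixels
    for side in (4, 8):               # today's 4x4 array vs. a possible 8x8
        cameras = side * side
        print(f"{side}x{side} array: {cameras} cameras, "
              f"{cameras * per_camera_mp:.0f} MP of raw samples per capture")

That quadrupling from 16 to 64 cameras is exactly the kind of load that falls to the ever-faster smartphone CPU rather than to more expensive optics.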

So, imagine holding a smartphone next year or the year after with a new kind of camera that produces astounding images in low light, lets you dynamically change the focus (or fix a blurry region) anywhere in the image, lets you view the shot in either 2D or 3D, and lets you pull single frames out of an HD movie. The results will likely be much better than images taken with expensive digital SLR cameras today – all done in a not-too-distant-future smartphone.

And it wouldn’t surprise me if Pelican Imaging’s IP were incorporated into larger, more expensive cameras simply to get access to the company’s proprietary image processing capabilities.


Written By:
 
J. Gerry Purdy, Ph.D. 
Principal Analyst
Mobile & Wireless
MobileTrax LLC
404-855-9494
 
Disclosure Statement: From time to time, I may have a direct or indirect equity position in a company that is mentioned in this column. If that situation happens, then I’ll disclose it at that time.
 
