The Verifeyed Professional Edition Online User Manual contains everything you need to know to become a top-level image forensics expert, even if you know nothing about digital images. The Image Forensics Academy section of this page provides a summary of the theoretical knowledge we’ve gained over 8 years of research. For ease of understanding, we’ve left out the complex mathematics. On this page you will find:

  • The functionalities and technology behind Verifeyed that make it exceptional
  • A tutorial to help you become a top-level digital forensics photo analyst in 2 minutes
  • Samples of fraudulent images detected by Verifeyed

 

Online User Manual

The main functionalities of Verifeyed Professional Edition:

  • Detection of digitally edited photos, videos, and PDF files
  • Image source identification of cameras and camcorders (image and video ballistics)
  • Highlighting of suspicious (modified) parts within images

What is detecting digitally edited photos?

Detecting digitally edited photos with Verifeyed Professional Edition makes it possible to determine whether or not JPEG photos have been digitally altered (“photoshopped”) since leaving the camera or mobile device, i.e. whether the photo has been processed by any photo editing software. Having a tool available to assess the trustworthiness (originality) of a digital image is valuable for many industries and individuals. Moreover, Verifeyed makes it possible to highlight the most likely modified parts of a digital photo.

What is identifying the source camera or camcorder?

Identifying a source camera makes it possible to detect the particular camera or mobile device used to capture a digital photo, proving that a particular image was obtained by a particular device. It also easily differentiates between cameras and smartphones of the same make and model. This is especially useful in the forensic, law enforcement, insurance, financial, and media industries. Insurance companies, for example, often need to know whether or not claim-substantiating photos were taken by the person seeking compensation. Law enforcement agencies are also tasked with finding the source camera when criminal activity is discovered in digital images (e.g. child pornography).

What is highlighting suspicious (modified) parts within images/videos?

Verifeyed Professional Edition also provides a function to instantly highlight the modified parts of digital images. State-of-the-art science enables the program to localize digital modifications in high-quality JPEG photos. Some typical samples are shown below. Below is a fake driver’s license (left), with the edited portions highlighted by Verifeyed Professional Edition.

Below is a fake insurance claim photo, with the edited regions highlighted by Verifeyed Professional Edition. The original image has been found and is also shown (right).

Below is a fake journal photo (left), with the edited regions highlighted by Verifeyed Professional Edition. The original image has been found and is also shown (right).

Below is a fraudulent check (fake signature), with the forged areas highlighted by Verifeyed Professional Edition.

Technology behind identifying digitally edited photos

Verifeyed uses an instant, cutting-edge technology that can analyze images without the need for watermarks or digital signatures. The technology, which is patent pending, is constantly being improved and updated. Verifeyed automatically determines if any changes have been made to the digital image pixel data or its metadata.

A typical camera has several components that leave data (fingerprints) that can later be used to verify a digital image’s integrity. The data introduced by the post-processing and compression stages of the camera pipeline are the most useful for verifying image integrity. Verifeyed extracts the image file’s metadata, JPEG compression parameters, pixel data, and other features, and then scans the extracted data for evidence of digital manipulation (modification). For a deeper insight into the technology used to identify digitally edited photos, see the Image Forensics Academy section lower down this page.
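
To make this concrete, here is a minimal Python sketch (using the Pillow library) of the kind of metadata and JPEG compression parameters that can be extracted from an image file. It is an illustration of the general idea only, not Verifeyed’s own feature extraction; the file name is a placeholder.

    # Minimal sketch: extract basic metadata and JPEG compression parameters
    # from an image file with Pillow. Illustration only; Verifeyed's own
    # feature extraction is far more extensive.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def extract_features(path):
        img = Image.open(path)
        features = {
            "format": img.format,   # e.g. "JPEG"
            "size": img.size,       # (width, height) in pixels
            "mode": img.mode,       # e.g. "RGB"
        }
        # EXIF metadata (camera make and model, software, timestamps, ...)
        features["exif"] = {TAGS.get(tag, tag): value for tag, value in img.getexif().items()}
        # JPEG quantization tables: one list of 64 values per table index
        if img.format == "JPEG":
            features["quantization_tables"] = img.quantization
        return features

    if __name__ == "__main__":
        for name, value in extract_features("photo_under_test.jpg").items():
            print(name, ":", value)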

Technology behind identifying source cameras (image ballistics)

All imaging devices have sensors that convert light (i.e. the optical image) into an electronic signal. Most modern devices use CCD or CMOS sensors. These sensors consist of millions of tiny elements called pixels. Variations in the size and material properties of each pixel, as well as their unique behavior, enable us to create a digital fingerprint of the sensor. These fingerprints are then used to carry out image ballistics tests to determine the source of the digital image.

Verifeyed software uses a set of reference images taken by the camera in question to automatically extract the device’s sensor fingerprints. Two different fingerprints are produced for each sensor. Each carries different information about that sensor. Here (below) is a typical set of sensor fingerprints. For a deeper insight into the technology used to identify source cameras, see the Image Forensics Academy section lower down this page.

Technology behind identifying fake video frames and source camcorders (video ballistics)

The process is pretty similar to that of identifying source cameras. For a deeper insight into the technology used, see the Image Forensics Academy section lower down this page.

Installation and activation of software

This software runs on the Windows operating system. It is designed to be intuitive and user-friendly. Once you download the software, double click on setup.exe to install, and then simply follow the installation wizard’s step-by-step instructions.

Technology behind identifying digitally edited PDFs

Every PDF file has its own distinctive structure and compression properties. Verifeyed can use these PDF features to detect whether files have been tampered with. This includes such techniques as identifying PDFs that have undergone unauthorized editing and checking for inconsistencies in PDF file structure.
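
One very simple structural check is to count incremental updates: a PDF that has been edited and re-saved incrementally usually contains more than one %%EOF marker. The sketch below illustrates that single idea in Python; it is an assumption about one possible check, not a description of Verifeyed’s PDF algorithms, and the file name is a placeholder.

    # Sketch: count "%%EOF" markers in a PDF. More than one marker indicates
    # incremental updates (the file was modified after creation). This is a
    # hint worth investigating, not proof of tampering.
    def count_pdf_revisions(path):
        with open(path, "rb") as f:
            data = f.read()
        return data.count(b"%%EOF")

    if __name__ == "__main__":
        revisions = count_pdf_revisions("document_under_test.pdf")
        if revisions > 1:
            print(f"{revisions} %%EOF markers: the file was updated after it was created.")
        else:
            print("Single %%EOF marker: no incremental updates detected.")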

How to detect digitally edited photos

Though the technology is very complex, the software includes an intuitive and easy-to-use wizard that simplifies the detection of digitally-edited photos. The only thing required of the user is the selection of JPEG photos to be tested. Several files may be selected at once, in any of the following ways:

  • Hold down the Ctrl key and click the files.
  • Left-click and drag the cursor over the desired files.
  • To select all media files, click the Select All button.

Once digital images are selected, they automatically undergo testing and results are displayed.

How to highlight suspicious parts within a photo

Simply click on analysis of local compression inconsistencies. If the image contains a region with a different compression quality than the rest of the image (due to the splicing of different source images, tampering or photoshopping of part of the image, etc.), this region will be highlighted in red. Areas highlighted in blue are areas that cannot be analyzed due to their compression properties (usually a lack of texture richness). Please note that even with state-of-the-art technology, it is not possible to provide perfectly accurate image forgery localization. Users are therefore recommended to rely on this part of the program as a supplement to prior assumptions about the trustworthiness of a part of an image. To have some chance of localizing a suspicious portion of a JPEG image, the image must be high quality, with a modified region of at least 64 x 64 pixels.
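
The exact algorithm Verifeyed uses is not disclosed, but the general idea of localizing compression inconsistencies can be illustrated with a simplified sketch in the spirit of JPEG “ghost” analysis: re-save the image at a range of quality factors and watch for blocks whose recompression error behaves differently from the rest of the image. The quality range, block size, and file name below are arbitrary assumptions for illustration.

    # Very simplified sketch of localizing compression inconsistencies
    # (in the spirit of JPEG "ghost" analysis). Not Verifeyed's algorithm.
    import io
    import numpy as np
    from PIL import Image

    def block_recompression_error(path, quality, block=64):
        img = Image.open(path).convert("L")
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)          # re-save at the test quality
        resaved = Image.open(buf)
        a = np.asarray(img, dtype=np.float64)
        b = np.asarray(resaved, dtype=np.float64)
        h, w = (a.shape[0] // block) * block, (a.shape[1] // block) * block
        diff = (a[:h, :w] - b[:h, :w]) ** 2
        # Mean squared error per block: a block whose error dips at a quality
        # different from the rest of the image is suspicious.
        return diff.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    if __name__ == "__main__":
        for q in range(60, 100, 5):
            err = block_recompression_error("image_under_test.jpg", q)
            print(f"quality {q}: min block error {err.min():.1f}, median {np.median(err):.1f}")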

How to identify source cameras

There are two important components when identifying the source camera: the digital images under analysis and the particular camera or smartphone suspected of being the source of some or all of those images. As previously mentioned, this technology is based on analyzing device sensors. There are two main steps in performing the test, both of which are supported by the software’s step-by-step wizard. First, a fingerprint of the sensor of the camera or smartphone under analysis must be created. To do this, the user provides Verifeyed Professional Edition with a number of images captured by the device under analysis (i.e. device reference images), which are then used to create a set of fingerprints for the device. (Please see the next section to learn how to choose proper device reference images.) After this, the intuitive wizard will guide the user through the simple image identification process.

Device reference images

Similar to other forensic ballistics tests, a user needs to provide the software with 25 to 40 reference images captured by the particular camera or smartphone under analysis. These digital images make it possible to extract the device sensor’s fingerprint. First, note the size and orientation of the digital images under investigation. Next, set up the camera to produce device reference images of similar size and orientation. This step is critical, because sensor fingerprints are position-dependent, and if it is not done correctly, the device fingerprint cannot be identified. In general, the closer one can get to the parameters of the original images, the better the chance of correctly identifying the device fingerprint.

Choosing proper device reference images is important. They should not be too noisy or textured, or contain multiple edges. The best camera reference images are light but not overexposed and not heavily textured (pictures of light walls, the sky, white paper, etc.). Moreover, the digital images should be original and not modified by software.

Verifeyed Professional Edition has an automatic image quality check procedure to notify the user if reference images are not of sufficient quality. When taking the camera reference images, do not use a tripod; small variations in camera position are necessary.
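
As a rough illustration of the kind of screening an automatic quality check might perform, the sketch below rejects reference images that are too dark, overexposed, or too heavily textured. The thresholds and file name are arbitrary assumptions, and the real check in Verifeyed may use entirely different criteria.

    # Rough sketch of screening device reference images. Good reference images
    # are light, not overexposed, and not heavily textured. Thresholds are arbitrary.
    import numpy as np
    from PIL import Image

    def is_good_reference(path, min_brightness=80, max_saturated=0.02, max_texture=12.0):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        saturated_fraction = (gray >= 250).mean()
        # Simple texture measure: average magnitude of horizontal and vertical gradients.
        texture = np.abs(np.diff(gray, axis=0)).mean() + np.abs(np.diff(gray, axis=1)).mean()
        if gray.mean() < min_brightness:
            return False, "too dark"
        if saturated_fraction > max_saturated:
            return False, "overexposed (saturated areas carry no sensor fingerprint)"
        if texture > max_texture:
            return False, "too textured / too many edges"
        return True, "ok"

    if __name__ == "__main__":
        print(is_good_reference("reference_001.jpg"))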

Below, see examples of good device reference digital images that do not contain strong edges or noise.

image forensics - good camera reference images

Below, examples of bad device reference digital images.

image forensics - bad camera reference images

Interpreting the results of image source identification

If the fingerprint of a particular camera or smartphone is found in the images being tested, Verifeyed Professional Edition alerts the user and shows the result’s probability of accuracy. Digital images highlighted in green are those that stand a very good chance – more than 99% – of having been captured by the device whose fingerprint is being analyzed. Those highlighted in blue have a more than 95% chance. Images that are not highlighted do not contain any traces of the sensor fingerprint. This could be because they were captured by a different device, or because their image characteristics have been altered by cropping, rotation, enlargement, or other forms of editing. Please note that results should be interpreted like other typical ballistics tests: if traces of a device fingerprint are not found in a digital image, this does not necessarily mean that the particular device was not used to capture the image.

How to identify the source camcorder

The process is very similar to that of identifying source cameras. Instead of a number of digital reference images, a single reference video file is used to calculate the camcorder’s sensor fingerprint. This video should be at least 25 seconds in duration, and it is highly recommended that the camcorder reference video content has the same properties as those described above for reference photos. Please note that when analyzing video files, Verifeyed first extracts useful frames (images) from the video, and then uses them as the basis for calculating the fingerprint.

As videos are often more than ten minutes in length, we highly recommend that you carry out your analysis on a specific portion of the video file only. Verifeyed makes it easy to determine which part of the video should be used and which parts can be ignored, thus saving a great deal of computation time. For a deeper insight into the technology used to identify source camcorders, see the Image Forensics Academy section lower down this page.
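
As an illustration of how a chosen portion of a video can be turned into frames for fingerprint calculation, here is a short sketch using OpenCV. The start and end times, frame step, and file name are placeholders; Verifeyed handles this selection for you inside its wizard.

    # Sketch: extract frames from a chosen portion of a video for fingerprinting,
    # using OpenCV. Start/end times, step, and file name are placeholders.
    import cv2

    def extract_frames(path, start_s=0.0, end_s=25.0, step=1):
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        first, last = int(start_s * fps), int(end_s * fps)
        cap.set(cv2.CAP_PROP_POS_FRAMES, first)
        frames = []
        for index in range(first, last):
            ok, frame = cap.read()
            if not ok:
                break
            if (index - first) % step == 0:
                frames.append(frame)
        cap.release()
        return frames

    if __name__ == "__main__":
        frames = extract_frames("suspect_video.avi", start_s=0.0, end_s=25.0)
        print(f"{len(frames)} frames extracted for the fingerprint calculation")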

How to identify fake frames in a video

As mentioned above, Verifeyed enables you to determine which part of a video should be used to calculate the sensor fingerprint. It also makes it possible to compute several different fingerprints from different parts of the same video. This can be very useful if you want to check whether those parts all originate from the same device, or whether one of them might have been captured by another camcorder and pasted into the suspect file.

Another of Verifeyed’s useful features is that it allows you to analyze the trustworthiness of individual parts of a video by matching them to a camcorder sensor fingerprint. For example, if you suspect that part of a video file has been faked, you can either use the technique described in the previous paragraph or you can calculate the sensor fingerprint using a portion of the file which you believe to be genuine. In this way, frames from different parts of the same video can be matched individually to this fingerprint. This can provide another helpful clue as to just how trustworthy the video is. Here, the frames (images) that contain traces of the sensor fingerprint are highlighted in green. In other words, if the analyzed frames are genuine, most of them will be highlighted in green.

Screenshots

Below are some screenshots of Verifeyed Professional Edition.

Download Verifeyed Professional Edition

Click here to see pricing information and download Verifeyed Professional Edition.

 


Image Forensics Academy

Must-have knowledge

Generally, there are two essential tasks in the forensic analysis of digital images: integrity verification (genuineness analysis) and image ballistics. Integrity detection determines whether or not JPEG photos have been digitally altered (“photoshopped”) after leaving a camera or mobile device. In image ballistics we address the problem of linking the digital images under investigation to either a group of possible source imaging devices or to the particular source imaging device used to capture the photos.

A typical camera consists of several components (see figure below). The core of every digital camera is the image sensor. The image sensor (typically a CCD or CMOS) consists of small elements called pixels that collect photons and convert them into voltages that are subsequently sampled to a digital signal in an A/D converter. Before the light from the subject of the photograph reaches the sensor it passes through the camera lens, an antialiasing (blurring) filter, and finally through a color filter array (CFA).

camera_components
 

The CFA is a mosaic of tiny color filters placed over the pixels of an image sensor to capture color information. Color filters are needed because typical consumer cameras only have one sensor, which cannot separate color information. The color filters filter the light by wavelength range, such that the separate filtered intensities include information about the color of the light. Most commonly, a Bayer color filter is used. Here, each pixel captures the intensity of the red, green, or blue color information. This output is further interpolated (demosaicked) using color interpolation algorithms to obtain all three basic color channels for each pixel. The resulting signal is then further processed using color correction and white balance adjustment. Additional processing includes gamma correction to adjust for the linear response of the imaging sensor, noise reduction, and filtering operations to visually enhance the final image. Finally, the digital image is compressed and stored in a specific image format such as JPEG.
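
To make the demosaicking step concrete, the sketch below performs a minimal bilinear interpolation of an RGGB Bayer mosaic. Real cameras use far more sophisticated, proprietary interpolation, so treat this only as an illustration of the principle; the random input is a stand-in for a real sensor readout.

    # Minimal bilinear demosaicking of an RGGB Bayer mosaic (illustration only;
    # real cameras use more sophisticated, proprietary interpolation).
    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        g_mask = np.zeros((h, w)); g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float)
        rgb = np.zeros((h, w, 3))
        for channel, (mask, kernel) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
            # Interpolate each channel from the positions where it was actually sampled.
            rgb[..., channel] = convolve(raw * mask, kernel) / np.maximum(convolve(mask, kernel), 1e-9)
        return rgb

    if __name__ == "__main__":
        raw = np.random.randint(0, 256, (8, 8)).astype(float)  # stand-in sensor readout
        print(demosaic_bilinear(raw).shape)                     # (8, 8, 3)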

What is important in terms of the forensic analysis of digital images is that the different components of a camera leave different kinds of artifacts or fingerprints that are useful for the integrity verification or ballistic analysis of photographs. Typically, fingerprints left by the CFA, post-processing, and compression components are shared by cameras of the same make and model. In other words, if we know the values and behavior of these fingerprints for a particular make and model of camera, and given that digital image editing (e.g., photoshopping) changes these values (fingerprints), they can be employed to verify the original source of digital images.

On the other hand, every camera has its own unique sensor, consisting of millions of pixels. Each pixel has its own unique properties. Thus, if we are able to obtain the information that is brought into the image by the sensor (information that will remain stable and present in all images captured by that sensor), and which cannot be found in any image captured by any other sensor, then we can consider it the fingerprint of that particular sensor or camera. These fingerprints can be employed to link digital images to the particular digital cameras that captured them.

Detecting digitally edited photos

Of the several methods available for verifying the integrity of digital images, Verifeyed uses the three most effective. These are discussed below.

JPEG compression parameters

As this section makes reference to the characteristics of JPEG compression, we shall briefly explain how it works.

Although JPEG files can be encoded in several different ways, the algorithm outlined below is the most common. 1) RGB to YCbCr conversion: the image is typically first converted from RGB to YCbCr, which consists of one luminance component (Y) and two chrominance components (Cb and Cr). The resolution of the chroma components is generally reduced by a factor of two. 2) Splitting: after conversion, each component is split into adjacent blocks of 8 x 8 pixels. 3) Discrete cosine transform: each 8 x 8 block of the Y, Cb, and Cr components undergoes a discrete cosine transform (DCT).

We can think of the DCT as a way of converting image values into a more convenient representation in preparation for compression. The result is an 8 x 8 block of so-called DCT coefficients for each 8 x 8 block of the Y, Cb, and Cr components. 4) Storage: instead of a pixel representation of the image, JPEG files contain only these DCT coefficients. Using the inverse discrete cosine transform, we can retrieve the values of the original pixels. Example:

pixels_dct
 

The left-hand side of the example above represents an 8 x 8 block of pixels. Its DCT representation (the so-called DCT coefficients) is shown on the right-hand side. Notice that the right-hand side contains a lot of small (almost zero) values. Ignoring these small values and setting them to zero causes no significant change in the original image. This is the main value of JPEG compression. Having a lot of zero-value DCT coefficients enables us to encode the image in a much smaller file size than the original uncompressed version. The more zero DCT coefficients we have, the better the compression rate (the smaller the file size) we obtain (and, of course, the worse the quality of the photo).

The question is: how do we decide which DCT coefficients to set to zero? In fact, this is decided by selecting the JPEG compression rate or quality factor. For example, Photoshop allows JPEG files to be stored at 12 levels of compression, and GIMP has 99 quality levels. Each quality level corresponds to a quantization matrix, which is a block of 8 x 8 values. Each DCT coefficient in a block of 8 x 8 DCT coefficients is divided by the corresponding value in the quantization matrix. Considering the following quantization matrix and our previous example, we obtain:

dct_quantization
 

The quantization matrix is shown on the left-hand side of the example above. The right-hand side shows the “quantized” values of the DCT coefficients. As is apparent, there are now a lot of zeros. This makes it possible to store the image in a smaller file size. Generally, JPEG files include a separate quantization table for each of the Y, Cb, and Cr color channels. In most cases, the Cb and Cr quantization tables are identical.
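
The DCT and quantization steps described above can be reproduced in a few lines of Python. The sketch below uses scipy’s DCT routines and the standard JPEG luminance quantization table from the specification (ITU-T T.81, Annex K); the smooth test block is just a stand-in for an 8 x 8 block of real pixels.

    # Sketch: 2-D DCT of an 8x8 pixel block and quantization with the standard
    # JPEG luminance table. Many quantized coefficients of a smooth block become
    # zero, which is what makes the compressed file small.
    import numpy as np
    from scipy.fftpack import dct, idct

    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def dct2(block):
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

    if __name__ == "__main__":
        block = np.tile(np.arange(0.0, 80.0, 10.0), (8, 1)) - 128   # smooth test block, level-shifted
        quantized = np.round(dct2(block) / Q_LUMA)                  # this is what a JPEG file stores
        print("zero coefficients:", int((quantized == 0).sum()), "out of 64")
        restored = idct2(quantized * Q_LUMA) + 128                  # lossy reconstruction of the pixels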

It is important to note that the JPEG compression rate (or JPEG quality factor) on its own is of no specific value to image forensics. For example, a JPEG quality factor of 75% could be associated with two totally different quantization tables in two different software packages. It is therefore the quantization table that determines the rate of JPEG compression. Other parameters also affect the JPEG algorithm (Huffman tables and horizontal and vertical sub-sampling rates, to name but two). More information on these parameters can be found at: http://en.wikipedia.org/wiki/JPEG

It would be very helpful, from the image forensics point of view at least, if camera and software producers employed their own specific parameters. In other words, if every camera and every software package used a unique quantization matrix, then every digital image in the world could easily be linked to the camera or software that created it. Unfortunately, in real life, things are rarely that simple. Although a sizable number of cameras and software packages do have their own unique parameters, many have parameters in common. Nevertheless, as all JPEG parameter values are known, they can be used to carry out the task of verifying an image’s authenticity.

JPEG parameter values can be obtained in two ways. One is to collect 100% original data from the device that took the photos under investigation and extract the values from these. The other is to visit verifeyed.com and refer to the thousands of device values published on the site (contact us to get more information). We have withheld some of the data from publication in order to prevent its potential abuse.
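
One way to use such reference data is to compare the quantization tables embedded in a JPEG with those expected for the device named in its metadata. The sketch below reads the tables with Pillow; the device name and reference values in KNOWN_TABLES are hypothetical placeholders, not real camera data.

    # Sketch: compare a JPEG's quantization tables against reference tables for
    # the device that supposedly produced it. KNOWN_TABLES holds hypothetical
    # placeholder values; real references come from ground-truth original images.
    from PIL import Image

    KNOWN_TABLES = {
        "ExampleCam X100": {0: [16, 11, 10, 16] + [99] * 60},   # hypothetical table
    }

    def tables_match(path, claimed_device):
        img = Image.open(path)
        if img.format != "JPEG":
            return False
        found = {index: list(table) for index, table in img.quantization.items()}
        expected = KNOWN_TABLES.get(claimed_device)
        if expected is None:
            return None   # no reference data for this device
        return all(found.get(index) == table for index, table in expected.items())

    if __name__ == "__main__":
        print(tables_match("claim_photo.jpg", "ExampleCam X100"))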

During analysis, when comparing the fingerprints of the images being tested with those of the camera, you should be very careful when interpreting the results. The following example explains why.

We took 750,000 original digital images produced by a NIKON camera, of which 749,210 had the same quantization table. However, 0.1 percent of the original images had different quantization tables. This experiment uncovered a hidden group of very rare quantization tables in some types of cameras which had initially seemed to be producing only a limited set of quantization tables.

In other words, given a photo that had supposedly been captured by this NIKON camera (based on its metadata) and that carried one of these rare quantization tables, one might wrongly conclude that the photo is not original. This is because it is hard to find information about the existence of the rare quantization tables created by this camera (even the camera producers themselves rarely have this information). So, as you can imagine, forensic experts have to be very cautious when drawing conclusions.

JPEG files contain a number of useful camera fingerprints. These include:

  • The Exif byte order of the JPEG header
    • EXIF data are stored either in Little Endian (Intel) or Big Endian (Motorola) format.
  • Maximum horizontal and vertical sampling factors
    • Due to the densities of the color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y’ component) than in its hue and color saturation (the Cb and Cr components). Using this knowledge, JPEG encoders can be designed to downsample image data and compress images more efficiently. The ratios at which downsampling is normally performed for JPEG images are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions).
  • JPEG color space
    • Color space of stored JPEG data (typically, RGB)
  • JPEG markers E1 E1 DB C0 C4 DA
    • Every JPEG image file consists of a sequence of segments that carry information about the image, codec, producer, etc. Each segment begins with a marker in the binary format 0xFF followed by a byte indicating the type of marker. For example, 0xFFD8 defines SOI (Start of Image), which marks the entry point of the JPEG image file. Conversely, 0xFFD9 means EOI (End of Image) and defines the end point of the file. Typical JPEG files contain markers that define a thumbnail image, the Huffman tables used, the quantization tables (QTs), etc. The basic format of markers is: 0xFF + Marker Number (1 byte) + Data size (2 bytes) + Data (n bytes). Most cameras and software packages bring a limited, and often stable, set of markers into the image file. A minimal marker-scanning sketch in Python is shown after this list.
  • EXIF Tags.
    • Different types of information (image size, make and model of camera, focal length, etc.) in EXIF are encoded using EXIF tags. Most cameras and software packages bring a limited, and often stable, set of tags into the EXIF segment.
  • Length of the white space in the image file head: 383 bytes
    • Most cameras divide important segments of JPEG files with a long segment of periodically repeated Hexadecimal values such as 00 or FF. The length of the dominant white-space area of the image is the source of another potential camera fingerprint.
  • Character forming the white space in the image file head: 00
    • The dominant white space area consists of periodically repeated characters such as 00 or FF.
  • Is the JPEG format progressive?
    • Some cameras and software packages use a progressive JPEG algorithm to encode images.
  • Is entropy coded using an arithmetic method?
    • Some cameras and software packages use an arithmetic algorithm to encode entropy.
  • Number of quantization tables found, and the luminance and chrominance quantization tables themselves.
    • Quantization tables are also helpful in integrity verification.
  • Huffman tables (main image):
    • Huffman tables are also helpful in integrity verification since many cameras use only one specific type of Huffman table, which differs depending on the software package.
  • Thumbnail image
    • Most JPEG images contain an embedded thumbnail image (typically, 160 x 120 pixels) that is also in JPEG format. In other words, everything we have learned so far about extracting information from the JPEG applies to its thumbnail as well.
  • Preview image
    • These days, most image files (BMP, TIFF, GIF, etc.) contain a small embedded preview image (typically, 640 x 480 pixels). Regardless of the container file type, these preview images are also in JPEG format. In other words, everything we have learned so far can be applied to these preview images as well. It is quite hard to extract the preview image from a file because its location and properties are not standardized; they depend on the particular camera. Nonetheless, with Verifeyed you can extract the preview image with just one click.
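
The marker structure described in the list above can be inspected with a simple byte scanner. The sketch below lists the markers of a JPEG file up to the Start-of-Scan marker; it deliberately ignores the entropy-coded data that follows and is only a simplified illustration, with a placeholder file name.

    # Simplified JPEG marker scanner, illustrating the marker structure described
    # above. It stops at SOS (0xFFDA); the entropy-coded image data is not parsed.
    import struct

    def list_markers(path):
        with open(path, "rb") as f:
            data = f.read()
        assert data[:2] == b"\xff\xd8", "not a JPEG file (missing SOI marker)"
        markers, pos = [("D8 (SOI)", 0, 0)], 2
        while pos + 4 <= len(data):
            if data[pos] != 0xFF:
                break
            marker = data[pos + 1]
            length = struct.unpack(">H", data[pos + 2:pos + 4])[0]   # big-endian, includes itself
            markers.append((f"{marker:02X}", pos, length))
            if marker == 0xDA:     # SOS: compressed image data starts here
                break
            pos += 2 + length
        return markers

    if __name__ == "__main__":
        for name, offset, length in list_markers("image_under_test.jpg"):
            print(f"marker FF{name} at offset {offset}, segment length {length}")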

Obviously, when comparing image fingerprints against a knowledge database, it is critical that the information about the camera fingerprints (image resolution, etc.) is reliable, so that miscalculations with potentially unfortunate repercussions do not occur.

An experiment

Over the last 5 years we have been running an experiment which involves the direct and indirect participation of more than 10 million owners of different imaging devices. Some of the interesting data we obtained are presented below.

  • There are more than 15,000 different imaging devices or image software packages on the market.
  • There are more than 62,800 different quantization tables on the market.
  • Most cameras use a set of 3 or more pre-defined quantization tables, or compute quantization tables on the fly based on image content.
  • Little Endian format is used in around 70% of cameras.
  • Little Endian format is used in around 60% of image editors.
  • Around 5% of cameras brought a white segment of more than 1000 bytes into the image.
  • Around 10% of cameras brought a white segment of more than 500 bytes into the image.
  • The most commonly used white space character is 00.
  • The second most commonly used white space character is FF.
  • Around 17% of cameras embed a preview image into the JPEG file.
  • Around 85% of cameras embed a thumbnail image into the JPEG file.
  • The variety of Huffman tables is much smaller than the variety of quantization tables.

 

Analyzing pixel data

In the pixel value group of features, Verifeyed searches for evidence of repeated JPEG compression. When a JPEG image is altered, it is typically decompressed, loaded into photo-editing software, manipulated, and then recompressed. Verifeyed detects the changes that this secondary compression makes to the image.

To understand this better, one must understand the concepts behind the JPEG algorithm. Typically, an image is first converted from the RGB color space to the YCbCr color space. Each component is then split into adjacent blocks of 8×8 pixels, each of which undergoes discrete cosine transform (DCT) resulting in 64 DCT coefficients. In the next step, all 64 DCT coefficients are quantized. Quantization is the process of constraining something from a relatively large or continuous set of values (real numbers) to a relatively small, discrete set (integers). The DCT coefficients of a JPEG image compressed only once by a camera are different from the DCT coefficients of an image compressed more than once.

Below is an example of a histogram of DCT coefficients compressed and quantized once (left) and more than once (right). The difference is clear to see. Using an artificial intelligence method and neural networks, Verifeyed detects the DCT coefficients that exhibit evidence of multiple compressions.

quantization
 

This test is effective for JPEG images with a quality higher than 75% (the standard JPEG quality for most digital cameras and smart phones).

Oftentimes, when visualizing traces of single or multiple JPEG compression in digital images, a so-called Fourier transform spectrum is employed. This spectrum is another representation of the data. Traces of multiple JPEG compression are revealed by unexpected peaks in the spectrum chart. The example below shows the Fourier spectra of 6 different DCT coefficients. The spectra in the first column represent single JPEG compression (i.e. the original JPEG image); the spectra in the second and third columns exhibit multiple compression of the JPEG’s DCT coefficients.

fourier_spectrums
 

All methods used for the detection of traces of multiple JPEG compression work much better and more reliably the higher the quality of the JPEG images. The reason for this is simple. These methods depend on an abundance of non-zeroed DCT coefficients in order to calculate the features needed. Stronger JPEG compression makes most of the DCT coefficients zero, which leaves very little usable data. As a result, the calculation is unlikely to be successful.
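
To make the idea concrete, the sketch below builds the histogram of one DCT coefficient across all 8 x 8 luminance blocks of an image and then looks at the Fourier spectrum of that histogram; periodic peaks in the spectrum are the hint of multiple compression described above. The choice of coefficient, the bin range, and the file name are assumptions for illustration; this is not Verifeyed’s detector.

    # Sketch: histogram of one DCT coefficient over all 8x8 luminance blocks,
    # and the Fourier spectrum of that histogram. Periodic peaks in the spectrum
    # hint at multiple JPEG compression. Illustration only.
    import numpy as np
    from PIL import Image
    from scipy.fftpack import dct

    def dct_coefficient_histogram(path, u=2, v=1):
        y = np.asarray(Image.open(path).convert("L"), dtype=np.float64) - 128
        h, w = (y.shape[0] // 8) * 8, (y.shape[1] // 8) * 8
        blocks = y[:h, :w].reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8)
        coeffs = dct(dct(blocks, axis=1, norm="ortho"), axis=2, norm="ortho")
        hist, _ = np.histogram(coeffs[:, u, v], bins=np.arange(-50.5, 51.5))
        return hist

    if __name__ == "__main__":
        hist = dct_coefficient_histogram("image_under_test.jpg")
        spectrum = np.abs(np.fft.fft(hist - hist.mean()))
        print("largest spectral peaks:", np.round(np.sort(spectrum)[-5:], 1))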

Detecting suspicious parts of images

To detect suspicious parts of images we use principles more or less the same as those described in the previous section. However, for security reasons, we cannot reveal the exact algorithms used.

We are often asked why there is no 100% guarantee of accuracy when attempting to detect suspicious elements in digital images, and why detection is almost always impossible in the case of heavily compressed JPEG images.

We will try to answer this question with the example below. Here, we have a set of numbers. The numbers represent the pixels in a digital image of 8 x 9 pixels. Algorithms do not see faces, houses, cars, etc. in a digital image; they see only numbers. The image represented by the set of numbers below has a JPEG quality factor of 100%.

image_8x9
 

Now, we compress the same image by quality factors of 95% and 65% (right and left respectively).

image_8x9x2
 

As is apparent, the image compressed with a quality factor of 95% is still quite close to the original, and the data values preserve a reasonable amount of the quality. But the image compressed with a quality factor of 65% differs markedly from the original: the values have been severely corrupted and almost no detail is preserved. This is what happens when highly compressed digital images are analyzed. In most instances, the rich information necessary to enable forensic methods to detect traces of multiple compression in suspicious regions of the image is lost. It is not easy to determine the precise boundary; it depends on the specific image content. However, it is generally somewhere between 70% and 75% JPEG quality.
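
The effect is easy to reproduce. The sketch below re-saves the same image at quality factors 95 and 65 and measures how much an 8 x 9 block of pixel values changes; the file name is a placeholder.

    # Sketch: re-save the same image at quality factors 95 and 65 and measure
    # how much a small block of pixel values changes. Strong compression destroys
    # the fine detail that forensic methods rely on.
    import io
    import numpy as np
    from PIL import Image

    def resave(img, quality):
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)
        return np.asarray(Image.open(buf).convert("L"), dtype=int)

    if __name__ == "__main__":
        original = Image.open("original_photo.jpg").convert("L")
        region = (slice(0, 8), slice(0, 9))         # an 8 x 9 pixel block, as in the example above
        for quality in (95, 65):
            changed = resave(original, quality)[region]
            diff = np.abs(changed - np.asarray(original, dtype=int)[region]).mean()
            print(f"quality {quality}: mean absolute change per pixel = {diff:.1f}")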

Analyzing metadata

Software modifications often leave traces in image metadata. This means that analyzing an image’s metadata is a must. Nonetheless, keep in mind that metadata are just simple data which can very easily be changed. Experienced forgers attempt to keep metadata clean and consistent.

 

Image ballistics

Image sensors suffer from several fundamental and technological imperfections that result in performance limitations and noise. If you take a picture of an absolutely evenly lit scene, the resulting digital image will still exhibit small changes in intensity between individual pixels. This can be due to pattern noise, readout noise or shot noise.

While readout noise or shot noise are random components, the pattern noise is deterministic (its behavior can be mathematically modeled and estimated) and remains approximately the same if multiple pictures of the same scene are taken. As a result, pattern noise might provide the sensor fingerprint we are searching for.

Pattern Noise (PN) has two components: Fixed Pattern Noise (FPN) and photo response nonuniformity (PRNU). FPN is independent of pixel signal; it is additive noise, and some high-end consumer cameras can suppress it. The FPN also depends on exposure and temperature.

PRNU is formed by variations in pixel dimensions and inhomogeneities in the silicon, which result in variations in pixel output. It is multiplicative noise. Moreover, it does not depend on temperature and seems to be stable over time.

The values of PRNU noise increase with the signal level (it is more visible in pixels showing light scenes). In other words, PRNU noise is suppressed in very dark areas. Moreover, PRNU is not present in areas of an image that are completely saturated. Thus, such images should be ignored when searching for PRNU noise.

Since it can be shown that PRNU has a dominant presence in the pattern noise component, PRNU noise is employed as the fingerprint of camera sensors.
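
A very rough sketch of how a PRNU fingerprint can be estimated and matched is shown below. A Gaussian filter stands in for the wavelet-based denoising used in practice, plain normalized correlation stands in for the more robust detectors used in real systems, and the file names are placeholders; all images are assumed to share the same size and orientation. This is not Verifeyed’s implementation.

    # Very rough PRNU sketch: estimate a sensor fingerprint from reference images
    # and correlate it with the noise residual of a test image. All images are
    # assumed to be the same size and orientation. Not Verifeyed's method.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    def residual(path):
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        return img - gaussian_filter(img, sigma=1.5)    # noise residual ~ PRNU + random noise

    def fingerprint(reference_paths):
        return np.mean([residual(p) for p in reference_paths], axis=0)   # averaging suppresses random noise

    def correlation(test_path, fp):
        r, f = residual(test_path), fp
        r, f = r - r.mean(), f - f.mean()
        return float((r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum()))

    if __name__ == "__main__":
        fp = fingerprint([f"reference_{i:03d}.jpg" for i in range(1, 31)])
        print("correlation with test image:", round(correlation("image_under_test.jpg", fp), 4))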

Nonetheless, having a larger set of cameras of the same and different models available, and a large set of ground-truth digital images captured by these devices, one can run an experiment to measure the effectiveness and fragility of existing methods. By performing such an experiment it is fairly easy to notice that state-of-the-art source identification methods suffer from a number of basic imperfections. These have been fixed by Verifeyed.

There are some freely available libraries that allow the computation of PRNU. Despite this, users often fail and become disheartened. Below, we reveal three major reasons for their failure. Unfortunately, for reasons of security, we are not at liberty to divulge exactly how Verifeyed managed to solve the problem of providing accurate results.

Impact of optical zoom

Perform a simple experiment. Take a camera with a rich optical zoom option and shoot some test images with varying degrees of optical zoom. Then, carry out camera source identification using the freely available PRNU software.

You’ll be disappointed by your results and you’ll be asking yourself how this could possibly happen. The reason is a phenomenon called vignetting, which causes a change in the PRNU values at different zoom levels. There are several types of vignetting: mechanical, optical, natural, and pixel vignetting. Some types of vignetting can be compensated for by lens settings (using special filters), but most digital cameras use built-in image processing to compensate for vignetting when converting raw sensor data to standard image formats such as JPEG or TIFF.

Verifeyed managed to solve this problem and provide accurate results.

Impact of embedded camera software

Let’s assume that we have 100 different iPhone 3 devices. Moreover, we have a digital image captured by one of these iPhones and we want to identify the particular source device. In other words, we need to have a fingerprint of each device that distinguishes it uniquely and eliminates any features it might have in common with the other devices.

 

On the other hand, digital consumer cameras contain embedded software that performs operations such as color filter array (CFA) interpolation, white balancing, gamma correction, color enhancement, and interpolation (digital zoom). Because this embedded software is usually common to cameras and smartphones of the same model, it introduces similar changes in the digital images produced by these cameras. This is a serious problem that results in a higher rate of false positives when a large number of source imaging devices of the same model are under investigation.

 

Impact of heavy JPEG compression

Let’s stay with the previous iPhone 3 example and assume that this digital camera produces heavily compressed JPEG images. As we know, highly compressed JPEG images exhibit blocking artifacts. These blocking artifacts are another change brought into the image by the camera’s embedded software, and they are also common to cameras of the same model. In other words, this is another source of false positive results when linking a photo to a large set of possible source cameras of the same model. Moreover, this is quite a common problem in real-life applications (for example, when inspecting Facebook photos or YouTube videos).

Video ballistics

The technology behind video ballistics is very similar to that of image ballistics. The current state-of-the-art version of Verifeyed makes it possible to effectively analyze videos in motion JPEG format files (AVI). The reason for this is that a 25-second video provides enough usable frames to identify the video sensor. There are other coding algorithms as well, such as MPEG4/H.264. At present, it is not possible to make reliable ballistics tests on these codec types because of the inter- and intra-frame dependency (correlation) within video frames captured by camcorders of the same model.

Video files comprise a series of frames. Some are known as “I-frames”; the others as B or P frames, depending on the particular video coder. The most useful frames are the I-frames. These are independent frames that carry the most useful information in terms of digital image ballistics testing. Other frames are also useful for detecting evidence of tampering in video files.