
Another famous case of digital image manipulation is this widely published photograph taken during the 2003 Iraq war.

Brian Walski, who was covering the war for the Los Angeles Times, combined two of his Iraqi photographs into one to improve the composition and to create a more interesting image. The image shows an armed British soldier and Iraqi civilians under hostile fire in Basra. The soldier is gesturing at the civilians, urging them to seek cover. The standing man holding a young child in his arms seems to look at the soldier imploringly. It is the kind of picture that wins a Pulitzer. The tampering was discovered by an editor at The Hartford Courant, who noticed that some people in the background appeared twice in the photograph. The incident ended with Walski being fired.

Image authentication without using watermarks and signatures (called the passive or blind approach) is regarded as a new direction and does not need any explicit prior information about the image. The decision about the trustworthiness of an image being analyzed is based on fusing the outcomes of separate image analyzers. Here, we provide an overview of some of the methods (analyzers) employed to analyze digital images.

  • Detection of interpolation and resampling. When two or more images are spliced together to create high-quality, consistent image forgeries, geometric transformations are almost always needed. These transformations are typically based on resampling a portion of an image onto a new sampling lattice. This requires an interpolation step, which typically introduces statistical changes into the signal. Detecting these specific statistical changes may signify tampering.
  • Detection of near-duplicated image regions. Detection of duplicated image regions may signify copy-move forgery. In copy-move forgery, a part of the image is copied and pasted into another part of the same image typically with the intention to hide an object or a region.
  • Detection of noise inconsistencies. The amount of noise in authentic digital images is typically uniformly distributed across the entire image and typically invisible to the human eye. Additive noise is a very commonly used tool to conceal the traces of tampering and is the main cause of failure of many active or passive authentication methods. When a forgery is created, the noise often becomes inconsistent across the image. Therefore, detecting varying noise levels within an image may signify tampering.
  • Detection of double JPEG compression. In order to alter an image, typically the image must be loaded into photo-editing software and, after the changes are made, re-saved. If the images are in the JPEG format, the newly created image will have been JPEG-compressed two or more times. This introduces specific correlations between the discrete cosine transform (DCT) coefficients of image blocks. Knowledge of an image's JPEG compression history can be helpful in finding the traces of tampering.
  • Detection of inconsistencies in color filter array (CFA) interpolated images. Here, the hardware features of digital cameras are used to detect the traces of tampering. Many digital cameras are equipped with a single charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor, and color images are typically obtained in conjunction with a color filter array. In these cameras, only a single color sample is captured at each pixel location. Missing colors are computed by an interpolation process, called CFA interpolation. This process introduces specific correlations between the pixels of the image, which can be destroyed by the tampering process.
  • Detecting inconsistencies in lighting. Different photographs are taken under different lighting conditions. Thus, when two or more images are spliced together to create an image forgery, it is often difficult to match the lighting conditions of the individual photographs. Therefore, detecting lighting inconsistencies offers another way to find traces of tampering.
  • Detecting inconsistencies in perspective. When two or more images are spliced together, it is often difficult to maintain correct perspective. Thus, for instance, applying principles from projective geometry to problems in image forgery detection can also be an effective way to detect traces of tampering.
  • Detecting inconsistencies in chromatic aberration. Optical imaging systems are not perfect and often introduce different types of aberrations into an image. One of these is chromatic aberration, which is caused by the failure of an optical system to focus light of different wavelengths to the same point. When an image is tampered with, this aberration can become inconsistent across the image. This can be used as another way to detect image forgeries.
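To make the copy-move analyzer above concrete, here is a minimal sketch of duplicate-region detection by exact block matching. It is purely illustrative: the image is assumed to be a grayscale 2D list, and real detectors compare robust block features (e.g., DCT or PCA coefficients) rather than raw pixels so that matches survive noise and recompression.

```python
def find_duplicate_blocks(image, block=4):
    """Return pairs of top-left coordinates of identical block x block regions.

    image -- grayscale image as a 2D list of integer pixel values (assumption)
    block -- side length of the square blocks being compared
    """
    h, w = len(image), len(image[0])
    seen = {}      # block contents -> first location where they appeared
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            if key in seen:
                py, px = seen[key]
                # Skip heavily overlapping blocks (trivial self-matches).
                if abs(py - y) >= block or abs(px - x) >= block:
                    matches.append(((py, px), (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```

A region pasted elsewhere in the same image will surface as a matched pair of coordinates; in practice the blocks are sorted by feature vector rather than hashed exactly, and matches are filtered by requiring many pairs with the same displacement.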

The data hiding approach refers to embedding secondary data into the primary multimedia source. This is carried out mainly for authentication and tampering detection, copyright protection, and distribution control. The idea of hiding information has a long history, likely dating back a couple of thousand years. In recent decades, techniques for adding imperceptible data to multimedia sources have received special attention from the research community, and many of these data hiding methods have developed into multimedia security applications. Most of them are referred to as digital watermarking or hashing.

The main advantage of data hiding compared to digital signatures is the ability to associate the secondary data with the primary media in a seamless way. The embedded data are mostly imperceptible and travel with the host image. The data hiding approach can be divided further into several fields, of which digital watermarking is the most popular.

Many watermarks have been proposed so far. Below are examples of spatial and frequency domain watermarks. Digital watermarking assumes inserting a digital watermark at the source (e.g., the camera) and verifying the mark's integrity at the detection side. Digital watermarks are mostly imperceptible; they are inseparable from the digital media they are embedded in, and they undergo the same transformations as the digital media itself. A major drawback of approaches based on watermarks is that the watermarks must be inserted either at the time of recording the image, or later by a person authorized to do so. This limitation requires specially equipped cameras or subsequent processing of the original image. Furthermore, some watermarks may degrade the image quality. The approach also requires hiding the watermark key from the device owner.

Digital watermarks are classified as visible or invisible. The visible group is perceptible to the human eye; the existence of the latter group can only be determined using a detection algorithm. In addition, watermarks can be designed to be fragile or robust. Fragile watermarks become corrupted when any part of the image is modified, so the most fundamental use of invisible fragile watermarks is testing image authenticity and detecting tampering. Robust watermarks are not affected by common image-manipulation procedures and are therefore well suited to ownership protection.
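The fragile-watermark idea can be sketched in a few lines. The scheme below is a hypothetical illustration, not a production design: it derives a keyed bit pattern (here via SHA-256, an assumption) and overwrites the least significant bit of each pixel with it, so that editing any pixel breaks the verification. Real fragile watermarks work block-wise so that the tampered region can also be localized.

```python
import hashlib


def _keystream(key, n):
    """Derive n watermark bits from a secret key (assumption: SHA-256 as PRF)."""
    bits = []
    counter = 0
    while len(bits) < n:
        digest = hashlib.sha256(f"{key}:{counter}".encode()).digest()
        for byte in digest:
            for i in range(8):
                bits.append((byte >> i) & 1)
        counter += 1
    return bits[:n]


def embed(pixels, key):
    """Overwrite each pixel's least significant bit with a keyed bit."""
    bits = _keystream(key, len(pixels))
    return [(p & ~1) | b for p, b in zip(pixels, bits)]


def verify(pixels, key):
    """True iff every pixel's LSB still matches the keyed bit pattern."""
    bits = _keystream(key, len(pixels))
    return all((p & 1) == b for p, b in zip(pixels, bits))
```

Because the mark lives in the lowest bit, it is invisible to the eye yet destroyed by essentially any modification, including recompression, which is exactly the fragile behavior described above.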

The digital signature approach offers an interesting alternative to classical watermarking techniques and is based on transplanting the idea of traditional paper-based signing into the digital world. It consists mainly of extracting unique features from the image at the source side and encoding these features to form a digital signature. Afterwards, the signature is used to verify the image's integrity by signature comparison at the detection side. Any changes made to the image after it was signed automatically invalidate the signature. Signatures, like watermarks, provide protection from tampering, copyright infringement, and illegal distribution, among other qualities. There are many methods to generate the signatures, based for example on image histograms, colors, geometric information, or frequency information.

The major drawbacks of digital signatures are similar to those of watermarks. The main disadvantage is the need for a fully secure and trustworthy source, and for a common algorithm on both the source and detection sides. The image alone is not self-sufficient to perform the authentication process. Because of this, the benefits of the active approach are significantly reduced.
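The feature-extract-then-sign pipeline can be sketched as follows. This is a simplified stand-in under stated assumptions: the feature is a plain pixel histogram (one of the feature types mentioned above), and the "signature" is an HMAC under a shared secret rather than the public-key signature a real deployment would use.

```python
import hashlib
import hmac
import json


def sign_image(pixels, key):
    """Extract a feature (the pixel histogram) and sign it under a shared key."""
    histogram = [0] * 256
    for p in pixels:
        histogram[p] += 1
    feature = json.dumps(histogram).encode()
    return hmac.new(key, feature, hashlib.sha256).hexdigest()


def verify_image(pixels, key, signature):
    """Recompute the signature at the detection side and compare."""
    return hmac.compare_digest(sign_image(pixels, key), signature)
```

Note that the choice of feature determines what the signature protects: a histogram survives pixel reordering, so a real scheme pairs several complementary features (geometric, frequency-domain) to close such gaps. The sketch also makes the shared-key drawback from above tangible: both sides must hold `key` and run the same algorithm.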

In September 2006, a digitally edited photo of the CBS news anchor Katie Couric appeared in the network's in-house magazine Watch! Comparison with the original shows that her image was edited to create a slimmer waistline and a thinner face.


Creating image forgeries has a long history, and some early examples follow. Here, a photo manipulation involving Stalin is shown (1930):


This image shows a historical manipulation of a photo of Mao Tse-tung (1936):

This image shows an early manipulation of a photo of Adolf Hitler (1937). Here, Joseph Goebbels has been removed from the original photo.

Another early example: in the summer of 1968, Fidel Castro approved of the Soviet intervention in Czechoslovakia. Carlos Franqui (the man in the middle) cut off relations with Castro and went into exile in Italy. His face was then removed from the photograph.