Toward video tampering exposure: inferring compression parameters from pixels.
JOHNSTON, P., ELYAN, E. and JAYNE, C. 2018. Toward video tampering exposure: inferring compression parameters from pixels. In Pimenidis, E. and Jayne, C. (eds.) Communications in computers and information science, 893: engineering applications of neural networks; proceedings of the 19th international engineering applications of neural networks conference (EANN 2018), 3-5 September 2018, Bristol, UK. Cham: Springer [online], pages 44-57. Available from: https://doi.org/10.1007/978-3-319-98204-5_4
Video tampering detection remains an open problem in the field of digital media forensics. Some existing methods focus on recompression detection, because any change to the pixels of a video requires recompression of the complete stream. Recompression can be ascertained whenever there is a mismatch between the compression parameters encoded in the syntax elements of the compressed bitstream and those derived from the pixels themselves. However, deriving compression parameters directly and solely from the pixels is not trivial. In this paper we propose a new method to estimate the H.264/AVC quantisation parameter (QP) of frame patches from raw pixels using Convolutional Neural Networks (CNNs) and class composition. Extensive experiments show that the QP of key-frames can be estimated using a CNN. Results also show that accuracy drops for predicted frames. These results open new, interesting research directions in the domain of video tampering/forgery detection. Please note that the title of this document is different from the published version.
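For context on the quantity being estimated: in H.264/AVC, the QP does not act on pixels directly but selects the quantiser step size (Qstep), which doubles for every increase of 6 in QP over the range 0-51. The sketch below shows this standard QP-to-Qstep mapping from the H.264/AVC specification; it is background for the abstract, not the authors' CNN-based estimator.

```python
# Standard H.264/AVC mapping from quantisation parameter (QP) to
# quantiser step size (Qstep). Qstep doubles for every +6 in QP.
QSTEP_BASE = [0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125]  # Qstep for QP 0..5

def qstep(qp: int) -> float:
    """Return the H.264/AVC quantiser step size for a given QP (0-51)."""
    if not 0 <= qp <= 51:
        raise ValueError("H.264/AVC QP must be in [0, 51]")
    # Base value cycles with period 6; magnitude doubles each full period.
    return QSTEP_BASE[qp % 6] * (2 ** (qp // 6))
```

Because a higher QP coarsens quantisation and leaves stronger artefacts in the decoded pixels, this monotone mapping is what makes estimating QP from raw pixels plausible in the first place.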