Face detection in a video with OpenCV 3.0 and Python 3.4


Face detection with OpenCV 3.0 on a clip from the movie "The Bourne Ultimatum" (2007). OpenCV lets you edit a video frame by frame, and the whole thing takes less than 40 lines of code.

First, download a movie clip from YouTube. youtube-dl makes this easy.
\$ youtube-dl https://www.youtube.com/watch?v=DUd5RPVDjPY
You will find the downloaded file in the current directory. If you don't have youtube-dl and are on Ubuntu like me, run "sudo apt-get install youtube-dl".
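
If you prefer to stay in Python, youtube-dl also exposes a Python API. The snippet below is only a sketch: it assumes the youtube-dl package is installed via pip, and the output filename template is just an example.

import youtube_dl

# Download the clip and name the file at download time
# (the 'TheBourneUltimatum' name is illustrative).
options = {'outtmpl': 'TheBourneUltimatum.%(ext)s'}
with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=DUd5RPVDjPY'])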

Renaming the downloaded video and clipping out a short section makes it easier to handle. Here I also scaled it to a width of 640 pixels while keeping the aspect ratio.

\$ ffmpeg -i TheBourneUltimatum.mp4 -ss 00:00:41 -to 00:00:51 -vf "scale=640:-2" -c:a copy TheBourneultimatum_cut.mp4
If you don't have ffmpeg and you are using Ubuntu 14.04, try the following commands.
\$ sudo add-apt-repository ppa:mc3man/trusty-media
\$ sudo apt-get update
\$ sudo apt-get install ffmpeg
Let's edit it with OpenCV and Python.

import cv2

faceCascade = cv2.CascadeClassifier(r'/home/watanabe/opencv/data/haarcascades/haarcascade_frontalface_default.xml') # change this to your cascade path
video_capture = cv2.VideoCapture(r'/home/watanabe/Videos/TheBourneUltimatum_cut.mp4')
fourcc = cv2.VideoWriter_fourcc(*'XVID')
writer = cv2.VideoWriter(r'/home/watanabe/Videos/TheBourneUltimatum_face_without_audio.mp4',fourcc, 20.0, (640,360))

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if ret == True: # ret becomes False once the last frame has been read

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        faces = faceCascade.detectMultiScale(
            gray,
            scaleFactor=1.01,
            minNeighbors=5,
            minSize=(30, 30)
#        flags=cv2.CASCADE_SCALE_IMAGE  # OpenCV 3 name of the old CV_HAAR_SCALE_IMAGE flag
        )

        # Draw a rectangle around the faces
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

        # Display the resulting frame
        cv2.imshow('Video', frame)
        writer.write(frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

video_capture.release()
writer.release()
cv2.destroyAllWindows()
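
Note that the frame rate (20.0) and frame size (640, 360) passed to cv2.VideoWriter above are hard-coded; if they don't match the clip, the output can play back at the wrong speed or fail to be written. As a small sketch using OpenCV 3's capture properties, you can read them from the video instead:

import cv2

video_capture = cv2.VideoCapture(r'/home/watanabe/Videos/TheBourneUltimatum_cut.mp4')
fps = video_capture.get(cv2.CAP_PROP_FPS)                   # frames per second of the source
width = int(video_capture.get(cv2.CAP_PROP_FRAME_WIDTH))    # frame width in pixels
height = int(video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT))  # frame height in pixels
fourcc = cv2.VideoWriter_fourcc(*'XVID')
writer = cv2.VideoWriter(r'/home/watanabe/Videos/TheBourneUltimatum_face_without_audio.mp4',
                         fourcc, fps, (width, height))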


Then, merge the video and audio with ffmpeg. First, extract the audio track from the original clip. Then, join it to the video you just created.
\$ ffmpeg -i TheBourneUltimatum_cut.mp4 -vn -acodec copy TheBourneUltimatum_cut_audio.aac
\$ ffmpeg -i TheBourneUltimatum_cut_audio.aac -i TheBourneUltimatum_face_without_audio.mp4 -c:v copy -c:a aac -strict experimental TheBourneUltimatum_face_video.mp4
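If you would rather skip the two ffmpeg calls, the merge can also be done in Python with MoviePy (which is used below for the GIF anyway). This is only a sketch, reusing the file names above and assuming MoviePy is installed:

from moviepy.editor import VideoFileClip, AudioFileClip

# Take the audio track from the original clip and attach it to the processed video.
video = VideoFileClip(r'/home/watanabe/Videos/TheBourneUltimatum_face_without_audio.mp4')
audio = AudioFileClip(r'/home/watanabe/Videos/TheBourneUltimatum_cut.mp4')
video.set_audio(audio).write_videofile(r'/home/watanabe/Videos/TheBourneUltimatum_face_video.mp4')
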
You may also want to create an animated GIF. MoviePy helps you with that.
from moviepy.editor import *
clip = (VideoFileClip(r'/home/watanabe/Videos/TheBourneUltimatum_face_without_audio.mp4')
                        .subclip((0,0.0),(0,6.0))
                        .resize(0.5))
clip.write_gif(r'/home/watanabe/Videos/TheBourneUltimatum_face.gif',fuzz=50, fps=16)

That's pretty easy! Here is the created movie.







