LipSync using Wav2Lip

This repository demonstrates the use of the Wav2Lip model to synchronize lip movements with speech in videos. The deep-learning model generates accurate, natural-looking lip sync that enhances the visual appeal of any talking-head video.

Inputs

We've used the following video and audio for this demonstration:
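Reproducing this kind of result with the upstream Wav2Lip project typically means calling its inference.py script on a face video, a speech track, and a pretrained checkpoint. The sketch below assembles that command in Python; the file names (input_video.mp4, input_audio.wav, wav2lip_gan.pth) are placeholders, and the flags follow the upstream Wav2Lip repository's documentation. This is a minimal sketch of the general workflow, not this repository's exact invocation.

```python
import subprocess

def build_wav2lip_cmd(face: str, audio: str,
                      checkpoint: str = "checkpoints/wav2lip_gan.pth",
                      outfile: str = "outputs.mp4") -> list[str]:
    """Assemble a Wav2Lip inference command.

    The flags (--checkpoint_path, --face, --audio, --outfile) follow the
    upstream Wav2Lip repo's inference.py; all paths here are placeholders.
    """
    return [
        "python", "inference.py",
        "--checkpoint_path", checkpoint,  # pretrained lip-sync weights
        "--face", face,                   # talking-head input video
        "--audio", audio,                 # speech track to sync to
        "--outfile", outfile,             # where the synced video is written
    ]

cmd = build_wav2lip_cmd("input_video.mp4", "input_audio.wav")
print(" ".join(cmd))
# Uncomment to run once the Wav2Lip repo and checkpoint are in place:
# subprocess.run(cmd, check=True)
```

The command itself must be run from a checkout of the Wav2Lip repository with its dependencies and a downloaded checkpoint; the helper only constructs the argument list.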

Output

The final lip-synced result can be found in the outputs.mp4 file in this repository.

Context

This project was completed as a screening task for an AI Engineer Intern role. It served as a practical exercise in applying state-of-the-art AI methods to real-world problems, specifically in the domain of video processing and enhancement.
