Introduction
Wav2Lip generates accurately lip-synced video. Its advantage over NeRF-based approaches is that the authors already provide a pretrained general-purpose model, so it can be used out of the box.
Source code: https://github.com/Rudrabha/Wav2Lip
Wav2Lip provides a Google Colab notebook for trying the full code deployment and generation pipeline online.
Colab: https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing
Wav2Lip also offers a web demo: upload a video and an audio file to generate a lip-synced video.
Demo: https://bhaasha.iiit.ac.in/lipsync/
Sample output:
Installation
Setting up on a cloud platform:
<center>Use nvidia-smi to check the highest CUDA version the driver supports, and nvcc --version to check the currently installed CUDA version</center>
1. Clone the code
git clone https://github.com/Rudrabha/Wav2Lip.git
cd Wav2Lip
2. Create a virtual environment
# Create the environment
conda create -n wav2lip python=3.7.1
# Activate it
conda activate wav2lip
# Initialize conda for this shell (run only if activation fails)
conda init bash
source ~/.bashrc
3. Install ffmpeg
apt-get update
apt-get install ffmpeg
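After installing, you can verify that ffmpeg is actually on the PATH before running inference. A minimal check (the helper function name is illustrative, not part of Wav2Lip):

```python
import shutil

def ffmpeg_available():
    """Return True if an ffmpeg binary is found on the PATH."""
    return shutil.which("ffmpeg") is not None

print(ffmpeg_available())
```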
4. Install dependencies
# Install dependencies
pip install -r requirements.txt
## If this fails with "ERROR: Could not find a version that satisfies the requirement opencv-python==4.1.0.25", upgrade opencv-python:
pip install --upgrade opencv-python
## If inference fails at runtime, remove all PyTorch packages and reinstall builds that match your CUDA version
nvcc --version  ### check the CUDA version
conda uninstall pytorch torchvision torchaudio cudatoolkit
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
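If you script this reinstall, the CUDA version can be pulled out of the `nvcc --version` output automatically. A sketch, assuming the usual "Cuda compilation tools, release X.Y" line that nvcc prints:

```python
import re

def cuda_release(nvcc_output):
    """Extract the CUDA release (e.g. '11.3') from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None

# Sample output in the format nvcc typically prints
sample = """nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 11.3, V11.3.109"""
print(cuda_release(sample))  # 11.3
```

In a real script you would feed it `subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout` and pick the matching PyTorch build from the table linked in the Troubleshooting section.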
5. Generate the lip-synced video
# Create working folders
mkdir input_model
mkdir input_video
mkdir input_audio
mkdir input_image
# Install the upload/download helper (optional)
sudo apt-get install lrzsz
# Run inference to generate the talking-head video
## --audio accepts either an audio file or a video file
python inference.py --checkpoint_path <path-to-model-file> --face <path-to-video-file> --audio <path-to-audio-file>
Examples:
python inference.py --checkpoint_path input_model/wav2lip.pth --face input_video/xiaoheshang.mp4 --audio input_video/xiaoheshang.mp4
python inference.py --checkpoint_path input_model/wav2lip.pth --face input_image/xiaoheshang.jpg --audio input_video/xiaoheshang.mp4
python inference.py --checkpoint_path input_model/wav2lip_gan.pth --face input_video/xiaoheshang.mp4 --audio input_video/xiaoheshang.mp4
python inference.py --checkpoint_path input_model/wav2lip_gan.pth --face input_image/xiaoheshang.jpg --audio input_video/xiaoheshang.mp4
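The four example invocations above differ only in the checkpoint and the face input. When batching runs, a small helper can assemble the command line (this is hypothetical convenience code, not part of the repository):

```python
def build_inference_cmd(checkpoint, face, audio):
    """Assemble the argv list for Wav2Lip's inference.py."""
    return [
        "python", "inference.py",
        "--checkpoint_path", checkpoint,
        "--face", face,
        "--audio", audio,
    ]

cmd = build_inference_cmd(
    "input_model/wav2lip_gan.pth",
    "input_image/xiaoheshang.jpg",
    "input_video/xiaoheshang.mp4",
)
print(" ".join(cmd))
```

Pass the list to `subprocess.run(cmd)` to execute each combination.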
Example run:
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 8000
Extracting raw audio...
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
configuration: --prefix=/root/miniconda3/envs/wav2lip --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mp3 @ 0x55705e8a9d40] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'input_audio/xiaobao.MP3':
Metadata:
encoder : LAME3.101 (beta 2)
Duration: 00:00:13.51, start: 0.000000, bitrate: 128 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to 'temp/temp.wav':
Metadata:
ISFT : Lavf58.29.100
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Metadata:
encoder : Lavc58.54.100 pcm_s16le
size= 2327kB time=00:00:13.50 bitrate=1411.2kbits/s speed= 498x
video:0kB audio:2326kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.003274%
(80, 1081)
Length of mel chunks: 335
100%|████████████████████| 21/21 [00:08<00:00, 2.50it/s]
Load checkpoint from: input_model/wav2lip_gan.pth
Model loaded
100%|████████████████████| 3/3 [00:11<00:00, 3.76s/it]
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
configuration: --prefix=/root/miniconda3/envs/wav2lip --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'temp/temp.wav':
Metadata:
encoder : Lavf58.29.100
Duration: 00:00:13.51, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Input #1, avi, from 'temp/result.avi':
Metadata:
encoder : Lavf59.27.100
Duration: 00:00:13.40, start: 0.000000, bitrate: 683 kb/s
Stream #1:0: Video: mpeg4 (Simple Profile) (DIVX / 0x58564944), yuv420p, 450x450 [SAR 1:1 DAR 1:1], 677 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #1:0 -> #0:0 (mpeg4 (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x55b3d322d240] -qscale is ignored, -crf is recommended.
[libx264 @ 0x55b3d322d240] using SAR=1/1
[libx264 @ 0x55b3d322d240] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x55b3d322d240] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 0x55b3d322d240] 264 - core 157 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=14 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'results/result_voice.mp4':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 450x450 [SAR 1:1 DAR 1:1], q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc58.54.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc58.54.100 aac
frame= 335 fps=0.0 q=-1.0 Lsize= 797kB time=00:00:13.51 bitrate= 483.0kbits/s speed=20.5x
video:576kB audio:209kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.505751%
[libx264 @ 0x55b3d322d240] frame I:2 Avg QP:19.76 size: 10590
[libx264 @ 0x55b3d322d240] frame P:172 Avg QP:21.65 size: 2672
[libx264 @ 0x55b3d322d240] frame B:161 Avg QP:25.60 size: 675
[libx264 @ 0x55b3d322d240] consecutive B-frames: 28.1% 20.3% 9.9% 41.8%
[libx264 @ 0x55b3d322d240] mb I I16..4: 11.4% 83.3% 5.3%
[libx264 @ 0x55b3d322d240] mb P I16..4: 1.5% 7.2% 0.2% P16..4: 40.2% 10.1% 4.4% 0.0% 0.0% skip:36.3%
[libx264 @ 0x55b3d322d240] mb B I16..4: 0.3% 1.2% 0.0% B16..8: 31.8% 2.0% 0.2% direct: 0.9% skip:63.6% L0:50.7% L1:45.4% BI: 4.0%
[libx264 @ 0x55b3d322d240] 8x8 transform intra:80.5% inter:86.1%
[libx264 @ 0x55b3d322d240] coded y,uvDC,uvAC intra: 56.0% 77.7% 15.7% inter: 10.0% 14.9% 0.2%
[libx264 @ 0x55b3d322d240] i16 v,h,dc,p: 31% 34% 22% 14%
[libx264 @ 0x55b3d322d240] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 25% 31% 3% 3% 2% 3% 3% 3%
[libx264 @ 0x55b3d322d240] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 55% 18% 9% 2% 3% 3% 4% 3% 2%
[libx264 @ 0x55b3d322d240] i8c dc,h,v,p: 39% 27% 26% 7%
[libx264 @ 0x55b3d322d240] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x55b3d322d240] ref P L0: 68.2% 11.0% 14.8% 6.0%
[libx264 @ 0x55b3d322d240] ref B L0: 85.0% 12.0% 3.0%
[libx264 @ 0x55b3d322d240] ref B L1: 96.8% 3.2%
[libx264 @ 0x55b3d322d240] kb/s:351.86
[aac @ 0x55b3d3222e40] Qavg: 5754.420
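A sanity check on the log above: Wav2Lip generates one mel chunk per output video frame, so "Length of mel chunks: 335" should equal the video duration times the frame rate (the log shows a 13.4 s input at 25 fps, and 335 frames in the final encode):

```python
duration_s = 13.4  # duration of temp/result.avi from the log
fps = 25           # frame rate from the log
print(round(duration_s * fps))  # 335, matching "Length of mel chunks: 335"
```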
6. Tuning
# Tips for getting better results:
--pads 0 20 0 0     add padding to the detected face bounding box (top, bottom, left, right); including more of the chin often helps
--nosmooth          disable smoothing of the face detections (try this when you see two mouths or other strange artifacts, which can be caused by the smoothing)
--resize_factor 1   increase this factor to generate a lower-resolution video, which can sometimes look better visually
With the non-GAN model (wav2lip.pth), experimenting with the parameters above may give better results.
python inference.py --checkpoint_path input_model/wav2lip.pth --face input_image/xiaoheshang.jpg --audio input_audio/xiaoheshang.mp4 --resize_factor 1
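When tuning, it can help to sweep several --pads settings and compare the outputs side by side. A sketch that prints the commands to run (paths reuse the examples above; swap the print for `subprocess.run` to actually execute them):

```python
base = ("python inference.py --checkpoint_path input_model/wav2lip.pth "
        "--face input_image/xiaoheshang.jpg --audio input_audio/xiaoheshang.mp4")

# Candidate paddings: top, bottom, left, right
pad_settings = ["0 10 0 0", "0 20 0 0", "0 30 0 0"]
commands = [f"{base} --pads {pads}" for pads in pad_settings]
for cmd in commands:
    print(cmd)
```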
7. Training
Wav2Lip also supports training your own model.
See: https://github.com/Rudrabha/Wav2Lip?tab=readme-ov-file#preparing-lrs2-for-training
Troubleshooting
Most errors come down to a mismatched PyTorch version; uninstalling PyTorch and reinstalling a build that matches your CUDA version usually fixes them.
PyTorch/CUDA compatibility table: https://pytorch.org/get-started/previous-versions/
References
- GitHub documentation: https://github.com/Rudrabha/Wav2Lip
- Wav2Lip for AI virtual presenters: https://yv2c3kamh3y.feishu.cn/docx/S5AldFeZUoMpU5x8JAuctgPsnfg
Published on multiple platforms via mdnice.