Official repository for the paper Real-Time High-Resolution Background Matting. Our model requires capturing an additional background image and produces state-of-the-art matting results at 4K 30fps and HD 60fps on an Nvidia RTX 2080 TI GPU.
Disclaimer: The video conversion script in this repo is not meant to be real-time. Our research's main contributions are the neural architecture for high-resolution refinement and the new matting datasets. The `inference_speed_test.py` script lets you measure the raw tensor throughput of our model, which should reach real-time rates; a rough sketch of such a measurement appears after this paragraph. The `inference_video.py` script lets you run your own video through our model, but its video encoding and decoding are done without hardware acceleration or parallelization. For production use, you are expected to do additional engineering for hardware encoding/decoding and for loading frames to the GPU in parallel. For more architectural detail, please refer to our paper.
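For intuition, here is a minimal sketch of the kind of throughput measurement `inference_speed_test.py` performs. Everything in it is illustrative: the TorchScript checkpoint name and the `model(src, bgr)` call signature are assumptions, not the script's actual interface.

```python
# Illustrative GPU throughput timing, in the spirit of inference_speed_test.py.
# The checkpoint name and the model(src, bgr) signature are assumptions.
import torch

model = torch.jit.load('torchscript_resnet50_fp32.pth').cuda().eval()
src = torch.rand(1, 3, 1080, 1920, device='cuda')  # stand-in HD source frame
bgr = torch.rand(1, 3, 1080, 1920, device='cuda')  # stand-in background frame

with torch.no_grad():
    for _ in range(10):                       # warm-up so timing excludes startup cost
        model(src, bgr)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    iters = 100
    for _ in range(iters):
        model(src, bgr)
    end.record()
    torch.cuda.synchronize()                  # wait for all queued GPU work to finish
    print(f'{iters / (start.elapsed_time(end) / 1000):.1f} FPS')
```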
Check out Robust Video Matting! Our newer method does not require a pre-captured background and runs inference at even faster speeds!
We provide several scripts in this repo for you to experiment with our model. More detailed instructions are included in the files.

- `inference_images.py`: Perform matting on a directory of images.
- `inference_video.py`: Perform matting on a video.
- `inference_webcam.py`: An interactive matting demo using your webcam.

Additionally, you can try our notebooks in Google Colab for performing matting on images and videos.
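As a complement to the scripts, the following is a hedged sketch of single-image matting in plain PyTorch. The checkpoint filename and image paths are placeholders, and the model's return tuple may differ between exports; `inference_images.py` remains the authoritative reference.

```python
# Hedged single-image matting sketch; file names are placeholders and the
# output order of the model is an assumption.
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

model = torch.jit.load('torchscript_resnet50_fp32.pth').cuda().eval()

src = to_tensor(Image.open('src.png')).unsqueeze(0).cuda()  # photo with subject
bgr = to_tensor(Image.open('bgr.png')).unsqueeze(0).cuda()  # background-only photo

with torch.no_grad():
    pha, fgr = model(src, bgr)[:2]  # alpha matte and foreground

# Composite the extracted foreground over solid green.
green = torch.tensor([0.47, 1.0, 0.6], device='cuda').view(1, 3, 1, 1)
com = fgr * pha + green * (1 - pha)
to_pil_image(com[0].cpu()).save('composite.png')
```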
We provide a demo application that pipes webcam video through our model and outputs to a virtual camera. The script only works on Linux systems and can be used in Zoom meetings. For more information, check out:
You can run our model using PyTorch, TorchScript, TensorFlow, and ONNX. For details about using our model, please check out the Usage / Documentation page; a hedged ONNX Runtime sketch follows.
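For example, an ONNX export produced by `export_onnx.py` can be driven with onnxruntime roughly as below. The file name and the input names `src` and `bgr` are assumptions; inspect your exported graph for the actual names.

```python
# Hedged ONNX Runtime sketch; the model path, input names, and output order
# are assumptions -- verify them against your own export.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    'mattingrefine.onnx',
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
)
src = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # stand-in source frame
bgr = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # stand-in background

outputs = sess.run(None, {'src': src, 'bgr': bgr})  # None -> return all outputs
pha, fgr = outputs[:2]
print(pha.shape, fgr.shape)
```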
Configure `data_path.py` to point to your dataset; an illustrative sketch appears after this paragraph. The original paper uses `train_base.py` to train only the base model until convergence, then uses `train_refine.py` to train the entire network end-to-end. More details are specified in the paper.
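For illustration only, a `data_path.py` configuration might look like the sketch below. The dictionary name, dataset keys, and directory layout here are assumptions; follow the comments in the actual file for the exact structure the training scripts expect.

```python
# Illustrative data_path.py sketch -- keys and paths are assumptions, not the
# repo's required schema. Point each entry at your local dataset copies.
DATA_PATH = {
    'videomatte240k': {
        'train': {
            'fgr': '/data/VideoMatte240K/train/fgr',  # foreground frames
            'pha': '/data/VideoMatte240K/train/pha',  # alpha mattes
        },
        'valid': {
            'fgr': '/data/VideoMatte240K/valid/fgr',
            'pha': '/data/VideoMatte240K/valid/pha',
        },
    },
    'backgrounds': {
        'train': '/data/Backgrounds/train',
        'valid': '/data/Backgrounds/valid',
    },
}
```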
* Equal contribution.
This work is licensed under the MIT License. If you use our work in your project, we would love for you to include an acknowledgement and fill out our survey.
Projects developed by third-party developers.