
Latest commit: 3588ebcb6b "Merge remote-tracking branch 'github/master'" by subDesTagesMitExtraKaese, 2 years ago

Repository contents (latest commit per entry):

dataset d993eaac3e Publish train scripts 3 years ago
doc 13cb499ece Improve ONNX compatibility 3 years ago
eval 39aabd96fd Add evaluation code 3 years ago
images b14e735b43 Update teaser image 3 years ago
model f09e0931c8 Rename variable to fix torchscript export 3 years ago
.gitignore eace51f63c added start .bat file, fixed divide by zero error 3 years ago
LICENSE d766f5b9c7 Update license 3 years ago
README.md 220003663d Update README.md 3 years ago
data_path.py d993eaac3e Publish train scripts 3 years ago
export_onnx.py 8860031c3e Update comment 3 years ago
export_torchscript.py ee86e2636f Initialize 4 years ago
inference_images.py 3d819586bf Combines PR-#26 and #28 (#29) 3 years ago
inference_speed_test.py 21702785a8 Add argument option for cpu support 3 years ago
inference_utils.py ee86e2636f Initialize 4 years ago
inference_video.py 9c9b9592e8 add option to composite onto a target video with --video-target-bgr 3 years ago
inference_webcam.py fefd275e16 cleaned up cuda conversion 3 years ago
inference_webcam_ts.py fb5961f70b moved cv2_frame_to_cuda to inference_webcam 3 years ago
inference_webcam_ts_compositing.py f0adc6b7a3 removed debug print 3 years ago
requirements.txt ee86e2636f Initialize 4 years ago
run.sh d51f89b7e9 add run script 3 years ago
runCompositing.sh 77f1bc25c2 added webcam inference with compositing 3 years ago
runTS.bat 89e1a401cc added RGB parameter to Background to adjust backgroundcolor Syntax in .bat file: --background-image RGB0:0:0 3 years ago
runTS.sh a8c25af64d added webcam inference with torch script 3 years ago
train_base.py d993eaac3e Publish train scripts 3 years ago
train_refine.py d993eaac3e Publish train scripts 3 years ago

README.md

Real-Time High-Resolution Background Matting

(Teaser image)

Official repository for the paper Real-Time High-Resolution Background Matting. Our model requires capturing an additional background image and produces state-of-the-art matting results at 4K 30fps and HD 60fps on an Nvidia RTX 2080 TI GPU.

Disclaimer: The video conversion script in this repo is not meant to be real-time. Our research's main contribution is the neural architecture for high-resolution refinement and the new matting datasets. The inference_speed_test.py script lets you measure the tensor throughput of our model, which should reach real-time rates. The inference_video.py script lets you test your video on our model, but video encoding and decoding are done without hardware acceleration or parallelization. For production use, you are expected to do additional engineering for hardware encoding/decoding and for loading frames to the GPU in parallel. For more architectural detail, please refer to our paper.

 

New Paper is Out!

Check out Robust Video Matting! Our new method does not require a pre-captured background and runs inference at even faster speeds!

 

Overview

 

Updates

  • [Jun 21 2021] Paper received CVPR 2021 Best Student Paper Honorable Mention.
  • [Apr 21 2021] VideoMatte240K dataset is now published.
  • [Mar 06 2021] Training script is published.
  • [Feb 28 2021] Paper is accepted to CVPR 2021.
  • [Jan 09 2021] PhotoMatte85 dataset is now published.
  • [Dec 21 2020] We updated our project to the MIT License, which permits commercial use.

 

Download

  • Model / Weights
  • Video / Image Examples
  • Datasets

 

Demo

Scripts

We provide several scripts in this repo for you to experiment with our model. More detailed instructions are included in the files, and a rough sketch of the compositing step they all perform is shown after the list.

  • inference_images.py: Perform matting on a directory of images.
  • inference_video.py: Perform matting on a video.
  • inference_webcam.py: An interactive matting demo using your webcam.
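As a rough illustration of what these scripts produce: for every frame the model predicts an alpha matte pha and a foreground fgr, which can then be composited over any new background. The snippet below is only a minimal sketch of that compositing step with placeholder tensors; it is not code taken from the scripts themselves.

```python
import torch

# Placeholder tensors standing in for the model's outputs (shapes illustrative):
# pha: alpha matte in [0, 1], shape [B, 1, H, W]; fgr: foreground, shape [B, 3, H, W].
pha = torch.rand(1, 1, 1080, 1920)
fgr = torch.rand(1, 3, 1080, 1920)

# Any new background works: another image, a video frame, or a solid color (here green).
new_bgr = torch.tensor([120 / 255, 255 / 255, 155 / 255]).view(1, 3, 1, 1)

# Standard alpha compositing: com = alpha * foreground + (1 - alpha) * background.
com = pha * fgr + (1 - pha) * new_bgr   # shape [1, 3, 1080, 1920]
```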

Notebooks

Additionally, you can try our notebooks in Google Colab for performing matting on images and videos.

Virtual Camera

We provide a demo application that pipes webcam video through our model and outputs to a virtual camera. The script only works on Linux systems and can be used in Zoom meetings. For more information, check out:

 
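The demo application lives in its own repository, but the idea behind it is straightforward: capture the empty background once, then run each webcam frame through the model and feed the composited result into a virtual camera device. The sketch below is only an illustration of that pipeline; the checkpoint file name is illustrative, the model call follows the TorchScript export, and the third-party pyvirtualcam package (on top of v4l2loopback) is an assumption rather than a dependency of this repo.

```python
# Sketch of a webcam -> matting -> virtual camera loop (illustrative, not the actual demo app).
import cv2
import torch
import pyvirtualcam

device = 'cuda'
model = torch.jit.load('torchscript_resnet50_fp32.pth').to(device).eval()  # file name illustrative

def to_tensor(frame):
    # BGR uint8 HxWx3 -> RGB float [1, 3, H, W] in [0, 1] on the GPU
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).unsqueeze(0).to(device)

cap = cv2.VideoCapture(0)
_, bgr_frame = cap.read()            # step out of the frame and capture the background first
bgr = to_tensor(bgr_frame)
h, w = bgr_frame.shape[:2]
green = torch.tensor([0.47, 1.0, 0.6], device=device).view(1, 3, 1, 1)

with pyvirtualcam.Camera(width=w, height=h, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        src = to_tensor(frame)
        with torch.no_grad():
            pha, fgr = model(src, bgr)[:2]      # alpha matte and foreground
        com = pha * fgr + (1 - pha) * green     # composite onto a solid color
        out = (com[0].permute(1, 2, 0).clamp(0, 1) * 255).byte().cpu().contiguous().numpy()
        cam.send(out)                           # pyvirtualcam expects RGB uint8
        cam.sleep_until_next_frame()
```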

Usage / Documentation

You can run our model using PyTorch, TorchScript, TensorFlow, and ONNX. For details about using our model, please check out the Usage / Documentation page.
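For example, a minimal TorchScript sketch might look like the following; the checkpoint and image file names and the preprocessing are illustrative, and the exact input/output conventions, refinement settings, and export options are covered in the documentation.

```python
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

device = 'cuda'
model = torch.jit.load('torchscript_resnet50_fp32.pth').to(device).eval()

# src: the frame containing the subject; bgr: a clean shot of the same background.
src = to_tensor(Image.open('src.png')).unsqueeze(0).to(device)
bgr = to_tensor(Image.open('bgr.png')).unsqueeze(0).to(device)

with torch.no_grad():
    pha, fgr = model(src, bgr)[:2]   # alpha matte [B, 1, H, W], foreground [B, 3, H, W]
```

The predicted pha and fgr can then be composited over any new background, as in the earlier sketch.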

 

Training

Configure data_path.py to point to your dataset. The original paper uses train_base.py to train only the base model until convergence, then uses train_refine.py to train the entire network end-to-end. More details are specified in the paper.
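Purely as an illustration, such a path mapping could look roughly like the sketch below; the actual key names and structure are whatever data_path.py defines, and the paths here are placeholders.

```python
# Hypothetical layout for data_path.py -- the real file in this repo defines the actual structure.
DATA_PATH = {
    'videomatte': {
        'train': {
            'fgr': '/data/VideoMatte240K/train/fgr',   # foreground frames
            'pha': '/data/VideoMatte240K/train/pha',   # alpha mattes
        },
        'valid': {
            'fgr': '/data/VideoMatte240K/valid/fgr',
            'pha': '/data/VideoMatte240K/valid/pha',
        },
    },
    'backgrounds': {
        'train': '/data/Backgrounds/train',
        'valid': '/data/Backgrounds/valid',
    },
}
```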

 

Project members

* Equal contribution.

 

License

This work is licensed under the MIT License. If you use our work in your project, we would love for you to include an acknowledgement and fill out our survey.

Community Projects

Projects developed by third-party developers.