Added an implementation to add different virtual backgrounds #1618

Open · wants to merge 1 commit into master
65 changes: 65 additions & 0 deletions Virtual_Background/README.md
@@ -0,0 +1,65 @@
# Awesome Face Operations: Virtual Background
A virtual background involves removing the background from an image or video stream and replacing it with an image or video of our choice.
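
At its core, this is a masked composite: a boolean mask selects person pixels from the live frame and background pixels from the replacement image. Here is a minimal sketch of the idea in NumPy (the array names and tiny shapes are illustrative, not taken from the script):

```python
import numpy as np

frame = np.full((2, 2, 3), 200, dtype=np.uint8)   # stand-in for the webcam frame
background = np.zeros((2, 2, 3), dtype=np.uint8)  # stand-in for the chosen background
mask = np.array([[True, False], [False, True]])   # True where the person is
mask3 = np.stack((mask,) * 3, axis=-1)            # repeat the mask across the 3 channels

output = np.where(mask3, frame, background)       # person kept, background replaced
```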


This is a real-time filter that creates a virtual background of our choice on a live feed using MediaPipe's selfie segmentation model.
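
Loading the model takes two lines; `model_selection=0` selects MediaPipe's general model (a value of `1` selects the landscape variant):

```python
import mediapipe as mp

# model_selection=0 -> general model; model_selection=1 -> landscape variant
selfie_segmentation = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=0)
```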

How it works:

1. The selfie segmentation model is loaded from Mediapipe.

2. Video capture is started from the webcam.

3. The frame size is obtained, and the background image is loaded.

4. A Gaussian blur is applied to the background image.

5. Each frame from the video stream is processed (a condensed sketch follows this list):

a. The segmentation mask is extracted using selfie segmentation.

b. A condition mask is created based on the threshold value.

c. The background is applied to the frame using the mask.

d. The frames per second (FPS) are counted and displayed.

6. The final result (`output_image`) with the virtual background is displayed using `cv2.imshow`.

7. The loop ends and the windows are closed when 'q' is pressed.
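
A condensed sketch of that loop (the full version, with FPS counting, is `bg.py` in this PR):

```python
import cv2
import mediapipe as mp
import numpy as np

threshold = 0.5  # person-mask confidence cutoff
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=0)

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
bg_image = cv2.resize(cv2.imread("bg.jpg"), (width, height))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Single-channel person-probability mask from MediaPipe (expects RGB input)
    mask = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).segmentation_mask
    # Boolean mask repeated across the 3 colour channels
    condition = np.stack((mask,) * 3, axis=-1) > threshold
    # Person pixels from the live frame, everything else from the background
    cv2.imshow("Virtual background", np.where(condition, frame, bg_image))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```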


## Sample
Note: The filter was applied to stock footage for the sample GIF.

![output](output.gif)

## Getting Started

* Clone this repository.
```bash
git clone https://github.com/akshitagupta15june/Face-X.git
```
* Navigate to the required directory.
```bash
cd Awesome-face-operations/Virtual_Background
```
* Install the Python dependencies.

```bash
pip install -r requirements.txt
```
* Run the script.
```bash
python bg.py
```

Note: Press 'q' to quit the filter.

## Author

[Abir-Thakur](https://github.com/Inferno2211)

Binary file added Virtual_Background/bg.jpg
67 changes: 67 additions & 0 deletions Virtual_Background/bg.py
@@ -0,0 +1,67 @@
import cv2
import mediapipe as mp
import numpy as np
import time

print("All modules imported!")
#Set variables
threshold = 0.5  #Segmentation confidence cutoff for the person mask
b_amt = 15       #Background blur kernel size (must be odd)
fps = 0

#Load image segmentation model (model_selection=0 -> general model, 1 -> landscape variant)
mp_selfie_segmentation = mp.solutions.selfie_segmentation
selfie_segmentation = mp_selfie_segmentation.SelfieSegmentation(model_selection=0)

print("Starting video capture...")
#Start video capture from webcam
cap = cv2.VideoCapture(0)

#Find out frame size
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

#Load and resize background
background_img = cv2.imread("bg.jpg")
if background_img is None:
    raise FileNotFoundError("bg.jpg not found; place a background image next to bg.py")
background_img = cv2.resize(background_img, (width, height))

#Apply Gaussian blur to background (kernel size must be odd)
bg_image = cv2.GaussianBlur(background_img, (b_amt, b_amt), 0)

print("Capture started!")
start_time = time.time()
frame_count=0
while True:
    #Read frame from video stream
    ret, frame = cap.read()
    if not ret:
        #Stop if the stream ends or the camera disconnects
        break

    #Create a mask for person and background (MediaPipe expects RGB input)
    results = selfie_segmentation.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    segmentation_mask = results.segmentation_mask

    #Threshold the mask, repeating it across 3 channels so it broadcasts against the BGR frame
    condition = np.stack((segmentation_mask,) * 3, axis=-1) > threshold

    #Use the mask to apply background to frame
    output_image = np.where(condition, frame, bg_image)

    #Count and display FPS
    frame_count += 1
    elapsed_time = time.time() - start_time
    if elapsed_time >= 1.0:  #Update FPS every second
        fps = frame_count / elapsed_time
        print(f"FPS: {round(fps, 2)}")
        start_time = time.time()
        frame_count = 0
    cv2.putText(output_image, f"FPS: {round(fps, 2)}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

    #Display final result
    cv2.imshow("Virtual background", output_image)

    #Exit the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the video capture and close all windows
cap.release()
cv2.destroyAllWindows()
Binary file added Virtual_Background/output.gif
3 changes: 3 additions & 0 deletions Virtual_Background/requirements.txt
@@ -0,0 +1,3 @@
opencv-python==4.7.0.72
mediapipe==0.10.1
numpy==1.23.5