I'm trying to build a portable green screen photo booth. There's no way to know the lighting conditions ahead of time, so I can't hard-code the color values for the chroma key.
I thought the easiest way to get around this issue would be to build a calibration script that will take a picture of the blank background, get the "highest" and "lowest" colors from it, and use those to produce the background mask.
I'm running into trouble because while I can get the highest or lowest value in each of the channels, there's no guarantee that when the three are combined they match the actual color range of the image.
UPDATE: I have changed to use only the hue channel. This helped a lot but still isn't perfect. I think better lighting will make a difference but if you can see any way to help I would be grateful.
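One tweak that may help the hue-only calibration: take percentiles of the hue channel instead of min()/max(), so a few stray pixels in the blank-screen shot (dust, glare, a cable edge) can't blow the bounds wide open. This is a sketch, not your exact pipeline; the 1st/99th percentiles and the 10-degree margin are assumptions to tune:

```python
import numpy as np

def hue_bounds(hue_channel, low_pct=1.0, high_pct=99.0, margin=10):
    """Percentile-based hue bounds: ignores stray outlier pixels
    that min()/max() would latch onto."""
    lo = np.percentile(hue_channel, low_pct)
    hi = np.percentile(hue_channel, high_pct)
    # widen by the margin, then clamp to OpenCV's hue range of 0..179
    return max(0, int(lo) - margin), min(179, int(hi) + margin)

# mostly-green hue values (60 on OpenCV's 0..179 scale) plus two outliers
hue = np.full((100, 100), 60, np.uint8)
hue[0, 0] = 0      # stray dark pixel
hue[0, 1] = 179    # stray highlight
lo, hi = hue_bounds(hue)
```

With plain min()/max() the two outlier pixels would stretch the range to the full 0..179; with percentiles the bounds stay tight around the actual screen color.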
Here's what I have (edited with updates):
import cv2
import numpy as np

screen = cv2.imread("screen.jpg")
test = cv2.imread("test.jpg")

hsv_screen = cv2.cvtColor(screen, cv2.COLOR_BGR2HSV)
hsv_test = cv2.cvtColor(test, cv2.COLOR_BGR2HSV)

# Hue range of the blank-screen shot (OpenCV hue runs 0..179)
hueMax = int(hsv_screen[:, :, 0].max())
hueMin = int(hsv_screen[:, :, 0].min())

# Widen by 10 and clamp so the uint8 bounds can't wrap around
lowerBound = np.array([max(hueMin - 10, 0), 100, 100], np.uint8)
upperBound = np.array([min(hueMax + 10, 179), 255, 255], np.uint8)

mask = cv2.inRange(hsv_test, lowerBound, upperBound)  # screen pixels
inv_mask = cv2.bitwise_not(mask)                      # subject pixels
output_img = cv2.bitwise_and(test, test, mask=inv_mask)
screen and test images
Hue values fall in well-defined intervals, so take the green interval and do an inRange on it, ignoring saturation entirely; that gives you every shade of green. To cut down noise, restrict the value channel to roughly 20%–80% of its range, which drops the very dark and very bright regions and ensures you only pick up what is green anywhere on the screen.
Detection using the hue channel alone is dependable.
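The fixed-interval idea above can be sketched without any calibration shot. Here is a NumPy-only stand-in for the cv2.inRange call it describes; the 35..85 green interval (on OpenCV's 0..179 hue scale) and the 51..204 value clamp (~20%–80% of 255) are assumptions to tune for your screen and lighting:

```python
import numpy as np

HUE_LO, HUE_HI = 35, 85     # assumed green interval, 0..179 hue scale
VAL_LO, VAL_HI = 51, 204    # ~20% and ~80% of 255

def green_mask(hsv):
    """NumPy equivalent of cv2.inRange([HUE_LO, 0, VAL_LO],
    [HUE_HI, 255, VAL_HI]): saturation is ignored and value is
    clipped to drop the very dark and very bright regions."""
    h, v = hsv[..., 0], hsv[..., 2]
    inside = (h >= HUE_LO) & (h <= HUE_HI) & (v >= VAL_LO) & (v <= VAL_HI)
    return inside.astype(np.uint8) * 255

# Tiny synthetic HSV image: green, too-dark green, and blue pixels
hsv = np.array([[[60, 200, 128], [60, 200, 10], [120, 200, 128]]], np.uint8)
mask = green_mask(hsv)
```

Only the first pixel survives: the second is green but too dark, the third has a blue hue, so both fall outside the mask.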