Kush131 - 5 months ago
Objective-C Question

Cannot figure out NumPy equivalent for cv::Mat.step[0]

I am currently in the process of transferring code from an old OpenCV example into OpenCV3 in Python (using PyObjC and the Quartz module). The Objective-C code takes a UIImage and creates a cv::Mat that OpenCV can use. My Python code takes a CGImage and does the same thing.

Here is the Objective-C code:

- (cv::Mat)cvMatFromUIImage:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

Here is my Python equivalent:

def macToOpenCV(image):
    color_space = CGImageGetColorSpace(image)
    column = CGImageGetHeight(image)
    row = CGImageGetWidth(image)
    mat = np.ndarray(shape=(row, column, 4), dtype=np.uint8)
    c_ref = CGBitmapContextCreate(mat,
                                  row,
                                  column,
                                  8,
                                  ,  # mat.step[0],
                                  color_space,
                                  kCGImageAlphaNoneSkipLast |
                                  kCGBitmapByteOrderDefault)
    CGContextDrawImage(c_ref, CGRectMake(0, 0, column, row), image)
    return mat

I am fairly confident that I have most of this right, but I am lost as to what I should call for the equivalent of cvMat.step[0] in NumPy. I would also welcome some general code review of this segment, because when I display the result with cv2.imshow() I am not getting the image I expect at all :).
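For reference, the NumPy counterpart of cv::Mat.step[0] is ndarray.strides[0]: for a C-contiguous uint8 array of shape (height, width, 4), strides[0] is the number of bytes you step to move from one row to the next, which is exactly what Core Graphics wants for its bytes-per-row parameter. A quick sketch:

```python
import numpy as np

rows, cols = 480, 640
mat = np.zeros((rows, cols, 4), dtype=np.uint8)  # height x width x RGBA

# strides[0] = bytes per row = cols * channels * itemsize for a contiguous array
bytes_per_row = mat.strides[0]
print(bytes_per_row)  # 640 * 4 * 1 = 2560
```

Note that NumPy shape order is (height, width, channels), so with `row = CGImageGetWidth(image)` the asker's `shape=(row, column, 4)` puts the axes in the wrong order, which would also explain the scrambled cv2.imshow() output.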



I ended up abandoning the approach above and found an answer on this Stack Overflow question that worked after a little editing: Converting CGImage to python image (pil/opencv)

image_ref = CGWindowListCreateImage(CGRectNull,
                                    ...)  # remaining arguments as in the linked answer

pixeldata = CGDataProviderCopyData(CGImageGetDataProvider(image_ref))

height = CGImageGetHeight(image_ref)
width = CGImageGetWidth(image_ref)

image = Image.frombuffer("RGBA", (width, height),
                         pixeldata, "raw", "RGBA", 0, 1)

# Color correction from BGRA to RGBA
b, g, r, a = image.split()
image = Image.merge("RGBA", (r, g, b, a))
return np.array(image)

Image in this case is PIL.Image. You can also see that I opted for automatic stride calculation (the 0 passed as the stride parameter of frombuffer()), mostly because I could not get the function the original answer used to work.
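The BGRA-to-RGBA correction can also be done directly on the NumPy array, skipping the PIL split/merge round trip entirely; a minimal sketch with a dummy buffer standing in for the real pixel data:

```python
import numpy as np

# Dummy 2x2 "BGRA" image; in practice this would come from the pixel buffer
bgra = np.arange(16, dtype=np.uint8).reshape(2, 2, 4)

# Reorder the channel axis: B,G,R,A -> R,G,B,A
# (fancy indexing returns a copy, so the original buffer is untouched)
rgba = bgra[..., [2, 1, 0, 3]]
```

This does the same channel shuffle as the split()/merge() pair above, just without materializing four intermediate PIL images.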