I'm doing a computer vision project on my Raspberry Pi 2.
The computer vision work is done with the well-known OpenCV library through its Python bindings.
I want to show a live stream of what the Raspberry Pi is doing with an iOS app.
The images processed by the Raspberry Pi are OpenCV Mats, which in Python are nothing more than NumPy arrays.
The iPhone app on the other end has no OpenCV processing capabilities; in my logic it can only work with plain images.
Now, while I'm designing this, I can't figure out the best way to do it.
I would separate the problem this way: first, encode each OpenCV Mat into a standard image format; second, send the encoded bytes over the network to the iOS app.
I solved the problem this way:
For the encoding I used the cv2.imencode(ext, mat) function, which compresses the image into a standard format (e.g. JPEG) and returns it as a NumPy array of bytes.
The function used for the conversion is:

    import cv2

    def mat_to_byte_array(mat, ext):
        # Encode the Mat into the format given by ext, e.g. ".jpg"
        success, img = cv2.imencode(ext, mat)
        # NumPy method used for casting the encoded array to bytes
        # (tobytes() is the non-deprecated replacement for tostring())
        bytedata = img.tobytes()
        return success, bytedata
On iOS I used the SwiftSocket library to implement the TCP client.
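The Raspberry Pi side of the socket code isn't shown above; below is a minimal sketch of what a matching Python TCP server could look like. The 4-byte big-endian length prefix and the get_frame_bytes callable are my assumptions for illustration, not the original protocol:

```python
import socket
import struct

def send_frame(conn, data):
    # Prefix each encoded frame with its 4-byte big-endian length so the
    # client knows how many bytes to read per frame (this framing scheme
    # is an assumption, not necessarily the original wire format).
    conn.sendall(struct.pack(">I", len(data)) + data)

def serve(get_frame_bytes, host="0.0.0.0", port=5555):
    # get_frame_bytes is a hypothetical callable returning the next
    # encoded frame as bytes (e.g. the output of mat_to_byte_array).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                send_frame(conn, get_frame_bytes())
```

With a framing scheme like this, the SwiftSocket client first reads the 4-byte header, then reads exactly that many bytes and decodes them into a UIImage.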
Everything works smoothly now.