Andrés Andrés - 11 days ago
C++ Question

rosserial publish sensor_msgs/Image from windows

I am working on a project with Windows and the SDK of the Kinect v1 sensor. The goal is to send the color images from the Kinect through ROS, using rosserial.

I don't know exactly how to deal with this. Right now I am using the sensor_msgs/Image message to publish the RGB values. This is the code that I have so far:

img.header.stamp = nh->now();
img.header.frame_id = "kinect1_ref";
img.height = height;
img.width = width;
img.encoding = "rgb8";
img.is_bigendian = false;
img.step = width*3;

BYTE* start = (BYTE*) lockedRect.pBits;
img.data = new uint8_t[width*3*height];

long it;
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        const BYTE* curr = start + (x + width*y)*4;
        for (int n = 0; n < 3; n++) {
            it = y*width*3 + x*3 + n;
            img.data[it] = (uint8_t) curr[2-n];
        }
    }
}
pub->publish(&img);


In the code, img is the sensor_msgs/Image, and lockedRect.pBits is a pointer to the first byte of the Kinect image. As far as I know, the Kinect image is stored row-wise, in top-to-bottom, left-to-right order, with each pixel represented by 4 sequential bytes: a padding byte, then R, G and B.

I actually am able to send this to ROS, but when I try to visualize it in rviz, I get the following error:

Error Loading Image: OGRE EXCEPTION. Stream size does not match calculated image size in image.

I am very stuck with this. The size that I set is correct, taking into account the 3 channels for RGB. The conversion from BYTE to uint8_t should be trivial, since they are both unsigned char.
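For what it's worth, that assumption is easy to check at compile time with something like the following (assuming the BYTE typedef comes from windows.h):

// Compile-time sanity check that BYTE and uint8_t are interchangeable.
#include <cstdint>
#include <type_traits>
#include <windows.h>

static_assert(sizeof(BYTE) == sizeof(uint8_t), "BYTE and uint8_t differ in size");
static_assert(std::is_same<BYTE, unsigned char>::value, "BYTE is not unsigned char");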

PS: I know that I can visualize the Kinect data from Ubuntu and ROS using openni_launch, but I need to use Windows in this case because of the speech recognition engine.

PS2: cv_bridge, which is usually used to publish images in ROS, is not included in the rosserial libraries. Therefore I have to build the image message from scratch (is there another way?).

Any help will be really appreciated, thank you in advance!

EDIT: The class for the sensor_msgs/Image message generated by rosserial for Windows contains only this code:

class Image : public ros::Msg {
public:
    std_msgs::Header header;
    uint32_t height;
    uint32_t width;
    char * encoding;
    uint8_t is_bigendian;
    uint32_t step;
    uint8_t data_length;
    uint8_t st_data;
    uint8_t * data;
    virtual void serialize(unsigned char *outbuffer);
    virtual void deserialize(unsigned char *inbuffer);
};


The bodies of serialize and deserialize are not shown here, and I actually do not know how they work.

Answer

I wonder how this compiles for you. img.data is an std::vector<uint8_t>. With that said, try this:

img.header.stamp = nh->now();
img.header.frame_id = "kinect1_ref";
img.height = height;
img.width = width;
img.encoding = "rgb8";
img.is_bigendian = false;
img.step = width * 3;

BYTE* start = (BYTE*) lockedRect.pBits;
// Let the message own a correctly sized buffer instead of assigning a raw allocation.
img.data.resize(img.step * height);

long it;
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        // Each Kinect pixel occupies 4 bytes; copy the 3 color bytes, reversing their order.
        const BYTE* curr = start + (x + width*y)*4;
        for (int n = 0; n < 3; n++) {
            it = y*width*3 + x*3 + n;
            img.data[it] = (uint8_t) curr[2-n];
        }
    }
}
pub->publish(&img);

Update:

I found some information about the limitations of the protocol used by rosserial_windows. Although you are specifying the height, the width and the step size, the serialization is agnostic to these message variables. It seems you also have to specify the length of the array in data_length. But here is the issue: for some odd reason the maximum array length is limited by the uint8_t type.
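To illustrate, the generated serialize for a variable-length uint8_t[] field typically follows a pattern roughly like this (a simplified sketch, not the exact code generated for sensor_msgs/Image):

// Simplified sketch of the usual rosserial pattern for serializing a
// variable-length uint8_t[] field (illustrative, not the exact generated code):
int offset = 0;
*(outbuffer + offset++) = data_length;   // the array length fits in a single byte
*(outbuffer + offset++) = 0;             // padded out to 4 bytes on the wire
*(outbuffer + offset++) = 0;
*(outbuffer + offset++) = 0;
for (uint8_t i = 0; i < data_length; i++) {
    *(outbuffer + offset++) = data[i];   // then the raw bytes
}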

The array length of any array type T[] requires the variable T_length to be set. This means that you cannot exceed the uint8_t limitation without changing the protocol.
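To make that concrete with the class from your edit (data is a uint8_t* and data_length a uint8_t), filling the fields for a full frame would look roughly like this, and the length assignment silently overflows (640x480 used as example dimensions; rgb_buffer is a hypothetical pointer to your converted RGB bytes):

// Sketch of setting the array fields on the rosserial-generated message.
uint32_t full_size = 640 * 480 * 3;   // 921600 bytes for one RGB frame
img.data_length = full_size;          // uint8_t field: truncates to 921600 % 256 == 0
img.data = rgb_buffer;                // hypothetical pointer to the RGB bytes
pub->publish(&img);                   // only data_length bytes get serialized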

One solution, painful and slow, is to split the image into 255-byte custom messages and index them. You'll have to put the image data back together on the receiver side, create a sensor_msgs::Image from the data, and publish that.
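A rough sketch of that approach on the Windows side, assuming a hypothetical custom message my_msgs::ImageChunk with fields index, total, data_length and data (the message type, chunk_pub and rgb_buffer are illustrative names, not existing ones; you would have to define and generate the message yourself):

// Hypothetical chunked publishing on the Windows side. Assumes a custom
// rosserial message my_msgs::ImageChunk with fields:
//   uint16_t index; uint16_t total; uint8_t data_length; uint8_t *data;
const uint32_t total_bytes = width * 3 * height;            // full RGB frame
const uint32_t chunk_size  = 255;                           // max uint8_t array length
const uint16_t num_chunks  = (uint16_t)((total_bytes + chunk_size - 1) / chunk_size);

my_msgs::ImageChunk chunk;
for (uint16_t i = 0; i < num_chunks; ++i) {
    uint32_t offset    = i * chunk_size;
    uint32_t remaining = total_bytes - offset;
    chunk.index       = i;
    chunk.total       = num_chunks;
    chunk.data_length = (uint8_t)(remaining < chunk_size ? remaining : chunk_size);
    chunk.data        = rgb_buffer + offset;                // pointer into the full frame
    chunk_pub->publish(&chunk);
    nh->spinOnce();                                         // give rosserial a chance to flush
}

On the receiver side you would collect chunks by index until all num_chunks have arrived, copy them into a single buffer, and fill a regular sensor_msgs::Image from it.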