
Serializing OpenCV Mat_<Vec3f>

I'm working on a robotics research project where I need to serialize 2D matrices of 3D points: basically each pixel is a 3-vector of floats. These pixels are saved in an OpenCV matrix, and they need to be sent over inter-process communication and saved into files to be processed on multiple computers. I'd like to serialize them in an endian/architecture-independent, space-efficient way, as quickly as possible.

cv::imencode would be perfect here, except that it only works on 8-bit and 16-bit elements, and we don't want to lose any precision. The files don't need to be human-readable (we use a human-readable format now to ensure data portability, and it's incredibly slow). Are there best practices for this, or elegant ways to do it?
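For concreteness, the data is a single-precision, three-channel matrix (CV_32FC3). A minimal sketch of what one of these looks like; the dimensions here are just an example:

    #include <opencv2/core/core.hpp>

    int main() {
        // One cv::Vec3f (three floats, 12 bytes) per pixel -- type CV_32FC3.
        cv::Mat_<cv::Vec3f> points(480, 640);
        points(0, 0) = cv::Vec3f(0.1f, 0.2f, 0.3f); // x, y, z of one point
        // points.elemSize() == 12, points.type() == CV_32FC3
        return 0;
    }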

Thanks!

Answer

Edit: Christoph Heindl has commented on this post with a link to his blog where he has improved on this serialisation code. Highly recommended!

http://cheind.wordpress.com/2011/12/06/serialization-of-cvmat-objects-using-boost/

--

For whoever it may benefit: here is some code to serialize a cv::Mat with boost::serialization.
I haven't tested it with multi-channel data, but it should work fine.

#include <iostream>
#include <fstream>
#include <opencv2/core/core.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/serialization/split_free.hpp>
#include <boost/serialization/vector.hpp>

BOOST_SERIALIZATION_SPLIT_FREE(::cv::Mat)
namespace boost {
namespace serialization {

    /*** cv::Mat ***/
    template<class Archive>
    void save(Archive & ar, const cv::Mat& m, const unsigned int version)
    {
      size_t elemSize = m.elemSize();
      int elemType = m.type();

      ar & m.cols;
      ar & m.rows;
      ar & elemSize;
      ar & elemType; // element type, e.g. CV_32FC3

      // Note: assumes the matrix data is stored continuously in memory.
      size_t dataSize = m.cols * m.rows * m.elemSize();

      //cout << "Writing matrix data rows, cols, elemSize, type, datasize: (" << m.rows << "," << m.cols << "," << m.elemSize() << "," << m.type() << "," << dataSize << ")" << endl;

      for (size_t dc = 0; dc < dataSize; ++dc) {
          ar & m.data[dc];
      }
    }

    template<class Archive>
    void load(Archive & ar, cv::Mat& m, const unsigned int version)
    {
        int cols, rows, elemType;
        size_t elemSize;

        ar & cols;
        ar & rows;
        ar & elemSize;
        ar & elemType;

        m.create(rows, cols, elemType);
        size_t dataSize = m.cols * m.rows * elemSize;

        //cout << "Reading matrix data rows, cols, elemSize, type, datasize: (" << m.rows << "," << m.cols << "," << m.elemSize() << "," << m.type() << "," << dataSize << ")" << endl;

        for (size_t dc = 0; dc < dataSize; ++dc) {
            ar & m.data[dc];
        }
    }

}
}

Now, a cv::Mat can be serialized and deserialized as follows:

    void saveMat(const cv::Mat& m, const std::string& filename) {
        // Open in binary mode; required for binary archives on some platforms.
        std::ofstream ofs(filename.c_str(), std::ios::binary);
        boost::archive::binary_oarchive oa(ofs);
        //boost::archive::text_oarchive oa(ofs);
        oa << m;
    }

    void loadMat(cv::Mat& m, const std::string& filename) {
        std::ifstream ifs(filename.c_str(), std::ios::binary);
        boost::archive::binary_iarchive ia(ifs);
        //boost::archive::text_iarchive ia(ifs);
        ia >> m;
    }

I've used binary_oarchive and binary_iarchive here to keep the serialized data compact and fast to read and write. The binary format doesn't provide portability between platforms, but if that is needed, text_oarchive/text_iarchive can be used instead (at the cost of size and speed).
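Since the question also mentions inter-process communication: the same archives work with any std::iostream, not just files. A minimal sketch that round-trips a matrix through string buffers (the matToString/matFromString names are just for illustration; how the bytes get across your IPC channel is up to you):

    #include <sstream>

    std::string matToString(const cv::Mat& m) {
        std::ostringstream oss(std::ios::out | std::ios::binary);
        {
            boost::archive::binary_oarchive oa(oss);
            oa << m;
        } // archive destroyed here, so the buffer is complete before str()
        return oss.str(); // raw bytes, ready to send over a socket or pipe
    }

    void matFromString(const std::string& buf, cv::Mat& m) {
        std::istringstream iss(buf, std::ios::in | std::ios::binary);
        boost::archive::binary_iarchive ia(iss);
        ia >> m;
    }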
