Yuriy Romanenko - 2 months ago
C++ Question

Memory heap allocator library that keeps its bookkeeping structures separate?

Here's my problem: I need to manage memory in a remote contiguous buffer that my program can't read or write. It needs malloc()/free() semantics, support for setting a minimum alignment, and fragmentation avoidance (whenever possible). Since I can't access this buffer directly, I need to keep all the allocation bookkeeping in local structures.

I am already using boost, so if something inside boost can be massaged to do this, that would be great. However, I'm not averse to using a C library or anything like that.

As an example, I need a non-IPC version of:

boost::interprocess::basic_managed_external_buffer<
    char,
    boost::interprocess::rbtree_best_fit<
        boost::interprocess::mutex_family,
        boost::interprocess::offset_ptr<void>,
        SOME_ALIGNMENT>,
    boost::interprocess::iset_index>


preferably with malloc/free semantics instead of new/delete, but without it ever actually reading or writing the underlying buffer (keeping all the allocation information/data structures in a separate, local buffer).

Any ideas?

P.S. I don't want the boost::interprocess example to be misleading, I am just familiar with the interface so using it as an example. The application is not really interprocess, and the allocator would only be used from my application.

Specifically I would like to be able to manage a 16GB external buffer with allocation sizes from 128 bytes all the way to 512MB. This is strictly 64-bit code, but even then I would prefer the pointer type to be a template parameter so I can use uint64_t explicitly.

Answer

I am posting an update on what we actually wound up doing. I wound up implementing my own remote memory allocator (source below). It's spiritually similar to the answer Sam suggests, but uses boost intrusive RB trees to avoid some of the O(log N) lookups when freeing, coalescing, etc. It's thread-safe and supports various remote pointer/offset types as template parameters. It's probably not ideal in a lot of ways, but it worked well enough for what we needed it to do. If you find bugs, let me know.

/*
 * Thread-safe remote memory allocator
 *
 * Author: Yuriy Romanenko
 * Copyright (c) 2015 Lytro, Inc.
 *
 */

#pragma once

#include <memory>
#include <mutex>
#include <cstdint>
#include <cstdio>
#include <stdexcept>    // std::runtime_error, thrown by free()
#include <functional>

#include <boost/intrusive/rbtree.hpp>

namespace bi = boost::intrusive;

template<typename remote_ptr_t = void*,
         typename remote_size_t = size_t,
         typename remote_uintptr_t = uintptr_t>
class RemoteAllocator
{
    /* Internal structure used for keeping track of a contiguous block of
     * remote memory. It can be on one or two of the following RB trees:
     *    Free Chunks (sorted by size)
     *    All Chunks (sorted by remote pointer)
     */
    struct Chunk
    {
        bi::set_member_hook<> mRbFreeChunksHook;
        bi::set_member_hook<> mRbAllChunksHook;

        remote_uintptr_t mOffset;
        remote_size_t mSize;
        bool mFree;

        Chunk(remote_uintptr_t off, remote_size_t sz, bool fr)
                : mOffset(off), mSize(sz), mFree(fr)
        {

        }

        bool contains(remote_uintptr_t off) const
        {
            return (off >= mOffset) && (off < mOffset + mSize);
        }
    private:
        Chunk(const Chunk&);
        Chunk& operator=(const Chunk&);
    };

    /* Plain comparator structs; std::binary_function is deprecated in C++11
     * and removed in C++17, and nothing here needs it. */
    struct ChunkCompareSize
    {
        bool operator() (const Chunk& x, const Chunk& y) const
        {
            return x.mSize < y.mSize;
        }
    };
    struct ChunkCompareOffset
    {
        bool operator() (const Chunk& x, const Chunk& y) const
        {
            return x.mOffset < y.mOffset;
        }
    };

    typedef bi::rbtree<Chunk,
                       bi::member_hook<Chunk,
                                       bi::set_member_hook<>,
                                       &Chunk::mRbFreeChunksHook>,
                       bi::compare< ChunkCompareSize > > FreeChunkTree;

    typedef bi::rbtree<Chunk,
                       bi::member_hook<Chunk,
                                       bi::set_member_hook<>,
                                       &Chunk::mRbAllChunksHook>,
                       bi::compare< ChunkCompareOffset > > AllChunkTree;

    // Thread safety lock
    std::mutex mLock;
    // Size of the entire pool
    remote_size_t mSize;
    // Start address of the pool
    remote_ptr_t mStartAddr;

    // Tree of free chunks
    FreeChunkTree mFreeChunks;
    // Tree of all chunks
    AllChunkTree mAllChunks;

    // This removes the chunk from both trees
    Chunk *unlinkChunk(Chunk *c)
    {
        mAllChunks.erase(mAllChunks.iterator_to(*c));
        if(c->mFree)
        {
            mFreeChunks.erase(mFreeChunks.iterator_to(*c));
        }
        return c;
    }

    // This reinserts the chunk into one or two trees, depending on mFree
    Chunk *relinkChunk(Chunk *c)
    {
        mAllChunks.insert_equal(*c);
        if(c->mFree)
        {
            mFreeChunks.insert_equal(*c);
        }
        return c;
    }

    /* This assumes c is 'free' and walks the mAllChunks tree to the left
     * joining any contiguous free chunks into this one
     */
    bool growFreeLeft(Chunk *c)
    {
        auto it = mAllChunks.iterator_to(*c);
        if(it != mAllChunks.begin())
        {
            it--;
            if(it->mFree)
            {
                Chunk *left = unlinkChunk(&(*it));
                unlinkChunk(c);
                c->mOffset = left->mOffset;
                c->mSize = left->mSize + c->mSize;
                delete left;
                relinkChunk(c);
                return true;
            }
        }
        return false;
    }
    /* This assumes c is 'free' and walks the mAllChunks tree to the right
     * joining any contiguous free chunks into this one
     */
    bool growFreeRight(Chunk *c)
    {
        auto it = mAllChunks.iterator_to(*c);
        it++;
        if(it != mAllChunks.end())
        {
            if(it->mFree)
            {
                Chunk *right = unlinkChunk(&(*it));
                unlinkChunk(c);
                c->mSize = right->mSize + c->mSize;
                delete right;
                relinkChunk(c);
                return true;
            }
        }
        return false;
    }

public:
    RemoteAllocator(remote_size_t size, remote_ptr_t startAddr) :
        mSize(size), mStartAddr(startAddr)
    {
        /* Initially we create one free chunk the size of the entire managed
         * memory pool, and add it to both trees
         */
        Chunk *all = new Chunk(reinterpret_cast<remote_uintptr_t>(mStartAddr),
                               mSize,
                               true);
        mAllChunks.insert_equal(*all);
        mFreeChunks.insert_equal(*all);
    }

    ~RemoteAllocator()
    {
        auto it = mAllChunks.begin();

        while(it != mAllChunks.end())
        {
            Chunk *pt = unlinkChunk(&(*it++));
            delete pt;
        }
    }

    remote_ptr_t malloc(remote_size_t bytes)
    {
        std::unique_lock<std::mutex> lock(mLock);
        auto fit = mFreeChunks.lower_bound(
                    Chunk(reinterpret_cast<remote_uintptr_t>(mStartAddr),
                          bytes,
                          true));

        /* Out of memory */
        if(fit == mFreeChunks.end())
            return remote_ptr_t{0};

        Chunk *ret = &(*fit);
        /* We need to split the chunk because it's not the exact size */
        /* Let's remove the node */
        mFreeChunks.erase(fit);

        if(ret->mSize != bytes)
        {
            Chunk *right, *left = ret;

            /* The following logic decides which way the heap grows
             * based on allocation size. I am not 100% sure this actually
             * helps with fragmentation with such a big threshold (50%)
             *
             * Check if we will occupy more than half of the chunk,
             * in that case, use the left side. */
            if(bytes > ret->mSize / 2)
            {
                right = new Chunk(left->mOffset + bytes,
                                  left->mSize - bytes,
                                  true);
                relinkChunk(right);

                left->mSize = bytes;
                left->mFree = false;

                ret = left;
            }
            /* We'll be using less than half, let's use the right side. */
            else
            {
                right = new Chunk(left->mOffset + left->mSize - bytes,
                                  bytes,
                                  false);

                relinkChunk(right);

                left->mSize = left->mSize - bytes;
                mFreeChunks.insert_equal(*left);

                ret = right;
            }
        }
        else
        {
            ret->mFree = false;
        }

        return reinterpret_cast<remote_ptr_t>(ret->mOffset);
    }

    /* Aligned allocation over-allocates `alignment` extra bytes and rounds
     * the returned offset up to the next multiple of `alignment`. The
     * leading padding is wasted, but free() still works on the aligned
     * pointer because it looks up the containing chunk rather than
     * requiring an exact start offset. */
    remote_ptr_t malloc_aligned(remote_size_t bytes, remote_size_t alignment)
    {
        remote_size_t bufSize = bytes + alignment;
        remote_ptr_t mem = this->malloc(bufSize);
        remote_ptr_t ret = mem;
        if(mem)
        {
            remote_uintptr_t offset = reinterpret_cast<remote_uintptr_t>(mem);
            if(offset % alignment)
            {
                offset = offset + (alignment - (offset % alignment));
            }
            ret = reinterpret_cast<remote_ptr_t>(offset);
        }
        return ret;
    }

    void free(remote_ptr_t ptr)
    {
        std::unique_lock<std::mutex> lock(mLock);
        Chunk ref(reinterpret_cast<remote_uintptr_t>(ptr), 0, false);
        auto it = mAllChunks.find(ref);
        if(it == mAllChunks.end())
        {
            /* No chunk starts exactly at ptr (e.g. a pointer returned by
             * malloc_aligned), so look for the chunk that would precede it.
             * Guard against decrementing begin(), which is undefined. */
            it = mAllChunks.upper_bound(ref);
            if(it == mAllChunks.begin())
                throw std::runtime_error("Could not find chunk to free");
            it--;
        }
        if(!(it->contains(ref.mOffset)) || it->mFree)
            throw std::runtime_error("Could not find chunk to free");

        Chunk *chnk = &(*it);
        chnk->mFree = true;
        mFreeChunks.insert_equal(*chnk);

        /* Maximize space */
        while(growFreeLeft(chnk));
        while(growFreeRight(chnk));
    }

    void debugDump()
    {
        std::unique_lock<std::mutex> lock(mLock);
        /* The casts keep the printf format valid regardless of which
         * integral types are used as template parameters. */
        int i = 0;
        printf("----------- All chunks -----------\n");
        for(auto it = mAllChunks.begin(); it != mAllChunks.end(); it++)
        {
            printf(" [%d] %llu -> %llu (%llu) %s\n",
                i++,
                (unsigned long long)it->mOffset,
                (unsigned long long)(it->mOffset + it->mSize),
                (unsigned long long)it->mSize,
                it->mFree ? "(FREE)" : "(NOT FREE)");
        }
        i = 0;
        printf("----------- Free chunks -----------\n");
        for(auto it = mFreeChunks.begin(); it != mFreeChunks.end(); it++)
        {
            printf(" [%d] %llu -> %llu (%llu) %s\n",
                i++,
                (unsigned long long)it->mOffset,
                (unsigned long long)(it->mOffset + it->mSize),
                (unsigned long long)it->mSize,
                it->mFree ? "(FREE)" : "(NOT FREE)");
        }
    }
};