For a C++ class I am trying to design a class hierarchy that handles binary operations such as `+=` and `=`. The intended hierarchy is roughly:

```
    Operations
        ↓
      Binops
     ↙      ↘
Addition    Subtraction
```

`Operations` declares the low-level operations:

```cpp
void add(Operations const &rhs);
void sub(Operations const &rhs);
```

`Addition` implements `Addition::operator+=(Operations const &rhs)` in terms of `add`, and `Subtraction` does the same for subtraction. A concrete class such as `Matrix` inherits from `Operations`, so that `+=` on a `Matrix` with an `Operations` argument goes through `Addition::operator+=`, which calls `add`; `+` is in turn implemented in terms of `+=`, and `=` assigns the result.
It appears those class names are a bit off. My psychic decoding is that `Addition` is `HasAddition`. So we have `HasOperations` inheriting from `HasBinOps`, which inherits from both `HasAddition` and `HasSubtraction`.
So I get the basic plan. But I'm going to answer how to do this right. This may not line up with your assignment, but that is honestly your assignment's problem, not mine!
We do not want virtual runtime dispatch and dynamic allocation going on for all basic operations. We want static polymorphism, not dynamic polymorphism.
Luckily, in C++ we have static polymorphism. A typical way to implement it is via the CRTP, the curiously recurring template pattern.
The CRTP is named because it is curious how often this template pattern recurs: we take the name of a class and repeat it in the name of a template we inherit from.
This is curiously useful in many situations.
Here is a CRTP-based `has_addition` type:
```cpp
#include <utility>  // std::move

template <class D>
struct has_addition {
    // implement + in terms of += on the lhs:
    friend D operator+(D&& lhs, D const& rhs) {
        lhs += rhs;
        return std::move(lhs);
    }
    friend D operator+(D&& lhs, D&& rhs) {
        lhs += std::move(rhs);
        return std::move(lhs);
    }
    // here the rhs is an rvalue, so reuse its storage
    // for operator+:
    friend D operator+(D const& lhs, D&& rhs) {
        return std::move(rhs) + lhs; // assumes addition commutes!
    }
    // both sides are lvalues. Copy lhs and use +=:
    friend D operator+(D const& lhs, D const& rhs) {
        return D(lhs) + rhs;
    }
    // 4 overloads of += that move rhs or the return value:
    friend D& operator+=(D& lhs, D const& rhs) {
        lhs.add(rhs);
        return lhs;
    }
    friend D& operator+=(D& lhs, D&& rhs) {
        lhs.add(std::move(rhs));
        return lhs;
    }
    // notice += on an rvalue returns by value.
    // This permits reference lifetime extension:
    friend D operator+=(D&& lhs, D const& rhs) {
        lhs.add(rhs);
        return std::move(lhs);
    }
    friend D operator+=(D&& lhs, D&& rhs) {
        lhs.add(std::move(rhs));
        return std::move(lhs);
    }
};
```
you use it via:
```cpp
struct bob : has_addition<bob> {
    int x = 0;
    void add(bob const& rhs) {
        x += rhs.x;
    }
};
```
Now both `+` and `+=` are implemented for you based on your `add` method. What's more, there are multiple rvalue and lvalue overloads of them. If you implement a move constructor, you get automatic performance boosts. If you implement an `add` that takes an rvalue on the right-hand side, you get automatic performance boosts.
If you fail to write the rvalue-overloaded `add` and the move constructor, things still work. We decoupled the factors (adding something you can discard, recycling your storage, and micro-optimizing how `+` works) from each other. The result is easier-to-write code with piles of micro-optimizations built in.
Now, most of the micro-optimizations in `has_addition` are not required for a first pass:
```cpp
#include <utility>  // std::move

template <class D>
struct has_addition {
    friend D operator+(D lhs, D const& rhs) {
        lhs += rhs;
        return std::move(lhs);
    }
    friend D& operator+=(D& lhs, D const& rhs) {
        lhs.add(rhs);
        return lhs;
    }
    friend D operator+=(D&& lhs, D const& rhs) {
        lhs.add(rhs);
        return std::move(lhs);
    }
};
```
We then extend this with:

```cpp
// has_subtraction is written just like has_addition, but in terms of sub():
template <class D> struct has_subtraction;

template <class D>
struct has_binops
    : has_subtraction<D>
    , has_addition<D>
{};

template <class D>
struct has_operations
    : has_binops<D>
{};
```
but really, few types have every type of operation, so I personally wouldn't like this.
You could use SFINAE (substitution failure is not an error) to detect if `add`, `subtract`, `multiply`, `divide`, `order`, `equals`, etc. are implemented in your type, and write a `maybe_has_addition<D>` that does a SFINAE test on `D` to determine if it has `D.add( D const& )` implemented. If, and only if, it does, `maybe_has_addition<D>` inherits from `has_addition<D>`.
Then you can set it up so that a whole myriad of operator overloads are written for you by doing:

```cpp
struct matrix : maybe_has_operations<matrix>
```

where, as you implement new operations on `matrix`, more and more overloaded operators kick in.
This, however, is a different problem.
Doing this with dynamic polymorphism (virtual functions) is a mess. And really, do you want to jump through multiple vtables and dynamic allocations, and lose all compile-time type safety, when you write `matrix1 = matrix2 + matrix3`? This isn't Java.
The friend bit is pretty easy. Notice how `has_addition<D>` calls `lhs.add(rhs)` on a `D`. We can make `add` private within `D` and `friend struct has_addition<D>;` within the body of `D`. One catch: friendship is not transitive, so the `friend` operators defined inside `has_addition<D>` do not themselves gain access to `D`'s privates; they have to route the call through an actual member of `has_addition<D>`, such as a private static helper. With that in place, `has_addition<D>` is both a parent of `D` and a friend of `D`.
Myself, I just leave `add` exposed, because it is harmless.