Inheritance is the obvious way to share an interface in C++. It's also the wrong way, often enough that the standard library went out of its way to provide a different one. Type erasure is that other way, and once it clicks, it changes how you think about polymorphism.
## The problem with inheritance
Say you want to draw shapes. Circles, squares, sprites, icons, buttons. The textbook C++ solution is an abstract base class:
```cpp
struct Drawable {
    virtual void draw() const = 0;
    virtual ~Drawable() = default;
};

struct Circle : Drawable {
    void draw() const override { /* ... */ }
};
```
This works. It's also intrusive. Every type that wants to be drawable has to inherit from Drawable. That means you cannot draw a third-party type you don't own. You cannot draw a lambda. You cannot draw a struct from a C library. The interface infects the type system.
There's also the matter of value semantics. To use these polymorphically, you have to store pointers or references: std::vector<std::unique_ptr<Drawable>>. The container loses the clean copy-and-move behavior of a normal container of values. You're managing lifetimes by hand because inheritance forced you out of value semantics.
Inheritance ties together three separate things: the interface, the storage, and the type. Type erasure separates them. You get the interface without dictating the type, and you get value semantics without giving up polymorphism.
## The shift in perspective
Type erasure flips the model. Instead of asking types to declare they conform to an interface, you adapt them to one externally. The concrete type gets "erased" at compile time, but its behavior is preserved through a small layer of indirection.
You define what an object can do, not what it is. A type qualifies if it has the right operations, not if it inherits from the right base. This is duck typing at runtime, bridged through templates.
The classic version uses three pieces: an interface, a templated model, and a value-semantic wrapper. Get those three right and you have a homogeneous container of unrelated types.
## The classic three-piece structure
**Concept (internal interface).** An abstract base class defining the operations you care about. Lives inside the implementation, never exposed to users.

**Model (templated adapter).** A template that wraps any concrete type satisfying the concept and forwards calls to it. One model per erased type.

**Wrapper (value-semantic outer class).** The public type users actually hold. Owns the model behind a pointer and delegates the operations. This is where value semantics live.
Put together, it looks like this:
```cpp
#include <memory>    // std::unique_ptr, std::make_unique
#include <utility>   // std::move

// i. Concept: the internal interface
struct Drawable {
    virtual void draw() const = 0;
    virtual ~Drawable() = default;
};

// ii. Model: adapts any T that has a .draw() method
template <typename T>
struct DrawableModel : Drawable {
    T obj;
    DrawableModel(T o) : obj(std::move(o)) {}
    void draw() const override { obj.draw(); }
};

// iii. Wrapper: the value-semantic public class
class Shape {
    std::unique_ptr<Drawable> impl;
public:
    template <typename T>
    Shape(T obj)
        : impl(std::make_unique<DrawableModel<T>>(std::move(obj))) {}
    void draw() const { impl->draw(); }
};
```
Any type with a draw() method now works as a Shape, without inheriting from anything:
```cpp
struct Circle { void draw() const { /* ... */ } };
struct Square { void draw() const { /* ... */ } };

std::vector<Shape> shapes;
shapes.push_back(Circle{});
shapes.push_back(Square{});   // unrelated types, one container
```
Circle and Square share nothing. They don't inherit from Drawable. They don't even know Drawable exists. The adaptation happens externally, in the model.
## You've already used this
The standard library is full of type-erased types. Once you see the pattern, it's everywhere:
| Type | What it erases | Constraint |
|---|---|---|
| `std::function` | Any callable: lambda, function pointer, functor, member function pointer | Matching call signature |
| `std::any` | Any copyable value, with all type info hidden | Copyable |
| `std::shared_ptr` | The deleter type: different deleters, same pointer type | Callable deleter |
| `std::pmr::polymorphic_allocator` | The underlying memory resource | Allocator interface |
std::function is the canonical example. It can hold a lambda today, a function pointer tomorrow, a functor next week, all through the same handle:
```cpp
std::function<int(int)> f = [](int x) { return x * 2; };
f = &some_free_function;   // different type, same interface
f = Multiplier{3};         // different again
```
Three completely unrelated types, three completely different concrete representations, one uniform handle. That's type erasure doing its job. And critically: std::variant is the closed alternative. With variant you enumerate the types upfront. With type erasure the set of types is open ended — anyone, anywhere, can add a new one.
## Erasure without a base class
The classic three-piece structure is clean, but it isn't the only way. The same effect can be achieved without a base class at all, by storing function pointers directly. This is essentially what the compiler does for you behind the `virtual` keyword: it builds a vtable. Type erasure lets you build the vtable by hand.
```cpp
class Shape {
    void* data;
    void (*draw_fn)(const void*);
    void (*destroy_fn)(void*);

public:
    template <typename T>
    Shape(T obj)
        : data(new T(std::move(obj)))
        , draw_fn([](const void* p) {
              static_cast<const T*>(p)->draw();
          })
        , destroy_fn([](void* p) {
              delete static_cast<T*>(p);
          }) {}

    // The compiler-generated copy would double-delete `data`,
    // so rule copying out until it's implemented by hand.
    Shape(const Shape&) = delete;
    Shape& operator=(const Shape&) = delete;

    void draw() const { draw_fn(data); }
    ~Shape() { destroy_fn(data); }
};
```
No base class. No model template. Just a void* and a pair of capture-less lambdas standing in as function pointers. The type is erased by the cast to void*, and behavior is restored by the lambdas, which recover T from their template context rather than from any captured state.
This compact form is leaner (the call goes straight through a stored function pointer, with no base-class object or vtable hop), but you lose the structure. Copy semantics, move semantics, exception safety: all manual. The classic three-piece version trades a little performance for clarity. Production-grade implementations usually land somewhere in between, with a manual vtable plus small-buffer optimization to avoid heap allocations for small types.
## A production-shaped version
Here's roughly what a real implementation looks like once you've added small-buffer optimization and proper value semantics. The storage is inline up to a fixed size; the operations are dispatched through manually constructed function pointers; copy, move, and destroy all work like a normal value type.
```cpp
class Drawable {
public:
    Drawable() = default;   // empty state: the function pointers stay null

    template <typename T>
    Drawable(T obj) {
        static_assert(std::is_copy_constructible_v<T>);
        static_assert(sizeof(T) <= StorageSize);
        static_assert(alignof(T) <= alignof(decltype(storage_)));
        new (&storage_) T(std::move(obj));
        draw_ = [](const void* self, Coordinate where) -> int {
            return static_cast<const T*>(self)->Draw(where);
        };
        copy_ = [](void* dst, const void* src) {
            new (dst) T(*static_cast<const T*>(src));
        };
        destroy_ = [](void* self) {
            static_cast<T*>(self)->~T();
        };
    }

    // ... copy / move / destructor delegate to the stored fn ptrs

    int Draw(Coordinate where) const {
        return draw_(&storage_, where);
    }

private:
    static constexpr std::size_t StorageSize = 32;
    std::aligned_storage_t<StorageSize> storage_;
    int (*draw_)(const void*, Coordinate) = nullptr;
    void (*copy_)(void*, const void*) = nullptr;
    void (*destroy_)(void*) = nullptr;
};
```
Now Drawable behaves like an int. You can put it in a vector. Copy it. Move it. Pass it by value. It just happens to internally dispatch to whatever concrete type you constructed it from:
```cpp
struct Sprite { int Draw(Coordinate) const { return 1; } };
struct Icon   { int Draw(Coordinate) const { return 2; } };
struct Button { int Draw(Coordinate) const { return 3; } };

std::vector<Drawable> v;
v.emplace_back(Sprite{});
v.emplace_back(Icon{});
v.emplace_back(Button{});
```
Three unrelated types. One container of values. No virtual inheritance. No heap allocation for small types. This is how production-grade libraries do it.
## The tradeoffs
What you gain:

- Types don't need to know about the interface
- Works with third-party types, primitives, lambdas, anything
- Real value semantics: copy, move, pass by value
- Open-ended set of types (unlike `std::variant`)
- Cleaner ownership stories than pointer-based polymorphism

What you pay:

- Indirect dispatch cost similar to virtual functions
- Heap allocation per object unless SBO is implemented
- More code than plain inheritance for the same end result
- Compile times suffer slightly from the template instantiations
- Implementing copy, move, and exception safety correctly is fiddly
The performance cost is roughly the same as classic virtual dispatch, which is to say: usually fine, occasionally significant. The decisive advantage of type erasure isn't speed, it's flexibility. You can erase types you don't own, types you can't modify, types that don't even know they're being used polymorphically. Inheritance can't do that.
## Choosing it over the alternatives
Type erasure earns its complexity when you need a homogeneous interface across types you can't (or shouldn't) modify. Some real-world examples:
**Callbacks and event handlers.** Anywhere a function expects "something callable" — that's `std::function`, and that's type erasure.

**Plugin systems.** Loading code that conforms to a protocol but doesn't inherit from your framework's classes. Type erasure adapts at the boundary.

**Generic containers of behavior.** A vector of "things you can update each frame," a list of "things that can serialize themselves," a queue of "tasks." Each item is unrelated to the others, but they all support the operation you need.

**API boundaries.** Hide implementation types from consumers while preserving value semantics. This is the pattern behind PImpl on steroids.
Skip it when you control the type hierarchy and inheritance fits cleanly, when the set of types is small and fixed (use std::variant), or when the cost of indirection is genuinely intolerable for your hot path. For everything else, type erasure quietly solves a problem that the language itself doesn't offer a good solution to.
Inheritance asks "what is this thing?" Type erasure asks "what can this thing do?" That shift in perspective, more than any specific technique, is what makes the pattern worth learning. Thanks for reading ✦