Reflections on Reflection
02.02.2026
Up to this point, Rust has lacked any features that most would recognize as reflection. Instead, the tool of choice for metaprogramming is procedural macros. They do enable compile-time introspection, and they produce source code as output. Derive macros, the most common form of procedural macros, usually emit code implementing a trait, which then gets fed back into the compiler. However, they can only operate on the source code of a single item, its tokens, and its AST. The limitations imposed by this are immense when compared to reflection features in languages like Java or Python.
Why #[derive] is great
Overall, I consider procedural macros to be a tremendous success.
As it turns out, it’s often perfectly sufficient to only operate on the AST, without getting any type information from the compiler. I believe it’s important to highlight that the hero of this story is the trait system, and how procedural macros leverage it.
A good derive macro embraces the fact that it can’t do everything at once. Instead, it only does what’s possible locally with the source of a single item, and relies on traits for composition.
Take serde as an example: Its derive macro implements Serialize for your type by deferring to those types that it’s composed of. Just like types are composed of types, Serialize implementations are composed of other Serialize implementations.
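To make this composition concrete, here is a hand-written sketch using a simplified stand-in trait (this is not serde's actual API, and Point is an invented example type): the impl for Point is roughly what a derive macro would emit, simply deferring to the impls of its field types.

```rust
// Simplified stand-in for serde's Serialize trait, to show composition.
trait Serialize {
    fn serialize(&self, out: &mut String);
}

// Leaf impls for primitive-ish types.
impl Serialize for i32 {
    fn serialize(&self, out: &mut String) {
        out.push_str(&self.to_string());
    }
}
impl Serialize for String {
    fn serialize(&self, out: &mut String) {
        out.push('"');
        out.push_str(self);
        out.push('"');
    }
}

struct Point {
    x: i32,
    label: String,
}

// Roughly what a derive macro would emit: defer to each field's impl.
impl Serialize for Point {
    fn serialize(&self, out: &mut String) {
        out.push('{');
        out.push_str("\"x\":");
        self.x.serialize(out);
        out.push_str(",\"label\":");
        self.label.serialize(out);
        out.push('}');
    }
}

fn main() {
    let mut out = String::new();
    Point { x: 3, label: "origin-ish".into() }.serialize(&mut out);
    assert_eq!(out, r#"{"x":3,"label":"origin-ish"}"#);
    println!("{out}");
}
```

The derived impl contains no knowledge of how an i32 or a String is serialized; it only stitches together the impls its fields already provide.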
I also do not consider this approach to be a workaround to alleviate the limitations of procedural macros. On the contrary, I believe this to be the correct way of doing things, and the benefits are overwhelming.
Consider cases where a Serialize implementation can’t be composed using others. This happens with primitive types, with data structures that need to hide their implementation details, or when a type needs a different representation on disk and in memory.
What then? Well, we just don’t derive Serialize in that case, and just write the implementation by hand. Problem solved!
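Reusing the simplified stand-in trait from above (again, not serde's real API, and Timestamp is an invented example), a hand-written impl for a type whose on-disk representation differs from its in-memory one could look like this:

```rust
// Simplified stand-in for serde's Serialize trait.
trait Serialize {
    fn serialize(&self, out: &mut String);
}

// In memory: seconds plus nanoseconds. On disk: total milliseconds.
struct Timestamp {
    secs: u64,
    nanos: u32,
}

// The hand-written impl serializes a representation that does not
// mirror the struct's fields, which no derived impl could produce.
impl Serialize for Timestamp {
    fn serialize(&self, out: &mut String) {
        let millis = self.secs * 1000 + u64::from(self.nanos) / 1_000_000;
        out.push_str(&millis.to_string());
    }
}

fn main() {
    let mut out = String::new();
    Timestamp { secs: 2, nanos: 500_000_000 }.serialize(&mut out);
    assert_eq!(out, "2500");
}
```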
Another benefit of the status quo is that the output of every macro invocation is, like the monomorphizations of a generic function, optimized independently. This enables all of this to be done at zero runtime cost.
To me, it’s absolutely clear that being able to emit new trait implementations is crucial for any potential alternative or successor to procedural macros for reflection. Bypassing the trait system using reflection will not fly.
Why not at runtime?
Crates like facet, originally motivated by increased compile times caused by syn and serde sitting near the root of the dependency graph, take a very different approach. They provide runtime introspection and e.g. permit traversing through nested types at runtime. While that might sound like an improvement, it is not.
For one, doing introspection at runtime hurts performance, and requires all type information to be included in compiled binaries. Compile-time polymorphism is replaced with run-time polymorphism, and no monomorphization takes place. But worst of all, it breaks the nice composition and encapsulation I described above using serde as an example. How would a type now opt out of being serialized analogous to how it’s represented in code? How would primitives now decide how to be serialized?
facet answers this question with a clear “they cannot”. Instead, facet, a reflection library, needs to have serialization built right in.
For every type, facet provides an instance of facet::Shape, which is supposed to contain everything you’d want to introspect about it. But within, you’ll find a field of type facet::Def containing the “semantic definition of a shape”.
Huh?
It is this facet::Def with which facet hard-codes the fact that the struct std::vec::Vec, which is composed of two integers and a pointer, is semantically a sequence, and not a map with three fields.
In this paradigm, the reflection library defines how types are serialized, and what their semantics are. Any attempt to claw back that control and decide how your own type is serialized is prohibitively painful, since that decision is intermingled with the data used for introspection.
Traits are not used to compose functionality, concerns are muddled, and everything is tightly coupled. While I admire facet as an exploration of the design space, I don’t see a future for it as a viable solution to reflection or serialization.
Room to grow
Although I praised compile-time introspection using procedural macros, it is obvious that they are not perfect. Besides the absurdity of a compiler plugin needing to parse source code itself and receiving nothing but tokens from the compiler, procedural macros are a pain to write. Procedural macro code quickly gets complicated, verbose, and error-prone. Being a macro, it’s trivial to emit invalid syntax, cause indecipherable compiler errors downstream, and violate hygiene. For users, the inner workings of these macros are opaque, and reading their source or staring at cryptic cargo expand output is just an insufficient band-aid.
What’s already possible
I hope I made a strong enough case for compile-time introspection, and against runtime introspection in the form of facet. As it turns out, Rust can already do stuff at compile-time, without macros, using the type system itself.
To take in type information, perform computations, and emit a trait implementation while preserving composability, a boatload of generics are needed, at least today. Instead of generating code for a trait implementation like a macro, a single generic trait implementation can be written, and the actual computation is done within the type system using trait bounds. Make no mistake - while possible today, the arcane code to do so is absolutely absurd. I have however experimented with it (link), and I’d like to share how this can be done, even if completely impractical. Please note that many implementation details present in the linked repository will be absent from the following explanation.
I. Where it begins
At its core, type information is needed in the form of types and traits with associated types, not values. Otherwise, computations couldn’t be done with it within the type system. I have done this by generating structs and trait implementations using a derive macro. If this were to ever become a language feature, no macro would be needed, of course.
Let us now imagine a user who has authored MyStruct:
#[derive(Reflect)]
struct MyStruct {
foo: i32,
bar: String
}
For this struct, the macro first generates an implementation for the following trait to allow access to the type of its fields:
pub trait Struct {
type Fields;
}
For every field of MyStruct, a new struct is generated, and the trait Field, which exposes the field’s type as an associated type, is implemented for it:
pub trait Field {
type Type;
}
struct Field0;
impl Field for Field0 {
type Type = i32; // Type of the 1st struct field
}
struct Field1;
impl Field for Field1 {
type Type = String; // Type of the 2nd struct field
}
Finally, the implementation of the above Struct trait for the user’s MyStruct looks like this, with the types of both fields contained in a tuple:
impl Struct for MyStruct {
type Fields = (Field0, Field1);
}
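Putting part I together, here is a hand-expanded, compilable version of everything the derive macro generates (names as in the post; the consuming function at the end is my own invented example of type-level computation over this information):

```rust
// The traits generated reflection data is expressed through.
pub trait Struct {
    type Fields;
}
pub trait Field {
    type Type;
}

#[allow(dead_code)]
struct MyStruct {
    foo: i32,
    bar: String,
}

// One zero-sized marker struct per field, carrying the field's type.
struct Field0;
impl Field for Field0 {
    type Type = i32; // Type of the 1st struct field
}
struct Field1;
impl Field for Field1 {
    type Type = String; // Type of the 2nd struct field
}

impl Struct for MyStruct {
    type Fields = (Field0, Field1);
}

// Invented example consumer: only compiles for two-field structs
// whose first field is an i32. The "computation" lives in the bounds.
fn first_field_is_i32<T, F0, F1>()
where
    T: Struct<Fields = (F0, F1)>,
    F0: Field<Type = i32>,
    F1: Field,
{
}

fn main() {
    first_field_is_i32::<MyStruct, Field0, Field1>();
    // The marker structs carry no data; they exist purely at the type level.
    assert_eq!(std::mem::size_of::<Field0>(), 0);
}
```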
II. Looking at the reflection
Now, let’s put ourselves in the shoes of a library author wanting to give their users the ability to easily figure out how much memory on the heap a type is using. Naturally, we’d need a trait for this:
trait HeapSize {
fn heap_size(&self) -> usize;
}
Instead of writing a procedural macro to let our users derive HeapSize, we need to express this as a generic trait impl. Sadly, we cannot just provide an impl<T: Struct> HeapSize for T blanket implementation. Instead, we introduce a generic wrapper type, for which we can write one:
struct UsingReflection<T>(T);
impl<T> HeapSize for UsingReflection<T>
where
T: Struct,
"each field of T implements `HeapSize`"
{ /* ... */ }
And we’re done!
Our users can now just call heap_size() for their type, without actually implementing the HeapSize trait:
let bytes = UsingReflection(MyStruct { .. }).heap_size();
println!("using {bytes} bytes");
Alternatively, they can actually implement HeapSize for their struct, without a procedural derive macro!
impl HeapSize for MyStruct {
fn heap_size(&self) -> usize {
UsingReflection(self).heap_size()
}
}
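To give a taste of what the pseudo-bound “each field of T implements HeapSize” can look like in real code, here is a self-contained sketch fixed at two fields. The Parent associated type and the get accessor are my own invented additions, standing in for the implementation details the post elides; the actual repository handles arbitrary arities and shapes differently.

```rust
trait HeapSize {
    fn heap_size(&self) -> usize;
}

// Leaf impls: an i32 owns no heap memory; a String owns its buffer.
impl HeapSize for i32 {
    fn heap_size(&self) -> usize {
        0
    }
}
impl HeapSize for String {
    fn heap_size(&self) -> usize {
        self.capacity()
    }
}

// Reflection traits as in part I, extended with an invented accessor
// so the generic impl can reach the field values, not just their types.
trait Struct {
    type Fields;
}
trait Field {
    type Type;
    type Parent;
    fn get(parent: &Self::Parent) -> &Self::Type;
}

struct MyStruct {
    foo: i32,
    bar: String,
}

struct Field0;
impl Field for Field0 {
    type Type = i32;
    type Parent = MyStruct;
    fn get(p: &MyStruct) -> &i32 {
        &p.foo
    }
}
struct Field1;
impl Field for Field1 {
    type Type = String;
    type Parent = MyStruct;
    fn get(p: &MyStruct) -> &String {
        &p.bar
    }
}
impl Struct for MyStruct {
    type Fields = (Field0, Field1);
}

struct UsingReflection<T>(T);

// Fixed-arity stand-in for "each field of T implements HeapSize":
// the bounds destructure the Fields tuple and require HeapSize of
// every field type, then the body sums over the fields.
impl<T, F0, F1> HeapSize for UsingReflection<T>
where
    T: Struct<Fields = (F0, F1)>,
    F0: Field<Parent = T>,
    F1: Field<Parent = T>,
    F0::Type: HeapSize,
    F1::Type: HeapSize,
{
    fn heap_size(&self) -> usize {
        F0::get(&self.0).heap_size() + F1::get(&self.0).heap_size()
    }
}

fn main() {
    let s = MyStruct { foo: 1, bar: String::from("hello") };
    let bytes = UsingReflection(s).heap_size();
    assert!(bytes >= 5); // at least the 5 bytes of "hello" live on the heap
    println!("using {bytes} bytes");
}
```

Even at a fixed arity of two, the where clause already dwarfs the actual logic, which hints at how unwieldy the fully general version becomes.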
III. What we’re left with
While this seems pretty good all around, actually writing out the impl<T> HeapSize for UsingReflection<T> is, while possible, immensely painful.
If you are curious, here is the actual implementation from my above-mentioned experimental repository, which also supports enums.
Besides the derive macro doing the reflecting, I have implemented a somewhat realistic version of HeapSize.
To actually demonstrate that this does work for real things, I have also implemented a replacement for serde’s derive macros for serialization and deserialization with this very system. That being said, it doesn’t support any serde helper attributes.
Understanding the atrocious code in that repository isn’t too important, as long as the point that this is a mess comes across.
Although the process of leveraging reflection like this to replace derive macros is painful for library authors, it is actually quite pleasant for application authors.
IV. Come on, really?
Does this approach of doing compile-time reflection hold promise? Maybe, but I doubt it, and definitely not in the near future.
It’s just too unergonomic and utterly arcane. The sheer number of additional language features needed to make this just bearable would certainly be immense.
The past efforts of Shepherd’s Oasis were in this vein, combined with language features to make it more ergonomic. Technical opposition from members of the project, combined with what seems to me to have been a breakdown in communication in the run-up to RustConf that resulted in lots of drama, unfortunately killed it prematurely.
An alternative vision
Zig’s comptime is very different from the meta-programming madness I sketched out above, the kind you’d otherwise only expect from C++20. Funnily enough, even C++26 came up with a solution that is not too dissimilar, and it actually seems nice. While I expect it to come with its own set of problems, I believe they could be very minor in comparison. Could we just copy that into Rust?
I do not know, and lack experience and knowledge of Zig’s comptime, C++26’s reflection, and rustc, having only superficially skimmed the docs.
The Rust compiler could easily expose a facet-like introspection API yielding normal values as type descriptors, and computation could be done at compile time in const functions. The interesting part is going back from a normal value computed by such a const function to more code. After all, by the time a const function can run, the program has been lexed, parsed, type-checked, and borrow-checked, and is more or less done.
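The introspection half of that vision already works on stable Rust today. Here is a hypothetical sketch in which the type descriptor is hand-written; under the imagined language feature, the compiler would generate it. All names here (TypeDesc, FieldDesc, and the rest) are my own inventions, not an existing API:

```rust
// Hypothetical descriptor types a compiler-provided API might yield.
struct FieldDesc {
    name: &'static str,
    size: usize,
}
struct TypeDesc {
    name: &'static str,
    fields: &'static [FieldDesc],
}

#[allow(dead_code)]
struct MyStruct {
    foo: i32,
    bar: u64,
}

// Hand-written stand-in for what the compiler would generate.
const MY_STRUCT_DESC: TypeDesc = TypeDesc {
    name: "MyStruct",
    fields: &[
        FieldDesc { name: "foo", size: std::mem::size_of::<i32>() },
        FieldDesc { name: "bar", size: std::mem::size_of::<u64>() },
    ],
};

// Ordinary value-level computation over the descriptor, evaluated
// entirely at compile time.
const fn summed_field_size(desc: &TypeDesc) -> usize {
    let mut total = 0;
    let mut i = 0;
    while i < desc.fields.len() {
        total += desc.fields[i].size;
        i += 1;
    }
    total
}

const PAYLOAD: usize = summed_field_size(&MY_STRUCT_DESC);

fn main() {
    assert_eq!(PAYLOAD, 12); // 4 bytes for foo + 8 bytes for bar
    println!("{} carries {PAYLOAD} payload bytes", MY_STRUCT_DESC.name);
}
```

What no const function can do today is turn a value like PAYLOAD back into new items or trait implementations, and that is exactly the missing half.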
Without having any knowledge of rustc, I’d imagine that feeding such output back into the compiler would require a lot of work, and would introduce new pitfalls of its own.
However, what is clear is that an introspection API without any way to feed its output back into the type system would be useless and counter-productive. I sincerely hope I made that point clearly enough.