This change further reduces the memory utilization of the actors, down to ~13KB. It also has the added benefit that some benchmarks show additional wins in message-processing latency.
```
Benchmarking Waiting on 100 actors to process first message: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.5s, enable flat sampling, or reduce sample count to 60.
Waiting on 100 actors to process first message
        time:   [804.50 µs 818.01 µs 833.69 µs]
        change: [-34.074% -26.977% -21.803%] (p = 0.00 < 0.05)
        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

Waiting on 1000 actors to process first message
        time:   [9.6242 ms 9.7702 ms 9.9224 ms]
        change: [-8.5173% -5.9246% -3.3614%] (p = 0.00 < 0.05)
        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild

Waiting on 100000 messages to be processed
        time:   [17.640 ms 17.759 ms 17.881 ms]
        change: [-3.1347% -2.0925% -1.0043%] (p = 0.00 < 0.05)
        Performance has improved.
```
No change to other metrics.
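For reference, a minimal sketch of how a benchmark like "Waiting on 100 actors to process first message" could be set up with Criterion. The `StubActor` type below is a hypothetical stand-in (an OS thread plus channels) for the real actor implementation, which is not part of this change; only the Criterion scaffolding (`bench_function`, `iter`, `criterion_group!`, `criterion_main!`) reflects the actual benchmarking harness.

```rust
use std::sync::mpsc;
use std::thread;

use criterion::{criterion_group, criterion_main, Criterion};

/// Hypothetical stand-in for an actor: a thread that acknowledges the
/// first message it receives. The real actor type is not shown here.
struct StubActor {
    tx: mpsc::Sender<u32>,
    ack: mpsc::Receiver<()>,
}

impl StubActor {
    fn spawn() -> Self {
        let (tx, rx) = mpsc::channel::<u32>();
        let (ack_tx, ack) = mpsc::channel::<()>();
        thread::spawn(move || {
            // Process the first message, then signal completion.
            if rx.recv().is_ok() {
                let _ = ack_tx.send(());
            }
        });
        StubActor { tx, ack }
    }
}

fn first_message_latency(c: &mut Criterion) {
    c.bench_function("Waiting on 100 actors to process first message", |b| {
        b.iter(|| {
            // Spawn the actors, send each its first message, and wait
            // until every actor has acknowledged processing it.
            let actors: Vec<StubActor> = (0..100).map(|_| StubActor::spawn()).collect();
            for a in &actors {
                a.tx.send(1).unwrap();
            }
            for a in &actors {
                a.ack.recv().unwrap();
            }
        });
    });
}

criterion_group!(benches, first_message_latency);
criterion_main!(benches);
```

The measured body intentionally includes actor spawn plus first-message processing, which is what the reported latencies above would capture under this assumption.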