So there I was, staring at the Android Profiler at 11pm on a Tuesday. The iOS build of our app was perfectly smooth, holding a locked 60fps during heavy list transitions. Android? A complete mess. It looked like a flipbook. I had rewritten the same SharedValue logic three times.
I was losing my mind trying to figure out why a simple layout transition was causing so much jank. The issue wasn’t my math. It was how the animation engine handled rapid unmounting of components when Bridgeless mode was enabled. I’m currently running React Native 0.78.1, and every time a user swiped to delete a heavy list item, the UI thread would choke for just a fraction of a second. Enough to feel cheap.
The Memory Leak Hiding in Plain Sight
If you’ve built complex gestures recently, you probably know the pain of worklet memory management. When you trigger an exit animation, the UI thread has to hold onto that node until the animation finishes. If the user is swiping quickly, those nodes pile up.
Here is exactly what was causing my app to drop frames:
import Animated, { FadeOut, LinearTransition } from 'react-native-reanimated';
function ListItem({ item, onSwipe }) {
// This looks innocent, but it was destroying Android performance
return (
<Animated.View
layout={LinearTransition.springify().damping(14)}
exiting={FadeOut.duration(200)}
>
<HeavyComponent data={item} />
</Animated.View>
);
}
I was about to rip the whole thing out and write a custom native module. Then I noticed some recent pull requests merged into the main Reanimated repository. The maintainers had just pushed some major fixes targeting worklet garbage collection and synchronous layout calculations. I bumped my package to version 3.16.2 immediately to see if it would help.
The Benchmark Results
The difference was stupidly obvious.
I ran a benchmark on my Pixel 7 test device using a 500-item FlashList with aggressive enter/exit animations. Before the update, memory usage would spike during rapid scrolls, triggering the Android garbage collector and pushing UI thread frame times to a very noticeable 22ms.
After upgrading? It dropped memory usage by 42% during the exact same interaction. Frame times flattened out at 8ms. The UI thread wasn’t getting blocked by lingering worklets anymore.
I confirmed the same behavior on my M3 Max MacBook Pro running the iOS simulator, though iOS was already handling the memory pressure much better so the visual difference wasn’t as drastic. Still, the CPU overhead was measurably lower.
The “Gotcha” They Don’t Warn You About
But you still have to be careful. The new garbage collection optimizations won’t save you if you write lazy code inside your hooks. I found this out the hard way yesterday.
I discovered that if you pass a massive object into a useAnimatedStyle hook, the worklet still captures the entire object reference. The engine has to serialize all of that over the bridge (or through the JSI).
Don’t do this:
// BAD: Capturing the entire user profile object in the worklet
const animatedStyle = useAnimatedStyle(() => {
return {
opacity: withTiming(userProfile.settings.animationsEnabled ? 1 : 0),
};
});
Do this instead. Destructure exactly what you need before the worklet boundary:
// GOOD: Only capturing a primitive boolean
const shouldAnimate = userProfile.settings.animationsEnabled;
const animatedStyle = useAnimatedStyle(() => {
return {
opacity: withTiming(shouldAnimate ? 1 : 0),
};
});
If you don’t extract those primitives, your app will still stutter on low-end Android devices, regardless of what version you’re on.
I expect the core team will probably introduce some sort of automatic dependency extraction compiler plugin by Q2 2027 so we don’t even have to think about this manual destructuring anymore. The React compiler is already doing something similar for standard renders. Until then, just watch your closures.











