Memory Allocation Patterns That Hurt Performance
Concrete memory allocation anti-patterns in Android and Kotlin code that degrade performance, with profiling strategies and fixes for each.
Context
Every object allocation on Android has a cost: the allocator must find free heap space, initialize the object header, zero the memory, and eventually the garbage collector must trace and reclaim it. In non-critical code paths, this cost is negligible. In hot paths executed per-frame, per-scroll-event, or per-network-response, these costs accumulate into measurable jank and GC pressure.
Related: Event Tracking System Design for Android Applications.
See also: How Garbage Collection Impacts Android Performance.
Problem
Kotlin's expressive syntax makes it easy to write code that allocates heavily without realizing it. Features like data classes, extension functions, collection operators, coroutines, and lambda expressions all allocate under the hood. The allocations are invisible at the source level but visible in the profiler. This post catalogs the patterns that matter in practice.
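As a sketch of how invisible these allocations can be, consider one innocuous-looking loop (the function and its inputs are illustrative, not from any of the patterns below):

```kotlin
// One plain-looking loop, several hidden allocations
fun hiddenAllocations(scores: Map<String, Int>): Int {
    var total = 0
    for ((name, score) in scores) { // allocates an Iterator over the entry set
        total += score              // each value was boxed as java.lang.Integer
    }                               // when it was stored in the generic Map
    return total
}
```

Nothing in the source says "allocate", yet the iterator and the boxed values show up in an allocation profile when this runs per-frame.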
Constraints
- ART's young generation collector handles short-lived objects efficiently, but "efficiently" is not "free"
- Each GC cycle, even a concurrent one, introduces 0.5 to 2ms of pause time
- Low-end devices (2GB RAM, slow eMMC storage) experience 2 to 5x longer GC pauses
- Allocation rate above 50MB/s during UI rendering reliably causes jank on mid-range devices
- Kotlin inline functions eliminate some lambda allocation overhead but not all
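The last constraint can be seen directly: an inline higher-order function compiles the lambda body into the call site, but a lambda that escapes (stored, returned, or passed to a noinline parameter) still allocates a function object. A minimal sketch, with illustrative function names:

```kotlin
// Inlined: the lambda body is copied into the call site, no Function object
inline fun timesInlined(n: Int, action: (Int) -> Unit) {
    for (i in 0 until n) action(i)
}

// Not inlined: a capturing lambda argument allocates a Function1 instance,
// and because the lambda escapes here, the compiler cannot inline it away
fun timesStored(n: Int, action: (Int) -> Unit): (Int) -> Unit {
    for (i in 0 until n) action(i)
    return action // the lambda outlives the call
}
```

Non-capturing lambdas are compiled to singletons, so the allocation cost shows up specifically when the lambda captures state, which is the common case in UI code.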
Design
Pattern 1: Intermediate Collection Chains
Kotlin's collection operators are ergonomic but each one creates a new collection.
// Allocates 4 intermediate lists
val displayItems = rawItems
    .filter { it.isVisible }      // List 1
    .map { it.toDisplayModel() }  // List 2
    .distinctBy { it.id }         // List 3
    .sortedBy { it.sortOrder }    // List 4
For a list of 1000 items, this creates 4 temporary lists, each with its own backing array. That is 4 array allocations plus the object overhead for each intermediate list.
// Fix: sequence-based pipeline, single terminal allocation
val displayItems = rawItems.asSequence()
    .filter { it.isVisible }
    .map { it.toDisplayModel() }
    .distinctBy { it.id }
    .sortedBy { it.sortOrder }
    .toList()
When to use sequences vs. eager collections:
| Collection Size | Chain Length | Use Sequence? |
|---|---|---|
| < 10 items | Any | No, overhead of sequence machinery exceeds savings |
| 10 to 100 items | 1 to 2 operators | No |
| 10 to 100 items | 3+ operators | Yes |
| > 100 items | Any chain | Yes |
| Any size, called per-frame | Any chain | Yes |
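When converting a chain, the lazy pipeline must produce the same result as the eager one; only the intermediate allocations differ. A minimal sketch, with a hypothetical Item class standing in for the display models above:

```kotlin
// Hypothetical element type for the pipelines below
data class Item(val id: Int, val isVisible: Boolean, val sortOrder: Int)

fun eagerChain(items: List<Item>): List<Item> = items
    .filter { it.isVisible }    // allocates an intermediate list
    .distinctBy { it.id }       // allocates another
    .sortedBy { it.sortOrder }  // and another

fun lazyChain(items: List<Item>): List<Item> = items.asSequence()
    .filter { it.isVisible }    // lazily evaluated, no intermediate list
    .distinctBy { it.id }
    .toList()                   // single terminal allocation
    .sortedBy { it.sortOrder }
```

One caveat: sorting a sequence must buffer every element internally (a sort cannot be lazy), so the savings come from the filter/map/distinct stages, not the sort itself.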
Pattern 2: Autoboxing in Collections
Kotlin's Int, Float, Boolean compile to JVM primitives when used directly. But generic collections require boxed types.
// Every Int is boxed to java.lang.Integer
val countMap: HashMap<String, Int> = hashMapOf("a" to 1, "b" to 2)
// Each entry: String key (object), Integer value (boxed object), Entry object
// For 1000 entries: 3000 objects minimum
Alternatives for primitive-heavy use cases:
| Standard Type | Primitive Alternative | Savings |
|---|---|---|
| HashMap<Int, V> | SparseArray<V> | No key boxing, no Entry objects |
| HashMap<Int, Int> | SparseIntArray | No boxing at all |
| HashMap<Int, Long> | SparseLongArray | No boxing at all |
| HashMap<Int, Boolean> | SparseBooleanArray | No boxing at all |
| HashSet<Int> | Bit set or IntArray | No boxing, compact |
| List<Int> (fixed size) | IntArray | No boxing, contiguous memory |
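The same idea can be sketched without Android's Sparse* classes: when keys are small non-negative ints, a plain IntArray replaces HashMap<Int, Int> with zero boxing. A minimal sketch (the counter use case and maxKey bound are illustrative):

```kotlin
// Counts occurrences of small non-negative int keys without boxing.
// Android's SparseIntArray generalizes this to sparse, unordered keys.
class IntCounter(maxKey: Int) {
    private val counts = IntArray(maxKey + 1) // contiguous primitive storage

    fun increment(key: Int) { counts[key]++ } // no Integer boxing, no Entry
    fun get(key: Int): Int = counts[key]      // direct array index, no hashing
}
```

The trade-off is the one in the table: no Collection API, and the caller must know the key range up front.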
// Bad: boxes every coordinate
data class PathPoints(val xCoords: List<Float>, val yCoords: List<Float>)
// Good: primitive arrays, no boxing
class PathPoints(val xCoords: FloatArray, val yCoords: FloatArray)
Pattern 3: Data Class Copy in Tight Loops
Kotlin data class copy() creates a new instance every time. In state reduction loops or event processing, this adds up.
// Each event creates a new State object
fun reduce(state: State, events: List<Event>): State {
    var current = state
    events.forEach { event ->
        current = current.copy(count = current.count + 1) // New State per event
    }
    return current
}
// Fix: accumulate in mutable locals for the batch, then create one immutable result
fun reduce(state: State, events: List<Event>): State {
    var count = state.count
    events.forEach { _ ->
        count++
    }
    return state.copy(count = count) // Single allocation
}
Pattern 4: String Formatting in Hot Paths
String.format() and string templates with non-primitive types allocate intermediate strings.
// Called per list item during scroll, allocates:
// 1. StringBuilder, 2. formatted String, 3. possibly autoboxed numbers
fun formatPrice(amount: Double, currency: String): String {
    return String.format("%s %.2f", currency, amount) // Allocates a varargs array too
}
// Fix: reuse a formatter, or use StringBuilder directly
// Not thread-safe: confine each instance to one thread (e.g., the main thread)
class PriceFormatter {
    private val sb = StringBuilder(20)

    fun format(amount: Double, currency: String): String {
        sb.setLength(0)
        sb.append(currency).append(' ')
        appendTwoDecimals(sb, amount)
        return sb.toString()
    }

    // Assumes a non-negative amount; truncates rather than rounds the cents
    private fun appendTwoDecimals(sb: StringBuilder, value: Double) {
        val whole = value.toLong()
        val frac = ((value - whole) * 100).toInt()
        sb.append(whole).append('.').append(if (frac < 10) "0" else "").append(frac)
    }
}
Pattern 5: Lambda Capture in RecyclerView/LazyColumn
Every lambda that captures a variable creates a new anonymous class instance. In list items, this means one allocation per item per bind.
// RecyclerView ViewHolder: new lambda on every bind
class ItemViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    fun bind(item: Item, onClick: (String) -> Unit) {
        itemView.setOnClickListener {
            onClick(item.id) // Captures item.id
        }
        // New View.OnClickListener instance per bind call
    }
}
// Fix: store the ID and use a single listener
class ItemViewHolder(
    view: View,
    private val onClick: (String) -> Unit
) : RecyclerView.ViewHolder(view) {
    private var currentId: String = ""

    init {
        itemView.setOnClickListener { onClick(currentId) }
    }

    fun bind(item: Item) {
        currentId = item.id
    }
}
Pattern 6: Coroutine Overhead in Per-Frame Code
Each launch creates a coroutine object, a continuation, and potentially a CoroutineDispatcher task. For per-frame operations, this overhead matters.
// Bad: launches a coroutine per frame during animation
LaunchedEffect(Unit) {
    while (isActive) {
        withFrameNanos { frameTimeNanos ->
            launch { // New coroutine per frame, unnecessary
                updateAnimation(frameTimeNanos)
            }
        }
    }
}
// Good: process directly in the frame callback
LaunchedEffect(Unit) {
    while (isActive) {
        withFrameNanos { frameTimeNanos ->
            updateAnimation(frameTimeNanos) // No extra coroutine
        }
    }
}
Pattern 7: Enum.values() Allocation
Enum.values() allocates a new array on every call. In per-frame or per-item code, use a cached copy.
enum class Priority { LOW, MEDIUM, HIGH, CRITICAL }
// Bad: new array every call
fun getPriority(index: Int): Priority = Priority.values()[index]
// Good: Kotlin 1.9+ provides entries, a cached immutable list
fun getPriority(index: Int): Priority = Priority.entries[index]
// Or for older Kotlin versions, cache the array once:
private val PRIORITIES = Priority.values()
fun getPriority(index: Int): Priority = PRIORITIES[index]
Trade-offs
| Optimization | Benefit | Cost |
|---|---|---|
| Sequences over collections | Fewer intermediate allocations | Overhead for small lists, harder to debug |
| Primitive arrays over List | No boxing, less memory | Less type safety, no Collection API |
| Object pooling | Zero per-use allocation | Pool management, potential memory retention |
| Mutable builders for batch operations | Single allocation for batch result | Mutable state must be carefully scoped |
| Cached enum values | Zero allocation per access | Minor, just a static field |
Failure Modes
| Failure | Symptom | Detection |
|---|---|---|
| Excessive allocation in scroll handler | GC-correlated jank during fast scrolling | Allocation tracker during scroll, Perfetto GC slices |
| Autoboxing in tight loop | High allocation rate with no visible object creation in source | Android Studio allocation profiler showing java.lang.Integer |
| String allocation per list item | GC pressure proportional to list size | Allocation count per onBindViewHolder call |
| Lambda capture per frame | Steady allocation rate even with static content | Heap dump showing anonymous inner class proliferation |
Scaling Considerations
- Allocation budgets: set an allocation rate budget for critical paths (e.g., under 1MB/s during scrolling). Enforce with automated profiling in CI
- Hot path identification: use method tracing to identify functions called over 1000 times per second. Those functions get allocation scrutiny
- Library overhead: third-party libraries may allocate internally. Profile Gson/Moshi deserialization, Retrofit call adaptation, and image loading pipelines for allocation costs
- R8 optimization: R8 can inline lambdas and devirtualize calls, reducing some allocation overhead. Verify with release-mode profiling
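For rough hot-path identification before reaching for full method tracing, a cheap call counter can flag candidates worth profiling; a minimal sketch (the labels and threshold policy are illustrative):

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong

// Counts calls per label so hot paths (e.g., >1000 calls/sec) stand out.
// Instrument suspected functions manually, then inspect the snapshot.
object CallCounter {
    private val counts = ConcurrentHashMap<String, AtomicLong>()

    fun hit(label: String) {
        counts.computeIfAbsent(label) { AtomicLong() }.incrementAndGet()
    }

    fun snapshot(): Map<String, Long> = counts.mapValues { it.value.get() }
}
```

The counter itself must stay cheap: after the first hit per label, each call is a lock-free increment with no allocation, so it is safe to leave in debug builds.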
Observability
- Android Studio Allocation Tracker: shows per-method allocation count and size during a recording session
- Debug.startAllocCounting()/Debug.stopAllocCounting(): programmatic allocation measurement in tests
- Perfetto: heap allocation events correlated with frame timing
- Custom allocation sampling: periodically sample Runtime.getRuntime().totalMemory() - freeMemory() and report heap growth rate
// Measure allocations around a specific code block
fun measureAllocations(label: String, block: () -> Unit) {
    val runtime = Runtime.getRuntime()
    runtime.gc() // Request GC to get a clean baseline
    Thread.sleep(100) // Allow GC to complete
    val before = runtime.totalMemory() - runtime.freeMemory()
    block()
    val after = runtime.totalMemory() - runtime.freeMemory()
    val allocated = after - before
    Log.d("Alloc", "$label: allocated ${allocated / 1024}KB")
}
Key Takeaways
- Kotlin's collection operators, data class copy(), string formatting, and lambda captures all allocate. Know which patterns allocate and where they appear in your hot paths
- Use sequences for collection chains with 3+ operators on non-trivial lists
- Prefer primitive arrays (IntArray, FloatArray) over List<Int> in performance-critical data structures
- Cache Enum.values() (or use entries), reuse StringBuilder instances, and pool objects in per-frame code
- Profile allocation rate, not just heap size. High allocation rate drives GC frequency, which drives jank
Further Reading
- Memory Leaks in Android: Patterns I've Seen in Production: Real-world memory leak patterns from production Android apps, covering lifecycle-bound leaks, static references, listener registration, a...
- Debugging Performance Issues in Large Android Apps: A systematic approach to identifying, isolating, and fixing performance bottlenecks in large Android codebases, covering profiling strate...
- Binder, Threads, and Performance Implications: A deep examination of Android's Binder IPC mechanism, its thread pool model, and the performance consequences that surface at scale in la...
Final Thoughts
Allocation-conscious programming is not premature optimization when applied to the right code paths. The vast majority of your codebase can allocate freely. But the 5% of code that runs per-frame, per-scroll-event, or per-list-item determines your app's perceived smoothness. Identify those hot paths with profiling, measure their allocation rate, and apply the patterns in this post. The result is fewer GC pauses, smoother frame delivery, and better performance on the devices your users actually carry.