Memory Allocation Patterns That Hurt Performance

Dhruval Dhameliya·December 5, 2025·9 min read

Concrete memory allocation anti-patterns in Android and Kotlin code that degrade performance, with profiling strategies and fixes for each.

Context

Every object allocation on Android has a cost: the allocator must find free heap space, initialize the object header, zero the memory, and eventually the garbage collector must trace and reclaim it. In non-critical code paths, this cost is negligible. In hot paths executed per-frame, per-scroll-event, or per-network-response, these costs accumulate into measurable jank and GC pressure.

Related: Event Tracking System Design for Android Applications.

See also: How Garbage Collection Impacts Android Performance.

Problem

Kotlin's expressive syntax makes it easy to write code that allocates heavily without realizing it. Features like data classes, extension functions, collection operators, coroutines, and lambda expressions all allocate under the hood. The allocations are invisible at the source level but visible in the profiler. This post catalogs the patterns that matter in practice.

Constraints

  • ART's young generation collector handles short-lived objects efficiently, but "efficiently" is not "free"
  • Each GC cycle, even concurrent, introduces 0.5 to 2ms of pause time
  • Low-end devices (2GB RAM, slow eMMC storage) experience 2 to 5x longer GC pauses
  • Allocation rate above 50MB/s during UI rendering reliably causes jank on mid-range devices
  • Kotlin inline functions eliminate some lambda allocation overhead but not all

Design

Pattern 1: Intermediate Collection Chains

Kotlin's collection operators are ergonomic but each one creates a new collection.

// Allocates 3 intermediate lists plus the final result
val displayItems = rawItems
    .filter { it.isVisible }           // List 1
    .map { it.toDisplayModel() }       // List 2
    .distinctBy { it.id }              // List 3
    .sortedBy { it.sortOrder }         // List 4 (the result)

For a list of 1000 items, this creates four lists, each with its own backing array: three temporaries that are discarded immediately, plus the result. That is four array allocations plus the object overhead for each list.

// Fix: sequence-based pipeline; filter/map/distinctBy create no intermediate
// lists (sortedBy still buffers internally once before the terminal toList())
val displayItems = rawItems.asSequence()
    .filter { it.isVisible }
    .map { it.toDisplayModel() }
    .distinctBy { it.id }
    .sortedBy { it.sortOrder }
    .toList()

When to use sequences vs. eager collections:

| Collection Size | Chain Length | Use Sequence? |
| --- | --- | --- |
| < 10 items | Any | No, overhead of sequence machinery exceeds savings |
| 10 to 100 items | 1 to 2 operators | No |
| 10 to 100 items | 3+ operators | Yes |
| > 100 items | Any chain | Yes |
| Any size, called per-frame | Any chain | Yes |
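The crossover in the table comes from laziness: a sequence pushes each element through the whole chain one at a time instead of materializing a list per operator, so short-circuiting operators touch fewer elements. A minimal, self-contained sketch (the function names are illustrative, not from this post) makes the difference observable by counting mapper invocations:

```kotlin
// Eager chain: map runs over every element before take(3) trims the result
fun eagerMapCount(items: List<Int>): Int {
    var calls = 0
    items.map { calls++; it * 2 }.take(3)
    return calls
}

// Lazy chain: take(3) pulls only three elements through the mapper
fun lazyMapCount(items: List<Int>): Int {
    var calls = 0
    items.asSequence().map { calls++; it * 2 }.take(3).toList()
    return calls
}
```

On a 1000-element list the eager version invokes the mapper 1000 times and the lazy version 3 times, which is exactly the behavior the per-frame row of the table relies on.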

Pattern 2: Autoboxing in Collections

Kotlin's Int, Float, Boolean compile to JVM primitives when used directly. But generic collections require boxed types.

// Every Int is boxed to java.lang.Integer
val countMap: HashMap<String, Int> = hashMapOf("a" to 1, "b" to 2)
 
// Each entry: String key (object), Integer value (boxed object), Entry object
// For 1000 entries: 3000 objects minimum

Alternatives for primitive-heavy use cases:

| Standard Type | Primitive Alternative | Savings |
| --- | --- | --- |
| HashMap<Int, V> | SparseArray<V> | No key boxing, no Entry objects |
| HashMap<Int, Int> | SparseIntArray | No boxing at all |
| HashMap<Int, Long> | SparseLongArray | No boxing at all |
| HashMap<Int, Boolean> | SparseBooleanArray | No boxing at all |
| HashSet<Int> | Bit set or IntArray | No boxing, compact |
| List<Int> (fixed size) | IntArray | No boxing, contiguous memory |

// Bad: boxes every coordinate
data class PathPoints(val xCoords: List<Float>, val yCoords: List<Float>)
 
// Good: primitive arrays, no boxing
class PathPoints(val xCoords: FloatArray, val yCoords: FloatArray)
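When keys are small, dense integers, even SparseIntArray can be skipped in favor of a plain IntArray indexed by key. A hedged sketch (the histogram function is illustrative, not from this post): counting occurrences this way allocates one array total, where a HashMap<Int, Int> would allocate an Entry object plus two boxed values per distinct key.

```kotlin
// Count occurrences of values in [0, bound) without boxing a single Int.
fun histogram(values: IntArray, bound: Int): IntArray {
    val counts = IntArray(bound)   // single allocation, zero-initialized
    for (v in values) counts[v]++  // primitive reads and writes only
    return counts
}
```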

Pattern 3: Data Class Copy in Tight Loops

Kotlin data class copy() creates a new instance every time. In state reduction loops or event processing, this adds up.

// Each event creates a new State object
fun reduce(state: State, events: List<Event>): State {
    var current = state
    events.forEach { event ->
        current = current.copy(count = current.count + 1) // New State per event
    }
    return current
}
 
// Fix: accumulate in a local, then create one immutable result
fun reduce(state: State, events: List<Event>): State {
    var count = state.count
    events.forEach { _ ->
        count++
    }
    return state.copy(count = count) // Single allocation
}
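The same approach scales to states with several fields: accumulate each field in a local and materialize one copy() at the end of the batch. A sketch with a hypothetical two-field State and two event types (all names here are illustrative):

```kotlin
data class State(val count: Int = 0, val errors: Int = 0)

sealed interface Event
object Tick : Event
object Failure : Event

// One State allocation per batch instead of one per event.
fun reduceBatch(state: State, events: List<Event>): State {
    var count = state.count
    var errors = state.errors
    for (event in events) {
        when (event) {
            Tick -> count++
            Failure -> errors++
        }
    }
    return state.copy(count = count, errors = errors) // Single allocation
}
```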

Pattern 4: String Formatting in Hot Paths

String.format() and string templates with non-primitive types allocate intermediate strings.

// Called per list item during scroll, allocates:
// 1. StringBuilder, 2. formatted String, 3. possibly autoboxed numbers
fun formatPrice(amount: Double, currency: String): String {
    return String.format("%s %.2f", currency, amount) // Allocates varargs array too
}
 
// Fix: reuse a formatter, or use StringBuilder directly
class PriceFormatter {
    private val sb = StringBuilder(20)
 
    fun format(amount: Double, currency: String): String {
        sb.setLength(0)
        sb.append(currency).append(' ')
        appendTwoDecimals(sb, amount)
        return sb.toString()
    }
 
    // Assumes non-negative values and truncates the fraction rather than rounding
    private fun appendTwoDecimals(sb: StringBuilder, value: Double) {
        val whole = value.toLong()
        val frac = ((value - whole) * 100).toInt()
        sb.append(whole).append('.').append(if (frac < 10) "0" else "").append(frac)
    }
}

Pattern 5: Lambda Capture in RecyclerView/LazyColumn

Every lambda that captures a variable creates a new anonymous class instance. In list items, this means one allocation per item per bind.

// RecyclerView ViewHolder: new lambda on every bind
class ItemViewHolder(view: View) : RecyclerView.ViewHolder(view) {
    fun bind(item: Item, onClick: (String) -> Unit) {
        itemView.setOnClickListener {
            onClick(item.id) // Captures item.id
        }
        // New View.OnClickListener instance per bind call
    }
}
 
// Fix: store the ID and use a single listener
class ItemViewHolder(
    view: View,
    private val onClick: (String) -> Unit
) : RecyclerView.ViewHolder(view) {
    private var currentId: String = ""
 
    init {
        itemView.setOnClickListener { onClick(currentId) }
    }
 
    fun bind(item: Item) {
        currentId = item.id
    }
}

Pattern 6: Coroutine Overhead in Per-Frame Code

Each launch creates a coroutine object, a continuation, and potentially a CoroutineDispatcher task. For per-frame operations, this overhead matters.

// Bad: launches a coroutine per frame during animation
LaunchedEffect(Unit) {
    while (isActive) {
        withFrameNanos { frameTimeNanos ->
            launch { // New coroutine per frame, unnecessary
                updateAnimation(frameTimeNanos)
            }
        }
    }
}
 
// Good: process directly in the frame callback
LaunchedEffect(Unit) {
    while (isActive) {
        withFrameNanos { frameTimeNanos ->
            updateAnimation(frameTimeNanos) // No extra coroutine
        }
    }
}

Pattern 7: Enum.values() Allocation

Enum.values() allocates a new array on every call. In per-frame or per-item code, use a cached copy.

enum class Priority { LOW, MEDIUM, HIGH, CRITICAL }
 
// Bad: new array every call
fun getPriority(index: Int): Priority = Priority.values()[index]
 
// Good: cached array (Kotlin 1.9+ has entries)
fun getPriority(index: Int): Priority = Priority.entries[index]
 
// Or for older Kotlin versions:
private val PRIORITIES = Priority.values()
fun getPriority(index: Int): Priority = PRIORITIES[index]

Trade-offs

| Optimization | Benefit | Cost |
| --- | --- | --- |
| Sequences over collections | Fewer intermediate allocations | Overhead for small lists, harder to debug |
| Primitive arrays over List | No boxing, less memory | Less type safety, no Collection API |
| Object pooling | Zero per-use allocation | Pool management, potential memory retention |
| Mutable builders for batch operations | Single allocation for batch result | Mutable state must be carefully scoped |
| Cached enum values | Zero allocation per access | Minor, just a static field |
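Object pooling, listed above, trades allocation for lifecycle management. A minimal sketch in the spirit of androidx.core.util.Pools (this SimplePool is written from scratch for illustration, not the library class):

```kotlin
// Fixed-capacity pool: acquire() reuses a released instance when one is available.
// Releasing more objects than maxSize silently drops the extras, bounding retention.
class SimplePool<T : Any>(private val maxSize: Int, private val factory: () -> T) {
    private val free = ArrayDeque<T>()

    fun acquire(): T = free.removeLastOrNull() ?: factory()

    fun release(obj: T) {
        if (free.size < maxSize) free.addLast(obj) // else drop, let GC reclaim it
    }
}
```

Callers must reset pooled objects before reuse; stale state leaking between uses is the classic pooling bug, and it is why the "mutable state must be carefully scoped" cost applies here too.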

Failure Modes

| Failure | Symptom | Detection |
| --- | --- | --- |
| Excessive allocation in scroll handler | GC-correlated jank during fast scrolling | Allocation tracker during scroll, Perfetto GC slices |
| Autoboxing in tight loop | High allocation rate with no visible object creation in source | Android Studio allocation profiler showing java.lang.Integer |
| String allocation per list item | GC pressure proportional to list size | Allocation count per onBindViewHolder call |
| Lambda capture per frame | Steady allocation rate even with static content | Heap dump showing anonymous inner class proliferation |

Scaling Considerations

  • Allocation budgets: set an allocation rate budget for critical paths (e.g., under 1MB/s during scrolling). Enforce with automated profiling in CI
  • Hot path identification: use method tracing to identify functions called over 1000 times per second. Those functions get allocation scrutiny
  • Library overhead: third-party libraries may allocate internally. Profile Gson/Moshi deserialization, Retrofit call adaptation, and image loading pipelines for allocation costs
  • R8 optimization: R8 can inline lambdas and devirtualize calls, reducing some allocation overhead. Verify with release-mode profiling

Observability

  • Android Studio Allocation Tracker: shows per-method allocation count and size during a recording session
  • Debug.startAllocCounting() / Debug.stopAllocCounting(): programmatic allocation measurement in tests
  • Perfetto: heap allocation events correlated with frame timing
  • Custom allocation sampling: periodically sample Runtime.getRuntime().totalMemory() - freeMemory() and report heap growth rate

// Measure allocations around a specific code block
fun measureAllocations(label: String, block: () -> Unit) {
    val runtime = Runtime.getRuntime()
    runtime.gc() // Request GC to get a clean baseline
    Thread.sleep(100) // Allow GC to complete
    val before = runtime.totalMemory() - runtime.freeMemory()
    block()
    val after = runtime.totalMemory() - runtime.freeMemory()
    val allocated = after - before
    Log.d("Alloc", "$label: allocated ${allocated / 1024}KB")
}

Key Takeaways

  • Kotlin's collection operators, data class copy(), string formatting, and lambda captures all allocate. Know which patterns allocate and where they appear in your hot paths
  • Use sequences for collection chains with 3+ operators on non-trivial lists
  • Prefer primitive arrays (IntArray, FloatArray) over List<Int> in performance-critical data structures
  • Cache Enum.values() (or use entries), reuse StringBuilder instances, and pool objects in per-frame code
  • Profile allocation rate, not just heap size. High allocation rate drives GC frequency, which drives jank

Final Thoughts

Allocation-conscious programming is not premature optimization when applied to the right code paths. The vast majority of your codebase can allocate freely. But the 5% of code that runs per-frame, per-scroll-event, or per-list-item determines your app's perceived smoothness. Identify those hot paths with profiling, measure their allocation rate, and apply the patterns in this post. The result is fewer GC pauses, smoother frame delivery, and better performance on the devices your users actually carry.
