Minimizing RAM Usage


In some cases, a firmware application's available RAM may run low or run out entirely. In these cases, it is necessary to tune the memory usage of the firmware application.

In general, firmware should aim to leave some headroom of free internal RAM to deal with extraordinary situations or changes in RAM usage in future updates.

Background

Before optimizing ESP-IDF RAM usage, it is necessary to understand the basics of ESP32 memory types, the difference between static and dynamic memory usage in C, and the way ESP-IDF uses stack and heap. This information can all be found in Heap Memory Allocation.

Measuring Static Memory Usage

The idf.py tool can be used to generate reports about the static memory usage of an application, see Measuring Static Sizes.

Measuring Dynamic Memory Usage

ESP-IDF contains a range of heap APIs for measuring free heap at runtime, see Heap Memory Debugging.

Note

In embedded systems, heap fragmentation can be a significant issue alongside total RAM usage. The heap measurement APIs provide ways to measure the largest free block. Monitoring this value along with the total number of free bytes can give a quick indication of whether heap fragmentation is becoming an issue.
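As an illustration, the sketch below reads both values with the heap_caps_get_free_size() and heap_caps_get_largest_free_block() APIs; the log_heap_state() helper and its log tag are example names, and the application decides where and how often to call it:

#include "esp_heap_caps.h"
#include "esp_log.h"

static const char *TAG = "heap_monitor";

/* Log the total free heap and the largest free block. A largest free block
 * that is much smaller than the total free size indicates fragmentation. */
static void log_heap_state(void)
{
    size_t free_bytes    = heap_caps_get_free_size(MALLOC_CAP_DEFAULT);
    size_t largest_block = heap_caps_get_largest_free_block(MALLOC_CAP_DEFAULT);

    ESP_LOGI(TAG, "free heap: %u bytes, largest free block: %u bytes",
             (unsigned) free_bytes, (unsigned) largest_block);
}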

Reducing Static Memory Usage

  • Reducing the static memory usage of the application increases the amount of RAM available for heap at runtime, and vice versa.

  • Generally speaking, minimizing static memory usage requires monitoring the .data and .bss sizes. For tools to do this, see Measuring Static Sizes.

  • Internal ESP-IDF functions do not make heavy use of static RAM in C. In many instances (such as Wi-Fi library, Bluetooth controller), static buffers are still allocated from the heap. However, the allocation is performed only once during feature initialization and will be freed if the feature is deinitialized. This approach is adopted to optimize the availability of free memory at various stages of the application's life cycle.

To minimize static memory use:

  • Constant data can be stored in flash memory instead of RAM, so it is recommended to declare structures, buffers, or other variables as const (see the sketch after this list). This approach may require modifying firmware functions to accept const * arguments instead of mutable pointer arguments. These changes can also help reduce the stack usage of certain functions.

  • If using Bluedroid, setting the option CONFIG_BT_BLE_DYNAMIC_ENV_MEMORY will cause Bluedroid to allocate memory on initialization and free it on deinitialization. This does not necessarily reduce the peak memory usage, but changes it from static memory usage to runtime memory usage.

  • If using OpenThread, enabling the option CONFIG_OPENTHREAD_PLATFORM_MSGPOOL_MANAGEMENT will cause OpenThread to allocate message pool buffers from PSRAM, which will reduce static memory use.
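As a sketch of the first point in the list above, a small lookup table declared const is placed in flash (.rodata) rather than occupying RAM; the table contents and function name are purely illustrative:

#include <stddef.h>
#include <stdint.h>

/* Declared const, this table lives in flash (.rodata) instead of taking up
 * RAM (.data) for the entire lifetime of the application. */
static const uint8_t s_frame_header[4] = { 0xAA, 0x55, 0x01, 0x02 };

/* Accepting a const pointer lets callers pass flash-resident data directly. */
static int header_matches(const uint8_t *buf, size_t len)
{
    if (len < sizeof(s_frame_header)) {
        return 0;
    }
    for (size_t i = 0; i < sizeof(s_frame_header); i++) {
        if (buf[i] != s_frame_header[i]) {
            return 0;
        }
    }
    return 1;
}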

Determining Stack Size

In FreeRTOS, task stacks are usually allocated from the heap. The stack size for each task is fixed and passed as an argument to xTaskCreate(). Each task can use up to its allocated stack size; exceeding this will cause an otherwise valid program to crash with a stack overflow or heap corruption.

Therefore, determining the optimum size of each task stack, minimizing the required size of each task stack, and minimizing the number of task stacks as a whole can all substantially reduce RAM usage.
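For reference, a task's stack size is fixed at creation time, as shown in the minimal sketch below; the task name, 3072-byte stack size, and priority are illustrative starting values to be tuned, not recommendations:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static void sensor_task(void *arg)
{
    for (;;) {
        /* ... the task's normal work goes here ... */
        vTaskDelay(pdMS_TO_TICKS(1000));
    }
}

void start_sensor_task(void)
{
    /* In ESP-IDF, the stack depth argument to xTaskCreate() is in bytes.
     * Tune this value using the stack high water mark (see below). */
    xTaskCreate(sensor_task, "sensor", 3072, NULL, 5, NULL);
}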

Configuration Options for Stack Overflow Detection

End of Stack Watchpoint

The End of Stack Watchpoint feature places a CPU watchpoint at the end of the current stack. If that word is overwritten (such as in a stack overflow), a panic is triggered immediately. End of Stack Watchpoints can be enabled via the CONFIG_FREERTOS_WATCHPOINT_END_OF_STACK option, but can only be used if debugger watchpoints are not already being used.

Stack Canary Bytes

The Stack Canary Bytes feature adds a set of magic bytes at the end of each task's stack, and checks if those magic bytes have changed on every context switch. If those magic bytes are overwritten, a panic is triggered. Stack Canary Bytes can be enabled via the CONFIG_FREERTOS_CHECK_STACKOVERFLOW option.

Note

When using the End of Stack Watchpoint or Stack Canary Bytes, it is possible that a stack pointer skips over the watchpoint or canary bytes on a stack overflow and corrupts another region of RAM instead. Thus, these methods cannot detect all stack overflows.

Run-time Methods to Determine Stack Size

  • uxTaskGetStackHighWaterMark() returns the minimum free stack memory of a task throughout the task's lifetime, which gives a good indication of how much stack memory is left unused by a task.

    • The easiest place to call uxTaskGetStackHighWaterMark() is from the task itself: call uxTaskGetStackHighWaterMark(NULL) to get the current task's high water mark after the task has reached its peak stack usage. For example, if there is a main loop, execute the main loop a number of times covering all possible states, and then call uxTaskGetStackHighWaterMark() (see the sketch after this list).

    • Often, it is possible to subtract almost the entire value returned here from the total stack size of a task, but allow some safety margin to account for unexpected small increases in stack usage at runtime.

  • Call uxTaskGetSystemState() to get a summary of all tasks in the system. This includes their individual stack high watermark values.
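A minimal sketch of the first approach, querying the high water mark from within the task itself; the task, log tag, and one-second period are illustrative:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"

static const char *TAG = "stack_check";

static void worker_task(void *arg)
{
    for (;;) {
        /* ... do the task's normal work here ... */
        vTaskDelay(pdMS_TO_TICKS(1000));

        /* Minimum free stack (in bytes in ESP-IDF) observed so far for this
         * task. A consistently large value after all states have been
         * exercised suggests the task's stack size can be reduced. */
        UBaseType_t high_water = uxTaskGetStackHighWaterMark(NULL);
        ESP_LOGI(TAG, "stack high water mark: %u bytes", (unsigned) high_water);
    }
}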

Reducing Stack Sizes

  • Avoid stack-heavy functions. String formatting functions (like printf()) are particularly heavy users of the stack, so any task which does not ever call these can usually have its stack size reduced.

  • Avoid allocating large variables on the stack. In C, any large structure or array allocated as an automatic variable (i.e., with the default scope of a C declaration) uses space on the stack. To minimize the size of these, allocate them statically and/or see if you can save memory by dynamically allocating them from the heap only when they are needed (see the sketch after this list).

  • Avoid deep recursive function calls. Individual recursive function calls do not always add a lot of stack usage each time they are called, but if each function includes large stack-based variables then the overhead can get quite high.
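As a sketch of the second point in this list, a hypothetical function that needs a large temporary buffer can allocate it from the heap for the duration of the call instead of declaring it as an automatic (stack) variable:

#include <stdlib.h>

#define REPORT_BUF_SIZE 4096  /* hypothetical buffer size */

void send_report(void)
{
    /* A 4 KB automatic array here, e.g. "char buf[REPORT_BUF_SIZE];", would
     * count against the task's stack. Allocating from the heap only while
     * the buffer is needed keeps the task's stack requirement small. */
    char *buf = malloc(REPORT_BUF_SIZE);
    if (buf == NULL) {
        return;  /* handle allocation failure */
    }

    /* ... fill buf and transmit it ... */

    free(buf);  /* release the buffer as soon as it is no longer needed */
}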

Reducing Task Count

Combine tasks. If a particular task is never created, the task's stack is never allocated, thus reducing RAM usage significantly. Unnecessary tasks can typically be removed if those tasks can be combined with another task. In an application, tasks can typically be combined or removed if:

  • The work done by the tasks can be structured into multiple functions that are called sequentially.

  • The work done by the tasks can be structured into smaller jobs that are serialized (via a FreeRTOS queue or similar) for execution by a worker task (see the sketch after this list).
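A minimal sketch of the worker-task pattern from the second point, assuming a hypothetical job_t descriptor and illustrative queue depth, stack size, and priority:

#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"

/* Hypothetical job descriptor: code that previously needed its own task
 * enqueues a small job instead, and one worker task runs jobs sequentially. */
typedef struct {
    void (*run)(void *ctx);
    void *ctx;
} job_t;

static QueueHandle_t s_job_queue;

static void worker_task(void *arg)
{
    job_t job;
    for (;;) {
        if (xQueueReceive(s_job_queue, &job, portMAX_DELAY) == pdTRUE) {
            job.run(job.ctx);   /* execute jobs one at a time */
        }
    }
}

void worker_start(void)
{
    s_job_queue = xQueueCreate(8, sizeof(job_t));
    xTaskCreate(worker_task, "worker", 3072, NULL, 5, NULL);
}

/* Called from any context that previously required a dedicated task. */
bool worker_submit(void (*run)(void *ctx), void *ctx)
{
    job_t job = { .run = run, .ctx = ctx };
    return xQueueSend(s_job_queue, &job, 0) == pdTRUE;
}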

Internal Task Stack Sizes

ESP-IDF allocates a number of internal tasks for housekeeping purposes or operating system functions. Some are created during the startup process, and some are created at runtime when particular features are initialized.

The default stack sizes for these tasks are usually set conservatively high to allow all common usage patterns. Many of the stack sizes are configurable, and it may be possible to reduce them to match the real runtime stack usage of the task.

Important

If internal task stack sizes are set too small, ESP-IDF will crash unpredictably. Even if the root cause is a task stack overflow, this is not always obvious when debugging. It is recommended to reduce internal stack sizes only carefully (if at all), paying close attention to the high water mark free space under load. If reporting an issue that occurs when internal task stack sizes have been reduced, please always mention this fact and include the specific configuration that is being used.

Note

Aside from built-in system features such as ESP-timer, if an ESP-IDF feature is not initialized by the firmware, then no associated task is created. In those cases, the stack usage is zero, and the stack-size configuration for the task is not relevant.

Reducing Heap Usage

For functions that assist in analyzing heap usage at runtime, see Heap Memory Debugging.

Normally, optimizing heap usage consists of analyzing how the heap is used, removing malloc() calls for buffers that are not actually needed, reducing the corresponding allocation sizes, or freeing previously allocated buffers earlier.
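For example, a feature that only needs a working buffer while it is active can allocate the buffer on initialization and free it on deinitialization, mirroring the approach used by ESP-IDF components described earlier; the function names and 2048-byte size below are hypothetical:

#include <stdint.h>
#include <stdlib.h>
#include "esp_err.h"

static uint8_t *s_work_buf;   /* working buffer, allocated only while needed */

esp_err_t feature_init(void)
{
    s_work_buf = malloc(2048);
    return (s_work_buf != NULL) ? ESP_OK : ESP_ERR_NO_MEM;
}

void feature_deinit(void)
{
    free(s_work_buf);   /* free as soon as the feature is no longer needed */
    s_work_buf = NULL;
}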

There are some ESP-IDF configuration options that can reduce heap usage at runtime:

Note

There are other configuration options that increase heap usage at runtime if changed from the defaults. These options are not listed above, but the help text for the configuration item will mention if there is some memory impact.

Optimizing IRAM Usage

If the app allocates more static IRAM than is available, the app will fail to build, and linker errors such as section '.iram0.text' will not fit in region 'iram0_0_seg', IRAM0 segment data does not fit, and region 'iram0_0_seg' overflowed by 84 bytes will be seen. If this happens, it is necessary to find ways to reduce static IRAM usage in order to link the application.

To analyze the IRAM usage in the firmware binary, use Measuring Static Sizes. If the firmware fails to link, steps to analyze the issue are described in Showing Size When Linker Fails.

The following options will reduce IRAM usage of some ESP-IDF features:

Using SRAM1 for IRAM

The SRAM1 memory area is normally used for DRAM, but it is possible to use parts of it for IRAM with CONFIG_ESP_SYSTEM_ESP32_SRAM1_REGION_AS_IRAM. This memory was previously reserved for DRAM data usage (e.g., .bss) by the ESP-IDF second stage bootloader and later added to the heap. After this option was introduced, the bootloader DRAM size was reduced to a value closer to what it actually needs.

To use this option, the second stage bootloader must be able to recognize that the new SRAM1 area is also a valid load address for an image segment. If the second stage bootloader was compiled before this option existed, it will not be able to load an app that has code placed in this new extended IRAM area. This would typically happen if you are doing an OTA update, where only the app is updated.

If the IRAM section were to be placed in an invalid area, then this would be detected during the boot up process, and result in a failed boot:

E (204) esp_image: Segment 5 0x400845f8-0x400a126c invalid: bad load address range

Warning

Apps compiled with CONFIG_ESP_SYSTEM_ESP32_SRAM1_REGION_AS_IRAM may fail to boot, if used together with a second stage bootloader that was compiled before this config option was introduced. If you are using an older bootloader and updating over OTA, please test carefully before pushing any updates.

Any memory that ends up unused for static IRAM will be added to the heap.

Putting C Library in Flash

When compiling for ESP32 revisions older than ECO3 (CONFIG_ESP32_REV_MIN), the PSRAM Cache bug workaround (CONFIG_SPIRAM_CACHE_WORKAROUND) option is enabled, and the C library functions normally located in ROM are recompiled with the workaround and placed into IRAM instead. For most applications, it is safe to move many of the C library functions into flash, reclaiming some IRAM. Corresponding options include:

The exact amount of IRAM saved depends on how much C library code is actually used by the application. In addition, the following options may be used to move more of the C library code into flash, although note that this may result in reduced performance. Be careful not to call C library functions that have been moved to flash from interrupts allocated with the ESP_INTR_FLAG_IRAM flag, as these may execute while the cache is disabled; refer to IRAM-Safe Interrupt Handlers for more details. For this reason, the functions itoa, memcmp, memcpy, memset, strcat, strcmp, and strlen are always put in IRAM.
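To illustrate the caution above, an interrupt allocated with ESP_INTR_FLAG_IRAM can defer any work that relies on flash-resident C library functions (such as logging or string formatting) to a task. The sketch below is only an outline under that assumption: the handler and task names are made up, and the interrupt allocation itself is not shown.

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_attr.h"

static TaskHandle_t s_handler_task;

/* An ISR allocated with ESP_INTR_FLAG_IRAM may run while the cache is
 * disabled, so it must not call C library functions placed in flash.
 * Here it only notifies a task, which then does the heavier work. */
static void IRAM_ATTR my_isr(void *arg)
{
    BaseType_t hp_task_woken = pdFALSE;
    vTaskNotifyGiveFromISR(s_handler_task, &hp_task_woken);
    if (hp_task_woken == pdTRUE) {
        portYIELD_FROM_ISR();
    }
}

static void handler_task(void *arg)
{
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
        /* Safe to call printf()/logging and other flash-resident code here. */
    }
}

void handler_start(void)
{
    xTaskCreate(handler_task, "isr_handler", 3072, NULL, 10, &s_handler_task);
    /* ... allocate the interrupt with ESP_INTR_FLAG_IRAM and my_isr here ... */
}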

Note

Moving frequently-called functions from IRAM to flash may increase their execution time.

Note

Other configuration options exist that will increase IRAM usage by moving some functionality into IRAM, usually for performance, but the default option is not to do this. These are not listed here. The IRAM size impact of enabling these options is usually noted in the configuration item help text.

