RTOS Kernel Control
RTOS kernel control refers to the mechanisms by which the scheduler coordinates task execution, manages system resources, and implements multitasking. This document covers only some of the commonly used kernel control macros; for more information, refer to the official FreeRTOS documentation.
Task Switching
Macro Definition Prototype
void taskYIELD( void );
taskYIELD() is used to explicitly trigger a task switch: the current task voluntarily gives up the processor, a context switch occurs, and the scheduler selects and runs a ready task of the same or higher priority. It is commonly used to coordinate the execution order of tasks with the same priority.
Note
After taskYIELD() is called, the current task remains in the ready state and may continue to run in the next scheduling cycle. If no other ready task has a priority higher than or equal to that of the current task, the scheduler will simply select the current task again.
In the symmetric multiprocessing (SMP) system used by ESP-IDF, each core schedules independently, so calling taskYIELD() only affects the core it is called on.
When configUSE_PREEMPTION is set to 1, the scheduler always runs the highest-priority ready task, so a higher-priority task would already have preempted the current one. Calling taskYIELD() therefore does not switch to a higher-priority task, but it does trigger a switch between tasks of the same priority.
Example 1: Explicit Switching Between Tasks
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
void vTask1(void *pvParameters)
{
int count = 0;
while (1)
{
printf("Task1 count: %d\n", count++);
taskYIELD();
}
}
void vTask2(void *pvParameters)
{
int count = 0;
while (1)
{
printf("Task2 count: %d\n", count++);
taskYIELD();
}
}
void app_main(void)
{
// Create two tasks with the same priority and bind them to the same core
xTaskCreatePinnedToCore(vTask1, "Task1", 2048, NULL, 1, NULL, 0);
xTaskCreatePinnedToCore(vTask2, "Task2", 2048, NULL, 1, NULL, 0);
}
The execution result of the above code is as follows:
Task1 count: 0
Task1 count: 1
Task2 count: 0
Task1 count: 2
Task2 count: 1
Task1 count: 3
Task2 count: 2
This example shows two tasks of the same priority explicitly yielding the CPU through taskYIELD(), achieving a basic task-switching effect. The output shows the two tasks running alternately.
This example runs in the ESP-IDF environment, with the following features and notes:
The tasks must be pinned to the same core: taskYIELD() only affects the ready-task queue of the core it runs on, so tasks pinned to different cores cannot alternate through taskYIELD().
In ESP-IDF, app_main() is itself a FreeRTOS task with a default priority of 1. If the first task created has a higher priority than app_main(), it will preempt app_main() as soon as it becomes ready, and the remaining tasks may never be created. When creating multiple tasks, therefore, make sure the creation logic completes before it can be preempted, or create the tasks step by step from within a task, as in the sketch below.
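As a minimal sketch of one way to avoid this pitfall (illustrative only; the task names, priorities, and stack sizes are assumptions, not taken from the example above), a higher-priority worker can be created from inside a task that runs at the same priority as app_main():

#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static void vWorker(void *pvParameters)
{
    while (1)
    {
        printf("Worker running\n");
        vTaskDelay(pdMS_TO_TICKS(1000));
    }
}

static void vSetup(void *pvParameters)
{
    // The high-priority worker is started from inside a task instead of
    // directly in app_main(), so it cannot preempt app_main() during the
    // initial task-creation phase.
    xTaskCreatePinnedToCore(vWorker, "Worker", 2048, NULL, 5, NULL, 0);
    vTaskDelete(NULL);   // the setup task has finished its job
}

void app_main(void)
{
    // Same priority as app_main() (1), so app_main() is not preempted
    // before this call returns.
    xTaskCreatePinnedToCore(vSetup, "Setup", 2048, NULL, 1, NULL, 0);
}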
Compared with vTaskDelay(), taskYIELD() is a non-blocking, explicit yield of the CPU. After the call the task remains ready, and the scheduler will try to switch to another task of the same priority, but a switch is not guaranteed.
vTaskDelay(), on the other hand, puts the task into the blocked state, explicitly suspending it for a period of time, so the scheduler will definitely switch to another task.
taskYIELD() is therefore better suited to quick cooperative switching between tasks, while vTaskDelay() is better suited to periodic work or deliberately suspending a task for a while.
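As a rough illustration of this difference (a sketch under the same ESP-IDF assumptions as the example above; do_work() is a hypothetical placeholder), the two loop bodies below behave very differently under the scheduler:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

// Hypothetical placeholder for a short, non-blocking piece of work.
static void do_work(void) { }

// Cooperative yield: the task stays in the ready state and may run again
// immediately if no other ready task shares its priority.
void vCooperativeTask(void *pvParameters)
{
    while (1)
    {
        do_work();
        taskYIELD();
    }
}

// Blocking delay: the task leaves the ready list for about 100 ms, so
// other tasks (including lower-priority ones and the idle task) can run.
void vPeriodicTask(void *pvParameters)
{
    while (1)
    {
        do_work();
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}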
Critical Section Control
Macro Definition Prototype
// Native FreeRTOS
void taskENTER_CRITICAL( void );
void taskEXIT_CRITICAL( void );
// In the ESP-IDF environment
void portENTER_CRITICAL(portMUX_TYPE *mux);
void portEXIT_CRITICAL(portMUX_TYPE *mux);
taskENTER_CRITICAL() and taskEXIT_CRITICAL() are used to enter and exit critical sections, protecting the code inside a critical section from interrupts and task switches and thereby guaranteeing the atomicity of the operations and the consistency of the data.
taskENTER_CRITICAL(): disables interrupts (or raises the interrupt mask level), prevents task switching, and enters the critical section.
taskEXIT_CRITICAL(): restores the interrupt state, allows task switching again, and exits the critical section.
These two macros are commonly used in multitasking environments to prevent several tasks from accessing a shared resource at the same time, avoiding data conflicts and execution errors.
Note
Atomicity means that a group of operations cannot be interrupted or divided during execution: either all of them complete or none of them execute, and the system never observes an intermediate state.
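As a concrete illustration (a sketch; the variable and function names are only examples), even a single C statement such as an increment is not atomic, because it compiles into several machine steps that can be interleaved with another task or an interrupt:

// shared_counter++ is typically executed as three separate steps:
//   1. load shared_counter from memory into a register
//   2. add 1 to the register
//   3. store the register back to memory
// If another task or an ISR modifies shared_counter between steps 1 and 3,
// one of the two updates is silently lost.
volatile int shared_counter = 0;

void increment_unprotected(void)
{
    shared_counter++;   // not safe without critical-section protection
}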
In native FreeRTOS and in the ESP-IDF environment, these two macros both enter and exit critical sections, but their usage and internal implementation differ.
Native FreeRTOS usually runs on a single-core system: taskENTER_CRITICAL() and taskEXIT_CRITICAL() take no parameters and protect shared resources mainly by disabling and then restoring interrupts on the current core.
In the multi-core architecture of ESP-IDF, the two macros are wrapped as versions that take a parameter, a cross-core lock of type portMUX_TYPE, and combine interrupt masking with a spinlock to synchronize and protect accesses across cores, avoiding race conditions.
The reason for this difference is that a single-core system only needs to mask local interrupts to keep a critical section safe, while a multi-core system must use a cross-core lock to guarantee atomicity and data consistency when several cores access the same shared resource.
For further information, refer to the FreeRTOS critical sections section of the official ESP-IDF documentation.
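For comparison, a minimal sketch of the native, single-core FreeRTOS form, in which the macros take no parameters, might look like the following (the task and variable names are placeholders, not part of the ESP-IDF example below):

#include "FreeRTOS.h"
#include "task.h"

volatile int shared_counter = 0;

void vCounterTask(void *pvParameters)
{
    for (;;)
    {
        taskENTER_CRITICAL();   // interrupts masked, no task switch can occur
        shared_counter++;       // protected read-modify-write
        taskEXIT_CRITICAL();    // interrupt state restored

        vTaskDelay(pdMS_TO_TICKS(100));
    }
}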
Note
A mutex (mutual exclusion lock) is a synchronization mechanism used in multitasking or multithreaded environments to protect shared resources, preventing the data conflicts or inconsistencies that arise when several tasks access them at the same time.
It ensures that only one task at a time can enter the critical section and access the shared resource, thereby avoiding race conditions.
Example 1: Entering and Exiting Critical Sections in the ESP-IDF Environment
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
volatile int shared_var = 0;
// Define the mutex
static portMUX_TYPE my_mux = portMUX_INITIALIZER_UNLOCKED;
void vTask1(void *pvParameters)
{
while (1)
{
portENTER_CRITICAL(&my_mux);
shared_var++;
portEXIT_CRITICAL(&my_mux);
printf("Task 1: shared_var = %d\n", shared_var);
vTaskDelay(pdMS_TO_TICKS(100));
}
}
void vTask2(void *pvParameters)
{
while (1)
{
portENTER_CRITICAL(&my_mux);
shared_var++;
portEXIT_CRITICAL(&my_mux);
printf("Task 2: shared_var = %d\n", shared_var);
vTaskDelay(pdMS_TO_TICKS(100));
}
}
void app_main(void)
{
xTaskCreatePinnedToCore(vTask1, "Task1", 2048, NULL, 1, NULL, 0);
xTaskCreatePinnedToCore(vTask2, "Task2", 2048, NULL, 1, NULL, 1);
}
The execution result of the above code is as follows:
Task 1: shared_var = 1
Task 2: shared_var = 2
Task 1: shared_var = 3
Task 2: shared_var = 4
Task 1: shared_var = 5
Task 2: shared_var = 6
Task 1: shared_var = 7
Task 2: shared_var = 8
Task 1: shared_var = 9
Task 2: shared_var = 10
This example demonstrates the basic principle and practice of safely accessing shared resources on the multi-core ESP architecture.
Two tasks pinned to different CPU cores concurrently operate on the shared variable shared_var, and the portMUX_TYPE spinlock, used together with the portENTER_CRITICAL() and portEXIT_CRITICAL() macros, provides mutual exclusion for the shared variable, avoiding race conditions and data corruption.
The critical section contains only the increment operation, keeping it short and efficient, and the print is performed outside the critical section to avoid problems such as deadlock caused by calling blocking functions inside it.
The two tasks run on fixed cores, making full use of multi-core parallelism. The output shows shared_var increasing in order, confirming that the critical-section protection works and demonstrating how to access shared resources safely in a multi-core environment.
When using critical sections, note the following:
The spinlock variable is statically initialized with the portMUX_INITIALIZER_UNLOCKED macro, so its initial state is "unlocked" and it can be used safely right away.
A deadlock is a situation in which a task waits for a resource that is never released and can neither switch away nor continue, freezing the system. Blocking functions should therefore be avoided inside a critical section, and the protected code should be kept short and fast, as in the pattern sketched below.
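A common pattern that follows both of these rules (a sketch building on the example above, reusing its shared_var and my_mux; the task name is a placeholder) is to take a snapshot of the shared data inside the critical section and do any slow or blocking work, such as printing, outside it:

void vTaskSafePrint(void *pvParameters)
{
    while (1)
    {
        int local_copy;

        portENTER_CRITICAL(&my_mux);    // keep the protected region short
        shared_var++;
        local_copy = shared_var;        // snapshot taken while holding the lock
        portEXIT_CRITICAL(&my_mux);

        // Slow or blocking calls stay outside the critical section.
        printf("shared_var snapshot: %d\n", local_copy);
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}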