System workqueue: Prevent blocking API calls #87522

Conversation

Collaborator

@bjarki-andreasen bjarki-andreasen commented Mar 23, 2025

It is inherently unsafe to call blocking APIs from the system work queue, as it is shared opaquely between modules; a module may use the system work queue internally to unblock the very API call that is blocking it, which results in a deadlock.

This particular deadlock is extraordinarily hard to hunt down, given there is no hard fault, and it affects seemingly random parts of the system. Drivers and subsystems just "stop responding" from a seemingly arbitrary, unrelated API call.

Subsystems which rely heavily on the system workqueue already treat blocking API calls as unsafe, and have implemented their own workarounds to address this.

This PR addresses the issue by extending the documentation to state that it is inherently unsafe to use the system work queue for blocking API calls, and by introducing an optional check, similar to a stack sentinel or spinlock validation, which invokes a kernel oops if a work item passed to the system work queue attempts a blocking call. Lastly, it adds the new kernel API k_is_in_sys_work() for checking whether the calling context is within a work item handler passed to the system workqueue.

Note the new API is k_is_in_sys_work(), not k_is_in_sys_workqueue_thread(). The thread is allowed to unready when it is not busy (between items, for example); it is only while executing a work item handler that it is not allowed to unready :)
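A minimal sketch of how the new API might be used by a library, assuming only that k_is_in_sys_work() returns true while a system workqueue item is being serviced; the function and semaphore are illustrative, not part of this PR:

#include <errno.h>
#include <zephyr/kernel.h>

/* Hypothetical library call that normally waits for a completion signal. */
int my_lib_request(struct k_sem *completion)
{
        if (k_is_in_sys_work()) {
                /* Blocking here could deadlock if the completion is signalled
                 * from another item on the same system workqueue.
                 */
                return -EWOULDBLOCK;
        }

        /* Not servicing a system workqueue item: blocking is acceptable. */
        return k_sem_take(completion, K_SECONDS(1));
}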

@bjarki-andreasen bjarki-andreasen changed the title System workqueue: prevent blocking API calls System workqueue: Prevent blocking API calls Mar 23, 2025
Add warning to workqueue docs, explaining that using the system work
queue for blocking work cannot be done safely.

Signed-off-by: Bjarki Arge Andreasen <[email protected]>
@faxe1008
Collaborator

Very good catch, have found myself tracking down issues because of this multiple times :^)

System workqueue items must not use blocking APIs, like k_msleep().
Replace k_msleep() with k_busy_wait() to both adhere to this rule
and to better emulate more realistic work being done by the system
workqueue.

Signed-off-by: Bjarki Arge Andreasen <[email protected]>
@bjarki-andreasen
Collaborator Author

@JordanYates

I doubt the kernel is using the system work queue for blocking operations anywhere.
You are "helping" by mandating that they can't do something.

The system work queue is being misused in-tree in some tests, drivers, and subsystems, I'm helping by mandating that they, and user applications, not do that. I truly fail to understand why you are so opposed to this.

Several of those subsystems (Bluetooth and RTIO) run user provided callbacks, is it left up to the subsystem to define whether user callbacks are allowed to block or not?

No work should be done in callbacks by default, and definitely no blocking work; just signal/delegate work to threads which you have control of. I thought that was commonly understood, given that in most cases the user does not know the context the callback is called from. There are obviously cases where the callback context is known, in which case it is fine, but for a callback from a device driver, for example, definitely don't do work from there, as it is likely called from an ISR.
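A minimal sketch of the signal-and-delegate pattern described above; the callback, semaphore, and thread names are illustrative, not from this PR:

#include <zephyr/device.h>
#include <zephyr/kernel.h>

K_SEM_DEFINE(data_ready_sem, 0, 1);

/* Driver callback: may be invoked from an ISR, so it only signals. */
static void data_ready_cb(const struct device *dev, void *user_data)
{
        k_sem_give(&data_ready_sem);
}

/* Application thread the developer controls; blocking here is fine. */
static void worker_fn(void *p1, void *p2, void *p3)
{
        while (1) {
                k_sem_take(&data_ready_sem, K_FOREVER);
                /* Do the potentially blocking work here. */
        }
}

K_THREAD_DEFINE(worker_tid, 1024, worker_fn, NULL, NULL, NULL, 7, 0, 0);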

I understand that this PR solves the surface level issue (blocking on the system work queue), but doesn't actually solve any deadlocks itself.

It does not solve them, it prevents implementing a design pattern which can cause them.

If every blocking user of the system work queue would be required to define their own work queue under this new regime, why not just create a dedicated work queue only for those subsystems known to be problematic, and leave everyone else to use the system work queue as it is?

Because, it is fundamentally unsafe to do so. See PR description regarding deadlocking.

@JordanYates
Collaborator

I truly fail to understand why you are so opposed to this.

Because a much more limited version of this was merged for Bluetooth and it triggered months of issues and PRs before finally being reverted. I just don't want to see the same thing repeated here. I am perfectly okay with this happening if the consequences have been acknowledged.

IMO this should only be merged AFTER existing users have been transitioned to individual work queues, doing it before is just asking for problems.

kernel/sched.c Outdated
@@ -523,6 +523,9 @@ static inline void z_vrfy_k_thread_resume(k_tid_t thread)

static void unready_thread(struct k_thread *thread)
{
#ifndef CONFIG_SYSTEM_WORKQUEUE_BLOCKING
__ASSERT_NO_MSG(!k_is_in_sys_work());
Collaborator

Let's use k_oops() instead. Assertions are for expressions that we know will be true because we have control over all code that could affect it. Application code is unknown to us, so we cannot make assertions about it.

Suggested change
__ASSERT_NO_MSG(!k_is_in_sys_work());
if (k_is_in_sys_work()) {
k_oops();
}

Collaborator Author

Agreed :)

@alwa-nordic
Collaborator

Will Zephyr define a second workqueue for tasks that need to block? Then the same reasoning applied to the system workqueue in this PR will apply to that queue as well.

No, it is fundamentally unsafe to do so.

I agree. We should consider that Zephyr has dynamic threads. These look like cheap threads that are perfect for short-lived tasks that need their own stack but don't want the static allocation.
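A minimal sketch of that idea, assuming CONFIG_DYNAMIC_THREAD is enabled; the task, priority, and sizes are illustrative:

#include <errno.h>
#include <zephyr/kernel.h>

static struct k_thread task_thread;

static void short_lived_task(void *p1, void *p2, void *p3)
{
        /* Blocking is fine here: this thread exists only for this task. */
}

int spawn_short_lived_task(void)
{
        /* Dynamically allocated stack; free it with k_thread_stack_free()
         * once the thread has been joined.
         */
        k_thread_stack_t *stack = k_thread_stack_alloc(1024, 0);

        if (stack == NULL) {
                return -ENOMEM;
        }

        k_thread_create(&task_thread, stack, 1024, short_lived_task,
                        NULL, NULL, NULL, K_PRIO_PREEMPT(10), 0, K_NO_WAIT);
        return 0;
}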

@bjarki-andreasen bjarki-andreasen force-pushed the sys-workq-unready-check branch 2 times, most recently from 95b554b to 0668826 Compare March 26, 2025 11:21
@pdgendt
Collaborator

pdgendt commented Mar 26, 2025

Question: If some high-level call is done in the system work queue that would cause a mutex/semaphore/... wait in some lower level (for example in an SPI driver), will this result in a kernel oops if the Kconfig symbol isn't set?
I typically see no issue with these "blocks" as they should be freed after a very short delay.

IMO this will be confusing for new users. Also it's perfectly valid to wait in other threads, why should we make the system work queue special?

EDIT: It's default off, which is better IMO.

@bjarki-andreasen
Collaborator Author

bjarki-andreasen commented Mar 26, 2025

Question: If some high-level call is done in the system work queue that would cause a mutex/semaphore/... wait in some lower level (for example in an SPI driver), will this result in a kernel oops if the Kconfig symbol isn't set? I typically see no issue with these "blocks" as they should be freed after a very short delay.

If the symbol is not selected, no kernel oops :)

The issue with "blocks" here is that they have to be un-blocked by another context. If that context is another thread, and the block is very short, no problem. However, if the un-block needs to be performed by a later work item passed to the same work queue, there is a deadlock.
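A minimal sketch of that deadlock, assuming both items end up on the system workqueue; all names are illustrative:

#include <zephyr/kernel.h>

K_SEM_DEFINE(done_sem, 0, 1);

/* First work item: blocks until done_sem is given. */
static void waiter_handler(struct k_work *work)
{
        k_sem_take(&done_sem, K_FOREVER); /* never returns */
}

/* Second work item: would give done_sem, but it is queued behind the
 * waiter on the same single-threaded queue, so it never runs.
 */
static void giver_handler(struct k_work *work)
{
        k_sem_give(&done_sem);
}

K_WORK_DEFINE(waiter_work, waiter_handler);
K_WORK_DEFINE(giver_work, giver_handler);

void trigger_deadlock(void)
{
        k_work_submit(&waiter_work);
        k_work_submit(&giver_work); /* the system workqueue is now stuck */
}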

IMO this will be confusing for new users. Also it's perfectly valid to wait in other threads, why should we make the system work queue special?

The system workqueue is already special, in that it has an "unlimited"/uncontrolled scope, compared to RTIO for example where every item passed to it is known and accounted for. Any module, driver, subsystem etc. on either side of an API call can delegate work to the system workqueue, so there is no guarantee that the system workqueue is not both the calling context of a blocking call and the context supposed to unblock it.

The deadlock which inspired me to create this PR is users calling pm_device_suspend() from a system workqueue item on a modem which uses the modem_cellular.c device driver (or any device which uses the modem subsystem); this deadlocks because the modem subsystem uses the system workqueue internally for AT command communication.

Add k_is_in_sys_work() API which returns true if the thread context
is the system workqueue thread and the thread is currently servicing
a work item. Useful for checking if blocking is safe or required in
the case of calling an API which uses the system work queue
internally.

Signed-off-by: Bjarki Arge Andreasen <[email protected]>
The system workqueue should never be unreadied from a work item
handler (while busy). This commit implements an optional check
which will invoke a kernel oops if a blocking operation is
attempted from a work item passed to the system workqueue.

Signed-off-by: Bjarki Arge Andreasen <[email protected]>
@bjarki-andreasen bjarki-andreasen force-pushed the sys-workq-unready-check branch from 0668826 to f0d9a3b Compare March 27, 2025 06:35
@alwa-nordic
Collaborator

alwa-nordic commented Mar 27, 2025

Question: If some high-level call is done in the system work queue that would cause a mutex/semaphore/... wait in some lower level (for example in an SPI driver), will this result in a kernel oops if the Kconfig symbol isn't set? I typically see no issue with these "blocks" as they should be freed after a very short delay.

IMO this will be confusing for new users. Also it's perfectly valid to wait in other threads, why should we make the system work queue special?

I agree. This is a major problem. I want to expand on your point:

Consider an operation that is extremely fast, but requires a large amount of memory. It would make sense to implement it using static memory protected by a mutex. This operation would never block for any significant amount of time, and would never participate in a deadlock. I think it's reasonable to treat the mutex as an implementation detail. It's completely safe to invoke this operation from the system work queue.

Then consider a library that previously implemented the above operation by allocating on the stack, but transitions to doing it the way described above. I think it would be reasonable for a library author to change it without considering it to be a breaking change. But, with this PR, it would start failing on the system work queue. This is very unfortunate for stability guarantees.

I think the terms 'blocking' and 'non-blocking' don't describe this situation well. This PR actually enforces that all work items on the system work queue are ISR-safe, a stronger requirement, right?

@bjarki-andreasen
Collaborator Author

Question: If some high-level call is done in the system work queue that would cause a mutex/semaphore/... wait in some lower level (for example in an SPI driver), will this result in a kernel oops if the Kconfig symbol isn't set? I typically see no issue with these "blocks" as they should be freed after a very short delay.
IMO this will be confusing for new users. Also it's perfectly valid to wait in other threads, why should we make the system work queue special?

I agree. This is a major problem. I want to expand on your point:

Consider an operation that is extremely fast, but requires a large amount of memory. It would make sense to implement it using static memory protected by a mutex. This operation would never block for any significant amount of time, and would never participate in a deadlock. I think it's reasonable to treat the mutex as an implementation detail. It's completely safe to invoke this operation from the system work queue.

In this case, use a spinlock. Accessing shared data is allowed from the system workqueue :) Mutexes and semaphores only become relevant once you start calling other APIs which you don't have control over.
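A minimal sketch of the spinlock variant suggested here; the buffer and function are illustrative:

#include <errno.h>
#include <string.h>
#include <zephyr/kernel.h>

/* Large static scratch buffer protected by a spinlock instead of a mutex,
 * so the critical section never unreadies the calling thread.
 */
static uint8_t scratch[2048];
static struct k_spinlock scratch_lock;

int fast_operation(const uint8_t *src, size_t len)
{
        if (len > sizeof(scratch)) {
                return -EINVAL;
        }

        k_spinlock_key_t key = k_spin_lock(&scratch_lock);

        /* Short, bounded work on the shared buffer; safe from the system
         * workqueue because nothing here blocks.
         */
        memcpy(scratch, src, len);

        k_spin_unlock(&scratch_lock, key);
        return 0;
}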

Then consider a library that previously implemented the above operation by allocating on the stack, but transitions to doing it the way described above. I think it would be reasonable for a library author to change it without considering it to be a breaking change. But, with this PR, it would start failing on the system work queue. This is very unfortunate for stability guarantees.

Should have been using a spinlock :)

I think the terms 'blocking' and 'non-blocking' don't describe this situation well. This PR actually enforces that all work items on the system work queue are ISR-safe, a stronger requirement, right?

That is one way to put it : ) They need to be non-blocking, which is also a requirement of ISR-safe code. The difference is that you get a large stack, at the cost of higher latency, when using the system work queue compared to actually calling something from an ISR :)

@carlescufi
Member

Architecture WG meeting:

  • @carlescufi asks how this is useful if we cannot enforce this in different subsystems, given that this will be disabled by default
  • @nashif suggests defining that the system workqueue should be reserved for the system, and disallowing blocking via an ASSERT statement. But today the system workqueue is available to all code, which makes it hard to cooperate with it
  • @teburd thinks it is a good idea, but there's a bunch of code in the tree that uses the system workqueue to call blocking code. How would we fix the different components of the tree to ensure they do not block?
  • @bjarki-andreasen says that the PM subsystem needs its own workqueue no matter what, so this PR is about improving Zephyr's use of the system workqueue
  • @carlescufi suggests that blocking in the system workqueue is not always wrong
  • @henrikbrixandersen suggests that individual subsystems would deal with this problem themselves by adding their workqueues when required by the latency expectations
  • @bjarki-andreasen thinks we should not introduce this if the ultimate goal is not to avoid blocking in the system workqueue. @nashif thinks the same

@@ -103,6 +103,15 @@ operations that are potentially blocking (e.g. taking a semaphore) must be
used with care, since the workqueue cannot process subsequent work items in
its queue until the handler function finishes executing.

.. warning::
Member

why is this warning re the system workqueue being added here when we have a section dedicated to the system workqueue that already touches on the subject?

[screenshot of the existing system workqueue documentation section]

Collaborator Author

Did not see this warning; it lacks the warning regarding deadlocking, which is the crucial one :) I can move it to this section if we decide to continue with this PR :)

Collaborator

@teburd teburd Apr 2, 2025

Isn't the deadlocking issue true of any work queue though? There's nothing particularly special about the system work queue other than it tends to get used by default a lot?

I also think it's worth pointing out a simple scenario where this occurs as well.

E.g. one work item is taking a semaphore that a subsequent work item is giving. The work queue is now deadlocked.

Blocking calls aren't inherently the issue here either, I'd note; they're a possible symptom but not the cause of the deadlock.

A call to i2c_transfer() for example in a work queue item is a blocking call, and may cause the work queue thread to pend. Just because it blocks doesn't inherently mean there will be a deadlock!

Collaborator Author

Isn't the deadlocking issue true of any work queue though? There's nothing particularly special about the system work queue other than it tends to get used by default a lot?

I also think it's worth pointing out a simple scenario where this occurs as well.

E.g. one work item is taking a semaphore that a subsequent work item is giving. The work queue is now deadlocked.

Blocking calls aren't inherently the issue here either, I'd note; they're a possible symptom but not the cause of the deadlock.

A call to i2c_transfer() for example in a work queue item is a blocking call, and may cause the work queue thread to pend. Just because it blocks doesn't inherently mean there will be a deadlock!

The "which is available to any application or kernel code" part it what makes it true especially for the sys workq, given an owner of the queue would know all work passed to the queue, so can prevent deadlocks and manage latencies :)

Collaborator

@peter-mitsis peter-mitsis left a comment

This seems reasonable to me--particularly since it provides a means to allow a project to choose whether or not to allow blocking operations in the system work queue.

Collaborator

@andyross andyross left a comment

Pretty sure the test is in the wrong spot. Agree with the feature on the whole though.

Personally I'm neutral on whether this should apply to all work queues or just the system work queue. Doesn't seem like there's much value to an app trying to block in its own queue, but at the same time it has clear and definable semantics and I guess there's no reason to disallow it.

@@ -600,6 +600,14 @@ config SYSTEM_WORKQUEUE_NO_YIELD
cooperative and a sequence of work items is expected to complete
without yielding.

config SYSTEM_WORKQUEUE_NO_BLOCK
bool "Select whether system work queue enforces non-blocking work items"
help
Collaborator

Maybe default y if ASSERT or something similar? This is a cheap check with clear value, probably wants to be on any time CONFIG_ASSERT=y

@@ -523,6 +523,10 @@ static inline void z_vrfy_k_thread_resume(k_tid_t thread)

static void unready_thread(struct k_thread *thread)
{
if (IS_ENABLED(CONFIG_SYSTEM_WORKQUEUE_NO_BLOCK) && k_is_in_sys_work()) {
k_oops();
Collaborator

This looks wrong to me. "Ready" and "running" aren't the same thing. A thread can be ready but lower priority than _current. Basically: my guess is that this code will oops if you try to k_thread_suspend() a runnable thread out of a work queue item, which would be expected to be legal and work.

You need to add a test for thread == _current at least, but it would probably be better to move this test to reschedule() instead.

Also: probably want a panic here and not an oops. An oops in userspace will kill only the current thread, but a misuse of the system workqueue (which obviously is a kernel thread anyway) is a global failure.

And finally: neither oops nor panic gives any feedback to the poor user whose code blew up. Probably wants a printk() here (or to be expressed as an __ASSERT() when available).
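A sketch of the narrower check suggested in this review (illustrative only, not the PR's actual diff): it adds the thread == _current condition and prints a message before dying, but does not show the suggested relocation to reschedule().

static void unready_thread(struct k_thread *thread)
{
	if (IS_ENABLED(CONFIG_SYSTEM_WORKQUEUE_NO_BLOCK) &&
	    (thread == _current) && k_is_in_sys_work()) {
		printk("blocking call from a system workqueue item\n");
		k_panic();
	}
	/* existing unready logic continues here */
}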

@jhedberg
Member

jhedberg commented Apr 8, 2025

I think it's worth noting that one category of system workqueue "misuse" that this would not prevent is when a work item doesn't block on any kernel object but still ends up blocking the workqueue itself by spending lots of time doing "something", e.g. crypto operations lasting for multiple seconds, like we've seen in #84216.

Collaborator

@JordanYates JordanYates left a comment

This seems reasonable to me--particularly since it provides a means to allow a project to choose whether or not to allow blocking operations in the system work queue.

I don't believe this leads to a choice at all. If there is an option to disable all blocking on the system workqueue, all in-tree system workqueue users need to avoid blocking. Otherwise you can't turn on the feature without the application breaking immediately.

IMO the only reliable way the choice can work is if it defaults to non-blocking in-tree, and downstream users can enable blocking only if their application needs it for whatever reason.

The problem is that you cannot turn this on by default, because there are too many in-tree users that rely on the blocking behavior. If I am wrong and you can enable this by default, I'm happy to give a +1. My 2 cents.

@andyross
Collaborator

andyross commented Apr 8, 2025

If there is an option to disable all blocking on the system workqueue, all in-tree system workqueue users need to avoid blocking. Otherwise you can't turn on the feature without the application breaking immediately.

Are there any such users? I don't think it's unreasonable at all that we enforce a "No Blocking The System Work Queue" rule for in-tree code even if we are more flexible for apps, in which case this PR would still have value.

@brandon-exact
Contributor

brandon-exact commented Apr 8, 2025

If I understand this correctly, wouldn't this require a tree-wide effort to change anything that blocks? For example, these GPIO drivers would need to be updated:

static void tca6424a_handle_interrupt(const struct device *dev)
{
struct tca6424a_drv_data *drv_data = dev->data;
struct tca6424a_irq_state *irq_state = &drv_data->irq_state;
int ret;
uint32_t previous_state;
uint32_t current_state;
uint32_t transitioned_pins;
uint32_t interrupt_status;
k_sem_take(&drv_data->lock, K_FOREVER);
/* Any interrupts enabled? */
if (!irq_state->rising && !irq_state->falling) {
k_sem_give(&drv_data->lock);
return;
}
/* Store previous input state then read new value */
previous_state = drv_data->pins_state.input;
ret = update_input_regs(dev, &current_state);
if (ret != 0) {
k_sem_give(&drv_data->lock);
return;
}
/* Find out which input pins have changed state */
transitioned_pins = previous_state ^ current_state;
/* Mask gpio transactions with rising/falling edge interrupt config */
interrupt_status = (irq_state->rising & transitioned_pins & current_state);
interrupt_status |= (irq_state->falling & transitioned_pins & previous_state);
k_sem_give(&drv_data->lock);
if (interrupt_status) {
gpio_fire_callbacks(&drv_data->callbacks, dev, interrupt_status);
}
}
/**
* @brief Work handler for TCA6424A interrupt
*
* @param work Work struct that contains pointer to interrupt handler function
*/
static void tca6424a_work_handler(struct k_work *work)
{
struct tca6424a_drv_data *drv_data = CONTAINER_OF(work, struct tca6424a_drv_data, work);
tca6424a_handle_interrupt(drv_data->dev);
}

static int sx1509b_handle_interrupt(const struct device *dev)
{
const struct sx1509b_config *cfg = dev->config;
struct sx1509b_drv_data *drv_data = dev->data;
int ret = 0;
uint16_t int_source;
uint8_t cmd = SX1509B_REG_INTERRUPT_SOURCE;
k_sem_take(&drv_data->lock, K_FOREVER);
ret = i2c_write_read_dt(&cfg->bus, &cmd, sizeof(cmd),
(uint8_t *)&int_source, sizeof(int_source));
if (ret != 0) {
goto out;
}
int_source = sys_be16_to_cpu(int_source);
/* reset interrupts before invoking callbacks */
ret = i2c_reg_write_word_be(&cfg->bus, SX1509B_REG_INTERRUPT_SOURCE,
int_source);
out:
k_sem_give(&drv_data->lock);
if (ret == 0) {
gpio_fire_callbacks(&drv_data->cb, dev, int_source);
}
return ret;
}
static void sx1509b_work_handler(struct k_work *work)
{
struct sx1509b_drv_data *drv_data =
CONTAINER_OF(work, struct sx1509b_drv_data, work);
sx1509b_handle_interrupt(drv_data->dev);
}

static void pcf857x_work_handler(struct k_work *work)
{
struct pcf857x_drv_data *drv_data = CONTAINER_OF(work, struct pcf857x_drv_data, work);
k_sem_take(&drv_data->lock, K_FOREVER);
uint32_t changed_pins;
uint16_t input_port_last_temp = drv_data->input_port_last;
int rc = pcf857x_process_input(drv_data->dev, &changed_pins);
if (rc) {
LOG_ERR("Failed to read interrupt sources: %d", rc);
}
k_sem_give(&drv_data->lock);
if (input_port_last_temp != (uint16_t)changed_pins && !rc) {
/** Find changed bits*/
changed_pins ^= input_port_last_temp;
gpio_fire_callbacks(&drv_data->callbacks, drv_data->dev, changed_pins);
}
}
/** Callback for interrupt through some level changes on pcf857x pins*/
static void pcf857x_int_gpio_handler(const struct device *dev, struct gpio_callback *gpio_cb,
uint32_t pins)
{
ARG_UNUSED(dev);
ARG_UNUSED(pins);
struct pcf857x_drv_data *drv_data =
CONTAINER_OF(gpio_cb, struct pcf857x_drv_data, int_gpio_cb);
k_work_submit(&drv_data->work);
}

static int pcal64xxa_process_input(const struct device *dev, gpio_port_value_t *value)
{
const struct pcal64xxa_drv_cfg *drv_cfg = dev->config;
struct pcal64xxa_drv_data *drv_data = dev->data;
int rc;
pcal64xxa_data_t int_sources;
pcal64xxa_data_t input_port;
k_sem_take(&drv_data->lock, K_FOREVER);
rc = drv_cfg->chip_api->inputs_read(&drv_cfg->i2c, &int_sources, &input_port);
if (rc != 0) {
LOG_ERR("%s: failed to read inputs", dev->name);
k_sem_give(&drv_data->lock);
return rc;
}
if (value) {
*value = input_port;
}
/* It may happen that some inputs change their states between above
* reads of the interrupt status and input port registers. Such changes
* will not be noted in `int_sources`, thus to correctly detect them,
* the current state of inputs needs to be additionally compared with
* the one read last time, and any differences need to be added to
* `int_sources`.
*/
int_sources |= ((input_port ^ drv_data->input_port_last) & ~drv_data->triggers.masked);
drv_data->input_port_last = input_port;
if (int_sources) {
pcal64xxa_data_t dual_edge_triggers = drv_data->triggers.dual_edge;
pcal64xxa_data_t falling_edge_triggers =
~dual_edge_triggers & drv_data->triggers.on_low;
pcal64xxa_data_t fired_triggers = 0;
/* For dual edge triggers, react to all state changes. */
fired_triggers |= (int_sources & dual_edge_triggers);
/* For single edge triggers, fire callbacks only for the pins
* that transitioned to their configured target state (0 for
* falling edges, 1 otherwise, hence the XOR operation below).
*/
fired_triggers |= ((input_port & int_sources) ^ falling_edge_triggers);
/* Give back semaphore before the callback to make the same
* driver available again for the callback.
*/
k_sem_give(&drv_data->lock);
gpio_fire_callbacks(&drv_data->callbacks, dev, fired_triggers);
} else {
k_sem_give(&drv_data->lock);
}
return 0;
}
static void pcal64xxa_work_handler(struct k_work *work)
{
struct pcal64xxa_drv_data *drv_data = CONTAINER_OF(work, struct pcal64xxa_drv_data, work);
(void)pcal64xxa_process_input(drv_data->dev, NULL);
}
static void pcal64xxa_int_gpio_handler(const struct device *dev, struct gpio_callback *gpio_cb,
uint32_t pins)
{
ARG_UNUSED(dev);
ARG_UNUSED(pins);
struct pcal64xxa_drv_data *drv_data =
CONTAINER_OF(gpio_cb, struct pcal64xxa_drv_data, int_gpio_cb);
k_work_submit(&drv_data->work);
}

static void mcp23xxx_work_handler(struct k_work *work)
{
struct mcp23xxx_drv_data *drv_data = CONTAINER_OF(work, struct mcp23xxx_drv_data, work);
const struct device *dev = drv_data->dev;
int ret;
k_sem_take(&drv_data->lock, K_FOREVER);
uint16_t intf;
ret = read_port_regs(dev, REG_INTF, &intf);
if (ret != 0) {
LOG_ERR("Failed to read INTF");
goto fail;
}
if (!intf) {
/* Probable causes:
* - REG_GPIO was read from somewhere else before the interrupt handler had a chance
* to run
* - Even though the datasheet says differently, reading INTCAP while a level
* interrupt is active briefly (~2ns) causes the interrupt line to go high and
* low again. This causes a second ISR to be scheduled, which then won't
* find any active interrupts if the callback has disabled the level interrupt.
*/
LOG_ERR("Spurious interrupt");
goto fail;
}
uint16_t intcap;
/* Read INTCAP to acknowledge the interrupt */
ret = read_port_regs(dev, REG_INTCAP, &intcap);
if (ret != 0) {
LOG_ERR("Failed to read INTCAP");
goto fail;
}
/* mcp23xxx does not support single-edge interrupts in hardware, filter them out manually */
uint16_t level_ints = drv_data->reg_cache.gpinten & drv_data->reg_cache.intcon;
intf &= level_ints | (intcap & drv_data->rising_edge_ints) |
(~intcap & drv_data->falling_edge_ints);
k_sem_give(&drv_data->lock);
gpio_fire_callbacks(&drv_data->callbacks, dev, intf);
return;
fail:
k_sem_give(&drv_data->lock);
}
static void mcp23xxx_int_gpio_handler(const struct device *port, struct gpio_callback *cb,
gpio_port_pins_t pins)
{
struct mcp23xxx_drv_data *drv_data =
CONTAINER_OF(cb, struct mcp23xxx_drv_data, int_gpio_cb);
k_work_submit(&drv_data->work);
}

@teburd
Collaborator

teburd commented Apr 8, 2025

If there is an option to disable all blocking on the system workqueue, all in-tree system workqueue users need to avoid blocking. Otherwise you can't turn on the feature without the application breaking immediately.

Are there any such users? I don't think it's unreasonable at all that we enforce "No Blocking The System Work Queue" rule for in-tree code even if we are more flexible for apps, in which case this PR would still have value.

Any fetch+get sensor with trigger handling optionally uses the "GLOBAL_THREAD" option, which is the system workqueue. Probably 50+ drivers.

Mostly fixable by find/replacing k_work_submit with k_work_submit_to_queue and a sensor-specific work queue, perhaps...

This will add a stack which is quite possibly non-negligible for people that were using this method instead of the per-sensor thread.
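A rough sketch of that find/replace, assuming a dedicated sensor work queue; the names, stack size, and priority are illustrative, not from this PR:

#include <zephyr/kernel.h>

K_THREAD_STACK_DEFINE(sensor_workq_stack, 2048);
static struct k_work_q sensor_workq;

void sensor_workq_init(void)
{
        k_work_queue_init(&sensor_workq);
        k_work_queue_start(&sensor_workq, sensor_workq_stack,
                           K_THREAD_STACK_SIZEOF(sensor_workq_stack),
                           K_PRIO_COOP(5), NULL);
}

/* In a trigger handler, instead of k_work_submit(&drv_data->work): */
void sensor_submit_example(struct k_work *work)
{
        k_work_submit_to_queue(&sensor_workq, work);
}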

@bjarki-andreasen
Collaborator Author

bjarki-andreasen commented Apr 9, 2025

I have created an alternative to this PR which monitors for work taking too long, rather than whether the work blocks; see #88345

@carlescufi carlescufi moved this from Todo to In Progress in Architecture Review Apr 14, 2025
@github-project-automation github-project-automation bot moved this from In Progress to Done in Architecture Review May 2, 2025
Labels
Architecture Review, area: Bluetooth Classic, area: Bluetooth Host, area: Bluetooth, area: Input, area: Kernel
Projects
Status: Done