scx_p2dq: Refactor vtime handling of migrating tasks #1833


Merged: 1 commit merged into sched-ext:main on May 21, 2025

Conversation

@hodgesds (Contributor) commented on May 8, 2025

When updating the vtime of a task moving across LLCs, give tasks with a lower weight a boost in vtime. Refactor vtime updating into a helper method and update the enqueue method to properly update vtime after selecting a CPU.
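As a rough sketch of the shape of this refactor (the helper name task_update_vtime, the llc_ctx/task_ctx fields, and slice_ns are assumptions for illustration, not the PR's actual code), the vtime refresh could be pulled into one place and called from enqueue once a target LLC has been chosen:

	/* Sketch only: assumed p2dq-like structures and names. */
	struct llc_ctx {
		u32 id;
		u64 vtime;	/* per-LLC virtual time frontier */
	};

	struct task_ctx {
		u32 llc_id;	/* LLC the task was last enqueued on */
	};

	static void task_update_vtime(struct task_struct *p, struct task_ctx *taskc,
				      struct llc_ctx *llcx, u64 slice_ns)
	{
		if (taskc->llc_id == llcx->id) {
			/* Same LLC: keep accumulated vtime, but clamp it so a
			 * long sleep cannot turn into an unbounded priority boost. */
			u64 vtime_min = llcx->vtime - slice_ns;

			if (p->scx.dsq_vtime < vtime_min)
				p->scx.dsq_vtime = vtime_min;
		} else {
			/* Cross-LLC migration: re-base on the destination LLC's
			 * vtime (how much budget carries over is discussed below). */
			p->scx.dsq_vtime = llcx->vtime;
			taskc->llc_id = llcx->id;
		}
	}

The enqueue path would then call such a helper after selecting a CPU (and therefore an LLC), right before inserting the task into that LLC's vtime-ordered DSQ.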

 */
-	if ((p->flags & PF_KTHREAD) && !taskc->all_cpus &&
+	if ((p->flags & PF_KTHREAD) && p->cpus_ptr == &p->cpus_mask && p->nr_cpus_allowed == nr_cpus &&
@hodgesds (Contributor Author):

This is purely to avoid looking up the task_ctx for affinitized kthreads.
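For context, the broader pattern this fast path follows (sketched below with generic sched_ext kfuncs; the exact condition in this PR differs) is that pinned kthreads can be sent straight to the local DSQ from enqueue, so the scheduler never needs their task_ctx at all:

	/* Common scx pattern, not this PR's exact check: a kthread pinned to a
	 * single CPU is dispatched directly to the local DSQ, skipping any
	 * per-task context lookup. */
	if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
		scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, enq_flags);
		return;
	}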

@arighi (Contributor) left a comment:


I left a comment, but overall LGTM.

return;
}

// Give a slight boost to high priority tasks migrating across LLCs.
Contributor:

So in the case of a cross-LLC migration we reset all the accumulated vtime budget? I'm wondering if we're penalizing migrating tasks too much.

Have you tried, for example, changing just the vtime_min with this new logic?
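One reading of this suggestion, sketched with assumed names (llcx->vtime as the destination LLC's vtime, slice_ns as the budget window): instead of resetting dsq_vtime outright, only raise it to a floor, so a migrating task keeps a bounded amount of its accumulated budget.

	/* Sketch, assumed names: clamp rather than reset on a cross-LLC move. */
	u64 vtime_min = llcx->vtime - slice_ns;

	if (p->scx.dsq_vtime < vtime_min)
		p->scx.dsq_vtime = vtime_min;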

@hodgesds (Contributor Author):

Let me do some more testing on this. I was trying to avoid extra lookups of the previous llc_ctx and use some heuristic to give the right tasks enough of a bump. Perhaps in this case it is worth doing the check, though.

}

// Give a slight boost to high priority tasks migrating across LLCs.
if (p->scx.weight < 100) {
Contributor:

If I recall correctly, higher scx.weight means higher priority. If so, it should be p->scx.weight > 100.

Contributor:

Also, it would be a good idea to document the idea behind this if-else block in more detail.

Contributor:

> If I recall correctly, higher scx.weight means higher priority. If so, it should be p->scx.weight > 100.

Minor note: nice < 0 maps to scx.weight > 100 in terms of scheduling weight.
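Put as code (illustrative only), that mapping means a boost aimed at high-priority tasks would test the opposite direction of the condition in the diff above:

	/* nice 0 maps to scx.weight == 100; negative nice (higher priority) maps
	 * to weights above 100, positive nice to weights below 100. */
	if (p->scx.weight > 100) {
		/* give the migrating high-priority task a small vtime head start */
	}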

@hodgesds (Contributor Author):

Going to go with the simple approach and just set the vtime to the current LLC vtime.
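Concretely (sketch with assumed names), that simplification replaces the weight-based branch quoted above with a plain re-base on the destination LLC's vtime:

	/* Cross-LLC migration: just pick up the destination LLC's current vtime. */
	p->scx.dsq_vtime = llcx->vtime;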

@hodgesds force-pushed the p2dq-vtime-clamp branch from 5e32e10 to e8fe6b5 on May 21, 2025 at 18:46
Refactor vtime updating into a helper method and update the enqueue
method to properly update vtime after selecting a CPU.

Signed-off-by: Daniel Hodges <[email protected]>
@hodgesds force-pushed the p2dq-vtime-clamp branch from e8fe6b5 to 778f1e5 on May 21, 2025 at 19:09
@hodgesds enabled auto-merge on May 21, 2025 at 19:11
@hodgesds added this pull request to the merge queue on May 21, 2025
Merged via the queue into sched-ext:main with commit 7d09941 on May 21, 2025
16 checks passed
@hodgesds deleted the p2dq-vtime-clamp branch on May 21, 2025 at 19:27
4 participants