-
Hello! I'm a newbie to PhysX and am a bit confused as to what is expected from the PxContactPoints that custom shapes are supposed to generate in the generateContacts callback. The documentation is not very explanatory on this...
I appreciate any help on this!
-
Hi Linus,
Great question! The PxContactPoints in the generateContacts callback can indeed be a bit tricky to understand at first. Here's some clarification:
1. *Contact Distance*: This parameter defines the distance within which contact points should be generated. Even if the shapes haven't collided yet, PhysX generates contact points when they are within this distance to ensure smooth transitions in collision response. Think of this as a "proactive" step that allows the physics engine to prepare for potential collisions.
2. *Contact Points in Penetration*: When shapes are already penetrating, PhysX doesn't require you to "step back in time." Instead, you should compute contact points based on the overlapping regions. For example:
   - Identify the features (vertices, edges, or faces) that are closest between the two shapes.
   - Generate contact points at these locations with normals pointing in the direction needed to separate the shapes.
3. *Contact Normal*: The normal vector at a contact point indicates the direction that one shape (s0) needs to move to resolve its collision with the other shape (s1). This is typically calculated as the vector from the contact point on s1 to the contact point on s0, normalized.
4. *Coordinate Space*: All positions and normals should be expressed in world space unless otherwise specified by PhysX.
For a deeper dive into these concepts, I highly recommend Dirk Gregorius' *Robust Contact Creation* paper (<https://media.steampowered.com/apps/valve/2015/DirkGregorius_Contacts.pdf>). It provides excellent visual explanations that might answer many of your questions. Additionally, you can explore the PhysX documentation on Advanced Collision Detection (<https://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/guide/3.3.4/Manual/AdvancedCollisionDetection.html>), which explains concepts like contactOffset and restOffset.
If you have further questions or need clarification on specific points, feel free to ask!
Cheers
-
Hello,
Great feedback, and excellent links!
A follow-up question (which is actually more like a completely new question):
I'm implementing custom shapes in the form of distance fields (voxels). My challenge is that the data for this lives on the GPU, and it is too costly to keep a copy around on the CPU. So I try to predict which areas PhysX will need and copy only those parts over from the GPU each frame, before PhysX needs them. The problem is that even this is costly, and the prediction isn't always correct, resulting in shapes potentially not having their data available for testing.
So my question is: is there a way in PhysX to have some custom shapes do their tests on the GPU, and then take those results back to the solver on the CPU? Like a hybrid CPU/GPU approach? The second alternative would be to move entirely to the GPU, but the problem there is that we can't rely on CUDA. So we'd have to port/implement the entire physics pipeline in HLSL, which even with the CUDA code available feels like a daunting task...
Any thoughts on this?
Cheers,
Linus
Linus Blomberg
Co-Founder
Stockholm, Sweden
elemental.games
-
Hi Viktor,
Thanks for your reply! Yes, that is exactly what I want to do. But launching a GPU shader per ray and waiting for the result would quickly become too expensive (there can be thousands of rays). Is there a way to batch all rays and upload them all at once to the GPU? Also, I assume the PhysX thread needs to stall while waiting for the GPU to return the contacts, or is there some sync point it can proceed up to?
Cheers,
Linus
On Tue, Apr 8, 2025 at 12:20 PM Viktor Reutskyy wrote:
Hi Linus,
From what I understand, you’re not using PhysX’s GPU simulation (because
it relies on CUDA), but your scene data (distance fields/voxels) resides on
the GPU, probably for rendering purposes. You want to reuse this data for
collision detection without the costly transfer to CPU memory. If that’s
correct, you could utilize the
PxCustomGeometry::Callbacks::generateContacts function to implement a
hybrid CPU/GPU workflow. Here’s how it might work:
1. *Launch GPU Collision Detection from the Callback*: When PhysX calls your generateContacts callback, you can dispatch a custom GPU kernel (in HLSL, since CUDA isn't an option) to perform the collision detection directly on the GPU using your voxel/distance field data.
2. *Generate Contact Data on the GPU*: The GPU kernel would compute the necessary contact points, normals, and penetration depths between your custom geometry and other shapes.
3. *Transfer Only Contact Data Back to CPU*: Instead of transferring large chunks of voxel data from GPU to CPU, you would only transfer the small set of computed contact data (positions, normals, penetration depths) back to the CPU. This is typically much faster since contact data is relatively small.
4. *Pass Contact Data to PhysX*: Once the contact data is back on the CPU, you can use it to populate the contactBuffer provided by PhysX in the generateContacts callback and let PhysX handle it from there.
While this hybrid approach avoids moving entirely to a GPU-based physics
pipeline (which would indeed be daunting without CUDA), it does require
some custom implementation. If you decide to pursue this route and need
further clarification, feel free to reach out!
Cheers
-
Hi Linus,
Sure, launching a shader for every raycast won't be efficient. But you don't really need a custom geometry to cast rays. While PhysX's custom geometry has callbacks for scene queries (raycast, sweep, overlap), its main purpose is to pass user-generated contact info to PhysX's contact solver. The scene query callbacks are there only for convenience. If all you need is raycasting, you can just use your own PhysX-independent implementation with batching for better efficiency.
As for the contact generation: instead of launching the contact-generating shader for each colliding pair in the generateContacts() callback, you could compute all the contact info before (or simultaneously) w…