
simln-lib/refactor: fully deterministic produce events #277


Open

f3r10 wants to merge 1 commit into main

Conversation

f3r10 (Collaborator) commented Jun 3, 2025

Description

The goal of this PR is to achieve fully deterministic runs, so that simulations are reproducible.

Changes

  • nodes: HashMap<PublicKey, Arc<Mutex<dyn LightningNode>>>: replace the HashMap with a BTreeMap. A HashMap does not maintain a stable iteration order, which affects the running simulation and makes the results unpredictable. With a BTreeMap, the order of the nodes is always the same (see the sketch below).
  • dispatch_producers acts as a master task: it generates all the payments for the nodes and picks the random destinations, and only then spawns a task to produce the events (produce_events).

Addresses #243
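
To illustrate the ordering difference, here is a minimal standalone sketch (not the PR's code; the node names are placeholders):

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    let entries = [("node_b", 2), ("node_a", 1), ("node_c", 3)];

    // HashMap iteration order depends on a randomized hasher state,
    // so it can change from one run to the next.
    let unordered: HashMap<_, _> = entries.into_iter().collect();
    println!("{:?}", unordered.keys().collect::<Vec<_>>()); // order varies

    // BTreeMap iterates in sorted key order, so every run visits the
    // nodes in the same sequence -- which a seeded simulation needs.
    let ordered: BTreeMap<_, _> = entries.into_iter().collect();
    println!("{:?}", ordered.keys().collect::<Vec<_>>()); // ["node_a", "node_b", "node_c"]
}
```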

f3r10 (Collaborator, Author) commented Jun 4, 2025

Opening in draft; I still need to fix some issues, but I think it is ready for a first review pass @carlaKC.

carlaKC (Contributor) commented Jun 9, 2025

btw don't worry about fixups until this is out of draft - when review hasn't properly started it's okay to just squash em!

carlaKC (Contributor) left a comment:

Direction looking good here! Main comment is that I think we need a step where we replenish our heap by calling generate_payments again?

  • If payment_count().is_none() we return a single payment from generate_payments
  • For RandomPaymentActivity, this means we'll do one payment per node and then shut down?

Related to this: we possibly don't want to queue up tons of events when payment_count is defined (say we want a million payments; we'd queue up a million items, which is a bit of a memory waste). This probably isn't a big deal, because I'd imagine this use case is primarily for smaller numbers, but it's something to keep in mind as we address the above requirement.

Also would be good to rebase this early on to get to a more current base 🙏

f3r10 (Collaborator, Author) commented Jun 10, 2025

> Direction looking good here! Main comment is that I think we need a step where we replenish our heap by calling generate_payments again?

The idea would be to generate all the payments at once, so that the master task dispatches the events.

> If payment_count().is_none() we return a single payment from generate_payments

Yes, in this case only one payment is generated.

> For RandomPaymentActivity, this means we'll do one payment per node and then shut down?

Yes, right now it works in this mode 🤔

> Related to this is that we possibly don't want to queue up tons of events for when payment_count is defined [...]

Yes, you are right; maybe it would be better to create batches of payments. I am going to try to come up with an alternative to reduce the memory waste. 🤔

f3r10 force-pushed the refactor_fully_deterministic_produce_events branch from b06a289 to 1b3a21f on June 10, 2025 20:31
f3r10 (Collaborator, Author) commented Jun 13, 2025

Hi @carlaKC, I've developed a new approach for the event generation system. The core idea is to centralize random number generation to ensure deterministic outcomes for our simulations.

Here's a breakdown of the design:

  1. Central Manager Task: A dedicated thread runs a central manager. This manager is the sole source for generating both random wait times and random destinations. By centralizing this, we ensure that the sequence of random numbers generated for these critical values is entirely reproducible, given a fixed seed.

  2. Executor Event Listeners: For each executor, a separate thread is spawned. These threads act as listeners for payment events, forwarding them to the designated consumers once received.

  3. Payment Event Generators: Concurrently, for each executor, another thread is spawned. These threads are responsible for generating payment events in a continuous loop (e.g., for RandomActivity). Each generator thread communicates with the central manager via a dedicated channel to request a wait time. After awaiting the specified duration, it sends another event to the manager to trigger the calculation of a random destination. Once the destination is determined, the manager dispatches a final event to the respective event listener thread (as described in the previous point).
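
A minimal sketch of the request/response pattern from points 1 and 3, assuming tokio channels and a seeded RNG from the rand crate; RngRequest and run_manager are hypothetical names, not the PR's actual code:

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};
use tokio::sync::{mpsc, oneshot};

/// Messages an executor thread sends to the central manager.
enum RngRequest {
    WaitTime { reply: oneshot::Sender<u64> },
    Destination { reply: oneshot::Sender<usize> },
}

/// The manager owns the only RNG, so for a fixed seed the sequence of
/// generated values is fully reproducible.
async fn run_manager(mut requests: mpsc::Receiver<RngRequest>, seed: u64, node_count: usize) {
    let mut rng = StdRng::seed_from_u64(seed);
    while let Some(request) = requests.recv().await {
        match request {
            RngRequest::WaitTime { reply } => {
                // e.g. a wait time between 1 and 10 seconds.
                let _ = reply.send(rng.gen_range(1..=10));
            }
            RngRequest::Destination { reply } => {
                let _ = reply.send(rng.gen_range(0..node_count));
            }
        }
    }
}
```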

This design ensures that the wait times and final destinations are entirely deterministic across simulation runs. However, there is a new challenge with the non-deterministic order of thread execution.

The Determinism Challenge

While the values generated (wait times, destinations) are fixed if the random number generator is seeded, the order in which the executor threads request these values is not guaranteed. For example, if we have ex1 and ex2 executors:

Execution 1:
    ex1 gets wait_time 0 → destination node_3
    ex2 gets wait_time 1 → destination node_4

Execution 2 (possible non-deterministic order):
    ex2 gets wait_time 0 → destination node_3
    ex1 gets wait_time 1 → destination node_4

This means that even though the sequence of random numbers from the central manager is the same, which executor consumes which number from that sequence is left to the operating system's scheduler, leading to variations in the overall simulation flow.

Proposed Solution for Execution Order

To achieve full simulation determinism, including the order of execution, I'm considering adding a tiny, randomized initial sleep time before each executor thread begins its main loop. While seemingly counter-intuitive, this jitter can effectively "break ties" in thread scheduling in a controlled, reproducible way when combined with a seeded random number generator. This would allow us to deterministically influence which thread acquires the next available random number from the central manager.

WDYT?

carlaKC (Contributor) commented Jun 17, 2025

Deleted previous comment - it had some misunderstandings.

Why can't we keep the current approach of generating a queue of events and then replenish the queue when we run out of events? By generating all of our payment data in one place, we don't need to worry about thread execution order.

I think that this can be as simple as pushing a new event to the queue every time we pop one? We might need to track some state for payment count (because we'll need to remember how many we've had), but for random activity it should be reasonable.

carlaKC (Contributor) commented Jun 17, 2025

Rough sketch of what I was picturing:

Queue up initial set of events:

  • for each executor
    • Get wait time and destination
    • Push wait time, destination and ExecutorKit onto the heap

Read from heap:

  • Pop event off heap
  • Sleep until wait time is reached
  • Send SimulationEvent::SendPayment into the channel for the executor
  • Generate a new wait time and destination from the ExecutorKit
  • Repeat: keep reading from the heap until shutdown

Instinct about this is:

  • Always generating payment destinations in one place fixes our determinism issue
  • Re-queueing on pop + sorting by time means we'll never run out of events for each executor
  • The nasty thing will be payment counts; we're probably going to have to store those in the heap and know when we don't need to queue anything else
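
A minimal, self-contained sketch of that loop, with hypothetical simplified types: simulated time, fixed per-executor intervals instead of a PaymentGenerator, and a plain offset from simulation start as the ordering key (the review discussion below covers why this time needs to be absolute rather than relative):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Slimmed-down stand-in for the heap entry: when to fire and for whom.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct PaymentEvent {
    fire_at_secs: u64, // offset from simulation start
    executor_id: usize,
}

fn main() {
    let intervals = [5u64, 10]; // executor 0 pays every 5s, executor 1 every 10s
    let mut heap = BinaryHeap::new();

    // Queue up the initial set of events: one per executor.
    for (executor_id, interval) in intervals.iter().enumerate() {
        heap.push(Reverse(PaymentEvent { fire_at_secs: *interval, executor_id }));
    }

    // Read from the heap: pop the soonest event (Reverse turns the
    // max-heap into a min-heap), dispatch it, then immediately re-queue
    // the executor's next event so we never run out.
    while let Some(Reverse(event)) = heap.pop() {
        println!("t={}s: SendPayment for executor {}", event.fire_at_secs, event.executor_id);
        if event.fire_at_secs < 30 {
            heap.push(Reverse(PaymentEvent {
                fire_at_secs: event.fire_at_secs + intervals[event.executor_id],
                executor_id: event.executor_id,
            }));
        }
    }
}
```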

simln-lib/refactor: fully deterministic produce events
f3r10 force-pushed the refactor_fully_deterministic_produce_events branch from a99bbff to 2beccfa on June 23, 2025 20:47
f3r10 (Collaborator, Author) commented Jun 24, 2025

Hi @carlaKC, I think it is now working as expected 💪

f3r10 marked this pull request as ready for review June 24, 2025 14:56
f3r10 requested a review from carlaKC June 24, 2025 14:56
carlaKC (Contributor) left a comment:

Some relative timestamp issues that we need to address - these didn't occur to me earlier in the design phase.

I noticed this when testing against a toy network; it would be good to have some unit tests asserting that we get the payment sequence we're expecting (should be reasonable to do with defined activities that have set wait times / counts).

log::info!(
    "Payment count has been met for {}: {c} payments. Stopping the activity.",
    executor.source_info
);
return Ok(());
carlaKC (Contributor):

Shouldn't we continue here? If one activity hits its payment_count, that doesn't mean we're finished with all the other payments.

    payment_generator: Box<dyn PaymentGenerator>,
}

struct PaymentEvent {
    wait_time: Duration,
carlaKC (Contributor):

We're going to have to use an absolute time here - my bad, should have realized that earlier in the design process.

Since we process and sleep all in one queue, a relative value doesn't work because we're always starting from zero when we pop an item off.

For example: say we have two activities, one executes every 5 seconds, the other every 10.

  • We push two items to the heap, one in 5 seconds one in 10 seconds
  • We sleep 5 seconds, then pop the 5 second event and fire it
  • We re-generate another event which should sleep 5 seconds
  • We'll push that onto the heap, and it's the next soonest event
  • We then sleep another 5 seconds, then pop the next 5 second event

We'll continue like this forever, never hitting the 10 second event. If we have absolute times, that won't happen because the 10 second event will eventually bubble up.
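
A tiny runnable demonstration of the difference, with tuples standing in for heap entries (the first element is the ordering key; the values are illustrative only, not the PR's types):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Relative policy: events are keyed by their interval. The 5s key
    // never grows, so it always sorts ahead of the 10s event.
    let mut relative = BinaryHeap::from([Reverse((5u64, "every-5s")), Reverse((10, "every-10s"))]);
    for _ in 0..4 {
        let Reverse((wait, name)) = relative.pop().unwrap();
        println!("relative: fired {name}");
        relative.push(Reverse((wait, name))); // the 10s event starves forever
    }

    // Absolute policy: events are keyed by a deadline on a shared
    // timeline, so the 10s event eventually becomes the soonest.
    let mut absolute =
        BinaryHeap::from([Reverse((5u64, 5u64, "every-5s")), Reverse((10, 10, "every-10s"))]);
    for _ in 0..4 {
        let Reverse((deadline, interval, name)) = absolute.pop().unwrap();
        println!("absolute: t={deadline}s fired {name}");
        absolute.push(Reverse((deadline + interval, interval, name)));
    }
}
```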

@@ -591,7 +592,7 @@ pub struct Simulation<C: Clock + 'static> {
     /// Config for the simulation itself.
     cfg: SimulationCfg,
     /// The lightning node that is being simulated.
-    nodes: HashMap<PublicKey, Arc<Mutex<dyn LightningNode>>>,
+    nodes: BTreeMap<PublicKey, Arc<Mutex<dyn LightningNode>>>,
carlaKC (Contributor):

Why do we need to change this to a BTreeMap?

Comment on lines +1172 to +1174

    } else {
        generate_payments(&mut heap, executor, current_count + 1)?;
    }
carlaKC (Contributor):

nit: don't need the else branch if we're continuing

@@ -1561,6 +1610,31 @@ async fn track_payment_result(
    Ok(())
}

fn generate_payments(
carlaKC (Contributor):

nit: generate_payment? we're only adding one

// Wait until our time to next payment has elapsed then execute a random amount payment to a random
// destination.
pe_clock.sleep(wait_time).await;
t.spawn(async move {
carlaKC (Contributor) commented Jun 25, 2025:

Updated opinion: I think that we can kill produce_events completely and do the following here:

  • Spawn a task that directly passes SimulationEvent to consume_events
  • Remove the loop from consume_events and just handle dispatch of the LN event.

This cuts a lot of steps out, and puts all our event handling solidly in one place. If the LN nodes take long, at least they're spawned in a task (nothing we can do about that ordering).

I'm on the fence about whether we need to spawn this in a task. Technically consume_events should pull the event pretty quickly (at least, in the amount of time that it takes to make an RPC call to the LN node).

The channel is buffered - so the question is:
Will we queue two events for a single lightning node faster than we can dispatch a single payment? If yes, we'll block; if no, we don't need the task.
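
A sketch of that buffering question under stated assumptions (a tokio mpsc channel and a sleep standing in for the node RPC call; none of this is the PR's actual code):

```rust
use tokio::sync::mpsc;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Buffered channel: send() resolves immediately while slots are
    // free and only awaits once the buffer is full.
    let (events_tx, mut events_rx) = mpsc::channel::<&str>(2);

    tokio::spawn(async move {
        while let Some(event) = events_rx.recv().await {
            // Stand-in for the RPC call that dispatches one payment.
            sleep(Duration::from_millis(50)).await;
            println!("dispatched {event}");
        }
    });

    for i in 0..5 {
        // If we produce events faster than the consumer drains them,
        // this send blocks once both buffer slots are occupied -- the
        // case where spawning the producer in a task would matter.
        events_tx.send("SendPayment").await.unwrap();
        println!("queued event {i}");
    }
    sleep(Duration::from_millis(300)).await; // let the consumer drain
}
```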


tasks.spawn(async move {
    let source = executor.source_info.clone();
    generate_payments(&mut heap, executor, 0)?;
carlaKC (Contributor):

Looking at this again - I think we can actually just store the payment event in the heap and keep a hashmap of executors / payment counts. When we pop an event off, we just look up what we need in the hashmap and increment the count there, rather than clogging up the heap with large structs.
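
A sketch of that shape with hypothetical simplified types (a numeric id in place of a PublicKey, and a fixed 5 second interval in place of the executor's generator):

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

type NodeId = u32;

// Per-executor state kept outside the heap, so heap entries stay small.
struct ExecutorState {
    payments_sent: u64,
    payment_count: Option<u64>, // None = unbounded (random activity)
}

fn main() {
    let mut heap: BinaryHeap<Reverse<(u64, NodeId)>> = BinaryHeap::new();
    let mut executors: HashMap<NodeId, ExecutorState> = HashMap::new();

    executors.insert(1, ExecutorState { payments_sent: 0, payment_count: Some(2) });
    heap.push(Reverse((5, 1))); // (fire time in seconds, node id)

    while let Some(Reverse((fire_at, node))) = heap.pop() {
        // Look up everything we need in the map and count there.
        let state = executors.get_mut(&node).unwrap();
        state.payments_sent += 1;
        println!("t={fire_at}s: payment {} from node {node}", state.payments_sent);

        // Only re-queue while we're under the configured payment count.
        if state.payment_count.map_or(true, |limit| state.payments_sent < limit) {
            heap.push(Reverse((fire_at + 5, node)));
        }
    }
}
```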
