DSMC: add minimum reaction weight for particle splitting #4976

Open
wants to merge 6 commits into base: development
Conversation

@roelof-groenewald (Member) commented Jun 6, 2024

The DSMC implementation has an issue when simulating collisions between species with vastly different macroparticle numbers. Consider a case where 1 neutral macroparticle collides with an ion plasma represented by many macroparticles (N). The single neutral is paired with each ion, reducing its weight by 1/N. If one of those possible collisions occurs and the new, small neutral moves to a new cell, it will again be paired with N ions for collision consideration, with another 1/N reduction in weight. This process can rapidly lead to extremely low weight particles.

In this PR a control for this process is introduced. The user can now specify a "minimum splitting weight": if the reaction weight is less than this number, the particles that would have been split off are immediately merged back into the colliding particles. The net effect is that the colliding particles experience a velocity kick proportional to the fraction of their weight that collided. This approach is the same as the "standard" DSMC technique mentioned by @JustinRayAngus a few months back, and the implementation reduces to that standard approach if the user simply specifies a very large minimum_splitting_weight.
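For context, usage would look roughly like the sketch below. The existing DSMC collision parameters follow the usual WarpX input conventions; the exact input name for the new minimum-splitting-weight option is my guess from the description above, not confirmed syntax, and the value shown is purely illustrative.

```
# hypothetical input-deck sketch (names and values are illustrative)
collisions = neutral_ion_dsmc
neutral_ion_dsmc.type = dsmc
neutral_ion_dsmc.species = neutrals ions
neutral_ion_dsmc.scattering_processes = elastic
# new option introduced in this PR (exact spelling may differ):
neutral_ion_dsmc.minimum_splitting_weight = 1.0e5
```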

@roelof-groenewald added the component: collisions label on Jun 6, 2024
@roelof-groenewald changed the title from "[WIP] DSMC: add minimum reaction weight for particle splitting" to "DSMC: add minimum reaction weight for particle splitting" on Jun 6, 2024
@JustinRayAngus (Contributor) commented Jun 11, 2024

I'm starting to look at this. I'm unfamiliar with this part of the code, so perhaps I don't understand exactly how this DSMC algorithm works; let me know if my understanding is incorrect. I assume the pairing of the particles is done in the standard way, where particles of the species with fewer macro-particles in the cell are potentially paired multiple times, such that each macro-particle from the larger group has at least one partner in each scattering event. I couldn't find where the pairing is done or how the probability of a collision occurring is determined, but the idea of splitting the particle weight according to how many times it can potentially scatter, combined with actually splitting the particle up into smaller-weight particles and creating new particles, seems like it could be problematic. Imagine I want to use fixed particle weights for a simulation; it seems like this algorithm would not permit that.

While I'm learning more about how this DSMC method works, I'll address your proposed fix for limiting the weight of the particles in the splitting. You have a macro-particle and you split it into two. You then decide that one of them collides and update the momentum as usual. Then you recombine it with the other un-collided particle in a way that preserves the momentum of the two. Is that correct? If so, this is not how I understand the "standard" DSMC method to work. In the standard approach, there is no splitting of the weights based on the difference in the number of macro-particles. The weights just enter into the calculation of the collision probability and then again in determining whether the larger-weight particle has its momentum updated: it is updated with probability w_min/w_max. Doing it this way achieves energy and momentum conservation on average.

The idea of splitting a particle off and then recombining it with the parent particle is something that has been considered for Coulomb collisions by Y. Sentoku and A. Kemp, JCP 227 (2008). I think a difference, though, is that they split the larger-weight particle so that it has a partner of equal weight to the smaller-weight particle. This is so they can do the scattering in a conservative way for both particles. They then recombine the split particle back into the parent particle in a way that conserves energy rather than momentum. It turns out that is important, because if you do it the other way you can show that the particles will continually lose energy. I suspect the same is true here.
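To make that distinction concrete, here is how I read it (my own notation, not taken from the paper): merging a split-off particle of weight $w_s$ and post-collision velocity $\mathbf{v}'$ back into its parent (remaining weight $w_p - w_s$, velocity $\mathbf{v}$) via

$$\mathbf{v}_m = \frac{(w_p - w_s)\,\mathbf{v} + w_s\,\mathbf{v}'}{w_p}$$

conserves momentum, but since

$$\tfrac{1}{2} w_p\,|\mathbf{v}_m|^2 \;\le\; \tfrac{1}{2}\left[(w_p - w_s)\,|\mathbf{v}|^2 + w_s\,|\mathbf{v}'|^2\right]$$

by convexity of $|\cdot|^2$, kinetic energy can only be lost at each merge. An energy-conserving merge instead keeps the direction of $\mathbf{v}_m$ but rescales its magnitude so that the right-hand side above is recovered, at the cost of exact momentum conservation.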

@roelof-groenewald (Member, Author) commented

@JustinRayAngus Thanks for your comments. You are correct that this algorithm is not set up to keep the weight per particle fixed, i.e. even if you start with all weights for a given species equal, they will change as collisions occur. The approach used is similar to what you describe from Y. Sentoku and A. Kemp in that, once collision partners are determined, the higher-weight particle is split into two particles, one of which has the weight of the collision partner. Generally, the simulation then continues with all three particles (i.e. at least one new particle is created for each collision). I need this approach since in my use case I have heavy (in terms of statistical weight) ions colliding with light neutrals to produce light, fast-moving ions, and the energy distribution of those fast-moving ions plays an important role. The idea is that a separate merging algorithm will control the macroparticle number by combining the products from different collisions when they are close to each other in phase space.

The issue, as I mentioned in the PR description, is that I'm getting very low weight particles due to the continual splitting, so I want to limit that. I agree that the momentum-conserving merging I'm doing here when the reaction weight drops below the desired threshold will lead to cooling, so I'll modify it to instead conserve energy by following Sentoku and Kemp. Thanks for pointing that out!

@JustinRayAngus (Contributor) commented

@roelof-groenewald Thanks for the explanations. I see why you need to create new particles. What I don't understand with the current approach is why you first split the particles up based on the ratio of the number of macro-particles of the two species in the cell. It seems like that is the root of your problem in the first place. Why not just do the pairing as normal (e.g., as described in TA77) without weight splitting, and only split based on the lower-weight particle once it has been determined that the pair collides?

You could still add your proposed fix. Perhaps in some scenarios you don't want to split the particles and would prefer an approach similar to that of Sentoku and Kemp.

@roelof-groenewald (Member, Author) commented

@JustinRayAngus I've updated the code to conserve energy during the merging step.

I'm not sure I understand the difference between the current approach and that from TA77, so I'll describe the current algorithm here and we can see if the differences become clear.

Collision algorithm:

  • Sort particles based on grid cells
  • For each grid cell:
    • Pair particles from the two colliding species so that each particle has a collision partner; this requires that particles from the species with fewer particles be paired more than once.
    • Determine the weight involved in each collision (reaction_weight) as min(w_1, w_2) where w_i = weight_i / (# of times particle i is paired with a collision partner).
    • For each collision pair:
      • Determine the collision probability P = 1 - exp(-exponent), computed as -std::expm1(-exponent), where exponent = multiplier * w_max * sigma_tot * v_coll * dt / dV and multiplier is the number of collision pairs in the cell.
      • If random_number < P:
        • Determine post collision velocities based on collision process
        • If reaction_weight > minimum_splitting_weight:
          • Create a new particle for each species with weight = reaction_weight and velocities equal to the post collision velocities.
          • Subtract reaction_weight from the original particles
          • If either of the original particles' new weight (after subtraction) equals 0, remove those particles from the simulation.
        • else:
          • Adjust the colliding particles' velocities based on a weighted sum (with original weight and reaction_weight) of the original velocities and post collision velocities. Scale the new velocities to conserve kinetic energy.

So I don't actually split the particles based on the ratio of macroparticle numbers, but only break off the amount of weight required in a collision when that collision actually occurs. I think this is therefore already in line with your suggestion, right?
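For concreteness, here is a rough sketch of the per-pair logic described above (illustrative only, not the actual WarpX code; the scattering kernel that produces the post-collision velocities, the pairing, and the removal of zero-weight particles are all assumed to happen elsewhere):

```cpp
#include <cmath>
#include <vector>

struct Particle { double w, ux, uy, uz; };   // weight and velocity

// Per-pair collision probability as described above:
// exponent = multiplier * w_max * sigma_tot * v_coll * dt / dV,
// with multiplier = number of collision pairs in the cell.
double collision_probability (int n_pairs, double w_max, double sigma_tot,
                              double v_coll, double dt, double dV)
{
    return -std::expm1(-(n_pairs * w_max * sigma_tot * v_coll * dt / dV));
}

// Apply the outcome of one collision between p1 and p2, where
// reaction_weight = min of the pairing-adjusted weights and u1, u2 are
// the post-collision velocities from the scattering kernel.
void apply_collision (Particle& p1, Particle& p2, double reaction_weight,
                      const double u1[3], const double u2[3],
                      double minimum_splitting_weight,
                      std::vector<Particle>& products)
{
    if (reaction_weight > minimum_splitting_weight) {
        // Split: create a product particle per species carrying the reaction
        // weight and post-collision velocity, and subtract that weight from
        // the originals (zero-weight originals are removed elsewhere).
        products.push_back({reaction_weight, u1[0], u1[1], u1[2]});
        products.push_back({reaction_weight, u2[0], u2[1], u2[2]});
        p1.w -= reaction_weight;
        p2.w -= reaction_weight;
    } else {
        // Merge back: momentum-weighted average of the old and post-collision
        // velocities, then rescale the speed so the particle's kinetic energy
        // matches the combined (uncollided + collided) energy before merging.
        auto merge = [reaction_weight] (Particle& p, const double u[3]) {
            const double f = reaction_weight / p.w;
            const double e_old  = p.ux*p.ux + p.uy*p.uy + p.uz*p.uz;
            const double e_coll = u[0]*u[0] + u[1]*u[1] + u[2]*u[2];
            const double vx = (1.0 - f)*p.ux + f*u[0];
            const double vy = (1.0 - f)*p.uy + f*u[1];
            const double vz = (1.0 - f)*p.uz + f*u[2];
            const double e_avg = vx*vx + vy*vy + vz*vz;
            const double e_target = (1.0 - f)*e_old + f*e_coll;
            const double s = (e_avg > 0.0) ? std::sqrt(e_target/e_avg) : 1.0;
            p.ux = vx*s; p.uy = vy*s; p.uz = vz*s;
        };
        merge(p1, u1);
        merge(p2, u2);
    }
}
```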

@JustinRayAngus (Contributor) commented

@roelof-groenewald I see. I misunderstood your original description where you said the weight is reduced by 1/N, which is the ratio of the number of macro particles of the two species.

I agree with what you describe above, with one minor exception. You set the multiplier in the exponent to the number of collision pairs, which is max(N1,N2) with N1 and N2 the number of macro particles in the cell of species 1 and 2, respectively. The multiplier for the exponent term should be min(N1,N2), consistent with what is in TA77. Do you agree with this?

I'll take a look at the energy-preserving commit soon.

@roelof-groenewald (Member, Author) commented

> I agree with what you describe above, with one minor exception. You set the multiplier in the exponent to the number of collision pairs, which is max(N1,N2) with N1 and N2 the number of macro particles in the cell of species 1 and 2, respectively. The multiplier for the exponent term should be min(N1,N2), consistent with what is in TA77. Do you agree with this?

I agree that in TA77 (their eq. 8a) the multiplier is min(N1, N2)/dV, but I do believe that max(N1, N2) is correct for the current implementation. The collision algorithm is broadly based on that of Higginson et al. (2019); see their eqs. 8 and 18 for the collision probability, which does use the total number of collision pairs per particle, i.e. max(N1, N2).

In my understanding the difference is as follows:

  • in TA77 a scattering angle is calculated, with the scattering angle being larger the more collisions occurred, and the total number of collisions in a timestep is determined by the species with the lower density, hence min(n1, n2).
  • in Higginson (and the current implementation) we assume a very low collision probability, such that at most 1 collision occurs during a timestep. We then save time by not considering every possible pair for a collision, but instead only consider 1 pair per particle (effectively treating both species as having N = max(N1, N2) particles) and scale up the collision probability as if all N possible partners per particle had the same probability to collide. (Both probability expressions are written out below.)
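Writing the per-pair probability out explicitly, as I read the two conventions (my notation; the weights in the first form are the duplicity-divided, per-pair weights defined above):

$$P_{\mathrm{pair}} = 1 - \exp\!\left(-\,N_{\mathrm{pairs}}\; w_{\max}\; \sigma_{\mathrm{tot}}\; v_{\mathrm{coll}}\; \frac{\Delta t}{\Delta V}\right), \qquad N_{\mathrm{pairs}} = \max(N_1, N_2),$$

in the current (Higginson-style) implementation, versus the TA77-style form with $\min(N_1, N_2)$ as the multiplier and the undivided particle weights.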

@JustinRayAngus (Contributor) commented

I see. I missed the part where you described the reaction weight as being the minimum weight divided by the number of times particle i is paired with a collision partner. When doing that, I agree that max(N1,N2) should be used for the multiplier.

But this brings me back to an original point. You don't have to do it this way, and this seems to be the root of the problem in the first place. You can skip dividing the particle weight by the number of times it is paired (a duplicity factor, if you will) and replace max(N1,N2) with min(N1,N2). Both achieve the same goal of getting the correct normalized scattering length on average, but the latter method doesn't require splitting the weights by the duplicity factor. If it is not clear that these are the same, perhaps we can touch base via Zoom sometime and discuss; I can also share some notes with you.
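For what it's worth, here is a back-of-the-envelope check of that equivalence (my own sketch, assuming uniform weights $w_1$, $w_2$ within each species, $N_1 > N_2$ so each species-2 particle is paired $N_1/N_2$ times, and linearizing $1 - e^{-x} \approx x$). With the duplicity-divided weights, the expected collided weight per cell is

$$N_1 \cdot \left[N_1\, \max\!\left(w_1, \tfrac{N_2}{N_1} w_2\right) \frac{\sigma v\, \Delta t}{\Delta V}\right] \cdot \min\!\left(w_1, \tfrac{N_2}{N_1} w_2\right) = N_1 N_2\, w_1 w_2\, \frac{\sigma v\, \Delta t}{\Delta V},$$

while without the splitting (multiplier $\min(N_1,N_2) = N_2$, undivided weights) it is

$$N_1 \cdot \left[N_2\, \max(w_1, w_2)\, \frac{\sigma v\, \Delta t}{\Delta V}\right] \cdot \min(w_1, w_2) = N_1 N_2\, w_1 w_2\, \frac{\sigma v\, \Delta t}{\Delta V},$$

using $\max(a,b)\,\min(a,b) = ab$ in both cases, so the two conventions do give the same collided weight on average.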
