
VFNS SV NLO discrepancy


Pegasus comparison

  • the comparison in FFNS of the current implementation still has a relevant discrepancy w.r.t. the apfel-pegasus one (i.e. at the per-mille level in the valence)

VFNS-SV discrepancy exploration

A few-percent discrepancy is observed @ NLO in some regimes for some flavors when doing Scale Variations (SV) in a Variable Flavor Number Scheme (VFNS).

It is about:

  • a 1-2.5% discrepancy at large-x for all PDFs
    • in the very-large-x region a discrepancy is expected anyway, because of vanishing PDFs and the sparse interpolation x-grid (which makes it harder to quote an upper bound for this error)
  • a 1.5% discrepancy at small-x for those non-singlet flavors that vanish at small-x
  • a 2-3% discrepancy at small-x for gluon and singlet

The main issue is that the discrepancy has a clear dependence on x rather than being scattered, so the difference is systematic. This might be expected, since solving the differential equation (or some other integration) can turn a numerical difference into a consistent one.

Benchmarks

  • no problem @ LO
    • a84946172cc9de779c890a437bc4b3b4b8026cbc3b71b1c435fe82ce053ed9e0
  • problems @ NLO
    • ~5% in all the quark distributions (depending on the flavor, it can be ~2% over a larger or smaller region and up to ~8% elsewhere, always <10%)
    • ~30% in the gluon distribution, ranging from 10% to 60% (no problem was present @ LO, where it was ~0.05%, like the others)
    • 60d31eca6f3530e925d7a865fd88476cdb9a5cf8b8d17283bfe5fef6ff0807f5
    • the numbers refer to the difference with LO (i.e. the pure NLO correction)

Notice that a pure NLO correction comparison is not that meaningful, because the NLO part is not generated independently from the LO (as in process cross-section calculations): the NLO contribution enhances the differential equation itself, and the solution is then generated as a whole (it is the equation that is expanded in a series, not the solution).
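
For reference, and up to sign and normalization conventions (this is the schematic textbook form, not a transcription of the eko source), the object that is expanded is the anomalous dimension inside the equation, and the NLO operator is obtained by solving with the truncated kernel as a whole:

```latex
% schematic Mellin-space DGLAP equation; signs and normalizations follow
% common conventions and are not meant to match the eko source exactly
\frac{\mathrm{d} f(N,\mu_F^2)}{\mathrm{d}\ln\mu_F^2}
  = -\gamma\!\left(N, a_s(\mu_R^2)\right) f(N,\mu_F^2),
\qquad
\gamma = a_s\,\gamma^{(0)} + a_s^2\,\gamma^{(1)} + \mathcal{O}(a_s^3)
```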

Reproducibility

The setup and the obtained numbers can be found in the benchmark database generated by ekomark.

Code

  • XIR enters in a few places, through the theory_card:
    • it is propagated through the runner and the from_dict methods
      • ThresholdsAtlas.from_dict is passed the theory_card but does not retain XIR
    • StrongCoupling.from_dict incorporates XIR in the thresholds definition, but does not store it directly
    • OperatorGrid.from_dict is the only one to store it directly, in fact_to_ren, which is then used in the following ways:
      • to call a_s at mu_R
      • passed to scipy.integrate.quad() through its args argument, and so to quad_ker (see the sketch below)
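
A minimal, hypothetical sketch of that last step: only scipy.integrate.quad and its `args` mechanism are the real API here; the theory card values, the definition of fact_to_ren, and the kernel body are simplified placeholders, not eko code.

```python
import numpy as np
from scipy import integrate

theory_card = {"XIF": 2.0, "XIR": 1.0}  # illustrative values

# assumption: fact_to_ren stands for the squared ratio of factorization to
# renormalization scale; the exact quantity stored by OperatorGrid may differ
fact_to_ren = (theory_card["XIF"] / theory_card["XIR"]) ** 2


def quad_ker(u, L):
    """Toy integrand standing in for the Mellin-inversion kernel:
    the only scale-variation input it sees is L = log(fact_to_ren)."""
    return np.exp(-u) * (1.0 + L * u)


L = np.log(fact_to_ren)
value, error = integrate.quad(quad_ker, 0.0, 1.0, args=(L,))
print(value, error)
```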

At the end of the day only two functions should be the entry points for scale variations:

  • strong_coupling.a_s(), through thresholds and mu_R dependency
  • quad_ker(), only through L = np.log(fact_to_ren)
    • propagates only to gamma_singlet_fact and gamma_ns_fact
    • and their implementation perfectly agrees with the Pegasus paper (a schematic version is sketched below)
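
For concreteness, the NLO factorization-scale shift those two functions implement is the standard one, gamma^(1) -> gamma^(1) - beta_0 * L * gamma^(0) with L = log(mu_F^2 / mu_R^2). A minimal sketch (signatures and internals are illustrative, not the eko source):

```python
import numpy as np


def beta_0(nf):
    """One-loop beta coefficient, in the normalization where a_s = alpha_s / (4 pi)."""
    return 11.0 - 2.0 / 3.0 * nf


def gamma_ns_fact(gamma_ns, nf, L):
    """Schematic stand-in for eko's gamma_ns_fact: shift the NLO non-singlet
    anomalous dimension, gamma^(1) -> gamma^(1) - beta_0 * L * gamma^(0).
    `gamma_ns` holds [gamma^(0), gamma^(1)] at a given Mellin moment N."""
    gamma_ns = np.array(gamma_ns, dtype=complex)
    gamma_ns[1] -= beta_0(nf) * L * gamma_ns[0]
    return gamma_ns


def gamma_singlet_fact(gamma_singlet, nf, L):
    """Same shift in the singlet sector, where each gamma^(k) is a 2x2 matrix."""
    gamma_singlet = np.array(gamma_singlet, dtype=complex)
    gamma_singlet[1] -= beta_0(nf) * L * gamma_singlet[0]
    return gamma_singlet
```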

Conclusion

All the following comparisons have been performed @ NLO:

  • FFNS
  • FFNS + Scale Variations
  • VFNS
  • VFNS + Scale Variations

It looks like the effect of scale variations is just to enhance discrepancies that were already present in the analogous non-scale-varied setup, through an unknown mechanism that correlates with the presence of thresholds.

Indeed there is a somewhat continuous deterioration going towards more complex options, rather than a discrepancy newly introduced when toggling some feature.

The comparison has been performed against APFEL because:

  • it is the software used to evolve NNPDF PDF-sets
  • so there is no benefit in comparing against LHAPDF, since it would only introduce more noise because of a further interpolation
  • the NLO LHA benchmark paper is only reliable at the % level (even in the cases in which APFEL and EKO, which are completely independent codes, agree at the ‰ or sub-‰ level, the paper still shows a % discrepancy)

eko is stable w.r.t. changing the numerical parameters that control the complexity of the numerical integrations.
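
As an illustration of what such a stability check amounts to: rerun the quadrature with tighter tolerances and verify the result moves far less than the observed discrepancy. A toy version with scipy (placeholder integrand, not the eko kernel):

```python
import numpy as np
from scipy import integrate


# Toy stability check: tighten the quadrature tolerances and verify the result
# changes far less than the observed discrepancy (placeholder integrand only).
def ker(u):
    return np.exp(-u) * np.cos(3.0 * u)


loose, _ = integrate.quad(ker, 0.0, np.inf, epsabs=1e-8, epsrel=1e-8)
tight, _ = integrate.quad(ker, 0.0, np.inf, epsabs=1e-12, epsrel=1e-12)
print(abs(loose - tight) / abs(tight))  # should sit well below the per-mille level
```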

All in all the conclusion is:

There is no reason to claim an implementation bug in eko

This holds even though the numbers are large and coherent enough to expect a deterministic source for the discrepancy (some kind of difference in the numerical strategy adopted for the solution integration).
