Understand how different selections of the ZZ candidate, both at gen and reco level (and ZZ reconstruction), affect the acceptance and efficiency values used in the analysis. #12
GEN_ variables: the generator-level ZZ candidate is selected by applying the analysis selection to the generator leptons.

Gen_ variables: the generator-level ZZ candidate is selected using the true MC history, so only events coming from the H->ZZ->4l decay chain are considered. This naturally has the largest impact on the associated productions (WH, ZH, ttH). As seen in GENmassZ2 above, there is a striking difference between ggH and ZH: events in which the candidate is built from the associated Z are picked up by the GEN_ approach but discarded by the Gen_ one. Note also that, by construction, the events selected with the Gen_ approach are a subset of those selected with the GEN_ approach.

Looking at GENmassZ2 and GENmassZ1, it seems that some wrong events are being picked up (i.e. GENmassZ2 does not have a sharp cutoff at 35 GeV as expected). However, this is necessary to evaluate the non-resonant contribution when building the pdf and defining the fiducial volume.

When building response matrices we want the GEN-level event selection to follow the reco-level selection as closely as possible. This is done correctly only with the GEN_ approach, which is the only one allowing to:

The two sets of plots above also answer the question of why the acceptances, efficiencies and fractions of non-fiducial events (i.e. all the coefficients used as input to build the pdfs) change so much when the GEN_ approach is used instead of the Gen_ one, especially for the associated productions. Events peaking at -1, present in both methods, are not selected in the analysis and represent the fraction of events for which it was not possible to build a ZZ candidate: it is clear how dramatic this effect is for the associated productions. In addition, using the Gen_ approach instead of the GEN_ one would enhance the model dependence of the measurement since, quoting from [1], using the ZZ candidates from GEN_ allows to:
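To make the difference concrete, here is a minimal, self-contained sketch (plain Python dictionaries rather than the CJLST ntuple format) contrasting the two candidate definitions. The lepton fields (`pdgId`, `pt`, `E`, `px`, `py`, `pz`, `motherId`, `motherIndex`, `zMotherId`), the Z1/Z2 criteria and the 12 GeV mZ2 threshold are illustrative assumptions of the sketch, not the analysis defaults.

```python
import math
from itertools import combinations

Z_MASS = 91.1876  # GeV, nominal Z mass

def pair_mass(l1, l2):
    """Invariant mass of a lepton pair from (E, px, py, pz) components."""
    e  = l1["E"]  + l2["E"]
    px = l1["px"] + l2["px"]
    py = l1["py"] + l2["py"]
    pz = l1["pz"] + l2["pz"]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def ossf_pairs(leptons):
    """All opposite-sign, same-flavour pairs together with their invariant mass."""
    return [(pair_mass(l1, l2), (l1, l2))
            for l1, l2 in combinations(leptons, 2)
            if l1["pdgId"] == -l2["pdgId"]]

def build_zz_GEN(gen_leptons, m_z2_min=12.0):
    """GEN_-style approach: analysis-like pairing over *all* generator leptons,
    so a pair coming from an associated Z (WH/ZH/ttH) can end up in the candidate."""
    pairs = ossf_pairs(gen_leptons)
    if not pairs:
        return None                      # no candidate -> the events in the -1 bin
    m_z1, z1 = min(pairs, key=lambda p: abs(p[0] - Z_MASS))
    rest = [l for l in gen_leptons if l is not z1[0] and l is not z1[1]]
    z2_pairs = [p for p in ossf_pairs(rest) if p[0] > m_z2_min]
    if not z2_pairs:
        return None
    # Z2 choice shown here: largest scalar sum of lepton pT (illustrative)
    m_z2, _ = max(z2_pairs, key=lambda p: p[1][0]["pt"] + p[1][1]["pt"])
    return m_z1, m_z2

def build_zz_Gen(gen_leptons):
    """Gen_-style approach: keep only leptons whose MC history is H -> Z -> l;
    events without a complete H -> ZZ -> 4l chain are discarded."""
    from_h = [l for l in gen_leptons
              if l.get("motherId") == 23 and l.get("zMotherId") == 25]
    if len(from_h) != 4:
        return None                      # e.g. leptons from the associated Z are rejected
    by_z = {}
    for l in from_h:                     # group the leptons by their mother Z
        by_z.setdefault(l["motherIndex"], []).append(l)
    masses = [pair_mass(*ls) for ls in by_z.values() if len(ls) == 2]
    if len(masses) != 2:
        return None
    m_z1 = min(masses, key=lambda m: abs(m - Z_MASS))
    m_z2 = max(masses, key=lambda m: abs(m - Z_MASS))
    return m_z1, m_z2
```

In the toy above, an associated-production event whose four selected leptons include a pair from the recoil Z returns a candidate from `build_zz_GEN` but `None` from `build_zz_Gen`, which is exactly the migration driving the differences in the GENmassZ2 distributions discussed here.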
Even though the "correct" way of selecting ZZ candidates might be the one based on the topological selection, the events at mZ2 >~ 60 GeV in the associated productions come, at reco level, from detector resolution and, at gen level, from the fact that we build all lepton pairs starting from all the gen leptons. That said, we are not really interested in these events and should get rid of them. One way of doing so could be to introduce a cut at mZ2 < 60 GeV and check its effect on the analysis (see the sketch below).

Related to this: #21
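A hedged sketch of that check: recompute acceptance and efficiency with and without an mZ2 upper cut and compare. The event-record fields (`GENmassZ2`, `massZ2`, `passGenFiducial`, `passRecoSelection`, `weight`) and the acceptance/efficiency definitions used here are illustrative assumptions, not the analysis code.

```python
def acc_eff(events, m_z2_max=None):
    """Acceptance and efficiency, optionally with an upper cut on mZ2 applied
    both at gen level and at reco level."""
    def in_fiducial(ev):
        ok = ev["passGenFiducial"]
        if m_z2_max is not None:
            ok = ok and ev["GENmassZ2"] < m_z2_max
        return ok

    def in_reco(ev):
        ok = ev["passRecoSelection"]
        if m_z2_max is not None:
            ok = ok and ev["massZ2"] < m_z2_max
        return ok

    n_gen  = sum(ev["weight"] for ev in events)
    n_fid  = sum(ev["weight"] for ev in events if in_fiducial(ev))
    n_reco = sum(ev["weight"] for ev in events if in_fiducial(ev) and in_reco(ev))
    acceptance = n_fid / n_gen if n_gen else 0.0
    efficiency = n_reco / n_fid if n_fid else 0.0
    return acceptance, efficiency

# e.g. compare acc_eff(events) with acc_eff(events, m_z2_max=60.0) per production mode
```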
@AlessandroTarabini presentation - CJLST meeting: https://indico.cern.ch/event/976616/#12-run2-legacy-plans-different