
PWGHF: Ds-h correlation, efficiency processes modified #7089

Merged
fgrosa merged 10 commits into AliceO2Group:master from scattaru:DsNewBranch on Aug 2, 2024

Conversation

scattaru
Contributor

Dear all,
I have modified the processes for evaluating both the candidate and associated particle efficiencies in order to consider at the kinematic level only those collisions which are reconstructed with the same selections applied on data.

Let me know for any comments,
Cheers,
Samuele
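
To illustrate the approach described above, here is a minimal sketch of what a reco-first selection could look like, assuming the standard O2Physics `sel8` event selection and a hypothetical |z_vtx| < 10 cm cut; the actual cuts, table joins, and names in this PR may differ:

```cpp
// Fragment of an O2Physics analysis task (assumes Framework/AnalysisTask.h and
// the usual AOD includes). Sketch only, not the code merged in this PR.
Preslice<aod::McParticles> perMcCollision = aod::mcparticle::mcCollisionId;

void processEffGen(soa::Join<aod::Collisions, aod::EvSels, aod::McCollisionLabels>::iterator const& collision,
                   aod::McCollisions const&,
                   aod::McParticles const& mcParticles)
{
  // same event selections as applied on data (cut values are assumptions)
  if (!collision.sel8() || std::abs(collision.posZ()) > 10.f) {
    return;
  }
  if (!collision.has_mcCollision()) {
    return; // no generated collision matched to this reconstructed one
  }
  const auto mcCollision = collision.mcCollision();
  const auto particles = mcParticles.sliceBy(perMcCollision, mcCollision.globalIndex());
  for (const auto& particle : particles) {
    // fill the generated-level (denominator) histograms here
  }
}
```

Note that this is exactly the direction of iteration that the review below flags as problematic for split vertices.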

@alibuild
Collaborator

Error while checking build/O2Physics/o2 for 74cc15c at 2024-07-31 14:50:

## sw/BUILD/O2Physics-latest/log
/sw/SOURCES/O2Physics/7089-slc7_x86-64/0/PWGHF/HFC/Tasks/taskCorrelationDsHadrons.cxx:559:13: error: unused variable 'posZReco' [-Werror=unused-variable]
/sw/SOURCES/O2Physics/7089-slc7_x86-64/0/PWGHF/HFC/Tasks/taskCorrelationDsHadrons.cxx:561:13: error: unused variable 'posZGen' [-Werror=unused-variable]
ninja: build stopped: subcommand failed.
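
For context, `-Werror=unused-variable` fails the build when a variable is declared but never read; the fix is either to delete the declarations or to actually use the values. A sketch with hypothetical histogram names (the actual fix in this PR may simply remove the variables):

```cpp
// Hypothetical fix: read the vertex positions and actually use them,
// e.g. by filling QA histograms (names are assumptions).
const float posZReco = collision.posZ();   // reconstructed primary-vertex z
const float posZGen = mcCollision.posZ();  // generated primary-vertex z
registry.fill(HIST("hZVtxReco"), posZReco);
registry.fill(HIST("hZVtxGen"), posZGen);
```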


Collaborator

@fgrosa fgrosa left a comment

Hi @scattaru, apart from some minor comments, I have a substantial one about the logic of your implementation. You first loop over the reconstructed collisions, get the corresponding MC collision, and then loop over the generated particles associated to that MC collision. However, in the case of split vertices you will have double counting: the same MC collision will appear multiple times, and hence you will loop multiple times over the same generated particles. I would rather reverse the logic: loop over the MC collisions, check whether each one has at least one reconstructed collision, and only in that case loop over its generated particles.
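
A minimal sketch of the suggested inversion, using the `soa::SmallGroups` grouping of the O2 analysis framework (table names assumed; this is not the merged code):

```cpp
// Sketch only: iterate over MC collisions; the framework groups the
// reconstructed collisions (and the generated particles) belonging to each.
void processEffGen(aod::McCollision const& mcCollision,
                   soa::SmallGroups<soa::Join<aod::Collisions, aod::McCollisionLabels>> const& recoCollisions,
                   aod::McParticles const& mcParticles)
{
  if (recoCollisions.size() < 1) {
    return; // MC collision never reconstructed: do not count its particles
  }
  // Each MC collision enters this function exactly once, so split vertices
  // (several reconstructed collisions for one MC collision) cannot double count.
  for (const auto& mcParticle : mcParticles) {
    // fill the generated-level histograms here
  }
}
```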

Inline review comments (all outdated/resolved) on:
* PWGHF/HFC/TableProducer/correlatorDsHadrons.cxx (6 comments)
* PWGHF/HFC/Tasks/taskCorrelationDsHadrons.cxx (4 comments)
@scattaru
Contributor Author

scattaru commented Aug 2, 2024


Hi @fgrosa, thanks a lot for the suggestions! You are right, I had not considered the case of split vertices. I have reversed the logic so that we loop over the generated collisions and then take the associated reconstructed collisions. I have also defined a configurable, removeCollWSplitVtx, to have the possibility to explicitly remove the case in which one generated collision has more than one associated reconstructed collision, so we can check what happens when those split collisions are included or excluded.
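
A sketch of how such a configurable could act in the generated-level loop (the configurable name comes from this PR; its default value and the surrounding names are assumptions, see the merged code for the actual implementation):

```cpp
// Configurable named as in the PR; the default value here is an assumption.
Configurable<bool> removeCollWSplitVtx{"removeCollWSplitVtx", false,
                                       "remove generated collisions associated with more than one reconstructed collision"};

// inside the generated-level process function:
if (recoCollisions.size() == 0) {
  return; // generated collision was not reconstructed at all
}
if (removeCollWSplitVtx && recoCollisions.size() > 1) {
  return; // split vertex: optionally excluded to study its impact
}
```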

@fgrosa fgrosa merged commit 2915660 into AliceO2Group:master Aug 2, 2024
11 checks passed
@fgrosa
Collaborator

fgrosa commented Aug 2, 2024

Thanks @scattaru!

MaximVirta pushed a commit to MaximVirta/O2Physics that referenced this pull request Aug 5, 2024
PWGHF: Ds-h correlation, efficiency processes modified (#7089)

* Efficiency process modified

* Efficiency processes modification

* Fixed If statement had no body

* Fixed warning messages

* Update PWGHF/HFC/TableProducer/correlatorDsHadrons.cxx

Co-authored-by: Fabrizio <[email protected]>

* Update PWGHF/HFC/TableProducer/correlatorDsHadrons.cxx

Co-authored-by: Fabrizio <[email protected]>

* Update PWGHF/HFC/TableProducer/correlatorDsHadrons.cxx

Co-authored-by: Fabrizio <[email protected]>

* Update PWGHF/HFC/TableProducer/correlatorDsHadrons.cxx

Co-authored-by: Fabrizio <[email protected]>

* Update PWGHF/HFC/TableProducer/correlatorDsHadrons.cxx

Co-authored-by: Fabrizio <[email protected]>

* Implement suggestion on the logic

---------

Co-authored-by: Fabrizio <[email protected]>
@scattaru scattaru deleted the DsNewBranch branch August 8, 2024 07:19
Luca610 pushed a commit to Luca610/O2Physics that referenced this pull request Aug 13, 2024
joonsukbae pushed a commit to joonsukbae/O2Physics that referenced this pull request Aug 27, 2024