bring back id table algorithm instead of std table [backport:2.2] #24930


Open · wants to merge 2 commits into devel

Conversation

metagn
Collaborator

@metagn metagn commented May 4, 2025

refs #24929, partially reverts #23403

Instead of using Table[ItemId, T], the old algorithm is brought back as TIdTable[T] to prevent a performance regression. The inheritance removal from #23403 still holds: only ItemIds are stored.
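The core idea behind a dedicated id table can be sketched as follows. This is a hypothetical illustration in Python, not the compiler's actual TIdTable code: a linear-probing table that uses the unique integer id directly as the hash, skipping the generic hashing machinery of a general-purpose table.

```python
# Illustrative sketch (NOT the Nim compiler's actual implementation) of an
# id table: keys are unique integer ids, so the id itself serves as the
# hash and lookup is a tight linear-probing loop.

class IdTable:
    """Open-addressing table keyed by a unique integer id. No deletion."""

    def __init__(self):
        self.keys = [None] * 8   # slot holds the integer id, or None if empty
        self.vals = [None] * 8
        self.count = 0

    def _slot(self, id_):
        mask = len(self.keys) - 1
        i = id_ & mask            # the id itself is the hash value
        while self.keys[i] is not None and self.keys[i] != id_:
            i = (i + 1) & mask    # linear probing
        return i

    def get(self, id_):
        i = self._slot(id_)
        return self.vals[i] if self.keys[i] == id_ else None

    def put(self, id_, val):
        # Grow at 2/3 load so probing always terminates at an empty slot.
        if (self.count + 1) * 3 >= len(self.keys) * 2:
            old = list(zip(self.keys, self.vals))
            new_size = len(self.keys) * 2
            self.keys = [None] * new_size
            self.vals = [None] * new_size
            for k, v in old:
                if k is not None:
                    j = self._slot(k)
                    self.keys[j] = k
                    self.vals[j] = v
        i = self._slot(id_)
        if self.keys[i] != id_:
            self.count += 1
        self.keys[i] = id_
        self.vals[i] = val
```

Because ids are unique and densely allocated, this avoids both the hashing overhead and the collision behavior of a general-purpose table keyed by a composite ItemId.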

@metagn metagn changed the title test performance improvements bring back id tables May 4, 2025
@metagn metagn marked this pull request as ready for review May 4, 2025 22:08
@metagn metagn changed the title bring back id tables bring back id table algorithm instead of std table May 4, 2025
@metagn
Collaborator Author

metagn commented May 4, 2025

Bootstrap time in GitHub CI on Ubuntu goes from an average of 9 seconds to 8.3 seconds with this change. Attempting to adapt #23403 to the 2.0 branch in #24931 raised the bootstrap time from an average of 7 seconds to 7.5 seconds. Although GitHub CI timings are not reliable enough to detect every performance difference, the result is consistent enough that this is a real impact. It is probably not the entire cause of #24929, though.

I would not have thought id tables could affect the compiler's performance this much; in generics/macro-heavy code the benefit is probably even larger. My guess for why the impact is so big is calls to magics and other very commonly called routines with implicit generic parameters, e.g. SomeInteger: every such call invokes idTableGet to check whether the parameter was already bound. Maybe we could specifically profile sigmatch to see which kinds of overloads take the most time.
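To make the hot path concrete, here is a hypothetical sketch of why binding lookups are so frequent: every call to a routine with an implicit generic parameter must check whether that parameter was already bound for the current candidate. The names below (resolve_call, bindings, the integer ids for formal parameters) are illustrative only, not the compiler's API.

```python
# Illustrative sketch (not sigmatch's actual code): matching actual argument
# types against formal generic parameters requires one binding lookup per
# parameter per call -- the idTableGet-style query discussed above.

def resolve_call(formal_params, actual_types, bindings):
    """Try to match actuals against formal generic parameter ids.

    bindings maps a formal parameter's id to the type it was bound to;
    a plain dict stands in for the id table here.
    """
    for formal, actual in zip(formal_params, actual_types):
        bound = bindings.get(formal)      # the per-call lookup that adds up
        if bound is None:
            bindings[formal] = actual     # first use binds the parameter
        elif bound != actual:
            return False                  # inconsistent binding: no match
    return True
```

Since magics like `+` on `SomeInteger` are called constantly, even a small per-lookup cost in this path is multiplied across the whole compilation.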

@arnetheduck
Contributor

a thing to keep in mind is that the compiler bootstrap is a very poor benchmark - it stresses only a small part of the compiler (i.e. a project like nimbus has a lot more generics, macros, overloads, etc. to deal with).

If you need a bigger test, you can use nimbus-eth2 like so:

git clone https://github.com/status-im/nimbus-eth2.git
cd nimbus-eth2
make # build deps
./env.sh my_test_nim c beacon_chain/nimbus_beacon_node

@metagn metagn changed the title bring back id table algorithm instead of std table bring back id table algorithm instead of std table [backport:2.2] May 5, 2025
@metagn
Collaborator Author

metagn commented May 5, 2025

In any case this is the cause of a performance regression; we could also measure its effect on nimbus after the fact, but I'll try it in GitHub CI on version-2-2.

@narimiran
Member

narimiran commented May 5, 2025

If you need a bigger test, you can use nimbus-eth2

Here are the results on my machine:

Nim 2.2.4:

333179 lines; 313.908s; 9.271GiB peakmem

Nim 2.2.5 with the changes from this PR applied to it:

333203 lines; 241.748s; 9.393GiB peakmem

Over 20% improvement! Nice!!

@metagn
Collaborator Author

metagn commented May 5, 2025

That is a 29.8% speedup; the difference in #24929 is 33.1%, for beacon_chain/nimbus_beacon_node at least (i.e. the 116.915s result for 2-2 would become 90.039s, vs 2.0's 87.857s). For tests/all_tests the performance regression is 40.9%.
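The percentages above can be checked directly from the timings quoted in the thread (313.908s vs 241.748s for the nimbus build, and the 116.915s/87.857s pair from #24929); speedup here means old time divided by new time, minus one.

```python
# Verifying the percentages quoted in this thread.
old, new = 313.908, 241.748           # narimiran's nimbus build times (s)
speedup = old / new - 1
assert abs(speedup - 0.298) < 0.001   # "That is a 29.8% speedup"

t22, t20 = 116.915, 87.857            # version-2-2 vs 2.0 times from #24929
assert abs(t22 / t20 - 1 - 0.331) < 0.001   # the 33.1% regression

# Projecting this PR's speedup onto the 2-2 result gives the quoted 90.039s:
assert abs(t22 / (1 + speedup) - 90.039) < 0.01
```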

metagn added a commit to metagn/Nim that referenced this pull request May 5, 2025
metagn added a commit to metagn/Nim that referenced this pull request May 5, 2025
@metagn
Collaborator Author

metagn commented May 6, 2025

As tested in #24935, the performance for nimbus does not go back to 2.0 levels, but it still noticeably improves with this PR.

3 participants