Change Gson to faster serialization library #3624
Comments
I personally added a component serialization and deserialization cache to my bungee to speed exactly this up |
Can I see how you did that? Did you see any improvement? If the behaviour is exactly the same and only the performance is better, I think that's a good idea to implement. |
Behaviour is the same but i have no benchmarks |
I added a thread-local limited cache that only caches the most used components |
With many components that don't repeat, it will probably be slower than the normal approach. |
In most cases one component will be sent to multiple players and gets deserialized multiple times, so in most cases it's much faster |
The only case with no performance boost (I would not even call it slower) is if you send one single component to only one player |
Just because a library is widely used doesn't mean that bungee should stick to it. Bungee would only have to carry the library around as dead weight for a couple years in case plugins use it (file size is not a big problem), but it could switch to a faster serialization library at any time, especially when a new library seems to be an order of magnitude faster in benchmarks. Edit: However it seems like Fury, for example, is not yet in a usable state. It does not contain json (de-)serialization. |
No, that does not create json; it serializes to bytes with a custom format, or maybe it's using the JVM's serialization format. |
Hmm, the fastest that has json in its name is: https://github.com/ngs-doo/dsl-json - which is almost 8x faster than gson. |
The problem with the cache is that a cached component is mutable; in most cases that's not a problem, but if you use components weirdly it can be |
in this case the IdentityLinkedHashMap is limited to 128, and the oldest value in the cache (the oldest one is the one that is used the least) is removed any time a new one is added |
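The cache described in this comment (per-thread, identity-keyed, capped at 128 entries, evicting the least recently used value) could be sketched roughly as follows. All class and method names here are illustrative, not the commenter's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a per-thread, identity-keyed serialization cache,
// capped at 128 entries, dropping the least recently used entry on overflow.
public class ComponentSerializationCache {

    // Identity wrapper so equal-but-distinct components are cached separately.
    private static final class IdentityKey {
        final Object ref;
        IdentityKey(Object ref) { this.ref = ref; }
        @Override public int hashCode() { return System.identityHashCode(ref); }
        @Override public boolean equals(Object o) {
            return o instanceof IdentityKey && ((IdentityKey) o).ref == ref;
        }
    }

    private static final int MAX_ENTRIES = 128;

    // accessOrder=true means iteration order is least-recently-used first,
    // so removeEldestEntry drops "the one that is used the least".
    private static final ThreadLocal<Map<IdentityKey, String>> CACHE =
        ThreadLocal.withInitial(() ->
            new LinkedHashMap<IdentityKey, String>(MAX_ENTRIES, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<IdentityKey, String> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

    // Serialize via the supplied function only on a cache miss.
    public static String serialize(Object component,
                                   java.util.function.Function<Object, String> gsonSerializer) {
        return CACHE.get().computeIfAbsent(new IdentityKey(component),
                k -> gsonSerializer.apply(k.ref));
    }
}
```

Being identity-based, the cache only helps when the exact same component instance is serialized repeatedly (e.g. one component broadcast to many players), which matches the trade-off discussed below.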
I don't think using cache, when you don't know what will go in the json, is a good idea. Many json strings will be unique, having different things like stats, names, ping etc. You could technically check if it's unique, but it will still use extra resources. IMO using better library will increase json's performance by up to 8x and should bring great result. 🚀 💘 |
that's why I am checking for identity instead, so we don't lose any performance |
This is no suitable general solution, as BaseComponents are not final and the component itself or its extras could change at any point in time without its identity changing. With an identity-based hash map, I do not see any actual cache hits in vanilla BungeeCord operation. |
You're 100% right, it's not a solution I would recommend or make a PR for, but maybe in this case it could help him. In my opinion we should not change the lib |
What's the reason you prefer gson? |
Most used json lib, I guess; also, I personally used it in all of my projects. I don't think we should recode all json related code just to get 0.01% better cpu times. Also, components are not sent by the client, so it's not exploitable |
I took a look at some of the java json libraries ranking high here: https://github.com/fabienrenaud/java-json-benchmark Fastjson is missing documentation in English. The other two are also somewhat missing documentation for the more customizable parts. I didn't look further into fastjson, but dsljson and avajejsonb are missing the flexibility of redirecting the target deserialization object like we do in the ComponentSerializer class. These fast libraries use annotations to generate code at compile time, where they use a json parser to directly parse the string into the target object. Customized serialization is possible, but to get back the flexibility we'd basically have to create our own parser to a json object and then write code to interpret it into our Component classes. Gson provides this flexibility already. I did not look at more libraries; the remaining performance improvement is not high enough that I'd consider it worth it. |
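A minimal plain-Java sketch of the flexibility referred to above: ComponentSerializer-style deserialization picks the concrete target class at runtime from the json content, which compile-time code generators bind per schema instead. The class names below are stand-ins, not BungeeCord's real API, and plain Maps stand in for parsed json:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Runtime type dispatch: the json's keys decide which concrete class to build,
// similar in spirit to how a ComponentSerializer might pick TextComponent vs
// TranslatableComponent. All names are illustrative.
public class RuntimeDispatch {

    interface Component {}
    static final class TextComponent implements Component {}
    static final class TranslatableComponent implements Component {}

    // Marker key in the json decides the target type.
    private static final Map<String, Function<Map<String, Object>, Component>> BY_KEY =
            new HashMap<>();
    static {
        BY_KEY.put("text", json -> new TextComponent());
        BY_KEY.put("translate", json -> new TranslatableComponent());
    }

    public static Component deserialize(Map<String, Object> json) {
        for (Map.Entry<String, Function<Map<String, Object>, Component>> e : BY_KEY.entrySet()) {
            if (json.containsKey(e.getKey())) {
                return e.getValue().apply(json);
            }
        }
        throw new IllegalArgumentException("Don't know how to deserialize " + json);
    }
}
```

Annotation-driven generators produce a fixed binding per class at compile time, so this kind of content-driven redirection has to be rebuilt by hand on top of them, which is the cost the comment above weighs against the speedup.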
I think it would help massively. I summarized all gson usage and it was 4.12%; it will help with scoreboard, bossbar, chat, server pinging, BungeeTabListPlus and some more. That was measured during a big attack that had no impact on gson. Without the attack, gson operations are taking 8% of cpu time. |
And what do you expect from the change? 1% less cpu times? |
Well initially I saw 8x improvement, but idk which lib is possible to use. |
Bungee's serialization is complicated and deep. Changing the json library probably wouldn't net a significant performance improvement. Deeper profiling should be done to see exactly what's taking the most time. If I got time I'll drop some timings from Yourkit here |
I did my timings using Spark and I described the usage above. |
Yeah, but profiling with something like YourKit will show very detailed timings. Anyways, here's some profiled results. This is from 130,584 total milliseconds. Just looked into deserialization since it's likely the bigger resource hog. Part 1: https://i.gyazo.com/9d56a65540433521f7e34b7b33943137.png Flamegraph: https://i.gyazo.com/d5c470b668985eccb58b325dc67e8ce7.png This is just on a test server with a bunch of fake (1.8.9) players and a single real 1.20.4 player, so the data isn't too good. Would be interesting to see with live 1.20.4+ players. A large amount of the timings are just from Bungee's deserialization, which is to be expected; it's pretty intensive. A good amount of the timings also go to Gson's internal usage of LinkedTreeMap. I imagine most if not all of these libraries use a linked hashmap or a similar implementation with string keys. Here's one part of the flamegraph I found interesting: https://i.gyazo.com/3e392e51e317ce3886c55f5dfda8ac0e.png The toUpperCase is using a pretty large amount of time: https://github.com/SpigotMC/BungeeCord/blob/1b88a8471077929bcfbd3a5bd6c7cfdf93df92de/chat/src/main/java/net/md_5/bungee/api/ChatColor.java#L267C49-L267C60 Which would likely be a pretty simple optimization. Second minor optimization in that same function would be to just flip |
What's the optimisation? |
I meant the toUpperCase could be easily optimized, not the whole thing; the whole thing is already well written. For the uppercase, you could add the different uppercase/lowercase variants to the map rather than changing the string case. Would be a couple thousand entries in total though. It's only 8.33% of the ChatColor.of function, so mostly a micro optimization... |
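The case-variant pre-registration suggested above could look roughly like this, assuming a BY_NAME-style lookup map as in ChatColor.of. The map, entries, and method names below are illustrative, not the real ChatColor code:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Sketch: register both case variants at class-load time so the common
// lookup path can skip toUpperCase() entirely. Integer ordinals stand in
// for the real color objects.
public class ColorLookup {

    private static final Map<String, Integer> BY_NAME = new HashMap<>();

    private static void register(String name, int ordinal) {
        // One-time cost at startup instead of a toUpperCase() on every lookup.
        BY_NAME.put(name.toUpperCase(Locale.ROOT), ordinal);
        BY_NAME.put(name.toLowerCase(Locale.ROOT), ordinal);
    }

    static {
        register("DARK_RED", 4);
        register("GOLD", 6);
    }

    public static Integer of(String name) {
        // Fast path: exact match, no case conversion, no garbage.
        Integer direct = BY_NAME.get(name);
        if (direct != null) {
            return direct;
        }
        // Fallback for mixed-case input ("Gold", "Dark_Red", ...).
        return BY_NAME.get(name.toUpperCase(Locale.ROOT));
    }
}
```

Registering only the all-upper and all-lower variants (with a toUpperCase fallback for mixed case) keeps the map small; covering every case permutation, as the comment notes, would mean a couple thousand entries.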
Lazy loading would be implementable for all packets that directly try to parse it as a BaseComponent |
It's a little contradictory that we have the BY_NAME map using upper case while we serialize to lower case. What I noticed more in the benchmark is the huge JsonElement.has usage inside ComponentStyleSerializer, actually taking most of the deserialization time. I have pushed these little optimizations of the 2nd paragraph here: https://github.com/Janmm14/BungeeCord/tree/chat-optimization Lazily deserializing chat might be a valuable optimization as well. |
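To illustrate the JsonElement.has cost mentioned above with plain Maps (standing in for Gson's JsonObject, which is also backed by a hash map): each has()/get() pair does two hash lookups per field, while a single get() plus null check does one. This is a sketch of the pattern, not the actual ComponentStyleSerializer code:

```java
import java.util.Map;

// Sketch: halving hash lookups in a field-by-field json style parser.
public class StyleParseSketch {

    // Two lookups per key: containsKey() (i.e. has()) then get().
    public static boolean boldSlow(Map<String, Object> json) {
        if (json.containsKey("bold")) {
            return (Boolean) json.get("bold");
        }
        return false;
    }

    // One lookup per key: get() once and null-check the result.
    public static boolean boldFast(Map<String, Object> json) {
        Object value = json.get("bold");
        return value != null && (Boolean) value;
    }
}
```

For a style object with many optional fields (bold, italic, underlined, color, ...), this halves the map lookups per deserialized component, which is plausible given how hot JsonElement.has shows up in the profile above.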
You are here in the serialization (objects -> json objects -> json string) part while bob7l and me found things about deserialization (json string -> json objects -> objects). |
Serialization/deserialization differences aside, you're using Spark's sampler. While it's good for getting a ballpark idea of what's taking up time, it's not very accurate and usually won't show quick functions like map.get and so on, whereas I'm using instrumentation/tracing, which offers a more precise measurement and is able to pick up on finer details. It's also a lot more useful in these cases with massive trees from having to iterate over many children components. You should be able to grab a free trial of YourKit. Incredible profiler. |
ComponentSerializer.toString uses gson.toJson, there's nothing about JsonElement.has. |
Yes. That is the "serializing" part: java objects to json string. gson is also involved in the "deserializing" part: json string to java objects. Different method, still being used a lot in bungee. If we go to handling "chat" data from packets only on demand, that could save us both steps in a lot of cases, leaving only bungee plugins sending chat to be affected on the bungee side (as the chat part is also used in spigot, serialization speed is still a concern somewhat tho) |
In branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods to hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods). Edit 2: |
One of the patches (dd338c5 or Janmm14@5b12adb) I think broke bold options from the tablist's entries and header/footer. |
And it's the same for strikethrough too. |
Weird, I will test later, investigate, and find the probably dumb mistake I made |
Even weirder, because I'm testing with master and I still have this issue. I forced this to true and I don't see anything on the tablist as bold; should it be like that? I don't know where to look.
I did it here https://github.com/Janmm14/BungeeCord/blob/5b12adb97f0e9db28c123a4d3ef88fd0909ed730/chat/src/main/java/net/md_5/bungee/chat/ComponentStyleSerializer.java#L45 and here https://github.com/Janmm14/BungeeCord/blob/5b12adb97f0e9db28c123a4d3ef88fd0909ed730/chat/src/main/java/net/md_5/bungee/chat/ComponentStyleSerializer.java#L81 and now it's bold. |
Yea, that's what I thought happened; it's an issue inside BungeeCord. #3631 |
Probably the most basic way of implementing it without breaking anything. Hopefully it has a chance of getting merged. De/serialization is only going to get heavier and heavier in the future. |
How does it work btw? I don't understand that too well. |
He's not deserializing the string payload unless it's requested. So when it comes time to write the packet, it just writes the string and doesn't need to serialize the text components back into a string, assuming it was never deserialized |
Once deserialized, the code will ignore the previously serialized string as the deserialized representation could have been changed/updated, because it was requested. I think I am using the |
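The lazy-deserialization mechanism described in the last two comments might look roughly like this generic holder. The name LazyChat and the functional parameters are my own illustration, not the chat-optimization-v2 branch's actual API:

```java
import java.util.function.Function;

// Sketch: a packet field that keeps the raw JSON string from the wire and
// only parses it when a handler asks for the component. If nothing asked,
// writing the packet reuses the original string untouched.
public class LazyChat<T> {

    private final Function<String, T> parser;     // json string -> component
    private final Function<T, String> serializer; // component -> json string
    private String rawJson;                       // as read from the wire
    private T parsed;                             // null until first access

    public LazyChat(String rawJson, Function<String, T> parser, Function<T, String> serializer) {
        this.rawJson = rawJson;
        this.parser = parser;
        this.serializer = serializer;
    }

    // Parse on demand. After this, the raw string is discarded as stale,
    // because the caller may mutate the component it received.
    public T get() {
        if (parsed == null) {
            parsed = parser.apply(rawJson);
            rawJson = null;
        }
        return parsed;
    }

    // Write path: reuse the untouched raw string, otherwise re-serialize.
    public String toWireFormat() {
        return rawJson != null ? rawJson : serializer.apply(parsed);
    }
}
```

The fast path (no handler touches the component) skips both gson deserialization and re-serialization entirely; once get() is called, the holder falls back to re-serializing, since the component may have been changed.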
I'd just open the pull and see what md_5 says. I think Deserializable is perfect. |
I'm not sure I understand the question |
Whether in chat-optimization-v2 it should stay as |
You could have a fixed subclass of the generic one? I don't think it really matters |
That doesn't work so easily; a fixed sub-interface would need different implementations. So I kept it as it was. |
While on this topic I found something interesting I wanted to ask about if anyone knows. |
@NEZNAMY please link or mention the involved places in code. I do not think this is happening. The deserialization of scoreboard-related packets is not just to kick players, the client would disconnect itself otherwise. |
That's the issue - I didn't find the part of the code that is "wrong". While I disagree with what you are saying, that's not the point here. The point is that BungeeCord deserializes these packets, but when injecting custom duplex handler with a plugin and reading "write" direction, only the raw buffer is forwarded, so if I want to listen to those packets, I need to deserialize them once again, which overcomplicates things and slows down performance as well. |
Is this what you mean? #3503 |
That's one of the reasons why BungeeCord deserializes these packets, yes. Since the job is already done, would be nice to see the deserialized packet forwarded to the pipeline instead of the raw buffer. Or is it to save resources while encoding? |
Actually this is intended behaviour, and in fact an optimization inside ChannelWrapper's write method. It applies to all registered packets whose handlers in the classes extending "PacketHandler" (like DownstreamBridge) do not throw CancelSendSignal.INSTANCE. Either way, your code is not using the API, and some further BungeeCord optimizations that are lying dormant in PRs here, but implemented for example in IvanCord, could break your system of ignoring all existing BungeeCord tooling for packet reading and hacking your way into the netty pipeline. |
Thanks for the info, looks like it is intended. |
Feature description
I was looking at the profiler and saw that serializing and deserializing of strings such as scoreboard components is taking some processing time; maybe it would be a good idea to change it to a faster library? The servers have a scoreboard plugin, and I think a tablist plugin like https://github.com/CodeCrafter47/BungeeTabListPlus would get better performance too.
Goal of the feature
When looking at the results from here, gson, which is currently used in BungeeCord, got 8677 ns, while for example fury-fastest got 442 and jackson 1994. That is almost a 20x improvement with fury, or roughly 4x with jackson.
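The claimed speedups follow directly from the quoted ns/op figures; a trivial check (the numbers come from the linked wiki, the class is illustrative):

```java
// Speedup = baseline time / candidate time, using the ns/op figures above.
public class SpeedupMath {
    public static double speedup(double baselineNs, double candidateNs) {
        return baselineNs / candidateNs;
    }

    public static void main(String[] args) {
        System.out.println(speedup(8677, 442));  // fury-fastest: ~19.6x
        System.out.println(speedup(8677, 1994)); // jackson: ~4.4x
    }
}
```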
https://github.com/eishay/jvm-serializers/wiki
Unfitting alternatives
Checking