Hi! While profiling some of our internal code that we needed to speed up, I noticed that the Shotgun API seemed to have a bottleneck at the JSON decoding stage (in `_json_loads_ascii`). I've been running my tests on Python 2.7, and I'm 98% sure we've got the C speedups for simplejson in place, so I don't think that's the issue.
For these preliminary benchmarks, I'm using cProfile to measure the cumulative time of making an `SG.find()` call 10x, which returns the number of entities specified in the left-hand column. The difference is negligible at low entity counts, but ujson's advantage grows with payload size, reaching roughly 70-80x faster decoding when larger amounts of data are returned (times are in seconds):
| entities | simplejson | ujson |
|---------:|-----------:|------:|
| 1        | 0.003      | 0.000 |
| 10       | 0.013      | 0.000 |
| 100      | 0.105      | 0.001 |
| 1000     | 0.955      | 0.012 |
| 10000    | 7.791      | 0.108 |
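For reference, here's a rough sketch of the kind of benchmark I'm describing. It uses the stdlib `json` module and `timeit` so it runs anywhere; to reproduce the comparison you'd swap in `simplejson.loads` or `ujson.loads`. The payload shape (`results`/`entities` keys, `Shot` fields) is my approximation of a `find()` response, not the exact wire format:

```python
# Hypothetical benchmark sketch: measures cumulative decode time for a
# payload shaped roughly like a Shotgun find() response. Swap `json`
# for `simplejson` or `ujson` to compare decoders.
import json
import timeit

def make_payload(n_entities):
    # Approximate shape of a find() result (assumed, not exact).
    entities = [{"type": "Shot", "id": i, "code": "shot_%04d" % i}
                for i in range(n_entities)]
    return json.dumps({"results": {"entities": entities}})

def bench(n_entities, repeat=10):
    # Decode the same payload `repeat` times, mirroring the 10x find() calls.
    payload = make_payload(n_entities)
    return timeit.timeit(lambda: json.loads(payload), number=repeat)

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        print("%6d entities: %.3fs" % (n, bench(n)))
```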
I'm wondering if there's a known reason not to use a faster library for decoding the JSON returned by the Shotgun API. I did notice that ujson doesn't support the `object_hook` argument that `_json_loads_ascii` currently uses; removing it didn't have any negative impact in my tests, but I'd be interested in knowing when this `object_hook` is necessary (since we might consider moving our own fork to ujson).
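To illustrate what would be lost: `object_hook` is a callable that the decoder invokes on every decoded JSON object (dict) during parsing. Since ujson lacks it, the same transform would have to run as a post-pass over the decoded tree. The `ascii_hook` below is a made-up stand-in for whatever `_json_loads_ascii`'s real hook does, just to show the mechanics:

```python
# Sketch: object_hook runs per-dict during decoding; with ujson the
# equivalent work has to be done by walking the result afterwards.
# `ascii_hook` is illustrative, not the actual hook from shotgun_api3.
import json

def ascii_hook(obj):
    # Called for every decoded JSON object; here it just normalizes
    # keys/values as a stand-in for the real hook's work.
    return {str(k): str(v) if isinstance(v, str) else v
            for k, v in obj.items()}

# With json/simplejson, the hook runs inside the decoder:
decoded = json.loads('{"code": "shot_0001", "id": 1}', object_hook=ascii_hook)

# With ujson, an equivalent post-pass would walk the decoded tree:
def walk(value):
    if isinstance(value, dict):
        return ascii_hook({k: walk(v) for k, v in value.items()})
    if isinstance(value, list):
        return [walk(v) for v in value]
    return value
```

The post-pass adds a second traversal of the data, so part of ujson's speed advantage could be eaten by it, which is why knowing whether the hook is actually needed matters.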