Introduce ARM Neon and SSE2 SIMD. #743

Open: wants to merge 27 commits into master

Conversation


@samyron samyron commented Feb 3, 2025

Version 2 of the introduction of ARM Neon SIMD.

There are currently two implementations:

  1. "Rules" based.
  2. Lookup Table based. This is effectively an SIMD accelerated version of the scalar implementation.
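For illustration, a minimal sketch of the general shape of a rules-based check with NEON intrinsics (an assumption, not the PR's exact code): compare all 16 bytes of a chunk against each escape condition and OR the results.

#include <arm_neon.h>

// Illustrative rules-based check: flag the bytes JSON must escape in the
// basic mode -- '"', '\\', and control bytes below 0x20.
static inline uint8x16_t needs_escape_rules(uint8x16_t chunk) {
    uint8x16_t is_quote     = vceqq_u8(chunk, vdupq_n_u8('"'));
    uint8x16_t is_backslash = vceqq_u8(chunk, vdupq_n_u8('\\'));
    uint8x16_t is_control   = vcltq_u8(chunk, vdupq_n_u8(0x20)); // unsigned <
    return vorrq_u8(vorrq_u8(is_quote, is_backslash), is_control);
}

// Usage: any non-zero lane means at least one byte needs escaping, e.g.
//   if (vmaxvq_u8(needs_escape_rules(chunk))) { /* take the slow path */ }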

Benchmarks (Lookup table)

== Encoding mixed utf8 (5003001 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
                json    62.000 i/100ms
          json_coder    67.000 i/100ms
                  oj    30.000 i/100ms
Calculating -------------------------------------
                json    628.035 (±12.7%) i/s    (1.59 ms/i) -      3.162k in   5.118636s
          json_coder    626.843 (±15.8%) i/s    (1.60 ms/i) -      3.082k in   5.079836s
                  oj    352.174 (± 9.4%) i/s    (2.84 ms/i) -      1.740k in   5.005929s

Comparison:
                json:      628.0 i/s
          json_coder:      626.8 i/s - same-ish: difference falls within error
                  oj:      352.2 i/s - 1.78x  slower


== Encoding mostly utf8 (5001001 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
                json    50.000 i/100ms
          json_coder    56.000 i/100ms
                  oj    36.000 i/100ms
Calculating -------------------------------------
                json    632.784 (±27.0%) i/s    (1.58 ms/i) -      3.000k in   5.063991s
          json_coder    628.328 (±16.7%) i/s    (1.59 ms/i) -      3.080k in   5.034271s
                  oj    351.466 (± 9.7%) i/s    (2.85 ms/i) -      1.728k in   5.003977s

Comparison:
                json:      632.8 i/s
          json_coder:      628.3 i/s - same-ish: difference falls within error
                  oj:      351.5 i/s - 1.80x  slower

Benchmarks (Rules based)

== Encoding mixed utf8 (5003001 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
                json    69.000 i/100ms
          json_coder    78.000 i/100ms
                  oj    33.000 i/100ms
Calculating -------------------------------------
                json    758.135 (±22.7%) i/s    (1.32 ms/i) -      3.657k in   5.114664s
          json_coder    800.957 (±11.5%) i/s    (1.25 ms/i) -      3.978k in   5.044465s
                  oj    343.750 (±11.9%) i/s    (2.91 ms/i) -      1.683k in   5.004571s

Comparison:
                json:      758.1 i/s
          json_coder:      801.0 i/s - same-ish: difference falls within error
                  oj:      343.7 i/s - 2.21x  slower


== Encoding mostly utf8 (5001001 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
                json    59.000 i/100ms
          json_coder    53.000 i/100ms
                  oj    37.000 i/100ms
Calculating -------------------------------------
                json    828.807 (±15.1%) i/s    (1.21 ms/i) -      4.071k in   5.060739s
          json_coder    799.688 (±20.1%) i/s    (1.25 ms/i) -      3.816k in   5.019480s
                  oj    364.514 (± 7.1%) i/s    (2.74 ms/i) -      1.850k in   5.100773s

Comparison:
                json:      828.8 i/s
          json_coder:      799.7 i/s - same-ish: difference falls within error
                  oj:      364.5 i/s - 2.27x  slower

I am still working on this, but I wanted to share progress.

Edit: Looks like I missed one commit, so I'll have to resolve some merge conflicts.

@byroot
Member

byroot commented Feb 3, 2025

The gains seem to be 7% on real-world benchmarks:

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after     2.438k i/100ms
Calculating -------------------------------------
               after     24.763k (± 0.8%) i/s   (40.38 μs/i) -    124.338k in   5.021560s

Comparison:
              before:    23166.2 i/s
               after:    24762.5 i/s - 1.07x  faster


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after   254.000 i/100ms
Calculating -------------------------------------
               after      2.600k (± 1.3%) i/s  (384.61 μs/i) -     13.208k in   5.080852s

Comparison:
              before:     2439.5 i/s
               after:     2600.0 i/s - 1.07x  faster

Also note that I did one more refactoring to make the introduction of SIMD easier, so you still have a conflict.

Comment on lines 20 to 26
uint8x16x4_t load_uint8x16_4(const unsigned char *table, int offset) {
    uint8x16x4_t tab;
    for (int i = 0; i < 4; i++) {
        tab.val[i] = vld1q_u8(table + offset + (i * 16));
    }
    return tab;
}
Author

Unfortunately it's not equivalent: vld4q_u8 interleaves the data among the 4 vector registers.

% cat load-test.c 
#include <stdio.h>
#include <stdint.h>
#include <arm_neon.h>

void print_vec(char *msg, uint8x16_t vec) {
  printf("%s\n[ ", msg);
  uint8_t store[16] = {0};
  vst1q_u8(store, vec);
  for(int i=0; i<16; i++) {
    printf("%3d ", store[i]);
  }
  printf("]\n");
}

uint8x16x4_t load_table(uint8_t *table, int offset) {
  uint8x16x4_t tab;
  for(int i=0; i<4; i++) {
    tab.val[i] = vld1q_u8(table+offset+(i*16));
  }
  return tab;
}

int main(void) {
  uint8_t table[256];
  
  for(int i=0; i<256; i++) {
    table[i] = i;
  }

  uint8x16x4_t tab1 = load_table(table, 0);

  print_vec("tab1.val[0]", tab1.val[0]);
  print_vec("tab1.val[1]", tab1.val[1]);
  print_vec("tab1.val[2]", tab1.val[2]);
  print_vec("tab1.val[3]", tab1.val[3]);

  printf("\n");
  uint8x16x4_t tab1_2 = vld4q_u8(table);
  print_vec("tab1_2.val[0]", tab1_2.val[0]);
  print_vec("tab1_2.val[1]", tab1_2.val[1]);
  print_vec("tab1_2.val[2]", tab1_2.val[2]);
  print_vec("tab1_2.val[3]", tab1_2.val[3]);
  
  return 0;
}
% ./load-test 
tab1.val[0]
[   0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15 ]
tab1.val[1]
[  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31 ]
tab1.val[2]
[  32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47 ]
tab1.val[3]
[  48  49  50  51  52  53  54  55  56  57  58  59  60  61  62  63 ]

tab1_2.val[0]
[   0   4   8  12  16  20  24  28  32  36  40  44  48  52  56  60 ]
tab1_2.val[1]
[   1   5   9  13  17  21  25  29  33  37  41  45  49  53  57  61 ]
tab1_2.val[2]
[   2   6  10  14  18  22  26  30  34  38  42  46  50  54  58  62 ]
tab1_2.val[3]
[   3   7  11  15  19  23  27  31  35  39  43  47  51  55  59  63 ]

Member

Wow, that's so weird.

Well, maybe that loop should be unrolled then; I suspect the compiler already does it, but we might as well be explicit.
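For reference, an explicitly unrolled version, equivalent to the loop above, might look like this:

uint8x16x4_t load_uint8x16_4(const unsigned char *table, int offset) {
    uint8x16x4_t tab;
    tab.val[0] = vld1q_u8(table + offset);      // bytes  0..15
    tab.val[1] = vld1q_u8(table + offset + 16); // bytes 16..31
    tab.val[2] = vld1q_u8(table + offset + 32); // bytes 32..47
    tab.val[3] = vld1q_u8(table + offset + 48); // bytes 48..63
    return tab;
}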

@byroot
Member

byroot commented Feb 3, 2025

Can you just include the implementation for the regular escaping? I'm not sure the script-safe version is quite worth it.

Comment on lines 392 to 398
if ((ch_len = search_escape_basic_neon_advance_lut(search)) != 0) {
return ch_len;
}

// if ((ch_len = search_escape_basic_neon_advance_rules(search)) != 0) {
// return ch_len;
// }
Author

Seems like it's a toss-up which one is best. It might be an artifact of my M1 MacBook Air being passively cooled; it gets warm after I run the benchmarks over and over.

@samyron
Author

samyron commented Feb 6, 2025

Comparison between master and this branch on real-world benchmarks. This is for the lookup table implementation.

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after     2.027k i/100ms
Calculating -------------------------------------
               after     21.413k (± 1.6%) i/s   (46.70 μs/i) -    107.431k in   5.018339s

Comparison:
              before:    14448.8 i/s
               after:    21412.9 i/s - 1.48x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   110.000 i/100ms
Calculating -------------------------------------
               after      1.098k (± 1.2%) i/s  (910.41 μs/i) -      5.500k in   5.007977s

Comparison:
              before:      993.9 i/s
               after:     1098.4 i/s - 1.11x  faster


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   216.000 i/100ms
Calculating -------------------------------------
               after      2.086k (± 8.9%) i/s  (479.31 μs/i) -     10.368k in   5.034983s

Comparison:
              before:     1642.1 i/s
               after:     2086.3 i/s - 1.27x  faster

Running it a second time:

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after     2.042k i/100ms
Calculating -------------------------------------
               after     21.400k (± 1.7%) i/s   (46.73 μs/i) -    108.226k in   5.058877s

Comparison:
              before:    15039.4 i/s
               after:    21399.7 i/s - 1.42x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   109.000 i/100ms
Calculating -------------------------------------
               after      1.094k (± 1.2%) i/s  (913.67 μs/i) -      5.559k in   5.079778s

Comparison:
              before:     1005.4 i/s
               after:     1094.5 i/s - 1.09x  faster


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   215.000 i/100ms
Calculating -------------------------------------
               after      2.137k (± 5.5%) i/s  (467.91 μs/i) -     10.750k in   5.050467s

Comparison:
              before:     1639.0 i/s
               after:     2137.1 i/s - 1.30x  faster

…e only need 128 bytes for the lookup table as the top 128 bytes are all zeros.
@byroot
Member

byroot commented Feb 6, 2025

Not sure why, but it's way more modest on my machine (Air M3):

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after     2.603k i/100ms
Calculating -------------------------------------
               after     26.544k (± 1.8%) i/s   (37.67 μs/i) -    132.753k in   5.002890s

Comparison:
              before:    23370.1 i/s
               after:    26543.7 i/s - 1.14x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after   136.000 i/100ms
Calculating -------------------------------------
               after      1.368k (± 0.7%) i/s  (730.98 μs/i) -      6.936k in   5.070329s

Comparison:
              before:     1369.9 i/s
               after:     1368.0 i/s - same-ish: difference falls within error


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after   269.000 i/100ms
Calculating -------------------------------------
               after      2.702k (± 0.3%) i/s  (370.11 μs/i) -     13.719k in   5.077550s

Comparison:
              before:     2475.0 i/s
               after:     2701.9 i/s - 1.09x  faster

@samyron
Author

samyron commented Feb 10, 2025

Apologies for going dark for a while. I've been trying to make incremental improvements on a different branch (found here). My hope was that using a move mask would be faster than vmaxvq_u8 for determining whether any byte needs to be escaped. It also has the benefit of not needing to store all of the candidate matches: all that's needed is a uint64_t indicating which bytes need to be escaped. Unfortunately, on my machine it didn't seem to make much of a difference.

Feel free to try it out though.
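For context, NEON has no direct equivalent of SSE2's _mm_movemask_epi8; a common substitute (and presumably what the branch above refers to) is the narrowing-shift idiom, which packs a byte-wise comparison mask into a uint64_t with 4 bits per byte:

#include <arm_neon.h>
#include <stdint.h>

// Collapse a per-byte mask (each lane 0x00 or 0xFF) into a 64-bit scalar.
// Each input byte contributes 4 bits, so the first match sits at ctz/4.
static inline uint64_t neon_movemask(uint8x16_t mask) {
    uint8x8_t nibbles = vshrn_n_u16(vreinterpretq_u16_u8(mask), 4);
    return vget_lane_u64(vreinterpret_u64_u8(nibbles), 0);
}

// Usage sketch:
//   uint64_t m = neon_movemask(needs_escape);
//   if (m) { size_t first_match = __builtin_ctzll(m) >> 2; /* ... */ }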

@byroot
Member

byroot commented Feb 10, 2025

Apologies for going dark for a while

That's no worries at all. I want to release a 2.10.0 with the current change on master, but I'm pairing with Étienne on making sure we have no blind spots on JSON::Coder. So probably gonna happen this week.

After that I think I can start merging some SIMD stuff. I'd like to go with the smallest possible useful SIMD acceleration to ensure it doesn't cause issues for people. If it works well, we can then go further. So yeah, no rush.

@samyron
Author

samyron commented Feb 11, 2025

@byroot if you have a few minutes, would you be able to check out this branch and benchmark it against master? You'll have to tweak your compare script a bit to compile this branch with cmd("bundle", "exec", "rake", "clean", "compile", "--", "--disable-generator-use-simd"). I want to see how your M3 compares with my M1.

This branch falls back to the bit-twiddling, platform-agnostic SIMD code if the real SIMD code is disabled via an extconf.rb flag.

The results on my M1:

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after     1.944k i/100ms
Calculating -------------------------------------
               after     19.671k (± 2.5%) i/s   (50.84 μs/i) -     99.144k in   5.043309s

Comparison:
              before:    15135.7 i/s
               after:    19670.9 i/s - 1.30x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   113.000 i/100ms
Calculating -------------------------------------
               after      1.109k (± 2.1%) i/s  (901.49 μs/i) -      5.650k in   5.095561s

Comparison:
              before:     1040.1 i/s
               after:     1109.3 i/s - 1.07x  faster


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   204.000 i/100ms
Calculating -------------------------------------
               after      2.006k (± 3.8%) i/s  (498.51 μs/i) -     10.200k in   5.092718s

Comparison:
              before:     1687.4 i/s
               after:     2006.0 i/s - 1.19x  faster

@byroot
Member

byroot commented Feb 12, 2025

With that compilation flag and compared to master:

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after     2.326k i/100ms
Calculating -------------------------------------
               after     23.218k (± 1.6%) i/s   (43.07 μs/i) -    116.300k in   5.010271s

Comparison:
              before:    22460.3 i/s
               after:    23218.0 i/s - 1.03x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after   132.000 i/100ms
Calculating -------------------------------------
               after      1.290k (± 1.4%) i/s  (775.38 μs/i) -      6.468k in   5.016121s

Comparison:
              before:     1323.6 i/s
               after:     1289.7 i/s - same-ish: difference falls within error


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
               after   242.000 i/100ms
Calculating -------------------------------------
               after      2.495k (± 0.6%) i/s  (400.84 μs/i) -     12.584k in   5.044306s

Comparison:
              before:     2449.6 i/s
               after:     2494.8 i/s - 1.02x  faster

@samyron
Author

samyron commented Feb 25, 2025

From a co-worker with an M4 Pro:

== Encoding activitypub.json (52595 bytes)
ruby 3.2.6 (2024-10-30 revision 63aeb018eb) [arm64-darwin24]
Warming up --------------------------------------
               after     2.876k i/100ms
Calculating -------------------------------------
               after     28.251k (± 3.0%) i/s   (35.40 μs/i) -    143.800k in   5.095128s

Comparison:
              before:    24938.2 i/s
               after:    28251.0 i/s - 1.13x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.2.6 (2024-10-30 revision 63aeb018eb) [arm64-darwin24]
Warming up --------------------------------------
               after   154.000 i/100ms
Calculating -------------------------------------
               after      1.516k (± 2.9%) i/s  (659.57 μs/i) -      7.700k in   5.083078s

Comparison:
              before:     1575.4 i/s
               after:     1516.1 i/s - same-ish: difference falls within error


== Encoding twitter.json (466906 bytes)
ruby 3.2.6 (2024-10-30 revision 63aeb018eb) [arm64-darwin24]
Warming up --------------------------------------
               after   295.000 i/100ms
Calculating -------------------------------------
               after      2.933k (± 3.3%) i/s  (340.94 μs/i) -     14.750k in   5.034796s

Comparison:
              before:     2678.2 i/s
               after:     2933.0 i/s - 1.10x  faster

@samyron
Author

samyron commented Feb 26, 2025

From another co-worker with an M1 Pro:

== Encoding activitypub.json (52595 bytes)
ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after     2.166k i/100ms
Calculating -------------------------------------
               after     21.521k (± 1.2%) i/s   (46.47 μs/i) -    108.300k in   5.032957s

Comparison:
              before:    15231.1 i/s
               after:    21521.3 i/s - 1.41x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   108.000 i/100ms
Calculating -------------------------------------
               after      1.062k (± 5.5%) i/s  (941.69 μs/i) -      5.400k in   5.103989s

Comparison:
              before:     1013.4 i/s
               after:     1061.9 i/s - same-ish: difference falls within error


== Encoding twitter.json (466906 bytes)
ruby 3.4.2 (2025-02-15 revision d2930f8e7a) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   219.000 i/100ms
Calculating -------------------------------------
               after      2.061k (±12.8%) i/s  (485.22 μs/i) -     10.074k in   5.040974s

Comparison:
              before:     1677.4 i/s
               after:     2060.9 i/s - 1.23x  faster

@radiospiel
Contributor

@samyron

I just pushed PR #769 to this repo, which also employs SIMD to speed up string escapes. I am really sorry that we both worked in that area at the same time; after I started my work I didn't check back with this repo for a while (and I should have).

I believe the main difference between my PR and yours is that mine supports x86 as well. It does this by using a cross-platform shim, simd.h, from Postgres, which comes with implementations for AVX and Neon (and also plain C). Still, on Neon I see somewhat higher gains than those reported here; however, I don't understand where that difference comes from.

I want to suggest we collaborate on getting SIMD support in, one way or another. 👋

@samyron
Author

samyron commented Mar 18, 2025

Hi @radiospiel, I'll take a look at #769. I originally started working on #730, which supports Neon, SSE 4.2, and AVX2 with runtime detection. That PR got a bit big, so I decided to close it and implement each instruction set individually.

Additionally, @byroot refactored the code quite a bit, which makes implementing SIMD much easier. There are two implementations in this PR: one uses a lookup table and the other is rules-based. Both seem to have similar performance on my machine.

On my machine I see an 11%-48% improvement depending on the benchmark. A few of my co-workers saw various speedups depending on their machines.

I should probably mark this PR as "Ready for Review". However, I'm happy to collaborate on either this or your PR.

Edit: oh yeah, there is an old-school bit-twiddling SIMD approach in pure C: #738

@samyron samyron marked this pull request as ready for review March 18, 2025 01:34
@radiospiel
Contributor

radiospiel commented Mar 18, 2025

Thank you, @samyron.

I became painfully aware of the work you did when I tried to merge master into my branch, because the interfaces of the escape functions had changed; my implementation relies on an "escape me a uchar[] array into an fbuffer" entry point, which is no longer available with what's in master today :)

The main difference between your approach and mine is that you switch out the search functionality depending on the availability of SIMD, while I switch out the SIMD primitives instead. This lets me have working implementations for x86, ARM, and bit-twiddling, but only a handful of primitives are available because NEON and AVX are different; your approach should therefore allow for optimal per-hardware implementations.

I have a busy week ahead of me, but I will definitely take a look at the end of the week. I will also benchmark on Graviton instances; most ARM server workloads are probably not on an Apple Silicon CPU after all :) Happy to benchmark this PR as well.

Can you share a benchmark script that produces the most useful output for you? I would be especially interested in understanding how you get the "before" and "after" entries in the benchmark output :)

Speaking of benchmarks:

On my machine I see a 11x-48x improvement depending on the benchmark.

This is magnitudes more than the numbers posted here. I have seen a 48% posted above (on the activitypub testcase), so is this a typo for x%?
The activitypub testcase, apparently, lends itself particularly well to SIMD; I see a speedup of ~82% on that (Apple M1).

@samyron
Author

samyron commented Mar 18, 2025

This is magnitudes more than the numbers posted here. I have seen a 48% posted above (on the activitypub testcase), so is this a typo for x%? The activitypub testcase, apparently, lends itself particularly well to SIMD; I see a speedup of ~82% on that (Apple M1).

Apologies, yes, that was a typo. I'll fix it in the comment above.

@radiospiel
Contributor

@samyron I reran the benchmarks (link). Both our PRs show a substantial improvement over the baseline; the only significant difference is on short strings.

Encoding Type       json 2.10.2        samyron            radiospiel
strings.ascii       13.046k (± 1.6%)   29.681k (± 1.9%)   33.583k (± 3.0%)
strings.escapes      4.608k (± 1.9%)   10.765k (± 2.2%)    9.681k (± 2.5%)
strings.mixed       32.971k (± 1.4%)   88.580k (± 2.1%)   90.133k (± 3.2%)
strings.multibyte   32.836k (± 2.0%)   89.385k (± 3.0%)   89.475k (± 2.1%)
strings.short       91.819k (± 9.8%)   95.388k (± 2.5%)  133.008k (± 2.6%)
strings.tests       21.350k (± 4.1%)   22.538k (± 2.7%)   22.600k (± 2.5%)

strings.short is a test on a 13-byte string, ("b" * 5) + "€" + ("a" * 5), which is shorter than the size of the SIMD buffer (which in my case is 16 bytes).

I believe such short strings are relevant, because JSON object keys are probably quite often shorter than 16 bytes; my PR applies SIMD to strings of 8 bytes and more (link). (The value of 8 seemed beneficial and looked nice, but I should probably retest this with smaller values.)

Maybe you could support that as well?

@radiospiel
Contributor

radiospiel commented Mar 23, 2025

@byroot we have two competing implementations of the same approach. While mine is probably more beneficial in the short term (because it also supports x86), I believe that @samyron's approach has more future potential, because it allows handcrafted SIMD implementations that are fundamentally different between NEON and SSE2 (and it can certainly be extended to also support shorter strings, see the comment above).

Also, transplanting an x86 implementation from my PR into @samyron's shouldn't be too hard to achieve.

I see the following alternatives:

  • we scrap mine, @samyron adds support for shorter strings, and, in a follow-up, we transplant SSE2 into @samyron's;
  • we merge mine, with the understanding that @samyron's will be merged in at a later point, with SSE2 support right out of the box; mine would then be removed again.

What do you all think about that? ☝️

#ifdef ENABLE_SIMD

#if defined(__ARM_NEON) || defined(__ARM_NEON__) || defined(__aarch64__) || defined(_M_ARM64)
#include <arm_neon.h>
Contributor

The internet is convinced that no ARM64s lack NEON support, so maybe this is not necessary. For example, here: https://github.com/postgres/postgres/blob/REL_17_4/src/include/port/simd.h#L38
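If that holds, the guard could presumably be reduced to the 64-bit checks alone, in line with the Postgres header linked above:

/* Assumes every AArch64 target has NEON, as Postgres' simd.h does. */
#if defined(__aarch64__) || defined(_M_ARM64)
#include <arm_neon.h>
#endif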

@byroot
Member

byroot commented Apr 7, 2025

Indeed. Also, I run my benchmarks with YJIT on; it's minor, but it helps reduce the Ruby baseline.

@samyron
Author

samyron commented Apr 7, 2025

The Windows and Ubuntu failures with Ruby 2.7 seem legit:

compiling ../../../../../../ext/json/ext/generator/generator.c
../../../../../../ext/json/ext/generator/generator.c:124:2: error: #error "Unknown SIMD Implementation."
  124 | #error "Unknown SIMD Implementation."
      |  ^~~~~
../../../../../../ext/json/ext/generator/generator.c: In function ‘generate_json_string’:
../../../../../../ext/json/ext/generator/generator.c:1378:11: error: ‘search_state’ {aka ‘struct _search_state’} has no member named ‘matches_mask’
 1378 |     search.matches_mask = 0;
      |           ^
gmake: *** [Makefile:246: generator.o] Error 1

Interesting. Strange that it failed only on 2.7. I'll see if I can re-create the error locally. I think I can easily fix it by removing the #error case and using a uint16_t by default. My suspicion is that something isn't correct in simd.h.

@radiospiel
Contributor

radiospiel commented Apr 7, 2025 via email

@samyron
Author

samyron commented Apr 7, 2025

I was able to replicate the error on Ruby 2.7. That should now be fixed. I also made a few small changes to the SSE algorithm and ran some benchmarks on my x86 laptop, which has an Intel Core i7-8850H CPU.

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [x86_64-linux]
Warming up --------------------------------------
               after     1.517k i/100ms
Calculating -------------------------------------
               after     18.801k (± 2.4%) i/s   (53.19 μs/i) -     94.054k in   5.005615s

Comparison:
              before:    15053.3 i/s
               after:    18801.4 i/s - 1.25x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [x86_64-linux]
Warming up --------------------------------------
               after    81.000 i/100ms
Calculating -------------------------------------
               after    800.051 (± 3.2%) i/s    (1.25 ms/i) -      4.050k in   5.068088s

Comparison:
              before:      829.1 i/s
               after:      800.1 i/s - same-ish: difference falls within error


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [x86_64-linux]
Warming up --------------------------------------
               after   176.000 i/100ms
Calculating -------------------------------------
               after      1.758k (± 3.4%) i/s  (568.89 μs/i) -      8.800k in   5.013033s

Comparison:
              before:     1524.1 i/s
               after:     1757.8 i/s - 1.15x  faster

@eno-stripe

@samyron you have a couple of lines where you iterate over blocks with a check like

while (search->ptr+sizeof(uint8x16_t) < search->end) {

but I think <= is correct. Can you take a look?

@samyron
Author

samyron commented Apr 8, 2025

@samyron you have a couple of lines where you iterate over blocks with a check like

while (search->ptr+sizeof(uint8x16_t) < search->end) {

but I think <= is correct. Can you take a look?

* https://github.com/ruby/json/pull/743/files#diff-2bb51be932dec14923f6eb515f24b1b593737f0d3f8e76eeecf58cff3052819fR303

* https://github.com/ruby/json/pull/743/files#diff-2bb51be932dec14923f6eb515f24b1b593737f0d3f8e76eeecf58cff3052819fR409

* https://github.com/ruby/json/pull/743/files#diff-2bb51be932dec14923f6eb515f24b1b593737f0d3f8e76eeecf58cff3052819fR564

Admittedly, I did not think <= was correct, but after some testing, I believe you are right. I was worried that by using <= we would read the 0 byte pointed to by search->end. Apologies for the doubt.

I wrote a test to show it's safe:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arm_neon.h>

void print_uint8x16_t(char *msg, uint8x16_t vec) {
  printf("%s\n[ ", msg);
  uint8_t store[16] = {0};
  vst1q_u8(store, vec);
  for(int i=0; i<16; i++) {
    printf("%3d ", store[i]);
  }
  printf("]\n");
}

int main(void) {
  char *test = "aaaaaaaaaaaaaaaa";
  size_t len = strlen(test);
  printf("test = %s, len=%lu\n", test, len);

  char *end = test + len;
  printf("start = %p, end = %p\n", test, end);

  for(int i=0; i<len; i++) {
    printf("test+%d = %p\n", i, test+i);
  }

  printf("test+sizeof(uint8x16_t) == end = %d\n", test+sizeof(uint8x16_t) == end);

  uint8x16_t chunk = vld1q_u8((const unsigned char *) test);
  print_uint8x16_t("chunk:", chunk);

  return 0;
}

Executing it:

scott@Scotts-MacBook-Air json % ./len-test 
test = aaaaaaaaaaaaaaaa, len=16
start = 0x104143f32, end = 0x104143f42
test+0 = 0x104143f32
test+1 = 0x104143f33
test+2 = 0x104143f34
test+3 = 0x104143f35
test+4 = 0x104143f36
test+5 = 0x104143f37
test+6 = 0x104143f38
test+7 = 0x104143f39
test+8 = 0x104143f3a
test+9 = 0x104143f3b
test+10 = 0x104143f3c
test+11 = 0x104143f3d
test+12 = 0x104143f3e
test+13 = 0x104143f3f
test+14 = 0x104143f40
test+15 = 0x104143f41
test+sizeof(uint8x16_t) == end = 1
chunk:
[  97  97  97  97  97  97  97  97  97  97  97  97  97  97  97  97 ]

This was further confirmed by looking at the output of samply when running the following benchmark with the < comparison.

benchmark_encoding "bytes.16.bestcase", ([("a" * 16)] * 10000)

It was spending all of its time in the fallback code.

@samyron
Author

samyron commented Apr 8, 2025

I created some best-case and worst-case benchmarks comparing master to this branch.

The benchmarks:

benchmark_encoding "bytes.16.bestcase", ([("a" * 16)] * 10000)
benchmark_encoding "bytes.16.worstcase", ([('"' * 16)] * 10000)

benchmark_encoding "bytes.32.bestcase", ([("a" * 32)] * 10000)
benchmark_encoding "bytes.32.worstcase", ([('"' * 32)] * 10000)

benchmark_encoding "bytes.15.bestcase", ([("a" * 15)] * 10000)
benchmark_encoding "bytes.15.worstcase", ([('"' * 15)] * 10000)

The results:

== Encoding bytes.16.bestcase (190001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   703.000 i/100ms
Calculating -------------------------------------
               after      7.263k (± 2.3%) i/s  (137.69 μs/i) -     36.556k in   5.035957s

Comparison:
              before:     4287.8 i/s
               after:     7262.9 i/s - 1.69x  faster


== Encoding bytes.16.worstcase (350001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   129.000 i/100ms
Calculating -------------------------------------
               after      1.286k (± 1.4%) i/s  (777.35 μs/i) -      6.450k in   5.014904s

Comparison:
              before:     2459.7 i/s
               after:     1286.4 i/s - 1.91x  slower


== Encoding bytes.32.bestcase (350001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   612.000 i/100ms
Calculating -------------------------------------
               after      5.944k (± 2.9%) i/s  (168.25 μs/i) -     29.988k in   5.049904s

Comparison:
              before:     2834.9 i/s
               after:     5943.6 i/s - 2.10x  faster


== Encoding bytes.32.worstcase (670001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after    65.000 i/100ms
Calculating -------------------------------------
               after    653.196 (± 0.9%) i/s    (1.53 ms/i) -      3.315k in   5.075502s

Comparison:
              before:     1276.0 i/s
               after:      653.2 i/s - 1.95x  slower


== Encoding bytes.15.bestcase (180001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   669.000 i/100ms
Calculating -------------------------------------
               after      6.624k (± 1.7%) i/s  (150.97 μs/i) -     33.450k in   5.051427s

Comparison:
              before:     4104.1 i/s
               after:     6623.8 i/s - 1.61x  faster


== Encoding bytes.15.worstcase (330001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after    97.000 i/100ms
Calculating -------------------------------------
               after    992.471 (± 1.5%) i/s    (1.01 ms/i) -      5.044k in   5.083382s

Comparison:
              before:     2574.8 i/s
               after:      992.5 i/s - 2.59x  slower

Note: I definitely do not like how much slower this branch is in the worst case. I also realize these are not real-world benchmarks.

@radiospiel
Contributor

radiospiel commented Apr 8, 2025 via email

@radiospiel
Contributor

radiospiel commented Apr 8, 2025

For some context: I just ran your best-case and worst-case test cases on my branch, and I see the same (Apple M1). I also added a 7-byte test case, and this also gets slower, even though there is no SIMD involved in that case.

Test                 local (i/s)   2.10.2 (i/s)   Speed Ratio
bytes.16.bestcase          7,291          5,210         1.40x
bytes.16.worstcase         1,154          2,717         0.42x
bytes.15.bestcase          6,685          4,754         1.41x
bytes.15.worstcase         1,128          2,791         0.40x
bytes.32.bestcase          5,622          3,703         1.52x
bytes.32.worstcase           580          1,404         0.41x
bytes.7.bestcase           6,086          5,963         1.02x
bytes.7.worstcase          2,480          4,845         0.51x

It might be cool to understand what makes this slower than 2.10.2 (since, under the hood, your code and mine are more or less equivalent).

@samyron
Author

samyron commented Apr 9, 2025

I figured it out. The issue was with the search_escape_basic_impl function pointer. It was defined as unsigned char (*search_escape_basic_impl)(search_state *);.

I changed the definition to static unsigned char (*search_escape_basic_impl)(search_state *); and the results are much better on most of the worst-case synthetic benchmarks:

== Encoding bytes.16.bestcase (190001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   712.000 i/100ms
Calculating -------------------------------------
               after      7.116k (± 4.0%) i/s  (140.52 μs/i) -     35.600k in   5.011076s

Comparison:
              before:     4082.0 i/s
               after:     7116.3 i/s - 1.74x  faster


== Encoding bytes.16.worstcase (350001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   249.000 i/100ms
Calculating -------------------------------------
               after      2.519k (± 1.4%) i/s  (396.95 μs/i) -     12.699k in   5.041860s

Comparison:
              before:     2319.4 i/s
               after:     2519.2 i/s - 1.09x  faster


== Encoding bytes.32.bestcase (350001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   616.000 i/100ms
Calculating -------------------------------------
               after      6.093k (± 2.2%) i/s  (164.13 μs/i) -     30.800k in   5.057707s

Comparison:
              before:     2667.5 i/s
               after:     6092.7 i/s - 2.28x  faster


== Encoding bytes.32.worstcase (670001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   144.000 i/100ms
Calculating -------------------------------------
               after      1.435k (± 1.5%) i/s  (696.89 μs/i) -      7.200k in   5.018776s

Comparison:
              before:     1213.3 i/s
               after:     1435.0 i/s - 1.18x  faster


== Encoding bytes.15.bestcase (180001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   644.000 i/100ms
Calculating -------------------------------------
               after      6.465k (± 4.8%) i/s  (154.68 μs/i) -     32.844k in   5.092569s

Comparison:
              before:     3845.7 i/s
               after:     6465.0 i/s - 1.68x  faster


== Encoding bytes.15.worstcase (330001 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after    71.000 i/100ms
Calculating -------------------------------------
               after      1.093k (± 3.6%) i/s  (914.96 μs/i) -      5.467k in   5.008682s

Comparison:
              before:     2439.0 i/s
               after:     1092.9 i/s - 2.23x  slower

The real-world benchmarks are a bit better.

== Encoding activitypub.json (52595 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after     2.312k i/100ms
Calculating -------------------------------------
               after     23.230k (± 1.8%) i/s   (43.05 μs/i) -    117.912k in   5.077498s

Comparison:
              before:    14564.1 i/s
               after:    23230.4 i/s - 1.60x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   109.000 i/100ms
Calculating -------------------------------------
               after      1.086k (± 1.2%) i/s  (921.22 μs/i) -      5.450k in   5.021387s

Comparison:
              before:      972.7 i/s
               after:     1085.5 i/s - 1.12x  faster


== Encoding twitter.json (466906 bytes)
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
               after   231.000 i/100ms
Calculating -------------------------------------
               after      2.358k (± 1.7%) i/s  (424.07 μs/i) -     12.012k in   5.095391s

Comparison:
              before:     1625.2 i/s
               after:     2358.1 i/s - 1.45x  faster

@radiospiel
Contributor

radiospiel commented Apr 9, 2025

I figured it out. The issue was with the search_escape_basic_impl function pointer. It was defined as unsigned char (*search_escape_basic_impl)(search_state *);. ...I changed the definition to static unsigned char (*search_escape_basic_impl)(search_state *);

Wow, I had not expected that to make a difference. Great find!
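One plausible explanation (an assumption, not verified against the generated code): with external linkage the compiler must assume other translation units can read or reassign the pointer, so every call stays a fully indirect call through memory, while internal linkage lets the optimizer see every assignment and treat the target as effectively constant.

/* Before: external linkage -- the pointer is a visible global symbol, so the
   compiler reloads it conservatively and cannot prove what it points to. */
unsigned char (*search_escape_basic_impl)(search_state *);

/* After: internal linkage -- all assignments are local to this translation
   unit, so the call can be devirtualized or the pointer kept in a register. */
static unsigned char (*search_escape_basic_impl)(search_state *);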

I was running your latest commit on an Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz. I can confirm that worst-case performance is no longer adversely affected on x86 either.

Here are the numbers for the strings benchmarks; this is on Ruby 3.3. (I can't compare against 2.10.2 or a newer Ruby, because those are not available there.)

JSON Version   Encoding (i/s)
2.3.1                  87.569
2.7.1                  83.429
2.9.1                 254.117
local                 333.684

so a 30% speedup on string-focused benchmark cases.

@byroot
Member

byroot commented Apr 9, 2025

Somehow, citm_catalog.json is still 9% slower than master on my machine.

I get that checking that we're not going out of bounds slows things down a bit, but 9% seems a bit surprising.

@radiospiel
Contributor

As for code review, I wanted to ask about two things:

  • some of the search_escape_basic_XXX functions are inline, others are not. Is that intentional?
  • in each of the implementations we now iterate over vector-sized blocks, and then do the "fill up the vector with 'X' and test again" part, and in both locations the SIMD instructions that do the actual check are repeated (e.g. here and here). Can we extract these into individual inlined methods?

@samyron
Author

samyron commented Apr 9, 2025

As for code review, I wanted to ask about two things:

* some of the `search_escape_basic_XXX` functions are inline, others are not. Is that intentional?

* in each of the implementations we now iterate over vector-sized blocks, and then do the "fill up the vector with 'X' and test again" part, and in both locations the SIMD instructions that do the actual check are repeated (e.g. [here](https://github.com/ruby/json/pull/743/files#diff-2bb51be932dec14923f6eb515f24b1b593737f0d3f8e76eeecf58cff3052819fR304-R307) and [here](https://github.com/ruby/json/pull/743/files#diff-2bb51be932dec14923f6eb515f24b1b593737f0d3f8e76eeecf58cff3052819fR335-R337)). Can we extract these into individual inlined methods?

Thank you! Done.

@radiospiel
Contributor

Somehow, citm_catalog.json is still 9% slower than master on my machine

Is this PR rebased off the current master? Maybe the test case benefits from other optimizations that have landed in master but are not present here?

Comment on lines 275 to 279
memset(s, 'X', len);

// Optimistically copy the remaining 'len' characters to the output FBuffer. If there are no characters
// to escape, then everything ends up in the correct spot. Otherwise it was convenient temporary storage.
memcpy(s, search->ptr, len);
Member

I don't understand what this memset's purpose is, given we memcpy to the same pointer with the same length right after.

If I comment it out, the test suite still passes on my machine. Is it some sort of leftover?

Author

That's a bug/typo. It was supposed to be memset(s, 'X', vec_len);. The goal is to ensure that anything past len bytes does not need to be escaped.
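In other words, the intended tail handling looks like this (a sketch of the fix, assuming s points at at least vec_len writable bytes in the output buffer):

memset(s, 'X', vec_len);      // pad the full vector width with a benign byte
memcpy(s, search->ptr, len);  // overlay the real remaining 'len' bytes

// A full 16-byte load is now safe, and the 'X' padding lanes can never be
// flagged as needing an escape.
uint8x16_t chunk = vld1q_u8((const uint8_t *) s);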

@byroot
Member

byroot commented Apr 9, 2025

Is this PR rebased off the current master?

No. But I tried to merge master into it in case you were right, and I still have a 10% slowdown on citm_catalog.json.

@radiospiel
Contributor

I see a 3% performance improvement (though within the error range) with this branch. These are the numbers I see on citm_catalog (Apple M1, 3.4 + YJIT):

2.10.2: 1.074k (± 1.7%) i/s  (930.68 μs/i) -      5.457k in   5.080131s
master: 1.276k (± 1.7%) i/s  (783.94 μs/i) -      6.477k in   5.079093s
branch: 1.194k (± 1.7%) i/s  (837.32 μs/i) -      6.069k in   5.083191s
branch w/merged master: 1.318k (± 1.7%) i/s  (758.59 μs/i) -      6.681k in   5.069660s

When I change the 8-byte limit to 6 (i.e. we SIMD even strings of only 6 bytes), I get

branch w/merged master: 1.351k (± 1.7%) i/s  (740.25 μs/i) -      6.800k in   5.035171s

and with 4 I get

branch w/merged master: 1.372k (± 1.6%) i/s  (729.12 μs/i) -      6.885k in   5.021282s 

Finding the optimum for this limit might be heavily CPU-model-specific, though, so I don't know how far we want to go there.

@samyron
Author

samyron commented Apr 10, 2025

When there isn't a full vector's width of data left, but at least SIMD_MINIMUM_THRESHOLD bytes remain (currently set to 6), the code now uses neon_next_match to locate matches instead of falling back to search_escape_basic.

Edit: Updated the code for SSE2 too.

The real-world benchmarks on my M1 are now:

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after     2.568k i/100ms
Calculating -------------------------------------
               after     26.384k (± 1.8%) i/s   (37.90 μs/i) -    133.536k in   5.062980s

Comparison:
              before:    15443.5 i/s
               after:    26383.9 i/s - 1.71x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   124.000 i/100ms
Calculating -------------------------------------
               after      1.253k (± 1.0%) i/s  (797.88 μs/i) -      6.324k in   5.046229s

Comparison:
              before:      989.7 i/s
               after:     1253.3 i/s - 1.27x  faster


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   269.000 i/100ms
Calculating -------------------------------------
               after      2.628k (± 1.3%) i/s  (380.56 μs/i) -     13.181k in   5.017108s

Comparison:
              before:     1705.5 i/s
               after:     2627.7 i/s - 1.54x  faster

@samyron
Author

samyron commented Apr 10, 2025

I now have access to an M4 Pro. Here are the current real-world results compared to master.

== Encoding activitypub.json (52595 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after     3.620k i/100ms
Calculating -------------------------------------
               after     36.214k (± 2.5%) i/s   (27.61 μs/i) -    181.000k in   5.001447s

Comparison:
              before:    27564.7 i/s
               after:    36214.1 i/s - 1.31x  faster


== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   182.000 i/100ms
Calculating -------------------------------------
               after      1.820k (± 2.1%) i/s  (549.34 μs/i) -      9.282k in   5.101445s

Comparison:
              before:     1717.0 i/s
               after:     1820.4 i/s - same-ish: difference falls within error


== Encoding twitter.json (466906 bytes)
ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +PRISM [arm64-darwin24]
Warming up --------------------------------------
               after   367.000 i/100ms
Calculating -------------------------------------
               after      3.675k (± 2.6%) i/s  (272.11 μs/i) -     18.717k in   5.096566s

Comparison:
              before:     2882.5 i/s
               after:     3675.0 i/s - 1.27x  faster
