Rebase of aarch64 PR #1414
base: master
Conversation
Do we need more robust architecture protection? (Issue dynup#1356)

The elf.h reloc-type constants are not unique across arches:

    #define R_PPC64_REL24  10   /* PC relative 26 bit */
    #define R_X86_64_32    10   /* Direct 32 bit zero extended */

so to avoid any unexpected aliasing, guard all R_&lt;arch&gt;_&lt;type&gt; references with a check on kelf->arch, or a global default arch set from the first ELF encountered.
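A minimal sketch of the kind of guard being asked for, assuming a kelf->arch-style enum; the helper name and the particular relocation types chosen are illustrative, not kpatch's actual API:

```c
#include <elf.h>
#include <stdbool.h>

enum arch { X86_64, PPC64, AARCH64 };   /* assumed to mirror kelf->arch */

/*
 * Because reloc-type values overlap across arches (e.g. R_PPC64_REL24 and
 * R_X86_64_32 are both 10), always qualify the numeric type with the arch.
 */
static bool is_call_reloc(enum arch arch, unsigned int type)
{
	switch (arch) {
	case X86_64:
		return type == R_X86_64_PC32 || type == R_X86_64_PLT32;
	case PPC64:
		return type == R_PPC64_REL24;
	case AARCH64:
		return type == R_AARCH64_CALL26;
	}
	return false;
}
```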
The "has_function_profiling" support field in the symbol struct is used to show that a function symbol is able to be patched. This is necessary to check that functions which need to be patched are able to be. On arm64 this means the presence of 2 NOP instructions at function entry which are patched by ftrace to call the ftrace handling code. These 2 NOPs are inserted by the compiler and the location of them is recorded in a section called "__patchable_function_entries". Check whether a symbol has a corresponding entry in the "__patchable_function_entries" section and if so mark it as "has_func_profiling". Signed-off-by: Suraj Jitindar Singh <[email protected]> --- V1->V2: - Make error message standard across architectures when no patchable entry - Don't store __patchable_function_entries section in kpatch_find_func_profiling_calls(), instead find it each time
…d populate

The function kpatch_create_mcount_sections() allocates the __mcount_loc section and then populates it with functions which have a patchable entry.

The following patch will add aarch64 support to this function, where the allocation will have to be done before the kelf_patched is torn down. Thus split this function so that the allocation can be performed earlier and the populating as before.

No intended functional change.

Signed-off-by: Suraj Jitindar Singh <[email protected]>

---
V1->V2:
 - Add patch to series
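An illustrative shape of the split, with prototypes assumed from this commit message (only the call ordering relative to teardown matters here):

```c
/* Assumed prototypes; the real signatures may differ. */
static void kpatch_alloc_mcount_sections(struct kpatch_elf *kelf,
					 struct kpatch_elf *kelfout);
static void kpatch_populate_mcount_sections(struct kpatch_elf *kelfout);

static void example_flow(struct kpatch_elf *kelf_patched,
			 struct kpatch_elf *kelf_out)
{
	/* must run while the patched ELF is still alive */
	kpatch_alloc_mcount_sections(kelf_patched, kelf_out);

	kpatch_elf_teardown(kelf_patched);

	/* filling in the entries can happen afterwards */
	kpatch_populate_mcount_sections(kelf_out);
}
```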
…arch64

The __mcount_loc section contains the addresses of patchable ftrace sites, which the ftrace infrastructure in the kernel uses to create a list of traceable functions and to know where to patch to enable tracing of them. On aarch64 this section is called __patchable_function_entries and is generated by the compiler. Either of __mcount_loc or __patchable_function_entries is recognised by the kernel, but for aarch64 use __patchable_function_entries as it is what is expected.

Add aarch64 support to kpatch_alloc_mcount_sections(). The SHF_LINK_ORDER section flag must be copied to ensure that it matches, to avoid the following:

    ld: __patchable_function_entries has both ordered [...] and unordered [...] sections

Add aarch64 support to kpatch_populate_mcount_sections(). Check for the 2 required NOP instructions on function entry, which may be preceded by a BTI C instruction depending on whether the function is a leaf function. This determines the offset of the patch site.

Signed-off-by: Suraj Jitindar Singh <[email protected]>

---
V1->V2:
 - Don't preserve the __patchable_function_entries section from the patched elf as this is already verified by kpatch_check_func_profiling_calls()
 - Instead get the patch entry offset by checking for a preceding BTI C instr
 - Copy the section flags for __patchable_function_entries

---
rebased, added sh_link fix from Suraj's later commit "kpatch-build: Enable ARM64 support"

Signed-off-by: Pete Swain <[email protected]>
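A minimal, self-contained sketch of the patch-site offset check described above, using the instruction encodings visible in the disassembly later in this series (bti c = 0xd503245f, nop = 0xd503201f); the function name and buffer-based interface are assumptions:

```c
#include <stdint.h>
#include <string.h>

#define AARCH64_INSN_BTI_C 0xd503245fu
#define AARCH64_INSN_NOP   0xd503201fu

/*
 * Given the first bytes of a function (little-endian A64), return the
 * offset of the two-NOP ftrace patch site, which may be preceded by a
 * "bti c" landing pad, or -1 if no patch site is found.
 */
static long aarch64_patch_site_offset(const unsigned char *buf, size_t len)
{
	uint32_t insn[3];
	size_t i = 0;

	if (len < sizeof(insn))
		return -1;
	memcpy(insn, buf, sizeof(insn));

	if (insn[0] == AARCH64_INSN_BTI_C)
		i = 1;				/* patch site starts after BTI C */
	if (insn[i] != AARCH64_INSN_NOP || insn[i + 1] != AARCH64_INSN_NOP)
		return -1;			/* not a patchable entry */
	return (long)(i * 4);
}
```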
Add the final support required for aarch64 and enable building on that arch.

Signed-off-by: Suraj Jitindar Singh <[email protected]>

---
V1->V2:
 - Add # shellcheck disable=SC2086
 - Add comment to kpatch_is_mapping_symbol()
On aarch64, only the ASSERT_RTNL macro is affected by source line number changes (WARN, BUG, etc. no longer embed line numbers in the instruction stream). A small test function that invokes the macro, for a line change from 42 to 43:

     0000000000000000 <test_assert_rtnl>:
        0:  d503245f    bti     c
        4:  d503201f    nop
        8:  d503201f    nop
        c:  d503233f    paciasp
       10:  a9bf7bfd    stp     x29, x30, [sp, #-16]!
       14:  910003fd    mov     x29, sp
       18:  94000000    bl      0 <rtnl_is_locked>
                        18: R_AARCH64_CALL26    rtnl_is_locked
       1c:  34000080    cbz     w0, 2c <test_assert_rtnl+0x2c>
       20:  a8c17bfd    ldp     x29, x30, [sp], #16
       24:  d50323bf    autiasp
       28:  d65f03c0    ret
       2c:  90000000    adrp    x0, 0 <test_assert_rtnl>
                        2c: R_AARCH64_ADR_PREL_PG_HI21  .data.once
       30:  39400001    ldrb    w1, [x0]
                        30: R_AARCH64_LDST8_ABS_LO12_NC .data.once
       34:  35ffff61    cbnz    w1, 20 <test_assert_rtnl+0x20>
       38:  52800022    mov     w2, #0x1                // #1
       3c:  90000001    adrp    x1, 0 <test_assert_rtnl>
                        3c: R_AARCH64_ADR_PREL_PG_HI21  .rodata.str1.8+0x8
       40:  39000002    strb    w2, [x0]
                        40: R_AARCH64_LDST8_ABS_LO12_NC .data.once
       44:  91000021    add     x1, x1, #0x0
                        44: R_AARCH64_ADD_ABS_LO12_NC   .rodata.str1.8+0x8
    -  48:  52800542    mov     w2, #0x2a               // #42
    +  48:  52800562    mov     w2, #0x2b               // #43
       4c:  90000000    adrp    x0, 0 <test_assert_rtnl>
                        4c: R_AARCH64_ADR_PREL_PG_HI21  .rodata.str1.8+0x20
       50:  91000000    add     x0, x0, #0x0
                        50: R_AARCH64_ADD_ABS_LO12_NC   .rodata.str1.8+0x20
       54:  94000000    bl      0 <__warn_printk>
                        54: R_AARCH64_CALL26    __warn_printk
       58:  d4210000    brk     #0x800
       5c:  17fffff1    b       20 <test_assert_rtnl+0x20>

Create an implementation of kpatch_line_macro_change_only() for aarch64 modeled after the other architectures. Only look for relocations to __warn_printk, which ASSERT_RTNL invokes.

Based-on-s390x-code-by: C. Erastus Toe <[email protected]>
Signed-off-by: Joe Lawrence <[email protected]>
Update the kpatch-unit-test-objs submodule reference to add aarch64 unit tests.

Signed-off-by: Joe Lawrence <[email protected]>
It seems mapping symbols in aarch64 ELF files have a size of 0. Exclude them in the section-symbol replacing code, just as kpatch_correlate_symbols() does. This fixes the data-read-mostly unit test on aarch64.

Signed-off-by: Misono Tomohiro <[email protected]>
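For reference, a sketch of what an aarch64 mapping-symbol test looks like: mapping symbols ($x, $d, ...) are zero-sized NOTYPE symbols that annotate code/data regions rather than real contents. The struct symbol fields used here follow kpatch-elf conventions and are an assumption:

```c
#include <elf.h>
#include <stdbool.h>

/* Sketch only: sym->name, sym->type and sym->sym (GElf_Sym) are assumed. */
static bool is_arm64_mapping_symbol(const struct symbol *sym)
{
	return sym->name && sym->name[0] == '$' &&
	       sym->type == STT_NOTYPE &&
	       sym->sym.st_size == 0;
}
```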
Copy from kernel source tree.

Signed-off-by: Misono Tomohiro <[email protected]>
The new clang toolchain on arm64 produces individual __patchable_function_entries sections for each patchable function in -ffunction-sections mode, rather than the traditional single __mcount_loc section.

Bend the existing logic to detect this multiplicity in the incoming kelf objects, and allocate N identical one-entry sections. These are retrieved as needed by a new function, find_nth_section_by_name(), and attached to the .text sections they describe.

These __pfe sections are not actually arm64-specific, but a generic enhancement across gcc & clang to allow better garbage collection of unreferenced object sections, and of the mcount/pfe objects which refer to them. The __pfe sections are combined in the kernel-or-module final link, since 5.19.9's commit 9440155ccb948f8e3ce5308907a2e7378799be60. From clang-11, __pfe is supported for x86, though not yet used by the kernel.

The split between allocate/populate phases here is necessary to enumerate/populate the outgoing section headers before beginning to produce output sections.

Also adds some missing \n to log_debug()s.

Signed-off-by: Pete Swain <[email protected]>
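A hedged sketch of the new helper named above; the list iteration and struct layout follow kpatch-elf conventions and are assumed here:

```c
/*
 * Return the n-th (0-based) section with the given name, so that each
 * per-function __patchable_function_entries section can be looked up in
 * order and attached to the .text section it describes.
 */
static struct section *find_nth_section_by_name(struct kpatch_elf *kelf,
						int n, const char *name)
{
	struct section *sec;

	list_for_each_entry(sec, &kelf->sections, list) {
		if (strcmp(sec->name, name))
			continue;
		if (n-- == 0)
			return sec;
	}
	return NULL;
}
```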
On arm64, kpatch_find_func_profiling_calls() was skipping leaf functions, which have no relocations, so they weren't patchable.

The other arches need to walk a function's reloc entries to check for __fentry__ or __mcount, so it's valid for them to skip over functions without sym->sec->rela: such functions cannot be patchable, else they would have at least an __fentry__ call relocation.

But arm64 marks functions patchable in a different way, with per-function __patchable_function_entries sections referring _to_ the function rather than relocations _within_ it, so a function without relocations for text or data can still be patchable.

Move the sym->sec->rela check to the per-arch paths.

This allows gcc-static-local-var-5.patch to generate a livepatch on arm64 & x86.

Suggested-By: Bill Wendling <[email protected]>
Signed-off-by: Pete Swain <[email protected]>
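A simplified sketch of the restructuring this commit describes: the sym->sec->rela short-circuit moves into the per-arch branches so that arm64 can still mark relocation-free leaf functions as patchable. Names, the arch enum, and the pfe helper are assumptions based on the message, not the exact kpatch-build code:

```c
static void kpatch_find_func_profiling_calls(struct kpatch_elf *kelf)
{
	struct symbol *sym;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->type != STT_FUNC || !sym->sec)
			continue;

		switch (kelf->arch) {
		case AARCH64:
			/*
			 * Patchability is signalled by a pfe section that
			 * points *at* the function, so no sym->sec->rela
			 * requirement here.
			 */
			if (aarch64_func_has_pfe_entry(kelf, sym)) /* hypothetical helper */
				sym->has_func_profiling = 1;
			break;
		default:
			/*
			 * Other arches need an __fentry__/_mcount call
			 * relocation inside the function itself.
			 */
			if (!sym->sec->rela)
				continue;
			/* ... walk sym->sec->rela->relas for __fentry__ ... */
			break;
		}
	}
}
```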
Signed-off-by: Pete Swain <[email protected]>
New toolchain/arch, new conventions for section/label/etc names.

gcc's .LCx symbols point to string literals in '.rodata.<func>.str1.*' sections. Clang creates similar .Ltmp%d symbols in '.rodata.str'.

The function is_string_literal_section() is generalized (too much?) to match either:
 - clang's/arm64's /^\.rodata\.str$/
 - gcc's /^\.rodata\./ && /\.str1\./

Various matchers for .data.unlikely and .bss.unlikely are replaced by is_data_unlikely_section(), generalized to match:
 - gcc's ".data.unlikely"
 - clang's ".(data|bss).module_name.unlikely"

.data.once is handled similarly.

Signed-off-by: Pete Swain <[email protected]>
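A sketch of the generalized matchers, written over plain section names for clarity (the real helpers take kpatch's struct section); the matching rules themselves are the ones listed in the commit message:

```c
#include <stdbool.h>
#include <string.h>

/* clang/arm64: ".rodata.str"; gcc: ".rodata.<func>.str1.*" */
static bool is_string_literal_section_name(const char *name)
{
	return !strcmp(name, ".rodata.str") ||
	       (!strncmp(name, ".rodata.", 8) && strstr(name, ".str1."));
}

/* gcc: ".data.unlikely"; clang: ".data.<module>.unlikely" / ".bss.<module>.unlikely" */
static bool is_data_unlikely_section_name(const char *name)
{
	size_t len = strlen(name);

	return !strcmp(name, ".data.unlikely") ||
	       ((!strncmp(name, ".data.", 6) || !strncmp(name, ".bss.", 5)) &&
		len >= 9 && !strcmp(name + len - 9, ".unlikely"));
}
```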
Generalized kpatch_line_macro_change_only() & insn_is_load_immediate() to collapse the aarch64 support back into the parent function.

I'm assuming the 3rd start1 of the original

    /* Verify mov w2 <line number> */
    if (((start1[offset] & 0b11111) != 0x2) || (start1[offset+3] != 0x52) ||
        ((start1[offset] & 0b11111) != 0x2) || (start2[offset+3] != 0x52))

was a typo for start2. That's now absorbed into insn_is_load_immediate(), leaving just one aarch64-specific piece: thinning out the match-list for diagnosing a __LINE__ reference to just "__warn_printk".
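For illustration, the aarch64 leg of that check in isolation: a 32-bit MOVZ ("mov w2, #imm") has 0x52 as its most-significant byte and the destination register in bits 0-4 (see the 52800542 / 52800562 encodings in the disassembly above). The helper name here is a stand-in, not necessarily the final insn_is_load_immediate() shape:

```c
#include <stdbool.h>

/* insn points at a little-endian A64 instruction. */
static bool aarch64_insn_is_mov_w2_imm(const unsigned char *insn)
{
	return (insn[0] & 0x1f) == 0x2 &&	/* destination register w2 */
	       insn[3] == 0x52;			/* 32-bit MOVZ opcode byte  */
}
```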
If CONFIG_UBSAN is enabled, ubsan sections (.data..Lubsan_{data,type}) can be created. Keep them unconditionally.

NOTE: This patch needs to be verified.

Signed-off-by: Misono Tomohiro <[email protected]>
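A small sketch of the kind of check implied above; the exact prefix match is an assumption:

```c
#include <stdbool.h>
#include <string.h>

/* Matches .data..Lubsan_data* and .data..Lubsan_type* section names. */
static bool is_ubsan_section_name(const char *name)
{
	return !strncmp(name, ".data..Lubsan_", 14);
}
```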
Initialize add_off earlier, so it's obviously never used uninitialized. Clang was warning on this, even though gcc was not. No functional change: the only path which left it undefined would call ERROR() anyway.
On ARM64, every function section should have its own pfe section. GCC 11/12 have a bug where they generate only a single pfe section for all functions; the bug has been fixed in GCC 13.1. As create-diff-object generates the pfe sections on its own, we should also fix this, instead of trying to replicate the bug.

--
Adjusted whitespace in Zimao's proposed code.

Signed-off-by: Pete Swain <[email protected]>
Signed-off-by: Mihails Strasuns <[email protected]>
(dynup/kpatch-unit-test-objs#51 should be merged first)
AFAIU there are no publicly available arm64 GitHub runners, and that remains my main concern at the moment, as these unit tests are not actually being run. Would you consider adding a self-hosted one provided by AWS?
Yes, that would be fine.
Great! I have sent an e-mail with more details to your RH address.
As we discussed offline, the arm64 unit tests run fine on x86, so there's currently no need for self-hosted runners.
Are there any other concerns?
I/we have been traveling last week and this week for conferences, but I will try to review after that.
Ping :)
@mihails-strasuns : sorry for the delay, I was at a conference and then on PTO for the last few weeks. I'll try to take a look this week.
@mihails-strasuns : I am considering an intermediate step for the aarch64 support here: #1415

PowerPC is already supported by kpatch-build and, for some newer kernel toolset/configurations, is creating patchable_function_entry sections. My thought was to extract the final* implementation details for those sections from @swine's patchset and modify/generalize them for ppc64le.

*This would collapse some of the commits in the aarch64 patchset that doubled back after learning new information (like the multi_pfe flag and the generic kpatch_line_macro_change_only()).

My hope is that merging patchable_function_entry support first renders the aarch64 patchset easier to maintain/review. I haven't tried laying the aarch64 parts back on top of #1415 yet, but I was wondering:
Thanks.
Thanks for having a look! Your plan makes sense to me.
Off the top of my head I remember 776056a; I will diff the exact commit list later. This PR matches the Amazon Linux kpatch package patch set almost exactly, so it was something I was very confident about actually working :)
I know that @puranjaymohan has been working on rebasing to a newer version but had to switch at some point. He is probably the best contact about it from the AL side.
FYI, Mark Rutland posted the following preparatory series today: |
Still only a WIP, but here is what the aarch64 implementation PR would look like on top of a separate patchable_function_entries PR: joe-lawrence/kpatch@patchable_function_entries...joe-lawrence:kpatch:joes-arm64-rebase

(It passes the old aarch64 unit test objects that I created a few years ago :) but I haven't tried anything with a live kernel just yet.)

Still TODO is untangling the authorship across a few rebases and then pulling out the clang fixes separately. Those could also go in ASAP to minimize the baggage that this patchset has been carrying around.
@mihails-strasuns Are you planning to keep working on this given your employment change? If this is not on your list now, I can do the remaining work. Thanks
Fyi, the following passes the 5.18.0 integration tests:
Does Amazon use gcc or clang for kpatch? And do you have any further internal tests for kpatch-build, or would it be possible to try and build one of your more interesting kpatches using my kpatch branch? If that looks good, then I can post up the patchable_function_entry part for review and then the smaller arm64 rebase part. Thanks!
@joe-lawrence We have working(*) 4.14, 5.10, and 6.1 kernels with ARM64 livepatching forward/back ported. I will do some tests with your kpatch branch, https://github.com/joe-lawrence/kpatch/tree/joes-arm64-rebase, and provide the results here. So I will basically be testing these three kernel and gcc combinations with your kpatch branch by building a livepatch for ARM64 for one of the interesting CVEs that we have already fixed.
@puranjaymohan : fwiw, in my limited testing (my kpatch-build branch + https://github.com/madvenka786/linux.git branch: orc_v2), I did see the following crash when: 1) loading the integration test-module.ko and 2) subsequently loading the nfsd.ko module. However, I get the same result using @mihails-strasuns's kpatch-build branch, so without investigating, I'm assuming it may be unrelated to my shuffling of the code (either a kernel problem or a common kpatch-build problem). Have you seen anything like this?
@puranjaymohan I am still keeping an eye on this, but can't spend much time on it at the moment. So any kind of extensive AL testing most likely falls on you, sorry :)
@joe-lawrence I have tested your branch with Amazon Linux 2023 and it doesn't cause this issue that you mentioned:
I tried all combinations of the above and all pass. Amazon Linux doesn't use the ORC-based implementation of livepatching that you tested with, so the issue you encountered is a kernel issue and not related to your code changes.
Please go ahead.

P.S. - I thought adding Amazon Linux 2023 integration tests would be useful, so: #1421

On arm64 with your kpatch branch and Amazon Linux 2023, with the above tests:
Sorry for the delay - here is the retested version of aarch64. Differences from #1302:
Should be testable with the same kernel support series from https://lore.kernel.org/linux-arm-kernel/[email protected]/ using any applicable base version.