[vm/compiler] Add all Compressed Assembler methods to AssemblerBase.
Remove CompareWithCompressedFieldFromOffset, which has no uses.

Rename the LoadFromOffset and StoreToOffset methods that took
Addresses to Load and Store, respectively. This makes the names
of the Assembler methods more uniform, as the sketch after the
following list illustrates:

  * Takes an address: Load, Store, LoadField, LoadCompressedField,
    StoreIntoObject, StoreCompressedIntoObject, LoadSmi,
    LoadCompressedSmi, etc.
  * Takes a base register and an offset: LoadFromOffset, StoreToOffset,
    LoadFieldFromOffset, LoadCompressedFieldFromOffset,
    StoreIntoObjectOffset, StoreCompressedIntoObjectOffset,
    LoadSmiFromOffset, LoadCompressedSmiFromOffset, etc.
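
For instance, after the rename (a sketch only; operand-size and
condition arguments omitted):

  __ Load(dst, Address(base, offset));            // caller builds the Address
  __ LoadFromOffset(dst, base, offset);           // assembler builds it
  __ LoadField(dst, FieldAddress(base, offset));  // field: caller untags
  __ LoadFieldFromOffset(dst, base, offset);      // field: assembler untags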

Create AssemblerBase methods for loading and storing compressed
pointers that weren't already there, as well as the corresponding
methods for loading and storing uncompressed values.

Make non-virtual methods for loading and storing uncompressed fields
that call the corresponding method for loading from or storing to a
memory region, adjusting the address or offset accordingly. This
avoids needing per-architecture overrides for these.
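
A minimal sketch of the pattern (signatures simplified; the real
methods also take an OperandSize):

  // Non-virtual in AssemblerBase: only adjusts for the object tag and
  // forwards to the memory-region method.
  void LoadFieldFromOffset(Register dst, Register base, int32_t offset) {
    LoadFromOffset(dst, base, offset - kHeapObjectTag);
  }
  void StoreFieldToOffset(Register src, Register base, int32_t offset) {
    StoreToOffset(src, base, offset - kHeapObjectTag);
  }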

Make non-virtual methods for loading compressed fields that call the
corresponding method for loading a compressed value from a memory
region. (Since compressed pointers are only stored in Dart objects,
and stores into a Dart object may require a barrier, there is no
method for storing a compressed value into an arbitrary memory region.)
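
Roughly (again with simplified signatures):

  // No StoreCompressedField counterpart exists; compressed stores go
  // through the barrier-aware StoreCompressedIntoObject* methods.
  void LoadCompressedField(Register dst, const FieldAddress& address) {
    LoadCompressed(dst, address);
  }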

Create pure virtual methods for loading from or storing to an
Address, and for any method that does not have both an Address-taking
version and a version taking a base register and offset pair
(e.g., LoadAcquire).
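
A sketch of this pure virtual layer (the default operand sizes here
are illustrative, not the exact declarations):

  virtual void Load(Register dst, const Address& address,
                    OperandSize sz = kWordBytes) = 0;
  virtual void Store(Register src, const Address& address,
                     OperandSize sz = kWordBytes) = 0;
  // No Address-taking variant exists, so the offset form is the primitive:
  virtual void LoadAcquire(Register dst, Register address, int32_t offset = 0,
                           OperandSize size = kFourBytes) = 0;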

Create methods for loading from or storing to a base register
and an offset. The base implementation takes the base register and
offset, creates an Address from them, and then calls the
Address-taking equivalent. These methods are non-virtual when the
implementation is the same on all architectures and virtual to allow
overriding when necessary.
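
The base implementations are then roughly one-liners (virtual here
because some architectures override it; the non-virtual variants look
the same minus the keyword):

  virtual void LoadFromOffset(Register dst, Register base, int32_t offset,
                              OperandSize sz = kWordBytes) {
    Load(dst, Address(base, offset), sz);
  }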

Make a non-virtual method for loading uncompressed Smis, since all
architectures have the same code for this, including the DEBUG check.
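
A sketch, mirroring the per-architecture code it replaces (compare
the LoadCompressedSmi body deleted from assembler_arm.cc in the diff
below):

  void LoadSmi(Register dst, const Address& slot) {
    Load(dst, slot);
#if defined(DEBUG)
    Label done;
    BranchIfSmi(dst, &done, kNearJump);
    Stop("Expected Smi");
    Bind(&done);
#endif
  }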

If compressed pointers are not being used, all the methods for
compressed pointers are non-virtual methods that call the
corresponding method for uncompressed values.
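
That is, roughly:

#if !defined(DART_COMPRESSED_POINTERS)
  // Without compressed pointers, the compressed variants are plain
  // non-virtual wrappers over the word-sized ones.
  void LoadCompressed(Register dst, const Address& address) {
    Load(dst, address);
  }
  void LoadCompressedSmi(Register dst, const Address& slot) {
    LoadSmi(dst, slot);
  }
#endif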

If compressed pointers are being used:

* Install pure virtual methods for loading compressed values from
  and storing compressed values to an Address, and for any method
  that does not have both an Address-taking version and a version
  taking a base register and offset pair (e.g., LoadAcquireCompressed).

* Install virtual methods for loading compressed values from and
  storing compressed values to a base register and offset. Like the
  uncompressed case, the base implementations create an Address and
  call the Address-taking equivalent, and they are overridden on
  ARM64.

* Install a non-virtual method for loading compressed Smis, since the
  only difference is that it loads a zero-extended 32-bit value, which
  AssemblerBase can do (sketched below).
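
A sketch of that method, under the same assumptions as the sketches
above (kUnsignedFourBytes zero-extends the loaded 32 bits):

#if defined(DART_COMPRESSED_POINTERS)
  void LoadCompressedSmi(Register dst, const Address& slot) {
    Load(dst, slot, kUnsignedFourBytes);  // Zero-extended 32-bit load.
#if defined(DEBUG)
    Label done;
    BranchIfSmi(dst, &done, kNearJump);
    Stop("Expected Smi");
    Bind(&done);
#endif
  }
#endif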

TEST=ci (refactoring only)

Change-Id: I934791d26a6e2cdaa6ac5f188b0fd89dbdc491d1
Cq-Include-Trybots: luci.dart.try:vm-aot-android-release-arm64c-try,vm-aot-android-release-arm_x64-try,vm-aot-linux-debug-x64-try,vm-aot-linux-debug-x64c-try,vm-aot-mac-release-arm64-try,vm-aot-mac-release-x64-try,vm-aot-obfuscate-linux-release-x64-try,vm-aot-optimization-level-linux-release-x64-try,vm-aot-win-debug-arm64-try,vm-appjit-linux-debug-x64-try,vm-asan-linux-release-x64-try,vm-checked-mac-release-arm64-try,vm-eager-optimization-linux-release-ia32-try,vm-eager-optimization-linux-release-x64-try,vm-ffi-android-debug-arm-try,vm-ffi-android-debug-arm64c-try,vm-ffi-qemu-linux-release-arm-try,vm-ffi-qemu-linux-release-riscv64-try,vm-linux-debug-ia32-try,vm-linux-debug-x64c-try,vm-mac-debug-arm64-try,vm-mac-debug-x64-try,vm-msan-linux-release-x64-try,vm-reload-linux-debug-x64-try,vm-reload-rollback-linux-debug-x64-try,vm-ubsan-linux-release-x64-try,vm-win-debug-arm64-try,vm-win-debug-x64-try,vm-win-release-ia32-try
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/359861
Reviewed-by: Daco Harkes <[email protected]>
Commit-Queue: Tess Strickland <[email protected]>
Reviewed-by: Alexander Markov <[email protected]>
sstrickl authored and Commit Queue committed Apr 2, 2024
1 parent 8dcb212 commit 9fc280a
Showing 26 changed files with 904 additions and 991 deletions.
2 changes: 1 addition & 1 deletion runtime/vm/compiler/asm_intrinsifier.cc
@@ -45,7 +45,7 @@ void AsmIntrinsifier::StringEquality(Assembler* assembler,
__ CompareClassId(obj2, string_cid, temp1);
__ BranchIf(NOT_EQUAL, normal_ir_body, AssemblerBase::kNearJump);

__ LoadFromOffset(temp1, FieldAddress(obj1, target::String::length_offset()));
__ LoadFieldFromOffset(temp1, obj1, target::String::length_offset());
__ CompareWithMemoryValue(
temp1, FieldAddress(obj2, target::String::length_offset()));
__ BranchIf(NOT_EQUAL, &is_false, AssemblerBase::kNearJump);
36 changes: 13 additions & 23 deletions runtime/vm/compiler/assembler/assembler_arm.cc
@@ -1773,7 +1773,7 @@ void Assembler::StoreIntoObject(Register object,
if (memory_order == kRelease) {
StoreRelease(value, dest);
} else {
StoreToOffset(value, dest);
Store(value, dest);
}

// In parallel, test whether
@@ -1909,7 +1909,7 @@ void Assembler::StoreIntoObjectNoBarrier(Register object,
if (memory_order == kRelease) {
StoreRelease(value, dest);
} else {
StoreToOffset(value, dest);
Store(value, dest);
}
#if defined(DEBUG)
// We can't assert the incremental barrier is not needed here, only the
@@ -2030,7 +2030,7 @@ void Assembler::StoreIntoSmiField(const Address& dest, Register value) {
Stop("New value must be Smi.");
Bind(&done);
#endif // defined(DEBUG)
StoreToOffset(value, dest);
Store(value, dest);
}

void Assembler::ExtractClassIdFromTags(Register result,
@@ -2280,16 +2280,6 @@ void Assembler::Bind(Label* label) {
BindARMv7(label);
}

void Assembler::LoadCompressedSmi(Register dest, const Address& slot) {
ldr(dest, slot);
#if defined(DEBUG)
Label done;
BranchIfSmi(dest, &done, kNearJump);
Stop("Expected Smi");
Bind(&done);
#endif
}

OperandSize Address::OperandSizeFor(intptr_t cid) {
auto const rep = RepresentationUtils::RepresentationOfArrayElement(cid);
switch (rep) {
@@ -2890,10 +2880,10 @@ Address Assembler::PrepareLargeStoreOffset(const Address& address,
return Address(base, offset, mode);
}

void Assembler::LoadFromOffset(Register reg,
const Address& address,
OperandSize size,
Condition cond) {
void Assembler::Load(Register reg,
const Address& address,
OperandSize size,
Condition cond) {
const Address& addr = PrepareLargeLoadOffset(address, size, cond);
switch (size) {
case kByte:
@@ -2932,10 +2922,10 @@ void Assembler::CompareToStack(Register src, intptr_t depth) {
CompareRegisters(src, TMP);
}

void Assembler::StoreToOffset(Register reg,
const Address& address,
OperandSize size,
Condition cond) {
void Assembler::Store(Register reg,
const Address& address,
OperandSize size,
Condition cond) {
const Address& addr = PrepareLargeStoreOffset(address, size, cond);
switch (size) {
case kUnsignedByte:
@@ -3866,8 +3856,8 @@ void Assembler::LoadElementAddressForRegIndex(Register address,
void Assembler::LoadStaticFieldAddress(Register address,
Register field,
Register scratch) {
LoadCompressedFieldFromOffset(
scratch, field, target::Field::host_offset_or_field_id_offset());
LoadFieldFromOffset(scratch, field,
target::Field::host_offset_or_field_id_offset());
const intptr_t field_table_offset =
compiler::target::Thread::field_table_values_offset();
LoadMemoryValue(address, THR, static_cast<int32_t>(field_table_offset));
146 changes: 63 additions & 83 deletions runtime/vm/compiler/assembler/assembler_arm.h
@@ -415,32 +415,25 @@ class Assembler : public AssemblerBase {

void PushValueAtOffset(Register base, int32_t offset) { UNIMPLEMENTED(); }

void Bind(Label* label);
void Bind(Label* label) override;
// Unconditional jump to a given label. [distance] is ignored on ARM.
void Jump(Label* label, JumpDistance distance = kFarJump) { b(label); }
// Unconditional jump to a given address in register.
void Jump(Register target) { bx(target); }
// Unconditional jump to a given address in memory.
void Jump(const Address& address) { Branch(address); }

void LoadField(Register dst, const FieldAddress& address) override {
LoadFromOffset(dst, address);
}
void LoadMemoryValue(Register dst, Register base, int32_t offset) {
LoadFromOffset(dst, base, offset);
}
void LoadCompressed(Register dest, const Address& slot) {
LoadFromOffset(dest, slot);
}
void LoadCompressedSmi(Register dest, const Address& slot) override;
void StoreMemoryValue(Register src, Register base, int32_t offset) {
StoreToOffset(src, base, offset);
}
void LoadAcquire(Register dst,
Register address,
int32_t offset = 0,
OperandSize size = kFourBytes) override {
LoadFromOffset(dst, Address(address, offset), size);
Load(dst, Address(address, offset), size);
dmb();
}
void StoreRelease(Register src,
@@ -450,23 +443,16 @@
}
void StoreRelease(Register src, Address dest) {
dmb();
StoreToOffset(src, dest);
Store(src, dest);

// We don't run TSAN bots on 32 bit.
}

void CompareWithCompressedFieldFromOffset(Register value,
Register base,
int32_t offset) {
LoadCompressedFieldFromOffset(TMP, base, offset);
cmp(value, Operand(TMP));
}

void CompareWithMemoryValue(Register value,
Address address,
OperandSize size = kFourBytes) override {
ASSERT_EQUAL(size, kFourBytes);
LoadFromOffset(TMP, address, size);
Load(TMP, address, size);
cmp(value, Operand(TMP));
}

@@ -1022,32 +1008,34 @@ class Assembler : public AssemblerBase {
void StoreIntoArray(Register object,
Register slot,
Register value,
CanBeSmi can_value_be_smi = kValueCanBeSmi);
void StoreIntoObjectOffset(Register object,
int32_t offset,
Register value,
CanBeSmi can_value_be_smi = kValueCanBeSmi,
MemoryOrder memory_order = kRelaxedNonAtomic);
CanBeSmi can_value_be_smi = kValueCanBeSmi) override;
void StoreIntoObjectOffset(
Register object,
int32_t offset,
Register value,
CanBeSmi can_value_be_smi = kValueCanBeSmi,
MemoryOrder memory_order = kRelaxedNonAtomic) override;

void StoreIntoObjectNoBarrier(
Register object,
const Address& dest,
Register value,
MemoryOrder memory_order = kRelaxedNonAtomic) override;
void StoreIntoObjectNoBarrier(Register object,
const Address& dest,
const Object& value,
MemoryOrder memory_order = kRelaxedNonAtomic);
void StoreIntoObjectNoBarrier(
Register object,
const Address& dest,
const Object& value,
MemoryOrder memory_order = kRelaxedNonAtomic) override;
void StoreIntoObjectOffsetNoBarrier(
Register object,
int32_t offset,
Register value,
MemoryOrder memory_order = kRelaxedNonAtomic);
MemoryOrder memory_order = kRelaxedNonAtomic) override;
void StoreIntoObjectOffsetNoBarrier(
Register object,
int32_t offset,
const Object& value,
MemoryOrder memory_order = kRelaxedNonAtomic);
MemoryOrder memory_order = kRelaxedNonAtomic) override;

// Stores a non-tagged value into a heap object.
void StoreInternalPointer(Register object,
@@ -1106,46 +1094,40 @@
OperandSize sz,
Condition cond);

void Load(Register reg,
const Address& address,
OperandSize type,
Condition cond);
void Load(Register reg,
const Address& address,
OperandSize type = kFourBytes) override {
Load(reg, address, type, AL);
}
void LoadFromOffset(Register reg,
const Address& address,
OperandSize type,
Condition cond);
void LoadFromOffset(Register reg,
const Address& address,
Register base,
int32_t offset,
OperandSize type = kFourBytes) override {
LoadFromOffset(reg, address, type, AL);
LoadFromOffset(reg, base, offset, type, AL);
}
void LoadFromOffset(Register reg,
Register base,
int32_t offset,
OperandSize type = kFourBytes,
Condition cond = AL) {
LoadFromOffset(reg, Address(base, offset), type, cond);
OperandSize type,
Condition cond) {
Load(reg, Address(base, offset), type, cond);
}
void LoadFieldFromOffset(Register reg,
Register base,
int32_t offset,
OperandSize sz = kFourBytes) override {
LoadFromOffset(reg, FieldAddress(base, offset), sz, AL);
OperandSize type = kFourBytes) override {
LoadFieldFromOffset(reg, base, offset, type, AL);
}
void LoadFieldFromOffset(Register reg,
Register base,
int32_t offset,
OperandSize type,
Condition cond) {
LoadFromOffset(reg, FieldAddress(base, offset), type, cond);
}
void LoadCompressedFieldFromOffset(Register reg,
Register base,
int32_t offset) override {
LoadCompressedFieldFromOffset(reg, base, offset, kFourBytes, AL);
}
void LoadCompressedFieldFromOffset(Register reg,
Register base,
int32_t offset,
OperandSize type,
Condition cond = AL) {
LoadFieldFromOffset(reg, base, offset, type, cond);
Load(reg, FieldAddress(base, offset), type, cond);
}
// For loading indexed payloads out of tagged objects like Arrays. If the
// payload objects are word-sized, use TIMES_HALF_WORD_SIZE if the contents of
@@ -1155,47 +1137,52 @@
int32_t payload_start,
Register index,
ScaleFactor scale,
OperandSize type = kFourBytes) {
OperandSize type = kFourBytes) override {
add(dst, base, Operand(index, LSL, scale));
LoadFromOffset(dst, dst, payload_start - kHeapObjectTag, type);
}
void LoadIndexedCompressed(Register dst,
Register base,
int32_t offset,
Register index) {
add(dst, base, Operand(index, LSL, TIMES_COMPRESSED_WORD_SIZE));
LoadCompressedFieldFromOffset(dst, dst, offset);
}
void LoadFromStack(Register dst, intptr_t depth);
void StoreToStack(Register src, intptr_t depth);
void CompareToStack(Register src, intptr_t depth);

void Store(Register reg,
const Address& address,
OperandSize type,
Condition cond);
void Store(Register reg,
const Address& address,
OperandSize type = kFourBytes) override {
Store(reg, address, type, AL);
}
void StoreToOffset(Register reg,
const Address& address,
OperandSize type,
Condition cond);
void StoreToOffset(Register reg,
const Address& address,
Register base,
int32_t offset,
OperandSize type = kFourBytes) override {
StoreToOffset(reg, address, type, AL);
StoreToOffset(reg, base, offset, type, AL);
}
void StoreToOffset(Register reg,
Register base,
int32_t offset,
OperandSize type = kFourBytes,
Condition cond = AL) {
StoreToOffset(reg, Address(base, offset), type, cond);
OperandSize type,
Condition cond) {
Store(reg, Address(base, offset), type, cond);
}
void StoreFieldToOffset(Register reg,
Register base,
int32_t offset,
OperandSize type = kFourBytes,
Condition cond = AL) {
StoreToOffset(reg, FieldAddress(base, offset), type, cond);
OperandSize type = kFourBytes) override {
StoreFieldToOffset(reg, base, offset, type, AL);
}
void StoreFieldToOffset(Register reg,
Register base,
int32_t offset,
OperandSize type,
Condition cond) {
Store(reg, FieldAddress(base, offset), type, cond);
}
void StoreZero(const Address& address, Register temp) {
mov(temp, Operand(0));
StoreToOffset(temp, address);
Store(temp, address);
}
void LoadSFromOffset(SRegister reg,
Register base,
@@ -1545,16 +1532,9 @@ class Assembler : public AssemblerBase {
Register field,
Register scratch);

void LoadCompressedFieldAddressForRegOffset(Register address,
Register instance,
Register offset_in_words_as_smi) {
return LoadFieldAddressForRegOffset(address, instance,
offset_in_words_as_smi);
}

void LoadFieldAddressForRegOffset(Register address,
Register instance,
Register offset_in_words_as_smi);
Register offset_in_words_as_smi) override;

void LoadFieldAddressForOffset(Register address,
Register instance,