
Lifecycle documentation #2

Open
jerinphilip opened this issue Oct 4, 2021 · 1 comment
jerinphilip commented Oct 4, 2021

Eviction: Write

Eviction happens only during writes (or via a time-to-live setting, which we'll ignore for our purposes here). It is implemented in WritableHashTable, shown below.

// Evict this record if
// 1: the record is expired, or
// 2: the entry is not recently accessed (and unset the access bit
// if set).
if (metadata.IsExpired(curEpochTime, this->m_recordTimeToLive) ||
    !metadata.UpdateAccessStatus(false)) {
  const auto numBytesFreed = record.m_key.m_size + value.m_size;
  numBytesToFree = (numBytesFreed >= numBytesToFree)
                       ? 0U
                       : numBytesToFree - numBytesFreed;
  WritableBase::Remove(*entry, i);
  this->m_hashTable.m_perfData.Increment(
      HashTablePerfCounter::EvictedRecordsCount);
}

This takes locks, so the structure is protected during the process. A record is removed if its time-to-live has expired or its access bit is off. If it was accessed recently, it survives until another full rotation, because this pass unsets the bit.

typename HashTable::UniqueLock lock{
    this->m_hashTable.GetMutex(currentBucketIndex)};

Removes are really just marks-for-removal: the destructor call and deallocation are deferred to the epoch manager's action queue:

void ReleaseRecord(RecordBuffer* record) {
  if (record == nullptr) {
    return;
  }
  m_epochManager.RegisterAction([this, record]() {
    record->~RecordBuffer();
    this->m_hashTable.template GetAllocator<RecordBuffer>().deallocate(record,
                                                                       1U);
  });
}

This is a problem: the memory is effectively leaked in the interim, held until the action queue is drained and the deferred deallocations actually run.

Epochs are incremented and references to epochs are held, so:

void Add() {
  // Incrementing the global epoch counter before incrementing per-connection
  // epoch counter is safe (not so the other way around). If the server
  // process is registering an action at the m_currentEpochCounter in
  // RegisterAction(), it is happening in the "future," and this means that if
  // the client is referencing the memory to be deleted in the "future," it
  // will be safe.
  ++m_currentEpochCounter;
  m_epochCounterManager.AddNewEpoch();
}

The following can leave memory pinned forever if the epoch at frontIndex is never released (an L4::Context that is never freed) while newer epochs keep piling up behind it.

// Note that this function is NOT thread safe, and should be run on the
// same thread as the one that calls AddNewEpoch().
std::uint64_t RemoveUnreferenceEpochCounters() {
  while (m_epochQueue.m_backIndex > m_epochQueue.m_frontIndex) {
    if (m_epochQueue.m_refCounts[m_epochQueue.m_frontIndex %
                                 m_epochQueue.m_refCounts.size()] == 0U) {
      ++m_epochQueue.m_frontIndex;
    } else {
      // There are references to the front of the queue and will return this
      // front index.
      break;
    }
  }
  return m_epochQueue.m_frontIndex;
}

Reads

There is a check whether the record is past its time-to-live, in which case the read returns false.

bool GetInternal(const Key& key, Value& value) const {
if (!Base::Get(key, value)) {
return false;
}
assert(value.m_size > Metadata::c_metaDataSize);
// If the record with the given key is found, check if the record is expired
// or not. Note that the following const_cast is safe and necessary to
// update the access status.
Metadata metaData{const_cast<std::uint32_t*>(
reinterpret_cast<const std::uint32_t*>(value.m_data))};
if (metaData.IsExpired(this->GetCurrentEpochTime(), m_recordTimeToLive)) {
return false;
}
metaData.UpdateAccessStatus(true);
value.m_data += Metadata::c_metaDataSize;
value.m_size -= Metadata::c_metaDataSize;
return true;
}

Misc:

There is a full equality check on our size_t key after the hash lookup, so a hash collision cannot surface the wrong record.

if (record.m_key == key) {
value = record.m_value;
return true;
}

The global epoch is incremented on a background thread every n seconds (the epoch processing interval), which first retires unreferenced epochs (Remove) and then opens a new one (Add):

m_processingThread{m_config.m_epochProcessingInterval, [this] {
  this->Remove();
  this->Add();
}}

jerinphilip commented Oct 4, 2021

To summarize, unless there is a use-case where multiple readers take a context and read several times, L4's lock-freedom is debatable. It locks on read and write through Context -> EpochRefPolicy -> AddRef / RemoveRef.

If we don't release a context, the memory is never freed; it sits in the action queue indefinitely, waiting to be cleaned up.

This was tested using the following diff on L4, which checks whether actions are indeed performed on the background cleanup thread, with browsermt/bergamot-translator:

diff --git a/inc/L4/LocalMemory/EpochManager.h b/inc/L4/LocalMemory/EpochManager.h
index a601766..920e6dd 100644
--- a/inc/L4/LocalMemory/EpochManager.h
+++ b/inc/L4/LocalMemory/EpochManager.h
@@ -9,6 +9,7 @@
 #include "Log/PerfCounter.h"
 #include "Utils/Lock.h"
 #include "Utils/RunningThread.h"
+#include <iostream>
 
 namespace L4 {
 namespace LocalMemory {
@@ -75,6 +76,7 @@ class EpochManager : public IEpochActionManager {
 
     const auto numActionsPerformed =
         m_epochActionManager.PerformActions(oldestEpochCounter);
+    std::cout << "Actions performed" << numActionsPerformed << "\n";
 
     m_perfData.Subtract(ServerPerfCounter::PendingActionsCount,
                         numActionsPerformed);
