2. Containers

Thread-Unsafe

Several thread-unsafe shared-memory containers are provided, with an example of each in the "examples" directory. The examples use MPI to demonstrate sharing these data structures across processes. Below is a description of each container along with an example. These data structures support parallel reads, but parallel writes must be synchronized by the caller using some form of locking.

String and Charbuf

The most primitive shared-memory container is the string.
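
A minimal sketch of creating a string in shared memory is shown below. It reuses the allocator created in the examples that follow; the include path and the shm_init overload that copies text into shared memory are assumptions here, so check the string header for the exact signatures.

#include <cassert>
// Assumed include path for the shared-memory string
#include "hermes_shm/data_structures/string.h"

// Creates a string in the shared-memory region owned by `alloc`
void MakeString(hipc::Allocator *alloc) {
  hipc::string text;
  // Assumption: shm_init also accepts the initial text to copy into
  // shared memory, mirroring the shm_init(alloc) pattern used below
  text.shm_init(alloc, "hello");
  // Assumption: the string is comparable against raw character strings
  assert(text == "hello");
}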

Vector

The vector can be extended from any process that maps it, but extension must be synchronized manually with a lock. The assumed model is that a single process creates the vector, and every other process either reads it or modifies it in ways that do not trigger a resize. If multiple processes need to emplace into the vector concurrently, a Mutex or RwLock can be used (see the synchronization sketch after the example below).

Below is an example of how to create a shared-memory backend, allocator, and vector. This example is also located in example/vector.cc.

#include <mpi.h>
#include <cassert>
#include <iostream>
#include "hermes_shm/data_structures/thread_unsafe/vector.h"

struct CustomHeader {
  hipc::TypedPointer<hipc::vector<int>> obj_;
};

int main(int argc, char **argv) {
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Common allocator information
  std::string shm_url = "test_allocators";
  hipc::allocator_id_t alloc_id(0, 1);
  auto mem_mngr = HERMES_MEMORY_MANAGER;
  hipc::Allocator *alloc;
  CustomHeader *header;

  // Create backend + allocator
  if (rank == 0) {
    // Create a 64 megabyte allocatable region
    mem_mngr->CreateBackend<hipc::PosixShmMmap>(
      MEGABYTES(64), shm_url);
    // Create a memory allocator over the 64MB region
    alloc = mem_mngr->CreateAllocator<hipc::StackAllocator>(
      shm_url, alloc_id, sizeof(CustomHeader));
    // Get the custom header from the allocator
    header = alloc->GetCustomHeader<CustomHeader>();
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Attach backend + find allocator
  if (rank != 0) {
    mem_mngr->AttachBackend(hipc::MemoryBackendType::kPosixShmMmap, shm_url);
    alloc = mem_mngr->GetAllocator(alloc_id);
    header = alloc->GetCustomHeader<CustomHeader>();
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Create the vector
  hipc::vector<int> obj;
  if (rank == 0) {
    // Initialize in shared memory
    obj.shm_init(alloc);
    // Resize to 1024 ints. Each int will be set to 10.
    obj.resize(1024, 10);
    // Save the vector inside the allocator's header
    header->obj_ = obj.GetShmPointer<hipc::Pointer>();
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Find the vector in shared memory
  if (rank != 0) {
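    // Deserialize the vector from the TypedPointer stored in the header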
    obj << header->obj_;
  }

  // Read vector on all ranks
  for (hipc::ShmRef<int> x : obj) {
    assert(*x == 10);
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Finalize
  if (rank == 0) {
    std::cout << "COMPLETE!" << std::endl;
  }
  MPI_Finalize();
}
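
If multiple ranks need to emplace into the vector concurrently, the writes must be serialized. Below is a minimal sketch of one way to do that with a POSIX process-shared mutex stored in the custom header; the SyncedHeader struct and the helper functions are illustrative names, and the library's own Mutex or RwLock could be placed in the header and used the same way.

#include <pthread.h>
#include "hermes_shm/data_structures/thread_unsafe/vector.h"

// Illustrative header: the lock lives in shared memory next to the vector's
// pointer. The allocator would be created with sizeof(SyncedHeader) instead
// of sizeof(CustomHeader).
struct SyncedHeader {
  hipc::TypedPointer<hipc::vector<int>> obj_;
  pthread_mutex_t lock_;
};

// Called once by rank 0 after the allocator is created
void InitHeaderLock(SyncedHeader *header) {
  pthread_mutexattr_t attr;
  pthread_mutexattr_init(&attr);
  // Allow the mutex to be locked by any process that maps the region
  pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init(&header->lock_, &attr);
  pthread_mutexattr_destroy(&attr);
}

// Called by any rank that wants to append to the shared vector
void SafeEmplace(SyncedHeader *header, hipc::vector<int> &obj, int value) {
  pthread_mutex_lock(&header->lock_);
  obj.emplace_back(value);  // may resize; readers must not iterate concurrently
  pthread_mutex_unlock(&header->lock_);
}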

List

The list follows the same pattern as the vector: rank 0 creates the backend, allocator, and list, emplaces 1024 elements, and stores the list's shared-memory pointer in the custom header; every other rank attaches to the region and reads the elements.

#include <mpi.h>
#include <cassert>
#include <iostream>
#include "hermes_shm/data_structures/thread_unsafe/list.h"

struct CustomHeader {
  hipc::TypedPointer<hipc::list<int>> obj_;
};

int main(int argc, char **argv) {
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Common allocator information
  std::string shm_url = "test_allocators";
  hipc::allocator_id_t alloc_id(0, 1);
  auto mem_mngr = HERMES_MEMORY_MANAGER;
  hipc::Allocator *alloc;
  CustomHeader *header;

  // Create backend + allocator
  if (rank == 0) {
    // Create a 64 megabyte allocatable region
    mem_mngr->CreateBackend<hipc::PosixShmMmap>(
      MEGABYTES(64), shm_url);
    // Create a memory allocator over the 64MB region
    alloc = mem_mngr->CreateAllocator<hipc::StackAllocator>(
      shm_url, alloc_id, sizeof(CustomHeader));
    // Get the custom header from the allocator
    header = alloc->GetCustomHeader<CustomHeader>();
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Attach backend + find allocator
  if (rank != 0) {
    mem_mngr->AttachBackend(hipc::MemoryBackendType::kPosixShmMmap, shm_url);
    alloc = mem_mngr->GetAllocator(alloc_id);
    header = alloc->GetCustomHeader<CustomHeader>();
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Create the list
  hipc::list<int> obj;
  if (rank == 0) {
    // Initialize in shared memory
    obj.shm_init(alloc);
    // Save the list inside the allocator's header
    header->obj_ = obj.GetShmPointer<hipc::Pointer>();
    // Emplace 1024 elements
    for (int i = 0; i < 1024; ++i) {
      obj.emplace_back(10);
    }
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Find the list in shared memory
  if (rank != 0) {
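    // Deserialize the list from the TypedPointer stored in the header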
    obj << header->obj_;
  }

  // Read list on all ranks
  for (hipc::ShmRef<int> x : obj) {
    assert(*x == 10);
  }
  MPI_Barrier(MPI_COMM_WORLD);

  // Finalize
  if (rank == 0) {
    std::cout << "COMPLETE!" << std::endl;
  }
  MPI_Finalize();
}

Unordered Map

NUMA-Aware

There are two NUMA-aware data structures under development.
