This function creates a mirai daemons object, allocating workers to the specified GPUs based on each GPU's available VRAM.

Usage

gpu_daemons(
  gpu_ids = 0,
  n_workers = NULL,
  memory_per_worker_mb = NULL,
  reserve_memory_mb = 1024,
  framework = "none",
  worker_type = "persistent"
)

Arguments

gpu_ids

A numeric vector of GPU IDs to use (e.g., 0, c(0, 1)).

n_workers

The total number of workers to create across all specified GPUs. To terminate all daemons, set n_workers = 0.
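For example, a session might start a pool of workers and later shut them all down (a sketch; the argument values are illustrative):

```r
# Start 4 workers spread across GPUs 0 and 1
gpu_daemons(gpu_ids = c(0, 1), n_workers = 4)

# ... dispatch tasks with mirai::mirai() ...

# Terminate all daemons when finished
gpu_daemons(n_workers = 0)
```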

memory_per_worker_mb

The amount of VRAM in MB to allocate for each worker. This is used for both capacity planning and, if a supported framework is chosen, for setting a memory limit within the worker.

reserve_memory_mb

The amount of VRAM, in MB, to reserve on each GPU. Reserved memory is excluded from capacity planning and is never allocated to workers.

framework

A character string specifying the ML/AI framework to be used by the workers. Currently supported: "none" (default), "tensorflow". If a supported framework is specified, gpumux will automatically configure the worker to respect the memory_per_worker_mb limit.
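A sketch of a framework-aware call, assuming TensorFlow workloads (the memory value is illustrative; gpumux applies the per-worker limit inside each daemon):

```r
# Cap each TensorFlow worker at 2048 MB of VRAM on GPU 0
gpu_daemons(
  gpu_ids = 0,
  n_workers = 2,
  memory_per_worker_mb = 2048,
  framework = "tensorflow"
)
```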

worker_type

A character string specifying the daemon strategy. "persistent" (the default) creates long-lived daemons that execute many tasks, offering high performance. "proxy" creates daemons that spawn a new, clean worker process for each task, offering maximum stability and guaranteed memory cleanup at the cost of performance overhead.
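When tasks are suspected of leaking GPU memory, the proxy strategy trades throughput for guaranteed cleanup (a sketch; values are illustrative):

```r
# Each task runs in a fresh worker process that is torn down afterwards,
# so any VRAM it held is released between tasks
gpu_daemons(gpu_ids = 0, n_workers = 2, worker_type = "proxy")
```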

Value

A mirai daemons object, ready to be used with mirai::mirai().
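A minimal end-to-end sketch, assuming two visible GPUs and that gpumux is attached (argument values are illustrative):

```r
library(mirai)
library(gpumux)

# Spread 4 workers across GPUs 0 and 1, budgeting 2048 MB of VRAM each
gpu_daemons(gpu_ids = c(0, 1), n_workers = 4, memory_per_worker_mb = 2048)

# Dispatch a task to one of the GPU-bound daemons
m <- mirai::mirai(Sys.getpid())
m[]  # wait for and collect the result

# Shut down all daemons when finished
gpu_daemons(n_workers = 0)
```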