
Create a mirai daemons object for GPU workers
gpu_daemons.Rd

This function creates a mirai daemons object by allocating workers across the specified GPUs based on available VRAM.
Usage
gpu_daemons(
gpu_ids = 0,
n_workers = NULL,
memory_per_worker_mb = NULL,
reserve_memory_mb = 1024,
framework = "none",
worker_type = "persistent"
)

Arguments
- gpu_ids
A numeric vector of GPU IDs to use (e.g., 0 or c(0, 1)).
- n_workers
The total number of workers to create across all specified GPUs. To terminate all daemons, set n_workers = 0.
- memory_per_worker_mb
The amount of VRAM in MB to allocate to each worker. This is used for capacity planning and, if a supported framework is chosen, for setting a memory limit within the worker.
- reserve_memory_mb
A numeric value for the VRAM to reserve on each GPU, in MB.
- framework
A character string specifying the ML/AI framework used by the workers. Currently supported: "none" (the default) and "tensorflow". If a supported framework is specified, gpumux automatically configures each worker to respect the memory_per_worker_mb limit.
- worker_type
A character string specifying the daemon strategy. "persistent" (the default) creates long-lived daemons that execute many tasks, offering high performance. "proxy" creates daemons that spawn a new, clean worker process for each task, offering maximum stability and guaranteed memory cleanup at the cost of performance overhead.
Value
A mirai daemons object, ready to be used with mirai::mirai().
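Examples

A sketch of a typical session, using only the signature documented above. The task body and the assumption that each worker sees a single GPU via CUDA_VISIBLE_DEVICES are illustrative, not guaranteed by the package:

```r
library(mirai)
library(gpumux)

# Split 4 workers across GPUs 0 and 1: each worker is planned for 4 GB
# of VRAM, and 1 GB is held back per GPU. Because a supported framework
# ("tensorflow") is named, gpumux also sets a per-worker memory limit.
gpu_daemons(
  gpu_ids = c(0, 1),
  n_workers = 4,
  memory_per_worker_mb = 4096,
  reserve_memory_mb = 1024,
  framework = "tensorflow",
  worker_type = "persistent"
)

# Dispatch work to the daemons as usual with mirai.
m <- mirai::mirai({
  # Assumption: the worker is pinned to its GPU via an environment
  # variable such as CUDA_VISIBLE_DEVICES.
  Sys.getenv("CUDA_VISIBLE_DEVICES")
})
m[]  # collect the result

# Terminate all daemons, as documented for n_workers.
gpu_daemons(n_workers = 0)
```

For workloads that leak GPU memory between tasks, the same call with worker_type = "proxy" trades throughput for a fresh process (and therefore a clean VRAM slate) per task.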