data_manager module
- class data_manager.CUDADataManager(num_agents: Optional[int] = None, num_envs: Optional[int] = None, episode_length: Optional[int] = None)
Bases:
object
CUDA Data Manager: manages data initialization on the GPU, and data transfer between the CPU host and the GPU device
Example

    cuda_data_manager = CUDADataManager(
        num_agents=10, num_envs=5, episode_length=100
    )

    data1 = DataFeed()
    data1.add_data(name="X",
                   data=np.array([[1, 2, 3, 4, 5],
                                  [0, 0, 0, 0, 0],
                                  [0, 0, 0, 0, 0]]))
    data1.add_data(name="a", data=100)
    cuda_data_manager.push_data_to_device(data1)

    data2 = DataFeed()
    data2.add_data(name="Y",
                   data=[[0.1, 0.2, 0.3, 0.4, 0.5],
                         [0.0, 0.0, 0.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0, 0.0, 0.0]])
    cuda_data_manager.push_data_to_device(data2, torch_accessible=True)

    X_copy_at_host = cuda_data_manager.pull_data_from_device(name="X")
    Y_copy_at_host = cuda_data_manager.pull_data_from_device(name="Y")

    if cuda_data_manager.is_data_on_device_via_torch("Y"):
        Y_tensor_accessible_by_torch = cuda_data_manager.data_on_device_via_torch("Y")

    # cuda_function here assumes a compiled CUDA C function
    cuda_function(cuda_data_manager.device_data("X"),
                  cuda_data_manager.device_data("Y"),
                  block=(10, 1, 1), grid=(5, 1))
- add_meta_info(meta: Dict)
Add meta information to the data manager; only scalar integer or float values are accepted
- Parameters
meta – for example, {“episode_length”: 100, “num_agents”: 10}
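The scalar-only restriction can be illustrated with a minimal validation sketch (a hypothetical helper for illustration, not part of the library):

```python
def validate_meta(meta):
    # add_meta_info only accepts scalar integer or float values;
    # reject anything else (booleans, arrays, lists, strings, ...).
    for key, value in meta.items():
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            raise TypeError(f"meta value for '{key}' must be a scalar int or float")
    return meta

validate_meta({"episode_length": 100, "num_agents": 10})  # accepted
```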
- add_shared_constants(constants: Dict)
Add shared constants to the data manager
- Parameters
constants – e.g., {“action_mapping”: [[0,0], [1,1], [-1,-1]]}
- data_on_device_via_torch(name: str) torch.Tensor
The data on the device, as a torch.Tensor. This is used for direct PyTorch access on the GPU. To fetch the tensor back to the host, call pull_data_from_device()
- Parameters
name – name of the device array
returns: the tensor itself, resident on the device
- device_data(name: str)
- Parameters
name – name of the device data
returns: the device data pointer for CUDA kernels to access
- get_dtype(name: str)
- get_shape(name: str)
- property host_data
- is_data_on_device(name: str) bool
- is_data_on_device_via_torch(name: str) bool
Check whether the data exists on the device and is directly accessible by PyTorch
- Parameters
name – name of the device array
- property log_data_list
- meta_info(name: str)
- pull_data_from_device(name: str)
Fetch the values of a device array back to the host
- Parameters
name – name of the device array
returns: a host copy of the scalar or numpy array fetched back from the device array
- push_data_to_device(data: Dict, torch_accessible: bool = False)
Register data on the host and push it to the device: (1) register at self._host_data; (2) push to the device and register at self._device_data_pointer, so CUDA programs can directly access the data via pointer; (3) if save_copy_and_apply_at_reset or log_data_across_episode is set in the data's attributes, register and push those copies to the device via steps (1) and (2) as well
- Parameters
data – e.g., {"name": {"data": numpy array, "attributes": {"save_copy_and_apply_at_reset": True, "log_data_across_episode": True}}}. This data dictionary can be constructed by warp_drive.utils.data_feed.DataFeed
torch_accessible – if True, the data is directly accessible by PyTorch
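As a sketch, the expected shape of the data dictionary can be built by hand (in practice DataFeed produces the same structure; the array contents below are illustrative):

```python
import numpy as np

# Hand-built dictionary matching the documented schema for push_data_to_device.
data = {
    "X": {
        "data": np.zeros((3, 5), dtype=np.float32),
        "attributes": {
            "save_copy_and_apply_at_reset": True,  # keep a reset copy on the device
            "log_data_across_episode": True,       # keep a per-step log on the device
        },
    }
}
```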
- property reset_data_list
- reset_device(name: Optional[str] = None)
Reset the device array values back to the host array values. Note: this reset is not a device-only execution; it incurs a data transfer from host to device
- Parameters
name – (optional) reset a device array by name; if None, reset all arrays
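The push/pull/reset semantics above can be modeled with a small in-memory sketch; the "device" here is just a second dictionary, not GPU memory, and the class is purely illustrative:

```python
import numpy as np

class HostDeviceSketch:
    # Minimal stand-in for the host/device bookkeeping described above.
    def __init__(self):
        self._host = {}
        self._device = {}

    def push_data_to_device(self, name, array):
        # Register on the host, then copy host -> device.
        self._host[name] = np.asarray(array)
        self._device[name] = self._host[name].copy()

    def pull_data_from_device(self, name):
        # Return a host copy of the device array.
        return self._device[name].copy()

    def reset_device(self, name=None):
        # Copy host values back to the device (a host -> device transfer).
        names = [name] if name is not None else list(self._host)
        for n in names:
            self._device[n] = self._host[n].copy()

mgr = HostDeviceSketch()
mgr.push_data_to_device("X", [1.0, 2.0, 3.0])
mgr._device["X"][:] = 0.0           # simulate a kernel mutating device data
after_step = mgr.pull_data_from_device("X")
mgr.reset_device("X")               # restore the original host values
after_reset = mgr.pull_data_from_device("X")
```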
- property scalar_data_list
- class data_manager.CudaTensorHolder(t)
Bases:
pycuda._driver.PointerHolderBase
A class that facilitates casting tensors to pointers.