Options to configure a DataLoader.
#include <dataloader_options.h>
Public Member Functions

DataLoaderOptions(size_t batch_size)

TORCH_ARG(size_t, batch_size)
    The size of each batch to fetch.

TORCH_ARG(size_t, workers) = 0
    The number of worker threads to launch.

TORCH_ARG(optional<size_t>, max_jobs)
    The maximum number of jobs to enqueue for fetching by worker threads.

TORCH_ARG(optional<std::chrono::milliseconds>, timeout)
    An optional limit on the time to wait for the next batch.

TORCH_ARG(bool, enforce_ordering)
    Whether to enforce ordering of batches when multiple are loaded asynchronously by worker threads.

TORCH_ARG(bool, drop_last)
    Whether to omit the last batch if it contains less than batch_size examples.
Detailed Description

Options to configure a DataLoader.

Definition at line 13 of file dataloader_options.h.
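The options are typically built with the chained setters that TORCH_ARG generates and then handed to a data loader. The following is a minimal sketch assuming the libtorch torch::data API (datasets::MNIST, transforms::Stack, make_data_loader); the dataset path "./mnist-data" and the concrete numbers are placeholders, not part of this class.

#include <torch/torch.h>

#include <chrono>
#include <iostream>

int main() {
  namespace data = torch::data;

  // Each TORCH_ARG setter assigns the field and returns *this, so the options
  // can be chained. workers(0) would instead load batches synchronously on the
  // calling thread.
  auto options = data::DataLoaderOptions(/*batch_size=*/64)
                     .workers(2)
                     .timeout(std::chrono::milliseconds(3000));

  // Placeholder dataset; Stack<> collates individual examples into one batch tensor.
  auto dataset = data::datasets::MNIST("./mnist-data")
                     .map(data::transforms::Stack<>());

  auto loader = data::make_data_loader<data::samplers::SequentialSampler>(
      std::move(dataset), options);

  for (auto& batch : *loader) {
    // batch.data holds the stacked images, batch.target the labels.
    std::cout << batch.data.sizes() << '\n';
    break;
  }
  return 0;
}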
torch::data::DataLoaderOptions::TORCH_ARG(size_t, workers) = 0

The number of worker threads to launch.

If zero, the main thread will synchronously perform the data loading.
torch::data::DataLoaderOptions::TORCH_ARG(optional<size_t>, max_jobs)
The maximum number of jobs to enqueue for fetching by worker threads.
Defaults to two times the number of worker threads.
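For example (a sketch using the chained setters generated by TORCH_ARG; the numbers are arbitrary), four workers would imply a default of eight queued jobs, and the cap below lowers it to four:

auto options = torch::data::DataLoaderOptions(/*batch_size=*/32)
                   .workers(4)    // default max_jobs would be 2 * 4 = 8
                   .max_jobs(4);  // cap the prefetch queue at 4 outstanding jobs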
torch::data::DataLoaderOptions::TORCH_ARG(bool, enforce_ordering)
Whether to enforce ordering of batches when multiple are loaded asynchronously by worker threads.
Set to false for better performance if you do not care about determinism.
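A sketch of the trade-off (values are illustrative):

// With ordering disabled, batches are yielded as soon as any worker finishes
// them, so their order may differ from the sampler's order between runs.
auto options = torch::data::DataLoaderOptions(/*batch_size=*/128)
                   .workers(4)
                   .enforce_ordering(false);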
torch::data::DataLoaderOptions::TORCH_ARG(bool, drop_last)
Whether to omit the last batch if it contains less than batch_size examples.
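For instance, with 10 examples and a batch_size of 3, the loader yields batches of 3, 3, 3, and 1 examples when drop_last is false, and only the three full batches when it is true. A sketch of the two configurations:

// Keep the trailing partial batch versus skip it.
auto keep_partial = torch::data::DataLoaderOptions(/*batch_size=*/3).drop_last(false);
auto full_only    = torch::data::DataLoaderOptions(/*batch_size=*/3).drop_last(true);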