skorch.regressor

NeuralNet subclasses for regression tasks.

class skorch.regressor.NeuralNetRegressor(module, *args, criterion=<class 'torch.nn.modules.loss.MSELoss'>, **kwargs)[source]

NeuralNet for regression tasks.

Use this specifically if you have a standard regression task, with input data X and target y. y must be 2d, e.g. of shape (n_samples, 1).
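For example, a 1d target can be reshaped to the required 2d shape before fitting. Here is a minimal sketch; the small linear MyModule is only a stand-in for your own PyTorch module:

>>> import numpy as np
>>> import torch
>>> from skorch import NeuralNetRegressor
>>> class MyModule(torch.nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.lin = torch.nn.Linear(20, 1)
...     def forward(self, X):
...         return self.lin(X)
>>> X = np.random.randn(100, 20).astype(np.float32)
>>> y = np.random.randn(100).astype(np.float32)
>>> net = NeuralNetRegressor(MyModule, max_epochs=3)
>>> net.fit(X, y.reshape(-1, 1))  # reshape the 1d y to the required 2d shape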

In addition to the parameters listed below, there are parameters with specific prefixes that are handled separately. To illustrate this, here is an example:

>>> net = NeuralNet(
...    ...,
...    optimizer=torch.optim.SGD,
...    optimizer__momentum=0.95,
...)

This way, when optimizer is initialized, NeuralNet will take care of setting the momentum parameter to 0.95.

(Note that the double underscore notation in optimizer__momentum means that the parameter momentum should be set on the object optimizer. These are the same semantics as used by sklearn.)

Furthermore, this allows you to change those parameters later:

net.set_params(optimizer__momentum=0.99)

This can be useful when you want to change certain parameters using a callback, when using the net in an sklearn grid search, etc.
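For instance, the prefixed parameters make the net work directly with sklearn's grid search. A sketch, reusing X, y, and MyModule from the example above:

>>> from sklearn.model_selection import GridSearchCV
>>> net = NeuralNetRegressor(MyModule, verbose=0)
>>> params = {
...     'lr': [0.01, 0.02],
...     'optimizer__momentum': [0.9, 0.95],
... }
>>> gs = GridSearchCV(net, params, cv=3)
>>> gs.fit(X, y.reshape(-1, 1))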

By default, EpochTimer, BatchScoring (for both training and validation datasets), and PrintLog callbacks are installed for the user’s convenience.

Parameters:
module : torch module (class or instance)

A PyTorch Module. In general, the uninstantiated class should be passed, although instantiated modules will also work.

criterion : torch criterion (class, default=torch.nn.MSELoss)

Mean squared error loss.

optimizer : torch optim (class, default=torch.optim.SGD)

The uninitialized optimizer (update rule) used to optimize the module.

lr : float (default=0.01)

Learning rate passed to the optimizer. You may use lr instead of optimizer__lr; both result in the same outcome.

max_epochs : int (default=10)

The number of epochs to train for each fit call. Note that you may keyboard-interrupt training at any time.

batch_size : int (default=128)

Mini-batch size. Use this instead of setting iterator_train__batch_size and iterator_valid__batch_size, which would result in the same outcome. If batch_size is -1, a single batch with all the data will be used during training and validation.

iterator_train : torch DataLoader

The default PyTorch DataLoader used for training data.

iterator_valid : torch DataLoader

The default PyTorch DataLoader used for validation and test data, i.e. during inference.

dataset : torch Dataset (default=skorch.dataset.Dataset)

The dataset is necessary for the incoming data to work with pytorch’s DataLoader. It has to implement the __len__ and __getitem__ methods. The provided dataset should be capable of dealing with a lot of data types out of the box, so only change this if your data is not supported. You should generally pass the uninitialized Dataset class and define additional arguments to X and y by prefixing them with dataset__. It is also possible to pass an initialized Dataset, in which case no additional arguments may be passed.
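For example, an extra init argument of a custom dataset can be set through the dataset__ prefix. A sketch; MyDataset and its normalize argument are hypothetical, and MyModule is the module from the first example:

>>> from skorch.dataset import Dataset
>>> class MyDataset(Dataset):  # hypothetical Dataset subclass
...     def __init__(self, X, y=None, normalize=False):
...         super().__init__(X, y=y)
...         self.normalize = normalize
>>> net = NeuralNetRegressor(
...     MyModule,
...     dataset=MyDataset,
...     dataset__normalize=True,  # forwarded to MyDataset.__init__
... )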

train_split : None or callable (default=skorch.dataset.ValidSplit(5))

If None, there is no train/validation split. Else, train_split should be a function or callable that is called with X and y data and should return the tuple dataset_train, dataset_valid. The validation data may be None.
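For example, the size of the validation split can be changed, or the split disabled altogether (a minimal sketch, with MyModule as above):

>>> from skorch.dataset import ValidSplit
>>> net = NeuralNetRegressor(MyModule, train_split=ValidSplit(10))  # hold out 1/10 for validation
>>> net = NeuralNetRegressor(MyModule, train_split=None)  # no validation split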

callbacks : None, “disable”, or list of Callback instances (default=None)

Which callbacks to enable. There are three possible values:

If callbacks=None, only use default callbacks, those returned by get_default_callbacks.

If callbacks="disable", disable all callbacks, i.e. do not run any of the callbacks.

If callbacks is a list of callbacks, use those callbacks in addition to the default callbacks. Each callback should be an instance of Callback.

Callback names are inferred from the class name. Name conflicts are resolved by appending a count suffix starting with 1, e.g. EpochScoring_1. Alternatively, a tuple (name, callback) can be passed, where name should be unique. Callbacks may or may not be instantiated. The callback name can be used to set parameters on specific callbacks (e.g., for the callback with name 'print_log', use net.set_params(callbacks__print_log__keys_ignored=['epoch', 'train_loss'])).
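For example, a callback can be registered under an explicit name and then addressed via set_params. A sketch using EpochScoring with the 'r2' scorer:

>>> from skorch.callbacks import EpochScoring
>>> net = NeuralNetRegressor(
...     MyModule,
...     callbacks=[('r2', EpochScoring('r2', lower_is_better=False))],
... )
>>> net.set_params(callbacks__r2__on_train=True)  # also score the training data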

predict_nonlinearity : callable, None, or ‘auto’ (default=’auto’)

The nonlinearity to be applied to the prediction. When set to ‘auto’, infers the correct nonlinearity based on the criterion (softmax for CrossEntropyLoss and sigmoid for BCEWithLogitsLoss). If it cannot be inferred or if the parameter is None, just use the identity function. Don’t pass a lambda function if you want the net to be pickleable.

In case a callable is passed, it should accept the output of the module (the first output if there is more than one), which is a PyTorch tensor, and return the transformed PyTorch tensor.

This can be useful, e.g., when predict_proba() should return probabilities but a criterion is used that does not expect probabilities. In that case, the module can return whatever is required by the criterion and the predict_nonlinearity transforms this output into probabilities.

The nonlinearity is applied only when calling predict() or predict_proba() but not anywhere else – notably, the loss is unaffected by this nonlinearity.
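For instance, if the module were trained to predict targets on a log scale, a callable could undo that transformation at predict time. A sketch; the log-scale setup is an assumption for illustration:

>>> import torch
>>> net = NeuralNetRegressor(
...     MyModule,
...     predict_nonlinearity=torch.exp,  # map log-scale outputs back to the original scale
... )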

warm_start : bool (default=False)

Whether each fit call should lead to a re-initialization of the module (cold start) or whether the module should be trained further (warm start).
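For example, with warm_start=True, consecutive fit calls continue training the same module instead of restarting from scratch (a minimal sketch):

>>> net = NeuralNetRegressor(MyModule, warm_start=True, max_epochs=5)
>>> net.fit(X, y.reshape(-1, 1))  # trains for 5 epochs
>>> net.fit(X, y.reshape(-1, 1))  # trains for 5 more epochs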

verbose : int (default=1)

This parameter controls how much print output is generated by the net and its callbacks. By setting this value to 0, e.g. the summary scores at the end of each epoch are no longer printed. This can be useful when running a hyperparameter search. The summary scores are always logged in the history attribute, regardless of the verbose setting.

device : str, torch.device, or None (default=’cpu’)

The compute device to be used. If set to ‘cuda’ in order to use GPU acceleration, data in torch tensors will be pushed to cuda tensors before being sent to the module. If set to None, no device handling is performed; the module and the data are left on their current devices.

Attributes:
prefixes_ : list of str

Contains the prefixes to special parameters. E.g., since there is the 'optimizer' prefix, it is possible to set parameters like so: NeuralNet(..., optimizer__momentum=0.95).

cuda_dependent_attributes_ : list of str

Contains a list of all attribute prefixes whose values depend on a CUDA device. If a NeuralNet trained with a CUDA-enabled device is unpickled on a machine without CUDA or with CUDA disabled, the listed attributes are mapped to CPU. Expand this list if you want to add other cuda-dependent attributes.

initialized_ : bool

Whether the NeuralNet was initialized.

module_ : torch module (instance)

The instantiated module.

criterion_ : torch criterion (instance)

The instantiated criterion.

callbacks_ : list of tuples

The complete list of initialized callbacks (i.e. default and user-provided), each as a (name, callback) tuple with a unique name.

_modules : list of str

List of names of all modules that are torch modules. This list is collected dynamically when the net is initialized. Typically, there is no reason for a user to modify this list.

_criteria : list of str

List of names of all criteria that are torch modules. This list is collected dynamically when the net is initialized. Typically, there is no reason for a user to modify this list.

_optimizers : list of str

List of names of all optimizers. This list is collected dynamically when the net is initialized. Typically, there is no reason for a user to modify this list.

Methods

check_data(X, y)
check_is_fitted([attributes]) Checks whether the net is initialized
check_training_readiness() Check that the net is ready to train
evaluation_step(batch[, training]) Perform a forward step to produce the output used for prediction and scoring.
fit(X, y, **fit_params) See NeuralNet.fit.
fit_loop(X[, y, epochs]) The proper fit loop.
forward(X[, training, device]) Gather and concatenate the output from forward call with input data.
forward_iter(X[, training, device]) Yield outputs of module forward calls on each batch of data.
get_all_learnable_params() Yield the learnable parameters of all modules
get_dataset(X[, y]) Get a dataset that contains the input data and is passed to the iterator.
get_iterator(dataset[, training]) Get an iterator that allows to loop over the batches of the given data.
get_loss(y_pred, y_true[, X, training]) Return the loss for this batch.
get_params_for(prefix) Collect and return init parameters for an attribute.
get_params_for_optimizer(prefix, …) Collect and return init parameters for an optimizer.
get_split_datasets(X[, y]) Get internal train and validation datasets.
get_train_step_accumulator() Return the train step accumulator.
infer(x, **fit_params) Perform a single inference step on a batch of data.
initialize() Initializes all of its components and returns self.
initialize_callbacks() Initializes all callbacks and saves the result in the callbacks_ attribute.
initialize_criterion() Initializes the criterion.
initialize_history() Initializes the history.
initialize_module() Initializes the module.
initialize_optimizer([triggered_directly]) Initialize the model optimizer.
initialized_instance(instance_or_cls, kwargs) Return an instance initialized with the given parameters
load_params([f_params, f_optimizer, …]) Loads the module’s parameters, history, and optimizer, not the whole object.
notify(method_name, **cb_kwargs) Call the callback method specified in method_name with parameters specified in cb_kwargs.
on_batch_begin(net[, batch, training])
on_epoch_begin(net[, dataset_train, …])
on_epoch_end(net[, dataset_train, dataset_valid])
on_train_begin(net[, X, y])
on_train_end(net[, X, y])
partial_fit(X[, y, classes]) Fit the module.
predict(X) Where applicable, return class labels for samples in X.
predict_proba(X) Return the output of the module’s forward method as a numpy array.
run_single_epoch(iterator, training, prefix, …) Compute a single epoch of train or validation.
save_params([f_params, f_optimizer, …]) Saves the module’s parameters, history, and optimizer, not the whole object.
score(X, y[, sample_weight]) Return the coefficient of determination of the prediction.
set_params(**kwargs) Set the parameters of this class.
train_step(batch, **fit_params) Prepares a loss function callable and passes it to the optimizer, hence performing one optimization step.
train_step_single(batch, **fit_params) Compute y_pred, loss value, and update net’s gradients.
trim_for_prediction() Remove all attributes not required for prediction.
validation_step(batch, **fit_params) Perform a forward step using batched data and return the resulting loss.
get_default_callbacks  
get_params  
initialize_virtual_params  
on_batch_end  
on_grad_computed  
fit(X, y, **fit_params)[source]

See NeuralNet.fit.

In contrast to NeuralNet.fit, y is non-optional to avoid mistakenly forgetting to pass it. However, y can be set to None in case it is derived dynamically from X.
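For example, when the input is a dataset that yields (X, y) pairs itself, y can be passed as None. A sketch; pairs_dataset is a hypothetical torch Dataset returning input/target tuples:

>>> net = NeuralNetRegressor(MyModule)
>>> net.fit(pairs_dataset, y=None)  # targets are taken from the dataset itself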