shadow.pseudo module

class shadow.pseudo.PL(model, weight_function, ssml_mode=True, missing_label=-1)[source]

Bases: shadow.module_wrapper.ModuleWrapper

Pseudo Label model wrapper.

The pseudo labeling wrapper weights samples according to model score. This is a form of entropy regularization. For example, a binary random variable with distribution \(P(X=1) = 0.5\) and \(P(X=0) = 0.5\) has much higher entropy than one with \(P(X=1) = 0.9\) and \(P(X=0) = 0.1\).

Parameters
  • model (torch.nn.Module) – model to wrap for pseudo labeling.

  • weight_function (callable) – assigns per-sample weights based on raw model outputs.

  • ssml_mode (bool, optional) – semi-supervised learning mode; toggles whether loss is computed for all inputs or only for data with missing labels. Defaults to True.

  • missing_label (int, optional) – integer value used to represent missing labels. Defaults to -1.
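The weight_function receives raw model outputs and returns per-sample weights. As a hypothetical illustration (a pure-Python sketch, not part of the shadow library), a common pseudo-labeling heuristic weights each sample by its maximum softmax probability:

```python
import math

def confidence_weight(outputs):
    """Hypothetical weight function: weight each sample by its
    maximum softmax probability, so confident predictions count
    more toward the pseudo-label loss than ambiguous ones."""
    weights = []
    for logits in outputs:
        m = max(logits)                              # subtract max for stability
        exps = [math.exp(v - m) for v in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        weights.append(max(probs))
    return weights

# A confident sample gets a weight near 1; an ambiguous one near 1/n_classes.
confidence_weight([[10.0, 0.0], [0.1, 0.2]])  # first weight near 1.0, second near 0.52
```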

get_technique_cost(x, targets)[source]

Compute loss from pseudo labeling.

Parameters
  • x (torch.Tensor) – Tensor of the data

  • targets (torch.Tensor) – 1D tensor of corresponding labels. Unlabeled data is indicated by self.missing_label.

Returns

Pseudo label loss.

Return type

torch.Tensor
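As a rough illustration of what a pseudo-label cost computes, the following is a pure-Python sketch of one assumed form (hard pseudo-labels with cross-entropy), not the exact shadow implementation: each unlabeled sample is scored against its own argmax prediction, and labeled samples are skipped when operating in semi-supervised mode.

```python
import math

def pseudo_label_loss(outputs, targets, missing_label=-1):
    """Sketch of a pseudo-labeling cost (assumed form): for each
    sample whose target equals missing_label, take the argmax
    prediction as a hard pseudo-label and accumulate cross-entropy
    against it. Labeled samples are ignored (ssml_mode behavior)."""
    total, count = 0.0, 0
    for logits, y in zip(outputs, targets):
        if y != missing_label:      # skip labeled data
            continue
        m = max(logits)             # log-softmax, numerically stable
        log_norm = math.log(sum(math.exp(v - m) for v in logits))
        log_probs = [v - m - log_norm for v in logits]
        pseudo = logits.index(max(logits))   # hard pseudo-label
        total += -log_probs[pseudo]
        count += 1
    return total / max(count, 1)
```

A confident prediction contributes a loss near zero, which is the entropy-regularization effect described above: the cost pushes unlabeled predictions toward low-entropy (confident) distributions.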

class shadow.pseudo.Threshold(thresholds)[source]

Bases: torch.nn.Module

Per-class thresholding operator.

Parameters

thresholds (torch.Tensor) – 1D float tensor of thresholds with length equal to the number of classes. Each element should lie in \([0, 1]\) and represents a per-class threshold. Thresholds apply to normalized scores (i.e., scores that sum to 1).

Example

>>> myThresholder = Threshold(torch.tensor([0.8, 0.9]))
>>> myThresholder(torch.tensor([[10.0, 90.0], [95.0, 95.4], [0.3, 0.4]]))
tensor([1, 0, 0])

forward(predictions)[source]

Threshold multi-class scores.

Parameters

predictions (torch.Tensor) – 2D model outputs of shape (n_samples, n_classes). Scores need not be normalized in advance.

Returns

Binary thresholding result for each sample.

Return type

torch.Tensor
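For illustration, the thresholding behavior in the example above can be sketched in pure Python. This assumes sum-to-1 normalization and an inclusive comparison; the actual shadow implementation may differ in both respects:

```python
def threshold(predictions, thresholds):
    """Sketch of per-class thresholding (assumed semantics):
    normalize each row of scores to sum to 1, then flag the sample
    as 1 if any class's normalized score meets its per-class
    threshold, else 0."""
    out = []
    for row in predictions:
        total = sum(row)
        probs = [v / total for v in row]
        out.append(int(any(p >= t for p, t in zip(probs, thresholds))))
    return out

threshold([[10, 90], [95, 95.4], [0.3, 0.4]], [0.8, 0.9])  # [1, 0, 0]
```

Only the first sample passes: its normalized score for class 1 is 0.9, meeting the 0.9 threshold, while the other two rows normalize to near-uniform scores below both thresholds.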