
[TO_REVIEW] Fix docstring for the regularization parameter of DA loss #230

Merged: 3 commits, Sep 13, 2024
skada/deep/_adversarial.py (4 changes: 2 additions & 2 deletions)

@@ -100,7 +100,7 @@ def DANN(
         The name of the module's layer whose outputs are
         collected during the training.
     reg : float, default=1
-        Regularization parameter.
+        Regularization parameter for DA loss.
     domain_classifier : torch module, default=None
         A PyTorch :class:`~torch.nn.Module` used to classify the
         domain. If None, a domain classifier is created following [1]_.

@@ -344,7 +344,7 @@ def CDAN(
         The name of the module's layer whose outputs are
         collected during the training.
     reg : float, default=1
-        Regularization parameter.
+        Regularization parameter for DA loss.
     max_features : int, default=4096
         Maximum size of the input for the domain classifier.
         4096 is the largest number of units in typical deep network
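For reference, a minimal construction sketch (not part of this PR) showing where the documented `reg` enters when building a DANN estimator. The toy module, layer name, value of `reg`, and the explicit domain classifier are illustrative assumptions; only the parameter names in the docstring above come from the source.

```python
import torch
from skada.deep import DANN

# Toy feature extractor + head; "fc" is the layer whose outputs are
# collected during training and fed to the domain classifier.
net = torch.nn.Sequential()
net.add_module("fc", torch.nn.Linear(10, 8))
net.add_module("out", torch.nn.Linear(8, 2))

# reg weights the adversarial DA loss relative to the base prediction loss.
model = DANN(
    net,
    layer_name="fc",
    reg=0.5,
    domain_classifier=torch.nn.Linear(8, 1),  # placeholder domain classifier
)
```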
skada/deep/_divergence.py (4 changes: 2 additions & 2 deletions)

@@ -63,7 +63,7 @@ def DeepCoral(module, layer_name, reg=1, base_criterion=None, **kwargs):
         The name of the module's layer whose outputs are
         collected during the training for the adaptation.
     reg : float, optional (default=1)
-        The regularization parameter of the covariance estimator.
+        Regularization parameter for DA loss.
     base_criterion : torch criterion (class)
         The base criterion used to compute the loss with source
         labels. If None, the default is `torch.nn.CrossEntropyLoss`.

@@ -142,7 +142,7 @@ def DAN(module, layer_name, reg=1, sigmas=None, base_criterion=None, **kwargs):
         The name of the module's layer whose outputs are
         collected during the training for the adaptation.
     reg : float, optional (default=1)
-        The regularization parameter of the covariance estimator.
+        Regularization parameter for DA loss.
     sigmas : array-like, optional (default=None)
         The sigmas for the Gaussian kernel.
     base_criterion : torch criterion (class)
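A short usage sketch of how `reg` is passed to DeepCoral and DAN, based on the signatures shown in the hunks above; the toy module, layer name, and parameter values are assumptions.

```python
import torch
from skada.deep import DAN, DeepCoral

# Toy module; "feat" is the layer whose outputs are collected for adaptation.
net = torch.nn.Sequential()
net.add_module("feat", torch.nn.Linear(10, 8))
net.add_module("head", torch.nn.Linear(8, 2))

# reg scales the DA loss term against the base criterion.
coral_model = DeepCoral(net, layer_name="feat", reg=0.1)
dan_model = DAN(net, layer_name="feat", reg=0.1, sigmas=[1.0, 2.0])
```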
skada/deep/_optimal_transport.py (2 changes: 1 addition & 1 deletion)

@@ -92,7 +92,7 @@ def DeepJDOT(
         The name of the module's layer whose outputs are
         collected during the training for the adaptation.
     reg : float, default=1
-        Regularization parameter.
+        Regularization parameter for DA loss.
     reg_cl : float, default=1
         Class distance term regularization parameter.
     base_criterion : torch criterion (class)
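Likewise for DeepJDOT, a hedged sketch (module, layer name, and values are placeholders, and additional arguments may be required in practice) illustrating that `reg` weights the DA loss while `reg_cl` weights the class-distance term:

```python
import torch
from skada.deep import DeepJDOT

net = torch.nn.Sequential()
net.add_module("feat", torch.nn.Linear(10, 8))
net.add_module("clf", torch.nn.Linear(8, 2))

# reg weights the DA (transport) loss; reg_cl weights the class-distance term.
model = DeepJDOT(net, layer_name="feat", reg=1.0, reg_cl=0.1)
```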
skada/deep/base.py (2 changes: 1 addition & 1 deletion)

@@ -37,7 +37,7 @@ class DomainAwareCriterion(torch.nn.Module):
         The initialized criterion (loss) used to compute the
         loss to reduce the divergence between domains.
     reg: float, default=1
-        Regularization parameter.
+        Regularization parameter for DA loss.
     reduction: str, default='mean'
         Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'.
         'none': no reduction will be applied,
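For intuition, a conceptual sketch (not skada's actual implementation) of what "Regularization parameter for DA loss" means in DomainAwareCriterion: the domain-adaptation loss is added to the base prediction loss with weight `reg`.

```python
import torch

def domain_aware_total_loss(base_loss, da_loss, reg=1.0):
    """Total loss = base prediction loss + reg * domain-adaptation loss."""
    return base_loss + reg * da_loss

# With the default reg=1 the DA term counts as much as the prediction loss;
# a smaller reg down-weights the adaptation term.
total = domain_aware_total_loss(torch.tensor(0.7), torch.tensor(0.2), reg=1.0)
print(total)  # tensor(0.9000)
```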