* Fix broken source links in docs, fixes #675
* Fix links to papers to point to arXiv abstract pages rather than PDFs
* Fix divide symbol in `similarity.metrics` docstrings
* Fix ray link in example notebook
doc/source/examples/cem_iris.ipynb (+1 -1)

@@ -13,7 +13,7 @@
 "source": [
 "The Contrastive Explanation Method (CEM) can generate black box model explanations in terms of pertinent positives (PP) and pertinent negatives (PN). For PP, it finds what should be minimally and sufficiently present (e.g. important pixels in an image) to justify its classification. PN on the other hand identify what should be minimally and necessarily absent from the explained instance in order to maintain the original prediction.\n",
 "\n",
-"The original paper where the algorithm is based on can be found on [arXiv](https://arxiv.org/pdf/1802.07623.pdf).\n",
+"The original paper where the algorithm is based on can be found on [arXiv](https://arxiv.org/abs/1802.07623).\n",
 "\n",
 "This notebook requires the seaborn package for visualization which can be installed via pip:"
doc/source/examples/cem_mnist.ipynb (+1 -1)

@@ -13,7 +13,7 @@
 "source": [
 "The Contrastive Explanation Method (CEM) can generate black box model explanations in terms of pertinent positives (PP) and pertinent negatives (PN). For PP, it finds what should be minimally and sufficiently present (e.g. important pixels in an image) to justify its classification. PN on the other hand identify what should be minimally and necessarily absent from the explained instance in order to maintain the original prediction.\n",
 "\n",
-"The original paper where the algorithm is based on can be found on [arXiv](https://arxiv.org/pdf/1802.07623.pdf)."
+"The original paper where the algorithm is based on can be found on [arXiv](https://arxiv.org/abs/1802.07623)."
doc/source/examples/distributed_kernel_shap_adult_lr.ipynb (+1 -1)

@@ -23,7 +23,7 @@
 "<div class=\"alert alert-warning\">\n",
 "Warning\n",
 "\n",
-"Windows support for the `ray` Python library is [still experimental](https://docs.ray.io/en/stable/installation.html#windows-support). Using `KernelShap` in parallel is not currently supported on Windows platforms.\n",
+"Windows support for the `ray` Python library is [in beta](https://docs.ray.io/en/latest/ray-overview/installation.html#windows-support). Using `KernelShap` in parallel is not currently supported on Windows platforms.\n",
doc/source/examples/integrated_gradients_transformers.ipynb (+2 -2)

@@ -62,7 +62,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Here we define some functions needed to process the data and visualize. For consistency with other [text examples](https://github.com/SeldonIO/alibi/blob/master/examples/integrated_gradients_imdb.ipynb) in alibi, we will use the **IMDB reviews** dataset provided by Keras. Since the dataset consists of reviews that are already tokenized, we need to decode each sentence and re-convert them into tokens using the **(distil)BERT** tokenizer."
+"Here we define some functions needed to process the data and visualize. For consistency with other [text examples](../examples/integrated_gradients_imdb.ipynb) in alibi, we will use the **IMDB reviews** dataset provided by Keras. Since the dataset consists of reviews that are already tokenized, we need to decode each sentence and re-convert them into tokens using the **(distil)BERT** tokenizer."
 ]
 },
 {
@@ -185,7 +185,7 @@
 " tokens = [tokenizer.decode([X[i]]) for i in range(len(X))]\n",