
Commit fdf1897

haojin2 authored and astonzhang committed
[DO NOT MERGE] Update for MXNet 1.6.0 (#548)
* [DO NOT MERGE] Update for MXNet 1.6.0
* Update house-price-kaggle
* Update random-variables.md
* Update dropout.md
* Update linear-algebra.md
* Update linear-algebra.md
1 parent ccc189a commit fdf1897

File tree

5 files changed: +11 −11 lines changed


chapter_appendix_math/integral-calculus.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -63,7 +63,7 @@ approx = np.sum(epsilon*f)
 true = np.log(2) / 2
 
 d2l.set_figsize()
-d2l.plt.bar(x, f, width = epsilon, align = 'edge')
+d2l.plt.bar(x.asnumpy(), f.asnumpy(), width = epsilon, align = 'edge')
 d2l.plt.plot(x, f, color='black')
 d2l.plt.ylim([0, 1])
 d2l.plt.show()
```
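The hunk above only changes plotting (MXNet 1.6 ndarrays are converted with `.asnumpy()` before being handed to matplotlib), but the surrounding computation is a Riemann sum. A minimal sketch in plain NumPy, so it runs without MXNet; the chapter defines `x`, `f`, and `epsilon` upstream of this hunk, so the integrand here is an assumption chosen to match `true = np.log(2) / 2` (the exact value of the integral of x/(1+x^2) over [0, 1]):

```python
import numpy as np

# Riemann-sum sketch mirroring the code around the hunk above.
# Assumption: f(x) = x / (1 + x**2) on [0, 1], whose exact integral
# is log(2) / 2, matching `true` in the diff.
epsilon = 0.001
x = np.arange(0, 1, epsilon)      # left endpoint of each strip
f = x / (1 + x**2)                # strip heights
approx = np.sum(epsilon * f)      # total area of the strips
true = np.log(2) / 2
```

With strips of width 0.001, the left Riemann sum lands within about 2.5e-4 of the true value.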

chapter_appendix_math/linear-algebra.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -312,8 +312,8 @@ In a fully machine learned solution, we would learn the threshold from the dataset
 ```{.python .input}
 # Print test set accuracy with eyeballed threshold
 w = (ave_1 - ave_0).T
-predictions = 1*(X_test.reshape(2000, -1).dot(w.flatten()) > -1500000)
-np.mean(predictions==y_test) # Accuracy
+predictions = X_test.reshape(2000, -1).dot(w.flatten()) > -1500000
+np.mean(predictions.astype(y_test.dtype)==y_test, dtype=np.float64) # Accuracy
 ```
 
 ## Geometry of Linear Transformations
````
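The change above keeps `predictions` as a boolean array and casts it to `y_test`'s dtype only for the equality check, rather than multiplying by 1. A runnable sketch of the same thresholded linear classifier on synthetic data; the chapter's `ave_0`, `ave_1`, `X_test`, and `y_test` come from image data, so the shapes, values, and threshold below are made up for illustration:

```python
import numpy as np

# Sketch of the thresholded linear classifier in the hunk above.
# Assumption: 28x28 "images" with made-up class means; the chapter's
# real data and its eyeballed threshold of -1500000 are not reproduced.
rng = np.random.default_rng(0)
ave_0 = rng.normal(0.0, 1.0, (28, 28))   # hypothetical class-0 mean image
ave_1 = ave_0 + 1.0                      # hypothetical class-1 mean image
w = (ave_1 - ave_0).T                    # difference-of-means direction
X_test = rng.normal(0.0, 1.0, (2000, 28, 28))
y_test = rng.integers(0, 2, 2000)
# The comparison yields a boolean array; casting to y_test's dtype before
# comparing is what the MXNet 1.6 update adds.
predictions = X_test.reshape(2000, -1).dot(w.flatten()) > 0
accuracy = np.mean(predictions.astype(y_test.dtype) == y_test,
                   dtype=np.float64)
```

On this random data the accuracy is near chance; the point is only the dtype handling, not the classifier's quality.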

chapter_appendix_math/random-variables.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -499,9 +499,9 @@ d2l.plt.figure(figsize=(12, 3))
 for i in range(3) :
     X = np.random.normal(0, 1, 500)
     Y = covs[i]*X + np.random.normal(0, 1, 500)
-
+
     d2l.plt.subplot(1, 4, i+1)
-    d2l.plt.scatter(X, Y)
+    d2l.plt.scatter(X.asnumpy(), Y.asnumpy())
     d2l.plt.xlabel('X')
     d2l.plt.ylabel('Y')
     d2l.plt.title("cov = {}".format(covs[i]))
@@ -572,9 +572,9 @@ d2l.plt.figure(figsize=(12, 3))
 for i in range(3) :
     X = np.random.normal(0, 1, 500)
     Y = cors[i] * X + np.sqrt(1 - cors[i]**2) * np.random.normal(0, 1, 500)
-
+
     d2l.plt.subplot(1, 4, i + 1)
-    d2l.plt.scatter(X, Y)
+    d2l.plt.scatter(X.asnumpy(), Y.asnumpy())
     d2l.plt.xlabel('X')
     d2l.plt.ylabel('Y')
     d2l.plt.title("cor = {}".format(cors[i]))
```
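Both hunks above only swap in `.asnumpy()` for plotting, but the construction being plotted is worth noting: `Y = cor*X + sqrt(1 - cor**2)*Z` with independent standard normals X and Z produces a pair whose correlation coefficient is (approximately, in sample) `cor`. A sketch in plain NumPy, with a seed added for reproducibility (the chapter's `cors` list and plotting calls are omitted):

```python
import numpy as np

# Sketch of the correlated-pair construction in the second hunk above.
# With X, Z ~ N(0, 1) independent, Y = cor*X + sqrt(1 - cor**2)*Z has
# unit variance and correlation `cor` with X.
np.random.seed(42)
cor = 0.9
X = np.random.normal(0, 1, 500)
Y = cor * X + np.sqrt(1 - cor**2) * np.random.normal(0, 1, 500)
sample_cor = np.corrcoef(X, Y)[0, 1]   # sample estimate, close to 0.9
```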

chapter_multilayer-perceptrons/dropout.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -210,7 +210,7 @@ def dropout(X, drop_prob):
     if drop_prob == 1:
         return np.zeros_like(X)
     mask = np.random.uniform(0, 1, X.shape) > drop_prob
-    return mask * X / (1.0-drop_prob)
+    return mask.astype(np.float32) * X / (1.0-drop_prob)
 ```
 
 We can test out the `dropout` function on a few examples.
````
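The change above casts the boolean mask to `float32` before multiplying, which MXNet 1.6's numpy-like arrays require. The function itself is inverted dropout: each element is kept with probability `1 - drop_prob` and scaled by `1 / (1 - drop_prob)` so the expectation is unchanged. A self-contained version in plain NumPy (the chapter's version runs on MXNet ndarrays):

```python
import numpy as np

# Inverted dropout, as updated in the hunk above, using plain NumPy.
# Kept entries are scaled by 1/(1 - drop_prob) so E[dropout(X)] == X.
def dropout(X, drop_prob):
    assert 0 <= drop_prob <= 1
    if drop_prob == 1:
        return np.zeros_like(X)          # everything dropped
    mask = np.random.uniform(0, 1, X.shape) > drop_prob
    # Boolean mask cast to float before multiplying (the 1.6.0 fix).
    return mask.astype(np.float32) * X / (1.0 - drop_prob)

X = np.arange(16, dtype=np.float32).reshape(2, 8)
```

With `drop_prob=0.5`, every surviving entry is exactly doubled and the rest are zero.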

chapter_multilayer-perceptrons/kaggle-house-price.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -180,9 +180,9 @@ Finally, via the `values` attribute,
 
 ```{.python .input n=9}
 n_train = train_data.shape[0]
-train_features = np.array(all_features[:n_train].values)
-test_features = np.array(all_features[n_train:].values)
-train_labels = np.array(train_data.SalePrice.values).reshape(-1, 1)
+train_features = np.array(all_features[:n_train].values, dtype=np.float32)
+test_features = np.array(all_features[n_train:].values, dtype=np.float32)
+train_labels = np.array(train_data.SalePrice.values, dtype=np.float32).reshape(-1, 1)
 ```
 
 ## Training
````
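The explicit `dtype=np.float32` above matters because, after one-hot encoding categorical columns, `DataFrame.values` may come back as `float64` or even `object` depending on the pandas version and column mix, and MXNet trains in `float32`. A small sketch of the same conversion with plain NumPy and pandas; the frame and column names below are made up for illustration, not the Kaggle data:

```python
import numpy as np
import pandas as pd

# Why dtype=np.float32 in the hunk above: one-hot encoding can leave
# DataFrame.values as float64 or object, so the cast normalizes it.
# Hypothetical columns standing in for the Kaggle house-price features.
all_features = pd.get_dummies(
    pd.DataFrame({'LotArea': [8450.0, 9600.0, 11250.0],
                  'MSZoning': ['RL', 'RM', 'RL']}))
features = np.array(all_features.values, dtype=np.float32)
```

The same cast applies to the labels, with `.reshape(-1, 1)` turning them into a column vector for the loss computation.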

0 commit comments
