@@ -52,19 +52,8 @@ class TabularDataset(datasets._ColumnNamesDataset):
     my_dataset = aiplatform.TabularDataset.create(
         display_name="my-dataset", gcs_source=['gs://path/to/my/dataset.csv'])
     ```
-
-    The following code shows you how to create and import a tabular
-    dataset in two distinct steps.
-
-    ```py
-    my_dataset = aiplatform.TextDataset.create(
-        display_name="my-dataset")
-
-    my_dataset.import(
-        gcs_source=['gs://path/to/my/dataset.csv']
-        import_schema_uri=aiplatform.schema.dataset.ioformat.text.multi_label_classification
-    )
-    ```
+    Unlike unstructured datasets, a tabular dataset is created and imported
+    in a single step.
 
     If you create a tabular dataset with a pandas
     [`DataFrame`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html),
@@ -108,10 +97,11 @@ def create(
             Optional. The URI to one or more Google Cloud Storage buckets that contain
             your datasets. For example, `str: "gs://bucket/file.csv"` or
             `Sequence[str]: ["gs://bucket/file1.csv",
-            "gs://bucket/file2.csv"]`.
+            "gs://bucket/file2.csv"]`. Either `gcs_source` or `bq_source` must be specified.
         bq_source (str):
             Optional. The URI to a BigQuery table that's used as an input source. For
-            example, `bq://project.dataset.table_name`.
+            example, `bq://project.dataset.table_name`. Either `gcs_source`
+            or `bq_source` must be specified.
         project (str):
             Optional. The name of the Google Cloud project to which this
             `TabularDataset` is uploaded. This overrides the project that