
Commit 767ac66

add flag
add flag add flag lint lint
1 parent 0c37a21 commit 767ac66

2 files changed: +20 −1 lines changed


python/docs/source/migration_guide/pyspark_upgrade.rst

Lines changed: 1 addition & 1 deletion

@@ -22,8 +22,8 @@ Upgrading PySpark
 Upgrading from PySpark 4.0 to 4.1
 ---------------------------------
 
+* In Spark 4.1, ``DataFrame['name']`` on Spark Connect Python Client no longer eagerly validates the column name. To restore the legacy behavior, set the ``PYSPARK_VALIDATE_COLUMN_NAME_LEGACY`` environment variable to ``1``.
 * In Spark 4.1, Arrow-optimized Python UDF supports UDT input / output instead of falling back to the regular UDF. To restore the legacy behavior, set ``spark.sql.execution.pythonUDF.arrow.legacy.fallbackOnUDT`` to ``true``.
-
 * In Spark 4.1, unnecessary conversion to pandas instances is removed when ``spark.sql.execution.pythonUDTF.arrow.enabled`` is enabled. As a result, the type coercion changes when the produced output has a schema different from the specified schema. To restore the previous behavior, enable ``spark.sql.legacy.execution.pythonUDTF.pandas.conversion.enabled``.
 
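As a usage sketch of the flag this commit introduces: the environment variable must be set in the driver process before the lazy `DataFrame['name']` lookup runs. The DataFrame access shown is illustrative only (it assumes a live Spark Connect session, which is not created here).

```python
import os

# Opt back into the legacy (eager) column-name validation by exporting the
# flag introduced in this commit before accessing DataFrame columns.
os.environ["PYSPARK_VALIDATE_COLUMN_NAME_LEGACY"] = "1"

# With the flag set, an invalid name such as df["no_such_column"] is checked
# eagerly at access time instead of failing later at the first action.
```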

python/pyspark/sql/connect/dataframe.py

Lines changed: 19 additions & 0 deletions

@@ -44,6 +44,7 @@
 )
 
 import copy
+import os
 import sys
 import random
 import pyarrow as pa
@@ -1732,6 +1733,24 @@ def __getitem__(
                 )
             )
         else:
+            # TODO: revisit classic Spark's Dataset.col
+            # if (sparkSession.sessionState.conf.supportQuotedRegexColumnName) {
+            #     colRegex(colName)
+            # } else {
+            #     ConnectColumn(addDataFrameIdToCol(resolve(colName)))
+            # }
+
+            # validate the column name
+            if os.environ.get("PYSPARK_VALIDATE_COLUMN_NAME_LEGACY") == "1" and not hasattr(
+                self._session, "is_mock_session"
+            ):
+                from pyspark.sql.connect.types import verify_col_name
+
+                # Try our best to verify the column name against the cached schema.
+                # If that fails, fall back to server-side validation.
+                if not verify_col_name(item, self._schema):
+                    self.select(item).isLocal()
+
             return self._col(item)
         elif isinstance(item, Column):
             return self.filter(item)
