Commit 0ee9fc1

GCP Integration Tests Catch Failures (#580)
## Summary

Right now, the GCP integration tests do not recognize failures from Dataproc and keep running subsequent commands. This has led to the `group_by_upload` job hanging indefinitely. This change ensures that if the Dataproc job state is "ERROR", we fail immediately.

## Checklist

- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit

- **Bug Fixes**
  - Enhanced job status checking: both empty and explicitly error-indicating states now trigger a consistent failure response, resulting in more robust job evaluation.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
1 parent c41466d commit 0ee9fc1
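The summary above hinges on how the job state is extracted. As a minimal sketch (assuming `gcloud dataproc jobs describe --format=flattened` emits `key: value` lines, with the state line shaped like `status.state: <STATE>`), this shows why the `awk '{print $NF}'` step matters: `grep` alone keeps the key prefix, so the result can never equal `"ERROR"`.

```shell
# Assumed line shape from the flattened gcloud output:
line="status.state: ERROR"

# grep alone keeps the whole matching line, key prefix included:
state_with_key=$(echo "$line" | grep "status.state:")

# piping through awk keeps only the last field, i.e. the state itself:
state=$(echo "$line" | grep "status.state:" | awk '{print $NF}')

echo "$state_with_key"   # → status.state: ERROR  (never equals "ERROR")
echo "$state"            # → ERROR
```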

File tree

1 file changed (+2, −2 lines)

scripts/distribution/run_gcp_quickstart.sh

Lines changed: 2 additions & 2 deletions
```diff
@@ -55,10 +55,10 @@ function check_dataproc_job_state() {
     exit 1
   fi
   echo -e "${GREEN} <<<<<<<<<<<<<<<<-----------------JOB STATUS----------------->>>>>>>>>>>>>>>>>\033[0m"
-  JOB_STATE=$(gcloud dataproc jobs describe $JOB_ID --region=us-central1 --format=flattened | grep "status.state:")
+  JOB_STATE=$(gcloud dataproc jobs describe $JOB_ID --region=us-central1 --format=flattened | grep "status.state:" | awk '{print $NF}')
   echo $JOB_STATE
   # TODO: this doesn't actually fail. need to fix.
-  if [ -z "$JOB_STATE" ]; then
+  if [ -z "$JOB_STATE" ] || [ "$JOB_STATE" == "ERROR" ]; then
     echo -e "${RED} <<<<<<<<<<<<<<<<-----------------JOB FAILED!----------------->>>>>>>>>>>>>>>>>\033[0m"
     exit 1
   fi
```
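The branching introduced by the diff can be sketched as a standalone function. The `gcloud` pipeline is replaced by a parameter here (a hypothetical `check_state` helper, not part of the script) so the empty-vs-ERROR logic can be exercised without GCP access:

```shell
# Sketch of the fixed check: fail both when the state is empty
# (e.g. grep matched nothing) and when it is an explicit ERROR.
check_state() {
  local job_state="$1"
  if [ -z "$job_state" ] || [ "$job_state" = "ERROR" ]; then
    echo "JOB FAILED"
    return 1
  fi
  echo "JOB OK: $job_state"
  return 0
}

check_state "DONE"           # → JOB OK: DONE
check_state "ERROR" || true  # → JOB FAILED (non-zero return)
```

Before this change, only the empty case was handled, so a job that Dataproc reported as `ERROR` sailed past the check and the script kept running.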
