
Source Facebook Marketing: Attempt to retry failing jobs that are already split to minimum size #12390


Merged · 6 commits · May 3, 2022
@@ -58,6 +58,9 @@ class Status(str, Enum):
 class AsyncJob(ABC):
     """Abstract AsyncJob base class"""
 
+    # max attempts for a job before erroring out
+    max_attempts: int = 10  # TODO: verify a sane number for this
+
Contributor commented:

Am I reading this correctly that we currently have zero retry logic for async jobs, and that we just keep splitting until the job fails? If so, this change makes a lot of sense.

I'd recommend setting this to something like 5; if something has failed 9 times, it seems unlikely to succeed on the 10th (but you never know when it comes to FB :)

Phlair (Contributor, Author) commented on Apr 27, 2022:

There is retry logic in async_job_manager.py, but because we call split_job() as soon as we hit attempt_number 2, we end up throwing this lowest-split-level error after retrying that call only once (I think), rather than the specified 20 times.

I'm going to confirm that this is what's happening and, if so, refactor this PR so that instead of adding new retry logic in AsyncJob, everything ties together properly with the async_job_manager.
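For context, here is a minimal sketch of the manager-side interaction described above, assuming hypothetical names (MAX_NUMBER_OF_ATTEMPTS, check_jobs_and_restart); it is not the actual async_job_manager.py code:

```python
# Hypothetical sketch of the manager-side retry/split interaction described
# above -- NOT the actual async_job_manager.py implementation. It illustrates
# why splitting on the second attempt can short-circuit the retry budget:
# a job already at the smallest split size raises from split_job() on its
# second failure, long before 20 restarts have been tried.

MAX_NUMBER_OF_ATTEMPTS = 20  # assumed manager-level retry budget


def check_jobs_and_restart(jobs: list) -> list:
    """Restart failed jobs, splitting them after their second failed attempt."""
    updated_jobs = []
    for job in jobs:
        if not job.failed:
            updated_jobs.append(job)
        elif job.attempt_number >= 2:
            # Without the fallback added in this PR, a job at minimum split
            # size raises RuntimeError here instead of being retried again.
            updated_jobs.extend(job.split_job())
        else:
            job.restart()
            updated_jobs.append(job)
    return updated_jobs
```

Under this flow, a failing job is split on its second attempt, so a job already at minimum split size never gets near the 20-attempt budget — which is the gap this PR closes.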


     def __init__(self, api: FacebookAdsApi, interval: pendulum.Period):
         """Init generic async job

@@ -158,7 +161,15 @@ def split_job(self) -> List["AsyncJob"]:
         new_jobs = []
         for job in self._jobs:
             if job.failed:
-                new_jobs.extend(job.split_job())
+                try:
+                    new_jobs.extend(job.split_job())
+                except RuntimeError as split_limit_error:
+                    logger.error(split_limit_error)
+                    if job.attempt_number > job.max_attempts:
+                        raise RuntimeError(f"{job} at smallest split size and still failing after {job.max_attempts} retries.")
+                    logger.info(f'can\'t split "{job}" any smaller, attempting to restart instead.')
+                    job.restart()
+                    new_jobs.append(job)
             else:
                 new_jobs.append(job)
         return new_jobs
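To see the fallback end to end, here is a self-contained, illustrative sketch that reproduces the control flow of the change above; FakeJob and the module-level logger are stand-ins, not connector code:

```python
# Stand-alone demo of the restart-instead-of-split fallback above.
# FakeJob is a stub; the real AsyncJob/ParentAsyncJob live in the connector.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class FakeJob:
    max_attempts = 10

    def __init__(self):
        self.failed = True
        self.attempt_number = 1
        self.restarts = 0

    def split_job(self):
        # Already at the smallest granularity, mirroring the RuntimeError
        # the PR expects from a job that cannot be split any further.
        raise RuntimeError("can't split job any smaller")

    def restart(self):
        self.restarts += 1
        self.attempt_number += 1


def split_job(jobs):
    # Same control flow as the ParentAsyncJob.split_job() change above.
    new_jobs = []
    for job in jobs:
        if job.failed:
            try:
                new_jobs.extend(job.split_job())
            except RuntimeError as split_limit_error:
                logger.error(split_limit_error)
                if job.attempt_number > job.max_attempts:
                    raise RuntimeError(f"{job} at smallest split size and still failing after {job.max_attempts} retries.")
                logger.info(f'can\'t split "{job}" any smaller, attempting to restart instead.')
                job.restart()
                new_jobs.append(job)
        else:
            new_jobs.append(job)
    return new_jobs


job = FakeJob()
for _ in range(FakeJob.max_attempts):
    split_job([job])  # each call restarts the failing job in place
assert job.restarts == FakeJob.max_attempts

try:
    split_job([job])  # the next failure exceeds max_attempts and errors out
except RuntimeError as exc:
    print(exc)
```

Note that restart() increments attempt_number, so repeated failures at minimum split size eventually trip the max_attempts guard instead of looping forever.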
@@ -415,5 +415,22 @@ def test_split_job(self, parent_job, grouped_jobs, mocker):
             else:
                 job.split_job.assert_not_called()
 
+    def test_split_job_smallest(self, parent_job, grouped_jobs):
+        grouped_jobs[0].max_attempts = InsightAsyncJob.max_attempts
+        grouped_jobs[0].failed = True
+        grouped_jobs[0].split_job.side_effect = RuntimeError("Mocking smallest size")
+
+        count = 0
+        while count < InsightAsyncJob.max_attempts:
+            split_jobs = parent_job.split_job()
+            assert len(split_jobs) == len(
+                grouped_jobs
+            ), "attempted to split job at smallest size, so it should just restart, leaving the same number of jobs"
+            grouped_jobs[0].attempt_number += 1
+            count += 1
+
+        with pytest.raises(RuntimeError):  # now that we've hit max_attempts, we should error out
+            parent_job.split_job()
+
     def test_str(self, parent_job, grouped_jobs):
         assert str(parent_job) == f"ParentAsyncJob({grouped_jobs[0]} ... {len(grouped_jobs) - 1} jobs more)"