
[Inference Providers] isolate image-to-image payload build for HF Inference API #1439


Merged: 4 commits into main from fix-image-to-image, May 14, 2025

Conversation

hanouticelina (Contributor):

This PR is a prerequisite for #1427.

It refactors the payload construction for hf-inference by isolating it into a separate async function. Adding a new async function to build the payload is necessary because HFInferenceImageToImageTask.preparePayload cannot be made async, yet the payload construction requires asynchronous operations. A similar pattern was already implemented for automaticSpeechRecognition with the fal provider.
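
For context, here is a minimal sketch of the pattern in TypeScript. All names below (`ImageToImageArgs`, `blobToBase64`, `buildImageToImagePayload`) are hypothetical; the actual helpers and types in huggingface.js may differ.

```ts
// Hypothetical sketch of the pattern described above, not the actual
// huggingface.js implementation. The sync preparePayload() interface is
// left untouched, while the async work (e.g. base64-encoding the input
// image) lives in a separate async builder that the request pipeline
// awaits before dispatching.

interface ImageToImageArgs {
  inputs: Blob;
  parameters?: Record<string, unknown>;
}

// Assumed helper: converts a Blob to a base64 string. This is the kind of
// asynchronous step a synchronous preparePayload() cannot perform.
async function blobToBase64(blob: Blob): Promise<string> {
  const buffer = await blob.arrayBuffer();
  return Buffer.from(buffer).toString("base64");
}

// Separate async payload builder, mirroring the precedent mentioned above
// for automaticSpeechRecognition with fal.
async function buildImageToImagePayload(
  args: ImageToImageArgs
): Promise<Record<string, unknown>> {
  return {
    ...args.parameters,
    inputs: await blobToBase64(args.inputs),
  };
}
```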

@Wauplin (Contributor) left a comment:

(high-level feedback)

@SBrandeis
Copy link
Contributor

> HFInferenceImageToImageTask.preparePayload cannot be made async

I don't understand - why can't it be made async?

@zeke (Contributor) commented May 13, 2025:

Hey folks. Looking forward to getting this shipped to unblock #1427 -- anything I can do to help?

@SBrandeis (Contributor) left a comment:

OK to merge to unblock downstream tasks.

I'm not a fan of having two different functions for the same purpose, and I'd advocate for making preparePayload async instead.

I also understand that making it async would be a big, breaking API change, so let's merge as is.

hanouticelina merged commit 369d105 into main on May 14, 2025 (5 checks passed).
hanouticelina deleted the fix-image-to-image branch on May 14, 2025 at 12:28.
@Wauplin (Contributor) commented May 14, 2025:

> I don't understand - why can't it be made async?

Mainly because we want makeRequestOptionsFromResolvedModel to be a sync method so it can be called in inference snippets, for instance. It's kind of a weak reason, but on the other hand, breaking this would be quite painful. I agree this solution is not ideal, but I can't think of a better one right now.
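
To illustrate the constraint, a hypothetical sketch (not the actual huggingface.js code; the function bodies and URL are illustrative only): snippet generation is a synchronous code path, so the request-option builder it calls has to stay synchronous as well.

```ts
// Hypothetical sketch: a sync snippet generator calling a sync
// request-option builder. If the builder became async, this generator
// (and every other synchronous caller) would have to be rewritten to
// await it -- the breaking change being avoided here.

interface RequestOptions {
  url: string;
  headers: Record<string, string>;
}

function makeRequestOptionsSketch(model: string): RequestOptions {
  return {
    // Illustrative URL only, not the real endpoint.
    url: `https://example.invalid/models/${model}`,
    headers: { "Content-Type": "application/json" },
  };
}

function generateSnippet(model: string): string {
  // A sync call site: no `await` is possible here without making the
  // whole snippet-generation path async.
  const options = makeRequestOptionsSketch(model);
  return `await fetch("${options.url}", { method: "POST" });`;
}
```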
