Sketch code generator #10824
Conversation
🪼 branch checks and previews
Install Gradio from this PR:
pip install https://gradio-pypi-previews.s3.amazonaws.com/8408294762bce28ea5aa66145dfeee681f76e18b/gradio-5.21.0-py3-none-any.whl

Install Gradio Python Client from this PR:
pip install "gradio-client @ git+https://github.com/gradio-app/gradio@8408294762bce28ea5aa66145dfeee681f76e18b#subdirectory=client/python"

Install Gradio JS Client from this PR:
npm install https://gradio-npm-previews.s3.amazonaws.com/8408294762bce28ea5aa66145dfeee681f76e18b/gradio-client-1.13.1.tgz

Use Lite from this PR:
<script type="module" src="https://gradio-lite-previews.s3.amazonaws.com/8408294762bce28ea5aa66145dfeee681f76e18b/dist/lite.js"></script>
🦄 change detected
This Pull Request includes changes to the following packages, with the following changelog entry. Maintainers or the PR author can modify the PR title to modify this entry.
@aliabid94 works great! Just a few nits on the usage:
Likewise, if the token doesn't have permissions to call inference providers and you get an error, it might be a good idea to catch and explain that error.
@@ -102,6 +102,7 @@ def __init__(
        self.width = width
        self.color_map = color_map
        self.show_fullscreen_button = show_fullscreen_button
        self._value_description = "a tuple of type [image: str, annotations: list[tuple[mask: str, label: str]]] where 'image' is the path to the base image and 'annotations' is a list of tuples where each tuple has a 'mask' image filepath and a corresponding label."
Not a huge fan of adding `._value_description`, because of the increased maintenance involved, particularly as the usage of this parameter is quite decoupled from its place in the code: we are adding these to component classes but using them in a completely different part of the code (Sketch). Is it truly necessary -- can't the LLM understand how to combine the docstring with the type of the component?
full_prompt += f"""- index {index} should be: {get_value_description(o[0], o[1])}.\n"""
full_prompt += f"""The function should perform the following task: {prompt}\n"""
full_prompt += "Return only the python code of the function in your response. Do not wrap the code in backticks or include any description before the response. Return ONLY the function code. Start your response with the header provided. Include any imports inside the function.\n"
full_prompt += """If using an LLM would help with the task, use the huggingface_hub library. For example:
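For context, the lines above build one prompt string piece by piece. A simplified sketch of that assembly (`build_full_prompt` and the plain list of descriptions are hypothetical stand-ins for the PR's inline code and its `get_value_description` helper):

```python
def build_full_prompt(task: str, output_descriptions: list[str]) -> str:
    """Assemble the code-generation prompt from the task and per-output value descriptions."""
    full_prompt = ""
    # One line per output, telling the LLM what value each return index must hold.
    for index, description in enumerate(output_descriptions):
        full_prompt += f"- index {index} should be: {description}.\n"
    full_prompt += f"The function should perform the following task: {task}\n"
    full_prompt += (
        "Return only the python code of the function in your response. "
        "Do not wrap the code in backticks or include any description "
        "before the response.\n"
    )
    return full_prompt
```
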
I think it's worth adding the other supported tasks here: https://huggingface.co/docs/huggingface_hub/en/guides/inference#supported-providers-and-tasks
I'm sure people will try things like image generation, which right now causes the generated functions to use obscure dependencies.
Done
Done.
Done.
Done.
I don't love it either, but the description of value and type together aren't enough. By default, it just uses the type of value if no _value_description is provided (see how gr.Textbox doesn't need a _value_description, for example, because it is enough to say that value will just be a string). But for other components, many other kwargs are needed to know the final type. For example, it is necessary for the LLM to know the choices of a dropdown, so that it knows the values the function can receive (or return, if the dropdown is an output), and it needs to know if the dropdown is multiselect to know whether to expect/return a list or a single string.
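To illustrate the dropdown point, here is a minimal sketch of what a `_value_description` could encode (a hypothetical free-standing helper, not gradio's API): the same component type yields a different value contract depending on `choices` and `multiselect`.

```python
def dropdown_value_description(choices: list[str], multiselect: bool = False) -> str:
    """Describe the value a Dropdown would pass to (or expect from) the generated function."""
    options = ", ".join(repr(c) for c in choices)
    if multiselect:
        # Multiselect dropdowns deal in lists of selected options.
        return f"a list of strings, each one of: {options}"
    # Single-select dropdowns deal in one string.
    return f"a single string, one of: {options}"
```
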
Ok, I plan on adding most tasks in a follow-up PR, but I've added image generation for now.
Nice @aliabid94! Just tried out a couple of examples (image gen via API, chatbot via inference provider, simple image editing) and the experience was great. Some feedback on the experience:
def chat(multimodaltextbox, chatbot):
    import huggingface_hub
    from io import StringIO
    import sys

    # Initialize the inference client
    client = huggingface_hub.InferenceClient()
    # Prepare the input for the chat model
    messages = chatbot + [{'role': 'user', 'content': multimodaltextbox['text']}]

    # Function to capture streaming output
    class Capturing(list):
        def __enter__(self):
            self._stdout = sys.stdout
            sys.stdout = self._stringio = StringIO()
            return self
        def __exit__(self, *args):
            self.extend(self._stringio.getvalue().splitlines())
            del self._stringio  # free up some memory
            sys.stdout = self._stdout

    # Capture the streaming response
    with Capturing() as output:
        response = client.chat_completion(messages, stream=True)

    # Append each streamed response to the chatbot
    for line in output:
        chatbot.append({'role': 'assistant', 'content': line})

    # Clear the textbox for the next message
    cleared_textbox = {'text': '', 'files': []}
    return chatbot, cleared_textbox
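Worth noting about the generated function above: `chat_completion(stream=True)` returns an iterator of chunks and prints nothing, so capturing stdout never sees any tokens. A sketch of consuming the stream directly (the chunk shape follows huggingface_hub's streaming API; `stream_reply` itself is a hypothetical helper, not code from this PR):

```python
def stream_reply(client, messages):
    """Accumulate a streamed chat completion into a single assistant reply."""
    reply = ""
    # Each chunk carries an incremental delta, mirroring the OpenAI-style schema.
    for chunk in client.chat_completion(messages, stream=True):
        delta = chunk.choices[0].delta.content  # may be None on some chunks
        if delta:
            reply += delta
    return reply
```

Taking `client` as a parameter keeps the helper testable with a stub in place of a live `huggingface_hub.InferenceClient`.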
Done.
I think there's still a bit of work to be done on improving the prompting; I'll do these in a follow-up PR.
* changes
* changes
* add changeset
* changes
* changes
* changes
* changes
* changes
* changes
* changes

Co-authored-by: Ali Abid <[email protected]>
Co-authored-by: gradio-pr-bot <[email protected]>
Co-authored-by: Abubakar Abid <[email protected]>
* WIP
* Fix
* roughdraft
* Workinig
* query params
* add changeset
* modify
* revert
* lint
* Code
* Fix
* lint
* Add code
* Fix
* Fix python unit tests
* Update `markupsafe` dependency version (#10820)
* Adds a watermark parameter to `gr.Chatbot` that is added to copied text (#10814)
* Fix gr.load_chat (#10829)
* Fix typo in docstring of Request class in route_utils.py (#10833)
* Fix cell menu not showing in non-editable dataframes (#10819)
* Sketch code generator (#10824)
* chore: update versions (#10811)
* minor fixes
* fix
* Add guide
* Minor tweaks
* Address comments

Co-authored-by: gradio-pr-bot <[email protected]>
Co-authored-by: Abubakar Abid <[email protected]>
Co-authored-by: aliabid94 <[email protected]>
Co-authored-by: Ali Abid <[email protected]>
Co-authored-by: Abdesselam Benameur <[email protected]>
Co-authored-by: Hannah <[email protected]>
Co-authored-by: Gradio PR Bot <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Now we can use code generation directly in gradio sketch.
I added a `_value_description` property to components so they can describe what values they can have, which the LLM code generator needs to know. This was necessary because often it was not enough to just read the docstring of a component: the expected value often depends on other kwargs (especially `type=`).
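As a concrete illustration of the `type=` point, a sketch of how one kwarg changes a component's value contract (a hypothetical helper, not gradio's internals; the three `type` values mirror `gr.Image`'s documented options):

```python
def image_value_description(type: str) -> str:
    """Describe the value an Image-like component passes to the function, per its `type` kwarg."""
    descriptions = {
        "filepath": "a str filepath to an image",
        "numpy": "a numpy.ndarray of shape (height, width, 3)",
        "pil": "a PIL.Image.Image object",
    }
    return descriptions[type]
```

The component's docstring alone cannot distinguish these cases, which is what the value-description property is meant to capture.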