
Add opacity to stroke and fill colors, fix/enhance RGB to RGBA. #8


Open · wants to merge 7 commits into main

Conversation


@sfinktah commented Jun 12, 2025

Hi, lovely simple but effective node you have. I was using (and had modified) https://github.com/sfinktah/ComfyUI_Comfyroll_CustomNodes, but it turned out that for all its complexity, it couldn't label videos. Your node was beautifully simple after all that.

I wanted opacity on my outlined text (or stroked, as you call it).

[screenshot: vlcsnap-2025-06-12-21h58m31s640]

In the process of implementing it a second time, I found out that opacity is fully supported by PIL text drawing, as long as you are writing to an RGBA image. Since the image we get is not RGBA, we quickly convert it to RGBA just long enough to composite a transparent layer (containing the possibly-transparent text/outline), then convert it back to whatever mode it was in, for a tidy return.
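
For reference, a minimal sketch of that compositing approach (function and parameter names here are illustrative, not the actual node code):

```python
from PIL import Image, ImageDraw, ImageFont

def draw_text_with_opacity(img, text, xy, font_path, size,
                           fill=(255, 255, 255, 128),
                           stroke_fill=(0, 0, 0, 64), stroke_width=2):
    original_mode = img.mode
    base = img.convert("RGBA")                             # work in RGBA temporarily
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))   # fully transparent layer
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.truetype(font_path, size)
    # Alpha in fill/stroke_fill is honoured because we draw onto an RGBA overlay
    draw.text(xy, text, font=font, fill=fill,
              stroke_width=stroke_width, stroke_fill=stroke_fill)
    out = Image.alpha_composite(base, overlay)
    return out.convert(original_mode)                      # tidy return in the original mode
```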

Along the way, I fixed your RGB hex decoder (it was decoding #123 as #123123, rather than #112233) and enhanced it to also accept RGBA hex values (#8881 or #88888811).
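
A rough sketch of how such a parser can behave (not the PR's exact code): 3- and 4-digit forms expand each digit, and an optional alpha nibble/byte comes back as a fourth channel.

```python
def parse_hex_color(s, default_alpha=255):
    s = s.lstrip("#")
    if len(s) in (3, 4):                       # #RGB or #RGBA: "#123" -> "112233"
        s = "".join(c * 2 for c in s)
    if len(s) == 6:
        r, g, b = (int(s[i:i + 2], 16) for i in (0, 2, 4))
        return r, g, b, default_alpha
    if len(s) == 8:
        return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4, 6))
    raise ValueError(f"Unrecognised hex colour: #{s}")
```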

However, I had already created an extra input (stroke_opacity), which the stroke defaults to if the alpha channel on the stroke color isn't set.

That extra input could be removed, or another added for fill opacity.
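
Purely to illustrate the fallback, building on the parser sketch above (input and variable names are assumed, not the PR's actual code):

```python
# If the hex value carried no explicit alpha, fall back to the stroke_opacity input
r, g, b, a = parse_hex_color(stroke_color, default_alpha=None)
if a is None:
    a = int(round(stroke_opacity * 255))
stroke_rgba = (r, g, b, a)
```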

I also changed the way you handle font names to the way it's done in comfyroll_customnodes: a fonts subdirectory with a "dropdown" list. That has the disadvantage of not letting you type arbitrary font names you have installed, but that's something that never seems to work right for me anyway. If you don't agree with that change, I can revert that part.
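
Roughly what that looks like, assuming the usual ComfyUI conventions (directory and field names are illustrative):

```python
import os

# Scan a fonts/ subdirectory next to the node and expose the filenames as choices
FONT_DIR = os.path.join(os.path.dirname(__file__), "fonts")
FONT_FILES = sorted(
    f for f in os.listdir(FONT_DIR)
    if f.lower().endswith((".ttf", ".otf"))
)

# In the node's INPUT_TYPES, the font becomes a dropdown instead of a free-text field:
#   "font_name": (FONT_FILES,)
```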

In the future I might want to do something based on some Python code I wrote to read .PSD documents and apply them layer by layer, with text layers rendered using TTF fonts. This involved determining the exact conversion of Photoshop's leading and line-spacing parameters to PIL-based functions. I recall also doing a QR code generator, but that may have been a bit of a hack.

This is the kind of output it generated when mixed with live data. I think something similar might be of use as a ComfyUI node.

[image: example output]

That output was generated from a .PSD (via a JSON intermediate description):

[image]

LMK if you'd be interested in something like this.
