t2iadapter_style_sd14v1 gives something strange #547

Closed
@AndreyRGW

Description

[screenshots attached: the generated images come out washed out / faded]

It gives this not only with these two images, but with others as well.

upd1: now it gives me an error:

Loaded state_dict from [F:\WBC\sdwb\extensions\sd-webui-controlnet\models\t2iadapter_style-fp16.safetensors]
Error running process: F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "F:\WBC\sdwb\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py", line 735, in process
    model_net = self.load_control_model(p, unet, model, lowvram)
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py", line 534, in load_control_model
    model_net = self.build_control_model(p, unet, model, lowvram)
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py", line 572, in build_control_model
    network = network_module(
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\adapter.py", line 81, in __init__
    self.control_model.load_state_dict(state_dict)
  File "F:\WBC\sdwb\venv\lib\site-packages\torch\nn\modules\module.py", line 2073, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Adapter:
        Missing key(s) in state_dict: "body.0.block1.weight", "body.0.block1.bias", "body.0.block2.weight", "body.0.block2.bias", "body.1.block1.weight", "body.1.block1.bias", "body.1.block2.weight", "body.1.block2.bias", "body.2.in_conv.weight", "body.2.in_conv.bias", "body.2.block1.weight", "body.2.block1.bias", "body.2.block2.weight", "body.2.block2.bias", "body.3.block1.weight", "body.3.block1.bias", "body.3.block2.weight", "body.3.block2.bias", "body.4.in_conv.weight", "body.4.in_conv.bias", "body.4.block1.weight", "body.4.block1.bias", "body.4.block2.weight", "body.4.block2.bias", "body.5.block1.weight", "body.5.block1.bias", "body.5.block2.weight", "body.5.block2.bias", "body.6.block1.weight", "body.6.block1.bias", "body.6.block2.weight", "body.6.block2.bias", "body.7.block1.weight", "body.7.block1.bias", "body.7.block2.weight", "body.7.block2.bias", "conv_in.weight", "conv_in.bias".
        Unexpected key(s) in state_dict: "ln_post.bias", "ln_post.weight", "ln_pre.bias", "ln_pre.weight", "proj", "style_embedding", "transformer_layes.0.attn.in_proj_bias", "transformer_layes.0.attn.in_proj_weight", "transformer_layes.0.attn.out_proj.bias", "transformer_layes.0.attn.out_proj.weight", "transformer_layes.0.ln_1.bias", "transformer_layes.0.ln_1.weight", "transformer_layes.0.ln_2.bias", "transformer_layes.0.ln_2.weight", "transformer_layes.0.mlp.c_fc.bias", "transformer_layes.0.mlp.c_fc.weight", "transformer_layes.0.mlp.c_proj.bias", "transformer_layes.0.mlp.c_proj.weight", "transformer_layes.1.attn.in_proj_bias", "transformer_layes.1.attn.in_proj_weight", "transformer_layes.1.attn.out_proj.bias", "transformer_layes.1.attn.out_proj.weight", "transformer_layes.1.ln_1.bias", "transformer_layes.1.ln_1.weight", "transformer_layes.1.ln_2.bias", "transformer_layes.1.ln_2.weight", "transformer_layes.1.mlp.c_fc.bias", "transformer_layes.1.mlp.c_fc.weight", "transformer_layes.1.mlp.c_proj.bias", "transformer_layes.1.mlp.c_proj.weight", "transformer_layes.2.attn.in_proj_bias", "transformer_layes.2.attn.in_proj_weight", "transformer_layes.2.attn.out_proj.bias", "transformer_layes.2.attn.out_proj.weight", "transformer_layes.2.ln_1.bias", "transformer_layes.2.ln_1.weight", "transformer_layes.2.ln_2.bias", "transformer_layes.2.ln_2.weight", "transformer_layes.2.mlp.c_fc.bias", "transformer_layes.2.mlp.c_fc.weight", "transformer_layes.2.mlp.c_proj.bias", "transformer_layes.2.mlp.c_proj.weight".
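
Judging by the key names, the checkpoint seems to hold the transformer-based style adapter weights (style_embedding, ln_pre/ln_post, transformer_layes.*), while the extension is constructing the plain convolutional Adapter, whose keys start with body.* and conv_in.*, so load_state_dict fails. A minimal sketch to dump which keys the .safetensors file actually contains (assumes the safetensors package is installed; the path is the one from the log):

```python
# Sketch: list the keys stored in the checkpoint to see which adapter
# architecture they belong to (assumes `safetensors` is installed).
from safetensors.torch import load_file

path = r"F:\WBC\sdwb\extensions\sd-webui-controlnet\models\t2iadapter_style-fp16.safetensors"
state_dict = load_file(path)

for name, tensor in sorted(state_dict.items()):
    print(f"{name:60s} {tuple(tensor.shape)}")

# Keys like "style_embedding", "ln_pre.*" and "transformer_layes.*" point to
# the transformer-based style adapter, not the convolutional Adapter whose
# keys start with "body.*" and "conv_in.*".
```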

upd2:
t2iadapter_style-fp16.safetensors - gives the error above
t2iadapter_style_sd14v1.pth - gives a faded image, as above (see the key comparison sketch below)
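
For a quick side-by-side check, the key sets of the two files can be compared; this is only a sketch, assuming both files sit in the extension's models folder and that torch and safetensors are available:

```python
# Sketch: compare the top-level key prefixes of the two checkpoints
# (assumed paths, taken from the log above).
import torch
from safetensors.torch import load_file

models = r"F:\WBC\sdwb\extensions\sd-webui-controlnet\models"
sd_safetensors = load_file(models + r"\t2iadapter_style-fp16.safetensors")
sd_pth = torch.load(models + r"\t2iadapter_style_sd14v1.pth", map_location="cpu")

# The .pth is assumed to store the state dict directly; if it is wrapped
# (e.g. under a "state_dict" key), unwrap it first.
if isinstance(sd_pth, dict) and "state_dict" in sd_pth:
    sd_pth = sd_pth["state_dict"]

def prefixes(sd):
    return sorted({k.split(".")[0] for k in sd})

print("safetensors:", prefixes(sd_safetensors))
print("pth:        ", prefixes(sd_pth))
```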

upd3:
my args:
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
set COMMANDLINE_ARGS=--xformers --medvram --no-half-vae --api --opt-channelslast

upd4:
Changing the weight for clip_vision does nothing; either 0 or 2 gives the same result.

I'm about to lose my mind :)
