
T5xxl-Unchained Lora + Workflow


Type: LoRA

Published: Jul 2, 2025

Base Model: HiDream

Hash (AutoV2): EBA440DCBF

Flux fix Update - 7/2/25 - evening:

There was a bug with the Flux loader for t5xxl, so I jiggered it and got it working for today's evening push.

Will likely need to do something similar for SD3 and SD35 as well.

Workflow Release - 7/2/25 - morning:

The current workflow requires the comfy-clip-shunts node addon. You don't NEED to use the shunts themselves, but the addon includes the clip loaders that support the t5-unchained.

These loaders work because they essentially replace the original SD function calls with calls that route directly to the unchained encoder.
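As a rough illustration of that replacement pattern only (toy classes, not the actual comfy-clip-shunts or ComfyUI code), swapping a bound method is enough to reroute every downstream call:

```python
# Toy sketch of the "replace the original function call" pattern.
# All names here are made up for illustration.
class StockClipEncoder:
    def encode(self, text):
        return f"stock({text})"

class UnchainedEncoder:
    def encode(self, text):
        return f"unchained({text})"

def reroute(pipeline, replacement):
    # Swap the bound method so existing callers of pipeline.encode
    # transparently hit the replacement encoder instead.
    pipeline.encode = replacement.encode
    return pipeline

pipe = reroute(StockClipEncoder(), UnchainedEncoder())
print(pipe.encode("a cat"))  # unchained(a cat)
```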

For the shunts, you can use standard BERT uncased or BERT cased instead of beatrix, but the results won't be as accurate.

Shunt code with unchained support:

https://github.com/AbstractEyes/comfy-clip-shunts/tree/dev

Full untrained unchained model:

https://huggingface.co/AbstractPhil/t5xxl-unchained

I have a ton of clips loaded here. The omega 24s are very good for this, since they're closer to the original vit-l-14 and laion vit-bigG.

https://huggingface.co/AbstractPhil/clips

The workflow is augmented and improved by res4lyf; I would suggest installing it.

https://github.com/ClownsharkBatwing/RES4LYF


Special thanks to Kaoru8 for their base t5xxl-unchained conversion and repo. Their model has no additional training on top of the base T5, but the conversion works, and the training I fed it clearly took.

https://huggingface.co/Kaoru8/T5XXL-Unchained


Not bad... a fair working sd35 prototype in a week.

By this time next week I hope to have the Flux variation fully operational and the clip suite in prototype stages.

I genuinely need some time to rest. It seems this was very taxing on my mental health, so I'm going to take some time to recover and regenerate.

I'm going to take time to work on my tools and do smaller finetunes rather than full finetunes. These larger finetunes are expensive and very taxing on the body and mind when the programs don't function correctly.


Yes, this is a T5 lora: the lora_te3 layers are treated as the lora for the "T5xxl" text encoder. I've extracted the lora_te3 layers from the original sd35-trained lora and resaved them. It's a simple process really, so there's no telling what sort of defects it has. I doubt it'll load in anything but ComfyUI or Forge, but here it is.

You can have a conversation with the loaded T5xxl-unchained with the lora weights applied, using standard LLM inference if you like. It summarizes pretty well.

It's an 800 MB lora. The process was a lot easier than I thought it would be.

https://huggingface.co/AbstractPhil/SD35-SIM-V1/tree/main/REFIT-V1

from safetensors.torch import load_file, save_file

# Paths to the full sd35 lora and the stripped T5-only output
input_path = "I:/AIImageGen/AUTOMATIC1111/stable-diffusion-webui/models/Lora/test/sd35-sim-v1-t5-refit-v2-Try2-e3-step00003000.safetensors"
output_path = "I:/AIImageGen/AUTOMATIC1111/stable-diffusion-webui/models/Lora/test/t5xxl-unchained-lora-v1.safetensors"

model = load_file(input_path)

# Keep only the lora_te3 (T5xxl) tensors; drop TE1, TE2, and UNet
filtered = {k: v for k, v in model.items()
            if not k.startswith(("lora_te1", "lora_te2", "lora_unet"))}
print(f"Filtered out {len(model) - len(filtered)} tensors.")
print(f"Remaining tensors: {list(filtered.keys())}")

# Save the result
save_file(filtered, output_path)
print(f"✅ Saved T5-only lora (TE1/TE2/UNet stripped) to:\n{output_path}")

Rip them yourself if you want. The newest T5 is still training.
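If you do rip them yourself, the prefix filter is easy to sanity-check on dummy keys first, no real checkpoint needed (the key names below are made up to mimic an sd35 lora):

```python
# Dummy state-dict keys standing in for a real sd35 lora checkpoint.
dummy = {
    "lora_te1.text_model.layers.0.weight": 0,
    "lora_te2.text_model.layers.0.weight": 0,
    "lora_te3.encoder.block.0.weight": 0,
    "lora_unet.down_blocks.0.weight": 0,
}
# Same predicate as the extraction script above.
kept = {k: v for k, v in dummy.items()
        if not k.startswith(("lora_te1", "lora_te2", "lora_unet"))}
print(list(kept))  # only the lora_te3 entry survives
```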

It requires the correct tokenizer and config for the T5xxl, plus the T5xxl model weights, to function.


Without the base t5xxl-unchained, its tokenizer, and the correct dimensions configuration, you will receive a size mismatch error.
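The failure mode is just a shape check: the stock config builds tensors for the original vocabulary, while the unchained checkpoint ships larger ones. A minimal pure-Python sketch of what the loader's check amounts to (the extended-vocab number here is a placeholder; the real dimensions live in config.json):

```python
# Minimal repro of the size-mismatch failure mode.
def load_weight(model_shape, checkpoint_shape):
    # Mimics the shape check a state-dict loader performs.
    if model_shape != checkpoint_shape:
        raise RuntimeError(
            f"size mismatch: model expects {model_shape}, "
            f"checkpoint provides {checkpoint_shape}")

ORIG_VOCAB = 32128   # stock t5xxl embedding rows
EXT_VOCAB = 69328    # placeholder for the unchained extended vocab
D_MODEL = 4096

try:
    load_weight((ORIG_VOCAB, D_MODEL), (EXT_VOCAB, D_MODEL))
except RuntimeError as e:
    print(e)
```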

You need the big T5xxl in fp16 or fp8. The lora was trained in fp16, so you'll get better results from the finetune with that. You can probably just tell ComfyUI or Forge to downscale it.

https://huggingface.co/AbstractPhil/t5xxl-unchained/resolve/main/t5xxl-unchained-f16.safetensors

When the clip-suite is ready, it'll automatically scale models in-program and allow hardware-level quantization hot-conversion (Q2, Q4, Q8, etc.) and saving within ComfyUI.

At that point you'll only need one model, and everything will just convert at runtime using the META C++ libs.
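For intuition, Q8-style hot-conversion amounts to rescaling each weight onto an 8-bit integer grid at load time. A back-of-the-envelope sketch in pure Python (not the compiled libs, and a symmetric scheme chosen for simplicity):

```python
# Symmetric Q8-style quantization: map floats onto [-127, 127]
# with one shared scale, then reconstruct on the fly.
def quantize_q8(values):
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.51, -1.27, 0.003, 0.9]
q, scale = quantize_q8(weights)
restored = dequantize(q, scale)
# each restored value is within one quantization step of the original
```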

https://huggingface.co/AbstractPhil/t5xxl-unchained/blob/main/config.json

https://huggingface.co/AbstractPhil/t5xxl-unchained/blob/main/tokenizer.json

To modify Forge, replace the files listed below with the config and tokenizer linked above; the only exception is sd3_conds.py, which needs the template contained in the code modified directly.

Make a backup of the original configs if you want, though it doesn't matter much: the t5xxl-unchained in its vanilla form behaves identically to the original t5xxl.

------------------------------------------------------------------------
configs
------------------------------------------------------------------------
modules/models/sd3/sd3_conds.py

backend/huggingface/stabilityai/stable-diffusion-3-medium-diffusers/text_encoder_3

backend/huggingface/black-forest-labs/FLUX.1-dev/text_encoder_2/config.json

backend/huggingface/black-forest-labs/FLUX.1-schnell/text_encoder_2/config.json
-------------------------------------------------------------------------
tokenizers
-------------------------------------------------------------------------
backend/huggingface/black-forest-labs/FLUX.1-dev/tokenizer_2/tokenizer.json

backend/huggingface/black-forest-labs/FLUX.1-schnell/tokenizer_2/tokenizer.json

backend/huggingface/stabilityai/stable-diffusion-3-medium-diffusers/tokenizer_3/tokenizer.json
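A small helper can do the swap and keep .bak copies of the stock files for you. The function below is a hypothetical convenience, not part of Forge; point it at the paths listed above in your own install:

```python
import shutil
from pathlib import Path

def swap_in(target: Path, replacement: Path) -> None:
    """Back up `target` once as `<name>.bak`, then overwrite it with
    `replacement` (e.g. the unchained config.json or tokenizer.json)."""
    backup = target.with_name(target.name + ".bak")
    if target.exists() and not backup.exists():
        shutil.copy2(target, backup)   # preserve the stock file once
    shutil.copy2(replacement, target)  # install the unchained file

# Usage (adjust paths to your Forge install and download location):
# swap_in(Path("backend/huggingface/black-forest-labs/FLUX.1-dev/text_encoder_2/config.json"),
#         Path("t5xxl-unchained/config.json"))
```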