FLUX Continuum (Modular Interface for ComfyUI)

  • Type: Workflows

  • Stats: 125 · 0 reviews

  • Published: Jun 19, 2025

  • Base Model: Flux.1 D

  • Usage Tips: Clip Skip: 1

  • Hash: AutoV2 66AE60662F

The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

ComfyUI Flux Continuum - Modular Interface

A modular workflow that brings order to the chaos of image generation pipelines.

πŸ“Ί Watch the Tutorial

πŸ”— GitHub: https://github.com/robertvoy/ComfyUI-Flux-Continuum

Updates

  • 1.7.0: Enhanced workflow and usability update πŸ“Ί Watch Video Update

    • Image Transfer Shortcut: Use Ctrl+Shift+C to copy images from Img Preview to Img Load (customizable in Settings > Keybinding > Image Transfer)

    • Configurable Model Router: Dynamic model selection with customizable JSON mapping for flexible workflows (see the illustrative sketch after this list)

    • Hint System: Interactive hint nodes provide contextual help throughout the workflow

    • Crop & Stitch: Enhanced inpainting/outpainting with automatic crop and stitch functionality

    • Smart Guidance: Automatic guidance value of 30 for inpainting, outpainting, canny, and depth operations

    • TeaCache Integration: Optional speed boost for all outputs (trades some quality for performance)

    • Improved Preprocessor Preview Logic: CN Input is used for previewing when ControlNet strength > 0, otherwise uses Img Load

    • Workflow Reorganization: Modules reordered for more logical flow

    • Redux Naming: IP Adapter renamed to Redux for consistency with BFL terminology
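
The router's actual mapping format ships with the node and is set up in the config panel; purely to illustrate the idea behind the Configurable Model Router (all keys and file names below are hypothetical, not the node's real schema), a mapping pairs each output type with the model it should load:

# Hypothetical sketch of a routing map; the real node defines its own JSON schema.
import json

ROUTES = json.loads("""
{
    "txt2img": {"unet": "flux1-dev.safetensors"},
    "canny":   {"unet": "flux1-canny-dev.safetensors"},
    "depth":   {"unet": "flux1-depth-dev.safetensors"}
}
""")

def route_model(output_type: str) -> str:
    # Unknown output types fall back to the default txt2img entry.
    return ROUTES.get(output_type, ROUTES["txt2img"])["unet"]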

Overview

ComfyUI Flux Continuum streamlines workflow management through a thoughtful dual-interface design:

  • Front-end: A consistent control interface shared across all modules

  • Back-end: Powerful, modular architecture for customisation and experimentation

✨ Core Features

Perfect for creators who want a consistent, streamlined experience across all image generation tasks, while maintaining the power to customise when needed.

  • Unified Control Interface

    • Single set of controls affects all relevant modules

    • Smart guidance adjustment based on operation type

    • Consistent experience across all generation tasks

  • Smart Workflow Management

    • Only activates nodes and models required for current task

    • Toggle between different output types seamlessly

    • Efficiently handles resource allocation

    • Optional TeaCache for speed optimization

  • Universal Model Integration

    • LoRAs, ControlNets and Redux work across all output modules

    • Seamless Black Forest Labs model support

    • Configurable model routing for custom workflows

  • Enhanced Usability

    • Interactive hint system for contextual help

    • Quick image transfer with keyboard shortcut

    • Intelligent preprocessing based on control values

    • Crop & stitch for seamless inpainting/outpainting


πŸš€ Quick Start

πŸ“Ί New to Flux Continuum? Watch the tutorial first

  1. Clone the repo into your ComfyUI custom nodes folder:

# run these from your ComfyUI installation directory (path may vary on your setup)
cd custom_nodes
git clone https://github.com/robertvoy/ComfyUI-Flux-Continuum

  2. Download and import the workflow into ComfyUI

  3. Install missing custom nodes using the ComfyUI Manager

  4. Configure your models in the config panel (press 2 to access)

  5. Download any missing models (see the Model Downloads section below)

  6. Return to the main interface (press 1)

  7. Select txt2img from the output selector (top-left corner)

  8. Run the workflow to generate your first image


🎯 Usage Guide

Output Selection

The workflow is controlled by the Output selector in the top-left corner. Select your desired output and all relevant controls will automatically apply.

Key Controls

🎨 Main Generation

  • Prompt: Your text description for generation

  • Denoise: Controls strength for img2img operations (0 = no change, 1 = completely new)

  • Steps: Number of sampling steps (higher = more detail, slower)

  • Guidance: How closely to follow the prompt (automatically set to 30 for inpainting/outpainting/canny/depth; see the sketch after this list)

  • TeaCache: Toggle for speed boost (some quality trade-off)
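
A minimal sketch of that Smart Guidance behaviour (illustrative Python; the names are made up and this is not the workflow's node code):

# Illustrative only: mirrors the documented Smart Guidance rule.
SMART_GUIDANCE_OPS = {"inpainting", "outpainting", "canny", "depth"}

def effective_guidance(operation: str, slider_value: float) -> float:
    # These operations get a fixed guidance of 30; everything else
    # uses whatever the Guidance control is set to.
    return 30.0 if operation in SMART_GUIDANCE_OPS else slider_value

In effect, the Guidance slider only matters for operations outside that set.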

πŸ–ΌοΈ Input Images

  • Img Load: Primary image for all img2img operations (inpainting, outpainting, detailer, upscaling)

  • CN Input: Source for ControlNet preprocessing

  • Redux 1-3: Up to 3 reference images for style transfer (use very low strength values)

  • Tip: Use Ctrl+Shift+C to quickly copy from Img Preview to Img Load

πŸŽ›οΈ ControlNet & Redux

  • ControlNets activate when strength > 0

  • When CN strength > 0, the preprocessor uses CN Input; otherwise it uses Img Load (sketched below)

  • Preview preprocessor results by selecting the corresponding output (e.g., "preprocessor canny")

  • Redux sliders control each Redux input individually (1 = Redux 1, etc.)
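
The preprocessor-source rule above can be summarised in a short illustrative snippet (hypothetical names; not the node implementation):

# Illustrative only: which image feeds the ControlNet preprocessor preview.
def preprocessor_source(cn_strength: float, cn_input, img_load):
    # A ControlNet counts as active only when its strength is above zero;
    # an active ControlNet previews CN Input, otherwise Img Load is used.
    return cn_input if cn_strength > 0 else img_load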

Recommended ControlNet Values:

  • Canny: Strength=0.7, End=0.8

  • Depth: Strength=0.8, End=0.8

  • Pose: Strength=0.9, End=0.65

πŸ”§ Image Processing

  • Resize, crop, sharpen, color correct, or pad images

  • Preview results with "imgload prep" output

  • Bypass nodes after processing to avoid reprocessing (Ctrl+B)

⬆️ Upscaling

  • Resolution Multiply: Multiplies the image resolution after any preprocessing (e.g., a 2× multiplier turns a 1024×1024 input into 2048×2048)

  • Upscale Model: Choose your upscaling model (recommended: 4xNomos8kDAT)

  • πŸ“Ί Watch Upscaling Tutorial


πŸ“₯ Model Downloads

Required Models

unet folder:

Note: If you don't use Canny or Depth models, you can bypass their load nodes and skip downloading them.

vae folder:

clip folder:

style_models folder:

clip_vision folder:

controlnet/FLUX folder: