Style transfer lets you apply the visual style of one image to the content of another. Instead of manually describing a style in your prompt, you provide a reference image and ComfyUI extracts and applies that style to your generation. Common use cases include:
- Applying an artist’s style to your own compositions
- Converting photos into paintings, sketches, or illustrations
- Maintaining a consistent visual style across multiple images
- Combining the composition of one image with the aesthetics of another
This guide covers three methods:
- Image-to-image style transfer — the simplest method using prompts and denoise
- IP-Adapter style transfer — using a reference image to guide style without changing composition
- ControlNet + IP-Adapter — combining structural control with style guidance
Method 1: image-to-image style transfer
The simplest way to do style transfer is through the image-to-image workflow with style-focused prompts.
How it works
This method encodes your input image into latent space, then denoises it with a style-descriptive prompt. The denoise value in the KSampler controls how much the output deviates from the original.
When to use
- Quick style experiments
- When you want to change both style and content
- When you don’t need precise control over which elements change
Key parameters
| Parameter | Recommended range | Effect |
|---|---|---|
| denoise | 0.4–0.7 | Lower values keep more of the original image; higher values allow more stylistic freedom |
| Prompt | Style-descriptive | Describe the target style (e.g., “oil painting style, impressionist, thick brushstrokes”) |
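If you prefer to drive ComfyUI from a script rather than the graph editor, the same workflow can be written in ComfyUI's API (JSON) format and queued over its HTTP endpoint. The sketch below is a minimal, hedged example: the node class names are ComfyUI built-ins, but the file names, prompt text, seed, and sampler settings are placeholders, and it assumes a local ComfyUI server on the default port 8188.

```python
import json
import urllib.request

# Image-to-image style transfer expressed in ComfyUI's API (JSON) format.
# Node class names are ComfyUI built-ins; file names and settings are examples only.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors"}},
    "2": {"class_type": "LoadImage",                   # your content image
          "inputs": {"image": "input.png"}},
    "3": {"class_type": "VAEEncode",                   # encode the image into latent space
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "CLIPTextEncode",              # style-descriptive prompt
          "inputs": {"clip": ["1", 1],
                     "text": "oil painting style, impressionist, thick brushstrokes"}},
    "5": {"class_type": "CLIPTextEncode",              # negative prompt
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality, distorted"}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["3", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.55}},                # 0.4-0.7: fidelity vs. stylistic freedom
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "style_transfer"}},
}

# Queue the workflow on a locally running ComfyUI server (default port 8188).
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": workflow}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

The denoise value on the KSampler is the main knob here: values toward 0.4 stay close to the input image, while values toward 0.7 give the style prompt more freedom.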
Method 2: IP-Adapter style transfer
IP-Adapter (Image Prompt Adapter) is the most popular method for style transfer in ComfyUI. It allows you to use a reference image as a visual prompt, guiding the generation style without relying solely on text descriptions.
Model installation
You need three models for IP-Adapter style transfer:
- CLIP Vision model — Download CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and place it in your ComfyUI/models/clip_vision folder
- IP-Adapter model — Download ip-adapter_sd15.safetensors and place it in your ComfyUI/models/ipadapter folder
- Checkpoint model — Download v1-5-pruned-emaonly-fp16.safetensors and place it in your ComfyUI/models/checkpoints folder
For SDXL-based workflows, use the corresponding SDXL IP-Adapter models instead. Check the h94/IP-Adapter repository for the full list of available models.
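To confirm the files ended up where ComfyUI looks for them, a quick check like the one below can help. This is a convenience sketch, not part of ComfyUI itself; the install path is an assumption, and the models/ipadapter folder may not exist until you create it or install an IP-Adapter node pack.

```python
from pathlib import Path

# Sanity-check that the downloaded models sit in the folders listed above.
# Adjust COMFYUI_ROOT to wherever your ComfyUI installation lives.
COMFYUI_ROOT = Path("~/ComfyUI").expanduser()  # assumption: default install location

expected = {
    "models/clip_vision": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "models/ipadapter": "ip-adapter_sd15.safetensors",
    "models/checkpoints": "v1-5-pruned-emaonly-fp16.safetensors",
}

for folder, filename in expected.items():
    path = COMFYUI_ROOT / folder / filename
    status = "ok" if path.is_file() else "MISSING"
    print(f"{status:8} {path}")
```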
Workflow overview
The IP-Adapter style transfer workflow uses these key nodes:
- Load Checkpoint — loads your base model
- Load Image — loads your style reference image
- CLIP Vision Encode — encodes the reference image into a visual embedding
- IPAdapter Apply — applies the style embedding to guide generation
- KSampler — generates the final image
Key parameters
| Parameter | Recommended range | Effect |
|---|---|---|
| weight | 0.5–1.0 | Controls how strongly the reference style influences the output |
| noise | 0.0–0.5 | Adds variation; higher values create more diverse results |
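Expressed in the same API format as the method 1 sketch, the IP-Adapter part is an extra branch that patches the model before it reaches the KSampler. Note that these nodes come from a custom node pack rather than core ComfyUI, so the class names IPAdapterModelLoader and IPAdapterApply and their input names below are assumptions that mirror the node list above; check the exact names in the node pack you installed.

```python
# Fragment extending the image-to-image workflow from method 1 (same node-id style).
# NOTE: the IP-Adapter nodes come from a custom node pack; "IPAdapterModelLoader",
# "IPAdapterApply", and their input names are assumptions, so verify them against
# the nodes available in your install.
ipadapter_fragment = {
    "10": {"class_type": "CLIPVisionLoader",        # built-in node
           "inputs": {"clip_name": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"}},
    "11": {"class_type": "LoadImage",                # style reference image
           "inputs": {"image": "style_reference.png"}},
    "12": {"class_type": "IPAdapterModelLoader",     # assumed custom-node class name
           "inputs": {"ipadapter_file": "ip-adapter_sd15.safetensors"}},
    "13": {"class_type": "IPAdapterApply",           # assumed custom-node class name
           "inputs": {"ipadapter": ["12", 0],
                      "clip_vision": ["10", 0],
                      "image": ["11", 0],
                      "model": ["1", 0],             # checkpoint model gets patched here
                      "weight": 0.8,                 # 0.5-1.0: strength of the style
                      "noise": 0.1}},                # 0.0-0.5: adds variation
}
# The KSampler's "model" input should then point at ["13", 0] instead of ["1", 0].
```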
Style vs. composition control
IP-Adapter transfers both style and content by default. To focus primarily on style:
- Use a lower weight value (0.5–0.7)
- Write a detailed prompt describing your desired composition
- The prompt guides the composition while IP-Adapter guides the style, as sketched below
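Continuing the sketch above, focusing on style is mostly a matter of lowering the weight and writing a concrete compositional prompt. The values and prompt text here are arbitrary examples:

```python
# Favor style over content: drop the IP-Adapter weight and let a detailed prompt
# drive the composition (node ids refer to the sketches above).
ipadapter_fragment["13"]["inputs"]["weight"] = 0.6
workflow["4"]["inputs"]["text"] = (
    "a lighthouse on a rocky coast at sunset, wide shot, dramatic clouds"
)
```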
Method 3: ControlNet + IP-Adapter
For maximum control, combine ControlNet (for structure) with IP-Adapter (for style). This lets you precisely define the composition while applying a reference style.
How it works
- ControlNet extracts structural information (edges, depth, pose) from your input image and enforces that structure during generation
- IP-Adapter provides style guidance from a separate reference image
- Together, they let you say: “generate an image with this structure in that style”
Additional models needed
In addition to the IP-Adapter models above, you need a ControlNet model. For example:
- Canny ControlNet — Download control_v11p_sd15_canny_fp16.safetensors and place it in your ComfyUI/models/controlnet folder
Workflow overview
This workflow extends the IP-Adapter workflow with ControlNet:
- Load Image (content) — the image whose structure you want to preserve
- Canny edge detection — extracts edges from the content image
- ControlNet Apply — enforces the structural guidance
- Load Image (style) — the reference image whose style you want to apply
- CLIP Vision Encode + IPAdapter Apply — applies style from the reference
- KSampler — generates the final image combining both controls
Recommended settings
| Parameter | Recommended value | Notes |
|---|---|---|
| ControlNet strength | 0.7–1.0 | Higher values enforce structure more strictly |
| IP-Adapter weight | 0.6–0.8 | Balance between original and reference style |
| KSampler denoise | 1.0 | Full denoise since ControlNet provides structure |
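In the same API format, the ControlNet branch feeds edges from the content image into the positive conditioning, while the IP-Adapter branch from method 2 keeps patching the model. The Canny, ControlNetLoader, and ControlNetApply node types are ComfyUI built-ins; the thresholds and file names are examples, and the values follow the recommended settings table above.

```python
# Fragment adding structural guidance (node ids continue the earlier sketches).
# Canny, ControlNetLoader, and ControlNetApply are ComfyUI built-ins; thresholds,
# file names, and strength are example settings only.
controlnet_fragment = {
    "20": {"class_type": "LoadImage",                 # content image (structure source)
           "inputs": {"image": "content.png"}},
    "21": {"class_type": "Canny",                     # extract edges
           "inputs": {"image": ["20", 0],
                      "low_threshold": 0.4, "high_threshold": 0.8}},
    "22": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_canny_fp16.safetensors"}},
    "23": {"class_type": "ControlNetApply",           # enforce the edge structure
           "inputs": {"conditioning": ["4", 0],       # positive prompt conditioning
                      "control_net": ["22", 0],
                      "image": ["21", 0],
                      "strength": 0.8}},              # 0.7-1.0 per the table above
}
# Wire the KSampler to positive = ["23", 0] and model = ["13", 0], set denoise = 1.0,
# and start from an EmptyLatentImage node, since the structure now comes from
# ControlNet rather than from a latent of the original image.
```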
Tips for better results
- Choose clear style references — images with distinct, consistent styles work best. Avoid reference images with mixed or subtle styles.
- Match model to task — SD 1.5 models work well for general style transfer. SDXL models produce higher quality results but require SDXL-specific IP-Adapter models.
- Iterate on weights — small changes in IP-Adapter weight (0.05 increments) can significantly affect results. Take time to find the sweet spot.
- Combine with LoRA — for consistent style across many images, consider training a LoRA on your target style and combining it with IP-Adapter for even stronger style adherence; a minimal sketch follows this list.
- Use negative prompts — describe what you don’t want (e.g., “blurry, low quality, distorted”) to improve output quality.
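For the LoRA tip above, the LoRA is loaded once and threaded through the same graph. A minimal sketch, assuming a LoRA file you trained yourself and the node ids from the earlier sketches:

```python
# Insert a style LoRA between the checkpoint and the rest of the graph.
# LoraLoader is a ComfyUI built-in; the LoRA file name is hypothetical.
lora_node = {
    "30": {"class_type": "LoraLoader",
           "inputs": {"model": ["1", 0], "clip": ["1", 1],
                      "lora_name": "my_style_lora.safetensors",  # hypothetical file
                      "strength_model": 0.8, "strength_clip": 0.8}},
}
# Downstream nodes (CLIPTextEncode, the IP-Adapter apply node, KSampler) should then
# take their model/clip inputs from ["30", 0] and ["30", 1] instead of the checkpoint.
```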
Try it yourself
- Start with method 1 (image-to-image) to understand how denoise affects style transfer
- Move to method 2 (IP-Adapter) for more precise style control using a reference image
- Combine ControlNet with IP-Adapter (method 3) when you need both structural accuracy and style transfer