Style transfer lets you apply the visual style of one image to the content of another. Instead of manually describing a style in your prompt, you provide a reference image and ComfyUI extracts and applies that style to your generation. Common use cases include:
  • Applying an artist’s style to your own compositions
  • Converting photos into paintings, sketches, or illustrations
  • Maintaining a consistent visual style across multiple images
  • Combining the composition of one image with the aesthetics of another
This guide covers three approaches to style transfer in ComfyUI, from basic to advanced:
  1. Image-to-image style transfer — the simplest method using prompts and denoise
  2. IP-Adapter style transfer — using a reference image to guide style without changing composition
  3. ControlNet + IP-Adapter — combining structural control with style guidance

Method 1: image-to-image style transfer

The simplest way to do style transfer is through the image-to-image workflow with style-focused prompts.

How it works

This method encodes your reference image into latent space, then denoises it with a style-descriptive prompt. The denoise value in the KSampler controls how much the output deviates from the original.

When to use

  • Quick style experiments
  • When you want to change both style and content
  • When you don’t need precise control over which elements change

Key parameters

Parameter | Recommended range | Effect
denoise | 0.4–0.7 | Lower values keep more of the original image; higher values allow more stylistic freedom
Prompt | Style-descriptive | Describe the target style (e.g., “oil painting style, impressionist, thick brushstrokes”)
Start with a denoise value of 0.55 and adjust from there. Values below 0.3 may not change the style enough, while values above 0.8 may lose the original composition entirely.
For more details on this approach, see the image-to-image tutorial.
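If you drive ComfyUI through its HTTP API, the image-to-image graph above can be sketched in API format as a plain Python dict. The node class names below are ComfyUI built-ins, but the checkpoint and image filenames are placeholders; substitute the files you actually have installed.

```python
# Sketch of an image-to-image style transfer graph in ComfyUI's API format.
# Node class names are ComfyUI built-ins; filenames are placeholders.

def build_img2img_workflow(denoise=0.55,
                           style_prompt="oil painting style, impressionist, thick brushstrokes"):
    """Return an API-format workflow dict for image-to-image style transfer."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors"}},
        "2": {"class_type": "LoadImage",
              "inputs": {"image": "input.png"}},  # the image to restyle
        "3": {"class_type": "VAEEncode",  # encode the image into latent space
              "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
        "4": {"class_type": "CLIPTextEncode",  # style-descriptive positive prompt
              "inputs": {"text": style_prompt, "clip": ["1", 1]}},
        "5": {"class_type": "CLIPTextEncode",  # negative prompt
              "inputs": {"text": "blurry, low quality, distorted", "clip": ["1", 1]}},
        "6": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                         "latent_image": ["3", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": denoise}},  # 0.4-0.7: lower keeps more of the original
        "7": {"class_type": "VAEDecode",
              "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
        "8": {"class_type": "SaveImage",
              "inputs": {"images": ["7", 0], "filename_prefix": "style_transfer"}},
    }

workflow = build_img2img_workflow()
```

To run it, POST `{"prompt": workflow}` as JSON to a local ComfyUI server (by default `http://127.0.0.1:8188/prompt`).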

Method 2: IP-Adapter style transfer

IP-Adapter (Image Prompt Adapter) is the most popular method for style transfer in ComfyUI. It allows you to use a reference image as a visual prompt, guiding the generation style without relying solely on text descriptions.

Model installation

You need three models for IP-Adapter style transfer:
  1. CLIP Vision model — Download CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and place it in your ComfyUI/models/clip_vision folder
  2. IP-Adapter model — Download ip-adapter_sd15.safetensors and place it in your ComfyUI/models/ipadapter folder
  3. Checkpoint model — Download v1-5-pruned-emaonly-fp16.safetensors and place it in your ComfyUI/models/checkpoints folder
For SDXL-based workflows, use the corresponding SDXL IP-Adapter models instead. Check the h94/IP-Adapter repository for the full list of available models.

Workflow overview

The IP-Adapter style transfer workflow uses these key nodes:
  1. Load Checkpoint — loads your base model
  2. Load Image — loads your style reference image
  3. CLIP Vision Encode — encodes the reference image into a visual embedding
  4. IPAdapter Apply — applies the style embedding to guide generation
  5. KSampler — generates the final image
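The IP-Adapter portion of that graph can be sketched in the same API format. Note that IP-Adapter nodes come from a community custom node pack, so the class names below (`IPAdapterModelLoader`, `IPAdapterApply`) are illustrative and may differ between pack versions; node "1" is assumed to be a Load Checkpoint node elsewhere in the same graph.

```python
# Sketch of the IP-Adapter nodes in ComfyUI's API format. The IPAdapter*
# class names follow a common IP-Adapter custom node pack and vary by
# version -- treat them as placeholders, not a fixed API.

ip_adapter_nodes = {
    "10": {"class_type": "CLIPVisionLoader",
           "inputs": {"clip_name": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"}},
    "11": {"class_type": "LoadImage",  # style reference image
           "inputs": {"image": "style_reference.png"}},
    "12": {"class_type": "IPAdapterModelLoader",  # placeholder class name
           "inputs": {"ipadapter_file": "ip-adapter_sd15.safetensors"}},
    "13": {"class_type": "IPAdapterApply",  # placeholder class name
           "inputs": {"ipadapter": ["12", 0], "clip_vision": ["10", 0],
                      "image": ["11", 0],
                      "model": ["1", 0],  # the Load Checkpoint node's model output
                      "weight": 0.7,      # 0.5-1.0: strength of style influence
                      "noise": 0.1}},     # 0.0-0.5: adds variation
}
```

The patched model output of node "13" then feeds the KSampler's model input in place of the raw checkpoint, so every sampling step is steered toward the reference style.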

Key parameters

Parameter | Recommended range | Effect
weight | 0.5–1.0 | Controls how strongly the reference style influences the output
noise | 0.0–0.5 | Adds variation; higher values create more diverse results
For pure style transfer (keeping your own composition), use a weight around 0.6–0.8. Higher weights may start transferring content elements from the reference image as well.

Style vs. composition control

IP-Adapter transfers both style and content by default. To focus primarily on style:
  • Use a lower weight value (0.5–0.7)
  • Write a detailed prompt describing your desired composition
  • The prompt guides the composition while IP-Adapter guides the style

Method 3: ControlNet + IP-Adapter

For maximum control, combine ControlNet (for structure) with IP-Adapter (for style). This lets you precisely define the composition while applying a reference style.

How it works

  • ControlNet extracts structural information (edges, depth, pose) from your input image and enforces that structure during generation
  • IP-Adapter provides style guidance from a separate reference image
  • Together, they let you say: “generate an image with this structure in that style”

Additional models needed

In addition to the IP-Adapter models above, you need a ControlNet model that matches your preprocessor. For the Canny edge workflow shown below, for example, a Canny ControlNet such as control_v11p_sd15_canny.pth (from the lllyasviel/ControlNet-v1-1 repository) placed in your ComfyUI/models/controlnet folder works well.

Workflow overview

This workflow extends the IP-Adapter workflow with ControlNet:
  1. Load Image (content) — the image whose structure you want to preserve
  2. Canny edge detection — extracts edges from the content image
  3. ControlNet Apply — enforces the structural guidance
  4. Load Image (style) — the reference image whose style you want to apply
  5. CLIP Vision Encode + IPAdapter Apply — applies style from the reference
  6. KSampler — generates the final image combining both controls

Key parameters

Parameter | Recommended value | Notes
ControlNet strength | 0.7–1.0 | Higher values enforce structure more strictly
IP-Adapter weight | 0.6–0.8 | Balance between original and reference style
KSampler denoise | 1.0 | Full denoise, since ControlNet provides the structure
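The ControlNet additions can be sketched in API format as well. `Canny`, `ControlNetLoader`, and `ControlNetApply` are ComfyUI built-in class names; the ControlNet filename is a placeholder, and node "4" is assumed to be the positive CLIPTextEncode node elsewhere in the same graph.

```python
# Sketch of the ControlNet nodes added on top of the IP-Adapter graph, in
# ComfyUI's API format. Built-in class names; the model filename is a
# placeholder for whichever ControlNet you have installed.

controlnet_nodes = {
    "20": {"class_type": "LoadImage",  # content image whose structure to keep
           "inputs": {"image": "content.png"}},
    "21": {"class_type": "Canny",  # built-in edge detector
           "inputs": {"image": ["20", 0],
                      "low_threshold": 0.3, "high_threshold": 0.7}},
    "22": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_canny.pth"}},
    "23": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["4", 0],  # positive prompt conditioning
                      "control_net": ["22", 0],
                      "image": ["21", 0],  # the extracted edge map
                      "strength": 0.85}},  # 0.7-1.0: how strictly to enforce structure
}
```

The KSampler then takes its positive conditioning from node "23" instead of the prompt encoder directly, with denoise left at 1.0 because the structure comes from ControlNet rather than the latent.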

Tips for better results

  • Choose clear style references — images with distinct, consistent styles work best. Avoid reference images with mixed or subtle styles.
  • Match model to task — SD 1.5 models work well for general style transfer. SDXL models produce higher quality results but require SDXL-specific IP-Adapter models.
  • Iterate on weights — small changes in IP-Adapter weight (0.05 increments) can significantly affect results. Take time to find the sweet spot.
  • Combine with LoRA — for consistent style across many images, consider training a LoRA on your target style and combining it with IP-Adapter for even stronger style adherence.
  • Use negative prompts — describe what you don’t want (e.g., “blurry, low quality, distorted”) to improve output quality.
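A small helper makes the "iterate on weights" tip systematic: sweep the IP-Adapter weight in 0.05 increments over the recommended 0.6–0.8 range and queue one generation per value, then compare the outputs side by side. The function below only computes the weight values; wiring each one into a workflow is left to whichever graph you are using.

```python
# Sweep IP-Adapter weights in fixed increments so each candidate value can
# be tested with an otherwise-identical workflow.

def weight_sweep(start=0.60, stop=0.80, step=0.05):
    """Return IP-Adapter weights from start to stop inclusive."""
    n = round((stop - start) / step)
    return [round(start + i * step, 2) for i in range(n + 1)]

weights = weight_sweep()
# weights -> [0.6, 0.65, 0.7, 0.75, 0.8]
```

Keeping the seed and all other parameters fixed across the sweep isolates the effect of the weight, which makes the sweet spot easy to see.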

Try it yourself

  1. Start with method 1 (image-to-image) to understand how denoise affects style transfer
  2. Move to method 2 (IP-Adapter) for more precise style control using a reference image
  3. Combine ControlNet with IP-Adapter (method 3) when you need both structural accuracy and style transfer
For more background on style transfer techniques in ComfyUI, see the complete style transfer handbook on the Comfy blog.