IP Adapter Instruct: Resolving Ambiguity in Image-based Conditioning using Instruct Prompts

Ciara Rowles, Shimon Vainer, Dante De Nigris, Slava Elizarov, Konstantin Kutsy, Simon Donné
Unity Technologies

By conditioning the transformer model used in IP-Adapter-Plus on additional text embeddings, a single model can effectively perform a wide range of image generation tasks with minimal setup.

Abstract

Diffusion models continuously push the boundary of state-of-the-art image generation, but the process is hard to control with any nuance: practice proves that textual prompts are inadequate for accurately describing image style or fine structural details (such as faces).

ControlNet and IPAdapter address this shortcoming by conditioning the generative process on imagery instead, but each individual instance is limited to modeling a single conditional posterior: for practical use-cases, where multiple different posteriors are desired within the same workflow, training and using multiple adapters is cumbersome.

We propose IPAdapter-Instruct, which combines natural-image conditioning with "Instruct" prompts to swap between interpretations for the same conditioning image: style transfer, object extraction, both, or something else still? IPAdapter-Instruct efficiently learns multiple tasks with minimal loss in quality compared to dedicated per-task models.

Overview

Method

We modify the existing transformer model in the IP-Adapter-Plus architecture to be conditioned on an additional instruction modality.
We use the same cross-attention input scheme as the original IP-Adapter.
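The sketch below illustrates this idea in PyTorch: an IP-Adapter-Plus-style resampler transformer whose learnable queries attend over the CLIP image tokens together with the embeddings of the instruct prompt. It is a minimal illustration of the conditioning scheme described above, not the released implementation; the class and argument names (InstructResampler, image_tokens, instruct_tokens) and the tensor shapes are assumptions.

```python
# Minimal sketch: condition an IP-Adapter-Plus style resampler on an extra
# instruct-prompt embedding by letting its queries attend over the
# concatenation of image tokens and instruction tokens.
# All names and shapes here are illustrative, not the released code.
import torch
import torch.nn as nn


class InstructResampler(nn.Module):
    def __init__(self, dim=768, num_queries=16, num_heads=8, depth=4):
        super().__init__()
        # Learnable query tokens, as in IP-Adapter-Plus; their outputs are
        # later injected into the UNet via the usual cross-attention scheme.
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(depth)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, image_tokens, instruct_tokens):
        # image_tokens:    (B, N_img, dim) CLIP image-encoder tokens
        # instruct_tokens: (B, N_txt, dim) embeddings of the instruct prompt
        # The instruction conditions the resampler simply by extending the
        # cross-attention context with the instruct tokens.
        context = torch.cat([image_tokens, instruct_tokens], dim=1)
        x = self.queries.expand(image_tokens.shape[0], -1, -1)
        for attn, norm in zip(self.layers, self.norms):
            out, _ = attn(query=x, key=context, value=context)
            x = norm(x + out)
        return self.proj_out(x)  # tokens fed to the UNet cross-attention


if __name__ == "__main__":
    resampler = InstructResampler()
    img = torch.randn(2, 257, 768)  # image tokens (illustrative shape)
    txt = torch.randn(2, 77, 768)   # instruct-prompt tokens (illustrative shape)
    print(resampler(img, txt).shape)  # torch.Size([2, 16, 768])
```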

Plug & Play

Since our method keeps the original cross-attention input scheme, it is compatible with any existing control method to the same degree as IP-Adapter.
This feature vastly expands the usability of our method in practical scenarios.

This compatibility also extends to other SD 1.5-based models.
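To make "plug & play" concrete, the snippet below combines an IP-Adapter-style image condition with a ControlNet condition in diffusers. IPAdapter-Instruct itself is not shipped in diffusers, so the stock IP-Adapter-Plus weights are used as a stand-in; the repository names, weight file, and example images are placeholders.

```python
# Hedged sketch: because the adapter reuses IP-Adapter's cross-attention
# input scheme, it can sit alongside other SD 1.5 control methods such as
# ControlNet. The stock IP-Adapter-Plus checkpoint stands in for our adapter.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5-based checkpoint works
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Stand-in for the instruct adapter: the public IP-Adapter-Plus weights.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)

reference = load_image("reference.png")  # image-based condition (placeholder file)
canny_map = load_image("canny.png")      # ControlNet condition (placeholder file)

image = pipe(
    prompt="a cozy cabin in the woods",
    image=canny_map,                # structural control from ControlNet
    ip_adapter_image=reference,     # appearance/style from the adapter
    num_inference_steps=30,
).images[0]
image.save("out.png")
```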

Further examples

Interactive Demonstration

We provide an interactive demonstration of our method as a Gradio demo hosted on Hugging Face Spaces. You can try it out with your own input images and see the results in real time.

https://huggingface.co/spaces/CiaraRowles/IP-Test

How to use:

  1. Upload an input image.
  2. Select an instruction preset.
  3. Enter a prompt in the text box.
  4. Click Generate.
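If you prefer to script the demo instead of using the web UI, the Space can also be driven with gradio_client. The endpoint name and argument order depend on the Space's current Gradio interface, so inspect client.view_api() first; the arguments below mirror the four steps above and are illustrative placeholders.

```python
# Hedged sketch of calling the Hugging Face Space programmatically.
# Endpoint name and argument order are assumptions; check view_api() output.
from gradio_client import Client, handle_file

client = Client("CiaraRowles/IP-Test")
print(client.view_api())  # lists the actual endpoints and their signatures

result = client.predict(
    handle_file("my_input.png"),  # 1. input image (placeholder path)
    "style transfer",             # 2. instruction preset (placeholder value)
    "a watercolor landscape",     # 3. text prompt
    api_name="/generate",         # 4. "Generate" -- placeholder endpoint name
)
print(result)
```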

BibTeX

@misc{rowles2024ipadapterinstruct,
      title={IPAdapter-Instruct: Resolving Ambiguity in Image-based Conditioning using Instruct Prompts}, 
      author={Ciara Rowles and Shimon Vainer and Dante De Nigris and Slava Elizarov and Konstantin Kutsy and Simon Donné},
      year={2024},
      eprint={2408.03209},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.03209}, 
}