Control Layers

Modified on Tue, 24 Sep at 9:44 AM

Control Layers give you more control over the output of your image generation, directing the network toward images that better fit your desired style or outcome.


How do Control Layers work?

Control Layers work by analyzing an input image, pre-processing that image to identify relevant information that can be interpreted by each specific Control Layer model, and then inserting that control information into the generation process. This can be used to adjust the style, composition, or other aspects of the image to better achieve a specific result.
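The flow described above can be sketched in a few lines of Python. This is a conceptual illustration only — the function and parameter names are hypothetical stand-ins, not Invoke's actual API:

```python
# Conceptual sketch (hypothetical names, not Invoke's API): a Control Layer
# pre-processes an input image into a control map, then feeds that map into
# the generation alongside the prompt.

def run_control_layer(image, preprocess, generate, prompt):
    control_map = preprocess(image)          # e.g. edge or depth extraction
    return generate(prompt, control=control_map)

# Toy stand-ins to show the data flow:
result = run_control_layer(
    image="photo.png",
    preprocess=lambda img: f"edges({img})",
    generate=lambda prompt, control: f"{prompt} guided by {control}",
    prompt="a castle at sunset",
)
# result == "a castle at sunset guided by edges(photo.png)"
```

The key point is the two-stage structure: the pre-processor runs first and produces the control map, and only that map (not the raw image) conditions the generation.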

Control Layers require you to select a model and provide a pre-processed image matching that Control Layer type. If you don’t have a pre-processed image, you can use a filter to process an image and use it as a control for the Control Layer.


Accessing Control Layers

If you are working with an existing image, drag the asset to the Control Canvas and drop it in the ‘New Control Layer’ portion of the canvas. This will automatically generate a control layer using that image and you can view the new control layer on the ‘Layers’ tab.


Alternatively, you can create a Control Layer by pressing ‘+ Add Layer’ and selecting ‘Control Layer.’ This will create a blank control layer, which you can define by using the available drawing tools, by dragging an image on top of it, or by pressing the “Pull Bbox into Layer” button, which creates a Control Layer from what is currently visible in the bounding box of the canvas. You can also press the upload image icon to upload an image directly from your desktop.


Using Control Layers

To use an applied Control Layer, you can select the desired model and adjust the Control Layer settings to achieve the desired result. Multiple Control Layers can be used at the same time, allowing for complex effects and styles in your images.

Each Control Layer has four settings:

  1. Model - The model defines which control technique (detailed below) to apply to your canvas. Each control leverages an image that has been processed to highlight certain details that guide the generation. If you are using a generic image, you can use the Filter button to convert it into the content the model expects.
  2. Weight - The strength of the Control Layer model’s influence on the generation, with higher values exerting more control.
  3. Begin/End Step Percentage - 0 represents the start of the generation, 1 represents the end. This setting controls during which steps of the generation process the Control Layer is applied.
  4. Control Mode - A secondary degree of influence that biases the generation more toward your prompt or more toward your control image. Explore this if adjusting the Weight value does not give you enough control over the generation.
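The interaction between Weight and Begin/End Step Percentage can be sketched as follows. This is a simplified conceptual model (the function name and signature are assumptions for illustration, not Invoke's internals):

```python
# Conceptual sketch (assumed names, not Invoke's implementation): the control
# signal is applied only during the chosen fraction of denoising steps,
# scaled by the Weight setting.

def control_influence(step, total_steps, weight, begin=0.0, end=1.0):
    """Return the control strength applied at a given denoising step."""
    progress = step / total_steps  # 0.0 at the start, 1.0 at the end
    if begin <= progress <= end:
        return weight
    return 0.0

# With begin=0.0 and end=0.5, the control shapes only the first half of a
# 20-step generation; the remaining steps refine the image unconstrained.
early_only = [control_influence(s, 20, weight=0.8, begin=0.0, end=0.5)
              for s in range(20)]
```

This is why a low End value tends to fix composition early while letting the model fill in fine detail freely, whereas keeping the full 0–1 range enforces the control throughout.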


Applying filters to Control Layers

To process and display a filter on the canvas, press the filter button next to the model in the Control Layer. This displays the default filter associated with that model. For example, if you’re using the Canny model, pressing the filter button will show a Canny edge detection filter applied to your control layer. You only need to filter images that are not already formatted/processed as detailed below in the “Analyzed Details” column.

From there, you can either Apply the default filter or change it to a different filter type. Once you’ve applied a filter, the system will process the control layer and display the transformed image.


Types of Control Layer Models

Control Layers come with a variety of pre-trained models that can be used to achieve different effects or styles in your generated images. These models include:


Canny

When the Canny model is used in Control Layers, Invoke will attempt to generate images that match the detected edges.

Canny edge detection finds edges by looking for abrupt changes in intensity. It is known for detecting edges accurately while reducing noise and false edges, and the preprocessor can extract more detail when you decrease its thresholds.
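The threshold behavior mentioned above can be shown with a minimal sketch. Real Canny detection operates on 2-D images with gradient hysteresis; this toy version works on a single 1-D scan line purely to illustrate why lowering the threshold surfaces more edges:

```python
# Conceptual sketch, not a full Canny implementation: an edge is marked
# wherever the intensity jump between neighboring pixels exceeds a threshold.
# Lowering the threshold keeps weaker edges, so the preprocessor extracts
# more detail from the image.

def detect_edges(row, threshold):
    """Mark positions in a 1-D intensity row where the jump to the
    next pixel exceeds the threshold."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

intensities = [10, 12, 11, 90, 88, 40, 41]  # one scan line of pixel values

strong_only = detect_edges(intensities, threshold=60)  # [2] — only the sharp jump
with_weaker = detect_edges(intensities, threshold=30)  # [2, 4] — weaker edge too
```

In the real preprocessor, the low and high thresholds play the same role: tighter thresholds keep only the strongest intensity changes, while looser ones carry more of the image's fine structure into the control map.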


Example columns: Input Image · Analyzed Details · New Generation


Depth

The Depth model generates depth maps of images, allowing you to create more realistic 3D models or to simulate depth effects in post-processing.



Scribble

The Scribble model in Control Layers generates line drawings from an input image. The resulting pre-processed image is a simplified version of the original, with only the outlines of objects visible. The Scribble model is known for its ability to accurately capture the contours of the objects in an input sketch.



Softedge

The Softedge model works similarly to the Scribble model, extracting the essential features from an input image to create a sketch-like representation. The resulting pre-processed image is a simplified version of the original, with only the soft edges of the shape and some light shading visible.


Tile

The Tile model fills in details based on the input image itself, rather than the prompt. It is a versatile tool whose primary capabilities boil down to two main behaviors:

  • It can reinterpret specific details within an image and create fresh, new elements.
  • It can disregard global instructions when they conflict with the local context or specific parts of the image, using the local context to guide the process instead.

The Tile Model can be a powerful tool in your arsenal for enhancing image quality and details. If there are undesirable elements in your images, such as blurriness caused by resizing, this model can effectively eliminate these issues, resulting in cleaner, crisper images. Moreover, it can generate and add refined details to your images, improving their overall quality and appeal.


