- ComfyUI Tutorial 01 – A Comprehensive Overview of AIGC
- ComfyUI Tutorial 02 – ComfyUI Local Deployment
- ComfyUI Tutorial 03 – Demonstration of Drawing a Little Girl
- ComfyUI Tutorial 04 – Installing the ComfyUI Manager
- ComfyUI Tutorial 05 – Shortcut Key List
Welcome to the ComfyUI tutorial series! If you’re searching for a powerful and flexible user interface tool, ComfyUI is the perfect solution. This comprehensive series will guide you from beginner to expert, covering everything you need to know about designing, building, and optimizing with ComfyUI.
ComfyUI stands out as a favorite among developers due to its intuitive interface and customizable features. Whether you’re just starting or looking to master advanced functionalities, these tutorials are designed to help you quickly get started and enhance your productivity.
This Episode Summary:
- The Workflow Principle of ComfyUI for Drawing
- Demonstration of Drawing a Little Girl
- How to Save Workflows Using Two Methods: PNG Image and JSON File
- How to Load a Workflow Using a PNG Image or JSON File
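One reason the PNG save method works is that ComfyUI embeds the workflow JSON directly in the image file as a PNG text chunk. As a minimal sketch (assuming Pillow is installed, and assuming the chunk is named `workflow`, which matches current ComfyUI builds but is worth verifying against your version), you can pull the graph back out of a saved image:

```python
# Sketch: extract the workflow graph that ComfyUI embeds in the PNGs
# it saves (as a "workflow" text chunk). Assumes Pillow is installed;
# the chunk name is an assumption to check against your ComfyUI build.
import json
from PIL import Image

def load_workflow_from_png(path: str) -> dict:
    """Return the workflow graph embedded in a ComfyUI-saved PNG."""
    with Image.open(path) as im:
        raw = im.info.get("workflow")  # PNG text chunks land in .info
    if raw is None:
        raise ValueError(f"{path} has no embedded ComfyUI workflow")
    return json.loads(raw)
```

This is why dragging a ComfyUI-generated PNG onto the canvas restores the whole workflow: the graph travels inside the image itself.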
After all the hard work, it’s time to see the results! In this lesson, I’ll guide you through using ComfyUI to create an image of a little girl. Are you excited to see how it turns out?
In this lesson, we’ll skip the theory and technical details, focusing instead on hands-on operation. I’ll walk you through the practical steps to get started. Don’t worry—future lessons will cover the theory and concepts in more detail.
Creating Nodes in ComfyUI:
1. Create the “Checkpoint Loader” node
This node loads your Stable Diffusion model (checkpoint), forming the foundation for image generation. After adding it, you can easily connect it to other nodes in your ComfyUI workflow.
Click on the items in the red box shown in the diagram in sequence, then select ‘Checkpoint Loader.’ At this point, you will have successfully created your first node, ‘Checkpoint Loader.’
2. Create two “CLIP Text Encoder” nodes
These two nodes handle positive prompts (describing what you want to generate) and negative prompts (describing what to avoid), giving you better control over the image output. Once added, connect them to the appropriate parts of your workflow.
3. Create a “KSampler” node
The KSampler node generates the actual image from the latent space by iteratively applying the model’s diffusion process. It connects to nodes like the Checkpoint Loader, CLIP Text Encoders, and the output image node within your workflow.
4. Create an “Empty Latent” node
The Empty Latent node creates an initial latent space, acting as the canvas for image generation. Additionally, it’s often paired with the KSampler node to control the image’s dimensions, resolution, and noise initialization in your ComfyUI workflow. This combination provides better control over the generated image and enhances customization options.
5. Create a “VAE Decode” node
The VAE Decode node converts the latent image generated by the KSampler into a full-resolution, visible image. As a result, it turns the processed latent data back into a standard image format within your ComfyUI workflow.
6. Create a “Save Image” node
The Save Image node is used to save the final output image to your computer. You can configure the node to specify the save location, filename, and format (e.g., PNG or JPEG). This step is essential as it ensures your generated images are stored for later use.
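To see how these six node types fit together as data, here is a rough sketch of the graph in ComfyUI’s API “prompt” format: a dict keyed by node id, one entry per node. The class names (e.g. `CheckpointLoaderSimple`, `CLIPTextEncode`) match the node types shipped with ComfyUI, but the node ids, the model filename, and the exact input fields shown are illustrative assumptions:

```python
# Hypothetical sketch of the tutorial's nodes in ComfyUI's API format.
# Node ids and the checkpoint filename are made up; class names follow
# ComfyUI's built-in nodes. Inputs are abbreviated for readability.
workflow_nodes = {
    "1": {"class_type": "CheckpointLoaderSimple",     # Checkpoint Loader
          "inputs": {"ckpt_name": "example-model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",             # positive prompt
          "inputs": {"text": "1girl, beautiful"}},
    "3": {"class_type": "CLIPTextEncode",             # negative prompt
          "inputs": {"text": ""}},
    "4": {"class_type": "EmptyLatentImage",           # the blank "canvas"
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",                   # runs the diffusion steps
          "inputs": {"seed": 0, "steps": 20, "cfg": 7.0}},
    "6": {"class_type": "VAEDecode", "inputs": {}},   # latent -> visible image
    "7": {"class_type": "SaveImage",                  # writes the result to disk
          "inputs": {"filename_prefix": "little_girl"}},
}
```

Note there are seven entries, not six, because the CLIP Text Encoder appears twice: once for the positive prompt and once for the negative one.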
Now that all the nodes for drawing a girl image are set up, it’s time for the fun part: connecting the nodes!
Link Nodes in ComfyUI
1. Link the two CLIP Text Encoder nodes, which handle the positive and negative prompts
To connect the nodes in ComfyUI:
- Left-click on a connection point: Select the small circle or connection point on the edge of a node (usually on the output side).
- Drag the connection: Hold the left mouse button and drag the connection line to the corresponding input point of the node you want to connect.
- Release the mouse button: Once the connection point reaches the destination input, release the left mouse button to create the link.
This will establish the flow between the nodes. Repeat this process until all nodes are connected according to your desired workflow.
2. Link the model in ComfyUI
3. Link the latent in ComfyUI
4. Link the VAE in ComfyUI
5. Link the images in ComfyUI
At this point, all the nodes are linked. You can refer to the diagram below.
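In ComfyUI’s API format, each of the links you just dragged is recorded as a `[source_node_id, output_index]` pair on the receiving node’s input. The following sketch maps the five linking steps onto that representation; the node ids are hypothetical, while the input names (`model`, `positive`, `latent_image`, and so on) follow ComfyUI’s built-in nodes:

```python
# Hypothetical node ids: 1=Checkpoint Loader, 2/3=CLIP Text Encoders,
# 4=Empty Latent, 5=KSampler, 6=VAE Decode, 7=Save Image.
# Each entry reads: this node's input <- [source node id, output index].
links = {
    "2": {"clip": ["1", 1]},            # loader's CLIP -> positive encoder
    "3": {"clip": ["1", 1]},            # loader's CLIP -> negative encoder
    "5": {"model": ["1", 0],            # loader's MODEL -> KSampler
          "positive": ["2", 0],         # positive conditioning
          "negative": ["3", 0],         # negative conditioning
          "latent_image": ["4", 0]},    # empty latent as the canvas
    "6": {"samples": ["5", 0],          # sampled latent -> VAE Decode
          "vae": ["1", 2]},             # loader's VAE -> VAE Decode
    "7": {"images": ["6", 0]},          # decoded image -> Save Image
}
```

Reading this table top to bottom retraces the data flow: model and prompts into the KSampler, its latent output through the VAE Decode, and the final image into Save Image.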
Basic Parameter Adjustment in ComfyUI
1. Checkpoint Selection
To select a model in the Checkpoint Loader node in ComfyUI, click the checkpoint name field and choose the model file you want to use.
2. Prompt Input
To link the positive and negative prompts in ComfyUI using the CLIP Text Encoder nodes, follow these steps:
1. Set Up the Positive Prompt:
- First, make sure the first CLIP Text Encoder (the one for the positive prompt) receives its “CLIP” input from the Checkpoint Loader, and connect its “Conditioning” output to the KSampler’s “positive” input. This ensures the positive prompt is properly linked into the workflow.
- In this node, input the positive prompt that will guide the image generation. For example, type “1girl” and “beautiful” in the text box of the CLIP Text Encoder node. This will help shape the image according to your desired description.
- “1girl” will be the main subject of the image, which is “a girl.”
- “beautiful” will describe the desired appearance of the girl.
2. Set Up the Negative Prompt:
- For the second CLIP Text Encoder (the one for the negative prompt), connect its “Conditioning” output to the KSampler’s “negative” input.
- In this case, leave the negative prompt empty, meaning no unwanted content is specified. However, if you want to exclude elements like “no animals” or “no background noise,” you would enter them here.
3. Explanation:
- The first CLIP Text Encoder guides Stable Diffusion to generate an image of a “beautiful girl” by using the prompts “1girl” and “beautiful.”
- On the other hand, the second CLIP Text Encoder (negative prompt) prevents any undesired content from appearing in the image. Since you are leaving it empty, however, no restrictions are applied.
After setting up these links and configurations, your model will generate an image of a girl based on the positive prompt. Since the negative prompt is empty, no content is explicitly excluded from the image.
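In ComfyUI’s API format, each CLIP Text Encoder node carries its prompt as a plain string field, so the configuration above boils down to two text values. A minimal sketch (node ids `"2"` and `"3"` are hypothetical; the `CLIPTextEncode` class and `text` field match ComfyUI’s built-in node):

```python
# Two CLIP Text Encoder nodes in ComfyUI's API dict format.
# Ids "2" and "3" are made-up examples.
workflow = {
    "2": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
    "3": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
}
workflow["2"]["inputs"]["text"] = "1girl, beautiful"  # positive prompt
workflow["3"]["inputs"]["text"] = ""                  # negative prompt left empty
```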
Once you’ve completed all the settings and linked the nodes as described, you’re all set to generate the image using Stable Diffusion (SD) in ComfyUI!
Once you click on the “Queue” menu in the latest version of ComfyUI, you can easily track the progress of the image generation process.
Steps to Monitor the Progress in ComfyUI:
Click on “Queue”:
In the right-side menu of the ComfyUI interface, locate and click on the “Queue” option. This action will display the list of tasks and their progress, including the current image generation process.
Wait for the Process to Complete:
Since this is the first time you’re loading a large model, the initialization process may take a bit longer. The system is likely loading the model into memory, which may take a few moments. However, once that’s complete, the process will speed up for subsequent generations.
Monitor Progress:
As the task progresses, you’ll see status updates indicating the remaining time and the percentage of completion.
Enjoy the Result:
After the image is generated, it is saved according to the settings you configured in the Save Image node. Once the task finishes, you can check the saved image file.
By keeping an eye on the Queue, you can monitor the overall process and wait for the model to complete the task.
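The same queue can also be driven from code: the local ComfyUI server exposes a small HTTP API. The sketch below queues a workflow and polls for the result. The `/prompt` and `/history` routes and the default port 8188 match current ComfyUI builds, but treat the exact endpoints and response shapes as assumptions to verify against your version:

```python
# Sketch: queue a workflow on a locally running ComfyUI server and
# wait for it to finish. Endpoint paths and the default port 8188
# are assumptions based on current ComfyUI builds.
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8188"

def build_payload(workflow: dict, client_id: str = "tutorial") -> dict:
    """Wrap a workflow graph the way the /prompt endpoint expects it."""
    return {"prompt": workflow, "client_id": client_id}

def queue_and_wait(workflow: dict, poll_secs: float = 1.0) -> dict:
    """Submit the workflow, then poll /history until the job completes."""
    data = json.dumps(build_payload(workflow)).encode()
    req = urllib.request.Request(
        f"{SERVER}/prompt", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        prompt_id = json.loads(resp.read())["prompt_id"]
    while True:  # the job appears in /history once it has finished
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            return history[prompt_id]
        time.sleep(poll_secs)
```

Polling `/history` this way mirrors what the Queue panel shows in the UI: the task sits in the queue until the sampler finishes, then its record (including output image filenames) becomes available.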