6. Memory Management Fixes: Fixes related to 'medvram' and 'lowvram' have been made, which should improve the performance and stability of the project. This material has been moved to the Installation and SDXL sections.

User reports: I get around 3 it/s on average, but I had to add --medvram because I kept getting out-of-memory errors. I have a 6750 XT and get about 2-3 it/s on SD 1.5, but it struggles when using SDXL. To make SDXL genuinely lighter you'd need to train a new SDXL model with far fewer parameters from scratch, but with the same shape.

If you have a GPU with 6GB VRAM, or you want larger batches of SDXL images without hitting VRAM limits, you can use the --medvram command line argument. Finally, AUTOMATIC1111 has fixed the high-VRAM issue in the 1.6.0 release candidate: it takes only about 7.5GB of VRAM even while swapping the refiner in and out, if you use the --medvram-sdxl flag when starting. Using the FP16 fixed VAE with VAE upcasting set to False in the config file will drop VRAM usage down to about 9GB at 1024x1024 with a batch size of 16.

ControlNet tasks such as copying depth information with the depth Control model also work with SDXL. For the hires fix upscaler I have tried many: Latent, ESRGAN-4x, 4x-UltraSharp, Lollypop. However, Stable Diffusion requires a lot of computation, so it may not run smoothly depending on your hardware. For training, --bucket_reso_steps can be set to 32 instead of the default value of 64.

Why is everyone saying Automatic1111 is really slow with SDXL? I have it and it even runs 1-2 seconds faster than my custom 1.5 setup; now I can just use the same installation with --medvram-sdxl without having to keep a separate one. It would be nice to have this kind of flag specifically for lowvram and SDXL as well; I noticed there's one for medvram but not for lowvram yet. I'm using a 2070 Super with 8GB VRAM.

If you have 4GB VRAM and want to make images larger than 512x512 with --medvram, use --lowvram --opt-split-attention instead. But you need to generate at 1024x1024 to keep SDXL's consistency. "Fast: ~18 steps, 2-second images, with full workflow included! No ControlNet, no ADetailer, no LoRAs, no inpainting, no editing, no face restoring, not even hires fix (and obviously no spaghetti nightmare)" -- SDXL 1.0 with sdxl_madebyollin_vae. Launching Web UI with arguments: --port 7862 --medvram --xformers --no-half --no-half-vae, with ControlNet v1.1 loaded. For 8GB VRAM, the recommended command line flag is "--medvram-sdxl".

I have always wanted to try SDXL, so when it was released I loaded it up and, surprise, 4-6 minutes per image at about 11 s/it. Introducing ComfyUI: optimizing SDXL for 6GB VRAM. On July 27, 2023, Stability AI released SDXL 1.0; this guide walks through it step by step, and step 1 is installing ComfyUI. Many of the new models are related to SDXL, with several models for Stable Diffusion 1.5 as well. Find out more about the pros and cons of these options and how to optimize your settings. In terms of content, I am talking PG-13 kind of NSFW, maybe PEGI-16.

However, when the progress bar is already at 100%, VRAM consumption can suddenly jump to almost 100%, with only 150-200MB left free; using the --medvram-sdxl flag when starting helps here too. With 12GB of VRAM you might still consider adding --medvram. If you use --xformers and --medvram in your setup, it runs fluidly on a 16GB 3070. To run the built-in setup menu, launch the .bat or .sh script and select option 6. That FHD target resolution is achievable.
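To make the 4GB advice above concrete, here is a minimal webui-user.bat sketch; the flag combination comes straight from the notes above, while the surrounding template lines simply mirror the stock launcher, so treat it as one possible starting point rather than a required setup.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem 4GB card making images larger than 512x512: trade speed for memory
set COMMANDLINE_ARGS=--lowvram --opt-split-attention
call webui.bat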
My laptop with an RTX 3050 Laptop GPU and 4GB of VRAM was not able to generate an image in under 3 minutes, so I spent some time finding a good configuration in ComfyUI; now I can generate in 55s (batched images) to 70s (when a new prompt is detected) and get great images after the refiner kicks in. I just loaded the models into the folders alongside everything else.

From the changelog: add a --medvram-sdxl flag that only enables --medvram for SDXL models; the prompt editing timeline has a separate range for the first pass and the hires-fix pass (seed-breaking change). Minor: img2img batch gets RAM savings, VRAM savings, and .tif/.tiff support (#12120, #12514, #12515); postprocessing/extras gets RAM savings. SDXL can also show artifacts that 1.5 didn't have, specifically a weird dot/grid pattern. Also, you could benefit from using the --no-half option.

Training a slimmer SDXL from scratch is exactly what we're doing, and why we haven't released our ControlNet-XL checkpoints yet. On my 6600 XT it's about a 60x speed increase. It'll be faster than 12GB VRAM, and if you generate in batches, it'll be even better.

Command line arguments by GPU: Nvidia (12GB+) --xformers; Nvidia (8GB) --medvram-sdxl --xformers; Nvidia (4GB) --lowvram --xformers; AMD (4GB) --lowvram --opt-sub-quad-attention plus TAESD in settings. Both ROCm and DirectML will generate at least 1024x1024 pictures at fp16.

Introducing our latest YouTube video, where we unveil the official SDXL support for Automatic1111. There's a difference between the reserved VRAM (around 5GB) and how much the UI uses when actively generating. Sigh, I thought this thread was about SDXL -- forget about 1.5. Happy generating, everybody! You should see a line that says set COMMANDLINE_ARGS=; at that line, add the parameters --xformers, --medvram, and --opt-split-attention to further reduce the VRAM needed, but note that this adds processing time.

I have tried rolling back the video card drivers to multiple different versions. SDXL 0.9 is still research only. You may experience --medvram as "faster" because the alternative may be out-of-memory errors or running out of VRAM and switching to CPU (extremely slow), but it works by slowing things down so lower-memory systems can still process without resorting to the CPU. For the basics, this is the tutorial you need: How To Do Stable Diffusion Textual Inversion. I don't know why A1111 is so slow and doesn't work for me; maybe something with the VAE. Generated 1024x1024, Euler a, 20 steps, with set COMMANDLINE_ARGS= --medvram --upcast-sampling --no-half.

SDXL 1.0, A1111 vs ComfyUI on 6GB VRAM -- thoughts? 1600x1600 might just be beyond a 3060's abilities. ComfyUI runs fast. I was using --medvram and --no-half. I downloaded SDXL 1.0; if you want to switch back to the stable branch later, just replace dev with master. I would think a 3080 10GB would be significantly faster, even with --medvram. I could switch to a different SDXL checkpoint (Dynavision XL) and generate a bunch of images. For SD 1.5 there is a LoRA for everything if prompts don't do it fast enough. That's pretty much the same speed I get from ComfyUI. Edit: I just made a copy of the .bat file and ran ComfyUI with the following commands: --directml --normalvram --fp16-vae --preview-method auto. I get roughly 1.09 s/it when not exceeding my graphics card's memory, and about 1.5 minutes per image when I do. Another working setup is set COMMANDLINE_ARGS= --medvram --autolaunch --no-half-vae, together with the PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold setting.
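Following the per-GPU list above, an 8GB NVIDIA card would use the --medvram-sdxl --xformers pair; the sketch below also sets the PYTORCH_CUDA_ALLOC_CONF variable just mentioned, but the 0.9 threshold is an assumed value (the note above truncates the number), so adjust or drop it as needed.

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem 8GB NVIDIA card: xformers plus --medvram only for SDXL checkpoints
set COMMANDLINE_ARGS=--medvram-sdxl --xformers --autolaunch --no-half-vae
rem assumed allocator tuning; pick your own threshold value
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9
call webui.bat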
One of the docs (an .md file) seemed to imply that the SDXL model is loaded on the GPU in fp16; this is assuming A1111 and not using the --lowvram or --medvram modifier (I have 8GB of VRAM). Before blaming automatic1111, enable the xformers optimization and/or the medvram/lowvram launch options, and then come back and say the same thing. In webui-user.sh (Linux), set VENV_DIR allows you to choose the directory for the virtual environment. Update your source to the latest version with 'git pull' from the project folder; this will pull all the latest changes and update your local installation. Hit ENTER and you should see it quickly update your files.

SDXL base has a fixed output size of 1,048,576 pixels (1024x1024 or any other combination of dimensions adding up to the same count). The company says SDXL produces more detailed imagery and composition than its predecessor, Stable Diffusion 2.1. It supports Stable Diffusion 1.5 and 2.x checkpoints as well. The place to add launch options is the webui-user.bat file (in the stable-diffusion-webui-master folder). Now I have to wait such a long time. In this video I show you how to use the new Stable Diffusion XL 1.0. MASSIVE SDXL ARTIST COMPARISON: I tried out 208 different artist names with the same subject prompt for SDXL. I have trained profiles with the medvram option both enabled and disabled.

In a stock SDXL 1.0 A1111 install, none of the Windows or Linux shell/bat files uses a --medvram or --medvram-sdxl setting. However, generation time is a tiny bit slower with it enabled. If you want to make images larger than you usually can (for example 1024x1024 instead of 512x512), use --medvram --opt-split-attention. Second, I don't get the same error. In webui-user.bat, set COMMANDLINE_ARGS= --precision full --no-half --medvram --opt-split-attention (this means you start SD from webui-user.bat). Sorry for my late response, but I actually figured it out right before you replied. I don't know how this is even possible, but other resolutions can be generated -- their visual quality is just absolutely inferior, and I'm not talking about the difference in resolution.

A brand-new model called SDXL is now in the training phase. I run it on a 2060 relatively easily (with --medvram). Medvram actually slows down image generation by breaking up the necessary VRAM into smaller chunks. If I use --medvram or higher (no opt command for VRAM) I get blue screens and PC restarts; I upgraded the AMD driver to the latest (23.7.2) but it did not help, even on the SDXL 1.0 base model. For a 12GB 3060, here's what I get. RealCartoon-XL is an attempt to get some nice images from the newer SDXL. If it still doesn't work, you can try replacing the --medvram in the above code with --lowvram. Note that this cannot be used together with --lowvram / sequential CPU offloading.
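Putting the update steps above in one place, a typical sequence from a Windows command prompt might look like this; the install path is only a placeholder for wherever your copy lives.

cd C:\stable-diffusion-webui
git pull
webui-user.bat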
8GB of VRAM is absolutely OK and works well, but using --medvram is mandatory. I was using A1111 for the last 7 months; a 512x512 took me 55 seconds with my 1660S, and SDXL+refiner took nearly 7 minutes for one picture. If I do a batch of 4, it's between 6 and 7 minutes. It feels like SDXL uses your normal RAM instead of your VRAM.

When I run SDXL 1.0 on automatic1111, about 80% of the time I get this error: RuntimeError: The size of tensor a (1024) must match the size of tensor b (2048) at non-singleton dimension 1. I am using AUTOMATIC1111 with an Nvidia 3080 10GB card, but image generations take 1hr+ at 1024x1024 (u/GreyScope -- probably why you noted it was slow). Note: --medvram here is an optimization aimed at cards with 6GB of VRAM or more; depending on your card you can change it to --lowvram (4GB and up), --lowram (16GB of system RAM and up), or remove it entirely (no optimization). In addition, the --xformers option enables xformers; with it added, the card's VRAM usage drops.

I have a weird config where I have both Vladmandic and A1111 installed and use the A1111 folder for everything, creating symbolic links for it. 1 picture in about 1 minute. Native SDXL support is coming in a future release. (A GitHub feature request titled "--no-half-vae-xl" was filed on Aug 24.) Are you using --medvram? I have very similar specs, by the way -- the exact same GPU -- and usually I don't use --medvram for normal SD 1.5. My faster GPU with less VRAM, at index 0, is the Windows default and continues to handle Windows video while GPU 1 is making art. Put the base and refiner models in stable-diffusion-webui\models\Stable-diffusion.

Another error I hit: RuntimeError: mat1 and mat2 shapes cannot be multiplied (231x1024 and 768x320); it might provide a clue. It consumes about 5GB of VRAM most of the time, which is perfect, but sometimes it spikes higher. In the webui-user.bat file, set COMMANDLINE_ARGS=--precision full --no-half --medvram --always-batch-cond-uncond. It uses about 7GB of VRAM and generates an image in 16 seconds with SDE Karras at 30 steps. I tried ComfyUI and it takes about 30s to generate 768x1048 images (I have an RTX 2060 with 6GB VRAM).

Hello everyone, my PC currently has a 4060 (the 8GB one) and 16GB of RAM. This model is open access. The downside compared to a 1.5 model is that SDXL is much slower and uses up more VRAM and RAM. I also added --medvram; the problem is when I try to do "hires fix" (not just upscale, but sampling it again, denoising and so on with the K-Sampler) up to a higher resolution like FHD. Do you have any tips for making ComfyUI faster, such as new workflows? We might release a beta version of this feature before 3.0. I've seen quite a few comments about people not being able to run Stable Diffusion XL 1.0. ComfyUI provides an interface that simplifies the process of configuring and launching SDXL, all while optimizing VRAM usage.
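As a sketch of the model placement just described, assuming the files were downloaded as sd_xl_base_1.0.safetensors and sd_xl_refiner_1.0.safetensors (your actual file names may differ):

move sd_xl_base_1.0.safetensors stable-diffusion-webui\models\Stable-diffusion\
move sd_xl_refiner_1.0.safetensors stable-diffusion-webui\models\Stable-diffusion\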
A minimal webui-user.bat looks like this:

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram-sdxl --xformers
call webui.bat

Also, as counterintuitive as it might seem, don't generate low-resolution images; test with 1024x1024 at least. With the 0.9 base+refiner my system would freeze, and render times would stretch to 5 minutes for a single render. This option significantly reduces VRAM requirements at the expense of inference speed. Another variant is set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention. Not sure why InvokeAI is ignored, but it installed and ran flawlessly for me on this Mac, as a longtime automatic1111 user on Windows. medvram and lowvram have caused issues when compiling the engine and running it. Since SDXL came out I think I've spent more time testing and tweaking my workflow than actually generating images. This happens only if --medvram or --lowvram is set.

For the actual training part, most of it is Hugging Face's code, again with some extra features for optimization. I applied these changes, but it is still the same problem. Inside your subject folder, create yet another subfolder and call it output. My card went from 640x640 to 1280x1280; without medvram it can only handle 640x640, which is half the size. SDXL 1.0 base without the refiner at 1152x768, 20 steps, DPM++ 2M Karras -- this is almost as fast as 1.5. Yes, less than a GB of VRAM usage. SD.Next is better in some ways -- most command line options were moved into settings so they're easier to find. SDXL will require even more RAM to generate larger images. 12GB is just barely enough to do Dreambooth training with all the right optimization settings, and I've never seen someone suggest using those VRAM arguments to help with training barriers.

Open webui-user.bat and let it run; it should take quite a while the first time. Without medvram, just loading SDXL already takes around 8GB of VRAM. OK, so I decided to download SDXL and give it a go on my laptop with a 4GB GTX 1050. Stability AI recently released its first official version of Stable Diffusion XL (SDXL), v1.0, just a week after the release of the SDXL testing version, v0.9. Yeah, 8GB is too little for SDXL outside of ComfyUI. The refiner file is sd_xl_refiner_1.0 with the 0.9 VAE. It can produce outputs very similar to the source content (Arcane) when you prompt "Arcane style", but flawlessly outputs normal images when you leave off that prompt text -- no model burning at all.

SDXL 1.0 on 8GB VRAM? Automatic1111 & ComfyUI. ComfyUI is recommended by Stability AI: a highly customizable UI with custom workflows. stable-diffusion-webui is the old favorite, but development has almost halted, it has only partial SDXL support, and it is not recommended. Comfy is better at automating workflow, but not at anything else. I have the same GPU, 32GB of RAM, and an i9-9900K, but it takes about 2 minutes per image on SDXL with A1111; it's probably an ASUS thing. Copying outlines with the Canny Control models also works. One failure ends in a Gradio traceback ("...routes.py", line 422, in run_predict: output = await app.get_blocks()...). Figured out anything with this yet? I just tried it again on A1111 with a beefy 48GB-VRAM RunPod instance and had the same result.

I am a beginner to ComfyUI and I'm using SDXL 1.0. It consumed 4/4 GB of graphics RAM on my Intel Core i5-9400 system. I think you forgot to set --medvram; that's why it's so slow. No -- with 6GB you are at the limit; one batch too large or a resolution too high and you get an OOM, so --medvram and --xformers are almost mandatory. However, for the good news: I was able to massively reduce this >12GB memory usage without resorting to --medvram with the following steps, starting from an initial environment baseline.
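Several of the reports above hinge on how much VRAM is actually in use at a given moment, so it can help to watch it live from a second command prompt while generating; on an NVIDIA card the bundled nvidia-smi tool can do this, and the one-second refresh below is just an example interval.

nvidia-smi -l 1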
Just copy the prompt, paste it into the prompt field, and click the blue arrow that I've outlined in red. But it is extremely light as we speak -- so much so that the Civitai folks probably wouldn't even consider it NSFW at all. SDXL and Automatic1111 hate each other. It's quite slow for a 16GB-VRAM Quadro P5000. Safetensors on a 4090: there's a shared-memory issue that slows generation down, and using --medvram fixes it (I haven't tested it on this release yet; it may not be needed). If you want to run safetensors, drop the base and refiner into the Stable Diffusion folder in models, use the diffusers backend, and set the SDXL pipeline. Recommended: SDXL 1.0. After the command runs, the log of a container named webui-docker-download-1 will be displayed on the screen.

You can make AMD GPUs work, but they require tinkering. You need a PC running Windows 11, Windows 10, Windows 8.1, or Windows 8. Mixed precision allows the use of tensor cores, which massively speeds things up; medvram literally slows things down in order to use less VRAM. On GTX 10XX and 16XX cards it makes generations 2 times faster. It runs fast. It defaults to 2, and that will take up a big portion of your 8GB. Two of these optimizations are the --medvram and --lowvram commands. This opens up new possibilities for generating diverse and high-quality images. A --full_bf16 option has been added. We'd appreciate your help if you can share a screenshot in this format: GPU (like RTX 4090 or RTX 3080) and so on.

@SansQuartier: a temporary solution is to remove --medvram (you can also remove --no-half-vae, it's not needed anymore). Then I'll change to a 1.5 model. Is there anyone who has tested this on a 3090 or 4090? I wonder how much faster it will be in Automatic1111. My old card takes about a minute to generate a 512x512 image without hires fix using --medvram, while my newer 6GB card takes less than 10 seconds. Stable Diffusion with ControlNet works on a GTX 1050 Ti 4GB. I don't use --medvram for SD 1.5. By the way, it occasionally used all 32GB of RAM with several gigabytes of swap. I've gotten decent images from SDXL in 12-15 steps, and nothing was slowing me down.

--always-batch-cond-uncond: disables the optimization above. It decreases performance. These flags don't slow down generation by much but reduce VRAM usage significantly, so you may just leave them on. Because SDXL has two text encoders, the result of the training can be unexpected. In my case I decided to use SD 1.5. It also has a memory leak, but with --medvram I can go on and on. When generating images it takes between 400 and 900 seconds to complete (1024x1024, 1 image, with low VRAM due to having only 4GB); I read that adding --xformers --autolaunch --medvram inside the webui-user.bat file would help speed it up a bit. Just check your VRAM and make sure optimizations like xformers are set up correctly, because other UIs like ComfyUI already enable them, so you don't really feel the higher VRAM usage of SDXL there. Don't turn on full precision or medvram if you want maximum speed. I've managed to generate a few images with my 3060 12GB using SDXL base at 1024x1024 with the --medvram command line argument and by closing most other things on my computer to minimize VRAM usage, but it is unreliable at best; --lowvram is more reliable, but it is painfully slow.
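For cards with plenty of VRAM the notes above point the other way: drop the memory flags and keep only the speed ones. A minimal webui-user.bat sketch for a 12GB-or-larger NVIDIA card, again assuming the stock launcher layout:

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem 12GB+ card: no --medvram or --lowvram, just the attention optimization
set COMMANDLINE_ARGS=--xformers
call webui.bat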
Another variant: set COMMANDLINE_ARGS= --medvram --upcast-sampling --no-half --precision full. First impression / test: making images with SDXL using the same settings (size, steps, sampler, no hires fix) as before. I shouldn't be getting this message in the first place. With Automatic1111 and SD.Next I only got errors, even with --lowvram parameters, but ComfyUI worked. (Also, why should I delete my yaml files?) Unfortunately, yes. User nguyenkm mentions a possible fix by adding two lines of code to Automatic1111's devices.py. Fast decoder enabled vs. fast decoder disabled: I've been having a headache with this problem for several days. It's not a medvram problem; I also have a 3060 12GB, and the GPU does not even require medvram, but xformers is advisable.

My graphics card is a 6800 XT. I started with set COMMANDLINE_ARGS=--opt-split-attention --medvram --disable-nan-check --autolaunch and generated a 768x512 image with Euler a. Without --medvram (but with xformers) my system was using ~10GB of VRAM with SDXL. --opt-channelslast is another performance-category command line argument. I installed SDXL in a separate directory, but that was super slow to generate an image -- around 10 minutes. Version 1.1.400 is developed for webui beyond 1.6. Run the following: python setup.py build.

--medvram makes the Stable Diffusion model consume less VRAM by splitting it into three parts -- cond (for transforming text into a numerical representation), first_stage (for converting a picture into latent space and back), and unet (for the actual denoising of latent space) -- and making it so that only one of them is in VRAM at any time, sending the others to CPU RAM. So at the moment there is probably no way around --medvram if you're below 12GB. You don't need lowvram or medvram. Running SDXL and 1.5 models in the same A1111 instance wasn't practical, so I ran one instance with --medvram just for SDXL and one without for SD 1.5. Higher-rank models require more VRAM. These allow me to actually use 4x-UltraSharp to do 4x upscaling with hires fix. It initially couldn't load the weights, but then I realized my Stable Diffusion wasn't updated to v1.6. You are running on CPU, my friend. You should definitely try them out if you care about generation speed. While my extensions menu seems wrecked, I was able to make some good stuff with SDXL, the refiner, and the new SDXL DreamBooth alpha.

It takes a prompt and generates images based on that description. The download step is started with docker compose --profile download up --build. This will save you 2-4 GB of VRAM. There is an opt-split-attention optimization that is on by default; it saves memory seemingly without sacrificing performance, and you can turn it off with a flag. Another related flag is --always-batch-cond-uncond.
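Putting the docker pieces together: from the repository folder, the download phase is launched as shown below, and the log of the webui-docker-download-1 container mentioned earlier then scrolls on screen. The second command is only an assumption about the usual follow-up step (the profile name for the AUTOMATIC1111 service varies by project version), so verify it against the project's README.

rem download models; shows the webui-docker-download-1 container log
docker compose --profile download up --build
rem assumed follow-up to start the web UI; profile name may differ
docker compose --profile auto up --build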
Workflow Duplication Issue Resolved: The team has resolved an issue where workflow items were being run twice for PRs from the repo. The refiner model is now officially supported. Stable Diffusion SDXL is now live at the official DreamStudio. Even then it's not full; I tried the different CUDA settings mentioned above in this thread and saw no change.

Benchmark: R5 5600, DDR4 32GB x2, 3060 Ti 8GB GDDR6; settings of 1024x1024, DPM++ 2M Karras, 20 steps, batch size 1; command line args --medvram --opt-channelslast --upcast-sampling --no-half-vae --opt-sdp-attention, set in webui-user.bat. If your GPU card has 8GB to 16GB of VRAM, use the command line flag --medvram-sdxl (e.g. 4-18 seconds per image with SDXL 1.0). It was technically a success, but realistically it's not practical. After that, SDXL stopped giving problems, and the model load time is around 30 seconds. Disabling "Checkpoints to cache in RAM" lets the SDXL checkpoint load much faster and not use a ton of system RAM.

Medvram sacrifices a little speed for more efficient use of VRAM. Note that the Dev branch is not intended for production work and may break other things that you are currently using. I found that on the old version a full system reboot sometimes helped stabilize generation (just putting this out here for documentation purposes). A typical startup log line: 2023-09-25 09:28:05,019 - ControlNet - INFO - ControlNet v1.1, num models: 9. But yes, this new update looks promising.
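The 3060 Ti benchmark above translates directly into a launcher file; spelled out as a webui-user.bat sketch it looks like this, though the exact flag set reflects that one machine rather than a universal recommendation.

@echo off
set COMMANDLINE_ARGS=--medvram --opt-channelslast --upcast-sampling --no-half-vae --opt-sdp-attention
call webui.bat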