ComfyUI upscaling: a Reddit discussion roundup

One does an image upscale and the other a latent upscale.

Aug 5, 2024: Flux has been out for under a week and we're already seeing some great innovation in the open-source community.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

Grab the image from your file folder and drag it onto the ComfyUI window.

The only way I can think of is just an upscale with a model (4xUltraSharp), get my image to 4096, and then downscale with nearest-exact back to 1500. For some context, I am trying to upscale images of an anime village, something like Ghibli style.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I upscaled to a resolution of 10240x6144 px for us to examine the results.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details).

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

And here's my first question: is one better than the other as far as final upscaled image quality? I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, by the way, for this type of work).

Instead, I use Tiled KSampler with 0.…

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.
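The "upscale to 4096 with a model, then downscale back to 1500" approach above is just arithmetic; a minimal sketch in plain Python (the function name is mine, not a ComfyUI node):

```python
# Sketch: plan the fractional "upscale by" factor needed after a model
# upscale to land on an exact target size, as in "4xUltraSharp to 4096,
# then back down to 1500". Pure arithmetic, not ComfyUI code.

def plan_downscale(src: int, model_factor: int, target: int) -> float:
    """Return the factor so that src * model_factor * factor == target."""
    upscaled = src * model_factor
    if upscaled < target:
        raise ValueError("model upscale does not reach the target size")
    return target / upscaled

# A 1024 px source through a 4x model gives 4096 px; shrinking to 1500 px
# then needs a factor of about 0.3662:
factor = plan_downscale(1024, 4, 1500)
print(round(factor, 4))  # 0.3662
```

The same helper also covers the common "4x model, then 0.5" case (`plan_downscale(512, 4, 1024)` gives `0.5`).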
positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description that applies in the area defined by the coordinates starting from x:0px y:320px to x:768px y… Thanks.

You can also run a regular AI upscale and then a downscale (4x * 0.5) with an ESRGAN model.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). So instead of one girl in an image you get 10 tiny girls stitched into one giant upscaled image.

0.5, euler, sgm_uniform, or CNet strength 0.9, end_percent 0.9.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Usually I use two of my workflows: "latent upscale" and then denoising 0.5, or "upscaling with model" and then denoising 0.2 and resampling faces 0.…

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. Subsequently, I'd cherry-pick the best one and employ Ultimate SD Upscale for a 2x upscale. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8. But it's weird.

Jan 13, 2024: So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors.

Many people also have a hard time learning from written documents and need visual learning.

It depends on how large the face in your original composition is. Upscale and then fix will work better here.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Latent quality is better, but the final image deviates significantly from the initial generation.
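The fractional downscale after a model upscale is usually done with bicubic or nearest-exact resampling; here is a toy nearest-neighbor resize in plain Python to show the idea (this is an illustration of the concept, not ComfyUI's actual implementation):

```python
# A minimal nearest-neighbor resize on a 2D grid of pixel values, to
# illustrate the "model upscale, then fractional downscale" step.
# Real workflows would use ComfyUI's bicubic or nearest-exact modes.

def resize_nearest(pixels, new_w, new_h):
    """Resize a 2D grid of pixel values by nearest-neighbor sampling."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
# Downscale by 0.5 (4x4 -> 2x2): every second pixel survives.
print(resize_nearest(img, 2, 2))  # [[1, 3], [9, 11]]
```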
The workflow is kept very simple for this test: Load Image, Upscale, Save Image. The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely.

They also want the details on how and why to do something, besides just a guide to load this JSON and use it.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

For example, if you start with a 512x512 empty latent image, then apply a 4x model, apply "upscale by" 0.…

I want to upscale my image with a model and then select the final size of it. The downside is that it takes a very long time. I had the same problem, and those steps tank performance as well.

PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?

So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? I try to use ComfyUI to upscale (using SDXL 1.0).

Also, Ultimate SD Upscale is a node; if you don't have enough VRAM, it tiles the image so that you don't run out of memory.

Sure, it comes up with new details, which is fine, even beneficial for a 2nd pass in a t2i process, since the miniature 1st pass often has some issues due to imperfections.

- Image upscale is less detailed, but more faithful to the image you upscale.

Here are details on the workflow I created: this is an img2img method where I use Blip Model Loader from WAS to set the positive caption. It will replicate the image's workflow and seed. It's nothing spectacular but gives good consistent results without…

These comparisons are done using ComfyUI with default node settings and fixed seeds.
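The tiling trick mentioned above (Ultimate SD Upscale splitting the image so each sampling pass fits in VRAM) comes down to covering the image with overlapping rectangles. A sketch, assuming a 512 px tile and 64 px overlap; this only computes tile rectangles and is not the extension's actual code:

```python
# Sketch of the tiling idea behind Ultimate SD Upscale: cover a large
# image with overlapping tiles so each diffusion pass stays small.
# Tile size, overlap, and helper name are assumptions for illustration.

def tile_boxes(width, height, tile=512, overlap=64):
    """Return (left, top, right, bottom) boxes covering the image."""
    step = tile - overlap
    xs = range(0, max(width - overlap, 1), step)
    ys = range(0, max(height - overlap, 1), step)
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in ys for x in xs]

# A 1024x1024 image needs a 3x3 grid of 512px tiles with 64px overlap:
print(len(tile_boxes(1024, 1024)))  # 9
# A 512x512 image fits in a single tile:
print(tile_boxes(512, 512))  # [(0, 0, 512, 512)]
```

Each box would be sampled independently and blended back; the overlap is what hides the seams between tiles.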
I generate an image that I like, then mute the first KSampler and unmute the Ult. SD Upscaler.

Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly and it works like a charm.

If it's a close-up, then fix the face first.

Still working on the whole thing, but I got the idea down. Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. Thanks!

Hi, does anyone know if there's an Upscale Model Blend node, like with A1111? Being able to get a mix of models in A1111 is great, where two models…

Latent upscale is different from pixel upscale. That's because latent upscale turns the base image into noise (blur).

This is just a simple node build off what's given and some of the newer nodes that have come out.

Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

x4-upscaler-ema.safetensors (SD 4x upscale model). The standard ESRGAN 4x is a good jack of all trades that doesn't come with a crazy performance cost, and if you're low on VRAM, I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?

Look at this workflow: …
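The Upscale Model Blend idea asked about above is, at its core, a per-pixel linear mix of two upscalers' outputs. A toy sketch on flat pixel lists (my own helper, not an A1111 or ComfyUI node):

```python
# The "blend two upscale models" idea, sketched as a plain pixel mix:
# out = alpha * model_a + (1 - alpha) * model_b.
# The pixel values below are made up for illustration.

def blend(pixels_a, pixels_b, alpha=0.5):
    """Linearly blend two images given as flat lists of 0-255 values."""
    return [round(alpha * a + (1 - alpha) * b)
            for a, b in zip(pixels_a, pixels_b)]

sharp = [200, 10, 255]   # output of a crisper upscaler (made-up values)
smooth = [180, 30, 235]  # output of a softer upscaler (made-up values)
print(blend(sharp, smooth, 0.75))  # [195, 15, 250]
```

A 0.75 weight keeps most of the crisp model's character while the softer model tames its artifacts, which is the appeal of blending in the first place.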
The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

Then use the SD upscaler and upscale from that.

(206x206) when I'm then upscaling in Photopea to 512x512, just to give me a base image that matches the 1.5 models.

Thanks! Here is a workflow that I use currently with Ultimate SD Upscale. And at the end of it, I have a latent upscale step that I can't for the life of me figure out. The final steps are as follows: apply the inpaint mask, run it through a KSampler, take the latent output and send it to the latent upscaler (doing a 1.5x upscale), then to a KSampler running 20-30 steps at …

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass.

Thanks. "Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I'm new to ComfyUI and I'm aware that people create amazing stuff with just prompts and detailers.

This means that your prompt (a.k.a. …
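The MultiAreaConditioning quirk above (sizes defined in pixels) matters because Stable Diffusion latents are 8x smaller per side than the image, so pixel regions only map cleanly when they are multiples of 8. A small conversion sketch (the helper is mine; the example height of 512 is an assumption, since the original comment's region is truncated):

```python
# Pixel-space conditioning regions vs latent space: the SD VAE downscales
# by a factor of 8 per side, so a pixel box maps to latent units by
# dividing each coordinate by 8. Helper name is my own.

LATENT_SCALE = 8

def to_latent(x, y, width, height):
    """Convert a pixel-space box to latent-space units."""
    box = (x, y, width, height)
    if any(v % LATENT_SCALE for v in box):
        raise ValueError("pixel coordinates should be multiples of 8")
    return tuple(v // LATENT_SCALE for v in box)

# The x:0 y:320, 768-wide region from the thread (height assumed 512):
print(to_latent(0, 320, 768, 512))  # (0, 40, 96, 64)
```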
Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

This is done after the refined image is upscaled and encoded into a latent. No attempts to fix JPG artifacts, etc.

Second pic: I was working on exploring and putting together my guide on running Flux on RunPod ($0.34 per hour).

Depending on the noise and strength, it ends up treating each square as an individual image.

I too use SUPIR, but just to sharpen my images on the first pass. It's high quality, and it's easy to control the amount of detail added using control scale and restore cfg, but it slows down at higher scales faster than Ultimate SD Upscale does.

Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again at denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run sampling again at low denoise if you want higher resolution.

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

The upscale quality is mediocre, to say the least. I have a custom image resizer that ensures the input image matches the output dimensions.

It's why you need at least 0.…

However, I switched to the Ultimate SD Upscale custom node.

The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used.

In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt.
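The 2-minute vs 15-minute gap above tracks the pixel count: area grows with the square of the scale factor, so a 4x upscale pushes four times as many pixels through the sampler as a 2x one, even with tiling. A quick sanity check (my own helper, plain arithmetic):

```python
# Why a 4x upscale costs far more than a 2x one: pixel area grows with
# the square of the scale factor, and tiled sampling time scales roughly
# with the number of tiles, i.e. with area.

def area_ratio(scale_a: float, scale_b: float) -> float:
    """How many times more pixels scale_b produces than scale_a."""
    return (scale_b / scale_a) ** 2

print(area_ratio(2, 4))  # 4.0 -> a 4x upscale samples 4x the pixels of a 2x
print(area_ratio(1, 4))  # 16.0 -> and 16x the pixels of the base image
```

That only accounts for a 4x slowdown; the rest of the reported gap plausibly comes from per-tile overhead and memory pressure, which the arithmetic above does not model.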
Also, both have a denoise value that drastically changes the result. Hope someone can advise.

With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.

I like how IPAdapter with masking allows me to not have to write detailed prompts, and yet it still maintains the fidelity of the subject and background, or any other masked elements for that matter.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

It works more like DLSS, tile by tile, and faster than the iterative one. 2 options here.

…and discovered this workflow by @plasm0 that runs locally and supports upscaling as well. I haven't been able to replicate this in Comfy.

I solved that by using only 1 step and adding multiple Iterative Upscale nodes.

As my test bed, I'll be downloading the thumbnail from, say, my Facebook profile picture, which is fairly small.

That's because of the model upscale.

Jan 8, 2024: Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality.

I only have 4GB VRAM, so I haven't gotten SUPIR working on my local system.

The final node is where ComfyUI takes those images and turns them into a video.

After 6 days of hard work (2 days building, 1 day testing, 2 days recording, 1 day editing, and very little sleep), well, I finally managed to upload this!
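The "1 step, multiple Iterative Upscale nodes" fix above amounts to growing the image in small hops instead of one big jump. This sketch just computes the intermediate sizes such a chain would produce (the function and the x1.25 step are my own illustration, not a ComfyUI API):

```python
# Sketch of the iterative-upscale idea: grow the image by a small factor
# per pass (e.g. x1.25) until the target size is reached, instead of one
# large jump that tends to smear details.

def iterative_sizes(start: int, target: int, factor: float = 1.25):
    """Return the side lengths produced by repeated x-factor upscales."""
    sizes = [start]
    while sizes[-1] < target:
        sizes.append(round(sizes[-1] * factor))
    return sizes

# Four x1.25 passes take 512px past a 1024px target:
print(iterative_sizes(512, 1024))  # [512, 640, 800, 1000, 1250]
```

Each intermediate size would get its own light sampling pass, which is why the thread pairs this with very low step counts.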
Full tutorial in the YouTube description (it's entirely free, of course), and the video goes into 1h of detailed instructions on how to build it yourself (because I prefer for someone to learn how to fish than to give them a fish 😂).

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

There are also "face detailer" workflows for faces specifically. There is a face detailer node.

0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

There's "latent upscale by", but I don't want to upscale the latent image. Try immediately VAEDecode after latent upscale to see what I mean.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

1.5 models (seems pointless to go larger).

Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs.

It uses CN tile with Ultimate SD Upscale. I then use a tiled ControlNet and use Ultimate Upscale to upscale by 3-4x, resulting in up to 6Kx6K images that are quite crisp.

I did once get some noise I didn't like, but I rebooted and all was good on the second try.

Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

u/wolowhatever, we set 5 as the default, but it really depends on the image and image style tbh. I tend to find that most images work well around Freedom of 3.
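The "4x upscaler for a 2x upscale" rule of thumb above can be written down directly: pick a model whose native factor is about double your intended final scale, then shrink the result back with a fractional "upscale by". A sketch (the helper and the set of available factors are my own assumptions for illustration):

```python
# Rule of thumb from the thread: use an upscale model whose native factor
# is roughly 2x your intended final scale, then downscale the result.
# The available factors below are an assumption for illustration.

AVAILABLE_FACTORS = (2, 4, 8)  # e.g. 2x, 4x (Siax/UltraSharp), 8x models

def pick_model_factor(desired_scale: float) -> int:
    """Choose the smallest model factor >= 2x the desired final scale."""
    for factor in AVAILABLE_FACTORS:
        if factor >= 2 * desired_scale:
            return factor
    return AVAILABLE_FACTORS[-1]

print(pick_model_factor(2))  # 4 -> 4x model, then "upscale by" 0.5
print(pick_model_factor(4))  # 8 -> 8x model, then "upscale by" 0.5
```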
If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You cannot go higher than 512, up to 768, resolution either (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

You end up with images anyway after KSampling, so you can use those upscale nodes.

I recently started tinkering with Ultimate SD Upscaler as well as other upscale workflows in ComfyUI.

This will allow detail to be built in during the upscale. Both of these are of similar speed. But I probably wouldn't upscale by 4x at all if fidelity is important.

Thanks for all your comments.