CivitAI requires metadata in a very specific format for everything remotely spicy. The merits of this are questionable because metadata is just a text string in the image file, which makes it trivial to manipulate, but this post isn't about that. You gotta have it, and that's the end of that. Question is just how to keep the hassle to a minimum.
You can add metadata to the file before upload or add it to your posts manually, but editing every file by hand is tedious, and in some cases (like bounty submissions) you can't add metadata after the fact at all. So having it added automatically is highly desirable even if it's not entirely correct (if 100% accurate info is a concern, just leave the entire workflow in the image). With that said, this assumes you're using ComfyUI with edelvarden's "ComfyUI Image Metadata Extension" saving node. If you're using ComfyUI and reading this, you should definitely have it, since it automatically fills out all the relevant CivitAI metadata on your images. There are dozens of other custom nodes that claim to do this, but when I last tried a bunch of them out (admittedly not very recently), this one was by far the best option.

It's great, but the issue is that it does not always work. The node works by following its connections backwards until it finds something on its list of sampler nodes, then continues following the connections to anything on its list of prompt nodes. If your workflow uses something it does not know (for example, the Impact Pack regional sampler node), it won't find a sampler and you won't get metadata. You will get a message in your console if that's the case:
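To make the failure mode concrete, here's a rough sketch of the kind of back-traversal described above. This is not the extension's actual code; it assumes ComfyUI's API-format workflow JSON, where a node input that references another node is a `[node_id, output_index]` pair, and the class names in `KNOWN_SAMPLERS` are just placeholders for the extension's real list.

```python
# Hypothetical sketch of the traversal, NOT the extension's actual code.
KNOWN_SAMPLERS = {"KSampler", "KSamplerAdvanced"}  # the real list is longer

def find_sampler(graph, node_id):
    """Walk input links backwards from node_id until a known sampler is found."""
    node = graph[node_id]
    if node["class_type"] in KNOWN_SAMPLERS:
        return node_id
    for value in node["inputs"].values():
        # In API-format JSON, a link is a [source_node_id, output_index] pair.
        if isinstance(value, list):
            found = find_sampler(graph, value[0])
            if found is not None:
                return found
    return None  # no known sampler reachable -> no metadata

# Toy graph: SaveImage <- VAEDecode <- KSampler
graph = {
    "1": {"class_type": "KSampler", "inputs": {"steps": 20}},
    "2": {"class_type": "VAEDecode", "inputs": {"samples": ["1", 0]}},
    "3": {"class_type": "SaveImage", "inputs": {"images": ["2", 0]}},
}
print(find_sampler(graph, "3"))  # prints: 1
```

Swap the KSampler for an unrecognized class and `find_sampler` returns `None`, which is exactly the "no metadata" situation.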

If that happens, I found it helpful to add a "dummy" standard KSampler node that's just there for the saving node to grab something from. But just adding it is not enough - there needs to be some route to the final node so it can be found. So my trick here is to use the "Latent Blend" node to "mix" the fake sampler's output into the real output at 0% strength, which does not have an effect on the image but is enough to be detected.
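Why the 0% blend is a no-op numerically: a latent blend is just a weighted sum of the two latents, so with the fake branch weighted at zero the output is bit-for-bit the real latent, while the graph still gains a traceable link from the fake sampler. A minimal numeric sketch (the real node's exact argument order and parameter name may differ):

```python
import numpy as np

def latent_blend(samples1, samples2, blend_factor):
    # Weighted sum: blend_factor weights samples1, the rest goes to samples2.
    return samples1 * blend_factor + samples2 * (1 - blend_factor)

real = np.random.rand(1, 4, 64, 64)  # latent from the real (regional) sampler
fake = np.random.rand(1, 4, 64, 64)  # latent from the dummy 1-step KSampler

# Mix the fake branch in at 0%: output is numerically identical to the real
# latent, but the fake sampler is now connected to the final image.
out = latent_blend(fake, real, 0.0)
assert np.array_equal(out, real)
```

Multiplying by exactly 0.0 and 1.0 is lossless in floating point, so there is genuinely no effect on the image, not even rounding noise.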

This is the simplest method you can do with built-ins - a Latent Switch is more intuitive but needs a custom node, and working in pixel space is easily accomplished with the built-in Image Blend but needs an additional VAE Decode. Downside: the fake sampler still runs, so I set its steps to 1 to minimize wasted time, which means your metadata will say "Steps: 1". I don't think that's all that terrible since the value is fake anyway, and a step count doesn't mean much once your workflow is this complex.
However, if you do want to go the custom node route, I found something even better: Impact's "Switch (Any)". It's just a switch, so it does the job, but what I didn't know is that this particular switch actually keeps the unselected sampler from running. You can patch in your real step number from somewhere and it won't add any time. So if you already have the Impact Pack or want to try it out, this is the best solution:
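The reason the fake sampler never runs is lazy evaluation: the switch only pulls on the input it actually selects, so the unselected branch is never executed. The idea can be illustrated with plain callables (a toy model, not Impact's implementation):

```python
def lazy_switch(select, *branches):
    """Evaluate only the selected branch; the others never run."""
    return branches[select - 1]()  # 1-indexed, like a select widget

def real_sampler():
    return "real latent"

def fake_sampler():
    raise RuntimeError("this branch should never execute")

# Selecting input 1 returns the real result; fake_sampler is never called,
# so (unlike the Latent Blend trick) it costs no extra sampling time.
result = lazy_switch(1, real_sampler, fake_sampler)
print(result)  # prints: real latent
```

Contrast with the blend approach, where both branches feed one node and therefore both must be computed before blending.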

Additional notes:
The "correct" way to do this is to add your unknown nodes to the extension's list. Personally, I'd rather have something that works every time than dick around with Python inside custom nodes for no real additional benefit (and workflows that rely on local patches are a pain to share), but hey, don't say I didn't tell you. Also, if set up right, this method should theoretically work even with functioning prompts. If you know, you know.
What you actually put into the fake sampler node is up to you. You can add a dedicated prompt node and write anything you want into it, or attach something you already have. Same for the other settings: set them as you like, or define them centrally and patch them into both the fake and real samplers like I did.
I also attach the fake sampler directly to the model and prompts - the metadata extension seems to have trouble following Impact's basic pipes and won't be able to distinguish positive and negative prompts through them. That's good to know in any case if you're using the Impact Pack.


