How to use reasoning?
Thanks for your support! Similarly, please check this (https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/discussions/8#6927ecfb89d327829b15e815); we will release more details later on GitHub ~ Noted!
Thanks :)
Thanks~
Will you release a guide on how to use reasoning in ComfyUI?
I tried using Qwen3-4B-Thinking-2507.safetensors (using the safetensors library to merge the weights into a single file) as the text encoder in the ComfyUI workflow, but I still couldn't reproduce the result.
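A minimal sketch of the merging step with the safetensors library, assuming the shard filenames and output path below (they are placeholders, not the actual checkpoint names) and that the shards hold disjoint tensor sets, as sharded Hugging Face checkpoints do:

```python
from safetensors.torch import load_file, save_file

# Hypothetical shard filenames of the sharded checkpoint.
shards = [
    "model-00001-of-00003.safetensors",
    "model-00002-of-00003.safetensors",
    "model-00003-of-00003.safetensors",
]

merged = {}
for shard in shards:
    merged.update(load_file(shard))  # collect every tensor into one state dict

# Write a single file that can be dropped into ComfyUI's text_encoders folder.
save_file(merged, "Qwen3-4B-Thinking-2507.safetensors")
```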
This model uses the hybrid-reasoning Qwen3 version from before 2507. I expect instructions for using the reasoning mode will be released later; I don't know how to do it yet either. In any case, ComfyUI doesn't support reasoning at the moment.
I found that enhanced prompts can be generated by calling an LLM API through the nodes in comfyui_LLM_party; you can try it.
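The same idea outside ComfyUI, as a minimal sketch against an OpenAI-compatible chat endpoint; this is not comfyui_LLM_party's node API, and the base URL, API key, model name, and system prompt below are assumptions:

```python
import requests

def enhance_prompt(prompt: str) -> str:
    # Ask an OpenAI-compatible endpoint (hypothetical local server) to rewrite
    # a short prompt into a more detailed one for the image model.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "qwen3-4b",  # hypothetical model name
            "messages": [
                {"role": "system",
                 "content": "Rewrite the user's image prompt into a detailed, "
                            "vivid English prompt for a text-to-image model."},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(enhance_prompt("a cat sitting on a windowsill at sunset"))
```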
Right now this doesn't seem well suited to multimodal models. At least Qwen3-VL isn't a good fit for this job; the results of prompt enhancement via reverse inference are fairly poor.


