The string corresponds to a specific research paper titled "ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models." This paper introduces a method called ZIP, designed to improve how we tune large "black-box" models (like CLIP) when we don't have access to their internal code or gradients.

Community Feedback

Performance and Efficiency
Reviewers generally agreed that the method offers superior accuracy and efficiency across multiple tasks, supported by thorough ablation studies on design choices. In particular, the soft prompt reparameterization design choices were thoroughly tested, including detailed ablation studies, and reviewers highlighted that the "feature sharing" design was well-motivated and helped the model stay expressive despite the simplifications.

Critical Perspectives
Because black-box prompt tuning is a niche field, some reviewers found it difficult to judge exactly how "new" the method is compared to the very latest unpublished research. One reviewer also pointed out that the methods ZIP was compared against (like BLACKVIP and BPTVLM) were from 2023, and suggested that more recent 2024 baselines should have been included for a fairer comparison.
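The two techniques the reviews keep referencing — zeroth-order (gradient-free) optimization and soft prompt reparameterization — can be illustrated with a small sketch. Everything below is an assumption for illustration: the quadratic "black-box" loss stands in for querying a real model like CLIP, the rank-2 factorization is a generic reparameterization (not ZIP's actual design), and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box objective: we may only query a scalar loss,
# never its gradients. The quadratic target is purely illustrative.
TARGET = rng.standard_normal((4, 32))  # 4 prompt tokens, embedding dim 32

def black_box_loss(prompt_tokens):
    return float(np.mean((prompt_tokens - TARGET) ** 2))

# Soft prompt reparameterization (generic sketch): instead of optimizing
# the full 4x32 prompt (128 parameters), optimize a rank-2 factorization
# A @ B (4*2 + 2*32 = 72 parameters), shrinking the search space.
RANK = 2

def to_prompt(z):
    A = z[:4 * RANK].reshape(4, RANK)
    B = z[4 * RANK:].reshape(RANK, 32)
    return A @ B

def loss_fn(z):
    return black_box_loss(to_prompt(z))

# Zeroth-order (SPSA-style) gradient estimate: two loss queries per random
# direction, averaged over several directions — no model internals needed.
def zo_gradient(f, z, mu=1e-2, n_samples=8):
    grad = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.standard_normal(z.shape)
        grad += (f(z + mu * u) - f(z - mu * u)) / (2 * mu) * u
    return grad / n_samples

# Optimize the low-dimensional parameterization with plain gradient steps
# built from the zeroth-order estimates.
z = rng.standard_normal(4 * RANK + RANK * 32)
initial = loss_fn(z)
for _ in range(500):
    z -= 0.05 * zo_gradient(loss_fn, z)
assert loss_fn(z) < initial  # loss decreases using only forward queries
```

The reparameterization is what makes the zeroth-order part practical: fewer optimized parameters means fewer loss queries are needed for a usable gradient estimate, which is the efficiency angle the reviewers credit in the summary above.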