SFT plateau: Increasing data from 2M to 2.8M samples results in no further performance gains, confirming the plateau [22]. To break the plateau, the authors implement a two-stage Reinforcement Learning (RL) process [11]:
RL Stage 1: Uses 22k data pairs focusing on textual accuracy.
RL Stage 2: Uses 11k pairs with a balance of textual and visual rewards (a sketch of the combined reward follows below).
Training cost: The SFT stage requires 60 hours of training on 16 H800 GPUs. The RL stages take an additional 34 hours on 24 H800 GPUs [11].
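The paper's actual scoring functions are not reproduced in this summary; the snippet below is a minimal Python sketch of how a Stage 2 hybrid reward could combine the two signals. The names text_accuracy, visual_fidelity, hybrid_reward, and the weight parameter are hypothetical placeholders, not the authors' implementation.

def text_accuracy(pred: str, ref: str) -> float:
    # Placeholder textual reward: 1.0 for a normalized exact match, else 0.0.
    return 1.0 if pred.strip().lower() == ref.strip().lower() else 0.0

def visual_fidelity(pred_render: bytes, ref_render: bytes) -> float:
    # Placeholder visual reward: 1.0 when rendered outputs are byte-identical.
    return 1.0 if pred_render == ref_render else 0.0

def hybrid_reward(pred: str, ref: str,
                  pred_render: bytes, ref_render: bytes,
                  weight: float = 0.5) -> float:
    # Convex combination of the two signals: weight = 1.0 would correspond to a
    # textual-only reward (Stage 1); weight = 0.5 balances both (Stage 2).
    return (weight * text_accuracy(pred, ref)
            + (1.0 - weight) * visual_fidelity(pred_render, ref_render))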
Multimodal Structured Reinforcement Learning (MSRL)
The paper demonstrates that MSRL significantly outperforms pure SFT models by optimizing for both textual structure and visual fidelity, effectively surpassing the performance limit reached at 2.8M SFT samples [11, 25].

                    SFT Stage                            MSRL Stage
Max Dataset Size    2.8 million samples [11, 22]         33k curated samples (22k + 11k) [11]
GPU Requirement     16 H800 GPUs [11]                    24 H800 GPUs [11]
Training Goal       Min. Negative Log-Likelihood [22]    Hybrid Text-Visual Reward [11]
Outcome             Performance Plateaus [22]            Breaks SFT Performance Limit [11]
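As context for the Training Goal row, a generic formulation of the two objectives (notation assumed here, not taken from the paper): SFT minimizes the token-level negative log-likelihood, while MSRL maximizes an expected hybrid reward:

\mathcal{L}_{\mathrm{SFT}}(\theta) = -\sum_{t} \log p_\theta(y_t \mid y_{<t}, x)

J_{\mathrm{MSRL}}(\theta) = \mathbb{E}_{\hat{y} \sim p_\theta(\cdot \mid x)}\big[\, \lambda\, r_{\mathrm{text}}(\hat{y}) + (1 - \lambda)\, r_{\mathrm{vis}}(\hat{y}) \,\big]

where x is the input, y_t the reference tokens, \hat{y} a sampled output, and \lambda the text-visual weighting.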