Apple’s simple self-distillation boosts coding models without verification
Apple researchers are spotlighting Simple Self-Distillation: fine-tuning a coding model on its own unfiltered outputs. In early results reported on LiveCodeBench, pass@1 and pass@5 both improve sharply, with no human labels, reinforcement learning, or execution-based verifier involved.
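As described, the recipe reduces to a short data-collection loop: sample several completions per prompt, keep all of them unfiltered, and fine-tune the same model on the resulting pairs. Below is a minimal sketch under those assumptions; the helper names and the toy stand-in model are hypothetical, not from Apple's implementation.

```python
import random

def sample_completions(model, prompt, k, temperature=0.8):
    # Stand-in for stochastic decoding (e.g. model.generate with sampling):
    # draw k independent completions for one prompt.
    return [model(prompt, temperature) for _ in range(k)]

def build_self_distillation_set(model, prompts, k=5):
    # Key point of the recipe: keep EVERY sample. No execution-based
    # verifier, no reward model, no label filtering.
    dataset = []
    for prompt in prompts:
        for completion in sample_completions(model, prompt, k):
            dataset.append({"prompt": prompt, "completion": completion})
    return dataset  # would then feed standard supervised fine-tuning

# Toy "model": a random text generator standing in for a code LLM.
def toy_model(prompt, temperature):
    return f"def solve():  # attempt {random.randint(0, 9)}"

random.seed(0)
data = build_self_distillation_set(toy_model, ["two-sum", "fizzbuzz"], k=3)
print(len(data))  # 2 prompts x 3 samples = 6 unfiltered pairs
```

In a real setup the toy model would be replaced by the coding model itself, and `dataset` would be formatted for an ordinary supervised fine-tuning run; the distinguishing design choice is simply the absence of any filtering step between generation and training.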