Description
I was reviewing the code in your SpeechLLM repository (specifically trainer.py) and had a question about the test_step function. In the current implementation, label_ids is passed to the forward method during testing:
```python
outputs = self.forward(embeds, atts, label_ids)  # Line 175 in test_step
```
From my understanding, providing the ground-truth labels (label_ids) to the forward pass during the test phase could let the model "see" the answers, potentially leading to artificially inflated performance metrics. Typically, labels are only used for loss calculation during training or validation, not during testing, where the model should generate predictions independently.
Could you clarify whether this is an intentional design choice or if it might require modification? I’d greatly appreciate any insights you could share.
Thank you for your time and for open-sourcing this valuable work!