Question about test_step in SpeechLLM's trainer.py · Issue #4 · skit-ai/SpeechLLM · GitHub
Open
@lisicheng-csn

Description

I was reviewing the code in your SpeechLLM repository (specifically trainer.py) and have a question about the test_step function. In the current implementation, label_ids is passed to the forward method during testing:

```python
outputs = self.forward(embeds, atts, label_ids)  # line 175, inside test_step
```

From my understanding, providing ground truth labels (label_ids) during the forward pass in a test/inference phase might allow the model to "see" the answers, potentially leading to artificially inflated performance metrics. Typically, labels are only used for loss calculation during training or validation, not during testing where the model should generate predictions independently.
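To make the concern concrete, here is a minimal self-contained toy (a hypothetical model, not the actual SpeechLLM code) contrasting a teacher-forced forward pass, where each step conditions on the ground-truth previous token, with free-running generation, where the model's own mistakes compound:

```python
class ToyDecoder:
    """Toy next-token model: predicts prev + 1, but has a 'learned'
    systematic error whenever the previous token is 3."""

    def step(self, prev):
        return 0 if prev == 3 else prev + 1

    def forward(self, prompt, label_ids):
        # Teacher forcing: after each prediction, the *ground-truth* token
        # replaces the model's own output as context, so one mistake never
        # propagates to later steps.
        preds, prev = [], prompt[-1]
        for gold in label_ids:
            preds.append(self.step(prev))
            prev = gold  # the model "sees" the answer here
        return preds

    def generate(self, prompt, max_new_tokens):
        # Free-running decoding: conditions only on its own previous output,
        # so errors compound -- this is what test-time metrics should measure.
        preds, prev = [], prompt[-1]
        for _ in range(max_new_tokens):
            prev = self.step(prev)
            preds.append(prev)
        return preds

model = ToyDecoder()
labels = [2, 3, 4, 5]
teacher_forced = model.forward([1], labels)        # [2, 3, 0, 5] -> 3/4 correct
free_running = model.generate([1], len(labels))    # [2, 3, 0, 1] -> 2/4 correct
```

In the teacher-forced pass the ground truth resets the context right after the model's error at token 3, so measured accuracy is higher than what free-running generation achieves, which is the inflation I am worried about.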

Could you clarify whether this is an intentional design choice or if it might require modification? I’d greatly appreciate any insights you could share.

Thank you for your time and for open-sourcing this valuable work!
