partially observable half field offense #117
Open
@saeidtafazzol

Description

I want to train agents in partially observable half field offense. However, to my knowledge, half field offense without the fullstate flag has the following two problems:

1. If I'm correct, HFO uses self_pos, which is calculated in the base2d code, and then computes the relative positions of the other players and landmarks w.r.t. this self_pos. In other words, HFO exposes a filtered state space rather than the observations themselves. For example, the agent may never have seen the goal-center landmark, but because its position on the pitch is known in advance, the agent still gets the goal center's relative position w.r.t. itself through this self_pos (see the sketch after this list).

2. The agent relies on base2d's TurnNeck control for its vision. I think we should enable the agent to control its own head, so that it learns for itself how to manage the flow of information.
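
To make point 1 concrete, here is a rough sketch of the filtering I mean. This is not the actual HFO/base2d code, just an illustration: because a landmark's pitch coordinates are known constants, a relative feature for it can be produced from the localized self_pos even when the landmark was never inside the view cone.

```python
import math

# Illustration only -- not the actual HFO/base2d implementation.
GOAL_CENTER = (52.5, 0.0)  # known, fixed pitch coordinates of the goal center

def goal_center_feature(self_pos, self_theta):
    """Distance/angle to the goal center derived from the localized
    self_pos, not from what the agent actually saw this cycle."""
    dx = GOAL_CENTER[0] - self_pos[0]
    dy = GOAL_CENTER[1] - self_pos[1]
    dist = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) - self_theta  # angle relative to body facing
    return dist, angle

# Even if the goal center was never inside the agent's view cone,
# the feature set still reports a value for it:
print(goal_center_feature(self_pos=(20.0, -5.0), self_theta=0.0))
```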

If these issues are real, I would be more than happy to contribute. I recommend creating a third feature_set that is based solely on the agent's current observation, without any filtering, and adding an option for the agent to control its own neck; a rough mock-up of both is below.
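
Roughly what I have in mind, expressed against the existing Python interface (the connection arguments follow the repository's example agent; the third feature set and a TURN_NECK action do not exist yet and only appear here as comments marking where the proposal would slot in):

```python
import hfo

env = hfo.HFOEnvironment()
# Today this must be LOW_LEVEL_FEATURE_SET or HIGH_LEVEL_FEATURE_SET;
# the proposal is a third, unfiltered feature set that could be passed here.
env.connectToServer(hfo.LOW_LEVEL_FEATURE_SET,
                    'bin/teams/base/config/formations-dt',
                    6000, 'localhost', 'base_left', False)

status = hfo.IN_GAME
while status == hfo.IN_GAME:
    obs = env.getState()  # with the new set: only what is currently visible
    # Today only body actions are exposed; the proposal would add a neck
    # action so the agent can aim its own view cone, e.g.
    #   env.act(TURN_NECK, 30.0)   (hypothetical)
    env.act(hfo.TURN, 15.0)
    status = env.step()
env.act(hfo.QUIT)
```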

Thanks
