# ML-Agents Release 15
## Package Versions
**NOTE:** It is strongly recommended that you use packages from the same release together for the best experience.
| Package | Version |
|---|---|
| com.unity.ml-agents (C#) | v1.9.0 |
| com.unity.ml-agents.extensions (C#) | v0.3.0-preview |
| ml-agents (Python) | v0.25.0 |
| ml-agents-envs (Python) | v0.25.0 |
| gym-unity (Python) | v0.25.0 |
| Communicator (C#/Python) | v1.5.0 |
## Major Changes
### com.unity.ml-agents (C#)
- The `BufferSensor` and `BufferSensorComponent` have been added (documentation). They allow the Agent to observe a variable number of entities; a usage sketch follows this list. For an example, see the Sorter environment. (#4909)
- The `SimpleMultiAgentGroup` class and `IMultiAgentGroup` interface have been added (documentation). These allow Agents to be given rewards and to end episodes in groups; a sketch also follows below. For examples, see the Cooperative Push Block, Dungeon Escape and Soccer environments. (#4923)
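A minimal C# sketch of observing a variable number of entities with `BufferSensorComponent`, assuming the component is attached to the Agent's GameObject with an `Observable Size` of 3 and a suitable `Max Num Observables` set in the Inspector; the agent class and the nearby-entity query are illustrative, not taken from the Sorter environment:

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

// Illustrative agent that observes a variable number of nearby entities.
public class EntityObserverAgent : Agent
{
    BufferSensorComponent m_BufferSensor;

    public override void Initialize()
    {
        // The BufferSensorComponent is attached to the same GameObject;
        // its ObservableSize / MaxNumObservables are set in the Inspector.
        m_BufferSensor = GetComponent<BufferSensorComponent>();
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Append one fixed-size observation per entity; the number of
        // entities can change from step to step.
        foreach (var entity in Physics.OverlapSphere(transform.position, 5f))
        {
            var offset = entity.transform.position - transform.position;
            // Each entry must contain exactly ObservableSize floats (3 here).
            m_BufferSensor.AppendObservation(new float[] { offset.x, offset.y, offset.z });
        }
    }
}
```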
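A similarly hedged sketch of group rewards with `SimpleMultiAgentGroup`; the manager class, the way agents are collected, and the reward values are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;

// Illustrative manager that trains a team of Agents as one group.
public class TeamManager : MonoBehaviour
{
    public List<Agent> TeamAgents; // assigned in the Inspector

    SimpleMultiAgentGroup m_Group;

    void Start()
    {
        m_Group = new SimpleMultiAgentGroup();
        foreach (var agent in TeamAgents)
        {
            m_Group.RegisterAgent(agent);
        }
    }

    // Called by illustrative game logic when the team succeeds.
    public void OnTeamGoalReached()
    {
        m_Group.AddGroupReward(1f); // reward shared by all registered agents
        m_Group.EndGroupEpisode(); // end the episode for the whole group
    }

    // Called when the episode runs out of time.
    public void OnTimeout()
    {
        // Interrupt rather than end, so the final state is not treated
        // as terminal during training.
        m_Group.GroupEpisodeInterrupted();
    }
}
```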
### ml-agents / ml-agents-envs / gym-unity (Python)
- The MA-POCA trainer has been added. This is a new trainer that enables Agents to learn how to work together in groups. Configure `poca` as the trainer in the configuration YAML after instantiating a `SimpleMultiAgentGroup` to use this feature; a configuration sketch follows this list. (#5005)
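A minimal sketch of selecting the MA-POCA trainer in the training configuration YAML; the behavior name and all hyperparameter values here are illustrative placeholders:

```yaml
behaviors:
  MyCooperativeBehavior:    # illustrative behavior name
    trainer_type: poca      # selects the MA-POCA trainer
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 256
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 2000000
```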
## Minor Changes
### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- Updated com.unity.barracuda to 1.3.2-preview. (#5084)
- Added 3D Ball to the `com.unity.ml-agents` samples. (#5077)
### ml-agents / ml-agents-envs / gym-unity (Python)
- The `encoding_size` setting for RewardSignals has been deprecated. Please use `network_settings` instead; see the sketch after this list. (#4982)
- Sensor names are now passed through to `ObservationSpec.name`. (#5036)
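A sketch of the migration for a reward signal configuration, using GAIL as an example; the strength, demo path, and network size are illustrative:

```yaml
reward_signals:
  gail:
    strength: 0.5
    demo_path: demos/Expert.demo  # illustrative path
    # encoding_size: 128          # deprecated setting
    network_settings:             # replacement for encoding_size
      hidden_units: 128
```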
## Bug Fixes
### ml-agents / ml-agents-envs / gym-unity (Python)
- An issue that caused GAIL to fail for environments where agents can terminate episodes by self-sacrifice has been fixed. (#4971)
- Clarified the error message shown when observations of different shapes are sent to the trainer. (#5030)
- An issue that prevented curriculums from incrementing with self-play has been fixed. (#5098)