Source
https://isaac-sim.github.io/IsaacLab/main/source/overview/environments.html
Note
True multi-agent training is only available with the skrl library; see the Multi-Agents Documentation for more information. It supports the IPPO and MAPPO algorithms, which can be activated by adding the command line argument --algorithm IPPO or --algorithm MAPPO to the train/play script. If these environments are run with other libraries, or without the IPPO or MAPPO flags, they are converted to single-agent environments under the hood.
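
As a minimal sketch of such an invocation (the script path and task name below are assumptions based on the usual Isaac Lab layout and available multi-agent tasks, and may differ in your checkout):

    # hypothetical example: launch skrl multi-agent training with the IPPO flag
    ./isaaclab.sh -p scripts/reinforcement_learning/skrl/train.py --task Isaac-Cart-Double-Pendulum-Direct-v0 --algorithm IPPO

Omitting --algorithm IPPO (or --algorithm MAPPO), or using a different RL library's train script, would instead run the task as a single-agent environment as described above.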