from the root of the project repository use:
```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

unity_environment = UnityEnvironment(environment_filename, worker_id)
env = UnityToGymWrapper(unity_environment, use_visual, uint8_visual)
```
* `environment_filename` refers to the path to the Unity environment.
* `worker_id` refers to the port to use for communication with the environment.
  Defaults to `0`.
* `unity_environment` refers to the Unity environment to be wrapped.
* `use_visual` refers to whether to use visual observations (True) or vector
  observations (False) as the default observation provided by the `reset` and
  `step` functions. Defaults to `False`.
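For example, once you have a built GridWorld binary, the wrapped environment can be driven with the standard Gym `reset`/`step` loop. The snippet below is a minimal sketch; the binary path `./envs/GridWorld` is an assumption about where the build was placed:

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# The path below is an assumption; point it at your own Unity build.
unity_env = UnityEnvironment("./envs/GridWorld", worker_id=0)
env = UnityToGymWrapper(unity_env, use_visual=True, uint8_visual=True)

obs = env.reset()
for _ in range(100):
    # Sample a random action from the Gym action space and advance the simulation.
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()
```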
The following example, adapted from the Baselines DQN example, creates and trains an agent on the GridWorld environment:

```python
from baselines import deepq
from baselines import logger

from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

unity_env = UnityEnvironment("./envs/GridWorld")
env = UnityToGymWrapper(unity_env, use_visual=True, uint8_visual=True)
logger.configure('./logs')  # Change to log in a different directory
act = deepq.learn(
    env,
    "cnn",  # network type; the remaining DQN hyperparameters are omitted here
    total_timesteps=100000,
)
act.save("unity_model.pkl")
```
Other algorithms in the Baselines repository can be run using scripts similar to
the examples from the baselines package. In most cases, the primary changes needed
to use a Unity environment are to import `UnityToGymWrapper`, and to replace the
environment creation code, typically `gym.make()`, with a call to
`UnityToGymWrapper(unity_environment)`, passing the environment as input.
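As a sketch of that replacement (the environment path below is an assumption, not something prescribed here):

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# env = gym.make("CartPole-v1")                # typical Gym environment creation
unity_env = UnityEnvironment("./envs/3DBall")  # assumed path to a Unity build
env = UnityToGymWrapper(unity_env)             # exposes the same Gym interface
```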
A typical rule of thumb is that for vision-based environments, modification
should be done to Atari training scripts, and for vector observation
environments, modification should be done to MuJoCo training scripts. Some
algorithms make use of a `make_env()`-style helper to construct the environment;
below is such a method using the PPO2 baseline:
```python
import os

from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper
from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.bench import Monitor
from baselines import logger
"""
def make_env(rank, use_visual=True): # pylint: disable=C0111
def _thunk():
env = UnityEnv(env_directory, rank, use_visual=use_visual, uint8_visual=True)
unity_env = UnityEnvironment(env_directory)
env = UnityToGymWrapper(unity_env, rank, use_visual=use_visual, uint8_visual=True)
env = Monitor(env, logger.get_dir() and os.path.join(logger.get_dir(), str(rank)))
return env
return _thunk
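The helper above can then be used to build a vectorized environment and train with PPO2. The following is a sketch only: the binary path, the number of parallel environments, and the hyperparameters are illustrative assumptions rather than recommended values.

```python
import baselines.ppo2.ppo2 as ppo2

def main():
    # Four parallel visual GridWorld environments; path and count are assumptions.
    env = make_unity_env('./envs/GridWorld', 4, True)
    ppo2.learn(
        network="cnn",  # convolutional policy for visual observations
        env=env,
        total_timesteps=100000,
        lr=2.5e-4,
    )

if __name__ == '__main__':
    main()
```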
instantiated, just as in the Baselines example. At the top of the file, insert
```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper
```
to import the Gym Wrapper. Navigate to the `create_atari_environment` method in the same file, and replace the environment creation so that it instantiates and wraps a Unity environment:
```python
game_version = 'v0' if sticky_actions else 'v4'
full_game_name = '{}NoFrameskip-{}'.format(game_name, game_version)
unity_env = UnityEnvironment('./envs/GridWorld')
env = UnityToGymWrapper(unity_env, use_visual=True, uint8_visual=True)
return env
```
with discrete action spaces, and specifically the Discrete Gym space. For environments
that use branched discrete action spaces (e.g.
[VisualBanana](../docs/Learning-Environment-Examples.md)), you can enable the
`flatten_branched` parameter in `UnityToGymWrapper`, which treats each combination of branched
actions as separate actions.
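As an illustration (the environment path and branch sizes below are assumptions), an environment whose action space has two branches of sizes 2 and 3 would otherwise appear as `MultiDiscrete([2, 3])`; with `flatten_branched=True` the wrapper exposes a single `Discrete(6)` space covering every combination:

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

unity_env = UnityEnvironment("./envs/VisualBanana")  # assumed path to a branched-action build
env = UnityToGymWrapper(unity_env, use_visual=True, uint8_visual=True, flatten_branched=True)
print(env.action_space)  # e.g. Discrete(6) rather than MultiDiscrete([2, 3])
```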
Furthermore, when building your environments, ensure that your Agent is using visual
observations with greyscale enabled, and that the dimensions of the visual observations
are 84 by 84, since this is what Dopamine's Atari networks expect.