
Merge pull request #1002 from Unity-Technologies/hotfix-0.4b

Hotfix 0.4.0b
GitHub committed 6 years ago
Current commit: 59f74e07
3 files changed, 5 insertions(+), 5 deletions(-)
  1. docs/Learning-Environment-Examples.md (2 changed lines)
  2. python/unityagents/rpc_communicator.py (3 changed lines)
  3. unity-environment/Assets/ML-Agents/Scripts/Agent.cs (5 changed lines)

docs/Learning-Environment-Examples.md (2 changed lines)


  * -0.0025 for every step.
  * +1.0 if the block touches the goal.
  * Brains: One brain with the following observation/action space.
- * Vector Observation space: (Continuous) 15 variables corresponding to position and velocities of agent, block, and goal.
+ * Vector Observation space: (Continuous) 70 variables corresponding to 14 ray-casts each detecting one of three possible objects (wall, goal, or block).
  * Vector Action space: (Continuous) Size of 2, corresponding to movement in X and Z directions.
  * Visual Observations (Optional): One first-person camera. Use `VisualPushBlock` scene.
  * Reset Parameters: None.
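
For reference (not part of this commit): a minimal sketch of how these spaces could be checked from Python against a locally built PushBlock executable. It assumes the `unityagents` API bundled with this release; the `file_name` value is a placeholder, and the `vector_observation_space_size` / `vector_action_space_size` attribute names are assumptions about the `BrainParameters` object rather than something confirmed by the diff.

```python
from unityagents import UnityEnvironment

# Placeholder path to a locally built PushBlock environment binary.
env = UnityEnvironment(file_name="PushBlock")

brain_name = env.brain_names[0]
brain = env.brains[brain_name]  # BrainParameters; attribute names assumed below

# Per the docs above: 70 continuous observations and 2 continuous actions.
print(brain.vector_observation_space_size)  # expected: 70
print(brain.vector_action_space_size)       # expected: 2

env.close()
```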

python/unityagents/rpc_communicator.py (3 changed lines)


 class UnityToExternalServicerImplementation(UnityToExternalServicer):
-    parent_conn, child_conn = Pipe()
+    def __init__(self):
+        self.parent_conn, self.child_conn = Pipe()

     def Initialize(self, request, context):
         self.child_conn.send(request)
unity-environment/Assets/ML-Agents/Scripts/Agent.cs (5 changed lines)


 void OnEnable()
 {
     textureArray = new Texture2D[agentParameters.agentCameras.Count];
-    for (int i = 0; i < brain.brainParameters.cameraResolutions.Length; i++)
+    for (int i = 0; i < agentParameters.agentCameras.Count; i++)
     {
-        textureArray[i] = new Texture2D(brain.brainParameters.cameraResolutions[i].width,
-            brain.brainParameters.cameraResolutions[i].height, TextureFormat.RGB24, false);
+        textureArray[i] = new Texture2D(1, 1, TextureFormat.RGB24, false);
     }
     id = gameObject.GetInstanceID();
     Academy academy = Object.FindObjectOfType<Academy>() as Academy;
