An image can be thought of as a matrix with a predefined width (W) and height (H), where each pixel is simply an array of length 3 (in the case of RGB), `[Red, Green, Blue]`, holding the color intensity of each channel at that pixel location. An image is thus a 3-dimensional matrix of size WxHx3. A Grid Observation can be thought of as a generalization of this setup: in place of a pixel there is a "cell", an array of length N representing different channel intensities at that cell position. From a Convolutional Neural Network point of view, multiple channels in an "image" are not a new concept; one example is the RGB-Depth image used in several robotics applications. What distinguishes Grid Observations is what the data within the channels represents. Instead of being limited to color intensities, the channels within a cell of a Grid Observation generalize to any data that can be represented by a single number (float or int).
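As a rough sketch (illustrative only, not ML-Agents API), an RGB image and a Grid Observation share the same layout and differ only in how many channels there are and what they mean:

```csharp
// Both structures are WxHxChannels arrays of scalars.
const int W = 84;
const int H = 84;

// RGB image: 3 channels per pixel, each a color intensity.
float[,,] image = new float[W, H, 3];

// Grid Observation: N channels per cell, each any value that can be
// represented by a single number, e.g. object presence, health, distance.
const int N = 6;
float[,,] grid = new float[W, H, N];
```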
Before jumping into the details of the Grid Sensor, it is important to note the agent's performance and qualitatively different behavior compared to raycasts. Unity ML-Agents comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/tree/release_10_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.
The Food Collector environment can be described as:
* Set-up: A multi-agent environment where agents compete to collect food.
One of the main pieces of feedback we receive is to illustrate more real game examples using ML-Agents. We are excited to provide an example implementation of Match-3 using ML-Agents, along with additional utilities to integrate ML-Agents with Match-3 games.
Our aim is to enable Match-3 teams to leverage ML-Agents to create player agents that learn to play different Match-3 levels. This implementation is intended as a starting point and guide for teams to get started (as there are many nuances to training ML-Agents for Match-3) and for us to iterate on the C# code, hyperparameters, and trainers to improve ML-Agents for Match-3.
This implementation includes:
* A C# implementation catered toward a Match-3 setup, including concepts around encoding moves based on [Human Like Playtesting with Deep Learning](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)
* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information on the Match-3 example is available [here](https://github.com/Unity-Technologies/ml-agents/tree/release_10_docs/docs/Learning-Environment-Examples.md#match-3).
### Feedback
If you are a Match-3 developer and are trying to leverage ML-Agents for this scenario, [we want to hear from you](https://forms.gle/TBsB9jc8WshgzViU9). We are also looking for interested Match-3 teams to speak with us for 45 minutes. If you are interested, please indicate that in the [form](https://forms.gle/TBsB9jc8WshgzViU9). If selected, we will provide gift cards as a token of appreciation.
### Interested in more game templates?
Do you have a type of game you would like to see supported in ML-Agents? If so, please post a thread on the [forum](https://forum.unity.com/forums/ml-agents.453/) with [GAME TEMPLATE] in the title.
## Getting started
The C# code for Match-3 lives inside the extensions package (com.unity.ml-agents.extensions). A good first step is to familiarize yourself with the extensions package by reading the document [here](com.unity.ml-agents.extensions.md). The second step is to look at how we have implemented the C# code in the example Match-3 scene (located under /Project/Assets/ML-Agents/Examples/Match3). Once you have some familiarity, the next step is to implement the C# code for Match-3 from the extensions package.
Additionally, see below for technical specifications of the C# code for Match-3. Please note the Match-3 game isn't human-playable as implemented and can only be played via training.
## Technical specifications for Match-3 with ML-Agents
We provide some utilities to integrate ML-Agents with Match-3 games.
### AbstractBoard class
The `AbstractBoard` is the bridge between ML-Agents and your game. It allows ML-Agents to
* ask your game what the "color" of a cell is
* ask whether the cell is a "special" piece type or not
* check whether a move is valid
* instruct your game to make a move
The `AbstractBoard` also tracks the number of rows, columns, and potential piece types that the board can have.
#### `public abstract int GetCellType(int row, int col)`
Returns the "color" of the piece at the given row and column.
#### `public abstract int GetSpecialType(int row, int col)`
Returns the special piece type at the given row and column.
#### `public abstract bool IsMoveValid(Move m)`
Returns whether the given `Move` is valid for the current state of the board.
#### `public abstract bool MakeMove(Move m)`
Instructs the game to make the given `Move`. Returns whether the move was actually made.
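To make the interface concrete, here is a minimal sketch of an `AbstractBoard` implementation backed by a plain 2D array. The class and field names are illustrative assumptions (only the four overridden methods come from the interface described above), and it assumes the board dimensions and piece-type counts are configured on the component as described earlier:

```csharp
using Unity.MLAgents.Extensions.Match3;

// Illustrative sketch: MyMatch3Board and m_Cells are hypothetical names,
// not part of the ML-Agents API.
public class MyMatch3Board : AbstractBoard
{
    // Backing state: m_Cells[row, col] holds the cell type of that position.
    int[,] m_Cells = new int[8, 8];

    public override int GetCellType(int row, int col)
    {
        return m_Cells[row, col];
    }

    public override int GetSpecialType(int row, int col)
    {
        // This sketch has no special pieces.
        return 0;
    }

    public override bool IsMoveValid(Move m)
    {
        // A real implementation would check whether swapping the two cells
        // produces a match, per your game's rules.
        return true;
    }

    public override bool MakeMove(Move m)
    {
        // A real implementation would perform the swap in the game state,
        // resolve any matches, and return false if the move could not be made.
        return true;
    }
}
```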
### Move struct
The `Move` struct encapsulates a swap of two adjacent cells. You can get the number of potential moves
for a board of a given size with `Move.NumPotentialMoves(NumRows, NumColumns)`. There are two helper
functions to create a new `Move`: `Move.FromMoveIndex()`, which can be used to iterate over all
potential moves for the board, and `Move.FromPositionAndDirection()`, which creates
a `Move` from a row, column, and direction (and board size).
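For example, a hedged sketch that iterates over every potential move on a board and counts the valid ones; the exact helper signatures are assumptions and should be double-checked against the extensions package source:

```csharp
using Unity.MLAgents.Extensions.Match3;

// Counts how many of the potential moves the board currently allows.
// Assumes FromMoveIndex takes (moveIndex, maxRows, maxCols).
int CountValidMoves(AbstractBoard board, int rows, int cols)
{
    var numValid = 0;
    for (var moveIndex = 0; moveIndex < Move.NumPotentialMoves(rows, cols); moveIndex++)
    {
        // Decode the flat move index into a swap of two adjacent cells.
        var move = Move.FromMoveIndex(moveIndex, rows, cols);
        if (board.IsMoveValid(move))
        {
            numValid++;
        }
    }
    return numValid;
}
```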
### `Match3Sensor` and `Match3SensorComponent` classes
The `Match3Sensor` generates observations about the board state using the `AbstractBoard` interface. You can
choose whether to use vector or "visual" observations; in theory, visual observations should perform
better because they are 2-dimensional like the board, but we need to experiment more on this.
### `Match3Actuator` and `Match3ActuatorComponent` classes
The `Match3Actuator` converts actions from training or inference into a `Move` that is sent to `AbstractBoard.MakeMove()`.
It also checks `AbstractBoard.IsMoveValid()` for each potential move and uses this to set the action mask for the Agent.
## Setting up Match-3 simulation
* Implement the `AbstractBoard` methods to integrate with your game.
* Attach a `Match3SensorComponent` and a `Match3ActuatorComponent` to your `Agent`'s GameObject.
* Give the `Agent` rewards when it does what you want it to (matches multiple pieces in a row, clears pieces of a certain type, etc.), as sketched below.
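A minimal sketch of the reward wiring, assuming your game code notifies the Agent after a move resolves; the class name and the `OnPiecesCleared` hook are hypothetical, only `Agent` and `AddReward` come from ML-Agents:

```csharp
using Unity.MLAgents;

// Hypothetical Agent subclass; attach it to the same GameObject as the
// Match3SensorComponent and Match3ActuatorComponent.
public class Match3PlayerAgent : Agent
{
    // Called from your game code after a move resolves (an illustrative hook,
    // not an ML-Agents callback).
    public void OnPiecesCleared(int numBasic, int numSpecial)
    {
        // Small reward per cleared piece; weight special pieces higher.
        AddReward(0.01f * numBasic + 0.05f * numSpecial);
    }
}
```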
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_10_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_10_docs/docs/Installation.md#advanced-local-installation-for-development-1)
instructions.
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_10_docs/README.md) contains links for contacting the team or getting support.
- Utilities were added to `com.unity.ml-agents.extensions` to make it easier to
  integrate with Match-3 games. See the [readme](https://github.com/Unity-Technologies/ml-agents/blob/master/com.unity.ml-agents.extensions/Documentation~/Match3.md) for more details.
"description":"Use state-of-the-art machine learning to create intelligent character behaviors in any Unity environment (games, robotics, film, etc.).",