* made BrainParameters a class to set default values
Modified the error message if the state is discrete
* Add discrete state support to PPO and provide discrete state example environment
* Add flexibility to continuous control as well
* Finish PPO flexible model generation implementation
* Fix formatting
* Support color observations
* Add best practices document
* bug fix for non-square observations
* Update Readme.md
* Remove scipy dependency
* Add installation doc
* added broadcast to the Player and Heuristic brains.
Allows the Python API to record actions taken along with the states and rewards
* removed the broadcast checkbox
Added a Handshake method for the communicator
The Academy will try to handshake regardless of the brains present
Player and Heuristic brains will send their information through the communicator but will not receive commands
* bug fix: the environment only requests actions from external brains when unique
* added a warning in case no brains are set to External
* fix on the instantiation of CoreBrains,
fix on the conversion of actions to arrays in the BrainInfo received from step
* default discrete action is now 0
bug fix for discrete broadcast action (the action size should be one in Agents.cs)
modified Tennis so that the default action is no action
modified TemplateDecision.cs to ensure non-null values are sent from Decide() and MakeMemory()
* minor fixes
* need to convert the s...
Summary of changes in setting up TF Sharp
1) Make sure to press "enter" after entering the "ENABLE_TENSORFLOW" flag.
2) Save the project.
3) Check to make sure the TF asset files are installed in the project.
Greatly simplified GridWorld code. It now also uses only a visual observation rather than a state vector, in order to demonstrate learning purely from visual input.
* Add support for stacking the past n states to allow the network to learn temporal dependencies (see the sketch after this list).
* Add Banana Collector environment for demonstrating partially observable multi-agent environments.
* Add 3DBall Hard which lacks velocity information in state representation. Used as test for LSTM and state-stacking features.
* Rework Tennis environment to be continuous control and trainable in 100k steps.
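As a rough illustration of the state-stacking feature above, here is a minimal NumPy version; the class name and interface are illustrative, not the trainer's actual implementation:

```python
from collections import deque
import numpy as np

class StateStacker:
    """Keep the last `num_stacked` vector observations and feed their
    concatenation to the network so it can learn temporal dependencies."""
    def __init__(self, obs_size: int, num_stacked: int):
        self.buffer = deque([np.zeros(obs_size, dtype=np.float32)] * num_stacked,
                            maxlen=num_stacked)

    def add(self, obs) -> np.ndarray:
        self.buffer.append(np.asarray(obs, dtype=np.float32))
        # Concatenated [obs_{t-n+1}, ..., obs_t], length obs_size * num_stacked
        return np.concatenate(list(self.buffer))

stacker = StateStacker(obs_size=8, num_stacked=3)
stacked_obs = stacker.add(np.random.rand(8))  # shape: (24,)
```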
* [Semantics] Modified the semantics for the documentation
* [Semantics] Updated the images
* [Semantics] Made further changes to the docs based on the comments received
- Mostly ensures consistency with our other guides, in addition to including some more detail.
- Added an image to showcase the Linux Build Support for Unity.
- Updated the Installation guide to reference the Linux Build Support component.
* [Documentation] Added the On Demand Decision documentation.
* [Fixes] Corrected grammar mistakes
* [Documentation] Added a description of what kinds of games ODD is useful for
* [Documentation] Added the LSTM documentation
* [Documentation] Fix the line breaks
* [Documentation] Modified the doc given feedback
* [Documentation] Improvements based on PR comments
* [Documentation] Removed reference to PPO and BC
* [New Bouncer] Revamped the Bouncer to be in 3D
* [Bouncer Configuration file] Added the BouncerBrain configuration
* [Documentation] Added the Bouncer to the documentation page
* [Fixes] Fixed overly long lines and a documentation typo
* Slight adjustments to bouncer environment
* Don't default to internal brain on bouncer
- Incorporated feedback provided offline
- Fixed capitalizations of Agent/agent
- Re-organized trainers and features sections (renamed files accordingly)
- Change Agent Editor (code) to ODD feature
- Added a summary and next steps section
- Cleaned up text
- Renamed file
- Updated ML-Agents-Overview to point to new file
- Updated figure to showcase the new “On Demand Decisions” checkbox text
- For docker run commands, ensured each flag is on its own line (for readability)
- Standardized capitalization for “Docker” and “image”
- Removed lingering empty line
- Minor rewordings
* [Documentation] Added description on how to add visual observations
* [Documentation] Forgot a paragraph
* [Documentation] Addressed comments
* [Documentation] Addressed comments, again
* Minor changes to ensure a common visual language.
* Agents are blue (or additionally red in competitive scenarios).
* Interactable objects are orange.
* Goals are green when they are objects, and checkerboard-patterned when they are places.
* Not everything perfectly follows this, but things are mostly consistent now.
* Renamed "Banana" folder to "BananaCollectors"
* Ensured all brains were set to "Player"
* Moved non-shared assets out of the "SharedAssets" folder.
There were some important things that should have been mentioned in this tutorial, and it took me a while to figure them out. Most importantly, it was never mentioned how to properly end a training session in the Anaconda prompt to receive an exported .bytes file.
I added more comments explaining why Python 3.5 and TensorFlow 1.4.0 are used, plus comments and links for Visual Studio 2015.
I removed the docopt installation step and the part about the folders:
having to paste the “lib” folder from C:\Program Files (x86)\Microsoft Visual Studio 14.0\lib
into C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\PlatformSDK\lib,
because that might have been a problem only for me. Thanks,
Fixes the following issues:
* Missing component reference in BananaRL environment.
* Neural Network for multiple visual observations was not properly generated.
* Episode time-out value estimate bootstrapping used an incorrect observation as input.
* [CoreBrain] Bug fix in the internal brain
Discrete vector observations did not have the right size
* [Docs] Removed all references to the unitypackages other than the TensorFlowSharp.unitypackage
* [Basic] Updated the bytes file of basic
* [Docs] Addressed comments
* [Docs] Re-addressed the comments
* [Bug Fix] Scaling the visual input between 0 and 1
* [Comments] Added comments to the BatchVisualObservations method of the CoreInternalBrain.
* [Renaming] Renamed BlackAndWhite to blackAndWhite
This PR makes the following changes:
* Moves clipping of continuous control model into model itself. Output is now always [-1, 1].
* Internal model values are now clipped between [-3, 3] before being rescaled to [-1, 1] for output.
* This improves training performance by providing a wider range of values within which the pdf of the Gaussian can fall. Output of [-1, 1] is used to be more environment-creator friendly.
* Fixes issue where epsilon was erroneously being used to reconstruct old probabilities during PPO update, leading to reduced learning performance.
* Introduce a ScaleAction() function within Python to easily rescale values from [-1, 1] to an arbitrary range (see the sketch below).
* Re-train all CC models using improved algorithm. All performance levels are equal or improved. In the case of Crawler, improvement is drastic.
* Update documentation appropriately.
* Made miscellaneous minor code style and optimization improvements within environments.
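To make the clipping and rescaling above concrete, a minimal sketch (the function names and wiring are illustrative of the behavior described, not the exact library code):

```python
import numpy as np

def clip_model_output(raw_action):
    """Internal model values are clipped to [-3, 3], then rescaled to [-1, 1]."""
    return np.clip(raw_action, -3.0, 3.0) / 3.0

def scale_action(action, low, high):
    """Rescale an action from [-1, 1] to an arbitrary [low, high] range,
    in the spirit of the ScaleAction() helper mentioned above."""
    return low + 0.5 * (action + 1.0) * (high - low)

# A raw model output of 1.5 becomes 0.5 after clipping/rescaling,
# and maps to 7.5 in an environment expecting actions in [0, 10].
scaled = scale_action(clip_model_output(np.array([1.5])), 0.0, 10.0)
```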
* First draft of Azure support docs
* Correcting links to other docs
* Adding additional links and cleaning instructions
* Adding references to Azure docs in other appropriate places
- Indent the section about providing actions to multiple brains to be in line with the rest of the step() docs.
- Move the line about what step() returns closer to the top of the docs so it's harder to overlook.
- Add a small code snippet about how to get BrainInfo belonging to a specific brain and how to get data from that BrainInfo object.
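A minimal example of the kind of snippet described above, assuming the Python API of this era (the import path and field names are typical but should be checked against your installed version):

```python
from mlagents.envs import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")      # path to a built environment
all_brain_info = env.reset(train_mode=True)     # dict: brain name -> BrainInfo

brain_name = env.external_brain_names[0]
brain_info = all_brain_info[brain_name]         # BrainInfo for that specific brain

observations = brain_info.vector_observations   # shape: (num_agents, obs_size)
rewards = brain_info.rewards                    # one reward per agent
dones = brain_info.local_done                   # one done flag per agent

env.close()
```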
* [Refactor] Fixed line indentation
* Removed the library Newtonsoft.Json from the monitor
* Replaced calls to JSON conversion with manual conversion
* [Modified] The Monitor now has multiple Log methods that take different object types
* some random change so that I can create this PR
* docs update for TensorFlowSharp new version
* changed the links to the new unitypackage file
* resolved conflicts, updated the pictures for CUDA 9.0
* fixed a typo
* resolved Arthur's comment
* blurred the usernames
* modified the AWS doc
* resolved Vince's comment
* Adds implementation of Curiosity-driven Exploration by Self-supervised Prediction (https://arxiv.org/abs/1705.05363) to PPO trainer.
* To enable, set the use_curiosity flag to true in the hyperparameter file (see the sketch below).
* Includes refactor of unitytrainers model code to accommodate new feature.
* Adds new Pyramids environment (w/ documentation). Environment contains sparse reward, and can only be solved using PPO+Curiosity.
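For intuition, a toy sketch of the curiosity signal from the paper above: the intrinsic reward is the forward model's prediction error in an encoded feature space (names and default scaling are illustrative, not the trainer's exact code):

```python
import numpy as np

def curiosity_reward(encoded_next_state, predicted_next_state, strength=0.01):
    """Intrinsic reward = scaled squared error between the forward model's
    prediction of the next encoded state and the actual encoded next state."""
    error = predicted_next_state - encoded_next_state
    return strength * 0.5 * np.sum(np.square(error), axis=-1)

# The policy is then trained on: total_reward = extrinsic_reward + curiosity_reward
```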
* Revamps agent code for walker and crawler environments to use shared JointDriveController system.
* Crawler has been reworked to be very cute.
* Crawler & Walker environments have been reworked to be visually consistent.
* Added Dynamic Crawler scene.
* All scenes re-trained and new models added.
* Documentation changes.
* Added missing declaration to docs sample code.
* Added pretrained model as default graph in Internal brain of Tennis scene
* Disabled PlayerBrain in Tennis by default.
* Removed accidental config.
* Update Feature-Monitor.md
Adds details on 1) how to activate the monitor and 2) how to use a target transform to display the information.
* Update Feature-Monitor.md
Updated with suggestions from reviewer.
- Dockerfile pulls in the mlagents directory now.
- Installs mlagents package locally with `pip install .`.
- Clients should now place trainer configs in unity-volume.
* [Initial Commit]
Modified the model.py file and the ppo/trainer.py file to use masked actions
* Preliminary modifications to the python side of the code to enable action masking
* Preliminary modifications to the C# side of the code to enable action masking
* Preliminary modifications to the communication side of the code to enable action masking
* Implemented action masking for BC
Note: the actions of the teacher are not masked
* More error messages for the action masking
* fix pytests
* Added Documentation
* Address comment
* Addressed Comments on docs
* Addressed second comment on docs
* Addressed comments for the python side of the code
* Created the action masker and associated unit tests
* Addressed comments on the C# side
* Addressed the comment regarding action_masking_name
* Addressed the comments
- Highlighting Python snippet in FAQ.
- Fixing import in Python API doc.
- ml-agents-protobuf now automatically places generated files in the correct directory.
* GridWorld now uses action masking
* Addressed the comments
* addressed comments
* Added checkbox to turn action masking on/off (#1146)
* Added checkbox to turn action masking on/off
* Fix to handle the no-action option
* Added comment to GridWorld mentioning the use of action masking. (#1153)
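For context, a minimal sketch of how a discrete-action mask is typically applied to the policy's logits (illustrative only, not the exact trainer or C# implementation):

```python
import numpy as np

def masked_action_probabilities(logits, action_mask):
    """action_mask[i] is True when action i is allowed. Disallowed actions get a
    very large negative logit so their post-softmax probability is ~0."""
    masked_logits = np.where(action_mask, logits, -1e8)
    exp = np.exp(masked_logits - np.max(masked_logits))
    return exp / np.sum(exp)

# e.g. a GridWorld agent on the left edge could mask out the "move left" action
probs = masked_action_probabilities(np.array([0.1, 0.4, -0.2, 0.3]),
                                    np.array([True, True, False, True]))
```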
* Fixing learn.py, trainer_controller.py, and Docker
- learn.py has been moved under trainers.
- this was a two-line change
- learn.py will no longer be run as a main method
- docopt arguments are strings by default. learn.py now uses
this assumption to correctly parse arguments.
- trainer_controller.py now considers the Docker volume when
accepting a trainer config file path.
- the Docker container now uses mlagents-learn.
* Removing extraneous unity-volume ref.
* Make project version 2017.4
* updated the documentation
* added the upgrade notes for 2017.1 to 2017.4
* removed the .10f1
* fix the typo and make the language nicer
* resolved the comments
* Wrapping lines.
* Wording.
* resolved part of Jeff's comment
* resolved part of Jeff's comment
* fixed the link
* Update FAQ.md
Missing "an".
* Missing "an".
The calculation of the observation vectors is faulty. The old calculation does not reflect the distances to the edges, and it does not keep results within [-1, 1]. Since the distance calculation would have been difficult to do in one line, I replaced it with the relative position of the ball (using only two vectors instead of four). I ran 500K-step training runs before and after the change and got enormously improved results. Contact me for TensorBoard screenshots, or just use the debugger and do the math.
* Initial Commit
Ported most functionality; still to do:
- Documentation
- Add Comments
- Custom drawer for BrainParameters
- Fix the UnitTests
- Review Functionalities
* Added Custom Drawer for the Brain Parameters
* Improvements to the HubDrawer
* Modified the Brain Editors
* Minor bug fixes and UI changes
* Modified the Help Boxes of the Drawers
* Modified Brain class, renamed Initialize and made DecideAction virtual
* Fix the UnityTests
* Simpler Brain creation menu
* Renamed Internal Brain to Learning Brain
* modified the parameters to remove reference to External or Internal in the Protobuf objects
* Updated the protobuf generated files
* Fix the Pytests
* Removed the graph scope from the Learning Brain
* cleaner logic than try catch
* Removed the isExternal field of the brain and put the isTraining logic into LearningBrain and Training Hub
* Modified how the Brain finds the A...
* pull/1294 from has-taiar
* removed the left bracket
* moved the Windows link position
* update the Windows doc
* resolved the comments, changed `pip install .` to `pip install -e .`, added the package explanation to the Windows installation doc
* Resolved the comments
* add the 'the'
* added an FAQ to the AWS doc
* added the link
* added some FAQ entries and updated the temp AMI ID
* resolved the comments, updated one of the FAQ entries along with the scriptable object update
* added one other cause raised in issues
* fixed line change
* Fix Typo #1323
* First update to the docs
* Addressed comments
* remove references to TF#
* Replaced the references to TF# with new document.
* Edited the FAQ
The check for whether an agent has fallen off the platform was using a wrong value of 1 instead of 0.
This meant that the agent immediately started in a falling state and entered a thrashing cycle of resetting itself.
* Documentation Update
* addressed comments
* new images for the recorder
* Improvements to the docs
* Address the comments
* Core_ML typo
* Updated the links to inference repo
* Put back Inference-Engine.md
* fix typos: brain
* Re-add deleted file
* fix typos
* Addressed comments
* Simplified rewards and observations; Determined better settings for training within a reasonable amount of time.
* Simplified Agent rewards; Added training section that discusses hyperparameters.
* Added note about DecisionFrequency.
* Updated screenshots and a small clarification in the text.
* Tested and updated using v0.6.
* Update a couple of images, minor text edit.
* Replace with more recent training stats.
* resolve a couple of minor review comments.
* Increased the recommended batch and buffer size hyperparameter values.
* Fix 2 typos.
* Wording and filepath changes to tutorials
* Retake editor images to match v0.6
Retake editor images so that the filepaths and Brain names match what they actually are.
* Add blurb about using the --load flag in the intro guide, and typo fix.
* Add section in tutorial to create multiple area learning environment.
* Add mention of Done() method in agent design
* Add option to set gym visual observation to uint8 (see the usage sketch after this list)
* Add option to flatten branched discrete actions
* Add game_over variable to gym wrapper
* Add guide on how to use Dopamine with the gym wrapper and comparisons with Baselines and PPO
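A usage sketch of the gym wrapper options listed above; the constructor arguments shown (use_visual, uint8_visual, flatten_branched) and the game_over attribute reflect my reading of the gym_unity wrapper, so verify them against the gym_unity docs for your release:

```python
from gym_unity.envs import UnityEnv

# "./envs/GridWorld" is a hypothetical path to a built Unity environment.
env = UnityEnv("./envs/GridWorld",
               worker_id=0,
               use_visual=True,        # use the camera observation
               uint8_visual=True,      # return uint8 pixels instead of floats
               flatten_branched=True)  # flatten branched discrete actions into one Discrete space

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(env.game_over)  # True once the episode has ended
env.close()
```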
* Switched default Mac GFX API to Metal
* Added Barracuda pre-0.1.5
* Added basic integration with Barracuda Inference Engine
* Use predefined outputs the same way as for TF engine
* Fixed discrete action + LSTM support
* Switch Unity Mac Editor to Metal GFX API
* Fixed null model handling
* All examples converted to support Barracuda
* Added model conversion from Tensorflow to Barracuda
copied the barracuda.py file to ml-agents/mlagents/trainers
copied the tensorflow_to_barracuda.py file to ml-agents/mlagents/trainers
modified the tensorflow_to_barracuda.py file so it could be called from mlagents
modified ml-agents/mlagents/trainers/policy.py to convert the TF models to Barracuda-compatible .bytes files
* Added missing iOS BLAS plugin
* Added forgotten prefab changes
* Removed GLCore GFX backend for Mac, because it doesn't support Compute shaders
* Exposed GPU support for LearningBrain inference
...
* added the pypiwin32 package
* fixed the break on mac, fixed part of pytest above version 4
* added something to the Windows doc to help people get unstuck
* resolved the comment
* Added RenderTexture support for visual observations
* Cleaned up new ObservationToTexture function
* Added check for the width/height of the RenderTexture
* Added check to hide HelpBox unless both cameras and RenderTextures are used
* Added documentation for Visual Observations using RenderTextures
* Added GridWorldRenderTexture Example scene
* Adjusted image size of doc images
* Added GridWorld example reference
* Fixed missing reference in the GridWorldRenderTexture scene and resaved the agent prefab
* Fix prefab instantiation and render timing in GridWorldRenderTexture
* Added screenshot and reworded documentation
* Unchecked control box
* Rename renderTexture
* Make RenderTexture scene default for GridWorld
Co-authored-by: Mads Johansen <pyjamads@gmail.com>
We need to document the meaning of the two new flags added for
multi-environment training. We may also want to add more specific
instructions for people wanting to speed up training in the future.
* update title caps
* Rename Custom-Protos.md to Creating-Custom-Protobuf-Messages.md
* Updated with custom protobuf messages
* Cleanup to match our doc guidelines
* Minor text revision
* Create Training-Concurrent-Unity-Instances
* Rename Training-Concurrent-Unity-Instances to Training-Concurrent-Unity-Instances.md
* update to the right format for --num-envs
* added link to concurrent unity instances
* Update and rename Training-Concurrent-Unity-Instances.md to Training-Using-Concurrent-Unity-Instances.md
* Added considerations section
* Update Training-Using-Concurrent-Unity-Instances.md
* cleaned up language to match doc
* minor updates
* retroactive migration from 0.6 to 0.7
* Updated from 0.7 to 0.8 migration
* Minor typo
* minor fix
* accidentally duplicated step
* updated with new features list
* Update Learning-Environment-Create-New.md
Section: Final Editor Setup - Step 3. It says:
Drag the Brain RollerBallPlayer from the Project window to the RollerAgent Brain field.
Should say:
Drag the Brain RollerBallBrain from the Project window to the RollerAgent Brain field.
* Develop black format fix (#1998)
* fixed the format
* changed the circleci config
* [Gym] Added no_graphics argument (#1997)
> Added the no_graphics argument to the gym interface. #1413
* [Documentation] SetReward method (#1996)
Added a paragraph in the docs/Learning-Environment-Design-Agents.md document regarding the use of SetReward and how it is different from AddReward
* [Documentation] Added information for the environments the trainer cannot train with the default configurations (#1995)
* Format gym_unity using black
* Add GetTotalStepCount to the Academy
This will allow the RecordVideos plugin to record based on the current academy step
* fixup! Add GetTotalStepCount to the Academy
* Add the video recorder to the documentation
* Update Learning-Environment-Create-New.md
- Clarify that training is done in the original ml-agents project folder
- Remove typo
- In the future it could help to show the user that they can copy the config folder and run training in a new project folder so they don't have to mix project settings in the original config folder
* Update Learning-Environment-Create-New.md
Add file paths
- Fix re-install directions to include the -e modifier
- Move re-install directions from creating-custom... to protobuf readme
- Add how to see confirmation that install worked
* Create new class (RewardSignal) that represents a reward signal.
* Add value heads for each reward signal in the PPO model.
* Make summaries agnostic to the type of reward signals, and log weighted rewards per reward signal.
* Move extrinsic and curiosity rewards into this new structure.
* Allow defining multiple reward signals in YAML file. Add documentation for this new structure.
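A rough sketch of the structure described above (class and method names here are illustrative, not the exact trainer code): each reward signal produces its own reward stream with its own strength and discount, and the model keeps one value head per signal.

```python
import abc
import numpy as np

class RewardSignal(abc.ABC):
    def __init__(self, strength: float, gamma: float):
        self.strength = strength   # weight applied to this signal's rewards
        self.gamma = gamma         # discount used by this signal's value head

    @abc.abstractmethod
    def evaluate(self, batch) -> np.ndarray:
        """Return one reward per transition in the batch for this signal."""

class ExtrinsicRewardSignal(RewardSignal):
    def evaluate(self, batch):
        # the environment reward is passed through, scaled by strength
        return self.strength * np.asarray(batch["environment_rewards"])

# The trainer combines the weighted streams, e.g.:
# total_rewards = sum(signal.evaluate(batch) for signal in reward_signals)
```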
* Using-Docker.md is missing a backslash in the 3DBall command
Hi,
Just a quick edit because a backslash seems to be missing from the 3DBall command example.
* Added interactive options and Tensorboard documentation for Docker training
Based on the new reward signals architecture, add BC pretrainer and GAIL for PPO. Main changes:
- A new GAILRewardSignal and GAILModel for GAIL/VAIL
- A BCModule component (not a reward signal) to do pretraining during RL
- Documentation for both of these
- Change to Demo Loader that lets you load multiple demo files in a folder
- Example Demo files for all of our tested sample environments (for future regression testing)
* Don't 0 value bootstrap for GAIL and Curiosity
* Add gradient penalties to GAN to help with stability
* Add gail_config.yaml with GAIL examples
* Cleaned up trainer_config.yaml and unnecessary gammas
* Documentation updates
* Code cleanup
* Add Sampler and SamplerManager
* Enable resampling of reset parameters during training
* Documentation for Sampler and an example YAML configuration file (a sketch follows this list)
* Fix naming conventions for consistency
* Add generalization link to ML-Agents Overview
* Add generalization to main Readme
* Include types of samplers available for use
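An illustrative sketch of the sampler idea referenced above (class names are assumptions, not the exact library types): each reset parameter gets a sampler, and a manager draws a fresh set of values at every resampling interval.

```python
import numpy as np

class UniformSampler:
    def __init__(self, min_value: float, max_value: float):
        self.min_value, self.max_value = min_value, max_value

    def sample(self) -> float:
        return float(np.random.uniform(self.min_value, self.max_value))

class SamplerManager:
    """Maps each reset parameter to a sampler and draws new values on demand."""
    def __init__(self, samplers: dict):
        self.samplers = samplers

    def sample_all(self) -> dict:
        return {name: s.sample() for name, s in self.samplers.items()}

# e.g. vary 3DBall's scale and gravity during training for better generalization
manager = SamplerManager({"scale": UniformSampler(0.5, 5.0),
                          "gravity": UniformSampler(7.0, 12.0)})
reset_parameters = manager.sample_all()
```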
* add kor ver of README.md and empty docs, images
* add Installation.md translated to korean
* Fixed main readme docs and moved all the English documents into the docs folder
* modify contents of 'Installation.md' and add kr version 'Installation-Windows.md'(not completed) with related image
* completed 1st translation of 'Installation-Windows.md' and added related images for korean docs
* add kr version 'Using-Docker.md'(not completed)
* translate Training-PPO.md to Korean
* Change word about epsilon in Training-PPO.md
* Fix Training PPO about epsilon
* completed korean translation of 'Using-Docker.md'
* Training Imitation Learning translation to Korean is finished! Information about the translators is also added
* modified all 'blogs.unity3d.com/' to 'blogs.unity3d.com/kr'
* removed all non-translated doc
* add translator information
* Included explicit version # for ZN
* added explicit version for KR docs
* minor fix in installation doc
* Consistency with numbers for reset parameters
* Removed extra verbiage. minor consistency
* minor consistency
* Cleaned up IL language
* moved parameter sampling above in list
* Cleaned up language in Env Parameter sampling
* Cleaned up migrating content
* updated consistency of Reset Parameter Sampling
* Rename Training-Generalization-Learning.md to Training-Generalization-Reinforcement-Learning-Agents.md
* Updated doc link for generalization
* Rename Training-Generalization-Reinforcement-Learning-Agents.md to Training-Generalized-Reinforcement-Learning-Agents.md
* Re-wrote the intro paragraph for generalization
* add titles, cleaned up language for reset params
* Update Training-Generalized-Reinforcement-Learning-Agents.md
* cleanup of generalization doc
* More cleanu...
* Add Soft Actor-Critic model, trainer, and policy and sac_trainer_config.yaml
* Add documentation for SAC and tweak PPO documentation to reference the new pages.
* Add tests for SAC, change simple_rl test to run both PPO and SAC.
* check using xargs
* fix broken BC link
* install npm, run precommit before unit tests
* try to install npm
* try a node image build
* add workflow
* don't use precommit on node run
* sudo make me a sandwich
* pass config arg
* revert CI order change
* retry precommit
* sudo apt-get
* sudo npm
* make sure fails on bad link
* cleanup and refix link
* relax versions, add python 3.7 to CI
* add workflows
* try parameterized circleci build, disable slow test
* fix workflow
* fix (?) pyversion
* set job name, fix pip freeze output
* test_requirements.txt
* fix install
* fix paths (again) - should use pushd popd instead
* use pushd and popd
* sort deps, restore unit test, cleanup CI
* relax versions more
* clean up versions in docs
* test older libs for 3.6, newer for 3.7
* pip: progress bar off
* fix gym-unity pip install
* try cat'ing setups for checksum
* don't use fallback (temporarily)
* don't turn off progress bar before upgrading pip
* PR feedback
* add parameter descriptions in CI config
* Initial Commit
* Remove the Academy Done flag from the protobuf definitions
* remove global_done in the environment
* Removed irrelevant unitTests
* Remove the max_step from the Academy inspector
* Removed global_done from the python scripts
* Modified and removed some tests
* This actually does not break either curriculum or generalization training
* Replace global_done with reserved.
Addressing Chris Elion's comment regarding the deprecation of the global_done field. We will use a reserved field to make sure global_done does not get replaced in the future, causing errors.
* Removed unused fake brain
* Tested that the first call to step was the same as a reset call
* black formatting
* Added documentation changes
* Editing the migrating doc
* Addressing comments on the Migrating doc
* Addressing comments :
- Removing dead code
- Resolving forgotten merged conflicts
- Editing documentations...
* new env styles rebased on develop
* added new trained models
* renamed food collector platforms
* reduce training timescale on WallJump from 100 to 10
* uncheck academy control on walljump
* new banner image
* rename banner file
* new example env images
* add foodCollector image
* change Banana to FoodCollector and update image
* change bouncer description to include green cube
* update image
* update gridworld image
* cleanup prefab names and tags
* updated soccer env to reference purple agent instead of red
* remove unused mats
* rename files
* remove more unused tags
* update image
* change platform to agent cube
* update text. change platform to agent's head
* cleanup
* cleaned up weird unused meta files
* add new wall jump nn files and rename a prefab
* walker change stacked states from 5 to 1
walker collects physics observations so stacked states are not need...
* Feature Deprecation : Online Behavioral Cloning
In this PR :
- Delete the online_bc_trainer
- Delete the tests for online bc
- delete the configuration file for online bc training
* Deleting the BCTeacherHelper.cs Script
TODO :
- Remove usages in the scene
- Documentation Edits
*DO NOT MERGE*
* IMPORTANT : REMOVED ALL IL SCENES
- Removed all the IL scenes from the Examples folder
* Removed all mentions of online BC training in the Documentation
* Made a note in the Migrating.md doc about the removal of the Online BC feature.
* Modified the Academy UI to remove the control checkbox and replaced it with a train in the editor checkbox
* Removed the Broadcast functionality from the non-Learning brains
* Bug fix
* Note that the scenes are broken since the BroadcastHub has changed
* Modified the LL-API for Python to remove the broadcasting functionality.
* All unit tests are running
* Modified the scen...