Gym
OpenAI's library for creating and interacting with reinforcement learning environments, used by most RL implementations.
Gym seeks to provide standard environments, enabling better benchmarks to drive research in reinforcement learning.
An environment contains:
action_space: space object corresponding to valid actions
observation_space: space object corresponding to valid observations
reward_range: corresponding min and max possible rewards
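A minimal sketch of inspecting these attributes, assuming Gym is installed and using the classic CartPole-v1 environment:

```python
# A minimal sketch, assuming gym is installed and CartPole-v1 is available.
import gym

env = gym.make("CartPole-v1")

print(env.action_space)       # e.g. Discrete(2): push the cart left or right
print(env.observation_space)  # e.g. Box(4,): cart position/velocity, pole angle/velocity
print(env.reward_range)       # (min, max) possible reward; (-inf, inf) by default
```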
A step in an environment contains:
observation (object): pixel data, angles, velocities or board states
reward (float): amount of reward achieved by the previous action
done (boolean): whether the episode has ended and the environment should be reset
info (dict): diagnostic information (e.g. raw probabilities for the last state)
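A minimal sketch of one episode with a random policy, assuming the classic Gym step API that returns (observation, reward, done, info); newer Gym/Gymnasium versions split done into terminated and truncated:

```python
# Sketch of one episode using the classic (observation, reward, done, info) step API.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()
done = False
total_reward = 0.0

while not done:
    action = env.action_space.sample()                  # random policy for illustration
    observation, reward, done, info = env.step(action)  # advance one timestep
    total_reward += reward                               # accumulate episode return

print("episode return:", total_reward)
env.close()
```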
How to create a custom environment: https://gym.openai.com/docs/#environments
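A hedged sketch of a custom environment following that interface; GuessNumberEnv and its guess-the-number task are made up purely for illustration:

```python
# Hypothetical custom environment subclassing gym.Env (classic step API).
import gym
from gym import spaces
import numpy as np

class GuessNumberEnv(gym.Env):
    """Agent guesses a hidden integer in [0, 9]; reward 1 on a correct guess."""

    def __init__(self):
        self.action_space = spaces.Discrete(10)       # guesses 0..9
        self.observation_space = spaces.Discrete(10)  # last guess is the observation
        self._target = 0

    def reset(self):
        self._target = np.random.randint(10)
        return 0  # initial observation

    def step(self, action):
        done = action == self._target
        reward = 1.0 if done else 0.0
        return action, reward, done, {}  # observation, reward, done, info
```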
Keras-RL uses Gym-style environments (although the ones it includes are blank abstract classes).