amago.envs.builtin.tmaze#
Long-term recall unit-test envs
Classes
- TMazeAlt – Changes the movement penalty to make the environment (much) more sample efficient.
- TMazeAltActive – TMaze environment that tests long-term recall and credit assignment.
- TMazeAltPassive – TMaze environment that unit-tests long-term recall.
- class TMazeAlt(episode_length=11, corridor_length=10, oracle_length=0, goal_reward=1.0, penalty=0.0, distract_reward=0.0, ambiguous_position=False, expose_goal=False, add_timestep=False)[source]#
Bases: _TMazeBase
Changes the movement penalty to make the environment (much) more sample efficient.
- step(action)[source]#
Run one timestep of the environment’s dynamics using the agent’s action.
When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment’s state for the next episode.
Changed in version 0.26: The Step API was changed, removing done in favor of terminated and truncated to make it clearer to users when the environment had terminated or truncated, which is critical for reinforcement learning bootstrapping algorithms.
- Parameters:
action (ActType) – an action provided by the agent to update the environment state.
- Returns:
- observation (ObsType) – An element of the environment’s observation_space as the next observation due to the agent’s action. An example is a numpy array containing the positions and velocities of the pole in CartPole.
- reward (SupportsFloat) – The reward as a result of taking the action, which can be positive or negative.
- terminated (bool) – Whether the agent reaches the terminal state (as defined under the MDP of the task). An example is reaching the goal state or moving into the lava in the Sutton and Barto Gridworld. If true, the user needs to call reset().
- truncated (bool) – Whether the truncation condition outside the scope of the MDP is satisfied. Typically this is a time limit, but it could also be used to indicate that the agent has physically gone out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().
- info (dict) – Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain metrics that describe the agent’s performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym < v26, it contains “TimeLimit.truncated” to distinguish truncation from termination; this is deprecated in favor of the terminated and truncated return values.
- done (bool) – (Deprecated) A boolean value indicating whether the episode has ended, in which case further step() calls will return undefined results. This was removed in OpenAI Gym v26 in favor of the terminated and truncated return values. A done signal could be emitted for different reasons: the task underlying the environment was solved successfully, a time limit was exceeded, or the physics simulation entered an invalid state.
- Return type:
tuple[ObsType, SupportsFloat, bool, bool, dict]
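As a quick illustration of the step() API above, here is a minimal interaction loop. This is a sketch, assuming TMazeAlt exposes the standard gymnasium reset()/step() interface documented here; the random policy and constructor values are for illustration only.

```python
from amago.envs.builtin.tmaze import TMazeAlt

# Sketch of one episode, assuming the standard gymnasium API described above.
env = TMazeAlt(episode_length=11, corridor_length=10)
obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, terminated, truncated, info = env.step(action)
# Once terminated or truncated is True, reset() must be called before stepping again.
obs, info = env.reset()
```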
- class TMazeAltActive(corridor_length=10, goal_reward=1.0, penalty=0.0, distract_reward=0.0)[source]#
Bases: TMazeAlt
TMaze environment that tests long-term recall and credit assignment.
Based on “When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment”, Ni et al., 2023.
This version modifies the original movement penalty.
- Parameters:
corridor_length (int) – The length of the corridor. This environment tests whether the agent can recall information across this many timesteps.
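For example, a minimal construction sketch (the values are hypothetical; the parameter names come from the signature above). A longer corridor_length stretches the recall horizon and makes the memory test harder:

```python
from amago.envs.builtin.tmaze import TMazeAltActive

# Hypothetical settings: a 50-step corridor means the agent must carry
# information for at least 50 timesteps before the rewarded decision.
env = TMazeAltActive(corridor_length=50, goal_reward=1.0, penalty=0.0)
obs, info = env.reset()
```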
- class TMazeAltPassive(corridor_length=10, goal_reward=1.0, penalty=0.0, distract_reward=0.0)[source]#
Bases: TMazeAlt
TMaze environment that unit-tests long-term recall.
Based on “When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment”, Ni et al., 2023.
This version modifies the original movement penalty.
- Parameters:
corridor_length (int) – The length of the corridor. This environment tests whether the agent can recall information across this many timesteps.
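A short sketch contrasting the two variants under the same (illustrative) settings; per the docstrings above, the passive version isolates long-term recall while the active version also tests credit assignment:

```python
from amago.envs.builtin.tmaze import TMazeAltActive, TMazeAltPassive

# Both variants share the constructor signature shown above.
passive = TMazeAltPassive(corridor_length=50)  # unit-tests recall only
active = TMazeAltActive(corridor_length=50)    # recall + credit assignment
```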