Reinforcement Learning (RL) is a framework and collection of approaches for creating agents that can interact with environments to achieve goals. Some natural questions to ask are:

  • What is an environment?
  • What is an agent?
  • What does it mean for an agent to interact with an environment?
  • What is a goal?
  • In what ways can we make an agent achieve a goal by interacting with an environment?

We will answer all these questions, starting by defining what an environment is.

The environment is modeled as a Markov Decision Process (MDP)

We model an environment as a Markov Decision Process (MDP), which is defined as a 5-tuple $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where

  • $\mathcal{S}$ is the set of states.
  • $\mathcal{A}$ is the set of actions.
  • $P(s' \mid s, a)$ is the transition dynamics, and defines the (conditional) probability of transitioning to a state $s'$ conditioned on executing action $a$ from state $s$.
  • $R(r \mid s, a)$ is the reward function, and defines the (conditional) probability of receiving a scalar reward $r$ for taking action $a$ from state $s$.
    • When the reward is a deterministic function of the state and action, you may see the reward function instead written not as a conditional probability, but as a function $R(s, a)$ that maps states and actions directly to a scalar. Our probabilistic formulation generalizes this setting, and makes certain future definitions clearer, as will be seen shortly.
  • $\gamma \in [0, 1]$ is the discount factor, and it determines how much less valuable rewards become when they are received further in the future. (This will be explained in more detail shortly.) A concrete sketch of the full 5-tuple in code follows below.

To understand why an MDP is a useful model of an environment, let's define what an agent is (which is the entity that interacts with the environment).
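To make the 5-tuple concrete before moving on, here is a minimal sketch of a tabular MDP in Python. Everything in it (the two states, two actions, transition table `P`, Bernoulli reward probabilities `reward_prob`, and the value of `gamma`) is a made-up example, not something specified by the text above.

```python
import numpy as np

# Hypothetical two-state, two-action MDP; all numbers are made up for illustration.
n_states, n_actions = 2, 2

# P[s, a, s'] = probability of transitioning to state s' after taking action a in state s.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0 under actions 0 and 1
    [[0.5, 0.5], [0.1, 0.9]],   # transitions from state 1 under actions 0 and 1
])

# Stochastic reward function R(r | s, a): the reward is 1.0 with probability
# reward_prob[s, a] and 0.0 otherwise, keeping the probabilistic formulation.
reward_prob = np.array([
    [0.1, 0.8],   # P(r = 1 | s = 0, a = 0 or 1)
    [0.3, 0.5],   # P(r = 1 | s = 1, a = 0 or 1)
])

gamma = 0.99  # discount factor

def sample_next_state(rng, s, a):
    """Sample s' ~ P(. | s, a)."""
    return rng.choice(n_states, p=P[s, a])

def sample_reward(rng, s, a):
    """Sample r ~ R(. | s, a) (a Bernoulli reward in this sketch)."""
    return float(rng.random() < reward_prob[s, a])
```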

An agent is modeled as a policy

A policy $\pi(a \mid s)$ is the (conditional) probability of an action $a$ from a state $s$.

  • Defining the policy to be deterministic (i.e., the agent's behavior isn't stochastic as a function of state) is just a special case of our probabilistic definition.

Intuitively, an agent's behavior is determined by how it acts in different situations. A policy determines how an agent acts in different situations, because we can sample actions $a \sim \pi(\cdot \mid s)$ from the policy given a state $s$. A small sketch of a tabular policy in code follows below.
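Continuing the hypothetical sketch above, a stochastic tabular policy is just a row-stochastic matrix we can sample actions from; the matrix `pi` below is made up for illustration, and a deterministic policy is the special case where each row is one-hot.

```python
import numpy as np

# pi[s, a] = probability of taking action a in state s (each row sums to 1).
pi = np.array([
    [0.7, 0.3],   # behavior in state 0
    [0.4, 0.6],   # behavior in state 1
])

def sample_action(rng, s):
    """Sample a ~ pi(. | s)."""
    return rng.choice(len(pi[s]), p=pi[s])

# A deterministic policy is the special case where each row is one-hot, e.g.
# np.eye(2)[[1, 0]] always picks action 1 in state 0 and action 0 in state 1.
```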

Policies generate trajectories by interacting with environment

When an agent (policy) interacts with an environment (MDP) for $T$ timesteps, it generates a trajectory $\tau$. More formally, we define a trajectory as a sequence of steps, where each step contains a state, action, and reward:

$$\tau = (s_0, a_0, r_1, s_1, a_1, r_2, \ldots, s_{T-1}, a_{T-1}, r_T, s_T)$$

  • This definition of a trajectory (where it ends with a state) is somewhat arbitrary; it is just consistent with our following definitions. Note that our definition of a trajectory has $T + 1$ states in it, but only $T$ actions and rewards. Here is why: we can think of the environment at time $t = 0$ as being in some start state $s_0$. The agent will then sample an action $a_0 \sim \pi(\cdot \mid s_0)$, and the environment will produce a reward $r_1$ and next state $s_1$, and we call this a step. So each step generates an action, reward, and (next) state, and so if we take $T$ steps, and we started with an initial state, we get $T + 1$ states and $T$ actions and $T$ rewards. A sketch of this interaction loop in code follows below.
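Putting the hypothetical sketches together, the interaction loop below (using the made-up `sample_action`, `sample_reward`, and `sample_next_state` helpers defined earlier) generates a trajectory and reproduces the counting above: $T + 1$ states, $T$ actions, and $T$ rewards.

```python
import numpy as np

def rollout(rng, s0, T):
    """Roll out the policy in the MDP for T steps, starting from state s0.

    Returns lists of T + 1 states, T actions, and T rewards, i.e. the trajectory
    (s_0, a_0, r_1, s_1, a_1, r_2, ..., a_{T-1}, r_T, s_T).
    """
    states, actions, rewards = [s0], [], []
    s = s0
    for _ in range(T):
        a = sample_action(rng, s)           # a_t ~ pi(. | s_t)
        r = sample_reward(rng, s, a)        # r_{t+1} ~ R(. | s_t, a_t)
        s = sample_next_state(rng, s, a)    # s_{t+1} ~ P(. | s_t, a_t)
        actions.append(a)
        rewards.append(r)
        states.append(s)
    return states, actions, rewards

rng = np.random.default_rng(0)
states, actions, rewards = rollout(rng, s0=0, T=5)
```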

Trajectories have a cumulative discounted reward

To formalize the idea that some trajectories are “better”, we want to be able to associate a scalar number with each trajectory, and “better” trajectories should be assigned higher numbers. The best trajectory has the biggest number! We formalize this by defining a function $G(\tau)$ that maps a trajectory to a single number representing the cumulative discounted rewards:

$$G(\tau) = \sum_{t=0}^{T-1} \gamma^{t} \, r_{t+1}$$

  • When the discount factor $\gamma = 1$, this is just the cumulative reward of the trajectory. Since $\gamma$ is between $0$ and $1$ and $\gamma^t$ is exponential in time, we are (exponentially) discounting future rewards, which is why we call $G(\tau)$ the cumulative discounted rewards. A small code sketch of this computation follows below.

Cumulative discounted rewards are crucial for defining what we mean by goals. Although trajectories that make $G(\tau)$ go up high are “good”, we also have to account for how likely the trajectory is.
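The cumulative discounted reward of a trajectory is then a short loop over its rewards; this sketch reuses the hypothetical `rewards` list and `gamma` from the examples above.

```python
def discounted_return(rewards, gamma):
    """G(tau) = sum_{t=0}^{T-1} gamma^t * r_{t+1}."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

G = discounted_return(rewards, gamma)   # with gamma = 1 this is the plain cumulative reward
```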

Probability of trajectory depends on MDP and policy

We can formalize our aforementioned intuition for how an agent interacts with an environment by defining the probability of a trajectory in a highly factorized way. We won't go into details here (it is best described using graphical models), but our intuition is equivalent to making a bunch of conditional independence assumptions that are implied by our definitions of the transition dynamics, reward function, and policy all being functions of the current state (and action). This is what the word Markov in MDP refers to.

For a specific environment and policy $\pi$, we can define the probability of a trajectory $\tau$ as:

$$p_{\pi, P, R}(\tau) = \rho(s_0) \prod_{t=0}^{T-1} \pi(a_t \mid s_t) \, R(r_{t+1} \mid s_t, a_t) \, P(s_{t+1} \mid s_t, a_t)$$

  • We add $\pi$, $P$, and $R$ to the subscript of $p$ to emphasize that this probability depends on the distributions of the policy, transition dynamics, and reward function, but when that is not relevant, we just refer to it as $p(\tau)$.
  • $\rho$ is called the starting state distribution, and is a distribution over starting states (i.e., how likely a state is to be the state the agent starts in). This (marginal) distribution can be dependent on (or independent of) the policy and transition dynamics.
  • Homework Problem: Getting the time indices to all line up takes care! Confirm that our definition of $p_{\pi, P, R}(\tau)$ doesn't have an “index out of bounds” error.

Now that we have defined the (cumulative discounted) value of a trajectory, as well as the probability of a trajectory, we can define a goal via the reward hypothesis.
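Before turning to goals, here is a sketch that evaluates the factorized trajectory probability above (in log space, for numerical stability) on the hypothetical tabular example; the starting-state distribution `rho` is made up, like everything else in these sketches.

```python
import numpy as np

rho = np.array([0.8, 0.2])   # hypothetical starting-state distribution

def trajectory_log_prob(states, actions, rewards, rho):
    """log p_{pi, P, R}(tau) under the factorization above.

    Rewards are Bernoulli in this sketch, so R(r | s, a) is reward_prob[s, a]
    when r == 1 and 1 - reward_prob[s, a] when r == 0.
    """
    logp = np.log(rho[states[0]])                       # rho(s_0)
    for t, (a, r) in enumerate(zip(actions, rewards)):
        s, s_next = states[t], states[t + 1]
        logp += np.log(pi[s, a])                        # pi(a_t | s_t)
        logp += np.log(reward_prob[s, a] if r else 1 - reward_prob[s, a])  # R(r_{t+1} | s_t, a_t)
        logp += np.log(P[s, a, s_next])                 # P(s_{t+1} | s_t, a_t)
    return logp

log_p_tau = trajectory_log_prob(states, actions, rewards, rho)
```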

Goals are maximizing Cumulative Discounted Expected (CDE) rewards

To quote Rich Sutton, the reward hypothesis states:

“That all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward).”

We can formalize this statement as the following optimization problem: find the policy $\pi^*$ from a space of policies $\Pi$ that maximizes the Cumulative Discounted Expected rewards (CDE):

$$\pi^* = \arg\max_{\pi \in \Pi} \; \mathbb{E}_{\tau \sim p_\pi(\tau)} \left[ G(\tau) \right]$$

  • We've done all the work to define all the terms in this optimization problem. Note that $p(\tau)$ depends on $\pi$, $P$, and $R$, while $G(\tau)$ just depends on $\gamma$ (confirm this by looking at the definitions of $p(\tau)$ and $G(\tau)$ and seeing what is required to compute them, given you already have $\tau$). To highlight this, we may write $p_\pi(\tau)$ to emphasize that the probability depends on the policy, which we leverage shortly. A Monte Carlo sketch of this objective follows below.
  • We call $\pi^*$ the optimal policy, and “solving the RL problem” amounts to finding the optimal policy (or at least getting as “close” to optimal as possible). The optimal policy does not need to be unique (i.e., there may be multiple optimal policies for a single MDP).

In order to find optimal policies, it helps to understand the CDE rewards of a policy from different states. We formalize this with a value function $V^\pi(s)$.
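To tie the pieces together, the sketch below Monte Carlo estimates the CDE objective for the hypothetical policy above by averaging discounted returns over sampled trajectories. Searching over the policy space for the policy that maximizes this quantity is exactly the optimization problem stated above; the value functions introduced next make that search tractable.

```python
def estimate_objective(rng, rho, T=50, n_trajectories=1_000):
    """Monte Carlo estimate of J(pi) = E_{tau ~ p_pi(tau)}[G(tau)]."""
    total = 0.0
    for _ in range(n_trajectories):
        s0 = rng.choice(len(rho), p=rho)        # s_0 ~ rho
        _, _, rewards = rollout(rng, s0, T)     # tau ~ p_pi(tau)
        total += discounted_return(rewards, gamma)
    return total / n_trajectories

J_pi = estimate_objective(rng, rho)   # higher is better; pi* maximizes this over the policy space
```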