IJCAI-17 Tutorial: Energy-based machine learning       



IJCAI-17 Tutorial: Energy-based machine learning - overview


Videos from IJCAI-17 tutorial

This tutorial covered the following topics (each part was approximately 50 minutes):

Part I: Boltzmann machines and energy-based models
Speaker: Takayuki Osogami

We review Boltzmann machines and energy-based models.  A Boltzmann machine defines a probability distribution over binary-valued patterns.  One can learn the parameters of a Boltzmann machine via gradient-based approaches in a way that increases the log-likelihood of the data.  The gradient and Laplacian of a Boltzmann machine admit beautiful mathematical representations, although computing them is in general intractable.  This intractability motivates approximate methods, including Gibbs sampling and contrastive divergence, as well as tractable alternatives, namely energy-based models.

Contents:

  1. The Boltzmann machine
  2. Learning a generative model
  3. Learning a discriminative model
  4. Evaluating expectation with respect to a model distribution
  5. Other energy-based models
  6. Non-probabilistic energy-based models
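Sampling from a model distribution, as covered in item 4 above, is typically done with Gibbs sampling. The following is a minimal illustrative sketch (not code from the tutorial) of Gibbs sampling for a small Boltzmann machine over binary units, assuming the standard energy E(x) = -x'Wx/2 - b'x with symmetric W and zero diagonal; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Energy of a Boltzmann machine over binary units x in {0,1}^n:
#   E(x) = -0.5 * x^T W x - b^T x,  with W symmetric and zero diagonal.
n = 4
W = rng.normal(scale=0.5, size=(n, n))
W = np.triu(W, 1)
W = W + W.T               # symmetrize; diagonal stays zero
b = rng.normal(size=n)

def energy(x):
    return -0.5 * x @ W @ x - b @ x

def gibbs_step(x):
    """One sweep of Gibbs sampling: resample each unit given the rest."""
    x = x.copy()
    for i in range(n):
        # P(x_i = 1 | x_{-i}) = sigmoid(sum_j W_ij x_j + b_i)
        p = 1.0 / (1.0 + np.exp(-(W[i] @ x + b[i])))
        x[i] = float(rng.random() < p)
    return x

x = rng.integers(0, 2, size=n).astype(float)
for _ in range(100):
    x = gibbs_step(x)   # after burn-in, x is approximately a model sample
```

Averaging statistics of such samples approximates the model expectations that appear in the (intractable) log-likelihood gradient.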

Part II: Restricted Boltzmann machines and deep energy-based models
Speaker: Sakyasingha Dasgupta

We review restricted Boltzmann machines (RBMs) and their deep variants.  A restricted Boltzmann machine is an undirected graphical model with a bipartite graph structure. It is a Boltzmann machine with hidden units, with the key distinction that there are no connections within a layer (i.e., no visible-to-visible or hidden-to-hidden connections). The lack of intra-layer connections makes the gradient calculation particularly easy and simplifies the Gibbs sampling procedure. Combining an RBM with directed sigmoid belief networks allows one to construct more powerful models such as deep belief networks. Finally, we discuss the deep Boltzmann machine, a fully undirected multi-layered RBM model that provides even better feature extraction capability and allows multimodal generative modeling.

Contents:

  1. The restricted Boltzmann machine
  2. Free energy in RBM
  3. Training RBM
  4. Contrastive divergence and persistent Markov chains
  5. Gaussian-Bernoulli RBM
  6. Deep belief network
  7. Training and sampling from DBN
  8. Applications of DBNs to classification, deep auto-encoders, and information retrieval
  9. Deep Boltzmann machine
  10. Approximate learning in DBM - variational inference, stochastic approximation
  11. Application of DBM to multimodal data generation
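The easy Gibbs sampling afforded by the bipartite structure is what makes contrastive divergence practical for RBM training (items 3–4 above). Below is a hedged sketch of a CD-1 update for a binary RBM; the sizes, learning rate, and initialization are illustrative choices, not values from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid = 6, 3
W = 0.01 * rng.normal(size=(n_vis, n_hid))
a = np.zeros(n_vis)   # visible biases
c = np.zeros(n_hid)   # hidden biases

def cd1_update(v0, lr=0.1):
    """One CD-1 parameter update from a single visible vector v0."""
    global W, a, c
    # Positive phase: hidden probabilities given the data vector
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(n_hid) < ph0).astype(float)
    # Negative phase: one step of alternating Gibbs sampling
    pv1 = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient approximation: <v h>_data - <v h>_reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    c += lr * (ph0 - ph1)

v = rng.integers(0, 2, size=n_vis).astype(float)
for _ in range(50):
    cd1_update(v)
```

Persistent contrastive divergence differs only in that the negative-phase chain is carried over between updates rather than restarted at the data.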

Break

Part III: Boltzmann machines for time-series
Speaker: Takayuki Osogami

We review Boltzmann machines extended for time-series. These models often have a recurrent structure, and backpropagation through time (BPTT) is used to learn their parameters. The per-step computational complexity of BPTT in online learning, however, grows linearly with the length of the preceding time-series (i.e., the learning rule is not local in time), which limits the applicability of BPTT in online learning. We then review the dynamic Boltzmann machine (DyBM), whose learning rule is local in time. We discuss how its learning rule relates to spike-timing dependent plasticity (STDP), which has been postulated and experimentally confirmed for biological neural networks, as well as recent extensions of DyBMs.

Contents:

  1. Non-recurrent Boltzmann machines for time-series
  2. Boltzmann machines for time-series with recurrent structures

    (The second video starts here)

  3. Dynamic Boltzmann machines
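The "local in time" property can be illustrated with eligibility traces: each connection maintains an exponentially decaying summary of the presynaptic history, so an online update needs only the current trace and the current prediction error. The sketch below is a deliberate simplification under assumed dynamics, not the actual DyBM equations; the decay rate and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
W = np.zeros((n, n))      # connection weights
trace = np.zeros(n)       # one eligibility trace per presynaptic unit (simplified)
lam = 0.5                 # trace decay rate (hypothetical value)
lr = 0.05

def step(x_prev, x_now):
    """One online update: local in time, no backprop through the history."""
    global W, trace
    trace = lam * trace + x_prev                 # decay, then add new spikes
    pred = 1.0 / (1.0 + np.exp(-(W @ trace)))   # predicted firing probability
    err = x_now - pred                           # per-unit prediction error
    W += lr * np.outer(err, trace)               # Hebbian-style, trace-based update
    return pred

x = rng.integers(0, 2, size=(20, n)).astype(float)
for t in range(1, 20):
    step(x[t - 1], x[t])
```

Because the trace is updated recursively, the per-step cost is constant in the length of the preceding time-series, unlike BPTT.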

Part IV: Energy-based reinforcement learning
Speaker: Sakyasingha Dasgupta

We first review previous work on using energy-based models as function approximators for reinforcement learning (RL). We show how the energy function of dynamic Boltzmann machines can be used for efficient RL in high-dimensional action spaces and partially observable scenarios. The DyBM energy function provides a linear function approximator that prevents divergence in on-policy methods such as SARSA, in contrast to previous work that used the nonlinear free energy of an RBM as the function approximator. We also discuss energy-based actor-critic reinforcement learning methods. Compared to the recent achievements of deep reinforcement learning in games and robotics, energy-based reinforcement learning can deal effectively with very high-dimensional action spaces by following Boltzmann exploration policies. We discuss deep variants of these energy-based RL methods.
 

Contents:

  1. Brief introduction to reinforcement learning
  2. Free-energy based SARSA learning
  3. DySARSA: DyBM energy-based SARSA learning
  4. Energy-based actor-critic methods
  5. DyNAC: DyBM energy-based actor-critic learning
  6. Energy-based deep reinforcement learning
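The core idea of items 2–3 above, taking the negative (free) energy as the action value and exploring with a Boltzmann policy, can be sketched with a plain linear approximator. Everything below (feature sizes, the random toy environment, all constants) is an illustrative assumption, not the DySARSA algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_feat, n_actions = 4, 2
theta = np.zeros((n_actions, n_feat))   # one weight vector per action
gamma, lr = 0.9, 0.1

def q(s, a):
    # Negative energy as the action value: Q(s, a) = theta_a . phi(s)
    return theta[a] @ s

def boltzmann_action(s, temp=1.0):
    """Boltzmann (softmax) exploration over the action values."""
    prefs = np.array([q(s, a) for a in range(n_actions)]) / temp
    p = np.exp(prefs - prefs.max())
    p /= p.sum()
    return rng.choice(n_actions, p=p)

def sarsa_update(s, a, r, s2, a2):
    """On-policy TD update of the linear value parameters."""
    td = r + gamma * q(s2, a2) - q(s, a)
    theta[a] += lr * td * s

# Toy interaction loop with random features and rewards (illustrative only)
s = rng.random(n_feat)
a = boltzmann_action(s)
for _ in range(100):
    r = rng.random()
    s2 = rng.random(n_feat)
    a2 = boltzmann_action(s2)
    sarsa_update(s, a, r, s2, a2)
    s, a = s2, a2
```

With a linear approximator the on-policy TD update above is the setting in which SARSA is known not to diverge, which is the property the DyBM energy function is said to provide.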
 



Tutorial slides

Box folder




Tutorial date & venue

  • Date: August 21, 2017
  • Time: PM1 and PM2
  • Venue: Melbourne Convention & Exhibition Centre, Room 211