Practical_RL

An open course on reinforcement learning in the wild. Taught on-campus at HSE and YSDA and maintained to be friendly to online students (both English and Russian).

Note: this branch is the on-campus version of the course for spring 2019 YSDA and HSE students. For full course materials, switch to the master branch.

Manifesto:

  • Optimize for the curious. For all the materials that aren’t covered in detail there are links to more information and related materials (D.Silver/Sutton/blogs/whatever). Assignments will have bonus sections if you want to dig deeper.
  • Practicality first. Everything essential to solving reinforcement learning problems is worth mentioning. We won't shy away from covering tricks and heuristics. For every major idea there should be a lab that makes you “feel” it on a practical problem.
  • Git-course. Know a way to make the course better? Noticed a typo in a formula? Found a useful link? Made the code more readable? Made a version for an alternative framework? You're awesome! Pull-request it!

Course info

Additional materials

Syllabus

The syllabus is approximate: the lectures may occur in a slightly different order and some topics may end up taking two weeks.

  • week01_intro Introduction

    • Lecture: RL problems around us. Decision processes. Stochastic optimization, cross-entropy method. Parameter-space search vs action-space search.
    • Seminar: Welcome to OpenAI Gym. Tabular CEM for Taxi-v0, deep CEM for Box2D environments.
    • Homework description - see week1/README.md.
  • week02_value_based Value-based methods

    • Lecture: Discounted reward MDP. Value-based approach. Value iteration. Policy iteration. Discounted reward fails.
    • Seminar: Value iteration.
    • Homework description - see week2/README.md.
  • week03_model_free Model-free reinforcement learning

    • Lecture: Q-learning. SARSA. Off-policy vs on-policy algorithms. N-step algorithms. TD(λ).
    • Seminar: Q-learning vs SARSA vs Expected Value SARSA.
    • Homework description - see week3/README.md.
  • week04 Approximate (deep) RL

  • week05 Exploration

  • week06 Policy Gradient methods

  • week07 Applications I

  • week{++i} Partially Observed MDP

  • week{++i} Advanced policy-based methods

  • week{++i} Applications II

  • week{++i} Distributional reinforcement learning

  • week{++i} Inverse RL and Imitation Learning
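
To give a taste of the week01 material, here is a minimal sketch of the cross-entropy method on a toy one-dimensional objective. The setup and all names are illustrative and not taken from the course notebooks:

```python
import numpy as np

def reward(theta):
    # Toy objective, maximized at theta = 3 (stand-in for an episode's return).
    return -(theta - 3.0) ** 2

def cross_entropy_method(n_iters=50, pop_size=100, elite_frac=0.2, seed=0):
    """Sample candidates from a Gaussian, keep the elite fraction with the
    highest reward, refit the Gaussian to the elites, and repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 5.0
    n_elite = int(pop_size * elite_frac)
    for _ in range(n_iters):
        samples = rng.normal(mu, sigma, size=pop_size)
        rewards = np.array([reward(s) for s in samples])
        elites = samples[np.argsort(rewards)[-n_elite:]]  # top-reward samples
        mu, sigma = elites.mean(), elites.std() + 1e-6    # refit the sampler
    return mu

theta_hat = cross_entropy_method()  # converges near the optimum at 3.0
```

The same loop generalizes to the seminar's tabular CEM (sample episodes from a stochastic policy, refit the policy to elite episodes) and to deep CEM, where the Gaussian is replaced by a neural network's parameters or outputs.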
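
The value iteration covered in week02 can be sketched on a hand-built two-state MDP. The MDP, its encoding, and all names are made up for illustration:

```python
# Hypothetical MDP: transitions[state][action] = [(prob, next_state, reward), ...]
mdp = {
    's0': {'stay': [(1.0, 's0', 0.0)], 'go': [(1.0, 's1', 1.0)]},
    's1': {'stay': [(1.0, 's1', 2.0)], 'go': [(1.0, 's0', 0.0)]},
}

def value_iteration(mdp, gamma=0.9, n_iters=200):
    """Repeat the Bellman optimality backup:
    V(s) <- max_a sum_{s'} P(s'|s,a) * (r + gamma * V(s'))."""
    V = {s: 0.0 for s in mdp}
    for _ in range(n_iters):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in mdp[s].values())
             for s in mdp}
    return V

V = value_iteration(mdp)
# Optimal: stay in s1 forever, so V(s1) = 2 / (1 - 0.9) = 20,
# and V(s0) = 1 + 0.9 * V(s1) = 19.
```

Because gamma < 1, each backup is a contraction, so the values converge geometrically to the unique fixed point regardless of initialization.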
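
The off-policy vs on-policy distinction from week03 comes down to one term in the TD target: Q-learning bootstraps from the greedy next action, SARSA from the action the behavior policy actually took. A minimal sketch with hypothetical Q-tables (the states, actions, and numbers are invented for illustration):

```python
def q_learning_update(Q, s, a, r, s2, alpha=0.1, gamma=0.99):
    """Off-policy TD update: target uses max over next-state actions."""
    best_next = max(Q[s2].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.99):
    """On-policy TD update: target uses the action a2 that was actually taken."""
    Q[s][a] += alpha * (r + gamma * Q[s2][a2] - Q[s][a])

# Two identical toy Q-tables to compare the updates side by side.
Q_ql = {'s': {'L': 0.0, 'R': 0.0}, 't': {'L': 1.0, 'R': 5.0}}
Q_sa = {'s': {'L': 0.0, 'R': 0.0}, 't': {'L': 1.0, 'R': 5.0}}
q_learning_update(Q_ql, 's', 'L', r=0.0, s2='t')
sarsa_update(Q_sa, 's', 'L', r=0.0, s2='t', a2='L')
# Q-learning bootstraps from max(Q['t']) = 5; SARSA from Q['t']['L'] = 1,
# so the same transition produces different updates.
```

Expected Value SARSA, the third variant from the seminar, would instead average `Q[s2][·]` under the behavior policy's action probabilities.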

Course staff

Course materials and teaching by: [unordered]

Contributions
