
Amir Mehdi Soufi Enayati

  • BSc (Sharif University of Technology, 2016)

  • MSc (Sharif University of Technology, 2018)

Notice of the Final Oral Examination for the Degree of Doctor of Philosophy

Topic

Towards Adaptive Reinforcement Learning for Robotic Manipulation: Sample Efficiency, Sim-to-Real Transfer, and Context Inference

Department of Mechanical Engineering

Date & location

  • Friday, September 12, 2025

  • 12:00 P.M.

  • Virtual Defence

Reviewers

Supervisory Committee

  • Dr. Homayoun Najjaran, Department of Mechanical Engineering, University of Victoria (Supervisor)

  • Dr. Yang Shi, Department of Mechanical Engineering, UVic (Member)

  • Dr. Teseo Schneider, Department of Computer Science, UVic (Outside Member) 

External Examiner

  • Dr. William Melek, Department of Mechanical and Mechatronics Engineering, University of Waterloo 

Chair of Oral Examination

  • Dr. Lara Lauzon, School of Exercise Science, Physical and Health Education, UVic 

Abstract

Modern robotic systems must be able to adapt to novel tasks and environments in a sample-efficient and robust manner. This dissertation proposes a framework that enables such adaptability through three interrelated and complementary contributions in the field of reinforcement learning (RL) for robot manipulation. The central challenges limiting the practical use of RL are addressed: data efficiency, sim-to-real transfer, and context-aware generalization to unseen tasks.

The first contribution addresses the sample-efficiency challenge. Demonstration Exploitation by Abstract Symmetry of Environments (Demo-EASE) is a sample-efficient RL framework in which a limited set of demonstrations is augmented by exploiting symmetry. By identifying and leveraging symmetry in manipulation environments, abstract demonstrations are reused across multiple sub-regions of the task space. A masked behavior-cloning mechanism adaptively balances pure RL and imitation learning online. Demo-EASE achieves effective knowledge transfer, improved learning efficiency, and fewer required environment interactions while generalizing across the workspace of the expert policy.
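The core idea of reusing one demonstration across symmetric sub-regions can be illustrated with a hypothetical minimal sketch (this is not the dissertation's implementation, and the planar mirror symmetry below is only one simple instance of the abstract symmetries Demo-EASE exploits):

```python
import numpy as np

def mirror_demo(states, actions, axis=0):
    """Reflect a recorded (state, action) trajectory across one workspace axis,
    producing a valid demonstration for the mirrored sub-region."""
    s, a = states.copy(), actions.copy()
    s[:, axis] *= -1.0   # mirror the position component
    a[:, axis] *= -1.0   # mirror the matching action component
    return s, a

# One short planar reach demo: states are (x, y) positions,
# actions are (dx, dy) end-effector deltas.
states = np.array([[0.2, 0.1], [0.3, 0.2], [0.4, 0.3]])
actions = np.array([[0.1, 0.1], [0.1, 0.1], [0.0, 0.0]])

m_states, m_actions = mirror_demo(states, actions)
print(m_states[0])  # [-0.2  0.1]
```

A single expert trajectory collected in one half of the workspace thus yields training data for the other half without additional demonstrations.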

The second contribution focuses on improving the reliability of sim-to-real transfer. A novel concept, Real-Time Intrinsic Stochasticity (RT-IS), is introduced, demonstrating that the inherent noise of real-time simulation can be beneficial in approximating real-world uncertainty. Experimental validation on simulated and physical robot tasks confirms that RT-IS improves deployability, requiring less explicit tuning than domain randomization and relaxing the demands on modeling precision.

The third contribution addresses the challenge of inferring task representations in a meta-RL setting. A transformer-based belief model, Context Representation via Action-Free Transformer encoder-decoder (CRAFT), is developed to infer a variational latent task belief from sequences of states and rewards, without access to the agent's actions. This action-agnostic approach improves adaptability in partially observable environments and supports effective zero-shot learning. Evaluated on the MetaWorld benchmark, CRAFT outperforms existing baselines in generalization while producing meaningful task inferences.
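The action-free aspect of CRAFT's input can be sketched as follows. This is a hypothetical illustration, not the dissertation's code: only the construction of action-agnostic context tokens is shown, and the transformer encoder-decoder that consumes them is omitted.

```python
import numpy as np

def build_context(states, actions, rewards):
    """Stack per-step tokens of [state, reward]; actions are deliberately
    ignored, so the belief model never conditions on what the agent did."""
    del actions  # action-agnostic by construction
    return np.concatenate([states, rewards[:, None]], axis=1)

rng = np.random.default_rng(1)
states = rng.normal(size=(5, 3))    # 5 steps of a 3-dimensional state
actions = np.zeros((5, 2))          # present in the rollout, unused here
rewards = np.linspace(0.0, 1.0, 5)

tokens = build_context(states, actions, rewards)
print(tokens.shape)  # (5, 4): state dims + reward, no action dims
```

Because the context carries no action information, the same belief encoder can be applied to rollouts generated by different policies, which is one motivation for an action-agnostic design.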

Together, these contributions complete the roadmap toward a framework for adaptive reinforcement learning in robotics. The results demonstrate how structured data, intentional noise, and action-agnostic long-horizon attention can yield efficient, robust, and adaptable learning systems. This work lays a theoretical and practical foundation for future developments in adaptive robotics, enabling robots to operate across a broad spectrum of environments and objectives while reducing the need for retraining.