Deep networks are commonly used to model dynamical systems, predicting how
the state of a system will evolve over time (either autonomously or in response
to control inputs). Despite the predictive power ...
When neural networks are used to model dynamics, properties such as stability
of the dynamics are generally not guaranteed. In contrast, there is a recent
method for learning the dynamics of autonomous systems ...
Learning how complex dynamical systems evolve over time is a key challenge in
system identification. For safety-critical systems, it is often crucial that
the learned model is guaranteed to converge to ...
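A common way to obtain such guarantees is to build stability into the model rather than hope that training finds it. The sketch below is a minimal, hypothetical illustration of this idea, not the method of any abstract above: a nominal (learned) vector field is projected so that a Lyapunov function V provably decreases along trajectories. Here V is fixed to 0.5*||x||^2 and the "network" is a random map; in practice both are typically learned.

```python
import numpy as np

# Hypothetical sketch: enforce stability of learned dynamics dx/dt = f(x) by
# projecting a nominal vector field onto directions along which a Lyapunov
# function V decreases. V(x) = 0.5 * ||x||^2 is fixed here for simplicity.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

def nominal_f(x):
    # Stand-in for a neural network's output (no stability guarantee).
    return np.tanh(W @ x)

def stable_f(x, alpha=0.1):
    f = nominal_f(x)
    grad_V = x                          # gradient of V(x) = 0.5 * ||x||^2
    V = 0.5 * x @ x
    violation = grad_V @ f + alpha * V  # > 0 means V decreases too slowly
    if violation > 0:
        # Remove the violating component so that dV/dt <= -alpha * V holds.
        f = f - violation * grad_V / (grad_V @ grad_V + 1e-12)
    return f

x = rng.normal(size=4)
for _ in range(2000):
    x = x + 0.01 * stable_f(x)          # forward-Euler rollout
print(np.linalg.norm(x))                # shrinks toward 0 by construction
```

Because the projection is applied pointwise, the decrease condition holds for every parameter setting, not only after successful training.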
Model-Based Reinforcement Learning involves learning a \textit{dynamics
model} from data, and then using this model to optimise behaviour, most often
with an online \textit{planner}. Much of the recent ...
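To make the model-plus-planner loop concrete, here is a minimal random-shooting planner. The `model(state, action) -> (next_state, reward)` interface and the toy point-mass model are assumptions for illustration, not any particular paper's API.

```python
import numpy as np

def plan(model, state, act_dim, horizon=15, n_candidates=500, rng=None):
    """Minimal random-shooting planner: sample action sequences, roll them
    out through the learned dynamics model, and return the first action of
    the best sequence."""
    rng = rng or np.random.default_rng()
    best_return, best_first = -np.inf, None
    for _ in range(n_candidates):
        s, total = state, 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon, act_dim))
        for a in actions:
            s, r = model(s, a)
            total += r
        if total > best_return:
            best_return, best_first = total, actions[0]
    return best_first

# Toy demo: drive a 1-D point mass toward the origin.
def toy_model(s, a):
    s_next = s + 0.1 * a
    return s_next, -float(s_next @ s_next)

print(plan(toy_model, np.array([1.0]), act_dim=1))
```

In practice the random sampler is often replaced by the cross-entropy method or gradient-based trajectory optimisation, but the replan-at-every-step structure stays the same.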
Invariance and stability are essential notions in the study of dynamical systems,
and thus it is of great interest to learn a dynamics model with a stable
invariant set. However, existing methods can only handle ...
Learning task-agnostic dynamics models in high-dimensional observation spaces
can be challenging for model-based RL agents. We propose a novel way to learn
latent world models by learning to predict ...
Predictive control is a key prerequisite for planning in robotic control.
In this paper, we explore the effects of subcomponents of a control problem on long-term prediction error, including choosing a system, collecting data, and training a model.
A key challenge for an agent learning to interact with the world is to reason about physical properties of objects and to foresee their dynamics under the effect of applied forces. In order to scale ...
Deep learning has been widely used within learning algorithms for robotics.
One disadvantage of deep networks is that they are black-box
representations. Therefore, the learned approximation ...
Recent work has shown deep learning can accelerate the prediction of physical
dynamics relative to numerical solvers. However, limited physical accuracy and
an inability to generalize under the distribution ...
Standard dynamics models for continuous control make use of feedforward
computation to predict the conditional distribution of next state and reward
given current state and action using a multivariate Gaussian ...
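A minimal sketch of such a model, assuming PyTorch and a diagonal covariance (a common, though not universal, choice): a feedforward network maps state and action to the mean and log-variance of a Gaussian over the next state and reward.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """Hypothetical sketch: a feedforward network mapping (state, action)
    to a diagonal multivariate Gaussian over (next_state, reward)."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        out_dim = state_dim + 1  # next state plus scalar reward
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * out_dim),  # mean and log-variance
        )

    def forward(self, state, action):
        mean, log_var = self.net(torch.cat([state, action], -1)).chunk(2, -1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * log_var))

model = GaussianDynamics(state_dim=11, action_dim=3)
dist = model(torch.zeros(32, 11), torch.zeros(32, 3))
print(dist.mean.shape)  # (32, 12): next state (11 dims) + reward (1 dim)

# Training would minimise the negative log-likelihood of observed transitions:
# loss = -dist.log_prob(torch.cat([s_next, r], -1)).sum(-1).mean()
```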
We propose a method to predict the sim-to-real transfer performance of RL
policies. Our transfer metric simplifies the selection of training setups (such
as algorithm, hyperparameters, randomizations) ...
Noisy observations coupled with nonlinear dynamics pose one of the biggest
challenges in robot motion planning. By decomposing nonlinear dynamics into a
discrete set of local dynamics models, hybrid dynamics ...
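As a toy illustration of the decomposition, the sketch below selects among a handful of local linear models by nearest anchor state. Real hybrid-dynamics methods use learned mode classifiers or guard conditions, so treat this purely as a sketch under those assumptions.

```python
import numpy as np

class HybridDynamics:
    """Hypothetical sketch: approximate nonlinear dynamics with a discrete
    set of local linear models x' = A_i x + B_i u + c_i."""
    def __init__(self, anchors, As, Bs, cs):
        self.anchors, self.As, self.Bs, self.cs = anchors, As, Bs, cs

    def mode(self, x):
        # Pick the local model whose anchor state is nearest (a simple
        # stand-in for a learned mode classifier or guard conditions).
        return int(np.argmin([np.linalg.norm(x - a) for a in self.anchors]))

    def step(self, x, u):
        i = self.mode(x)
        return self.As[i] @ x + self.Bs[i] @ u + self.cs[i]

# Two local models around anchors at (-1, 0) and (+1, 0):
I = np.eye(2)
hd = HybridDynamics(anchors=[np.array([-1.0, 0.0]), np.array([1.0, 0.0])],
                    As=[0.9 * I, 0.8 * I], Bs=[I, I],
                    cs=[np.zeros(2), np.zeros(2)])
print(hd.step(np.array([0.5, 0.0]), np.zeros(2)))
```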
A novel model of opinion dynamics is proposed, in which the evolution of an individual's opinion depends not only on its own and its neighbors' current opinions, but also on past opinions, as in the real world.
Sufficient and/or necessary conditions for equal polarization, consensus, and neutralizability of the opinions are presented in terms of the network's topological structure and spectral analysis.
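A minimal simulation of an opinion model with memory, under assumed row-stochastic network weights W and a memory kernel m (both invented here for illustration, not taken from the abstract above): each agent averages its neighbors' current and past opinions.

```python
import numpy as np

def step(history, W, m):
    """history: opinion vectors [x(t), x(t-1), ..., x(t-K+1)].
    W: row-stochastic network weights; m: nonnegative memory weights with
    sum(m) = 1, so opinions stay in the convex hull of past opinions."""
    return sum(mk * (W @ xk) for mk, xk in zip(m, history))

rng = np.random.default_rng(1)
n, K = 5, 3
W = rng.random((n, n))
W /= W.sum(1, keepdims=True)          # make rows sum to one
m = np.array([0.6, 0.3, 0.1])         # weight on x(t), x(t-1), x(t-2)
history = [rng.random(n) for _ in range(K)]
for _ in range(100):
    x_next = step(history, W, m)
    history = [x_next] + history[:-1]
print(history[0])  # drifts toward consensus under these positive weights
```

Whether such a model polarizes, reaches consensus, or can be neutralized depends on the spectrum of the augmented update matrix, which is what the spectral conditions in the abstract above characterize.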
Opinion dynamics models have been developed to study and predict the evolution of public opinion.
Intensive research has been carried out on these models, especially exploring the different rules and topologies, which can be considered two degrees of freedom of these models.
Modelling robot dynamics accurately is essential for control, motion
optimisation and safe human-robot collaboration. Given the complexity of modern
robotic systems, dynamics modelling remains non-trivial ...
Model-based reinforcement learning (RL) algorithms can attain excellent
sample efficiency, but often lag behind the best model-free algorithms in terms
of asymptotic performance. This is especially true ...
Estimating accurate forward and inverse dynamics models is a crucial
component of model-based control for sophisticated robots such as robots driven
by hydraulics, artificial muscles, or robots dealing with ...
It is well-known that inverse dynamics models can improve tracking
performance in robot control. These models need to precisely capture the robot
dynamics, which consist of well-understood components ...
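One common way to capture this split is to add a learned residual on top of an analytic rigid-body term. The toy 1-DoF example below is our own illustration under that assumption, not the model from the abstract above.

```python
import numpy as np

# Hypothetical sketch: combine an analytic rigid-body inverse-dynamics term
# with a learned residual for hard-to-model effects (e.g., friction).
# Both components are toy stand-ins, not a real robot model.

def rbd_inverse_dynamics(q, dq, ddq, inertia=1.0, gravity=9.81):
    # Toy 1-DoF pendulum-style rigid-body term (the well-understood part).
    return inertia * ddq + gravity * np.sin(q)

def residual(dq, coulomb=0.2, viscous=0.05):
    # Stand-in for a learned model of friction and other residual effects.
    return coulomb * np.sign(dq) + viscous * dq

def predicted_torque(q, dq, ddq):
    return rbd_inverse_dynamics(q, dq, ddq) + residual(dq)

print(predicted_torque(q=0.3, dq=1.0, ddq=0.5))
```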
Offline reinforcement learning (RL) aims to extract near-optimal policies from imperfect offline data without additional environment interactions.
We investigate how to improve the performance of offline RL algorithms, their robustness to the quality of offline data, and their generalization capabilities.