State estimation using machine learning
Advisor: Morris, Kirsten
Publisher: University of Waterloo
Abstract
State estimation refers to determining the states of a dynamical system that evolves under disturbances, based on noisy measurements, a partially known or unknown initial condition, and a known system model. Jordan recurrent networks (JRNs) have a structure that mimics that of a dynamical system and are therefore attractive for estimator design. We show that a JRN performs better than an extended Kalman filter (EKF) and an unscented Kalman filter (UKF) for several examples. We also provide an input-to-state stability analysis of the error dynamics of JRNs and demonstrate the stability of the error dynamics for several examples.
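To make the Jordan structure concrete: in a Jordan network the feedback path carries the network's output (here, the state estimate) rather than an internal hidden state, paralleling the recursion x_{k+1} = f(x_k) of the underlying dynamics. The sketch below is a minimal illustration of that idea; the layer sizes, activation, and the choice to concatenate the previous estimate with the current measurement are assumptions for illustration, not the thesis's specification.

```python
import torch
import torch.nn as nn

class JordanRNNEstimator(nn.Module):
    # Hypothetical JRN sketch: the previous state estimate (the output)
    # is fed back and combined with the current measurement, mirroring
    # the recursion of a dynamical system.
    def __init__(self, state_dim, meas_dim, hidden_dim=32):
        super().__init__()
        self.hidden = nn.Linear(state_dim + meas_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, state_dim)

    def forward(self, y_seq, xhat0):
        # y_seq: (T, batch, meas_dim); xhat0: (batch, state_dim)
        xhat, estimates = xhat0, []
        for y in y_seq:
            h = torch.tanh(self.hidden(torch.cat([xhat, y], dim=-1)))
            xhat = self.out(h)  # output feedback = Jordan structure
            estimates.append(xhat)
        return torch.stack(estimates)  # (T, batch, state_dim)
```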
We then extend the Jordan structure to long short-term memory (LSTM) networks to obtain a Jordan LSTM (JLSTM), which, as we show in several examples, is more robust to changes in initial conditions and noise and performs better than an EKF and a particle filter (PF). It also trains faster than an ELSTM for state estimation when trained to a similar normalized mean squared error (MSE).
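A JLSTM can be pictured the same way, with the Jordan feedback wrapped around an LSTM cell. The following is a hedged sketch under the assumption that the previous estimate is concatenated with the measurement as the cell input; the thesis's exact wiring may differ.

```python
import torch
import torch.nn as nn

class JordanLSTMEstimator(nn.Module):
    # Hypothetical JLSTM sketch: an LSTM cell whose input at each step
    # is the current measurement concatenated with the previous state
    # estimate, so the estimate itself is the recurrent (Jordan) signal.
    def __init__(self, state_dim, meas_dim, hidden_dim=32):
        super().__init__()
        self.cell = nn.LSTMCell(state_dim + meas_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, state_dim)

    def forward(self, y_seq, xhat0):
        batch = xhat0.shape[0]
        h = xhat0.new_zeros(batch, self.cell.hidden_size)
        c = xhat0.new_zeros(batch, self.cell.hidden_size)
        xhat, estimates = xhat0, []
        for y in y_seq:
            h, c = self.cell(torch.cat([xhat, y], dim=-1), (h, c))
            xhat = self.out(h)  # estimate is fed back at the next step
            estimates.append(xhat)
        return torch.stack(estimates)
```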
We also compare a shallow and a deep JLSTM and observe that they perform similarly in terms of average error across time steps and MSE, but the deep JLSTM takes longer to train because of its additional layers.
We also train a JLSTM with a modified maximum-likelihood-equivalent loss function (JLSTM-ML). We observe that, for Gaussian initial conditions and disturbances, the average error at each time step is lowest for the JLSTM-ML estimates. JLSTM-ML is also the most robust to changes in initial conditions and disturbances in the systems considered. Time taken to train, time taken to test, MSE, and average error at each time step were used to compare the various networks.
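The abstract does not spell out the modified loss; a standard maximum-likelihood-equivalent choice under Gaussian assumptions is the negative log-likelihood of the true state under a Gaussian centred at the network's estimate. A minimal sketch, assuming a per-component predicted log-variance:

```python
import torch

def ml_equivalent_loss(xhat, log_var, x_true):
    # Hypothetical sketch of a Gaussian negative log-likelihood loss:
    # a variance-weighted squared error plus a log-variance penalty.
    # Constant terms are dropped since they do not affect the gradient.
    var = torch.exp(log_var)
    return 0.5 * (((x_true - xhat) ** 2) / var + log_var).mean()
```

PyTorch's built-in torch.nn.GaussianNLLLoss computes the same quantity and could be used directly.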
We discretized the following systems to use as examples for data generation, training, and testing: a mass-spring system, a pendulum in the down position, a reversed Van der Pol oscillator, and Galerkin approximations of Burgers' partial differential equation and the Kuramoto-Sivashinsky partial differential equation.
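As one illustration of the data-generation step, here is a hedged sketch for the first listed example, a mass-spring system discretized by forward Euler with additive process disturbance and measurement noise. The parameter values, noise levels, and step size are placeholders, not those used in the thesis.

```python
import numpy as np

def simulate_mass_spring(steps=500, dt=0.01, m=1.0, k=1.0,
                         q=0.01, r=0.05, seed=None):
    # Forward-Euler discretization of m*x'' + k*x = 0 with an additive
    # process disturbance (std q) and noisy position measurements (std r).
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0])  # [position, velocity]
    states, measurements = [], []
    for _ in range(steps):
        xdot = np.array([x[1], -(k / m) * x[0]])
        x = x + dt * xdot + q * rng.standard_normal(2)
        states.append(x.copy())
        measurements.append(x[0] + r * rng.standard_normal())
    return np.array(states), np.array(measurements)

states, ys = simulate_mass_spring(seed=0)
```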