Chordal sparsity in convex optimization
Time: 13:15-14:00. Place: Glashuset.
Chordal sparsity plays a fundamental role in many areas, including sparse Cholesky factorization, positive definite and Euclidean distance matrix completion, graphical models, and sparse semidefinite programming. In this talk we discuss conic optimization with constraints defined in terms of a cone of sparse positive semidefinite matrices with a chordal sparsity pattern. We describe efficient chordal matrix techniques for evaluating the function value and derivatives of the logarithmic barrier function for this convex cone, and for the associated dual cone. We also discuss a software implementation, and applications to semidefinite programming, nonlinear optimization, and machine learning.
Holistic perspectives on dynamic measurements - Dynamic metrology and deterministic sampling
Time: 13:15-14:00. Place: Glashuset.
Dynamic measurements are ubiquitous in modern science and technology. Their accuracy is the target for dynamic calibrations, or characterizations. The characterization is usually presented as a non-parameterized response with an estimated measurement uncertainty, in the time or the frequency domain.
Translating one characterization of a measurement system to measurements of different signals is non-trivial. Dynamic models are rarely utilized in the context of calibrations, and virtually no methods for translation are practiced. Dynamic metrology is intended to bridge this gap between calibrations and their final utilization, whatever that may be. It is meant to be the equivalent of system identification and control theory in measurement science (metrology), using similar physical, mathematical, statistical and numerical dynamic models, system analysis, transformations, and signal processing methods. For instance, pre-distortion for linearization of amplifiers could be recast as non-causal post-distortion for correction of measurement systems. At present, dynamic metrology embraces characterization, model identification, error estimation, correction and uncertainty estimation.
In the first part of the presentation, the motivation for dynamic metrology is presented from a general perspective. Proposed methods as well as possible applications will be briefly mentioned. The second part is devoted to recently proposed methods of deterministic sampling of uncertain dynamic models for propagating dynamic measurement uncertainty. This class of methods originates from the idea of propagating covariance within the standard unscented Kalman filter. Their efficiency and simplicity allow for handling highly complex models for which random sampling (Monte Carlo simulation) is intractable and linearization is too inaccurate and/or inconvenient.
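As a minimal illustration of the sigma-point idea behind deterministic sampling (a hypothetical scalar example, not the methods proposed in the talk): 2n + 1 deterministically chosen samples are propagated through a nonlinear model, and the mean and variance are recovered from weighted sums.

```python
import math

def sigma_points(mean, var, kappa=2.0):
    """2n + 1 deterministic samples of a scalar Gaussian (n = 1)."""
    n = 1
    s = math.sqrt((n + kappa) * var)
    return ([mean, mean + s, mean - s],
            [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)])

def propagate(f, mean, var, kappa=2.0):
    """Approximate mean and variance of f(x) for x ~ N(mean, var)."""
    pts, wts = sigma_points(mean, var, kappa)
    ys = [f(p) for p in pts]
    m = sum(w * y for w, y in zip(wts, ys))
    v = sum(w * (y - m) ** 2 for w, y in zip(wts, ys))
    return m, v

# Quadratic sensor model y = x**2 with x ~ N(2, 0.25): for this model the
# sigma-point mean matches the exact value E[x**2] = 2**2 + 0.25 = 4.25.
m, v = propagate(lambda x: x * x, 2.0, 0.25)
```

Only three model evaluations are needed here, which is the source of the efficiency compared to random sampling.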
Towards fast and robust cooperative positioning with low complexity
Time: 13:15-14:00. Place: Glashuset.
Navigation in GPS-challenged environments, such as indoors and in urban canyons, can be enabled with RF-based technologies, e.g., WiFi, Zigbee, or ultra-wideband transmission. Through ranging with reference nodes, devices can determine their positions in a distributed manner. However, to ensure sufficient coverage and accuracy, a high density of reference nodes is often required. Cooperative positioning can reduce the reliance on reference nodes by using peer-to-peer communication. In this presentation, we will describe a powerful algorithm for Bayesian cooperative positioning in large-scale mobile networks. This algorithm is based on message passing on a distributed graphical model, and allows devices to localize quickly and precisely, even when few reference nodes are present. We will show how to fuse information in a computationally efficient way, how to represent information in a concise format, and how to deal with false or misleading information. Finally, we will list a number of open research questions.
Navigation and SAR Auto-focusing in a Sensor Fusion Framework
Time: 10:15-12:00. Place: Visionen.
Since its invention in the 1940s, radar (Radio Detection and Ranging) has become an important ranging sensor in many areas of technology and science. Most military and many civilian applications are unimaginable today without radar. With technological development, radar has become more capable and more widely available. One of its applications is Synthetic Aperture Radar (SAR), where an airborne radar is used to create high-resolution images of a scene. Although known since the 1950s, SAR methods have been continuously developed and improved, and new algorithms enabling real-time applications have emerged lately. Together with smaller and lighter hardware components, this has made SAR an interesting sensor to mount on smaller unmanned aerial vehicles (UAVs).
One important input to the SAR algorithms is an estimate of the platform's motion, such as its position and velocity. Since this estimate is always corrupted by errors, particularly if a lower-grade navigation system, common in UAV applications, is used, the SAR images will be distorted. One of the most frequent distortions caused by unknown platform motion is image defocus. The process of correcting the image focus is called auto-focusing in SAR terminology. Traditionally, this problem was solved by methods that discard the platform's motion information, mostly because of the off-line processing approach, i.e. the images were created after the flight. Since the image (de)focus and the motion of the platform are related to each other, it is possible to use the SAR images themselves as a sensor and improve the estimate of the platform's motion.
The auto-focusing problem can be cast as a sensor fusion problem. Sensor fusion is the process of fusing information from different sensors in order to obtain the best possible estimate of the states. Here, the information from sensors measuring the platform's motion, mainly accelerometers, is fused with the information from the SAR images to estimate the motion of the flying platform. Two different methods based on this approach are tested on simulated SAR data and the results are evaluated. One method is based on an optimisation-based formulation of the sensor fusion problem, leading to batch processing, while the other is based on sequential processing of the radar data, leading to a filtering approach. The obtained results are promising for both methods, and the performance is comparable with that of a high-precision navigation aid, such as the Global Positioning System (GPS).
Diversity in Wireless Range Estimation
Time: 13:15-14:00. Place: Glashuset.
Although space and frequency diversity is utilized extensively for communications systems, the effects of diversity on positioning systems have not been investigated thoroughly. In this talk, the effects of space and frequency diversity are quantified for time delay (range) estimation. First, optimal time delay estimation in a single-input multiple-output (SIMO) system is studied. The theoretical limits on the estimation accuracy are calculated in terms of the Cramer-Rao lower bound (CRLB). In addition to the optimal solution, a two-step suboptimal range estimator is proposed, and its performance is compared with the CRLBs. Then, dispersed spectrum cognitive radio systems are considered, and the effects of frequency diversity on time delay estimation are investigated. Both the theoretical limits and the optimal estimator are obtained. In addition, a two-step approach is proposed to obtain accurate time delay estimates of signals that occupy multiple dispersed bands simultaneously, with significantly lower computational complexity than the optimal maximum likelihood (ML) estimator. Various mechanisms for diversity combining are presented for time delay estimation. Finally, time delay estimation is studied for multicarrier systems in the presence of interference. Specifically, closed-form expressions are obtained for CRLBs in various scenarios, and based on the CRLB expressions, an optimal power allocation (or, spectrum shaping) strategy is proposed. This strategy considers the constraints not only from the sensed interference level but also from the regulatory emission mask.
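For reference, the classical CRLB for delay estimation of a known signal in additive white Gaussian noise (a standard result, not a formula specific to this talk) ties the achievable accuracy to the SNR and the effective (RMS) signal bandwidth β:

```latex
\operatorname{Var}(\hat{\tau}) \;\ge\; \frac{1}{8\pi^{2}\beta^{2}\,\mathrm{SNR}},
\qquad
\beta^{2} \;=\; \frac{\int f^{2}\,|S(f)|^{2}\,df}{\int |S(f)|^{2}\,df}
```

Frequency diversity effectively increases β², which is one way to see why occupying multiple dispersed bands can improve ranging accuracy.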
Aircraft Trajectory Optimization
Time: 13:15-14:00. Place: Glashuset.
The problem of finding an optimal aircraft trajectory in close to real time is considered. Using multiple models for different parts of the trajectory, where each model is tailored for the particular conditions, an efficient method has been developed that makes interactive use feasible. The modeling used and the numerical implementation are described in some detail. Possible pilot interfaces and various extensions involving airspace constraints and considering aircraft engine emissions in trajectory optimization are also discussed.
Learning Control Design and Applications for Electromechanical Systems
Time: 10:45-11:30. Place: Glashuset.
Iterative learning control (ILC) and repetitive control (RC) are two novel classes of learning control methods, which can significantly enhance control system performance by learning from previous experience to handle repetitive factors of a practical process in a smart way.
A majority of electromechanical systems in use perform repetitive tasks, such as an electrical drive operating around a given speed. Meanwhile, electromechanical systems are often perturbed by repetitive disturbances, such as the torque ripples in an electric motor. In such circumstances, ILC and RC offer highly effective solutions for improving control system performance, doing so with simple software or plug-in modifications instead of sophisticated hardware reconstruction. ILC and RC methods are starting to have a substantial impact in the electromechanical and mechatronic industries.
The goal of the talk is to introduce the basic principles and designs of ILC and RC, and demonstrate several real-time examples.
Inertial Navigation at Imego Institute
Time: 13:15-14:00. Place: Glashuset.
At the Imego institute, sensors and sensor systems of different kinds are developed, one kind being micro-electromechanical inertial sensors (gyros and accelerometers). In this talk, two applications of such sensors for inertial navigation will be discussed. The first is a system for computing the position of surveying equipment, so that measurements can be performed during conditions where GPS is temporarily unavailable. The second is an IMU (inertial measurement unit) designed for rapid and violent movements, such as vehicle crash tests. The system can be used, for example, to compute the position and orientation of a vehicle or of the head of a crash dummy.
Estimation-based iterative learning control
Time: 10:15-12:00. Place: Visionen.
In many applications industrial robots perform the same motion repeatedly. One way of compensating the repetitive part of the error is by using iterative learning control (ILC). The ILC algorithm makes use of the measured errors and iteratively calculates a correction signal that is applied to the system.
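The basic mechanism can be sketched as follows (a textbook P-type ILC on a hypothetical first-order plant, not the estimation-based algorithms developed in the thesis):

```python
# Plant: y(t+1) = a*y(t) + b*u(t), zero initial state each trial.
a, b = 0.5, 1.0
N = 20                      # samples per trial
r = [1.0] * (N + 1)         # reference trajectory

def simulate(u):
    y = [0.0] * (N + 1)
    for t in range(N):
        y[t + 1] = a * y[t] + b * u[t]
    return y

gamma = 1.0                 # learning gain; |1 - gamma*b| < 1 gives convergence here
u = [0.0] * N
errors = []
for k in range(25):
    y = simulate(u)
    e = [r[t] - y[t] for t in range(N + 1)]
    errors.append(max(abs(v) for v in e[1:]))
    # P-type update: the error is shifted one step to match the plant's delay
    u = [u[t] + gamma * e[t + 1] for t in range(N)]
```

Over the iterations the tracking error shrinks toward zero: the correction signal is learned from the measured error of the previous trial, which is exactly the repetitive-task setting described above.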
The main topic of the thesis is the application of an ILC algorithm to a dynamic system where the controlled variable is not measured. A remedy for this difficulty is to use additional sensors in combination with signal processing algorithms to obtain estimates of the controlled variable. A framework is proposed for analysing ILC algorithms that use an estimate of the controlled variable. This is a relevant research problem in, for example, industrial robot applications, where normally only the motor angular positions are measured while the control objective is to follow a desired tool path. Additionally, the dynamic model of the flexible robot structure suffers from uncertainties. The behaviour of a system subject to these difficulties, when controlled by an ILC algorithm that uses the measured variables directly, is illustrated experimentally on both a serial and a parallel robot, and in simulations of a flexible two-mass model. It is shown that the correction of the tool-position error is limited by the accuracy of the robot model.
The benefits of estimation-based ILC are illustrated for the case where measurements of the robot motor angular positions are fused with measurements from an additional accelerometer mounted on the robot tool to form a tool-position estimate. Estimation-based ILC is studied in simulations on a flexible two-mass model and on a flexible nonlinear two-link robot model, as well as in experiments on a parallel robot. The results show that it is possible to improve the tool performance when a tool-position estimate is used in the ILC algorithm, compared to when the original measurements are used directly in the algorithm. Furthermore, the resulting performance relies on the quality of the estimate, as expected.
In the last part of the thesis, some implementation aspects of ILC are discussed. Since the ILC algorithm involves filtering of signals over finite-time intervals, often using non-causal filters, it is important that the boundary effects of the filtering operations are appropriately handled when implementing the algorithm. It is illustrated by theoretical analysis and in simulations that the method of implementation can have a large influence on the stability and convergence properties of the algorithm.
Fuel optimal control of heavy trucks
Time: 13:15-14:00. Place: Glashuset.
One third of the life cycle cost of a heavy truck is the fuel cost. One way to reduce the fuel consumption is to use information about future conditions to find the optimal velocity profile and gear selection. In our case this information is provided by a database of road gradients in combination with the current position given by a GPS.
So far we have used dynamic programming to solve the optimization problem, and I will discuss methods to reduce the complexity of the algorithm. I will also introduce a fuel equivalent and show how it can be used to approximate the residual cost at the end of the interval that is considered in the optimization. The fuel equivalent will be explained using a physical interpretation and in terms of Lagrange multipliers.
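A toy sketch of the dynamic programming step (the fuel model, switching cost and residual cost below are hypothetical, chosen only to illustrate the backward recursion and the role of a terminal "fuel equivalent"):

```python
# Toy road: gradient per 1 km segment, and admissible speeds (km/h).
grades = [0.00, 0.02, -0.01, 0.03, 0.00, -0.02]
speeds = [70, 80, 90]

def segment_fuel(v, grade):
    # Hypothetical fuel model: constant + aerodynamic + uphill terms.
    return 0.2 + 1e-5 * v ** 2 + 2.0 * max(grade, 0.0)

def switch_cost(v0, v1):
    # Penalise speed changes (kinetic-energy change).
    return 1e-4 * abs(v1 ** 2 - v0 ** 2)

def residual_cost(v):
    # Terminal "fuel equivalent" of kinetic energy left at the horizon end.
    return -5e-5 * v ** 2

# Backward dynamic programming over (segment, speed).
J = {v: residual_cost(v) for v in speeds}
policy = []
for g in reversed(grades):
    Jnew, step = {}, {}
    for v0 in speeds:
        Jnew[v0], step[v0] = min(
            (segment_fuel(v1, g) + switch_cost(v0, v1) + J[v1], v1)
            for v1 in speeds)
    J, policy = Jnew, [step] + policy

# Roll the optimal speed profile forward from an initial speed of 80 km/h.
v, profile = 80, []
for step in policy:
    v = step[v]
    profile.append(v)
```

The residual cost plays the role of the fuel equivalent mentioned above: it assigns a fuel value to the kinetic energy remaining at the end of the optimization interval, so the truncated horizon does not bias the solution toward coasting to a stop.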
Loop detection and extended target tracking using laser data
Time: 10:15-12:00. Place: Visionen.
In the past two decades, robotics and autonomous vehicles have received ever increasing research attention. To function fully autonomously alongside humans, a robot must be able to solve the same tasks as humans do, and it must be able to sense the surrounding environment. Two such tasks are addressed in this thesis, using data from laser range sensors.
The first task is recognising that the robot has returned to a previously visited location, a problem called loop closure detection. Loop closure detection is a fundamental part of the simultaneous localisation and mapping problem, which consists of mapping an unknown area while simultaneously localising within that map. In this thesis, a classification approach is taken to the loop closure detection problem. The laser range data is described in terms of geometrical and statistical properties, called features. Pairs of laser range data from two different locations are compared using adaptive boosting to construct a classifier that takes the computed features as input. Experiments using real-world laser data are used to evaluate the properties of the classifier, and the classifier is shown to compare well to existing solutions.
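The boosting step can be sketched as follows (a generic AdaBoost with decision stumps on made-up two-dimensional feature pairs, not the thesis' features or data):

```python
import math

# Hypothetical features for scan pairs: label +1 = loop closure, -1 = not.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.85, 0.4), 1), ((0.3, 0.9), 1),
        ((0.2, 0.1), -1), ((0.1, 0.3), -1), ((0.4, 0.2), -1), ((0.15, 0.25), -1)]

def stump(j, thr, s):
    """Weak classifier: sign s if feature j exceeds threshold thr."""
    return lambda x: s * (1 if x[j] > thr else -1)

candidates = [stump(j, t, s)
              for j in (0, 1) for t in (0.25, 0.5, 0.75) for s in (1, -1)]

# AdaBoost: repeatedly pick the weak classifier with the lowest weighted
# error, then up-weight the examples it got wrong.
w = [1.0 / len(data)] * len(data)
ensemble = []
for _ in range(5):
    errs = [sum(wi for (x, y), wi in zip(data, w) if h(x) != y)
            for h in candidates]
    e, h = min(zip(errs, candidates), key=lambda p: p[0])
    e = max(e, 1e-10)
    alpha = 0.5 * math.log((1 - e) / e)
    ensemble.append((alpha, h))
    w = [wi * math.exp(-alpha * y * h(x)) for (x, y), wi in zip(data, w)]
    tot = sum(w)
    w = [wi / tot for wi in w]

def classify(x):
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1
```

No single stump separates these toy examples with the candidate thresholds, but the weighted vote of a few stumps does, which is the point of boosting weak feature-based classifiers.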
The second task is keeping track of objects that surround the robot, a problem called target tracking. Target tracking is an estimation problem in which data association between the estimates and the measurements is of high importance. The data association is complicated by noise and false measurements. In this thesis, extended targets, i.e. targets that potentially generate more than one measurement per time step, are considered. The multiple measurements per time step further complicate the data association. Tracking of extended targets is performed using an implementation of a probability hypothesis density filter, which is evaluated in simulations using the optimal subpattern assignment metric. The filter is also used to track humans with real-world laser range data, and the experiments show that the filter can handle the so-called occlusion problem.
Topics in Scientific Computing
Time: 13:00-14:00. Place: Glashuset.
This talk will address three topics in scientific computing:
1. New focus in scientific computing, background and future ambitions.
2. What is scientific computing?
3. Weak Boundary and Interface Conditions with Multi-Physics Applications
By reusing the main ideas behind the recent development of stable high-order finite difference methods (summation-by-parts operators, weak boundary conditions, the energy method), new coupling procedures have been developed. We will present the theory by analysing simple examples and apply it to very complex multi-physics problems.
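As a minimal illustration of a summation-by-parts operator (the standard second-order example, not the coupling procedures of the talk):

```python
# Standard second-order SBP first-derivative approximation D = inv(H)*Q:
# central differences in the interior, one-sided stencils at the boundaries,
# with the SBP norm H = h*diag(1/2, 1, ..., 1, 1/2).
n, h = 6, 0.1

def D_apply(u):
    du = [0.0] * n
    du[0] = (u[1] - u[0]) / h                     # boundary row
    du[n - 1] = (u[n - 1] - u[n - 2]) / h         # boundary row
    for i in range(1, n - 1):
        du[i] = (u[i + 1] - u[i - 1]) / (2 * h)   # interior central stencil
    return du

# SBP operators of this order differentiate linear functions exactly:
u = [2 * (i * h) + 1 for i in range(n)]           # u(x) = 2x + 1
du = D_apply(u)                                   # approximately 2 everywhere
```

The SBP structure mimics integration by parts discretely, which is what makes the energy method applicable when boundary and interface conditions are imposed weakly.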
Control of heat and power production at Södra Cell Mörrum
Time: 13:15-14:00. Place: Glashuset.
At Södra Cell Mörrum Mill, kraft pulp is produced while heat and power are obtained as by-products. This presentation will address different control aspects of the energy-producing systems of the mill.
The energy production emanates from a recovery boiler or from combustion of bark or oil. High-pressure steam is produced in the boilers, and heat is used throughout the pulp-making process as low-pressure steam or hot water. Heat is also sold as district heating. Hence, the first control issue was to maintain steam production and distribution to match the demand for steam and hot water in the process. The second issue was to stabilize the district heat production.
The most energy efficient way to reduce pressure of steam is through steam turbines. Therefore, the third control issue at Mörrum Mill was to optimize the overall energy production.
Production of power through turbines also gives the opportunity to become self-sufficient in power, and thereby a theoretical capability for island operation. The pulp mill in Mörrum has been successfully tested for island operation, both in simulations and in live tests.
Finally, as new control strategies cannot be considered as improvements unless they are used, operator training by use of simulators was conducted to ensure that all control loops were kept in automatic mode.
On Optimal Input Design in System Identification for Control
Time: 13:15-14:00. Place: Glashuset.
The quality of an estimated model should be related to the specifications of the intended application. A classical approach is to study the "size" of the asymptotic covariance matrix (the inverse of the Fisher information matrix) of the corresponding parameter vector estimate. In many cases it is possible to design and implement external excitation signals, e.g. pilot signals in communications systems or input signals in control applications. The objective of this seminar is to present some recent advances in optimal experiment design for system identification with a certain application in mind. The idea is to minimize experimental costs (e.g. the energy of the excitation signal), while guaranteeing that the estimated model satisfies the specifications of the application with a given probability. This will result in a convex optimization problem, where the optimal solution should reveal system properties important for the application while hiding irrelevant dynamics. Simple Finite Impulse Response (FIR) examples will be used to illustrate the basic ideas.
This seminar is based on joint work with Mariette Annergren, Håkan Hjalmarsson, Christian Larsson and Cristian Rojas, KTH.
Receding horizon control via numerical algebraic geometry
Time: 13:15-14:00. Place: Glashuset.
The field of numerical algebraic geometry provides techniques for approximating all complex solutions of a polynomial system. These techniques are particularly efficient in the context of a parameterized family of systems for which the solutions are needed at many parameter values. Receding horizon control problems may be converted into polynomial systems via the KKT conditions. As time progresses, the coefficients of these KKT polynomial systems vary, but the monomial structure stays the same, meaning that these KKT systems form a parameterized family of polynomial systems. With Philipp Rostalski, Ioannis Fotiou, Andrea Beccuti, and Manfred Morari, we applied the methods of numerical algebraic geometry to these KKT systems in various contexts. In this talk, I will describe the basic ideas of numerical algebraic geometry and go through our algorithm for approaching optimal control problems. I will not assume familiarity with algebraic geometry (numerical or otherwise) or receding horizon control.
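The core idea of tracking solutions while coefficients vary can be sketched on a toy univariate family (not the actual KKT systems of the talk):

```python
# Toy parameter homotopy: the family x**2 - p = 0 has fixed monomial
# structure while the coefficient p varies. Track both solutions from a
# known start system (p = 1, x = +/-1) to the target p = 4.
def track(x, p_path, newton_steps=5):
    for p in p_path:
        for _ in range(newton_steps):          # Newton corrector at each p
            x -= (x * x - p) / (2 * x)
    return x

path = [1.0 + 0.1 * k for k in range(1, 31)]   # p: 1.0 -> 4.0 in small steps
roots = [track(x0, path) for x0 in (1.0, -1.0)]
# roots approximates the two solutions of x**2 = 4, namely +2 and -2
```

In the receding horizon setting, the previously solved KKT system plays the role of the start system at each sampling instant, which is what makes the parameterized-family machinery efficient.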
'Dynamic Vision and Nonlinear Observers' and 'System Integration at ISY'
Time: 13:15-14:00. Place: Glashuset.
This talk consists of two parts:
I. Dynamic Vision and Nonlinear Observers
II. System Integration at ISY
Estimation of structure and motion in computer vision systems can be performed using a dynamic systems approach, where states and parameters in a perspective dynamic system are estimated. We present distance parameterizations, which transform a perspective dynamic system, resulting in a transformed system for which structure and motion estimation problems can be formulated and solved in a common framework, using available methods from nonlinear and adaptive control.
System integration at ISY is the topic of my position at LiU. In this position, I will focus on aspects of system engineering involving hardware and software boundaries, with applications towards signal processing and control. In the seminar, a short background and motivation will be given, based on my earlier experiences from work in industry, and a summary of initial activities, interactions, and plans will be presented.
Continuous-time parameter estimation under stochastic perturbations using LSM
Time: 13:15-14:00. Place: Glashuset.
The problem of continuous-time parameter estimation under white and colored noise is considered. Two methods are proposed to solve this problem. In the first, the least squares method (LSM) with forgetting factor is applied. The second method uses the equivalent control technique; the main goal of this technique is to obtain a sliding-mode-type observer and to use the information that this observer provides in order to apply the LSM. The objective is to analyze the robustness of the LSM in a continuous-time system, where we avoid rewriting the system in a discrete form and, for the colored noise case, filtering or whitening the noise.
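For orientation, the least squares method with forgetting factor can be sketched in its familiar discrete-time scalar form (hypothetical data; the talk treats the continuous-time problem):

```python
import random

random.seed(5)
theta_true, lam = 3.0, 0.95     # true parameter and forgetting factor

# Recursive least squares for y(t) = theta*u(t) + noise.
theta, P = 0.0, 100.0           # initial estimate and gain ("covariance")
for _ in range(500):
    u = random.uniform(-1.0, 1.0)               # excitation
    y = theta_true * u + random.gauss(0, 0.1)   # noisy measurement
    K = P * u / (lam + u * P * u)               # gain
    theta += K * (y - theta * u)                # estimate update
    P = (P - K * u * P) / lam                   # covariance update with forgetting
```

The forgetting factor lam < 1 keeps the gain from vanishing, so the estimator can track slowly varying parameters at the cost of a nonzero steady-state variance.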
Identification of Hammerstein-Wiener Models
Time: 13:15-14:00. Place: Glashuset.
Hammerstein-Wiener models are an example of block-oriented systems that combine static nonlinear blocks together with linear dynamic blocks in order to model the input-output behaviour of a system. These models have proven to be very useful in modelling many real-world situations and have attracted significant research attention. At the same time, most of the available methods for estimation make restrictive assumptions about the nonlinearities and/or the noise. In this talk we will look at some recent developments in identifying Hammerstein-Wiener models based on a maximum-likelihood algorithm where the assumptions on the system are quite general. A by-product of developing this new algorithm is that we can also handle so-called blind estimation of Wiener models, where the estimation is performed using output measurements only.
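For orientation, the block structure itself can be sketched as follows (a hypothetical cubic input nonlinearity, first-order linear block, and saturating output nonlinearity, chosen only to illustrate the cascade):

```python
# Hammerstein-Wiener structure: u -> f(.) -> linear dynamics -> g(.) -> y
def f(u):
    return u ** 3              # static input nonlinearity (hypothetical)

def g(x):
    return max(x, 0.0)         # static output nonlinearity (hypothetical)

a, b = 0.8, 1.0                # linear block: x(t+1) = a*x(t) + b*f(u(t))

def hw_sim(us):
    x, ys = 0.0, []
    for u in us:
        x = a * x + b * f(u)
        ys.append(g(x))
    return ys
```

The identification difficulty is that only u and y are observed; the intermediate linear-block signal x is hidden behind both static maps, which is why restrictive assumptions on the nonlinearities or the noise have been common.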
Probability & Statistics in Mathematica
Time: 13:15-14:00. Place: Glashuset.
This talk describes a new way to work with probability and statistical models in Mathematica, where a 'distribution' describes the full model.
It provides a rich modeling framework (think of a distribution as a model structure), including:
* Parametric distributions (from different domains such as finance, insurance, communication, reliability, ...).
* Nonparametric distributions (e.g. empirical, histogram, non-parametric kernel, survival, ...).
* Formula distributions, where you give a distribution function (e.g. PDF, CDF, SF, HF).
* Derived distributions, which take other distributions as arguments (e.g. transformations, mixtures, truncations, ...).
For each of these distributions there are some 35 properties that can be directly computed such as: moments, quantiles, distribution functions, generating functions, simulation, estimation, hypothesis tests, ...
Since the distributions act as model structures, this is also used to provide for automation throughout the probability and statistics subsystem, including:
* Automatically estimate parametric (and derived, and formula etc) distributions from data.
* Automatically test goodness-of-fit hypothesis between data and distributions.
* Automatically visualize goodness-of-fit between data and distributions.
* Automatically compute the probability of any event or expectation of any expression.
Throughout, approximate, exact and symbolic computations can be used. About 15k examples are provided in the documentation for these areas.
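The "distribution as model object" idea has a small stdlib analogue in Python (shown here only as an analogy; the talk concerns Mathematica's far richer framework):

```python
import random
import statistics

# A fitted distribution is an object from which properties are computed
# directly, loosely analogous to a Mathematica distribution object.
random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(5000)]

fitted = statistics.NormalDist.from_samples(data)   # "automated" estimation

mean, sd = fitted.mean, fitted.stdev
p = fitted.cdf(12.0) - fitted.cdf(8.0)   # P(8 < X < 12) under the fitted model
q90 = fitted.inv_cdf(0.9)                # 90% quantile
```

Here only the Gaussian case with a handful of properties is available; the point of the framework in the talk is that the same handful of operations applies uniformly to parametric, nonparametric, formula and derived distributions.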
Sound Field Control - A Polynomial Based Approach
Time: 13:15-14:00. Place: Glashuset.
In this talk I will present an outline of my PhD thesis on audio equalization and sound field control. A general aim of the thesis has been to develop a broad and practically applicable signal processing framework, covering various aspects of room acoustic modeling, measurements and filter design. In this development, two specific problems have been of central importance. The first problem concerns the design of a scalar prefilter that compensates for linear distortions in the signal path between a single loudspeaker and a region in space. In the second problem, an arbitrary spatio-temporal sound field, defined over a spatial region, is approximated by the joint use of multiple loudspeakers. Mathematically, the work is based on the polynomial approach to multivariable filtering and control. However, the acoustic setting has some unique characteristics that differ from those of a “traditional” control problem. For example, the quantity that is controlled (i.e., the sound pressure in a reverberant room) is a continuous and highly irregular function of space. Moreover, the complicated spatial behavior of the sound field will always be unknown to some degree, due to the assumed spatial sparsity of the available acoustic measurements. A considerable part of the thesis is therefore devoted to acoustic modeling, and in particular, the question of how to handle spatially sparse measurements and model uncertainties is extensively treated. Using the proposed acoustic models, the polynomial-based framework provides elegant filter design solutions as well as new and useful insights. Several results in the thesis have found industrial applications in high-end automotive audio systems, cinema audio processors and HiFi systems.
Topics in Localization and Mapping
Time: 10:15-12:00. Place: Visionen.
The need to determine one's position is common and emerges in many different situations. Tracking soldiers or a robot moving in a building, or aiding a tourist exploring a new city, all share the questions "where is the unit?" and "where is the unit going?". This is known as the localization problem.
In particular, the problem of determining one's position in a map while building the map at the same time, commonly known as the simultaneous localization and mapping (SLAM) problem, has been widely studied. It has been performed in cities using different land-bound vehicles, in rural environments using autonomous aerial vehicles, and underwater for coral reef exploration. This thesis studies how RADAR signals can be used both to position a naval surface vessel and to simultaneously construct a map of the surrounding archipelago. The experimental data used was collected using a high-speed naval patrol boat and covers roughly 32 km. A very accurate map was created using nothing but consecutive RADAR images.
A second contribution covers an entirely different problem, but one whose solution is very similar to the first. Underwater sensors sensitive to magnetic field disturbances can be used to track ships. In this thesis, the sensor positions themselves are considered unknown and are estimated by tracking a friendly surface vessel with a known magnetic signature. Since each sensor can track the vessel, the sensor positions can be determined by relating them to the vessel trajectory. Simulations show that if the vessel is equipped with a global navigation satellite system, the sensor positions can be determined accurately.
There is a desire to localize firefighters while they are searching through a burning building. Knowing where they are would make their work more efficient and significantly safer. In this thesis, a positioning system based on foot-mounted inertial measurement units has been studied. When such a sensor is foot-mounted, the available information increases dramatically, since the foot stances can be detected and incorporated into the position estimate. The focus in this work has therefore been on the problem of stand-still detection, and a probabilistic framework for this has been developed. The system has been extensively investigated to determine its applicability for different movements and boot types. All in all, the stand-still detection system works well, but problems emerge when a very rigid boot is used or when the subject is crawling.
The stand-still detection framework was then included in a positioning framework that uses the detected stand stills to introduce zero-velocity updates. The system was evaluated in localization experiments with very accurate ground truth. The experiments showed that the system provides good position estimates, but that the estimated heading can be wrong, especially after quick sharp turns.
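A minimal sketch of one common stand-still test (a window statistic on the accelerometer magnitude with hypothetical thresholds and synthetic data; the thesis develops a probabilistic framework rather than this simple rule):

```python
import math
import random

G = 9.81
random.seed(0)

# Synthetic accelerometer magnitude at 100 Hz: stance, swing, stance (1 s each).
acc = ([random.gauss(G, 0.05) for _ in range(100)] +
       [G + 3.0 * math.sin(0.1 * k) + random.gauss(0, 0.3) for k in range(100)] +
       [random.gauss(G, 0.05) for _ in range(100)])

def still(window, g=G, mean_tol=0.3, std_tol=0.15):
    """Flag stand-still if the window is close to gravity and has low variance."""
    m = sum(window) / len(window)
    s = math.sqrt(sum((a - m) ** 2 for a in window) / len(window))
    return abs(m - g) < mean_tol and s < std_tol

W = 20                        # 0.2 s detection window
flags = [still(acc[k:k + W]) for k in range(0, len(acc) - W, W)]
```

During the detected stance windows the foot velocity is known to be zero, which is exactly the pseudo-measurement that the zero-velocity update feeds to the navigation filter.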
Bayesian methods for system identification and variable selection
Time: 10:15-11:00. Place: Glashuset.
In this talk I shall review some recent work on “Bayesian” (linear) system identification and discuss extensions of this methodology for performing simultaneous identification and variable selection. The proposed algorithms show very good performance on simulated data and have also been successfully applied to real data for thermodynamic monitoring of buildings. In order to understand some of the key features of this Bayesian variable selection problem, we consider a simpler (group) linear regression problem and discuss the relation between group-Lasso (GLASSO), Multiple Kernel Learning (MKL) and our method, as well as with other Bayesian techniques developed in the statistics literature. Our approach is nonconvex, but one of its versions requires optimization with respect to only one scalar variable. Theoretical arguments, independent of the correctness of the priors entering the sparse model, are included to clarify the advantages of our nonconvex technique in comparison with the other two convex estimators (MKL and GLASSO).
Maximum Likelihood Estimation of Gaussian Models with Missing Data - Eight equivalent formulations and applications to system identification
Time: 13:15-14:00. Place: Glashuset.
In this talk we derive the maximum likelihood problem for missing data from a Gaussian model. We present in total eight different equivalent formulations of the resulting optimization problem, four of which are nonlinear least squares formulations. Among these are also formulations based on the expectation-maximization algorithm. Expressions for the derivatives needed to solve the optimization problem are presented. We also present numerical comparisons of two of the formulations for an ARMAX model.
We will discuss input models for missing input data and make comparisons with the case when the missing inputs are modeled as parameters. Finally, we will discuss the reason for bias in popular suboptimal schemes.
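As a minimal illustration of the EM idea for missing data from a Gaussian model (a bivariate case with the second component missing for some records and the covariance assumed known; hypothetical data, not the ARMAX setting of the talk):

```python
import random

random.seed(3)
rho, s1, s2 = 0.8, 1.0, 1.0     # known correlation and standard deviations
mu1, mu2 = 1.0, -2.0            # true means, to be estimated

# Bivariate Gaussian samples; the second component is missing (None) for
# every other record, missing at random.
data = []
for i in range(4000):
    x1 = random.gauss(0, 1)
    x2 = rho * x1 + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
    data.append((mu1 + s1 * x1, (mu2 + s2 * x2) if i % 2 == 0 else None))

# EM for the means, with the covariance assumed known.
m1 = sum(y1 for y1, _ in data) / len(data)   # y1 fully observed: plain mean
m2 = 0.0
for _ in range(50):
    # E-step: replace each missing y2 by its conditional mean given y1.
    filled = [y2 if y2 is not None else m2 + rho * s2 / s1 * (y1 - m1)
              for y1, y2 in data]
    # M-step: complete-data maximum likelihood for the mean.
    m2 = sum(filled) / len(filled)
```

The fixed point of this iteration is the regression-adjusted estimate of the missing-component mean, i.e. the observed y2 values corrected by what the fully observed y1 values reveal through the known correlation.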
Rao-Blackwellised particle methods for inference and identification
Time: 13:15-15:00. Place: Visionen.
We consider the two related problems of state inference in nonlinear systems and nonlinear system identification. More precisely, based on noisy observations from some dynamical system, in general nonlinear and/or non-Gaussian, we seek to estimate the system state as well as possible unknown static parameters of the system. We consider two different aspects of the state inference problem, filtering and smoothing, with the emphasis on the latter. To address the filtering and smoothing problems, we employ sequential Monte Carlo (SMC) methods, commonly referred to as particle filters (PF) and particle smoothers (PS).
Many nonlinear models encountered in practice contain some tractable substructure. If this is the case, a natural idea is to try to exploit this substructure to obtain more accurate estimates than what is provided by a standard particle method. For the filtering problem, this can be done by using the well-known Rao-Blackwellised particle filter (RBPF). In this thesis, we analyse the RBPF and provide explicit expressions for the variance reduction that is obtained from Rao-Blackwellisation. Furthermore, we address the smoothing problem and develop a novel Rao-Blackwellised particle smoother (RBPS), designed to exploit a certain tractable substructure in the model.
Based on the RBPF and the RBPS we propose two different methods for nonlinear system identification. The first is a recursive method referred to as the Rao-Blackwellised marginal particle filter (RBMPF). By augmenting the state variable with the unknown parameters, a nonlinear filter can be applied to address the parameter estimation problem. However, if the model under study has poor mixing properties, which is the case if the state variable contains some static parameter, SMC filters such as the PF and the RBPF are known to degenerate. To circumvent this we introduce a so-called "mixing" stage in the RBMPF, which makes it more suitable for models with poor mixing properties.
The second identification method is referred to as RBPS-EM and is designed for maximum likelihood parameter estimation in so-called mixed linear/nonlinear Gaussian state-space models. The method combines the expectation maximisation (EM) algorithm with the RBPS mentioned above, resulting in an identification method designed to exploit the tractable substructure present in the model.
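As background to the Rao-Blackwellised methods above, the plain bootstrap particle filter, the baseline that the RBPF improves on by marginalizing tractable substructure, fits in a few lines. The scalar linear-Gaussian toy model below is our illustration, not taken from the thesis:

```python
import numpy as np

def bootstrap_pf(y, f, h, q, r, N=500, seed=0):
    """Minimal bootstrap particle filter for the scalar model
    x[t+1] = f(x[t]) + v[t], y[t] = h(x[t]) + e[t],
    with v ~ N(0, q) and e ~ N(0, r). Returns filtered means."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, N)                  # particles from the prior
    means = []
    for yt in y:
        w = np.exp(-0.5 * (yt - h(x)) ** 2 / r)  # measurement likelihood
        w /= w.sum()
        means.append(np.sum(w * x))              # filtered mean estimate
        x = x[rng.choice(N, N, p=w)]             # multinomial resampling
        x = f(x) + rng.normal(0.0, np.sqrt(q), N)  # propagate (bootstrap proposal)
    return np.array(means)

# Toy linear-Gaussian system, where the PF approximates the Kalman filter
f = lambda x: 0.9 * x
h = lambda x: x
rng = np.random.default_rng(1)
T, q, r = 50, 0.1, 0.1
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    if t > 0:
        x_true[t] = f(x_true[t - 1]) + rng.normal(0.0, np.sqrt(q))
    y[t] = h(x_true[t]) + rng.normal(0.0, np.sqrt(r))
x_hat = bootstrap_pf(y, f, h, q, r)
```

Rao-Blackwellisation replaces the sampling of any conditionally linear-Gaussian substate with an exact Kalman filter per particle, which is the source of the variance reduction analysed in the thesis.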
Decomposition and Simultaneous Projection Methods for Convex Feasibility Problems
Time: 13:15-14:00. Place: Glashuset.
In this talk a specific class of convex feasibility problems is considered and tailored algorithms for solving this class of problems are introduced. First, the Nonlinear Cimmino Algorithm is reviewed. Then, motivated by the special structure of the problems at hand, a modification to this method is proposed. Next, another method, which solves the dual of the problem, is presented. This leads to update rules for the variables similar to those of the modified Nonlinear Cimmino Algorithm. Then an application of the proposed algorithms to the robust stability analysis of large-scale weakly interconnected systems is presented and the performance of the proposed methods is compared.
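For the classical linear case, Cimmino's simultaneous projection idea, which the Nonlinear Cimmino Algorithm generalizes, moves the iterate to the average of its projections onto all constraint sets at once. A small sketch for hyperplane constraints (the setup is our illustration, not from the talk):

```python
import numpy as np

def cimmino(A, b, x0, n_iter=500):
    """Cimmino's simultaneous projection method for the feasibility
    problem a_i^T x = b_i, i = 1..m: each iteration replaces x by the
    average of its projections onto all m hyperplanes."""
    x = x0.astype(float)
    for _ in range(n_iter):
        # projection onto hyperplane i: x - (a_i^T x - b_i) / ||a_i||^2 * a_i
        coef = (A @ x - b) / np.sum(A * A, axis=1)
        x = x - (coef[:, None] * A).mean(axis=0)
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])          # consistent system: x = [1, 2]
x_star = cimmino(A, b, np.zeros(2))
```

Because all projections are computed independently before being averaged, the method parallelizes naturally, which is what makes this family attractive for large-scale interconnected problems.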
On design of low order H-infinity controllers
Time: 10:15-13:00. Place: Visionen.
When designing controllers with robust performance and stabilization requirements, H-infinity synthesis is a common tool. These controllers are often obtained by solving mathematical optimization problems, and the controllers that result from these algorithms are typically of very high order, which complicates implementation. Low order controllers are usually desired, since they are considered more reliable than high order controllers. However, if the maximum order of the controller is constrained to be lower than the order of the so-called augmented system, the optimization problem becomes nonconvex and relatively difficult to solve. This is true even when the order of the augmented system is low.
In this thesis, optimization methods for solving these problems are considered. In contrast to other methods in the literature, the approach used in this thesis is based on formulating the constraint on the maximum order of the controller as a rational function in an equality constraint. Three methods are then suggested for solving this smooth nonconvex optimization problem.
The first two methods use the fact that the rational function is nonnegative. The problem is then reformulated as an optimization problem where the rational function is to be minimized over a convex set defined by linear matrix inequalities (LMIs). This problem is then solved using two different interior point methods.
In the third method the problem is solved by using a partially augmented Lagrangian formulation where the equality constraint is relaxed and incorporated into the objective function, but where the LMIs are kept as constraints. Again, the feasible set is convex and the objective function is nonconvex.
The proposed methods are evaluated and compared with two well-known methods from the literature. The results indicate that the first two suggested methods perform well, especially when the number of states in the augmented system is less than 10 and 20, respectively. The third method has performance comparable to the two methods from the literature when the number of states in the augmented system is less than 25.
Numerical Methods for Embedded Optimization and Optimal Control, day 1
Time: 10:15-15:00. Place: Visionen.
This two-day lecture course will give a personal overview of numerical methods for optimal control of nonlinear dynamic systems and present some recent advances in the field, with a particular view on model predictive control and embedded optimization. It also introduces some recently developed software packages: qpOASES, ACADO, and CasADi. Joint work with Boris Houska, Hans Joachim Ferreau, Joel Andersson, and Jan Albersmeyer. For more info please visit.
Visual Inertial Navigation and Calibration
Time: 14:30-17:00. Place: Visionen.
Processing and interpretation of visual content is essential to many systems and applications. This requires knowledge of how the content is sensed and also what is sensed. Such knowledge is captured in models which, depending on the application, can be very advanced or simple. An application example is scene reconstruction using a camera; if a suitable model of the camera is known, then a model of the scene can be estimated from images acquired at different, unknown locations; however, the quality of the scene model depends on the quality of the camera model. The opposite problem is to estimate the camera model and the unknown locations using a known scene model. In this work, two such problems are treated in two rather different applications.
There is an increasing need for navigation solutions less dependent on external navigation systems such as the Global Positioning System (GPS). Simultaneous Localisation and Mapping (SLAM) provides a solution to this by estimating both navigation states and some properties of the environment without considering any external navigation systems.
The first problem considers visual inertial navigation and mapping using a monocular camera and inertial measurements which is a SLAM problem. Our aim is to provide improved estimates of the navigation states and a landmark map, given a SLAM solution. To do this, the measurements are fused in an Extended Kalman Filter (EKF) and then the filtered estimates are used as a starting solution in a nonlinear least-squares problem which is solved using the Gauss-Newton method. This approach is evaluated on experimental data with accurate ground truth for reference.
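The filter-then-refine idea, using a filtered estimate as the starting point for a Gauss-Newton solve of a nonlinear least-squares problem, can be sketched generically. The trilateration toy problem below is our illustration, not the thesis application:

```python
import numpy as np

def gauss_newton(resid, jac, x0, n_iter=20):
    """Gauss-Newton for min_x ||resid(x)||^2: at each step the
    linearized least-squares problem jac(x) dx = -resid(x) is solved."""
    x = x0.astype(float)
    for _ in range(n_iter):
        dx = np.linalg.lstsq(jac(x), -resid(x), rcond=None)[0]
        x = x + dx
    return x

# Toy trilateration: recover a position from exact ranges to 3 beacons,
# starting from a rough initial guess (the role the filtered estimate plays)
p = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])     # beacon positions
d = np.linalg.norm(p - np.array([1.0, 1.0]), axis=1)   # ranges from [1, 1]
resid = lambda x: np.linalg.norm(p - x, axis=1) - d
jac = lambda x: (x - p) / np.linalg.norm(p - x, axis=1)[:, None]
x_hat = gauss_newton(resid, jac, np.array([2.0, 2.0]))
```

A good initial point matters because Gauss-Newton only converges locally; this is precisely why the EKF output is a useful starting solution for the batch problem.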
In Augmented Reality (AR), additional information is superimposed onto the surrounding environment in real time to reinforce our impressions. For this to be a pleasant experience it is necessary to have good models of the AR system and the environment.
The second problem considers calibration of an Optical See-Through Head Mounted Display system (OSTHMD), which is a wearable AR system. We show and motivate how the pinhole camera model can be used to represent the OSTHMD and the user's eye position. The pinhole camera model is estimated using the Direct Linear Transformation algorithm. Results are evaluated in experiments which also compare different data acquisition methods.
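The Direct Linear Transformation mentioned above estimates the 3x4 pinhole camera matrix, up to scale, from 3D-2D correspondences via a singular value decomposition. A minimal sketch with synthetic data (all numbers are illustrative, not from the thesis):

```python
import numpy as np

def dlt_camera(X, x):
    """Direct Linear Transformation: estimate a 3x4 camera matrix P
    (up to scale) from n >= 6 correspondences between 3D points X
    (n x 3) and image points x (n x 2)."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])   # homogeneous 3D points
    rows = []
    for (u, v), Xi in zip(x, Xh):
        rows.append(np.concatenate([np.zeros(4), -Xi, v * Xi]))
        rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
    # P is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points with camera P and dehomogenize."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    y = Xh @ P.T
    return y[:, :2] / y[:, 2:]

# Synthetic check: project known points with a known camera, then recover P
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
rng = np.random.default_rng(2)
X3d = rng.uniform(-1.0, 1.0, (10, 3))    # points in front of the camera
x2d = project(P_true, X3d)
P_hat = dlt_camera(X3d, x2d)
```

In the OSTHMD setting the "image points" come from the user aligning screen markers with world points, but the estimation step is this same linear algorithm.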
Sensor Fusion and Calibration of Inertial Sensors, Vision, Ultra-Wideband and GPS
Time: 10:15-13:00. Place: Visionen.
The usage of inertial sensors has traditionally been confined primarily to the aviation and marine industry due to their associated cost and bulkiness. During the last decade, however, inertial sensors have undergone a rather dramatic reduction in both size and cost with the introduction of MEMS technology. As a result of this trend, inertial sensors have become commonplace for many applications and can even be found in many consumer products, for instance smart phones, cameras and game consoles. Due to the drift inherent in inertial technology, inertial sensors are typically used in combination with aiding sensors to stabilize and improve the estimates. The need for aiding sensors becomes even more apparent due to the reduced accuracy of MEMS inertial sensors.
This thesis discusses two problems related to using inertial sensors in combination with aiding sensors. The first is the problem of sensor fusion: how to combine the information obtained from the different sensors and obtain a good estimate of position and orientation. The second problem, a prerequisite for sensor fusion, is that of calibration: the sensors themselves have to be calibrated and provide measurements in known units. Furthermore, whenever multiple sensors are combined, additional calibration issues arise, since the measurements are seldom acquired in the same physical location and expressed in a common coordinate frame. Sensor fusion and calibration are discussed for the combination of inertial sensors with cameras, UWB or GPS.
Two setups for estimating position and orientation in real-time are presented in this thesis. The first uses inertial sensors in combination with a camera; the second combines inertial sensors with UWB. Tightly coupled sensor fusion algorithms and experiments with performance evaluation are provided. Furthermore, this thesis contains ideas on using an optimization based sensor fusion method for a multi-segment inertial tracking system used for human motion capture as well as a sensor fusion method for combining inertial sensors with a dual GPS receiver.
The above sensor fusion applications give rise to a number of calibration problems. Novel and easy-to-use calibration algorithms have been developed and tested to determine the following parameters: the magnetic field distortion when an IMU containing magnetometers is mounted close to a ferro-magnetic object, the relative position and orientation of a rigidly connected camera and IMU, as well as the clock parameters and receiver positions of an indoor UWB positioning system.
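One of the listed calibration problems, magnetometer distortion near a ferro-magnetic object, reduces in its simplest (hard-iron) form to fitting a sphere to the measurements. A linear least-squares sketch (our illustration, not the thesis algorithm):

```python
import numpy as np

def hard_iron_offset(m):
    """Least-squares hard-iron calibration: fit a sphere ||m_i - c|| = R
    to magnetometer samples m (n x 3) via the linearized system
    2 c^T m_i + (R^2 - ||c||^2) = ||m_i||^2, and return the offset c."""
    A = np.hstack([2.0 * m, np.ones((m.shape[0], 1))])
    b = np.sum(m * m, axis=1)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:3]                  # theta[3] equals R^2 - ||c||^2

# Synthetic samples on a sphere of radius 0.5 around the true offset
rng = np.random.default_rng(3)
u = rng.normal(size=(50, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
c_true = np.array([0.2, -0.1, 0.3])
m = c_true + 0.5 * u
c_hat = hard_iron_offset(m)
```

Soft-iron effects additionally deform the sphere into an ellipsoid, which requires fitting a full quadric rather than this four-parameter model.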