
Antonio Paolillo

Senior Researcher

IDSIA Dalle Molle Institute for Artificial Intelligence, USI-SUPSI
Lugano, Switzerland

antonio [dot] paolillo [at] idsia.ch / supsi.ch

Google Scholar | ORCID | LinkedIn | CV



Thesis Proposals

Neural Perception for Visual Servoing

[Figure: VS + NN block diagram]

Standard image processing techniques implement feature extraction to provide the Visual Servoing (VS) scheme with proper feedback. Examples of visual features are points, lines, and image moments, which are visible in the image captured by camera sensors. However, in order to be more adaptive and cope with unstructured scenes, recent trends in the VS community investigate direct approaches that treat the entire image as feedback.
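
As a concrete reference point, the sketch below shows the classical image-based VS law for point features (a textbook construction, not code from the cited work): the camera velocity is computed as v = -lambda * L^+ (s - s*), where L is the interaction matrix of the point features and s* the desired feature vector.

    import numpy as np

    # Classical image-based VS law for N point features. s and s_star stack the
    # normalized image coordinates (x1, y1, ..., xN, yN); Z holds the estimated
    # depths of the points.
    def interaction_matrix(s, Z):
        L = []
        for i, z in enumerate(Z):
            x, y = s[2 * i], s[2 * i + 1]
            L.append([-1 / z, 0, x / z, x * y, -(1 + x ** 2), y])
            L.append([0, -1 / z, y / z, 1 + y ** 2, -x * y, -x])
        return np.array(L)

    def vs_control(s, s_star, Z, lam=0.5):
        """Camera velocity command v = -lam * L^+ (s - s_star)."""
        e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
        return -lam * np.linalg.pinv(interaction_matrix(s, Z)) @ e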

In previous work [1], we used an artificial Neural Network (NN), trained using information from the control structure, to compute visual features from raw images (see Fig. above). In this thesis proposal we will build on these ideas to (i) realize complex VS experiments, tracking objects with complicated shapes and appearance, and (ii) extend the previous architecture towards a more efficient machine learning scheme.
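
As an illustration of the idea, a minimal PyTorch sketch of a neural feature extractor is given below: a small CNN regresses a set of point features from a raw image, which can then feed a VS law like the one above. The architecture and the plain regression head are placeholders chosen for brevity, not the geometrically interpretable, self-supervised scheme of [1].

    import torch
    import torch.nn as nn

    class FeatureNet(nn.Module):
        """Toy CNN regressing 2 * num_points feature coordinates from an image."""
        def __init__(self, num_points=4):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, 2 * num_points)  # (x, y) for each point

        def forward(self, img):
            return self.head(self.backbone(img))

    # Example: the predicted features replace the hand-crafted extraction step,
    # e.g. s_hat = FeatureNet()(torch.zeros(1, 3, 240, 320)) has shape (1, 8).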

Keywords

Neural network, self-supervised learning, visual servoing, perception for control.

Requirements

  • Python programming skills
  • Knowledge of ROS (or ROS2)
  • Familiarity with PyTorch

References

[1] A. Paolillo, M. Nava, D. Piga, and A. Giusti, “Visual Servoing with Geometrically Interpretable Neural Perception,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2022, pp. 5300–5306.

Imitation Learning for Visual Servoing with Aerial Drones

[Figure: camera and trajectory]


Impressive results have demonstrated the great abilities of aerial drones in performing agile, aggressive, and fast maneuvers. This is made possible by fast and predictive control paradigms. The combination of these schemes with visual perception allows these robots to cope with dynamic environments. However, such complex frameworks often clash with the computational resources available on the robots. Furthermore, planning or predictive schemes are difficult to implement and require fine-tuning. To overcome these issues, one could ‘simply’ imitate complex trajectories, previously demonstrated and saved in datasets, rather than explicitly implementing them.

In previous work [1], [2], we have shown how Dynamical System (DS)-based Imitation Learning (IL) can be applied to the Visual Servoing (VS) case. Combining DS and VS is beneficial for both techniques: on the one hand, it allows exteroception to be included in the DS, thus adding adaptability; on the other, it avoids the explicit programming of additional tasks into the VS. In this thesis, we want to investigate this kind of approach on aerial nano-drones. A successful application of DS-based VS on nano-drones would enable high-performing and adaptive maneuvers without the need to implement complex and computationally expensive trajectory generators.
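
To fix ideas, the sketch below learns a dynamical system s_dot = f(s) from a single demonstrated feature trajectory using Gaussian Process Regression (one of the inference methods listed in the requirements) and integrates it to generate feature references for the VS law. It is a bare-bones illustration under these assumptions and does not include the stability guarantees of the DS formulations used in [1], [2].

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def fit_ds(demo_s, dt):
        """Learn s_dot = f(s) from an (N, d) demonstrated feature trajectory."""
        s = demo_s[:-1]
        s_dot = (demo_s[1:] - demo_s[:-1]) / dt  # finite-difference velocities
        return GaussianProcessRegressor().fit(s, s_dot)

    def rollout(ds, s0, dt, steps):
        """Integrate the learned DS to reproduce a feature reference trajectory."""
        s = np.asarray(s0, dtype=float)
        traj = [s.copy()]
        for _ in range(steps):
            s = s + dt * ds.predict(s.reshape(1, -1))[0]
            traj.append(s.copy())
        return np.array(traj)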

Keywords

Machine learning, visual servoing, imitation learning, dynamical systems.

Requirements

  • Python programming skills
  • Knowledge of ROS (or ROS2)
  • Familiarity with machine learning inference methods, such as GPR or GMR

References

[1] A. Paolillo and M. Saveriano, “Learning Stable Dynamical Systems for Visual Servoing,” in IEEE Int. Conf. on Robotics and Automation, 2022, pp. 8636–8642.
[2] A. Paolillo, P. Robuffo Giordano, and M. Saveriano, “Dynamical System-based Imitation Learning for Visual Servoing using the Large Projection Formulation,” in IEEE Int. Conf. on Robotics and Automation, 2023, to appear.

Respecting Constraints in Fast Predictive VS Schemes

[Figure: VS + RG block diagram]


The generation of efficient robotic motion is the object of lively research activity. The benefits introduced by predictive controllers, such as Model Predictive Control (MPC), have to be carefully balanced with the fast execution requirements of robot controllers. In previous work [1], which focused on visual servoing, we applied the Reference Governor (RG) technique to embed the anticipatory behavior in the reference signal of the reactive controller. In practice, the original controller remains untouched, and an RG block, implemented as an MPC, is placed upstream of the original control loop (see Fig. above). In this way, the MPC only computes the references for the original controller and can run at a relatively low frequency, while the fast dynamics are delegated to the reactive controller.
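
The sketch below illustrates the two-rate idea with an assumed model of the inner loop: the reactive VS controller is approximated as first-order dynamics that drive the features towards the applied reference r. This simplified model stands in for the closed-loop VS model of [1] and is only meant to show what the upstream block predicts.

    import numpy as np

    def predict_features(s, r, lam, dt, horizon):
        """Roll out the (assumed) closed-loop VS model s_dot = -lam * (s - r)
        with a constant applied reference r."""
        traj = [np.asarray(s, dtype=float)]
        for _ in range(horizon):
            traj.append(traj[-1] + dt * (-lam * (traj[-1] - r)))
        return np.array(traj)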

The RG uses a model of the closed-loop VS behavior to make predictions and compute proper references. Furthermore, if the prediction is accurate, it can compute novel references for the VS such that the constraints are respected by the VS law. However, in our previous work this aspect has not been deeply investigated, and this thesis proposes to analyse the handling of constraints in the framework sketched in the figure, in both simulation and experimental environments.
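
Building on the prediction model sketched above, a hypothetical constraint-aware reference selection could look as follows. The constraints are kept deliberately simple (box bounds on the features, e.g. image-boundary visibility), and a scalar line search toward the desired reference replaces the MPC formulation of [1]; the snippet only illustrates the kind of constraint handling the thesis will analyse.

    import numpy as np

    def reference_governor(s, r_desired, r_prev, lam, dt, horizon, s_min, s_max):
        """Apply the largest step from r_prev toward r_desired whose predicted
        closed-loop feature trajectory stays within [s_min, s_max]."""
        for kappa in np.linspace(1.0, 0.0, 21):  # try the largest step first
            r = r_prev + kappa * (r_desired - r_prev)
            traj = predict_features(s, r, lam, dt, horizon)
            if np.all(traj >= s_min) and np.all(traj <= s_max):
                return r
        return r_prev  # fall back to the last admissible reference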

Keywords

Visual servoing, model predictive control, reference governor, optimization.

Requirements

  • Python programming skills
  • Knowledge of ROS (or ROS2)
  • Hands-on experience with simulated environments and robotic platforms

References

[1] A. Paolillo, M. Forgione, D. Piga, and E. Mingo Hoffman, “Fast Predictive Visual Servoing: a Reference Governor-based Approach,” Control Engineering Practice, 2023.
