MUHAMMAD HANIF received the M.Sc. degree in signal processing from the Tampere University of Technology, Tampere, Finland, and the Ph.D. degree in computer vision from the College of Engineering and Computer Science, The Australian National University, Canberra, ACT, Australia. He was associated with the Computer Vision and Robotics Research Group, National ICT Australia (NICTA), Data61. Dr. Hanif is the recipient of a European Research Consortium for Informatics and Mathematics (ERCIM) Fellowship for his postdoctoral research with the Istituto di Scienza e Tecnologie dell’Informazione ‘‘A. Faedo,’’ Italian National Research Council (CNR), Pisa, Italy. He is currently an Associate Professor with the Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Pakistan, and is actively associated as a research consultant with the Kobayashi Research Laboratory, Information Technology Center, The University of Tokyo, Japan. His main research interests include blind image deconvolution, sparse signal processing, computer vision, and AI applications in healthcare and precision agriculture. Dr. Hanif has received research project funding from the Pakistan Science Foundation (PSF), TUBITAK, the International Team for Implantology (ITI), the Higher Education Commission (HEC) of Pakistan, and IGNITE, among others.
Deep learning models, particularly convolutional neural networks (ConvNets), offer exceptional performance on image data but are computationally expensive, which poses challenges for deployment on embedded and resource-constrained systems. This cost arises mainly because such models typically require large volumes of data and extensive training time, with the primary objective during training being the extraction of meaningful and discriminative features from the input. To address this challenge, sparse data representation can be leveraged to exploit data redundancy and extract orthogonal features by identifying inherent regularities in the input. In this talk, we discuss a dictionary-based sparse training approach that exploits data redundancy to reduce computational overhead: by encoding the input data into a sparse representation, nonredundant and discriminative features are preserved while training complexity is reduced.
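To make the idea of encoding inputs as sparse combinations of dictionary atoms concrete, the sketch below shows one standard way to compute such a code: greedy orthogonal matching pursuit (OMP) against a fixed, random, unit-norm dictionary. This is a generic illustration of dictionary-based sparse coding, not the specific training method discussed in the talk; the dictionary size, sparsity level, and use of a random (rather than learned) dictionary are all illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate signal x using
    at most k atoms (columns) of the dictionary D."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the selected support by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

# Synthesize a signal that is exactly 5-sparse in D, then encode it.
true_support = rng.choice(256, size=5, replace=False)
x = D[:, true_support] @ rng.standard_normal(5)

z = omp(D, x, k=5)
print("nonzeros:", np.count_nonzero(z))           # at most 5
print("residual:", np.linalg.norm(x - D @ z))      # small vs. ||x||
```

The resulting code `z` is the sparse representation: of the 256 dictionary coefficients, at most 5 are nonzero, so downstream computation (e.g., feature extraction or training) operates on a compact, nonredundant encoding rather than the raw 64-dimensional signal.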