Kun Li
Ph.D. candidate
Scene Understanding Group, ITC, University of Twente
Location: Enschede, Netherlands
Email: k.li@utwente.nl || Google Scholar || ResearchGate || ORCID

About Me

I am a Ph.D. candidate in the EOS Department of the ITC Faculty, University of Twente, supervised by Prof. George Vosselman and Prof. Michael Ying Yang (University of Bath, UK). My research interests lie in image segmentation and visual question answering with deep learning-based techniques.


Feel free to contact me if you are interested in similar topics.

News

  • 2024.06: One paper on a VQA benchmark for high-resolution aerial images accepted by ISPRS-J.
  • 2023.10: One paper on interactive image segmentation accepted by ICCVW 2023 in Paris, France.
  • 2023.07: Attended the 13th Lisbon Machine Learning Summer School, organized by Instituto Superior Técnico in Lisbon, Portugal.
  • 2022.03: Passed the public Ph.D. qualifier; committee: Prof. George Vosselman, Dr. Michael Ying Yang, Dr. Sylvain Lobry (Université de Paris, France).
  • 2021.09: Began my Ph.D. journey at the University of Twente in the Netherlands.
  • 2021.07: Awarded a four-year scholarship from the China Scholarship Council (CSC).

Education

  • 2021.09-Now: Ph.D. at the Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Netherlands.
  • 2018.09-2021.06: M.Sc. at the School of Remote Sensing and Information Engineering, Wuhan University, China.
  • 2014.09-2018.06: B.Sc. at the School of Remote Sensing and Information Engineering, Wuhan University, China.

Selected Publications

  • HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images. [PDF]
    Kun Li, George Vosselman, Michael Ying Yang.
    ISPRS Journal of Photogrammetry and Remote Sensing (ISPRS-J), 2024.

  • Transformer-based Multimodal Change Detection with Multitask Consistency Constraints. [PDF]
    Biyuan Liu, Huaixin Chen, Kun Li, Michael Ying Yang.
    Information Fusion (IF), 2024.

  • Interactive Image Segmentation with Cross-Modality Vision Transformers. [PDF]
    Kun Li, George Vosselman, Michael Ying Yang.
    Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops (ICCVW), 2023.

  • A Deep Interactive Framework for Building Extraction in Remotely Sensed Images Via a Coarse-to-Fine Strategy. [PDF]
    Kun Li, Xiangyun Hu.
    IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2021.

  • Attention-Guided Multi-Scale Segmentation Neural Network for Interactive Extraction of Region Objects from High-Resolution Satellite Imagery. [PDF]
    Kun Li, Xiangyun Hu, Huiwei Jiang, Zhen Shu, and Mi Zhang.
    Remote Sensing (RS), 2020.

  • PGA-SiamNet: Pyramid feature-based attention-guided Siamese network for remote sensing orthoimagery building change detection. [PDF]
    Huiwei Jiang, Xiangyun Hu, Kun Li, Jinming Zhang, Jinqi Gong, Mi Zhang.
    Remote Sensing (RS), 2020.

Preprints

  • Learning from Exemplars for Interactive Image Segmentation. [PDF]
    Kun Li, Hao Cheng, George Vosselman, Michael Ying Yang.
    arXiv, 2024 (Under review).

  • Convincing Rationales for Visual Question Answering Reasoning. [PDF]
    Kun Li, George Vosselman, Michael Ying Yang.
    arXiv, 2024 (Under review).

Presentations

  • Poster presentation at the ICCV 2023 Workshop on New Ideas in Vision Transformers. [Link]
    Paris, France, 2023.10.

  • Attendee spotlight at the 13th Lisbon Machine Learning Summer School (LxMLS 2023). [Link]
    Lisbon, Portugal, 2023.07.

  • Oral presentation at the Netherlands Centre for Geodesy and Geo-Informatics (NCG) Symposium. [Link]
    Enschede, Netherlands, 2023.07.

  • Ph.D. spotlight at the Meeting on Development and Sharing of Open Geodata. [Link]
    Enschede, Netherlands, 2023.01.

Professional Activities

  • Top Reviewer for NeurIPS (2023, 2024).
  • Reviewer for CVPR, ICCV, ECCV, ICLR, ICML, AAAI.
  • Reviewer for ISPRS-J, TGRS, GRSL.
  • IEEE/CVF student member.
  • Supervisor for Master's thesis (Akshay Chaprana, 2023).
  • Teaching Assistant for UT courses: 2D and 3D Scene Analysis (2021); Image Analysis (2021, 2022); AI for Autonomous Robots (2023).

Last updated on 18 March 2024
