Lu Yin

l.yin@surrey.ac.uk ; l.yin@tue.nl


Greetings! I’m Lu, an Assistant Professor in the School of Computer Science and Electronic Engineering at the University of Surrey. I am honored to be a long-term visitor and collaborator with the Visual Informatics Group (VITA) at UT Austin, led by Prof. Atlas Wang. Additionally, I am a long-term visiting researcher at Eindhoven University of Technology (TU/e). I lead the Lightweight & Universal Machine Intelligence (LUMI) lab.

Previously, I served as a Postdoctoral Fellow at TU/e and worked as a research scientist intern at Google’s New York City office.

My research interests include:

  • #AI Efficiency
  • #AI for Science
  • #Large Language Models

Feel free to reach out if you’d like to discuss anything with me :)

News

May 2025
[ACL Findings] Our paper on PEFT, OWS, has been accepted to ACL Findings 2025.
[Journal: CEUS] Our paper on the urban space foundation model CaLLiPer has been accepted at Computers, Environment and Urban Systems.
[ICML 2025×2] Two papers accepted at ICML 2025: (1) low-rank weight training, WeLore; (2) low-rank weight fine-tuning, LIFT.
Mar 2025
[ICLR 2025 workshop] Our paper The Curse of Depth in LLMs has been accepted at the ICLR 2025 SCOPE workshop.
Feb 2025
[CPAL 2025] Our paper Q-Galore has been accepted at CPAL 2025.
[ICLR 2025×3] THREE papers accepted at ICLR 2025: (1) normalization for LLMs, Mix-LN; (2) enhancing LLM alignment with ternary preferences, TODO; (3) debiasing via spuriousness ranking, SEBRA.
Dec 2024
[ICASSP 2025] Our paper Low-Rank Weight Training for Modern Speech Recognition Models is accepted at ICASSP 2025.
[Organization: CAI 2025 Workshops] Thrilled to co-organize the CAI 2025 Workshops LLM Stable Pretraining and Federated Optimization and Learning.
Sep 2024
[NeurIPS 2024] Our paper E2ENet has been accepted at NeurIPS 2024.
[EMNLP 2024×2] Two papers accepted at EMNLP 2024: FFN-SkipLLM and C4 Pruning Enough?
Aug 2024
[BMVC 2024] One paper accepted at BMVC 2024: Are Sparse Neural Networks Better Hard Sample Learners?
Jun 2024
[Interspeech 2024×2] 🔥 TWO papers in collaboration with Meta London have been accepted at Interspeech 2024: Data Pruning for ASR and Training ASR from Scratch.
[NeurIPS Challenge] 🔥 Excited to co-organize NeurIPS 2024 challenge Edge-Device Large Language Model Competition. We invite you to join the competition!
May 2024
[ICML 2024×3] 🔥 THREE papers accepted at ICML 2024: (1) LLM pruning, OWL (with Google Research); (2) understanding small-magnitude weights in LLMs, the JunkDNA hypothesis (with Intel Research); (3) BiDST.
Jan 2024
[Talk: CityU] Honored to be invited to give a talk on The Power of Model Sparsity at the Multimedia Analytics (MA) Laboratory, City University of Hong Kong.
Dec 2023
[CPAL 2024×3] Three papers accepted at CPAL 2024 in the spotlight track.
[Grant: NWO] We have been awarded 10,000,000 credits for the use of NVIDIA A100 GPUs, totaling 78,120 hours. Our sincere thanks go to NWO.
Jul 2023
[NeurIPS 2023] One paper accepted at NeurIPS 2023: Dynamic Sparsity Is Channel-Level Sparsity Learner.
[Intern: Google] I am joining Google's NYC office as a research intern.
Jun 2023
[ECML 2023×2] Two papers accepted at ECML-PKDD 2023: Robust Overfitting and Debiased Sparse Training.
Apr 2023
[ICML 2023] One paper accepted at ICML 2023: Large Kernel Distillation.
Nov 2022
[AAAI 2023] Our paper Lottery Pools has been accepted at AAAI 2023.
[LoG 2022 BEST PAPER] Our paper Untrained GNNs Tickets received the Best Paper Award at LoG 2022.
Sep 2022
[Talk: CMU] I was invited to give a talk on Model/Supervision Efficiency at the Xu Lab, Carnegie Mellon University.
May 2022
[UAI 2022] Our sparse-training paper Sup-tickets has been accepted at UAI 2022.
Mar 2022
[IDA 2022] Our paper has been accepted at IDA 2022, which was also the first conference (symposium) I attended, in the first year of my PhD. Life comes full circle :smile: