Lu Yin

l.yin@surrey.ac.uk ; l.yin@tue.nl


Greetings! I’m Lu, an Assistant Professor in the School of Computer Science and Electronic Engineering at the University of Surrey. I am honored to be a long-term visitor and collaborator with the Visual Informatics Group (VITA) at UT Austin, led by Prof. Atlas Wang. Additionally, I am a long-term visiting researcher at Eindhoven University of Technology (TU/e). I lead the Lightweight & Universal Machine Intelligence (LUMI) lab.

Previously, I served as a Postdoctoral Fellow at TU/e and worked as a research scientist intern at Google’s New York City office.

My research interests include:

  • Efficient and Scalable Foundation Models
  • Understanding and Enhancing LLMs
  • Interdisciplinary AI Applications

Feel free to reach out if you’d like to discuss anything with me :)

News

Sep 2025
NeurIPS 2025×3 Three papers have been accepted at NeurIPS 2025: Layer-wise Weight Decay in LLM, Gradient-Preserving Activation Scaling in LLM, and The Curse of Depth in LLM.
Aug 2025
BMVC 2025 Our paper CLIMB-3D, about continual learning for imbalanced 3D instance segmentation, has been accepted at BMVC 2025.
ACM SIGSPATIAL Our paper Into the Unknown has been accepted at ACM SIGSPATIAL; it builds an urban foundation model for individuals' next-location prediction.
Jun 2025
ACL 2025 Our paper OWS about LLM PEFT has been accepted to ACL 2025.
Journal CEUS Our paper on the urban space foundation model CaLLiPer has been accepted at Computers, Environment and Urban Systems.
May 2025
ICML 2025×2 Two papers accepted at ICML 2025: (1) low-rank weight training WeLore; (2) low-rank weight fine-tuning LIFT.
Feb 2025
CPAL 2025 Our paper Q-Galore has been accepted at CPAL.
ICLR 2025×3 THREE papers accepted at ICLR 2025: (1) Normalization for LLMs Mix-LN (2) Enhancing LLM Alignment with Ternary Preferences TODO (3) Debiasing via Spuriousness Ranking SEBRA.
Dec 2024
ICASSP 2025 Our paper Low-Rank Weight Training for Modern Speech Recognition Models is accepted at ICASSP 2025.
Organization: CAI 2025 Workshops Thrilled to co-organize the CAI 2025 Workshops LLM Stable Pretraining and Federated Optimization and Learning.
Sep 2024
NeurIPS 2024 Our paper got accepted by NeurIPS 2024: E2ENet
EMNLP 2024×2 Two papers got accepted by EMNLP 2024: FFN-SkipLLM and C4 Pruning Enough?
Aug 2024
BMVC 2024 We got one paper accepted at BMVC 2024: Are Sparse Neural Networks Better Hard Sample Learners?
Jun 2024
Interspeech 2024×2 🔥 TWO papers in collaboration with Meta London have been accepted at Interspeech 2024: Data Pruning for ASR and Training ASR from Scratch.
NeurIPS Challenge 🔥 Excited to co-organize NeurIPS 2024 challenge Edge-Device Large Language Model Competition. We invite you to join the competition!
May 2024
ICML 2024×3 🔥 We got THREE papers accepted at ICML 2024: (1) LLM pruning OWL (with Google Research); (2) understanding small-magnitude weights in LLMs, the JunkDNA hypothesis (with Intel Research); (3) BiDST.
Jan 2024
Talk: CityU Honored to be invited to give a talk about The Power of Model Sparsity at the Multimedia Analytics (MA) Laboratory, City University of Hong Kong.
Dec 2023
CPAL 2024×3 Three papers got accepted at CPAL in the spotlight track.
Grant: NWO We have been awarded 10,000,000 credits for the use of NVIDIA A100 GPUs, totaling 78,120 hours. Our sincere thanks go to NWO.
Jul 2023
NeurIPS 2023 One paper got accepted by NeurIPS 2023: Dynamic Sparsity Is Channel-Level Sparsity Learner
Intern: Google I am joining Google's NYC office as a research intern.
Jun 2023
ECML 2023×2 Two papers got accepted by ECML-PKDD 2023: Robust Overfitting and Debiased Sparse Training.
Apr 2023
ICML 2023 One paper got accepted by ICML 2023: Large Kernel Distillation
Nov 2022
AAAI 2023 Our paper Lottery Pools got accepted by AAAI 2023
LoG 2022 BEST PAPER Our paper Untrained GNNs Tickets received the Best Paper Award at LoG 2022.
Sep 2022
Talk: CMU I was invited to give a talk about Model/Supervision Efficiency at the Xu Lab at Carnegie Mellon University.
May 2022
UAI 2022 Our paper Sup-tickets sparse training got accepted by UAI 2022
Mar 2022
IDA 2022 Our paper is accepted by IDA 2022, which was also the first conference (symposium) I attended, in the first year of my PhD. Life comes full circle :smile: