Greetings! I’m Lu, a Postdoctoral Fellow at Eindhoven University of Technology (TU/e) in the beautiful Netherlands. Before that, I was a research intern at Google's NYC office.
I received my Ph.D. from the Data Mining group at TU/e, where I was fortunate to be supervised by the esteemed Prof. Mykola Pechenizkiy and Dr. Vlado Menkovski. I obtained my master’s and bachelor’s degrees at the Harbin Institute of Technology.
My research interests include #AI Efficiency, #Model Sparsity, #Large Language Models, and #AI for Science. Feel free to reach out if you’d like to discuss anything with me :)
|Oct, 2023||[Paper] Check out our latest research on Large Language Models: (1) LLM pruning with OWL, and (2) understanding small-magnitude weights in LLMs with the JunkDNA hypothesis. My two favorite works this year 🔥.|
|Jul, 2023||[Intern] I am joining Google's NYC office as a research intern.|
|Jun, 2023||[Paper] Two papers were accepted at ECML-PKDD 2023: Robust Overfitting, Debiased Sparse Training|
|Apr, 2023||[Paper] One paper was accepted at ICML 2023: Large Kernel Distillation|
|Mar, 2023||[Paper] Our paper Dynamic Sparsity Is Channel-Level Sparsity Learner was accepted at the SNN Workshop 2023 as a Spotlight|
|Nov, 2022||[Paper] Our paper Lottery Pools was accepted at AAAI 2023|
|Nov, 2022||[Paper] Our paper Untrained GNNs Tickets received the Best Paper Award at LoG 2022|
|Sep, 2022||[Talk] I was invited to give a talk on Model/Supervision Efficiency at the Xu Lab at Carnegie Mellon University|
|May, 2022||[Paper] Our paper Sup-tickets sparse training was accepted at UAI 2022|
|Mar, 2022||[Paper] Our paper was accepted at IDA 2022, which was also the first conference (symposium) I attended in the first year of my PhD. Life is like a cycle|