# Curriculum Vitae
📍 Location: Beijing, China
📞 Phone: (+86) 157-3835-4608
✉️ Email: wangjie@emails.bjut.edu.cn
🌐 Website: wangjie0326.github.io
## 🎓 Education
Beijing University of Technology (BJUT)
- GPA: 4.00/4.00 | 95.72/100 | Rank: 1st/60
- Honors: Fan-Gongxiu Honors College (Top 40 selected from 3000+ students)
- Major Courses: Advanced Mathematics (100), Probability and Statistics (98), Data Structures & Algorithms (98), High-level Programming Language (97), Set Theory & Graph Theory (99), Pattern Recognition (97), Computer Organization (97)
North Carolina State University (NCSU)
- Developed AttentiveCSI, a WiFi Channel State Information (CSI)-based gesture recognition system built on an attention-enhanced CNN-LSTM network
- Awarded First Place in the Poster Symposium (1st of 50 participants)
## 📝 Invention Patent & Publications
[1] BEACON: Budget-Efficient Discovery of Policy Violations in LLMs via Cognitive-Guided Monte Carlo Tree Search
[2] EviGuard: Evidence-Guided Connector Intervention for Cross-Modal Safety Unlearning in Multimodal LLMs
[3] VidTouch: A Multimodal Benchmark Dataset for Dynamic Visuo-Tactile Fabric Recognition
[4] A Lane Change Decision-Making Method for Intelligent Vehicles in Foggy Days Based on Dynamic Game Theory
## 🔬 Project Experience
Cross-Modal Safety Unlearning in Multimodal LLMs (EviGuard)
- Identified that cross-modal risk in multimodal LLMs concentrates in a low-rank subspace at the visual-language connector, motivating targeted intervention rather than uniform suppression
- Reduced over-refusal rate (SARR) from 30.3% to 22.3%; on the OOD benchmark SIUO, achieved 9.8% ASR and a 68.0% Safe & Effective rate, vs. the best baseline's 81.2% ASR and 8.0% Safe & Effective rate
- Contributions: Experimental framework design, algorithm implementation, ablation analysis, manuscript writing
Budget-Constrained Safety Evaluation of LLMs (BEACON)
- Reframed LLM safety evaluation as budget-constrained failure discovery; formalized efficiency-oriented metrics (k-FDQ, NDA, CCR, DV) capturing discovery timing and harm category diversity beyond traditional ASR
- Built Cognitive-Guided MCTS with defense persona profiling and diversity-aware selection; achieved 85.5–100% ASR across 6 frontier LLMs, discovering failures 3.7× faster than strongest baseline (k-FDQ 26 vs. 95)
- Contributions: Problem formulation, attack design, experimental analysis, manuscript preparation
Vision-based Tactile Perception via Multimodal Learning (VidTouch)
- Built VidTouch (145 fabric categories, 440 RGB images / 440 tactile videos), the first dynamic multimodal benchmark for fabric recognition; supports multi-label classification, cross-modal retrieval, and zero-shot generalization
- Designed an X3D + EfficientNet + MLP fusion pipeline achieving 98.6% in-category accuracy; revealed a zero-shot generalization bottleneck (41.3% accuracy), positioning VidTouch as an open challenge benchmark
- Contributions: Dataset construction, model training and optimization, and experimental validation
## 🏆 Selected Awards and Honors
- Fan-Gongxiu Honors Scholarship (November 2025)
- Xiaomi Entrepreneur Scholarship (November 2025)
- Study Excellence Scholarship (September 2024 & September 2025)
- Outstanding Student Honor (April 2024)
- National 2nd Prize - China Robot Competition and Artificial Intelligence Contest (August 2025)
## 💻 Technical Skills
- Programming Languages: Python, C/C++, Java, MATLAB, Verilog HDL
- AI/ML Frameworks: PyTorch, TensorFlow, HuggingFace Transformers, Scikit-learn
- Development Tools: Linux, Git/GitHub, Docker, Vibe Coding
- Languages: English (IELTS 7.0, Reading & Writing both 7.0); Chinese (Native)
Last updated: April 2026
