Curriculum Vitae

📍 Location: Beijing, China

📞 Phone: (+86) 157-3835-4608

✉️ Email: wangjie@emails.bjut.edu.cn

🌐 Website: wangjie0326.github.io


## 🎓 Education

Beijing University of Technology (BJUT)

Bachelor of Engineering in Computer Science and Technology | Sept 2023 – June 2027 (Expected)
Fan-Gongxiu Honors College | Beijing, China
  • GPA: 4.00/4.00 | 95.72/100 | Rank: 1st/60
  • Honors: Fan-Gongxiu Honors College (Top 40 selected from 3000+ students)
  • Major Courses: Advanced Mathematics (100), Probability and Statistics (98), Data Structures & Algorithms (98), High-level Programming Language (97), Set Theory & Graph Theory (99), Pattern Recognition (97), Computer Organization (97)

North Carolina State University (NCSU)

Visiting Research Assistant | June 2025 – August 2025
Supervised by Dr. Muhammad Shahzad | Raleigh, NC, U.S.
  • Developed AttentiveCSI, a WiFi Channel State Information (CSI)-based gesture recognition system using an attention-enhanced CNN-LSTM network
  • Awarded First Place in the Poster Symposium (1st of 50 participants)

## 📝 Invention Patent & Publications

[1] BEACON: Budget-Efficient Discovery of Policy Violations in LLMs via Cognitive-Guided Monte Carlo Tree Search

Jie Wang, et al.
CCF-A AI Conference, Under Review | January 2026

[2] EviGuard: Evidence-Guided Connector Intervention for Cross-Modal Safety Unlearning in Multimodal LLMs

Jie Wang, et al.
CCF-A Multimedia Conference, Under Review | March 2026

[3] VidTouch: A Multimodal Benchmark Dataset for Dynamic Visuo-Tactile Fabric Recognition

Jie Wang, et al.
CCF-A Multimedia Conference (Dataset Track), Under Review | March 2026

[4] A Lane Change Decision-Making Method for Intelligent Vehicles in Foggy Days Based on Dynamic Game Theory

Jie Wang (First Author), et al.
Invention Patent ID: 202510141836.1 | Passed Preliminary Examination | February 2025

## 🔬 Project Experience

Cross-Modal Safety Unlearning in Multimodal LLMs (EviGuard)

Funded by the Honors Cornerstone Design Program | January 2026 – April 2026 | Beijing, China
  • Identified that cross-modal risk in multimodal LLMs concentrates in a low-rank subspace at the visual-language connector, motivating targeted intervention rather than uniform suppression
  • Reduced the over-refusal rate (SARR) from 30.3% to 22.3%; on the OOD benchmark SIUO, achieved 9.8% ASR and a 68.0% Safe & Effective rate, vs. 81.2% / 8.0% for the best baseline
  • Contributions: Experimental framework design, algorithm implementation, ablation analysis, manuscript writing

Budget-Constrained Safety Evaluation of LLMs (BEACON)

Funded by the Honors Keystone Design Program | November 2025 – January 2026 | Beijing, China
  • Reframed LLM safety evaluation as budget-constrained failure discovery; formalized efficiency-oriented metrics (k-FDQ, NDA, CCR, DV) capturing discovery timing and harm category diversity beyond traditional ASR
  • Built Cognitive-Guided MCTS with defense persona profiling and diversity-aware selection; achieved 85.5–100% ASR across 6 frontier LLMs, discovering failures 3.7× faster than the strongest baseline (k-FDQ 26 vs. 95)
  • Contributions: Problem formulation, attack design, experimental analysis, manuscript preparation

Vision-based Tactile Perception via Multimodal Learning (VidTouch)

Funded by Chinese National College Students' Innovation Program | December 2024 – June 2025 | Beijing, China
  • Built VidTouch (145 fabric categories, 440 RGB images / 440 tactile videos), the first dynamic multimodal benchmark for fabric recognition; supports multi-label classification, cross-modal retrieval, and zero-shot generalization
  • Designed an X3D + EfficientNet + MLP fusion pipeline achieving 98.6% in-category accuracy; revealed a zero-shot generalization bottleneck (41.3%), positioning VidTouch as an open challenge benchmark
  • Contributions: Dataset construction, model training and optimization, and experimental validation

## 🏆 Selected Awards and Honors
  • Fan-Gongxiu Honors Scholarship (November 2025)
  • Xiaomi Entrepreneur Scholarship (November 2025)
  • Study Excellence Scholarship (September 2024 & 2025)
  • Outstanding Student Honor (April 2024)
  • National 2nd Prize - China Robot Competition and Artificial Intelligence Contest (August 2025)

## 💻 Technical Skills
Programming Languages:
Python, C/C++, Java, MATLAB, Verilog HDL
AI/ML Frameworks:
PyTorch, TensorFlow, HuggingFace Transformers, Scikit-learn
Development Tools:
Linux, Git/GitHub, Docker, Vibe Coding
Languages:
English — IELTS 7.0 (Reading & Writing both 7.0); Chinese — Native

Last updated: April 2026