About me

Hi! I am Jie Wang (王颉), a junior undergraduate student majoring in Computer Science and Technology at the College of Computer Science & Fan-Gongxiu Honors College, Beijing University of Technology (BJUT).

I am currently a Research Assistant at the Beijing Institute of Artificial Intelligence, advised by Associate Researcher Jinduo Liu. My research focuses on Multimodal Learning, Large Language Model (LLM) Security, and Reinforcement Learning.

In the summer of 2025, I visited North Carolina State University (NCSU) as a Research Assistant under Dr. Muhammad Shahzad, where our work on WiFi-based gesture recognition won 1st Place in the Poster Symposium (1/50).

🔔 News

📅 March 2026
Paper: Two papers submitted to CCF-A conferences: EviGuard (cross-modal safety unlearning) and VidTouch (multimodal benchmark dataset).
📅 January 2026
Paper: Our paper BEACON, on LLM safety evaluation, is under review at a top-tier AI conference.
📅 November 2025
Award: Received the Fan-Gongxiu Honors Scholarship and the Xiaomi Entrepreneur Scholarship.
📅 August 2025
Conference: Won 1st Place in the Poster Symposium (1/50) at the U.S. GEARS Research Program @ NCSU for our work on AttentiveCSI.
📅 August 2025
Award: Awarded the National 2nd Prize in the China Robot Competition and Artificial Intelligence Contest.
📅 February 2025
Patent: Invention patent on intelligent vehicle decision-making (ID: 202510141836.1) passed preliminary examination.

🎓 Education

Beijing University of Technology (BJUT) | Sept 2023 – June 2027 (Expected)
Bachelor of Engineering in Computer Science and Technology, Fan-Gongxiu Honors College

  • Academic Standing: GPA 4.00 / 4.00 (95.72 / 100), Rank 1st / 60
  • Honors: Fan-Gongxiu Honors College (Top 40 selected from 3000+ students)
  • Key Coursework:
    • Advanced Mathematics (100), Probability and Statistics (98)
    • Data Structures & Algorithms (98), High-level Programming Language (97)
    • Set Theory & Graph Theory (99), Pattern Recognition (97), Computer Organization (97)

North Carolina State University (NCSU) | June 2025 – August 2025
Visiting Research Assistant, supervised by Dr. Muhammad Shahzad

  • Developed AttentiveCSI: WiFi-based Channel State Information Gesture Recognition with Attention-Enhanced CNN-LSTM Network
  • Awarded First Place in the Poster Symposium (1/50 participants)

🔬 Research Interests

My research centers on building safe, robust, and multimodal AI systems:

  • LLM Security & Red-teaming: Evaluating and enhancing the robustness of large language models against adversarial attacks
  • Multimodal Learning: Cross-modal safety alignment and multimodal perception (vision + tactile sensing)
  • Reinforcement Learning: Decision-making optimization and knowledge graph reasoning

🧠 Selected Research Projects

Cross-Modal Safety Unlearning in Multimodal LLMs (EviGuard)

January 2026 – April 2026 | Funded by the Honors Cornerstone Design Program

  • Key Insight: Identified that cross-modal risk in multimodal LLMs concentrates in a low-rank subspace at the visual-language connector
  • Achievement: Reduced over-refusal rate (SARR) from 30.3% to 22.3%; on OOD benchmark SIUO, achieved 9.8% ASR and 68.0% Safe & Effective rate
  • Status: Under review at ACM MM 2026

Budget-Constrained Safety Evaluation of LLMs (BEACON)

November 2025 – January 2026 | Funded by the Honors Keystone Design Program

  • Innovation: Reframed LLM safety evaluation as budget-constrained failure discovery with novel efficiency-oriented metrics (k-FDQ, NDA, CCR, DV)
  • Achievement: Built Cognitive-Guided MCTS achieving 85.5–100% ASR across 6 frontier LLMs, discovering failures 3.7× faster than the strongest baseline
  • Status: Advanced to Phase 2 of IJCAI 2026 review with all positive reviews

Vision-based Tactile Perception via Multimodal Learning (VidTouch)

December 2024 – June 2025 | Funded by Chinese National College Students’ Innovation Program

  • Contribution: Built VidTouch dataset (145 fabric categories, 440 RGB images / 440 tactile videos) — the first dynamic multimodal benchmark for fabric recognition
  • Achievement: Designed X3D + EfficientNet + MLP fusion pipeline achieving 98.6% in-category accuracy
  • Status: Under review at ACM MM 2026 (Dataset Track)

Autonomous Driving Decision-Making based on Game Theory

March 2024 – February 2025 | Advisor: Dr. Heng Deng

  • Designed a decision-making framework for intelligent vehicles in foggy weather using dynamic game theory with CarSim & Simulink
  • Patent: Invention Patent ID: 202510141836.1 (Passed Preliminary Examination)

🏆 Selected Awards & Honors

  • Best Poster Award (1st Place), U.S. GEARS Research Program @ NCSU, August 2025
  • Fan-Gongxiu Honors Scholarship, November 2025
  • Xiaomi Entrepreneur Scholarship, November 2025
  • Study Excellence Scholarship, September 2024 & 2025
  • National 2nd Prize, China Robot Competition and Artificial Intelligence Contest, August 2025
  • Outstanding Student Honor, BJUT, April 2024

🛠 Technical Skills

Programming Languages

  • Python, C/C++, Java, MATLAB, Verilog HDL

AI/ML Frameworks

  • PyTorch, TensorFlow, HuggingFace Transformers, Scikit-learn

Development Tools

  • Linux, Git/GitHub, Docker, Vibe Coding, LaTeX

Languages

  • English — IELTS 7.0 (Reading & Writing both 7.0)
  • Chinese — Native

Last updated: April 2026