(1) Securing Large Language Models Against Jailbreaking Attacks: A Novel Framework for Robust AI Safety. LEX 2025, 23 (S6), 558-567. https://doi.org/10.52152/801808.