[1]
"Securing Large Language Models Against Jailbreaking Attacks: A Novel Framework for Robust AI Safety," LEX, vol. 23, no. S6, pp. 558–567, Oct. 2025, doi: 10.52152/801808.