Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim
Production dialogue systems face a critical challenge: achieving high accuracy while maintaining low latency at scale. This work introduces two complementary approaches, Symbol Tuning and C-LARA, that enable enterprise deployment of LLM-powered intent classification at a fraction of the usual computational cost.
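A minimal sketch of the symbol-tuning idea as described in the broader in-context-learning literature: natural-language intent labels in few-shot demonstrations are replaced with arbitrary symbols, so the model must infer the label mapping from the examples rather than from label semantics. The symbols, helper names, and example format below are illustrative assumptions, not details from the paper.

```python
# Illustrative symbol-tuned prompt builder for intent classification.
# Labels are swapped for opaque symbols so the model learns the mapping
# from demonstrations. All names and formats here are hypothetical.

SYMBOLS = ["<A>", "<B>", "<C>", "<D>"]

def build_symbol_tuned_prompt(examples, query):
    """examples: list of (utterance, intent_label) pairs.
    Returns the prompt text and a symbol->label map to decode the answer."""
    labels = sorted({lbl for _, lbl in examples})
    sym_of = {lbl: SYMBOLS[i] for i, lbl in enumerate(labels)}
    lines = [f"Input: {u}\nLabel: {sym_of[lbl]}" for u, lbl in examples]
    lines.append(f"Input: {query}\nLabel:")
    decode = {sym: lbl for lbl, sym in sym_of.items()}
    return "\n\n".join(lines), decode
```

Decoding the model's symbolic answer back through `decode` recovers the original intent label.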
Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim
Chain-of-Intent combines Hidden Markov Models with LLMs to generate context-aware dialogues through self-play, addressing the fundamental data scarcity problem in conversational AI.
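The HMM-plus-LLM self-play pattern can be sketched as follows: a small Markov model over intents samples an intent chain, and an LLM (stubbed out here) realises each intent as a dialogue turn. The intent names, transition probabilities, and the placeholder generator are all illustrative assumptions, not the paper's actual model.

```python
import random

# Hypothetical sketch of Chain-of-Intent-style self-play generation:
# an HMM-like transition matrix supplies the intent sequence, and a
# stubbed "LLM" turns each intent into an utterance.

INTENTS = ["greet", "track_order", "refund", "goodbye"]
TRANSITIONS = {  # made-up transition probabilities
    "greet":       {"track_order": 0.6, "refund": 0.3, "goodbye": 0.1},
    "track_order": {"refund": 0.2, "goodbye": 0.8},
    "refund":      {"goodbye": 1.0},
    "goodbye":     {"goodbye": 1.0},
}

def sample_intent_chain(start="greet", max_len=5, rng=None):
    """Sample a chain of intents by walking the transition matrix."""
    rng = rng or random.Random(0)
    chain = [start]
    while chain[-1] != "goodbye" and len(chain) < max_len:
        nxt_intents = list(TRANSITIONS[chain[-1]].keys())
        weights = list(TRANSITIONS[chain[-1]].values())
        chain.append(rng.choices(nxt_intents, weights=weights)[0])
    return chain

def llm_generate_turn(intent):
    """Placeholder for an LLM call that realises an intent as an utterance."""
    return f"<user utterance expressing '{intent}'>"

def self_play_dialogue(**kwargs):
    """Generate one synthetic dialogue: (intent, utterance) per turn."""
    return [(i, llm_generate_turn(i)) for i in sample_intent_chain(**kwargs)]
```

Conditioning each generated turn on the sampled intent is what makes the synthetic dialogues context-aware rather than free-running.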
Junhua Liu, Roy Ka-Wei Lee, Kwan Hui Lim
Using real university admissions data, we challenge the assumption that fairness and accuracy exist in tension. Our findings reveal that ML models exceed human fairness consistency by 14-18% while maintaining comparable accuracy.
Junhua Liu, Roy Ka-Wei Lee, Kwan Hui Lim
Combines Byte-Pair Encoding with gated multi-head hierarchical attention for nuanced assessment of semi-structured data, achieving an F1-score of 0.8453 while offering interpretability for high-stakes decisions.
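A toy NumPy sketch of one gated multi-head attention layer, following the common pattern of modulating the attention output with a sigmoid gate computed from the input; the hierarchical stacking and BPE front end from the paper are not reproduced. All shapes and weights are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gated_multihead_attention(X, Wq, Wk, Wv, Wg, n_heads):
    """X: (T, d) token embeddings; Wq/Wk/Wv/Wg: (d, d). Returns (T, d).
    A sigmoid gate derived from X modulates the concatenated head outputs."""
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    gate = 1.0 / (1.0 + np.exp(-(X @ Wg)))       # sigmoid gate, (T, d)
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        attn = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))  # (T, T)
        heads.append(attn @ V[:, s])              # (T, dh)
    return gate * np.concatenate(heads, axis=1)   # gated output, (T, d)
```

The gate lets the layer suppress uninformative positions, which is one route to the interpretability claim: gate activations can be inspected per token.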
Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim
Combines a fine-tuned compact model with a retrieval-augmented LLM architecture for cross-lingual intent classification, achieving a 3.67% accuracy improvement across six languages.
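One plausible reading of this hybrid is a confidence-routed pipeline: the compact classifier handles high-confidence inputs cheaply, and low-confidence inputs fall back to a retrieval-augmented LLM. The stub classifier, retriever, LLM, and the 0.8 threshold below are all hypothetical, included only to show the routing shape.

```python
# Hypothetical confidence-routing sketch; every component is a stand-in.

def compact_classify(text):
    """Stub for a fine-tuned compact model returning (intent, confidence)."""
    return ("track_order", 0.55) if "where" in text else ("refund", 0.95)

def rag_llm_classify(text, retrieve, llm):
    """Retrieve labelled neighbours, then let the LLM choose among them."""
    candidates = retrieve(text)
    prompt = "Candidates: " + ", ".join(candidates) + f"\nQuery: {text}\nIntent:"
    return llm(prompt)

def classify(text, threshold=0.8, retrieve=None, llm=None):
    """Route: cheap compact model first, RAG-LLM fallback when unsure."""
    intent, conf = compact_classify(text)
    if conf >= threshold:
        return intent
    return rag_llm_classify(text, retrieve, llm)
```

The appeal of this design is that the expensive LLM path is only paid for on the uncertain tail of the traffic.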