Focus

Three directions — how people adapt, how systems behave, what becomes possible.

Workforce reskilling

Not training on tools. Redesigning work around new capabilities.

AI safety in production

Accountable systems at scale. Safety isn't theoretical — it's how we build.

AI-native enterprise

Closing the gap between demos and deployment.

Publications

Multi-turn dialogue, fairness in decision-making, production AI.

Balancing Accuracy and Efficiency in Multi-Turn Intent Classification for LLM-Powered Dialog Systems in Production

IAAI 2026

Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Production dialogue systems face a critical challenge: achieving high accuracy while maintaining low latency at scale. This work introduces Symbol Tuning and C-LARA — two complementary approaches that enable enterprise deployment of LLM-powered intent classification at a fraction of the computational cost.

From Intents to Conversations: Generating Intent-Driven Dialogues with Contrastive Learning

CIKM 2025

Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Chain-of-Intent combines Hidden Markov Models with LLMs to generate context-aware dialogues through self-play, addressing the fundamental data scarcity problem in conversational AI.
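The HMM half of this idea can be sketched in a few lines: sample an intent sequence from a Markov chain over intents, then hand each intent to a generator (the LLM self-play step, stubbed out here). The intent labels and transition weights below are illustrative placeholders, not values from the paper.

```python
import random

# Toy intent set and transition probabilities (illustrative only,
# not taken from the Chain-of-Intent paper).
INTENTS = ["greet", "track_order", "refund", "escalate", "goodbye"]
TRANSITIONS = {
    "greet":       {"track_order": 0.5, "refund": 0.3, "goodbye": 0.2},
    "track_order": {"refund": 0.3, "escalate": 0.2, "goodbye": 0.5},
    "refund":      {"escalate": 0.4, "goodbye": 0.6},
    "escalate":    {"goodbye": 1.0},
    "goodbye":     {},  # terminal state
}

def sample_intent_chain(start="greet", max_turns=6, rng=None):
    """Sample a sequence of intents by walking the Markov chain."""
    rng = rng or random.Random(0)
    chain = [start]
    while len(chain) < max_turns:
        nxt = TRANSITIONS[chain[-1]]
        if not nxt:
            break
        states, probs = zip(*nxt.items())
        chain.append(rng.choices(states, weights=probs)[0])
    return chain

def render_dialogue(chain, generate_turn):
    """Turn each intent into an utterance, conditioned on the history.
    In the real pipeline, generate_turn would be an LLM self-play call."""
    return [(intent, generate_turn(intent, chain[:i]))
            for i, intent in enumerate(chain)]

# Stub standing in for the LLM generation step.
stub = lambda intent, history: f"<utterance expressing '{intent}'>"
dialogue = render_dialogue(sample_intent_chain(), stub)
```

The chain supplies the intent-level structure; swapping the stub for a real model call yields full synthetic dialogues.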

Understanding Fairness-Accuracy Trade-offs in Machine Learning Models

ASONAM 2025

Junhua Liu, Roy Ka-Wei Lee, Kwan Hui Lim

Using real university admissions data, we challenge the assumption that fairness and accuracy are inherently in tension. Our findings show that ML models exceed human decision-makers in fairness consistency by 14-18% while maintaining comparable accuracy.

BGM-HAN: Hierarchical Attention for Fair Decision Assessment on Semi-Structured Profiles

ASONAM 2025

Junhua Liu, Roy Ka-Wei Lee, Kwan Hui Lim

Combining Byte-Pair Encoding with gated multi-head hierarchical attention for nuanced assessment of semi-structured data. Achieves F1-score of 0.8453 while offering interpretability for high-stakes decisions.
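The shape of a gated hierarchical attention stack can be sketched with NumPy: attend within each field of a profile, gate the attended features against the raw ones, pool to field vectors, then attend again across fields. Projections are random and dimensions are arbitrary here; this illustrates the hierarchy and gating pattern, not BGM-HAN's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads=4):
    """Scaled dot-product self-attention over X (seq_len, d_model),
    with random projections standing in for learned weights."""
    seq, d = X.shape
    dh = d // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh))
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1)

def gated_block(X):
    """Sigmoid gate blends attended features with the raw input."""
    H = multi_head_attention(X)
    d = X.shape[1]
    Wg = rng.standard_normal((2 * d, d)) / np.sqrt(d)
    g = 1.0 / (1.0 + np.exp(-np.concatenate([X, H], axis=-1) @ Wg))
    return g * H + (1.0 - g) * X

def hierarchical_encode(fields):
    """Encode each field's tokens, pool to field vectors,
    then attend over fields for a profile-level representation."""
    field_vecs = np.stack([gated_block(f).mean(axis=0) for f in fields])
    return gated_block(field_vecs).mean(axis=0)

# A toy profile: two semi-structured fields of token embeddings (dim 16).
profile = [rng.standard_normal((5, 16)), rng.standard_normal((3, 16))]
vec = hierarchical_encode(profile)
```

The two-level structure is what lets attention weights be inspected per field and per token, which is where the interpretability claim comes from.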

LARA: Linguistic-Adaptive Retrieval-Augmentation for Multi-Turn Intent Classification

EMNLP 2024

Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Combines a fine-tuned compact model with a retrieval-augmented LLM architecture for cross-lingual intent classification. Achieves a 3.67% accuracy improvement across six languages.
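The retrieval-augmentation step can be sketched as: embed the query, retrieve the most similar labelled examples, and assemble them into a few-shot classification prompt for the LLM. The bag-of-words cosine below stands in for a real multilingual encoder, and the example pool is invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real multilingual sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, labelled_pool, k=3):
    """Return the k labelled examples most similar to the query."""
    q = embed(query)
    ranked = sorted(labelled_pool,
                    key=lambda ex: cosine(q, embed(ex["text"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, examples):
    """Assemble retrieved examples into a few-shot classification prompt."""
    lines = ["Classify the user's intent. Examples:"]
    lines += [f"- {ex['text']} -> {ex['intent']}" for ex in examples]
    lines.append(f"User: {query}\nIntent:")
    return "\n".join(lines)

pool = [
    {"text": "where is my parcel", "intent": "track_order"},
    {"text": "i want my money back", "intent": "refund"},
    {"text": "cancel my subscription", "intent": "cancel"},
]
query = "my parcel has not arrived"
prompt = build_prompt(query, retrieve(query, pool, k=2))
```

A fine-tuned compact model would typically produce a candidate shortlist first, with the retrieval-augmented LLM resolving the harder multi-turn cases.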

All publications

Let's talk

Collaboration, training, or just curious.

hello@forth.ai