Focus

Three directions — how people adapt, how systems behave, what becomes possible.

Workforce reskilling

Not training on tools. Redesigning work around new capabilities.

AI safety in production

Accountable systems at scale. Safety isn't theoretical — it's how we build.

AI-native enterprise

Closing the gap between demos and deployment.

Publications

Research across our three focus areas.

Workforce reskilling

Title2Vec: A Contextual Job Title Embedding for Occupational Named Entity Recognition and Other Applications

J. Big Data 2022

Junhua Liu, Yung Chuen Ng, Zitong Gui, Trisha Singhal, Lucienne T M Blessing, Kristin L Wood, Kwan Hui Lim

Contextual embeddings for job titles that enable occupational named entity recognition. Addresses the challenge of understanding workforce skills and career transitions at scale.
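
A minimal sketch of the underlying idea, using a generic BERT encoder from Hugging Face as a stand-in for the Title2Vec embedding; the checkpoint name and mean pooling are illustrative assumptions, not the paper's implementation:

```python
# Illustrative only: contextual job-title embeddings via a generic BERT
# encoder, standing in for Title2Vec. Mean pooling is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed_title(title: str) -> torch.Tensor:
    """Return a single contextual vector for a job title (mean-pooled)."""
    inputs = tokenizer(title, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)               # (768,)

# Nearby vectors suggest related occupations, which downstream NER and
# career-transition models can exploit.
a = embed_title("Senior Software Engineer")
b = embed_title("Machine Learning Engineer")
print(f"cosine similarity: {torch.cosine_similarity(a, b, dim=0).item():.3f}")
```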

IPOD: A Large-scale Industrial and Professional Occupation Dataset

CSCW 2020

Junhua Liu, Yung Chuen Ng, Kristin L Wood, Kwan Hui Lim

A comprehensive dataset of industrial and professional occupations supporting research in workforce analytics, skills mapping, and career pathway prediction.

AI-native enterprise

Balancing Accuracy and Efficiency in Multi-Turn Intent Classification for LLM-Powered Dialog Systems in Production

IAAI 2026

Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Production dialogue systems face a critical challenge: achieving high accuracy while maintaining low latency at scale. This work introduces Symbol Tuning and C-LARA — two complementary approaches that enable enterprise deployment of LLM-powered intent classification at a fraction of the computational cost.
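
A rough sketch of the general symbol-tuning idea this builds on, where intent labels are replaced with arbitrary symbols so the model relies on in-context examples rather than label names; the intents, symbols, and `call_llm` stub below are illustrative assumptions, not the paper's Symbol Tuning or C-LARA code:

```python
# Sketch of the general symbol-tuning idea: intent labels are swapped for
# arbitrary symbols so the model must learn the mapping from in-context
# examples rather than label semantics. Intents, symbols, and the `call_llm`
# stand-in are illustrative assumptions.
from typing import Callable

INTENT_TO_SYMBOL = {"track_order": "@1", "refund_request": "@2", "product_info": "@3"}
SYMBOL_TO_INTENT = {v: k for k, v in INTENT_TO_SYMBOL.items()}

def build_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format few-shot examples with symbolic labels, then append the query."""
    blocks = [f"Utterance: {text}\nLabel: {INTENT_TO_SYMBOL[intent]}"
              for text, intent in examples]
    blocks.append(f"Utterance: {query}\nLabel:")
    return "\n\n".join(blocks)

def classify(query: str, examples: list[tuple[str, str]],
             call_llm: Callable[[str], str]) -> str:
    """Ask the model for a symbol and map it back to an intent name."""
    symbol = call_llm(build_prompt(examples, query)).strip()
    return SYMBOL_TO_INTENT.get(symbol, "unknown")

# Demo with a dummy model; a production system would call an LLM endpoint here.
examples = [("where is my parcel", "track_order"),
            ("i want my money back", "refund_request")]
print(classify("has my package shipped yet", examples, lambda prompt: "@1"))
```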

From Intents to Conversations: Generating Intent-Driven Dialogues with Contrastive Learning

CIKM 2025

Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Chain-of-Intent combines Hidden Markov Models with LLMs to generate context-aware dialogues through self-play, addressing the fundamental data scarcity problem in conversational AI.
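
A compact sketch of that pipeline, assuming a hand-written intent transition matrix and a stubbed generation call in place of the paper's learned HMM and self-play prompts:

```python
# Sketch of the Chain-of-Intent idea: an HMM-style intent transition model
# proposes an intent sequence, and an LLM (stubbed out here) turns each intent
# into a dialogue turn. Transition probabilities and `generate_turn` are
# illustrative assumptions, not the paper's values.
import numpy as np

INTENTS = ["greeting", "track_order", "refund_request", "goodbye"]
TRANSITIONS = np.array([
    [0.0, 0.6, 0.3, 0.1],   # from greeting
    [0.0, 0.2, 0.3, 0.5],   # from track_order
    [0.0, 0.2, 0.2, 0.6],   # from refund_request
    [0.0, 0.0, 0.0, 1.0],   # goodbye is absorbing
])

def sample_intent_chain(max_turns: int = 6, seed: int = 0) -> list[str]:
    """Walk the intent transition matrix starting from 'greeting'."""
    rng = np.random.default_rng(seed)
    state, chain = 0, ["greeting"]
    while len(chain) < max_turns and INTENTS[state] != "goodbye":
        state = rng.choice(len(INTENTS), p=TRANSITIONS[state])
        chain.append(INTENTS[state])
    return chain

def generate_turn(intent: str, history: list[str]) -> str:
    """Placeholder for an LLM self-play call conditioned on intent and history."""
    return f"<user turn expressing intent '{intent}'>"

history: list[str] = []
for intent in sample_intent_chain():
    history.append(generate_turn(intent, history))
print("\n".join(history))
```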

LARA: Linguistic-Adaptive Retrieval-Augmentation for Multi-Turn Intent Classification

EMNLP 2024

Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Combines a fine-tuned compact model with a retrieval-augmented LLM architecture for cross-lingual intent classification. Achieves a 3.67% accuracy improvement across six languages.
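
A simplified sketch of the retrieval-augmented half of this design, using TF-IDF retrieval over labelled utterances and omitting the fine-tuned compact model; the example data and prompt format are assumptions:

```python
# Rough sketch of retrieval-augmented intent classification in the spirit of
# LARA: retrieve the most similar labelled utterances for a new query and hand
# them to an LLM as in-context evidence. TF-IDF retrieval is a simplifying
# assumption; the fine-tuned compact classifier is omitted for brevity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

LABELLED = [("where is my parcel", "track_order"),
            ("i want a refund", "refund_request"),
            ("cancel my order please", "cancel_order")]

vectorizer = TfidfVectorizer().fit([text for text, _ in LABELLED])
index = vectorizer.transform([text for text, _ in LABELLED])

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k labelled utterances most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    top = scores.argsort()[::-1][:k]
    return [LABELLED[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble retrieved evidence plus the query for an LLM to classify."""
    evidence = "\n".join(f"- '{text}' -> {label}" for text, label in retrieve(query))
    return f"Similar past utterances:\n{evidence}\n\nClassify the intent of: '{query}'"

print(build_prompt("my package never arrived"))
```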

AI safety in production

Understanding Fairness-Accuracy Trade-offs in Machine Learning Models

ASONAM 2025

Junhua Liu, Roy Ka-Wei Lee, Kwan Hui Lim

Using real university admissions data, we challenge the assumption that fairness and accuracy exist in tension. Our findings reveal that ML models exceed human fairness consistency by 14-18% while maintaining comparable accuracy.
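
As a worked illustration of the kind of comparison involved, the sketch below scores model and human decisions on synthetic data for accuracy and demographic parity gap; the paper's actual consistency metric and admissions data are not reproduced here:

```python
# Illustrative accuracy-vs-fairness comparison on synthetic data, assuming
# demographic parity gap as the fairness measure. Not the paper's metric.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # synthetic protected attribute
labels = rng.integers(0, 2, size=1000)   # synthetic ground-truth outcomes
model_decisions = np.where(rng.random(1000) < 0.90, labels, 1 - labels)
human_decisions = np.where(rng.random(1000) < 0.88, labels, 1 - labels)

for name, decisions in [("model", model_decisions), ("human", human_decisions)]:
    accuracy = (decisions == labels).mean()
    gap = demographic_parity_gap(decisions, group)
    print(f"{name}: accuracy={accuracy:.3f}, parity gap={gap:.3f}")
```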

BGM-HAN: Hierarchical Attention for Fair Decision Assessment on Semi-Structured Profiles

ASONAM 2025

Junhua Liu, Roy Ka-Wei Lee, Kwan Hui Lim

Combines Byte-Pair Encoding with gated multi-head hierarchical attention for nuanced assessment of semi-structured data. Achieves an F1-score of 0.8453 while offering interpretability for high-stakes decisions.
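
A minimal PyTorch sketch of gated hierarchical attention over a field-structured profile, in the spirit of this approach; the dimensions, gating form, and pooling choices are assumptions rather than the paper's exact architecture:

```python
# Minimal sketch of gated multi-head hierarchical attention: token-level
# attention within each profile field, field-level attention across fields,
# and a gated residual mix before classification. Sizes are illustrative.
import torch
import torch.nn as nn

class GatedHierarchicalAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.token_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.field_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, fields, tokens, dim) - subword token embeddings per field
        b, f, t, d = x.shape
        tokens = x.reshape(b * f, t, d)
        attended, _ = self.token_attn(tokens, tokens, tokens)
        fields = attended.mean(dim=1).reshape(b, f, d)   # one vector per field
        mixed, _ = self.field_attn(fields, fields, fields)
        gated = self.gate(fields) * mixed + fields       # gated residual mix
        profile = gated.mean(dim=1)                      # one vector per profile
        return self.classifier(profile)

logits = GatedHierarchicalAttention()(torch.randn(2, 5, 16, 128))
print(logits.shape)  # torch.Size([2, 2])
```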

All publications

Let's talk

Collaboration, training, or just curiosity.

hello@forth.ai