Luo Xianzhen

A Survey on Natural Language Processing for Programming

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 1690--1704, 2024.

Zhu, Qingfu and Luo, Xianzhen and Liu, Fang and Gao, Cuiyun and Che, Wanxiang

Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training

arXiv preprint arXiv:2406.17404, 2024.

Wang, Yixuan and Luo, Xianzhen and Wei, Fuxuan and Liu, Yijun and Zhu, Qingfu and Zhang, Xuanyu and Yang, Qing and Xu, Dongliang and Che, Wanxiang

Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging

Findings of the Association for Computational Linguistics: ACL 2022, 637--647, 2022.

Hou, Yutai and Chen, Cheng and Luo, Xianzhen and Li, Bohan and Che, Wanxiang