Hyeoncheol Kim


2024

Difficulty-Focused Contrastive Learning for Knowledge Tracing with a Large Language Model-Based Difficulty Prediction
Unggi Lee | Sungjun Yoon | Joon Seo Yun | Kyoungsoo Park | YoungHoon Jung | Damji Stratton | Hyeoncheol Kim
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

This paper presents novel techniques for enhancing the performance of knowledge tracing (KT) models by focusing on the crucial factor of question and concept difficulty level. Despite the acknowledged significance of difficulty, previous KT research has yet to exploit its potential for model optimization and has struggled to predict difficulty for unseen data. To address these problems, we propose a difficulty-centered contrastive learning method for KT models and a Large Language Model (LLM)-based framework for difficulty prediction. These methods seek to improve the performance of KT models and provide accurate difficulty estimates for unseen data. Our ablation study demonstrates the efficacy of these techniques, showing enhanced KT model performance. Nonetheless, the complex relationship between language and difficulty merits further investigation.
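
As a rough illustration of the difficulty-centered contrastive learning idea, the sketch below shows an InfoNCE-style loss over two views of a learner's interaction-sequence embedding, where the second view is assumed to come from a difficulty-aware augmentation. The function names, the augmentation, and the loss weighting are illustrative assumptions, not the paper's implementation.

# Minimal sketch (not the authors' code): an InfoNCE-style contrastive loss over
# two "views" of a learner's interaction-sequence embedding, where the second view
# is assumed to be produced by a difficulty-aware augmentation (e.g. re-embedding
# the sequence with predicted question difficulties).
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of the original and difficulty-augmented views."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                      # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Hypothetical usage: add this term to the KT model's usual binary cross-entropy
# on correctness predictions, e.g. loss = bce + lambda_cl * info_nce_loss(z, z_aug).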

2017

Multi-Channel Lexicon Integrated CNN-BiLSTM Models for Sentiment Analysis
Joosung Yoon | Hyeoncheol Kim
Proceedings of the 29th Conference on Computational Linguistics and Speech Processing (ROCLING 2017)

Adullam at SemEval-2017 Task 4: Sentiment Analyzer Using Lexicon Integrated Convolutional Neural Networks with Attention
Joosung Yoon | Kigon Lyu | Hyeoncheol Kim
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

We propose a sentiment analyzer for the prediction of document-level sentiments of English micro-blog messages from Twitter. The proposed method is based on lexicon integrated convolutional neural networks with attention (LCA). Its performance was evaluated using the datasets provided by the SemEval-2017 competition (Task 4). The proposed sentiment analyzer obtained an average F1 of 55.2%, an average recall of 58.9%, and an accuracy of 61.4%.
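
The sketch below is a minimal, hypothetical reading of a lexicon integrated CNN with attention: word embeddings are concatenated with per-token lexicon scores, convolved, and pooled with a simple attention layer. All layer sizes, feature dimensions, and the module name are assumptions rather than the authors' configuration.

# Minimal sketch (not the authors' code): a lexicon-integrated CNN with a simple
# attention pooling layer, assuming word embeddings are concatenated with
# per-token lexicon scores before the convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LexiconCNNAttention(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100, lex_dim: int = 4,
                 n_filters: int = 64, kernel_size: int = 3, n_classes: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim + lex_dim, n_filters, kernel_size, padding=kernel_size // 2)
        self.attn = nn.Linear(n_filters, 1)     # scores each time step
        self.out = nn.Linear(n_filters, n_classes)

    def forward(self, tokens: torch.Tensor, lex_feats: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len); lex_feats: (batch, seq_len, lex_dim)
        x = torch.cat([self.embed(tokens), lex_feats], dim=-1)       # (batch, seq, emb+lex)
        h = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)     # (batch, seq, filters)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)    # (batch, seq) attention weights
        pooled = (h * weights.unsqueeze(-1)).sum(dim=1)              # attention-weighted sum over time
        return self.out(pooled)                                      # logits over sentiment classes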