Conference paper Open Access

Measuring Similarity between Manual Course Concepts and ChatGPT-generated Course Concepts

Yo Ehara

Editors: Mingyu Feng; Tanja Käser; Partha Talukdar

ChatGPT is a state-of-the-art language model that facilitates natural language interaction, enabling users to obtain textual responses to their inquiries. The model has been reported to generate answers of human-like quality. While its natural language responses can be evaluated by human experts in the field through careful reading, assessing its structured responses, such as lists, can prove challenging even for experts. This study compares an openly accessible, manually validated list of "course concepts," i.e., knowledge concepts taught in courses, with the concept lists generated by ChatGPT. Course concepts assist learners in deciding which courses to take by distinguishing what a course teaches from what it assumes as prerequisites. Our experimental results indicate that only 22% to 33% of the concepts produced by ChatGPT were included in the manually validated list of 4,096 concepts for computer science courses, suggesting that these concept lists require manual adjustment for practical use. Notably, when ChatGPT generates a concept list for non-native English speakers, the overlap increases, whereas the language used for querying the model has minimal impact. Additionally, we conducted a qualitative analysis of the concepts that were generated but absent from the manual list.
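The overlap measurement described above can be sketched as a simple set comparison. This is a minimal illustration, not the paper's actual method: it assumes exact string matching after lowercasing and whitespace normalization, and the example lists are hypothetical.

```python
def normalize(concept: str) -> str:
    """Lowercase and collapse whitespace so trivially different strings match."""
    return " ".join(concept.lower().split())

def overlap_ratio(generated, validated):
    """Fraction of generated concepts found in the manually validated list."""
    validated_set = {normalize(c) for c in validated}
    hits = [c for c in generated if normalize(c) in validated_set]
    return len(hits) / len(generated) if generated else 0.0

# Hypothetical example lists (not the paper's data).
manual = ["Binary Search Tree", "dynamic programming", "Hash Table"]
chatgpt = ["binary search tree", "Quicksort", "hash table"]
print(overlap_ratio(chatgpt, manual))  # 2 of the 3 generated concepts match
```

In practice, near-duplicate concept names (e.g., singular vs. plural forms) would likely need fuzzier matching than this exact-match sketch provides.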


Cite as

Yo Ehara. (2023). Measuring Similarity between Manual Course Concepts and ChatGPT-generated Course Concepts. Proceedings of the 16th International Conference on Educational Data Mining, 474–476. https://doi.org/10.5281/zenodo.8115758
