Ethical Intelligence for Decision Making from Human Value Education Perspective

Abstract

This research explores how Artificial Intelligence (AI) and educational technology can enhance Human Values education. It begins by reviewing recent AI applications in education, particularly those that support value, moral, and character education, with a focus on personalizing learning experiences through learner profiling, customization, and intelligent recommendations. Six categories of AI application in education were identified: Intelligent Tutoring, Library Management, Laboratory Tools, Learner Analytics, Examinations, and Smart Campus.

To support Human Values education, digital pedagogical theories were synthesized into four foundational pillars: Learning Outcomes, Learning Content and Analytics, Learning Support, and Learning Assessment. These pillars were then used to develop an Intelligent Moral Tutoring System (ITMS). As education transitions into the Industry 4.0 and Education 5.0 eras, personalized digital pedagogy becomes critical, and the growing use of technology makes ethics and human values increasingly important both in education and in its tools.

Ethical decisions often involve a choice between conflicting value groups, known as a dilemma. To resolve this cognitive dissonance, this research proposes AI-supported methods for ranking human values using transparent algorithms. Explainable AI (XAI) techniques were studied as a means of ensuring transparency in AI systems. Techniques such as SHAP and LIME were applied to a moral dataset, showcasing the strengths of combining XAI methods for clearer model interpretations. A Ranking Human Values (RHV) algorithm was developed using XAI techniques to assign weights to values from Moral Foundations Theory (MFT). The algorithm combines Sensitivity Analysis, Partial Dependence Plots, and Factor Importance to rank the human values.
Lastly, an AI model called the Value Group Classifier (VGC) was trained to assist in resolving ethical dilemmas by classifying them into value groups. Trained on real-world data, the model provides guidance while preserving human autonomy in decision-making.

The classifier uses a Support Vector Classifier (SVC) to assign each dilemma to one of three value groups, providing scaffolding to the ethical decision-maker. The design rests on the ethical theory of stakeholder management, which includes sustainable business goals. The study was conducted with 30 students and 30 adults to identify their dilemmas, and the resulting dilemma dataset was used to train the VGC, which achieved a performance score of 0.52. The VGC model overcomes the black-box biases of similar machine-learning models by preserving human autonomy in ethical decisions. The classifier was integrated into an Ethical Decision-Making Tool (EDMT) to deliver AI-driven philosophical counseling.
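A dilemma-to-value-group classifier of this shape can be sketched with a standard TF-IDF plus SVC pipeline. This is not the thesis's trained VGC: the dilemma texts and the three group labels below are hypothetical stand-ins for the study's dataset of 60 participants' dilemmas.

```python
# Illustrative sketch: a text classifier mapping dilemmas to value groups,
# using TF-IDF features and a Support Vector Classifier (SVC).
# Dilemma texts and the three value-group labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

dilemmas = [
    "Should I report a friend who cheated on the exam?",
    "Is it fair to promote one employee over an equally qualified peer?",
    "Should I obey an instruction I believe is harmful?",
    "Do I keep a promise even when it hurts someone else?",
    "Should loyalty to my team outweigh honesty to the client?",
    "May I break a rule to protect a vulnerable person?",
]
# Hypothetical three-way value-group labels
groups = ["fairness", "fairness", "authority", "care", "authority", "care"]

vgc = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
vgc.fit(dilemmas, groups)

prediction = vgc.predict(["Should I follow orders that seem unjust?"])[0]
print(prediction)
```

In a tool like the EDMT described above, such a prediction would serve only as scaffolding: the predicted group frames the dilemma, while the final decision stays with the human.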
