Khairul's Mental Ability Pdf Download


Manric Hock

Aug 3, 2024, 3:25:10 PM
to healhlitbnercha


This study examines the impact of artificial intelligence (AI) on loss of decision-making ability, laziness, and privacy concerns among university students in Pakistan and China. Like other sectors, education is adopting AI technologies to address modern-day challenges. Investment in AI is projected to grow by USD 253.82 million from 2021 to 2025. Worryingly, however, researchers and institutions across the globe are praising the positive role of AI while ignoring its concerns. This study is based on a quantitative methodology using PLS-Smart for the data analysis. Primary data were collected from 285 students at different universities in Pakistan and China, drawn from the population using a purposive sampling technique. The findings show that AI significantly contributes to the loss of human decision-making and makes humans lazy; it also affects security and privacy. Specifically, 68.9% of the variance in human laziness, 68.6% in personal privacy and security issues, and 27.7% in the loss of decision-making are attributable to the impact of AI in Pakistani and Chinese society, indicating that human laziness is the area most affected by AI. This study therefore argues that significant preventive measures are necessary before implementing AI technology in education: accepting AI without addressing the major human concerns would be like summoning the devil. Concentrating on justified design, deployment, and use of AI in education is recommended to address these issues.
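The percentages reported above are variance-explained (R²-style) figures of the kind produced by the study's PLS-SEM analysis. As a minimal illustration of what "68.9% of laziness is due to the impact of AI" means statistically, the sketch below fits a simple least-squares line to synthetic survey-style scores and computes the coefficient of determination. All data and variable names here are hypothetical; this is an ordinary-least-squares analogy, not a reimplementation of the study's PLS-SEM model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey-style data (the study's real data is not public):
# x = perceived reliance on AI, y = self-reported laziness, Likert-like scores.
n = 285  # matches the study's sample of 285 respondents
x = rng.normal(4.0, 1.0, n)
y = 0.8 * x + rng.normal(0.0, 0.55, n)  # inject a strong linear relationship

# Fit a degree-1 polynomial (straight line) by least squares.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

# R^2 = 1 - SS_residual / SS_total: the share of variance in y
# "explained" by x. A value near 0.69 mirrors the 68.9% figure above.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")
```

In PLS-SEM the same quantity is reported per endogenous construct, but the interpretation is the one shown here: the proportion of an outcome's variance accounted for by its predictors.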

It is evident from the literature on the ethics of AI that, besides its enormous advantages, many challenges emerge with the development of AI in the context of moral values, behavior, trust, and privacy, to name a few. The education sector faces many ethical challenges while implementing or using AI, and many researchers are exploring the area further. We divide AI in education into three levels: the first is the technology itself and its manufacturers and developers; the second is its impact on the teacher; and the third is its impact on the learner or student.

Foremost, there is a need to develop AI technology for education that cannot itself become a source of ethical issues or concerns (Ayling and Chapman, 2022). The high expectations of AI have triggered worldwide interest and concern, generating more than 400 policy documents on responsible AI. Intense discussion of ethical issues lays a helpful foundation, preparing researchers, managers, policymakers, and educators for constructive conversations that can lead to clear recommendations for building reliable, safe, and trustworthy systems that are also commercially successful (Landwehr, 2015). But the question remains: is it possible to develop an AI technology for education that will never cause an ethical concern? Perhaps the developer or manufacturer stands to gain dishonestly from AI technology in education; perhaps their intentions are not directed toward the betterment and assistance of education. Such questions come to mind whenever someone discusses the impact of AI in education. Even if the development of an AI technology is free from ethical concerns on the developer's or manufacturer's side, there is no guarantee of the opposite. The risk of ethical problems also depends on technical quality: higher quality minimizes the risk, but is it possible for all educational institutions to implement expensive, high-quality technology (Shneiderman, 2021)? Secondly, many issues may arise when teachers use AI technology (Topcu and Zuck, 2020), whether in security, usage, or implementation; questions about security, bias, affordability, and trust come to mind (IEEE, 2019). Thirdly, privacy, trust, safety, and health issues exist at the user level. Addressing such questions requires a robust regulatory framework and policies. Unfortunately, no framework has been devised, no guidelines have been agreed upon, no policies have been developed, and no regulations have been enacted to address the ethical issues raised by AI in education (Ros et al., 2018).

It is evident that AI technology raises many concerns (Stahl B. C., 2021a, 2021b), and like other sectors, the education sector is facing challenges (Hax, 2018). Even if not all of these issues affect education and learning directly, most impact the education process directly or indirectly. It is therefore difficult to decide whether the ethical impact of AI on education is positive, negative, or somewhere in between, and the debate on ethical concerns about AI technology will continue from case to case and context to context (Petousi and Sifaki, 2020). This research focuses on the following three moral fears of AI in education:

Technology has impacted almost every sector and, reasonably, will continue to do so with time (Leeming, 2021). From telecommunication and communication to health and education, it plays a significant role and assists humanity in one way or another (Stahl A., 2021a, 2021b). No one can deny its importance and its applications in daily life, which provide a solid reason for its existence and development. One of the most critical technologies is artificial intelligence (AI) (Ross, 2021). AI has applications in many sectors, and education is one of them. AI applications in education include tutoring, educational assistance, feedback, social robots, admissions, grading, analytics, trial and error, virtual reality, and more (Tahiru, 2021).

Because AI is based on computer programming and computational approaches, questions can be raised about how data are analyzed, interpreted, shared, and processed (Holmes et al., 2019); how design biases, which are believed to grow over time, should be prevented from impacting students' rights; and how concerns associated with gender, race, age, income inequality, social status, and the like will be addressed (Tarran, 2018). Like any other technology, AI brings challenges to its application in education and learning. This paper focuses on the ethical concerns of AI in education. Some problems relate to privacy, data access, responsibility for right and wrong, and student records, to name a few (Petousi and Sifaki, 2020). In addition, data hacking and manipulation can challenge personal privacy and control, so a clear understanding of ethical guidelines is needed (Fjelland, 2020).

Perhaps the most important ethical guidelines for developing educational AI systems are well-being, workplace safety, trustworthiness, fairness, respect for intellectual property rights, privacy, and confidentiality. In addition, ten further principles were framed by Aiken and Epstein (2000).

Beyond the failure to follow a proper framework and principles during the planning and development of AI for education, bias, overconfidence, and wrong estimates are additional sources of ethical concern.

Stephen Hawking once said that success in creating AI would be the most significant event in human history; unfortunately, it might also be the last, unless we learn how to avoid the risks. Security is one of the major concerns associated with AI and learning (Köbis and Mehner, 2021), and researchers have examined both the promises and the challenges of trustworthy AI in education (Petousi and Sifaki, 2020; Owoc et al., 2021). Most educational institutions nowadays use AI technology in the learning process, and the area has attracted considerable research interest. Many researchers agree that AI contributes significantly to e-learning and education (Nawaz et al., 2020; Ahmed and Nashat, 2020), a claim practically proven during the recent COVID-19 pandemic (Torda, 2020; Cavus et al., 2021). But AI and machine learning have also brought many concerns and challenges to the education sector, of which security and privacy are the biggest.

Additionally, teachers often know little about the rights, acts, and laws governing privacy and security, their impact and consequences, and what violations cost students, teachers, and the country (Vadapalli, 2021). Machine learning and AI systems depend entirely on data availability; without data they are nothing, and the risk of data being misused or leaked for malicious purposes is unavoidable (Hübner, 2021).

AI systems collect and use enormous amounts of data to detect patterns and make predictions, so there is a chance of bias and discrimination (Weyerer and Langer, 2019). Many people are now concerned with the ethical attributes of AI systems and believe that security must be considered during AI system development and deployment (Samtani et al., 2021). The Facebook-Cambridge Analytica scandal is one of the most significant examples of how data collected through technology is vulnerable to privacy abuses. Although much work has been done, as the National Science Foundation recognizes, much more is still necessary (Calif, 2021). According to Kurt Markley, schools, colleges, and universities hold large banks of student records comprising health data, social security numbers, payment information, and more, and these records are at risk. Learning institutions must continuously re-evaluate and re-design their security practices to keep data secure and prevent breaches. The risk is even greater in remote learning environments (Chan and Morgan, 2019).

It is also important to note that in the current era of advanced technology, AI systems are becoming increasingly interconnected with cybersecurity as hardware and software advance (Mengidis et al., 2019). This has raised significant concerns about the security of various stakeholders and highlights the procedures policymakers must adopt to prevent or minimize the threat (Lever and Kifayat, 2020). Security concerns also grow with the number of networks and endpoints in remote learning. One problem is that protecting e-learning technology from cyber-attacks is neither easy nor cheap, especially in the education sector, where budgets for academic activities are limited (Huls, 2021). Another reason this threat is so severe is that educational institutions employ very few technical staff, and hiring more is a further economic burden. Intelligent AI and machine-learning technology can reduce the security threat to some extent, but the issue remains that not every teacher is professionally trained to use the technology or able to handle common threats. And as the use of AI in education increases, so does the danger of security concerns (Taddeo et al., 2019). No one can escape the threat AI poses to cybersecurity; it behaves like a double-edged sword (Siau and Wang, 2020).
