Responsible artificial intelligence practices and adoption framework: a study of higher educational institutions in an emerging market
Abstract
If AI systems are to be understood in terms of their social consequences, they must be recognized as more than the sum of their software components. The environment in which AI systems are built, used, and acted upon is essentially socio-technical, with its diverse set of stakeholders, institutions, cultures, conventions, and locations. When considering the governance and implications of AI technology, or of the artefact that integrates it, it is critical to understand that the technical aspect and the socio-technical system are intricately intertwined. This system comprises the interactions between individuals and groups performing various roles (creator, manufacturer, user, bystander, policymaker, etc.), as well as the protocols and processes that govern those interactions. AI rules, principles, and methods must stress this socio-technical approach. In reality, it is not the AI artefact or application itself that is ethical, dependable, or responsible. Rather, those who build, develop, or use these systems should take responsibility and act in accordance with moral and ethical norms so that society can have faith in the system as a whole and in its outcomes. Applying ethical principles in AI, contrary to popular belief, does not exonerate individuals and organisations from accountability by handing machines any semblance of "responsibility" for their actions and choices. On the contrary, AI ethics advocates for greater accountability and responsibility on the part of the persons and groups involved, both for the decisions and actions of AI applications and for their own decision to deploy AI in a specific application setting.