Generative Artificial Intelligence (GenAI) and Ethics

Definitions

Generative artificial intelligence (GenAI) is a type of machine learning. It refers to machine-learning models that use neural networks to train on vast amounts of data. After training (i.e., learning), the models produce novel output based on their training sets, much like advanced auto-complete.
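To make the "advanced auto-complete" idea concrete, below is a minimal, purely illustrative sketch in Python. It is not a neural network or a real GenAI system; it is a toy word-level model that "trains" by counting which word follows which in a small made-up corpus and then "generates" new text by repeatedly predicting a next word. Real GenAI models follow the same predict-the-next-token idea, just with neural networks and vastly more data.

    import random
    from collections import defaultdict

    # Tiny, made-up "training data" for illustration only.
    corpus = ("the model learns patterns from data and "
              "the model generates new text from patterns")

    # "Training": record which words tend to follow each word.
    transitions = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

    # "Generation": start from a word and repeatedly sample a likely next word.
    def generate(start: str, length: int = 8) -> str:
        output = [start]
        for _ in range(length):
            candidates = transitions.get(output[-1])
            if not candidates:
                break
            output.append(random.choice(candidates))
        return " ".join(output)

    print(generate("the"))

Running the sketch produces short strings that recombine the training text in new ways, which is the core behavior, at a much smaller scale, that GenAI systems exhibit.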

Ethics refers both to moral principles and to the study of people's moral obligations in a society.

Emerging field of study

AI ethics is an emerging field, and there is considerable debate about what constitutes ethical AI. However, a recent analysis of national and international AI policies and guidelines found convergence around five ethical principles: transparency; justice, fairness, and equity; non-maleficence; responsibility and accountability; and privacy.

Transparency

AI systems and processes should be open and clear about how decisions are made, including disclosing the data, algorithms, and assumptions behind their functionality, to ensure trust and accountability.

Justice, Fairness, and Equity

AI systems should not perpetuate bias or discrimination, and they should be designed to treat all individuals and groups equitably, addressing and mitigating any unfair outcomes.

Non-maleficence

AI systems should not harm individuals or society, and they should be designed to avoid causing unintended negative consequences.

Responsibility and Accountability

Developers, organizations, and users of AI systems should be held accountable for the outcomes of the technology, including ensuring ethical use and addressing any harm caused.

Privacy

AI systems should protect the personal information and data of individuals, ensuring that their data is not misused or exposed without consent.

Video: Ethics of AI