Artificial Intelligence Needs Human Values

Privacy Plus+

Privacy, Technology and Perspective

This week, we offer some thoughts as we highlight UNESCO’s draft Recommendation on the Ethics of AI (“Recommendation”).  The draft is available on UNESCO’s website through the following link:

https://en.unesco.org/news/major-progress-unescos-development-global-normative-instrument-ethics-ai

The Recommendation’s main points:

Artificial Intelligence (“AI”) has enormous potential for social good if it develops in a way that benefits humanity, respects global norms and standards, and is anchored in peace and development.

There is no single definition of AI. Instead, AI Systems involve information-processing technologies that produce a capacity to learn and to perform cognitive tasks, such as making predictions.  AI Systems are designed to operate with some degree of autonomy.  Examples of AI Systems include, but are by no means limited to, machine learning, IoT systems and robotics.

The benefits of AI cannot be realized without acknowledging and addressing the existential risks that AI presents. While AI has enormous potential for promoting social good, it also carries the risk of widening existing inequalities and divisions. Bias, lack of diversity and inclusion, dangers to privacy and data protection, increased disinformation, and weaponization are just some of the threats posed by unchecked AI. Making AI work for humanity, and protecting individuals from its potential harms, are paramount.

We need a universal, values- and principles-based framework so that AI Systems are designed, deployed, and overseen by thoughtful humans.  To make AI work for the good of humanity and to prevent harm, we must agree on a global and intercultural ethical framework based on these shared values:

·       Respect, protection and promotion of human dignity, human rights and fundamental freedoms. AI Systems should be developed and implemented judiciously, with input from a broad range of stakeholders, guided by international human rights law, and with deliberate consideration for protecting human dignity and safeguarding the interests of present and future generations. “Human dignity relates to the recognition of the intrinsic worth of each individual human being…[and] [n]o human being should be harmed physically, economically, socially, politically, or mentally during any phase of the life cycle of AI Systems.”

·       Encouraging the environment and ecosystem to flourish. The environmental impact of AI Systems should be minimized.

·       Ensuring diversity and inclusiveness.  AI Systems should be designed with input from a diverse range of individuals, which helps mitigate the risk of bias.

·       Living in harmony and peace. AI Systems should promote harmony and peace, and should not undermine the safety of human beings or divide and turn individuals and groups against each other.

In addition, these familiar principles should apply:

·       Proportionality and “do no harm”, meaning that the use of AI technologies should not exceed what is necessary to achieve their purpose, and should not cause harm;

·       Safety and security, meaning that control over deployed AI Systems should be maintained and access by malign forces prevented;

·       Fairness and non-discrimination – AI Systems should not deliver biased results, and bias should be minimized during the development of algorithms.  For example, large data sets should be used for machine learning.

·       Sustainability – AI Systems should respect the environment, and heat generation and other environmental impacts should be minimized. AI technologies should also be leveraged to improve ecosystem management and habitat restoration through, for example, the use of analytics to support decision-making.

·       Privacy, meaning that AI Systems should not impinge on individual privacy, but should deliver their insights/results without exposing personal information.

·       Human oversight and determination, meaning that AI should not supplant humans in making decisions that affect others’ lives, and that AI Systems should therefore require human oversight.

·       Transparency and “explainability”, meaning that AI should not be powered by unexplainable algorithms or by opaque data sets or data selection, and that even the most complex technologies should be explainable to a general audience.

·       Responsibility and accountability, meaning that the responsibility for AI systems must be defined, and those who control those systems must be accountable for them.

AI already infuses our lives, curating our newsfeeds and influencing our choices, often in ways that can be harmful.  As the reach of AI expands, we agree that there needs to be a global consensus regarding the values that underpin the development and deployment of AI Systems, so that AI does not evolve lawlessly, unethically or dangerously.  AI technologies are as global as the lives they affect, and a global framework to ensure the ethical use of AI helps, and does not hurt.

---

Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.
