reading-notes

Ethics

ACM Code of Ethics and Professional Conduct

Computing professionals’ actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good. The ACM Code of Ethics and Professional Conduct (“the Code”) expresses the conscience of the profession.

The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way.

GENERAL ETHICAL PRINCIPLES.

  1. Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
  2. Avoid harm.
  3. Be honest and trustworthy.
  4. Be fair and take action not to discriminate.
  5. Respect the work required to produce new ideas, inventions, creative works, and computing artifacts.
  6. Respect privacy.
  7. Honor confidentiality.

PROFESSIONAL RESPONSIBILITIES.

  1. Strive to achieve high quality in both the processes and products of professional work.
  2. Maintain high standards of professional competence, conduct, and ethical practice.
  3. Know and respect existing rules pertaining to professional work.
  4. Accept and provide appropriate professional review.
  5. Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.
  6. Perform work only in areas of competence.
  7. Foster public awareness and understanding of computing, related technologies, and their consequences.
  8. Access computing and communication resources only when authorized or when compelled by the public good.
  9. Design and implement systems that are robustly and usably secure.

PROFESSIONAL LEADERSHIP PRINCIPLES.

  1. Ensure that the public good is the central concern during all professional computing work.
  2. Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by members of the organization or group.
  3. Manage personnel and resources to enhance the quality of working life.
  4. Articulate, apply, and support policies and processes that reflect the principles of the Code.
  5. Create opportunities for members of the organization or group to grow as professionals.
  6. Use care when modifying or retiring systems.
  7. Recognize and take special care of systems that become integrated into the infrastructure of society.

COMPLIANCE WITH THE CODE.

  1. Uphold, promote, and respect the principles of the Code.
  2. Treat violations of the Code as inconsistent with membership in the ACM.

The Software Engineering Code of Ethics and Professional Practice

The Software Engineering Code of Ethics and Professional Practice (Version 5.2) was recommended by the ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional Practices and jointly approved by the ACM and the IEEE-CS as the standard for teaching and practicing software engineering.

In accordance with their commitment to the health, safety and welfare of the public, software engineers shall adhere to the following Eight Principles:

  1. PUBLIC – Software engineers shall act consistently with the public interest.

  2. CLIENT AND EMPLOYER – Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.

  3. PRODUCT – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.

  4. JUDGMENT – Software engineers shall maintain integrity and independence in their professional judgment.

  5. MANAGEMENT – Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.

  6. PROFESSION – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.

  7. COLLEAGUES – Software engineers shall be fair to and supportive of their colleagues.

  8. SELF – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Ethics in the workplace

The code I’m still ashamed of

If you write code for a living, there’s a chance that at some point in your career, someone will ask you to code something a little deceitful – if not outright unethical.

Google’s censored search engine for China

Doing business in China is good for shareholders, bad for humanity

It’s no mystery why Google executives want to do business with Chinese government officials: it’s profitable. With a population of 1.3 billion, China has the largest number of internet users in the world, so breaking into the Chinese market has been a long-standing goal for Silicon Valley tech giants in their quest to find new users and grow profits.

But working in China inevitably raises ethical issues for any US company. Doing business in mainland China means making deals with an authoritarian government that has a record of human rights abuses and strictly suppresses speech.

Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance

Google is committing to not using artificial intelligence for weapons or surveillance after employees protested the company’s involvement in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyze drone footage.

Google CEO Sundar Pichai announced the change in a set of AI principles released today. The principles are intended to govern Google’s use of artificial intelligence and are a response to employee pressure on the company to create guidelines for its use of AI.

Employees at the company have spent months protesting Google’s involvement in Project Maven, sending a letter to Pichai demanding that Google terminate its contract with the Department of Defense. Several employees even resigned in protest, concerned that Google was aiding the development of autonomous weapons systems.

Ethics in Technology

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found a telling tension: respondents generally agreed that, in the case of an inevitable crash, a car should kill the fewest people possible, regardless of whether they were passengers or people outside the car, yet they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.

“These ethical problems are not just theoretical,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.
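
To make the trade-off in Shariff’s study concrete, here is a minimal sketch of the purely utilitarian rule that respondents endorsed in the abstract: pick whichever action minimizes expected casualties, counting passengers and bystanders equally. Every name and number below is hypothetical, invented only for illustration.

```python
# Hypothetical sketch of a casualty-minimizing crash rule.
# Not from any real vehicle system; numbers are invented.

def choose_action(actions):
    """Return the action with the fewest expected casualties,
    weighing passengers and people outside the car equally."""
    return min(actions, key=lambda a: a["expected_casualties"])

if __name__ == "__main__":
    actions = [
        {"name": "stay_course", "expected_casualties": 3},  # hits pedestrians
        {"name": "swerve", "expected_casualties": 1},       # sacrifices a passenger
    ]
    print(choose_action(actions)["name"])  # -> swerve
```

The study’s point is that buyers balk at exactly this rule once the minimized casualties can include themselves.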

The ethical dilemmas of self-driving cars

“The greater challenge is the artificial intelligence behind the machine,” Toyota Canada president Larry Hutchinson said, addressing the TalkAuto Conference in Toronto last November. “Think of the millions of situations that we process and decisions that we have to make in real traffic. … We need to program that intelligence into a vehicle, but we don’t have the data yet to create a machine that can perceive and respond to the virtually endless permutations of near misses and random occurrences that happen on even a simple trip to the corner store.”

The cybersecurity risk of self-driving cars

In principle, any computerized system with an interface to the outside world is potentially hackable. Any computer scientist knows that it is very difficult to create software without bugs, especially when the software is complex. Some bugs are security vulnerabilities, and some of those vulnerabilities are exploitable.
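
As a minimal illustration of how an ordinary bug becomes an exploitable vulnerability, the sketch below shows a diagnostic handler that builds a shell command from untrusted input, say a hostname received over a vehicle’s network interface. Everything here is hypothetical and not drawn from any real vehicle codebase.

```python
# Hypothetical sketch: the same feature written two ways.
import subprocess

def ping_host_vulnerable(host: str) -> int:
    # BUG: untrusted input is interpolated into a shell command.
    # Input like "8.8.8.8; rm -rf /" runs an attacker's command too.
    return subprocess.call(f"ping -c 1 {host}", shell=True)

def ping_host_safer(host: str) -> int:
    # Safer: arguments are passed as a list, so no shell ever
    # parses the attacker-controlled string.
    return subprocess.call(["ping", "-c", "1", host])
```

The vulnerable version behaves correctly on well-formed input, which is exactly why such bugs survive testing and ship.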

Tech Company Principles

Microsoft AI principles

We put our responsible AI principles into practice through the Office of Responsible AI (ORA), the AI, Ethics, and Effects in Engineering and Research (Aether) Committee, and Responsible AI Strategy in Engineering (RAISE). The Aether Committee advises our leadership on the challenges and opportunities presented by AI innovations. ORA sets our rules and governance processes, working closely with teams across the company to enable the effort. RAISE is a team that enables the implementation of Microsoft’s responsible AI rules across engineering groups.

Ethical OS

As technologists, it’s only natural that we spend most of our time focusing on how our tech will change the world for the better. Which is great. Everyone loves a sunny disposition. But perhaps it’s more useful, in some ways, to consider the glass half empty.

Google AI Principles

We will assess AI applications in view of the following objectives. We believe that AI should:

  1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment.

  2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. (A toy sketch of one such bias check appears after these principles.)

  3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

  4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.

  5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration.

  7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications.
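
As a toy illustration of the kind of check the second principle alludes to, the sketch below computes the demographic parity difference: the gap in favorable-outcome rates between two groups. The data and group labels are invented, and real bias audits involve many metrics and far more care.

```python
# Hypothetical sketch of one coarse fairness metric.
# 1 = favorable outcome (e.g., loan approved), 0 = unfavorable.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

if __name__ == "__main__":
    group_a = [1, 1, 0, 1, 0, 1]  # 4/6 favorable
    group_b = [0, 1, 0, 0, 1, 0]  # 2/6 favorable
    print(demographic_parity_difference(group_a, group_b))  # ~0.33
```

A gap near zero rules out only this one coarse disparity; as the principle itself concedes, distinguishing fair from unfair bias is rarely that simple.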