When people sound the alarm about the ethical and social pitfalls of computing, and especially of artificial intelligence, they are often reacting to systems already in use. How should social media platforms handle algorithms that amplify hate speech and misinformation? Are racial and gender biases hidden in systems that evaluate creditworthiness and job applications? Does facial recognition jeopardize privacy?
But a new report from an advisory panel to the National Academies of Sciences argues that computing researchers, and the institutions that fund them, need to anticipate social and ethical risks long before they develop products, says panel member John Hennessy, former president of Stanford University and an adviser to Stanford HAI.
If not, the report warns, it could be too late.
“It is much easier to design technology right the first time than to fix it later,” the report warns. “Failure to consider consequences early in the research process increases the risk of adverse social or ethical outcomes.”
Read the full report: Fostering Responsible Computing Research: Foundations and Practices.
It may sound obvious, but the authors, who include luminaries in computer science, social science, and philosophy, call for a broad rethink by the institutions that fund and conduct computing research: universities, corporations, professional associations, and governments.
This means engaging early with stakeholders as well as with experts in the social sciences, ethics, and moral reasoning. It also means giving serious early consideration to the potential for unintended uses, or misuse, of new technologies.
“One of the difficulties is that computing technology, especially these underlying models, is a general-purpose technology that can be used for all sorts of things the developer never intended,” said Hennessy, a professor of computer science at Stanford and now chairman of Alphabet, Google’s parent company. “We can’t prevent all misuse, but we can at least provide some warnings that people can use as guidelines.”
A chain of responsibility
The report cites the example of third-party “cookies,” small markers that track users’ behavior on websites. Originally intended simply to facilitate transactions such as online shopping, cookies quickly became tools for data harvesters to track users and their web activity across sites. If researchers had thought through the privacy issues first, they could have built in more protections before cookies became a global standard, Hennessy said.
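For readers unfamiliar with the mechanism, the sketch below illustrates the pattern the report describes: a tracker served from one domain is embedded on many unrelated sites, and because the browser sends that domain’s cookie with every such request, the tracker can link a user’s visits across all of them. This is a minimal, hypothetical illustration in Python (the visitor-ID scheme, the `page=` parameter, and the in-memory store are invented for this example), not code from the report.

```python
# Minimal sketch of third-party cookie tracking (hypothetical example).
# A "tracking pixel" served from one domain is embedded on many unrelated
# sites as an <img>. The browser attaches this server's cookie to every
# request, letting the server correlate one user's visits across sites.

import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

visits = {}  # visitor_id -> pages seen (stands in for a real datastore)

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        if "visitor_id" in cookie:
            visitor = cookie["visitor_id"].value   # returning browser
        else:
            visitor = uuid.uuid4().hex             # first sighting: mint an ID

        # The embedding page reports its own URL, e.g. /px?page=shop.example/cart
        page = self.path.partition("page=")[2] or "unknown"
        visits.setdefault(visitor, []).append(page)

        self.send_response(200)
        # Re-set the cookie so the ID persists; in a real deployment,
        # attributes such as SameSite and Secure come into play here.
        self.send_header("Set-Cookie", f"visitor_id={visitor}; Max-Age=31536000")
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # stub for a 1x1 tracking pixel

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackerHandler).serve_forever()
```

Protections that browsers later retrofitted, such as restrictive default cookie attributes and outright blocking of third-party cookies, are the kind of safeguard that, in Hennessy’s telling, could have been designed in from the start.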
The open-ended possibilities of AI and other computing technologies set them apart from most other areas of innovation, Hennessy says. A new vaccine may have unwanted side effects, but it is used for a very targeted purpose. A new algorithm or piece of code, by contrast, becomes a tool that can be turned to entirely new purposes.
“The chain of responsibility starts at the research stage,” he says. “Researchers have a responsibility to try to mitigate such problems, but they also have a responsibility to make users aware of the potential pitfalls.”
Recommendations for researchers
The report recommends several ways to instill attention to social and ethical concerns, even in early-stage research.
Government agencies, which fund much computing research, can require that all proposals address such potential risks. Stanford HAI’s own grant-making process requires just this kind of ethical and societal review. Similarly, professional societies and journals can require that newly published research include an in-depth discussion of potential problems.
More broadly, the report says, research institutions should ensure that computer scientists have access to experts in other fields who can offer a broader perspective on potential problems.
“Until relatively recently, many researchers and observers considered computing technology to be value-neutral,” the report notes. In fact, it argues, new computing technologies are inevitably imprinted with the values their designers considered, and the range of stakeholders those designers have in mind may not be wide enough to ensure a technology meets everyone’s needs.
“The goal of an ethical grounding in technology is not to investigate every possible issue,” says Hennessy. “It’s to make you aware of these issues so that when you come across situations with potential ethical trade-offs, you can recognize them and deal with them.”
The NAS panel that produced the report was chaired by Barbara Grosz, a computer scientist at Harvard University who is also affiliated with the Stanford Institute for Human-Centered Artificial Intelligence. In addition to Hennessy, the panel’s members included Mark Ackerman, professor of human-computer interaction at the University of Michigan; Steve Bellovin, professor of computer science at Columbia University; David Danks, professor of philosophy and data science at the University of California, San Diego; Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Megan Finn of the University of Washington; Mary Gray of Microsoft Research; Ayanna Howard of Ohio State University; Jon Kleinberg of Cornell University; James Manyika of the McKinsey Global Institute; James Mickens of Harvard University; and Amanda Stent of Colby College.
The mission of Stanford HAI is to advance AI research, education, policy, and practice to improve the human condition. Learn more.