
Enhancing the quality of AI requires moving beyond the quantitative

Date:
August 9, 2019
Source:
New York University
Summary:
Artificial Intelligence engineers should enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, in order to reduce the potential harm of their creations and to better serve society as a whole, a pair of researchers has concluded.
FULL STORY

Artificial Intelligence engineers should enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, in order to reduce the potential harm of their creations and to better serve society as a whole, a pair of researchers has concluded in an analysis that appears in the journal Nature Machine Intelligence.

"There is mounting evidence that AI can exacerbate inequality, perpetuate discrimination, and inflict harm," write Mona Sloane, a research fellow at New York University's Institute for Public Knowledge, and Emanuel Moss, a doctoral candidate at the City University of New York. "To achieve socially just technology, we need to include the broadest possible notion of social science, one that includes disciplines that have developed methods for grappling with the vastness of social world and that helps us understand how and why AI harms emerge as part of a large, complex, and emergent techno-social system."

The authors outline ways in which social science approaches, and their many qualitative methods, can broadly enhance the value of AI while helping it avoid documented pitfalls. Studies have shown that search engines may discriminate against women of color, while many analysts have raised questions about how self-driving cars will make socially acceptable decisions in crash situations (e.g., avoiding humans rather than fire hydrants).

Sloane, also an adjunct faculty member at NYU's Tandon School of Engineering, and Moss acknowledge that AI engineers are currently seeking to instill "value-alignment" -- the idea that machines should act in accordance with human values -- in their creations, but add that "it is exceptionally difficult to define and encode something as fluid and contextual as 'human values' into a machine."

To address this shortcoming, the authors offer a blueprint for inclusion of the social sciences in AI through a series of recommendations:

  • Qualitative social research can help us understand the categories through which we make sense of social life and which are being used in AI. "For example, technologists are not trained to understand how racial categories in machine learning are reproduced as a social construct that has real-life effects on the organization and stratification of society," Sloane and Moss observe. "But these questions are discussed in depth in the social sciences, which can help create the socio-historical backdrop against which the...history of ascribing categories like 'race' can be made explicit."
  • A qualitative data-collection approach can establish protocols to help diminish bias. "Data always reflects the biases and interests of those doing the collecting," the authors note. "Qualitative research is explicit about the data collection, whereas quantitative research practices in AI are not."
  • Qualitative research typically requires researchers to reflect on how their interventions affect the world in which they make their observations. "A quantitative approach does not require the researcher or AI designer to locate themselves in the social world," they write. "It therefore does not require an assessment of who is included in vital AI design decisions, and who is not."

"As we move onwards with weaving together social, cultural, and technological elements of our lives, we must integrate different types of knowledge into technology development," Sloane and Moss conclude. "A more socially just and democratic future for AI in society cannot merely be calculated or designed; it must be lived in, narrated, and drawn from deep understandings about society."


Story Source:

Materials provided by New York University. Note: Content may be edited for style and length.


Journal Reference:

  1. Mona Sloane, Emanuel Moss. AI’s social sciences deficit. Nature Machine Intelligence, 2019; 1 (8): 330 DOI: 10.1038/s42256-019-0084-6
