Don’t miss another episode of Truth In Data!
Data for today’s episode is provided by Mercator Advisory Group’s report, Tracking Mistakes in AI: Use Vigilance to Avoid Errors.
The 5 Ws (and 1 H) of Artificial Intelligence Training Data:
- WHO: Who supplied the data? Whom does the data represent demographically?
- WHAT: What are the access rights? What is the data structure?
- WHEN: When was the data collected? When does the data expire?
- WHERE: Where was the data collected geographically? Where is the general study area?
- WHY: Why was the data collected? Why are any values missing?
- HOW: How was the data collected and created? How is the data related to other data? (A structured sketch of these answers follows this list.)
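To make the checklist concrete, here is a minimal sketch (not from the report) of how these provenance questions might be captured as a structured record attached to a training dataset. All class and field names, such as DatasetProvenance, are illustrative assumptions, as are the example values.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class DatasetProvenance:
    """Provenance record answering the 5 Ws (and 1 H) for a training dataset."""
    supplier: str                # WHO supplied the data
    demographics: str            # WHO the data represents demographically
    access_rights: str           # WHAT the access rights are
    data_structure: str          # WHAT the data structure is
    collected_on: date           # WHEN the data was collected
    expires_on: Optional[date]   # WHEN the data expires (None = never)
    collection_region: str       # WHERE the data was collected geographically
    study_area: str              # WHERE the general study area is
    purpose: str                 # WHY the data was collected
    missing_value_notes: str     # WHY any values are missing
    collection_method: str       # HOW the data was collected and created
    related_datasets: List[str] = field(default_factory=list)  # HOW it relates to other data

# Hypothetical example record
record = DatasetProvenance(
    supplier="Acme Data Co. (hypothetical)",
    demographics="US adults aged 18-65",
    access_rights="licensed for internal model training only",
    data_structure="tabular CSV",
    collected_on=date(2019, 6, 1),
    expires_on=date(2022, 6, 1),
    collection_region="United States",
    study_area="consumer payments",
    purpose="credit-risk model training",
    missing_value_notes="income omitted where applicants declined to answer",
    collection_method="online application forms",
    related_datasets=["transactions_2019 (hypothetical)"],
)
print(record)
```

Requiring a record like this before a dataset is used forces these questions to be answered up front, rather than discovered after a biased model has shipped.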
About the Report
AI models reflect existing biases unless the data scientists developing the systems explicitly eliminate them, and those biases can re-emerge as the underlying data shifts over time. Constant monitoring of the entire operation is required to detect these shifts. The remedy for such lapses in vigilance is training.
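As one illustration of what such monitoring could look like in practice (the report does not prescribe a specific method), here is a minimal sketch that compares a model’s live input distribution against its training baseline using SciPy’s two-sample Kolmogorov-Smirnov test. The feature, numbers, and significance threshold are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag a feature whose live distribution has shifted away from the
    training baseline, using a two-sample Kolmogorov-Smirnov test.
    `alpha` is an assumed significance threshold."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha  # True means the shift is statistically significant

# Hypothetical example: incomes seen in production skew higher than in training
rng = np.random.default_rng(seed=42)
training_income = rng.normal(loc=55_000, scale=12_000, size=10_000)
live_income = rng.normal(loc=62_000, scale=12_000, size=2_000)

if detect_drift(training_income, live_income):
    print("Input drift detected: review the model before trusting its output.")
```

A check like this only flags that inputs have moved; deciding whether the shift reintroduces bias still requires a human review, which is where staff training comes in.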
Mercator Advisory Group’s latest research report, Tracking Mistakes in AI: Use Vigilance to Avoid Errors, discusses the ways in which data models can deliver biased results and how financial institutions (FIs) can correct for these biases.
“AI solutions can unwittingly go astray,” comments Tim Sloane, the report’s author, director of Mercator Advisory Group’s Emerging Technology Advisory Service, and its VP of Payments Innovation. “Applying AI to issues that can have large negative social consequences should be avoided. One example is using AI to implement the business plan of social networks such as Facebook, YouTube, and others, as presented in the documentary ‘The Social Dilemma.’ The documentary contends that social networks have optimized AI to drive advertising revenue at the expense of the individual and society. To drive revenue, social networks build psychographic models of each user to predict exactly which content will best engage that user.”