
Incorporating human error into machine learning


Many AI systems fail to account for human error and uncertainty, particularly systems in which a human provides feedback to a machine-learning model. These systems are often built on the assumption that people are always certain and correct, whereas real decision-making involves uncertainty and occasional mistakes.

To better account for uncertainty in AI applications where humans and machines collaborate, researchers from the University of Cambridge have worked with The Alan Turing Institute, Princeton, and Google DeepMind to close the gap between human behavior and machine learning. This could help reduce risk and improve the trustworthiness and reliability of such applications, particularly where safety is crucial, such as in the identification of medical conditions.

The team modified a well-known image-classification dataset so that individuals could provide feedback and express their level of uncertainty when labeling a particular image. They found that training with uncertain labels can improve these hybrid systems' ability to handle uncertain feedback, although human uncertainty can also reduce their overall performance.

First author, Katherine Collins from Cambridge’s Department of Engineering, said, “Uncertainty is central in how humans reason about the world, but many AI models fail to consider this. Many developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person’s point of view.”


“Many human-AI systems assume that humans are always certain of their decisions, which isn’t how humans work – we all make mistakes. We wanted to look at what happens when people express uncertainty, which is especially important in safety-critical settings, like a clinician working with a medical AI system.”

The researchers employed three benchmark machine-learning datasets for their study: one for classifying digits, one for classifying chest X-rays, and one for classifying photographs of birds. For the first two datasets, the researchers simulated uncertainty; for the bird dataset, they asked human participants to indicate how certain they were about the images they were viewing, for example, whether a bird was red or orange.
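In practice, a self-reported certainty rating can be converted into a "soft" training target: a probability distribution over classes rather than a single hard label. The sketch below illustrates one simple way to do this; the function name and the two-class red/orange example are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def confidence_to_soft_label(choice: int, confidence: float, num_classes: int) -> np.ndarray:
    """Build a soft label: put the annotator's confidence on the chosen class
    and spread the remaining probability mass evenly over the other classes."""
    label = np.full(num_classes, (1.0 - confidence) / (num_classes - 1))
    label[choice] = confidence
    return label

# An annotator who is 70% sure a bird is "red" (class 0) rather than "orange" (class 1):
soft = confidence_to_soft_label(choice=0, confidence=0.7, num_classes=2)
# soft is the distribution [0.7, 0.3] instead of the hard label [1, 0]
```

A fully certain annotator (confidence 1.0) recovers the usual one-hot label, so this representation strictly generalizes standard labeling.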

The "soft labels" annotated by the human participants allowed the researchers to measure how the final output changed. However, they found that performance degraded quickly when humans replaced machines.
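Training on soft labels typically means computing cross-entropy against the annotator's full probability distribution instead of a one-hot target. The minimal sketch below (a generic formulation, not the study's implementation) shows that the loss is then minimized when the model matches the annotator's distribution, rather than when it is 100% confident in one class.

```python
import numpy as np

def soft_cross_entropy(logits: np.ndarray, soft_targets: np.ndarray) -> float:
    """Cross-entropy against a probability distribution rather than a one-hot label."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return float(-(soft_targets * log_probs).sum(axis=-1).mean())

# A model mildly favoring class 0, scored against an annotator who was 70% sure:
logits = np.array([[2.0, 1.0]])
loss_soft = soft_cross_entropy(logits, np.array([[0.7, 0.3]]))
loss_hard = soft_cross_entropy(logits, np.array([[1.0, 0.0]]))
```

With a hard target this reduces to the ordinary cross-entropy loss; with a soft target the optimum is a calibrated prediction that reproduces the annotator's uncertainty.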

The researchers said their results have identified several open challenges in incorporating humans into machine-learning models. They are releasing their datasets so that further research can take place and uncertainty can be built into machine-learning systems.


By: Amit Malewar
