The MIT taxonomy helps build explainability into the components of machine learning models

Image caption: Explanation methods that help users understand and trust machine learning models often describe how much certain features used in the model contribute to its prediction. Credit: Christine Daniloff, MIT; image from a photo agency

Researchers are building tools to help data scientists make the features used in machine learning models more understandable to end users.

Explanation methods that help users understand and trust machine learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing heart disease, a doctor might want to know how much the patient’s heart rate data affects that prediction.

But if these features are so complex or convoluted that the user cannot understand them, is the explanation method of any use?

MIT researchers are striving to improve the interpretability of features so decision makers can better leverage the results of machine learning models. Based on years of fieldwork, they have developed a taxonomy to help developers create features that are easier for their target audience to understand.

“We found that in the real world, even though we’ve used state-of-the-art methods to explain machine learning models, there’s still a lot of confusion because of the features and not the model itself,” says Alexandra Zytek, PhD student in electrical engineering and computer science and lead author of an article presenting the taxonomy.

To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine learning model’s prediction. They also provide guidance on how modelers can transform features into formats that are easier for a layperson to understand.

They hope their work will inspire modelers to use interpretable features early in the development process, rather than trying to work backwards and focusing on explainability after the fact.

MIT co-authors are Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, a senior research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a senior data scientist at Corelight. The research was published in the June issue of the ACM SIGKDD Explorations Newsletter, the peer-reviewed publication of the Association for Computing Machinery’s Special Interest Group on Knowledge Discovery and Data Mining.

Real World Lessons

Features are input variables fed to machine learning models; they are usually pulled from the columns of a dataset. Data scientists typically select and hand-craft features for the model, and their main focus is on making sure features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.
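As a rough illustration of that workflow, the sketch below uses a hypothetical dataset (the file name, column names, and engineered feature are invented for this example) to show a feature being hand-crafted from raw columns and fed to a model:

```python
# Minimal sketch: features are dataset columns, sometimes hand-crafted.
# The file name, column names, and engineered feature are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("patients.csv")  # e.g. columns: weight_kg, height_m, had_complication

# A hand-crafted feature, chosen to improve accuracy rather than readability:
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

X = df[["bmi", "weight_kg"]]   # features: the input columns fed to the model
y = df["had_complication"]     # label: what the model learns to predict

model = LogisticRegression().fit(X, y)
print(model.score(X, y))
```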

For several years, he and his team have been working with decision makers to identify usability challenges for machine learning. These domain experts, most of whom are unfamiliar with machine learning, often don’t trust models because they don’t understand the characteristics that drive predictions.

For one project, they worked with doctors in a hospital’s intensive care unit who used machine learning to predict a patient’s risk of developing complications after heart surgery. Some features were presented as aggregated values, such as the trend of a patient’s heart rate over time. While features encoded in this way were “model-ready” (the model could handle the data), clinicians did not understand how they were computed. They would rather see how these aggregated features relate to the original values so they could identify abnormalities in a patient’s heart rate, Liu says.
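To make that distinction concrete, the sketch below (with invented column names and readings) computes a “model-ready” heart-rate trend per patient and shows the raw readings a clinician would rather trace it back to:

```python
# Sketch: an aggregated "model-ready" feature vs. the raw values behind it.
# Column names and readings are hypothetical.
import pandas as pd

hr = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "hour":       [0, 1, 2, 0, 1, 2],
    "heart_rate": [72, 95, 120, 80, 78, 79],
})

# Model-ready aggregate: overall heart-rate trend (bpm per hour) for each patient.
grouped = hr.sort_values("hour").groupby("patient_id")
delta_bpm = grouped["heart_rate"].last() - grouped["heart_rate"].first()
delta_hours = grouped["hour"].last() - grouped["hour"].first()
trend = (delta_bpm / delta_hours).rename("hr_trend_bpm_per_hour")
print(trend)                        # the single number the model sees

# What a clinician would rather inspect: the raw readings behind that number.
print(hr[hr["patient_id"] == 1])
```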

In contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like “number of posts a student made on discussion boards,” they would rather group related features and label them with terms they understand, like “participation.”
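A sketch of that kind of grouping, again with invented column names, might sum several related discussion-board counts into a single feature labeled in the audience’s own terms:

```python
# Sketch: rolling several related features into one human-labeled feature.
# Column names and counts are hypothetical.
import pandas as pd

students = pd.DataFrame({
    "forum_posts":     [3, 0, 7],
    "forum_replies":   [5, 1, 2],
    "threads_started": [1, 0, 2],
})

# One aggregated feature, named in terms the audience already uses.
students["participation"] = students[
    ["forum_posts", "forum_replies", "threads_started"]
].sum(axis=1)
print(students)
```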

“When it comes to interpretability, one size doesn’t fit all. As you go from area to area, there are different needs. And the interpretability itself has many levels,” says Veeramachaneni.

The idea that one size doesn’t fit all is key to the researchers’ taxonomy. They define properties that can make features more or less interpretable by different decision makers, and outline which properties are likely to be most important to specific users.

For example, machine learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the performance of the model.

On the other hand, decision makers with no machine learning experience might be better served by features that are human-worded, meaning they are described in a way that is natural for users, and human-understandable, meaning they refer to real-world metrics users can reason about.

“The taxonomy asks: if you are creating interpretable features, to what level are they interpretable? Depending on the type of domain experts you work with, you may not need all levels,” says Zytek.

Interpretability From the Start

The researchers also outline feature engineering techniques that a developer can apply to make features more interpretable for a given audience.

Feature engineering is the process by which data scientists transform data into a format that machine learning models can process, using techniques such as data aggregation or value normalization. Most models also cannot handle categorical data unless it is converted to a numeric code. These transformations are often nearly impossible for laypeople to unpack.
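The sketch below (hypothetical columns and values) applies two of those common steps, value normalization and categorical encoding, and prints the opaque numeric table that results:

```python
# Sketch: typical "model-ready" transformations that are hard to read back.
# Column names and values are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age":        [2, 35, 67],
    "department": ["cardiology", "oncology", "cardiology"],
})

# Value normalization: age becomes a z-score with no obvious real-world unit.
df["age_scaled"] = StandardScaler().fit_transform(df[["age"]]).ravel()

# Categorical encoding: department becomes a block of 0/1 indicator columns.
encoded = pd.get_dummies(df[["department"]], prefix="dept")

model_ready = pd.concat([df[["age_scaled"]], encoded], axis=1)
print(model_ready)   # usable by a model, but hard for a layperson to unpack
```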

Creating interpretable features might require undoing some of that encoding, Zytek says. For example, a common feature engineering technique organizes spans of data so they all contain the same number of years. To make these features easier to interpret, one could group ages using human terms such as infant, toddler, child, and teenager. Or, instead of using a transformed feature like average heart rate, an interpretable feature could simply be the actual heart rate data, Liu adds.
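A minimal sketch of that idea, using pandas and invented cut points for the age groups, replaces equal-width year bins with human-readable labels:

```python
# Sketch: binning ages with human-readable labels instead of opaque numeric spans.
# The cut points chosen for each age group are hypothetical.
import pandas as pd

ages = pd.Series([1, 2, 4, 9, 15], name="age")

# Equal-width bins: "model-ready" but opaque, e.g. (0.986, 4.5], (4.5, 8.0], ...
equal_width = pd.cut(ages, bins=4)

# The same data grouped with terms people already use.
human_terms = pd.cut(
    ages,
    bins=[0, 1, 3, 12, 19],
    labels=["infant", "toddler", "child", "teenager"],
)

print(pd.DataFrame({"age": ages, "equal_width": equal_width, "human_terms": human_terms}))
```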

“In many domains, the trade-off between interpretable features and model accuracy is actually very small. For example, when working with child welfare screeners, we retrained the model using only features that met our interpretability definitions, and the performance drop was almost negligible,” says Zytek.

Building on this work, researchers are developing a system that allows a model developer to more efficiently handle complicated feature transformations to create human-centric explanations for machine learning models. This new system will also transform algorithms for explaining model-ready datasets into formats that decision-makers can understand.

Reference: “The Need for Interpretable Features: Motivation and Taxonomy” by Alexandra Zytek, Ignacio Arnaldo, Dongyu Liu, Laure Berti-Equille, and Kalyan Veeramachaneni, June 21, 2022, ACM SIGKDD Explorations Newsletter.
DOI: 10.1145/3544903.3544905
