Monday, August 1, 2016

Summary of "Interactive and Interpretable Machine Learning Models for Human-Machine Collaboration"


Summary

I found this thesis very relevant to what I want to do in my dissertation study. It addresses the gap between humans and machines in collaboration through interactive and interpretable models. More specifically, the thesis develops a framework for "human-in-the-loop machine learning" systems. The work consists of three parts: 1) a generative model that reproduces the human decision process, aimed at extracting the relevant information from natural human decision making and showing that machine learning can effectively predict humans' sequential plans; 2) case-based reasoning and prototype classification, aimed at providing meaningful explanations so that users can better engage with the system; the goal of this part is to examine the interpretability of the proposed system; 3) an interactive Bayesian case model, through which humans or domain experts can contribute their knowledge and preferences to the system. The author built a graphical user interface for human-machine interaction in an online educational system.
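To make the prototype idea in part 2 concrete, here is a minimal sketch of nearest-prototype classification in the spirit of case-based reasoning. This is only an illustration under simplifying assumptions, not Kim's actual model: the real Bayesian Case Model learns prototypes and per-prototype feature subspaces jointly with a clustering, while the class_medoids helper below (my own hypothetical stand-in) simply picks each class's most central training example.

    import numpy as np

    def class_medoids(X, y):
        # For each class, pick the training point closest to the class mean.
        # Simplified stand-in for the prototypes that the Bayesian Case Model
        # learns jointly with its clustering (a real BCM would also learn
        # per-prototype feature subspaces).
        prototypes, labels = [], []
        for c in np.unique(y):
            Xc = X[y == c]
            center = Xc.mean(axis=0)
            prototypes.append(Xc[np.argmin(np.linalg.norm(Xc - center, axis=1))])
            labels.append(c)
        return np.array(prototypes), np.array(labels)

    def predict(x, prototypes, labels):
        # Classify by the nearest prototype; the matched prototype itself is
        # the explanation: "this input looks like that known example".
        return labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

    # Toy usage with two Gaussian blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    protos, proto_labels = class_medoids(X, y)
    print(predict(np.array([3.5, 4.2]), protos, proto_labels))  # expected: 1

The appeal of this family of models is that the explanation is an example, which is how people naturally justify decisions ("this case looks like that case").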

Although the thesis structure is very similar to what I want to do, the author focuses more on the Bayesian decision-support model, and all of the findings and systems are built on that model. For example: how does a rescue team form a resource allocation plan over a sequence of decision steps? How can domain experts contribute their knowledge to the model to produce better results (model accuracy)? How can a graphical interface help users provide feedback and obtain better model output? From my own expertise, however, I would approach this issue from a different perspective: the recommender system.

If I follow the same three-layer structure, the whole idea would be:

  1. Interaction patterns: understand how humans behave toward the machine, e.g. a recommender system built on complex machine learning or data mining techniques. I need to know how humans interact with the system and how the system can help them better accomplish the tasks they care about; more specifically, to test which interpretability/transparency/explanation functions humans need when interacting with a recommender system.
  2. An effective system: design a system that helps users retrieve useful information or suggestions. Based on the findings above, we can design a novel system that implements the functions we identified, e.g. the kinds of interpretability/transparency/explanation functions that actually help users make better use of the recommender system.
  3. A human-in-the-loop model: from the two findings above, we know which functions are necessary and useful. The remaining challenge is to involve the human in the process; more specifically, an interactive recommender system that lets users contribute their preferences or domain knowledge to improve the model and the user experience (a toy sketch follows this list).
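As a concrete illustration of the third layer, the sketch below shows a toy interactive content-based recommender in which user feedback directly adjusts feature weights. Everything here is a hypothetical placeholder of my own (the item matrix, the additive update rule, the explain method), not a design from the thesis.

    import numpy as np

    class InteractiveRecommender:
        # Toy human-in-the-loop recommender: scores items by a weighted
        # feature match and lets the user nudge the weights with feedback.

        def __init__(self, item_features, learning_rate=0.2):
            self.items = item_features               # shape: (n_items, n_features)
            self.weights = np.ones(item_features.shape[1])
            self.lr = learning_rate

        def recommend(self, k=3):
            scores = self.items @ self.weights
            top = np.argsort(scores)[::-1][:k]
            return top, scores[top]

        def explain(self, item_id):
            # Per-feature contribution to the item's score -- the kind of
            # transparency function the second layer would evaluate.
            return self.items[item_id] * self.weights

        def feedback(self, item_id, liked):
            # Move weights toward (or away from) the item's features: a
            # naive stand-in for users contributing their preferences.
            direction = 1.0 if liked else -1.0
            self.weights += direction * self.lr * self.items[item_id]
            self.weights = np.clip(self.weights, 0.0, None)

    # Hypothetical usage: 4 items described by 3 binary features.
    items = np.array([[1, 0, 1],
                      [0, 1, 1],
                      [1, 1, 0],
                      [0, 0, 1]], dtype=float)
    rec = InteractiveRecommender(items)
    print(rec.recommend())              # initial ranking under uniform weights
    rec.feedback(item_id=1, liked=True)
    print(rec.recommend())              # ranking shifts toward item 1's features

The additive update is chosen only for brevity; in a real study I would compare principled preference-elicitation methods and measure how the explanation output affects users' trust and task performance.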

  1. Kim, Been. Interactive and Interpretable Machine Learning Models for Human-Machine Collaboration. Diss. Massachusetts Institute of Technology, 2015.
  2. Explanation in recommender systems: a literature review.
