Tuesday, August 9, 2016

Explainable Artificial Intelligence Systems



In military training and tactical information systems, the after-action review (AAR) is the most common approach to learning from an exercise. As artificial intelligence systems grow more complex, it becomes hard for users to interact with, or question the outcome of, an "AI-controlled system". This causes the following challenges: 1) it is hard for users to understand how a result was produced or processed; 2) user feedback is not well incorporated by the system; 3) a situation is hard to reproduce for training or debugging purposes, so users must re-run the system until the relevant conditions occur again. In [1], the researchers proposed a user interface for a military simulator in which the user can "ask questions" by subject, time, and entity.

However, that user interface can only provide "straightforward" information. For example, during a simulation the user can ask, "What is your location / health condition / current task?" These are simply attributes stored in the system, and they are not difficult to retrieve and display. Today, with modern data mining and machine learning techniques, many outputs lack such a straightforward explanation. Consider a targeting decision made by a deep, multi-layer neural network after hundreds of rounds of training and testing: explaining why the system chose A instead of B becomes a far more challenging problem.

For national security and military purposes, this issue is even more critical in the following respects: 1) training: if users have no idea how the system works, they cannot interact with it or correct a wrong decision made by the system or algorithm; 2) accountability: if the system makes a wrong decision, it is hard to divide responsibility between human and machine; 3) security: if all data processing and analysis happen inside a black box, using the technique in a real-world environment raises a security concern, since no one knows whether the system has been hacked or infected.

Military systems may seem far removed from everyday life, but a similar issue has already been discussed for personalized systems in [2]. A personalized system that users cannot "scrutinize and control" raises several main issues: privacy, invisibility, error correction, compatibility across systems, and controllability. The two research directions clearly overlap. Research on personalized systems focuses on the interactivity of user modeling to improve effectiveness, trust, transparency, and comprehensibility, while research on explainable artificial intelligence systems focuses more on AAR for military purposes. For example, [3] presents another case in which an explainable AAR helps users in a medical training session with self-evaluation and problem solving; there the explainable AI system plays an educational role.

Either of the two directions, combined with state-of-the-art machine learning techniques, would make a great research subject. Here is a note on three layers of machine learning categories:


  • Layer 1: Classifier (Supervised)
    • AdaBoost, logistic regression, SVM, k-NN, Naive Bayes, decision trees: classification methods essentially look for a point, line, or hyperplane that separates the two to N classes of elements, based on the training/testing data they are fed.
    • The issue here is that we must extract "features" from the raw data rather than feed in the raw data itself, and the features should reflect the properties of the original data as faithfully as possible.
    • It is then relatively simple to show the features in a latent space, for instance by plotting the decision line that separates the classes (see the first sketch after this list).
  • Layer 2: Markov Chain (semi-Supervised)
    • Hidden Markov Model (HMM): infers something unknown from a series of decisions or observations.
    • In a Markov model, we define the motion as a sequence of states with a series of observations, and the model is trained to maximize the probability of the observed output (see the second sketch after this list).
    • In [4], the author dealt with similar control and interaction issues for a sequential decision process, but a self-explaining approach is still unknown.
  • Layer 3: Deep learning (Unsupervised)
    • Convolutional neural networks (CNN), recurrent neural networks (RNN), deep neural networks (DNN): features are extracted automatically through convolutional, recurrent, or deep architectures, and the methods above are then applied to train/test the models.
    • In the two layers above, a human must extract the features based on some inference, for example a concept from physics, so the features can be interpreted through prior knowledge. But what if feature extraction is nearly impossible, as in image recognition?
    • At this layer, many of the features are not human-recognizable. For faces, we can use "eigenfaces" to visualize the features behind image recognition (see the third sketch after this list), but in most other domains the features are hard to visualize. Furthermore, the state-of-the-art approach combines the classifiers of Layer 1 with the feature extraction of this layer, which leaves many open research challenges in algorithms, interfaces, and human perception.
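
A minimal sketch of the Layer 1 point, assuming scikit-learn and its built-in iris data (my own illustration, not code from [1]-[4]): the learned coefficients of a linear classifier can be shown to the user directly, so each hand-crafted feature's contribution is itself the explanation.

```python
# Sketch: a linear classifier whose weights double as the explanation.
# The iris features were engineered by humans, which is what makes this layer easy to explain.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
feature_names = ["sepal length", "sepal width", "petal length", "petal width"]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient says how strongly a feature pushes a sample toward a class.
for cls, coefs in zip(clf.classes_, clf.coef_):
    ranked = sorted(zip(feature_names, coefs), key=lambda p: abs(p[1]), reverse=True)
    print(f"class {cls}:", ", ".join(f"{name}={w:+.2f}" for name, w in ranked))
```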
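
For Layer 2, a minimal forward-algorithm sketch with a hand-specified toy HMM (the transition and emission numbers are invented): it computes the probability of an observation sequence under the model, which is the quantity that training would try to maximize.

```python
# Sketch: the forward algorithm for a toy 2-state HMM with invented parameters.
import numpy as np

start = np.array([0.6, 0.4])            # initial state distribution
trans = np.array([[0.7, 0.3],           # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],            # P(observation | state)
                 [0.2, 0.8]])

def sequence_probability(observations):
    """P(observation sequence | model): the quantity that training maximizes."""
    alpha = start * emit[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

print(sequence_probability([0, 0, 1, 0]))
```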
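
For the Layer 3 "eigenface" point, a minimal sketch that uses scikit-learn's small digits dataset in place of real face images (so it runs without any download): the PCA components are about the only directly visualizable "features" at this layer, which is why most other domains are so hard to explain.

```python
# Sketch: "eigen-digits" as a stand-in for eigenfaces: visualize the principal
# components that an unsupervised step extracts from raw images.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)            # 8x8 grayscale digit images
pca = PCA(n_components=16).fit(X)

fig, axes = plt.subplots(4, 4, figsize=(4, 4))
for ax, component in zip(axes.ravel(), pca.components_):
    ax.imshow(component.reshape(8, 8), cmap="gray")   # one "eigen-digit" per panel
    ax.axis("off")
plt.show()
```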

References
  1. Core, Mark G., H. Chad Lane, Michael Van Lent, Dave Gomboc, Steve Solomon, and Milton Rosenberg. “Building Explainable Artificial Intelligence Systems.” In Proceedings of the National Conference on Artificial Intelligence, 21:1766. AAAI Press, 2006.
  2. Kay, Judy, and Bob Kummerfeld. “Creating Personalized Systems That People Can Scrutinize and Control: Drivers, Principles and Experience.” ACM Transactions on Interactive Intelligent Systems (TiiS) 2, no. 4 (2012): 24.
  3. Lane, H. Chad, et al. “Explainable Artificial Intelligence for Training and Tutoring.” Institute for Creative Technologies, University of Southern California, Marina del Rey, CA, 2005.
  4. Kim, Been. Interactive and interpretable machine learning models for human machine collaboration. Diss. Massachusetts Institute of Technology, 2015.

Monday, August 1, 2016

Thoughts on Exposure to ideologically diverse news and opinion on Facebook


Summary

Ranking algorithms on social media have long been controversial among researchers across disciplines; the debates over the filter bubble and the echo chamber effect are famous examples. State-of-the-art data mining and machine learning techniques actually reinforce these phenomena. As Facebook's ranking algorithm becomes smarter and smarter, your feed fills up with content you already prefer and lacks diversity and multiple voices; even worse, it can be manipulated for commercial or private purposes.

In this paper, published by Facebook in Science, the issue is addressed for the first time with a massive real-world data set. The findings are: 1) stronger ideological alignment comes with higher share counts; in other words, articles with a strong perspective are re-shared more, and, unsurprisingly, mostly by users of the same alignment (liberal users tend to re-share liberal articles and vice versa); 2) friendships are homophilous: users with similar ideological affiliation tend to befriend one another on Facebook, and the analysis shows a clear pattern that both liberals and conservatives have fewer friendship ties across ideological lines, in other words less diversity; 3) the cross-cutting percentage drops as content exposure narrows. More specifically, if users could browse Facebook posts at random, around 40-45% of the content would be cross-cutting, but the rate drops dramatically once content is selected from within the friend circle, by algorithmic ranking, or by users themselves. Most interestingly, the paper concludes that the lower diversity in reading and sharing behavior is mainly due to individuals' own choices.

This is valuable research because it is the first to reveal such detailed patterns from Facebook's real-world data set. However, I disagree with the conclusion, and it suggests other research topics worth pursuing. Here are my reasons: 1) Putting the responsibility on users is not fair, because most users have no clue how the algorithm behind the system will affect their future information consumption. For example, Facebook's ranking algorithm penalizes the ranking score of content you do not "like" or "share", say a news article, so the articles you ignore slowly disappear from your feed, and the mechanism is not transparent at all. Users never learn that some content has been pre-filtered because of their earlier inaction, and I question whether "ignoring" something or not liking it really represents dislike on the user's part. 2) There is no way for users to understand, or join, the loop of algorithmic processing; they simply follow the system's suggestions in a very "user-friendly" and "simple" way. There should be either an explanation or an "undo" function so users can maintain diverse content consumption of their own will. Should there also be a confirmation prompt when you decide to unfollow or hide something, just as when you permanently delete a file from your computer? 3) Users deserve controllability. Why is there only one personalized ranking algorithm for all kinds of users? Users should have the right to choose the preferences they like, rather than having them decided by unknown experts or machine learning algorithms. Only then would it be fair to claim that the lower diversity is due to users' click behavior.
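
To make point 1) concrete, here is a toy illustration of the feedback loop I am describing. The decay factor and visibility cutoff are invented numbers; this is not Facebook's actual ranking algorithm, only a sketch of the mechanism I am worried about.

```python
# Toy sketch of the feedback loop: ignoring a source lowers its score until it
# is filtered out entirely, so the user never gets the chance to reconsider.
DECAY_PER_IGNORE = 0.8    # assumed penalty each time a post is shown but ignored
VISIBILITY_CUTOFF = 0.3   # assumed score below which the source is no longer shown

score = 1.0
for day in range(1, 11):
    if score < VISIBILITY_CUTOFF:
        print(f"day {day}: source silently filtered out (score={score:.2f})")
        break
    score *= DECAY_PER_IGNORE   # the user scrolls past without "like" or "share"
    print(f"day {day}: shown but ignored, score drops to {score:.2f}")
```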

The three points above are potential research topics from my point of view. If we think of Facebook's ranking as a recommender system, the same discrimination and low-diversity issues may be happening right under our noses. Furthermore, the potential conflicts of interest of researchers from industry should be disclosed. I admit that researchers at big companies have more resources to answer some of these social questions than a laboratory environment does, for example Google's flu-trend prediction or Facebook's study of ideological diversity across real-world users. However, commercial companies are responsible to their shareholders, not to the public. I believe this is where academic researchers have the advantage of playing a neutral role, and it is also the value of building a small-scale system and running controlled experiments.


  1. Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348, no. 6239 (2015): 1130–1132.

Summary of Interactive and interpretable machine learning models for human machine collaboration


Summary

I found this thesis very relevant to what I want to do in my dissertation study. It aims to bridge the gap between human and machine collaboration through interactive and interpretable models; more specifically, it develops a framework for a "human-in-the-loop machine learning" system. The thesis consists of three parts: 1) a generative model that reproduces the human decision process; this part extracts the relevant information from natural human decision-making and shows that machine learning can effectively predict humans' sequential plans; 2) case-based reasoning and prototype classification; this part aims to give users meaningful explanations so they can better engage with the system, and it examines the interpretability of the proposed approach; 3) an interactive Bayesian case model, through which humans or domain experts can contribute their knowledge or preferences to the system; the author built a graphical user interface for the interaction between humans and the machine in an online educational system.
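
As a much simplified illustration of the prototype/case-based idea in part 2, here is a minimal nearest-prototype sketch of my own; it is not the thesis's Bayesian Case Model, just the general pattern of classifying a new case by its nearest prototype and explaining the decision by showing that prototype.

```python
# Sketch: nearest-prototype classification where the matched prototype is the
# explanation. Prototypes here are simply the example closest to each class mean.
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
class_names = ["setosa", "versicolor", "virginica"]

prototypes = {}
for cls in np.unique(y):
    members = X[y == cls]
    centre = members.mean(axis=0)
    prototypes[cls] = members[np.argmin(np.linalg.norm(members - centre, axis=1))]

def predict_with_explanation(case):
    # classify by nearest prototype; return that prototype as the "explanation"
    dists = {cls: np.linalg.norm(case - proto) for cls, proto in prototypes.items()}
    best = min(dists, key=dists.get)
    return best, prototypes[best]

label, proto = predict_with_explanation(X[75])
print(f"predicted {class_names[label]}; most similar prototype case: {proto}")
```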

Although the thesis structure is very similar to what I want to do, the author focuses more on Bayesian decision-support models, and all the findings and systems are built on them: how a rescue team forms a resource-allocation plan through sequential decision steps, how domain experts can contribute their knowledge to the model to produce better results (model accuracy), and how a graphical interface helps users provide feedback and obtain better model output. Given my own background, I would approach the issue from a different perspective: the recommender system.

If I follow the same three-layer structure, the whole idea would be as follows. First, interaction patterns: understand how humans behave toward the machine, e.g., a recommender system built on complex machine learning or data mining techniques. I need to know how humans interact with the system and how the system can help them accomplish the tasks they care about; more specifically, to test which interpretability/transparency/explanation functions humans need when interacting with a recommender system. Second, an effective system: design a system that helps users retrieve useful information or suggestions. Based on the findings above, we can build a novel system that implements the functions we identified, e.g., the interpretability/transparency/explanation functions that genuinely help users make better use of the recommender system. Third, a human-in-the-loop model: from the two findings above we know which functions are necessary and useful; the remaining challenge is to involve the human in the process, more specifically an interactive recommender system that lets users contribute their preferences or domain knowledge to improve the system and the user experience.

  1. Kim, Been. Interactive and interpretable machine learning models for human machine collaboration. Diss. Massachusetts Institute of Technology, 2015.
  2. Explanation of recommender systems: a literature review (the next section below).

Explanation of recommender systems: a literature review


The possible research directions…  

  1. Model Effectiveness (Effective)
    1. Trustworthiness of the system (Trust)
    2. Personalized result explanations (Survey & Framework)
    3. Transparency issues (Transparency)*
    4. User satisfaction (Perception)
  2. Legal and social issues
    1. Privacy
    2. Accountability for the recommendation result (Decision Support & Issues)*
    3. Discrimination (Diversity)
  3. Educational Purpose
    1. Learning the advanced techniques behind recommendations.
    2. A stepwise learning model for tuning the system (Debug).
    3. Training in using the recommender system (Comprehensive).

Comprehensive

  1. Al-Taie, Mohammed Z, and Seifedine Kadry. “Visualization of Explanations in Recommender Systems.” Journal of Advanced Management Science 2, no. 2 (2014).
  2. Barbieri, Nicola, Francesco Bonchi, and Giuseppe Manco. “Who to Follow and Why: Link Prediction with Explanations.” In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1266–1275. ACM, 2014.
  3. Blanco, Roi, Diego Ceccarelli, Claudio Lucchese, Raffaele Perego, and Fabrizio Silvestri. “You Should Read This! Let Me Explain You Why: Explaining News Recommendations to Users.” In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, 1995–1999. ACM, 2012.
  4. Cleger-Tamayo, Sergio, Juan M Fernandez-Luna, and Juan F Huete. “Explaining Neighborhood-Based Recommendations.” In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1063–1064. ACM, 2012.
  5. Françoise, Jules, Frédéric Bevilacqua, and Thecla Schiphorst. “GaussBox: Prototyping Movement Interaction with Interactive Visualizations of Machine Learning.” In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 3667–3670. ACM, 2016.
  6. Freitas, Alex A. “Comprehensible Classification Models: A Position Paper.” ACM SIGKDD Explorations Newsletter 15, no. 1 (2014): 1–10.
  7. Hernando, Antonio, JesúS Bobadilla, Fernando Ortega, and Abraham GutiéRrez. “Trees for Explaining Recommendations Made through Collaborative Filtering.” Information Sciences 239 (2013): 1–17.
  8. Kahng, Minsuk, Dezhi Fang, and Duen Horng Chau. “Visual Exploration of Machine Learning Results Using Data Cube Analysis.” In HILDA@SIGMOD, 2016.
  9. Krause, Josua, Adam Perer, and Kenney Ng. “Interacting with Predictions: Visual Inspection of Black-Box Machine Learning Models.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5686–5697. ACM, 2016.
  10. Olah, Christopher. “Understanding LSTM Networks,” 2015. http://colah.github.io/posts/2015-08-Understanding-LSTMs/.
  11. Yamaguchi, Yuto, Mitsuo Yoshida, Christos Faloutsos, and Hiroyuki Kitagawa. “Why Do You Follow Him?: Multilinear Analysis on Twitter.” In Proceedings of the 24th International Conference on World Wide Web, 137–138. ACM, 2015.

Debug

  1. Kulesza, Todd, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. “Principles of Explanatory Debugging to Personalize Interactive Machine Learning.” In Proceedings of the 20th International Conference on Intelligent User Interfaces, 126–137. ACM, 2015.
  2. McGregor, Sean, Hailey Buckingham, Thomas G Dietterich, Rachel Houtman, Claire Montgomery, and Ronald Metoyer. “Facilitating Testing and Debugging of Markov Decision Processes with Interactive Visualization.” In Visual Languages and Human-Centric Computing (VL/HCC), 2015 IEEE Symposium on, 53–61. IEEE, 2015.

Decision Support

  1. Ehrlich, Kate, Susanna E Kirk, John Patterson, Jamie C Rasmussen, Steven I Ross, and Daniel M Gruen. “Taking Advice from Intelligent Systems: The Double-Edged Sword of Explanations.” In Proceedings of the 16th International Conference on Intelligent User Interfaces, 125–134. ACM, 2011.
  2. Jameson, Anthony, Silvia Gabrielli, Per Ola Kristensson, Katharina Reinecke, Federica Cena, Cristina Gena, and Fabiana Vernero. “How Can We Support Users’ Preferential Choice?” In CHI’11 Extended Abstracts on Human Factors in Computing Systems, 409–418. ACM, 2011.
  3. Martens, David, and Foster Provost. “Explaining Data-Driven Document Classifications,” 2013.
  4. McSherry, David. “Explaining the Pros and Cons of Conclusions in CBR.” In European Conference on Case-Based Reasoning, 317–330. Springer, 2004.
  5. Tan, Wee-Kek, Chuan-Hoo Tan, and Hock-Hai Teo. “Consumer-Based Decision Aid That Explains Which to Buy: Decision Confirmation or Overconfidence Bias?” Decision Support Systems 53, no. 1 (2012): 127–141.

Diversity

  1. Graells-Garrido, Eduardo, Mounia Lalmas, and Ricardo Baeza-Yates. “Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles.” In Proceedings of the 21st International Conference on Intelligent User Interfaces, 228–240. ACM, 2016.
  2. Szpektor, Idan, Yoelle Maarek, and Dan Pelleg. “When Relevance Is Not Enough: Promoting Diversity and Freshness in Personalized Question Recommendation.” In Proceedings of the 22nd International Conference on World Wide Web, 1249–1260. ACM, 2013.
  3. Yu, Cong, Sihem Amer-Yahia, and Laks Lakshmanan. Diversifying Recommendation Results through Explanation. Google Patents, 2013.
  4. Yu, Cong, Laks VS Lakshmanan, and Sihem Amer-Yahia. “Recommendation Diversification Using Explanations.” In 2009 IEEE 25th International Conference on Data Engineering, 1299–1302. IEEE, 2009.

Effective

  1. Komiak, Sherrie YX, and Izak Benbasat. “The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents.” MIS Quarterly, 2006, 941–960.
  2. Nanou, Theodora, George Lekakos, and Konstantinos Fouskas. “The Effects of Recommendations’ Presentation on Persuasion and Satisfaction in a Movie Recommender System.” Multimedia Systems 16, no. 4–5 (2010): 219–230.
  3. Tan, Wee-Kek, Chuan-Hoo Tan, and Hock-Hai Teo. “When Two Is Better Than One–Product Recommendation with Dual Information Processing Strategies.” In International Conference on HCI in Business, 775–786. Springer, 2014.
  4. Tintarev, Nava, and Judith Masthoff. “Effective Explanations of Recommendations: User-Centered Design.” In Proceedings of the 2007 ACM Conference on Recommender Systems, 153–156. ACM, 2007.
  5. ———. “The Effectiveness of Personalized Movie Explanations: An Experiment Using Commercial Meta-Data.” In International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, 204–213. Springer, 2008.

Framework

  1. Ben-Elazar, Shay, and Noam Koenigstein. “A Hybrid Explanations Framework for Collaborative Filtering Recommender Systems.” In RecSys Posters. Citeseer, 2014.
  2. Berner, Christopher Eric Shogo, Jeremy Ryan Schiff, Corey Layne Reese, and Paul Kenneth Twohey. Recommendation Engine That Processes Data Including User Data to Provide Recommendations and Explanations for the Recommendations to a User. Google Patents, 2013.
  3. Charissiadis, Andreas, and Nikos Karacapilidis. “Strengthening the Rationale of Recommendations Through a Hybrid Explanations Building Framework.” In Intelligent Decision Technologies, 311–323. Springer, 2015.
  4. Chen, Wei, Wynne Hsu, and Mong Li Lee. “Tagcloud-Based Explanation with Feedback for Recommender Systems.” In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, 945–948. ACM, 2013.
  5. Chen, Yu-Chih, Yu-Shi Lin, Yu-Chun Shen, and Shou-De Lin. “A Modified Random Walk Framework for Handling Negative Ratings and Generating Explanations.” ACM Transactions on Intelligent Systems and Technology (TIST) 4, no. 1 (2013): 12.
  6. Du, Zhao, Lantao Hu, Xiaolong Fu, and Yongqi Liu. “Scalable and Explainable Friend Recommendation in Campus Social Network System.” In Frontier and Future Development of Information Technology in Medicine and Education, 457–466. Springer, 2014.
  7. El Aouad, Sara, Christophe Dupuy, Renata Teixeira, Christophe Diot, and Francis Bach. “Exploiting Crowd Sourced Reviews to Explain Movie Recommendation.” In 2nd Workshop on Recommendation Systems for Television and Online Video, 2015.
  8. Jameson, Anthony, Martijn C Willemsen, Alexander Felfernig, Marco de Gemmis, Pasquale Lops, Giovanni Semeraro, and Li Chen. “Human Decision Making and Recommender Systems.” In Recommender Systems Handbook, 611–648. Springer, 2015.
  9. Lamche, Béatrice, Ugur Adıgüzel, and Wolfgang Wörndl. “Interactive Explanations in Mobile Shopping Recommender Systems.” In Proc. Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS 2014), ACM Conference on Recommender Systems, Foster City, USA, 2014.
  10. Lawlor, Aonghus, Khalil Muhammad, Rachael Rafter, and Barry Smyth. “Opinionated Explanations for Recommendation Systems.” In Research and Development in Intelligent Systems XXXII, 331–344. Springer, 2015.
  11. Muhammad, Khalil. “Opinionated Explanations of Recommendations from Product Reviews,” 2015.
  12. Nagulendra, Sayooran, and Julita Vassileva. “Providing Awareness, Explanation and Control of Personalized Filtering in a Social Networking Site.” Information Systems Frontiers 18, no. 1 (2016): 145–158.
  13. Schaffer, James, Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and John O’Donovan. “Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis.” In Proceedings of the 20th International Conference on Intelligent User Interfaces, 345–356. ACM, 2015.
  14. Tintarev, Nava. “Explanations of Recommendations.” In Proceedings of the 2007 ACM Conference on Recommender Systems, 203–206. ACM, 2007.
  15. Tintarev, Nava, and Judith Masthoff. “Explaining Recommendations: Design and Evaluation.” In Recommender Systems Handbook, 353–382. Springer, 2015.
  16. Vig, Jesse, Shilad Sen, and John Riedl. “Tagsplanations: Explaining Recommendations Using Tags.” In Proceedings of the 14th International Conference on Intelligent User Interfaces, 47–56. ACM, 2009.
  17. Zanker, Markus, and Daniel Ninaus. “Knowledgeable Explanations for Recommender Systems.” In Web Intelligence and Intelligent Agent Technology (WI-IAT), 2010 IEEE/WIC/ACM International Conference on, 1:657–660. IEEE, 2010.

Issues

  1. Bunt, Andrea, Matthew Lount, and Catherine Lauzon. “Are Explanations Always Important?: A Study of Deployed, Low-Cost Intelligent Interactive Systems.” In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces, 169–178. ACM, 2012.
  2. Burke, Brian, and Kevin Quealy. “How Coaches and the NYT 4th Down Bot Compare.” New York Times, 2013. http://www.nytimes.com/newsgraphics/2013/11/28/fourth-downs/post.html.
  3. Diakopoulos, Nicholas. “Accountability in Algorithmic Decision-Making.” Queue 13, no. 9 (2015): 50.
  4. ———. “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.” Digital Journalism 3, no. 3 (2015): 398–415.
  5. Lokot, Tetyana, and Nicholas Diakopoulos. “News Bots: Automating News and Information Dissemination on Twitter.” Digital Journalism, 2015, 1–18.

Perception

  1. Gkika, Sofia, and George Lekakos. “The Persuasive Role of Explanations in Recommender Systems.” In 2nd Intl. Workshop on Behavior Change Support Systems (BCSS 2014), 1153:59–68, 2014.
  2. Hijikata, Yoshinori, Yuki Kai, and Shogo Nishida. “The Relation between User Intervention and User Satisfaction for Information Recommendation.” In Proceedings of the 27th Annual ACM Symposium on Applied Computing, 2002–2007. ACM, 2012.
  3. Kulesza, Todd, Simone Stumpf, Margaret Burnett, and Irwin Kwan. “Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–10. ACM, 2012.
  4. Kulesza, Todd, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan, and Weng-Keen Wong. “Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models.” In 2013 IEEE Symposium on Visual Languages and Human Centric Computing, 3–10. IEEE, 2013.
  5. Valdez, André Calero, Simon Bruns, Christoph Greven, Ulrik Schroeder, and Martina Ziefle. “What Do My Colleagues Know? Dealing with Cognitive Complexity in Organizations Through Visualizations.” In International Conference on Learning and Collaboration Technologies, 449–459. Springer, 2015.
  6. Zanker, Markus. “The Influence of Knowledgeable Explanations on Users’ Perception of a Recommender System.” In Proceedings of the Sixth ACM Conference on Recommender Systems, 269–272. ACM, 2012.

Survey

  1. Al-Taie, Mohammed Z. “Explanations in Recommender Systems: Overview and Research Approaches.” In Proceedings of the 14th International Arab Conference on Information Technology, Khartoum, Sudan, ACIT, Vol. 13, 2013.
  2. Buder, Jürgen, and Christina Schwind. “Learning with Personalized Recommender Systems: A Psychological View.” Computers in Human Behavior 28, no. 1 (2012): 207–216.
  3. Cleger, Sergio, Juan M Fernández-Luna, and Juan F Huete. “Learning from Explanations in Recommender Systems.” Information Sciences 287 (2014): 90–108.
  4. Gedikli, Fatih, Dietmar Jannach, and Mouzhi Ge. “How Should I Explain? A Comparison of Different Explanation Types for Recommender Systems.” International Journal of Human-Computer Studies 72, no. 4 (2014): 367–382.
  5. Papadimitriou, Alexis, Panagiotis Symeonidis, and Yannis Manolopoulos. “A Generalized Taxonomy of Explanations Styles for Traditional and Social Recommender Systems.” Data Mining and Knowledge Discovery 24, no. 3 (2012): 555–583.
  6. Scheel, Christian, Angel Castellanos, Thebin Lee, and Ernesto William De Luca. “The Reason Why: A Survey of Explanations for Recommender Systems.” In International Workshop on Adaptive Multimedia Retrieval, 67–84. Springer, 2012.
  7. Tintarev, Nava, and Judith Masthoff. “A Survey of Explanations in Recommender Systems.” In Data Engineering Workshop, 2007 IEEE 23rd International Conference on, 801–810. IEEE, 2007.

Transparency

  1. El-Arini, Khalid, Ulrich Paquet, Ralf Herbrich, Jurgen Van Gael, and Blaise Agüera y Arcas. “Transparent User Models for Personalization.” In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 678–686. ACM, 2012.
  2. Hebrado, Januel L, Hong Joo Lee, and Jaewon Choi. “Influences of Transparency and Feedback on Customer Intention to Reuse Online Recommender Systems.” Journal of Society for E-Business Studies 18, no. 2 (2013).
  3. Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395. ACM, 2016.
  4. Radmacher, Mike. “Design Criteria for Transparent Mobile Event Recommendations.” AMCIS 2008 Proceedings, 2008, 304.
  5. Sinha, Rashmi, and Kirsten Swearingen. “The Role of Transparency in Recommender Systems.” In CHI’02 Extended Abstracts on Human Factors in Computing Systems, 830–831. ACM, 2002.

Trust

  1. Biran, Or, and Kathleen McKeown. “Generating Justifications of Machine Learning Predictions.” In 1st International Workshop on Data-to-Text Generation, Edinburgh, 2015.
  2. Cleger-Tamayo, Sergio, Juan M Fernández-Luna, Juan F Huete, and Nava Tintarev. “Being Confident about the Quality of the Predictions in Recommender Systems.” In European Conference on Information Retrieval, 411–422. Springer, 2013.
  3. Kang, Byungkyu, Tobias Höllerer, and John O’Donovan. “Believe It or Not? Analyzing Information Credibility in Microblogs.” In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, 611–616. ACM, 2015.
  4. Katarya, Rahul, Ivy Jain, and Hitesh Hasija. “An Interactive Interface for Instilling Trust and Providing Diverse Recommendations.” In Computer and Communication Technology (ICCCT), 2014 International Conference on, 17–22. IEEE, 2014.
  5. Muhammad, Khalil, Aonghus Lawlor, and Barry Smyth. “On the Use of Opinionated Explanations to Rank and Justify Recommendations.” In The Twenty-Ninth International Flairs Conference, 2016.
  6. O’Donovan, John, and Barry Smyth. “Trust in Recommender Systems.” In Proceedings of the 10th International Conference on Intelligent User Interfaces, 167–174. ACM, 2005.
  7. Shani, Guy, Lior Rokach, Bracha Shapira, Sarit Hadash, and Moran Tangi. “Investigating Confidence Displays for Top-N Recommendations.” Journal of the American Society for Information Science and Technology 64, no. 12 (2013): 2548–2563.

Summary of Hypertext/UMAP 2016 conference.


Summary

This conference combined UMAP 2016 and Hypertext 2016.

I gave a short paper presentation at this conference on a personalized recommender system for local businesses using the Yelp dataset [1]. It was nice to get feedback from the audience. Here are some of the points raised: 1) If I define relationships between businesses based on user reviews, how can I be sure they capture sequential shopping behavior, say a user going from a restaurant to an ice cream shop? I think this question is critical, but I pre-filter the data to a daily basis, i.e., same-day shopping patterns, which should cover most of the sequential relations between any two businesses that share the same group of users (a sketch of this pre-filtering step follows below); 2) Two attendees asked about data pre-processing: how can I be sure that visits to any two businesses were sequential? This relates to the previous question; again, I pre-filter to a daily basis; 3) the long-distance shopping pattern between Las Vegas and Phoenix: some of the audience liked the idea of seeing commercial patterns across cities; 4) as future work, the system in this paper needs a user study with actual customers; I may send questionnaires to customers of different businesses to see whether the recommendation results fit their shopping preferences. Besides my presentation, I talked with many attendees at the conference, and I believe these will be meaningful connections for future collaboration.
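
For reference, here is a minimal sketch of the daily pre-filtering step described above. The column names (user_id, business_id, date) and the toy rows are assumptions for illustration; this is not the exact pipeline from the paper.

```python
# Sketch: count how often two businesses are reviewed by the same user on the
# same day, which is the "same-day shopping pattern" used as a pre-filter.
from collections import Counter
from itertools import combinations

import pandas as pd

reviews = pd.DataFrame({
    "user_id":     ["u1", "u1", "u1", "u2", "u2"],
    "business_id": ["restaurant_A", "ice_cream_B", "bar_C", "restaurant_A", "ice_cream_B"],
    "date":        ["2016-07-01", "2016-07-01", "2016-07-02", "2016-07-03", "2016-07-03"],
})

pair_counts = Counter()
for _, group in reviews.groupby(["user_id", "date"]):
    for a, b in combinations(sorted(group["business_id"]), 2):
        pair_counts[(a, b)] += 1    # same user, same day, two different businesses

print(pair_counts.most_common())
```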

  1. Tsai, Chun-Hua. “A Fuzzy-Based Personalized Recommender System for Local Businesses.” In Proceedings of the 27th ACM Conference on Hypertext and Social Media. ACM, 2016.