- A more complete formulation of the second law of thermodynamics
- Entropy: Wiki
- Choon Hui Teo, Houssam Nassif, Daniel Hill, Sriram Srinivasan, Mitchell Goodman, Vijai Mohan, and S.V.N. Vishwanathan. 2016. Adaptive, Personalized Diversity for Visual Discovery. In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16). ACM, New York, NY, USA, 35-38. DOI: http://dx.doi.org/10.1145/2959100.2959171
- Discuss How to model Cognitive Biases
- Entropy-Based Decision Tree Induction (a minimal entropy/information-gain sketch follows this list)
- Special Issue "Machine Learning and Entropy: Discover Unknown Unknowns in Complex Data Sets"
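As a quick companion to these readings, here is a minimal sketch of Shannon entropy and the information gain used in entropy-based decision tree induction. It is plain Python with illustrative function names, not taken from any specific library or from the readings above.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) = -sum p * log2(p) over class frequencies."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split_groups):
    """Reduction in entropy after splitting `labels` into `split_groups`."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in split_groups)
    return entropy(labels) - remainder

# Toy example: a binary split that separates the classes perfectly.
labels = ['yes', 'yes', 'no', 'no', 'yes', 'no']
groups = [['yes', 'yes', 'yes'], ['no', 'no', 'no']]
print(entropy(labels))                   # 1.0 (maximally uncertain)
print(information_gain(labels, groups))  # 1.0 (perfect split)
```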
- A simple note of my thinking
Tuesday, January 17, 2017
About Entropy
Thursday, January 5, 2017
Leap Year Workshop: note
Leap Year Workshop: the research opportunity.
@Department Biomedical Informatics, University of Pittsburgh
The order, titles, and notes for the presentations:
- Balaji Palanisamy: Protecting Time-varying Privacy with Self-emerging Data
- Kayhan Batmanghelich: An Exciting New Horizon: Medical Image Computing Meets EHR
- Daniel Mosse: OCCAM: define, run, curate, visualize experiments for your group, your class, your organization and/or the world
- Store and archive all the research data in one place
- Daqing He: Intelligent Access and Deep Representation for Medical Tasks
- Deep learning with NLP
- Rami Melhem: Distributed Graph Analytics
- big graph analysis platform
- Hochheiser, Harry: Interactive graphical tools for robust and reproducible data interpretation
- detect and avoid bias using visualization interface
- What is the bias in the medical environment?
- Michael Becich: Towards a Pitt Data Commons
- Potential Pitt funding and grants.
- big data mail list.
- Don Taylor: How to factor industry into academic commercial translation
- Support from the university leadership
- UPMC is one example of commercialization.
- Peter Brusilovsky: Data Driven Education
- Using the proposed system on the Pitt campus?
- Madhavi Ganapathairaju: Computational and collective intelligence for translating protein interaction predictions
- Identify the highest-impact protein interaction.
- Using visualization techniques.
- Greg Cooper: technology and workforce
- Computerization and employment
- How to get people to adopt a computerized environment?
- Richard Boyce: Bridging islands of information to establish an integrated knowledge base of drugs and health outcomes of interest.
- A control panel to integrate the medical record and research publications.
- Idea: put all the reading material in Google Drive together with the reading notes. (Try Blogger maybe?)
- Dmitriy Babichenko: Designing the Model Patient: Data-Driven Virtual Patients in Health Sciences Education
- How to model the case? What is the effort?
- Xinghua Lu: From big data to bed side: A machine learning approach
- Personalized medical treatment.
- Cancer pathway detection
- Yu-Ru Lin: Mining Insights from Disasters Using Social Sensors
- Computational focus groups.
- David Boone: Pipeline into computational research: educational outreach internships
- Internship for high school and undergraduate students.
- Any interested students? To be a mentor?
- UPCI academy
- Milos Hauskrecht: Real-time EHR data analysis monitoring and alerting.
- Presentation of many data types
- Forming the data into a shape suitable for machine learning
- Bedside medical machine learning
- Liz Lyon: Research transparency: don't just talk the talk, walk the walk
- Put transparency into the research cycle.
- Songjin Liu: Efficient exact algorithms and high-performance computing for Bioinformatics
- NP-hard problems in biological systems and research.
- Approximation algorithms for NP-hard problems.
Thursday, December 29, 2016
High-dimensional data visualization
Note:
This paper introduces basic plot types for displaying multi-dimensional data. The schemes covered include:
- Mosaic Plots
This plot is good for displaying categorical data, letting the user compare differences between features. But it requires the user to pay attention to multiple directions (top/bottom, left/right), which makes it harder to follow and hurts user perception. Besides, this plot provides a quick categorical overview, but not for ordinal and interval variables.
- Trellis Displays
Nice for comparing variables, but not suitable for temporal or categorical data. Besides, many of the cells may be repeated or empty.
- Parallel Coordinate Plots
Nice for showing temporal data, but it requires skill to handle the overplotting, scaling, and sorting problems (a minimal plotting sketch follows this list).
- Projection Pursuit and the Grand Tour
It is not easy for the human brain to process a 3D plot, but it shows the dynamics of projections across dimensions. For instance, letting the user explore patterns across dimensions with a 3-dimensional scatterplot is one type of grand tour.
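As a quick illustration of one of these schemes, here is a minimal parallel coordinate plot using pandas and matplotlib. The toy data and column names are assumptions for the example, not from the paper.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Toy multi-dimensional data: each row is an observation, 'label' is its class.
df = pd.DataFrame({
    'sepal_length': [5.1, 4.9, 6.3, 5.8],
    'sepal_width':  [3.5, 3.0, 3.3, 2.7],
    'petal_length': [1.4, 1.4, 6.0, 5.1],
    'label':        ['A', 'A', 'B', 'B'],
})

# One polyline per observation; each vertical axis is one dimension.
parallel_coordinates(df, class_column='label', colormap='coolwarm')
plt.title('Parallel coordinate plot (toy data)')
plt.show()
```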
Summary
The paper summarizes each plot's suitability for exploration and presentation, including its interactivity. However, I think Trellis displays can also be made interactive, e.g. this demo.
Reference
- Theus, Martin. "High-dimensional data visualization." Handbook of data visualization. Springer Berlin Heidelberg, 2008. 151-178.
Wednesday, December 28, 2016
Collaborative visual analysis with RCloud
Note
This paper discusses a collaborative visual analysis environment for teamwork. In a data science project it is very common to design, analyze, and deliver results to a target audience, who could be a colleague, a customer, or your boss; this is a process of exploratory data analysis (EDA). The paper argues that these steps are usually done with different tools, i.e. coding in a scripting language and building the interface with web technologies. This makes collaborative work difficult, due to a lack of discoverability (code reuse), technology transfer (collaboration), and coexistence (with interactive visualization tools). Hence, the paper proposes a framework, RCloud, which uses R to integrate back-end analysis and front-end display in a RESTful API structure. The basic idea is that every application natively presents its results to users through a web browser. The framework reuses and couples existing R packages.
Points: for a small team with low-churn project requirements, I think this framework would work well. However, if more and more projects (usually small, with immature results) go live, searching for and reusing them may create extra workload for developers. On the other hand, R packages may not be suitable for every practical problem, e.g. large-scale data storage or distributed computing tasks. Besides, there are other framework options that better facilitate collaboration between developers and designers, e.g. MVC frameworks. I think a good framework should stand apart from any specific language or technique, so it can generally support dynamic real-world requirements.
I actually like this idea; it shows the value of delivering beta work to users. It would be good if we could put research findings or preliminary results on the web for potential collaboration, public exposure, and self-promotion. Another trend is using Scala to bundle analysis, implementation, and production. A minimal sketch of the publish-analysis-as-a-web-endpoint idea follows.
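To make the notebook-to-web idea concrete, here is a minimal sketch in Python using Flask. This is an assumption for illustration only (RCloud itself is R-based, and this is not its actual API): a small analysis result is exposed as a JSON endpoint that any collaborator can view in a browser.

```python
from statistics import mean, stdev

from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for an analysis notebook: summary statistics for a series.
SERIES = [12.0, 15.5, 9.8, 14.2, 11.1]

@app.route('/analysis/summary')
def summary():
    """Return the 'analysis result' so any browser or collaborator can see it."""
    return jsonify({
        'n': len(SERIES),
        'mean': mean(SERIES),
        'stdev': stdev(SERIES),
    })

if __name__ == '__main__':
    app.run(port=5000)  # visit http://localhost:5000/analysis/summary
```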
Reference
- North, Stephen, et al. "Collaborative visual analysis with RCloud." Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on. IEEE, 2015.
EgoNetCloud: Event-based egocentric dynamic network visualization
Note
A quality piece of work on network visualization: this paper proposes a visual analytic tool to display the structure and temporal dynamics of an egocentric dynamic network [1,3]. It considers three important design factors: 1) network simplification: showing all the links in the network graph is meaningless and overloads the user with information, so a reasonable way to "prune" nodes and highlight the important ones is necessary. The authors first define a weighting function based on co-author count and ordering; based on this weighting function, they try four different approaches to pruning nodes, maximizing an efficiency function that keeps as much weight as possible in the sub-graph.
2) temporal network: the temporal information is presented as a horizon graph along a time axis, which makes it a simple task to identify the distribution over time; 3) graph layout: the layout is designed in 2D space. Due to the temporal relationships, the chart divides into several sub-graphs that are hard to fit with a regular force-directed layout, so the authors extend the stress model to calculate a suitable layout [2]. A small sketch of the pruning idea follows.
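Here is a minimal sketch of the node-pruning idea using networkx with a greedy strategy. The weighting function and the budget are assumptions for illustration, not the paper's exact efficiency function or its four pruning algorithms.

```python
import networkx as nx

def prune_ego_network(graph, ego, weight_fn, keep=10):
    """Greedily keep the `keep` highest-weighted neighbors of `ego`
    and return the induced sub-graph (ego included)."""
    ranked = sorted(graph.neighbors(ego), key=weight_fn, reverse=True)
    kept = set(ranked[:keep]) | {ego}
    return graph.subgraph(kept).copy()

# Toy egocentric co-authorship network: edge weight = number of co-authored papers.
G = nx.Graph()
G.add_weighted_edges_from([
    ('ego', 'a', 5), ('ego', 'b', 1), ('ego', 'c', 3),
    ('ego', 'd', 2), ('a', 'c', 1),
])

# Illustrative weighting: strength of the tie to the ego node.
weight = lambda n: G['ego'][n]['weight']
pruned = prune_ego_network(G, 'ego', weight, keep=2)
print(sorted(pruned.nodes()))  # ['a', 'c', 'ego']
```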
Points: 1) the research methodology of visual analytics, from design and implementation through case study to user study; the user study design is a useful reference for my research. 2) treating a single publication as an event to form the egocentric network; this may support multiple use cases, e.g. urban computing, conferences, news events, etc. The system is suitable for exploring the relationships in a given dataset for temporal, egocentric tasks. 3) the sliders over time and item weights are useful for a user exploring the content, and may help a user understand the deeper relationships of a given person. This idea may also link to the explanation function in recommender systems.
Citation [4] is worth reading.
Reference
A quality work on network visualization, this paper proposed a visual analytic tool to display the structure and temporal dynamics of an egocentric dynamic network [1,3]. It considered three important design factors in this work: 1) network simplification: to show all the links in the network graph is meaningless and over the information loading for users. A reasonable way to "prune" the node to highlight the important nodes is necessary. It firstly defined the weighting function by co-author number and ordering. Based on the weighting function, the authors tried four different approaches to pruning the node, to maximize the efficiency function, which maxes the weighting in the sub-graph.
2) temporal network: the temporal information present by horizon graph with an axis of time. It would be a simple task to identify the distribution over time; 3) graph layout: the layout designs with a 2D space. Due to the temporal relationship, the chart divides into several sub-graph that hard to fit by regular force-directed graph layout. They extend the stress model to calculate the ideal design [2].
Points: 1) the research methodology of visual analytic: from design, implantation, case study to user study. The user study design is a useful reference for my research; 2) considering the single publication as an event to form the egocentric network. It may supports to multiple use cases, e.g. urban computing, conference, news event, etc. This system is suitable to explore the relationship of a given dataset, for a temporal and egocentric related tasks; 3) the interaction of slider on time and weighting items is useful for a user to explore the content. It may potentially help a user to understand the deep relationship of the given person. This idea may also link to the explain function in the recommender system.
A worth to read citation [4].
Reference
- Liu, Qingsong, et al. "EgoNetCloud: Event-based egocentric dynamic network visualization." Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on. IEEE, 2015.
- Gansner, Emden R., Yehuda Koren, and Stephen North. "Graph drawing by stress majorization." International Symposium on Graph Drawing. Springer Berlin Heidelberg, 2004.
- Shi, Lei, et al. "1.5D egocentric dynamic network visualization." IEEE Transactions on Visualization and Computer Graphics 21.5 (2015): 624-637.
- Zheng, Yixian, et al. "Visual Analytics in Urban Computing: An Overview." IEEE Transactions on Big Data 2.3 (2016): 276-296.
Tuesday, December 27, 2016
Following scholars
Network Science
- Jure Leskovec (Homepage, Google Scholar)
- Jon Kleinberg (Homepage, Google Scholar)
Recommendation System
- Izak Benbasat (Homepage, Google Scholar)
- Bo Xiao (Homepage, Google Scholar)
- Dokyun Lee (Homepage, Google Scholar)
Visualization
- Kwan-Liu Ma (Homepage, Google Scholar)
Tuesday, August 9, 2016
Explainable Artificial Intelligence Systems
In a military information system for training or tactics, an after-action review (AAR) lets the user question the system about what it did and why [1].
However, the user interface can only provide "straightforward" information to users. For example, during a simulation the user can ask "What is your location/health condition/current task?" All of these are simply attributes in the system that are not difficult to retrieve and display. Today, with data mining and machine learning techniques, many attributes lack a straightforward explanation. For instance, consider a decision made by a targeting system using a deep, multi-layer neural network after hundreds of rounds of training and testing. In this case, explaining why the system chose A instead of B is a much more challenging issue.
For national security and military purposes, this issue is even more critical in the following aspects: 1) training: if users have no idea how the system works, it is impossible for them to interact with it or to correct a wrong decision made by the system or algorithm; 2) accountability: if the system makes a wrong decision, it is hard to apportion responsibility between human and machine; 3) security: if all the data processing and analysis happen in a black box, there is a security concern about using the technique in a real-world environment, since no one knows whether the system has been hacked or compromised.
Military systems seem far away from us, but similar issues have been discussed for personalized systems in [2]. There are several main issues for a personalized system that users cannot "scrutinize and control": privacy, invisibility, error correction, compatibility across systems, and controllability. The two research directions clearly overlap. In personalized systems, the research focuses on the interactivity of the user model, to improve effectiveness, trust, transparency, and comprehensibility. Explainable artificial intelligence systems focus more on the AAR for military purposes. For example, [3] provides another case of how exploiting the AAR helps users in medical training sessions, for self-evaluation and problem-solving; there the explainable AI system plays an educational role.
Either of the two directions, combined with state-of-the-art machine learning techniques, would be a great research subject. Here is a note on three layers of machine learning categories:
- Layer 1: Classifier (Supervised)
- AdaBoost, logistic regression, SVM, kNN, naive Bayes, decision trees: classification methods basically try to find a point, line, or surface that separates 2 to N types of elements, based on the training/testing data fed in.
- The issue here is that we need to extract "features" from the raw data rather than feed in the raw data itself. The features should reflect the original data's properties as closely as possible.
- It is simple to show the features in different latent spaces, for instance a regression line that separates the classes (see the sketch after this list).
- Layer 2: Markov Chain (semi-Supervised)
- Hidden Markov Model (HMM): based on a sequential decision process, infer something that is not directly observed.
- In a Markov model, we need to define the motion as a sequence of states with a series of observations. The model is trained to maximize the output probability.
- In [4], the author deals with similar issues for a neural network decision process (control and interaction); a self-explainable approach is still unknown.
- Layer 3: Deep learning (Unsupervised)
- Convolutional neural networks (CNN), recurrent neural networks (RNN), deep neural networks (DNN): automatically extract features through convolutional, recurrent, or deep strategies, then use the methods above to train/test the models.
- In the two approaches above, humans need to extract features based on some inference, for example a concept from physics; these features are interpreted through prior knowledge. What if, in some cases, feature extraction is almost impossible, e.g. image recognition?
- In this layer, many of the features are not recognizable. In fact, for faces we can use "eigenfaces" to visualize image-recognition features, but for other domains it is hard to visualize the features. Furthermore, the state-of-the-art approach combines the classifiers in layer 1 with the feature extraction in layer 3. This leaves many challenging research topics in algorithms, interfaces, and human perception.
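As a small illustration of the layer-1 idea, here is a minimal sketch using scikit-learn (an assumption; the note names the algorithms but no library): a logistic regression is trained on hand-crafted features, and its coefficients are inspected as a simple, explainable decision boundary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hand-crafted features: [average pixel intensity, edge density] for two classes.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.70],
              [0.85, 0.75], [0.20, 0.15], [0.75, 0.80]])
y = np.array([0, 0, 1, 1, 0, 1])

clf = LogisticRegression().fit(X, y)

# The learned weights define the separating line w.x + b = 0, which can be
# read off directly -- a simple form of explanation for the decision.
print('weights:', clf.coef_[0], 'bias:', clf.intercept_[0])
print('prediction for [0.7, 0.9]:', clf.predict([[0.7, 0.9]])[0])
```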
Reference
- Core, Mark G., H. Chad Lane, Michael Van Lent, Dave Gomboc, Steve Solomon, and Milton Rosenberg. “Building Explainable Artificial Intelligence Systems.” In Proceedings of the National Conference on Artificial Intelligence, 21:1766. AAAI Press, 2006.
- Kay, Judy, and Bob Kummerfeld. “Creating Personalized Systems That People Can Scrutinize and Control: Drivers, Principles and Experience.” ACM Transactions on Interactive Intelligent Systems (TiiS) 2, no. 4 (2012): 24.
- Lane, H. Chad, et al. Explainable Artificial Intelligence for Training and Tutoring. University of Southern California, Institute for Creative Technologies, Marina del Rey, CA, 2005.
- Kim, Been. Interactive and Interpretable Machine Learning Models for Human Machine Collaboration. Diss. Massachusetts Institute of Technology, 2015.
Monday, August 1, 2016
Explanation of recommender system: a literature review
The possible research directions…
- Model Effectiveness (Effective)
- Trust ability of the system (Trust)
- Personalized result explanation (Survey & Framework)
- Transparency issues (Transparency) *
- User satisfaction (Perception)
- Legal and social issues
  - Privacy
  - Accountability of the recommendation result (Decision Support & Issues) *
  - Discrimination (Diversity)
- Educational purpose
  - Learning the advanced techniques behind the recommendation
  - A stepwise learning model for tuning the system (Debug)
  - Training for using the recommender system (Comprehensive)
Comprehensive
- Al-Taie, Mohammed Z, and Seifedine Kadry. “Visualization of Explanations in Recommender Systems.” Journal of Advanced Management Science Vol 2, no. 2 (2014).
- Barbieri, Nicola, Francesco Bonchi, and Giuseppe Manco. “Who to Follow and Why: Link Prediction with Explanations.” In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1266–1275. ACM, 2014.
- Blanco, Roi, Diego Ceccarelli, Claudio Lucchese, Raffaele Perego, and Fabrizio Silvestri. “You Should Read This! Let Me Explain You Why: Explaining News Recommendations to Users.” In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, 1995–1999. ACM, 2012.
- Cleger-Tamayo, Sergio, Juan M Fernandez-Luna, and Juan F Huete. “Explaining Neighborhood-Based Recommendations.” In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1063–1064. ACM, 2012.
- Françoise, Jules, Frédéric Bevilacqua, and Thecla Schiphorst. “GaussBox: Prototyping Movement Interaction with Interactive Visualizations of Machine Learning.” In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 3667–3670. ACM, 2016.
- Freitas, Alex A. “Comprehensible Classification Models: A Position Paper.” ACM SIGKDD Explorations Newsletter 15, no. 1 (2014): 1–10.
- Hernando, Antonio, JesúS Bobadilla, Fernando Ortega, and Abraham GutiéRrez. “Trees for Explaining Recommendations Made through Collaborative Filtering.” Information Sciences 239 (2013): 1–17.
- Kahng, Minsuk, Dezhi Fang, and Duen Horng. “Visual Exploration of Machine Learning Results Using Data Cube Analysis.” In HILDA@SIGMOD, 1, 2016.
- Krause, Josua, Adam Perer, and Kenney Ng. “Interacting with Predictions: Visual Inspection of Black-Box Machine Learning Models.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5686–5697. ACM, 2016.
- “Understanding LSTM Networks,” n.d. http://colah.github.io/posts/2015-08-Understanding-LSTMs/.
- Yamaguchi, Yuto, Mitsuo Yoshida, Christos Faloutsos, and Hiroyuki Kitagawa. “Why Do You Follow Him?: Multilinear Analysis on Twitter.” In Proceedings of the 24th International Conference on World Wide Web, 137–138. ACM, 2015.
Debug
- Kulesza, Todd, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. “Principles of Explanatory Debugging to Personalize Interactive Machine Learning.” In Proceedings of the 20th International Conference on Intelligent User Interfaces, 126–137. ACM, 2015.
- McGregor, Sean, Hailey Buckingham, Thomas G Dietterich, Rachel Houtman, Claire Montgomery, and Ronald Metoyer. “Facilitating Testing and Debugging of Markov Decision Processes with Interactive Visualization.” In Visual Languages and Human-Centric Computing (VL/HCC), 2015 IEEE Symposium on, 53–61. IEEE, 2015.
Decision Support
- Ehrlich, Kate, Susanna E Kirk, John Patterson, Jamie C Rasmussen, Steven I Ross, and Daniel M Gruen. “Taking Advice from Intelligent Systems: The Double-Edged Sword of Explanations.” In Proceedings of the 16th International Conference on Intelligent User Interfaces, 125–134. ACM, 2011.
- Jameson, Anthony, Silvia Gabrielli, Per Ola Kristensson, Katharina Reinecke, Federica Cena, Cristina Gena, and Fabiana Vernero. “How Can We Support Users’ Preferential Choice?” In CHI’11 Extended Abstracts on Human Factors in Computing Systems, 409–418. ACM, 2011.
- Martens, David, and Foster Provost. “Explaining Data-Driven Document Classifications,” 2013.
- McSherry, David. “Explaining the Pros and Cons of Conclusions in CBR.” In European Conference on Case-Based Reasoning, 317–330. Springer, 2004.
- Tan, Wee-Kek, Chuan-Hoo Tan, and Hock-Hai Teo. “Consumer-Based Decision Aid That Explains Which to Buy: Decision
Confirmation or Overconfidence Bias?” Decision Support Systems 53, no. 1 (2012): 127–141.
Diversity
- Graells-Garrido, Eduardo, Mounia Lalmas, and Ricardo Baeza-Yates. “Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles.” In Proceedings of the 21st International Conference on Intelligent User Interfaces, 228–240. ACM, 2016.
- Szpektor, Idan, Yoelle Maarek, and Dan Pelleg. “When Relevance Is Not Enough: Promoting Diversity and Freshness in Personalized Question Recommendation.” In Proceedings of the 22nd International Conference on World Wide Web, 1249–1260. ACM, 2013.
- Yu, Cong, Sihem Amer-Yahia, and Laks Lakshmanan. Diversifying Recommendation Results through Explanation. Google Patents, 2013.
- Yu, Cong, Laks VS Lakshmanan, and Sihem Amer-Yahia. “Recommendation Diversification Using Explanations.” In 2009 IEEE 25th International Conference on Data Engineering, 1299–1302. IEEE, 2009.
Effective
- Komiak, Sherrie YX, and Izak Benbasat. “The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents.” MIS Quarterly, 2006, 941–960.
- Nanou, Theodora, George Lekakos, and Konstantinos Fouskas. “The Effects of Recommendations’ Presentation on Persuasion and Satisfaction in a Movie Recommender System.” Multimedia Systems 16, no. 4–5 (2010): 219–230.
- Tan, Wee-Kek, Chuan-Hoo Tan, and Hock-Hai Teo. “When Two Is Better Than One–Product Recommendation with Dual Information Processing Strategies.” In International Conference on HCI in Business, 775–786. Springer, 2014.
- Tintarev, Nava, and Judith Masthoff. “Effective Explanations of Recommendations: User-Centered Design.” In Proceedings of the 2007 ACM Conference on Recommender Systems, 153–156. ACM, 2007.
- ———. “The Effectiveness of Personalized Movie Explanations: An Experiment Using Commercial Meta-Data.” In International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, 204–213. Springer, 2008.
Framework
- Ben-Elazar, Shay, and Noam Koenigstein. “A Hybrid Explanations Framework for Collaborative Filtering Recommender Systems.” In RecSys Posters. Citeseer, 2014.
- Berner, Christopher Eric Shogo, Jeremy Ryan Schiff, Corey Layne Reese, and Paul Kenneth Twohey. Recommendation Engine That Processes Data Including User Data to Provide Recommendations and Explanations for the Recommendations to a User. Google Patents, 2013.
- Charissiadis, Andreas, and Nikos Karacapilidis. “Strengthening the Rationale of Recommendations Through a Hybrid Explanations Building Framework.” In Intelligent Decision Technologies, 311–323. Springer, 2015.
- Chen, Wei, Wynne Hsu, and Mong Li Lee. “Tagcloud-Based Explanation with Feedback for Recommender Systems.” In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, 945–948. ACM, 2013.
- Chen, Yu-Chih, Yu-Shi Lin, Yu-Chun Shen, and Shou-De Lin. “A Modified Random Walk Framework for Handling Negative Ratings and Generating Explanations.” ACM Transactions on Intelligent Systems and Technology (TIST) 4, no. 1 (2013): 12.
- Du, Zhao, Lantao Hu, Xiaolong Fu, and Yongqi Liu. “Scalable and Explainable Friend Recommendation in Campus Social Network System.” In Frontier and Future Development of Information Technology in Medicine and Education, 457–466. Springer, 2014.
- El Aouad, Sara, Christophe Dupuy, Renata Teixeira, Christophe Diot, and Francis Bach. “Exploiting Crowd Sourced Reviews to Explain Movie Recommendation.” In 2nd Workshop on Recommendation Systems for Television and Online Video, 2015.
- Jameson, Anthony, Martijn C Willemsen, Alexander Felfernig, Marco de Gemmis, Pasquale Lops, Giovanni Semeraro, and Li Chen. “Human Decision Making and Recommender Systems.” In Recommender Systems Handbook, 611–648. Springer, 2015.
- Lamche, Béatrice, Ugur Adıgüzel, and Wolfgang Wörndl. “Interactive Explanations in Mobile Shopping Recommender Systems.” In Proc. Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS 2014), ACM Conference on Recommender Systems, Foster City, USA, 2014.
- Lawlor, Aonghus, Khalil Muhammad, Rachael Rafter, and Barry Smyth. “Opinionated Explanations for Recommendation Systems.” In Research and Development in Intelligent Systems XXXII, 331–344. Springer, 2015.
- Muhammad, Khalil. “Opinionated Explanations of Recommendations from Product Reviews,” 2015.
- Nagulendra, Sayooran, and Julita Vassileva. “Providing Awareness, Explanation and Control of Personalized Filtering in a Social Networking Site.” Information Systems Frontiers 18, no. 1 (2016): 145–158.
- Schaffer, James, Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and John O’Donovan. “Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis.” In Proceedings of the 20th International Conference on Intelligent User Interfaces, 345–356. ACM, 2015.
- Tintarev, Nava. “Explanations of Recommendations.” In Proceedings of the 2007 ACM Conference on Recommender Systems, 203–206. ACM, 2007.
- Tintarev, Nava, and Judith Masthoff. “Explaining Recommendations: Design and Evaluation.” In Recommender Systems Handbook, 353–382. Springer, 2015.
- Vig, Jesse, Shilad Sen, and John Riedl. “Tagsplanations: Explaining Recommendations Using Tags.” In Proceedings of the 14th International Conference on Intelligent User Interfaces, 47–56. ACM, 2009.
- Zanker, Markus, and Daniel Ninaus. “Knowledgeable Explanations for Recommender Systems.” In Web Intelligence and Intelligent Agent Technology (WI-IAT), 2010 IEEE/WIC/ACM International Conference on, 1:657–660. IEEE, 2010.
Issues
- Bunt, Andrea, Matthew Lount, and Catherine Lauzon. “Are Explanations Always Important?: A Study of Deployed, Low-Cost Intelligent Interactive Systems.” In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces, 169–178. ACM, 2012.
- Burke, Brian, and Kevin Quealy. “How Coaches and the NYT 4th Down Bot Compare.” New York Times, 2013. http://www.nytimes.com/newsgraphics/2013/11/28/fourth-downs/post.html.
- Diakopoulos, Nicholas. “Accountability in Algorithmic Decision-Making.” Queue 13, no. 9 (2015): 50.
- ———. “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.” Digital Journalism 3, no. 3 (2015): 398–415.
- Lokot, Tetyana, and Nicholas Diakopoulos. “News Bots: Automating News and Information Dissemination on Twitter.” Digital Journalism, 2015, 1–18.
Perception
- Gkika, Sofia, and George Lekakos. “The Persuasive Role of Explanations in Recommender Systems.” In 2nd Intl. Workshop on Behavior Change Support Systems (BCSS 2014), 1153:59–68, 2014.
- Hijikata, Yoshinori, Yuki Kai, and Shogo Nishida. “The Relation between User Intervention and User Satisfaction for Information Recommendation.” In Proceedings of the 27th Annual ACM Symposium on Applied Computing, 2002–2007. ACM, 2012.
- Kulesza, Todd, Simone Stumpf, Margaret Burnett, and Irwin Kwan. “Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–10. ACM, 2012.
- Kulesza, Todd, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan, and Weng-Keen Wong. “Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models.” In 2013 IEEE Symposium on Visual Languages and Human Centric Computing, 3–10. IEEE, 2013.
- Valdez, André Calero, Simon Bruns, Christoph Greven, Ulrik Schroeder, and Martina Ziefle. “What Do My Colleagues Know? Dealing with Cognitive Complexity in Organizations Through Visualizations.” In International Conference on Learning and Collaboration Technologies, 449–459. Springer, 2015.
- Zanker, Markus. “The Influence of Knowledgeable Explanations on Users’ Perception of a Recommender System.” In Proceedings of the Sixth ACM Conference on Recommender Systems, 269–272. ACM, 2012.
Survey
- Al-Taie, Mohammed Z. “Explanations in Recommender Systems: Overview and Research Approaches.” In Proceedings of the 14th International Arab Conference on Information Technology, Khartoum, Sudan, ACIT, Vol. 13, 2013.
- Buder, Jürgen, and Christina Schwind. “Learning with Personalized Recommender Systems: A Psychological View.” Computers in Human Behavior 28, no. 1 (2012): 207–216.
- Cleger, Sergio, Juan M Fernández-Luna, and Juan F Huete. “Learning from Explanations in Recommender Systems.” Information Sciences 287 (2014): 90–108.
- Gedikli, Fatih, Dietmar Jannach, and Mouzhi Ge. “How Should I Explain? A Comparison of Different Explanation Types for Recommender Systems.” International Journal of Human-Computer Studies 72, no. 4 (2014): 367–382.
- Papadimitriou, Alexis, Panagiotis Symeonidis, and Yannis Manolopoulos. “A Generalized Taxonomy of Explanations Styles for Traditional and Social Recommender Systems.” Data Mining and Knowledge Discovery 24, no. 3 (2012): 555–583.
- Scheel, Christian, Angel Castellanos, Thebin Lee, and Ernesto William De Luca. “The Reason Why: A Survey of Explanations for Recommender Systems.” In International Workshop on Adaptive Multimedia Retrieval, 67–84. Springer, 2012.
- Tintarev, Nava, and Judith Masthoff. “A Survey of Explanations in Recommender Systems.” In Data Engineering Workshop, 2007 IEEE 23rd International Conference on, 801–810. IEEE, 2007.
Transparency
- El-Arini, Khalid, Ulrich Paquet, Ralf Herbrich, Jurgen Van Gael, and Blaise Agüera y Arcas. “Transparent User Models for Personalization.” In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 678–686. ACM, 2012.
- Hebrado, Januel L, Hong Joo Lee, and Jaewon Choi. “Influences of Transparency and Feedback on Customer Intention to Reuse Online Recommender Systems.” Journal of Society for E-Business Studies 18, no. 2 (2013).
- Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395. ACM, 2016.
- Radmacher, Mike. “Design Criteria for Transparent Mobile Event Recommendations.” AMCIS 2008 Proceedings, 2008, 304.
- Sinha, Rashmi, and Kirsten Swearingen. “The Role of Transparency in Recommender Systems.” In CHI’02 Extended Abstracts on Human Factors in Computing Systems, 830–831. ACM, 2002.
Trust
- Biran, Or, and Kathleen McKeown. “Generating Justifications of Machine Learning Predictions.” In 1st International Workshop on Data-to-Text Generation, Edinburgh, 2015.
- Cleger-Tamayo, Sergio, Juan M Fernández-Luna, Juan F Huete, and Nava Tintarev. “Being Confident about the Quality of the Predictions in Recommender Systems.” In European Conference on Information Retrieval, 411–422. Springer, 2013.
- Kang, Byungkyu, Tobias Höllerer, and John O’Donovan. “Believe It or Not? Analyzing Information Credibility in Microblogs.” In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, 611–616. ACM, 2015.
- Katarya, Rahul, Ivy Jain, and Hitesh Hasija. “An Interactive Interface for Instilling Trust and Providing Diverse Recommendations.” In Computer and Communication Technology (ICCCT), 2014 International Conference on, 17–22. IEEE, 2014.
- Muhammad, Khalil, Aonghus Lawlor, and Barry Smyth. “On the Use of Opinionated Explanations to Rank and Justify Recommendations.” In The Twenty-Ninth International Flairs Conference, 2016.
- O’Donovan, John, and Barry Smyth. “Trust in Recommender Systems.” In Proceedings of the 10th International Conference on Intelligent User Interfaces, 167–174. ACM, 2005.
- Shani, Guy, Lior Rokach, Bracha Shapira, Sarit Hadash, and Moran Tangi. “Investigating Confidence Displays for Top-N Recommendations.” Journal of the American Society for Information Science and Technology 64, no. 12 (2013): 2548–2563.