Monday, August 1, 2016

Explanation of recommender system: a literature review


The possible research directions…  

  1. Model Effectiveness (Effective)
    1. Trustworthiness of the system (Trust)
    2. Personalized result explanation (Survey & Framework)
    3. Transparency issues (Transparency)*
    4. User satisfaction (Perception)
  2. Legal and social issues
    1. Privacy
    2. Accountability of the recommendation result (Decision Support & Issues)*
    3. Discrimination (Diversity)
  3. Educational Purpose
    1. Learning the advanced techniques behind recommendation.
    2. A stepwise learning model for tuning the system (Debug).
    3. Training for using the recommender system (Comprehensive).

Comprehensive

  1. Al-Taie, Mohammed Z, and Seifedine Kadry. “Visualization of Explanations in Recommender Systems.” Journal of Advanced Management Science 2, no. 2 (2014).
  2. Barbieri, Nicola, Francesco Bonchi, and Giuseppe Manco. “Who to Follow and Why: Link Prediction with Explanations.” In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1266–1275. ACM, 2014.
  3. Blanco, Roi, Diego Ceccarelli, Claudio Lucchese, Raffaele Perego, and Fabrizio Silvestri. “You Should Read This! Let Me Explain You Why: Explaining News Recommendations to Users.” In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, 1995–1999. ACM, 2012.
  4. Cleger-Tamayo, Sergio, Juan M Fernandez-Luna, and Juan F Huete. “Explaining Neighborhood-Based Recommendations.” In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1063–1064. ACM, 2012.
  5. Françoise, Jules, Frédéric Bevilacqua, and Thecla Schiphorst. “GaussBox: Prototyping Movement Interaction with Interactive Visualizations of Machine Learning.” In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 3667–3670. ACM, 2016.
  6. Freitas, Alex A. “Comprehensible Classification Models: A Position Paper.” ACM SIGKDD Explorations Newsletter 15, no. 1 (2014): 1–10.
  7. Hernando, Antonio, JesúS Bobadilla, Fernando Ortega, and Abraham GutiéRrez. “Trees for Explaining Recommendations Made through Collaborative Filtering.” Information Sciences 239 (2013): 1–17.
  8. Kahng, Minsuk, Dezhi Fang, and Duen Horng (Polo) Chau. “Visual Exploration of Machine Learning Results Using Data Cube Analysis.” In HILDA@SIGMOD, 1, 2016.
  9. Krause, Josua, Adam Perer, and Kenney Ng. “Interacting with Predictions: Visual Inspection of Black-Box Machine Learning Models.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5686–5697. ACM, 2016.
  10. Olah, Christopher. “Understanding LSTM Networks,” 2015. http://colah.github.io/posts/2015-08-Understanding-LSTMs/.
  11. Yamaguchi, Yuto, Mitsuo Yoshida, Christos Faloutsos, and Hiroyuki Kitagawa. “Why Do You Follow Him?: Multilinear Analysis on Twitter.” In Proceedings of the 24th International Conference on World Wide Web, 137–138. ACM, 2015.

Debug

  1. Kulesza, Todd, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. “Principles of Explanatory Debugging to Personalize Interactive Machine Learning.” In Proceedings of the 20th International Conference on Intelligent User Interfaces, 126–137. ACM, 2015.
  2. McGregor, Sean, Hailey Buckingham, Thomas G Dietterich, Rachel Houtman, Claire Montgomery, and Ronald Metoyer. “Facilitating Testing and Debugging of Markov Decision Processes with Interactive Visualization.” In Visual Languages and Human-Centric Computing (VL/HCC), 2015 IEEE Symposium on, 53–61. IEEE, 2015.

Decision Support

  1. Ehrlich, Kate, Susanna E Kirk, John Patterson, Jamie C Rasmussen, Steven I Ross, and Daniel M Gruen. “Taking Advice from Intelligent Systems: The Double-Edged Sword of Explanations.” In Proceedings of the 16th International Conference on Intelligent User Interfaces, 125–134. ACM, 2011.
  2. Jameson, Anthony, Silvia Gabrielli, Per Ola Kristensson, Katharina Reinecke, Federica Cena, Cristina Gena, and Fabiana Vernero. “How Can We Support Users’ Preferential Choice?” In CHI’11 Extended Abstracts on Human Factors in Computing Systems, 409–418. ACM, 2011.
  3. Martens, David, and Foster Provost. “Explaining Data-Driven Document Classifications,” 2013.
  4. McSherry, David. “Explaining the Pros and Cons of Conclusions in CBR.” In European Conference on Case-Based Reasoning, 317–330. Springer, 2004.
  5. Tan, Wee-Kek, Chuan-Hoo Tan, and Hock-Hai Teo. “Consumer-Based Decision Aid That Explains Which to Buy: Decision Confirmation or Overconfidence Bias?” Decision Support Systems 53, no. 1 (2012): 127–141.

Diversity

  1. Graells-Garrido, Eduardo, Mounia Lalmas, and Ricardo Baeza-Yates. “Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles.” In Proceedings of the 21st International Conference on Intelligent User Interfaces, 228–240. ACM, 2016.
  2. Szpektor, Idan, Yoelle Maarek, and Dan Pelleg. “When Relevance Is Not Enough: Promoting Diversity and Freshness in Personalized Question Recommendation.” In Proceedings of the 22nd International Conference on World Wide Web, 1249–1260. ACM, 2013.
  3. Yu, Cong, Sihem Amer-Yahia, and Laks Lakshmanan. Diversifying Recommendation Results through Explanation. Google Patents, 2013.
  4. Yu, Cong, Laks VS Lakshmanan, and Sihem Amer-Yahia. “Recommendation Diversification Using Explanations.” In 2009 IEEE 25th International Conference on Data Engineering, 1299–1302. IEEE, 2009.

Effective

  1. Komiak, Sherrie YX, and Izak Benbasat. “The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents.” MIS Quarterly, 2006, 941–960.
  2. Nanou, Theodora, George Lekakos, and Konstantinos Fouskas. “The Effects of Recommendations’ Presentation on Persuasion and Satisfaction in a Movie Recommender System.” Multimedia Systems 16, no. 4–5 (2010): 219–230.
  3. Tan, Wee-Kek, Chuan-Hoo Tan, and Hock-Hai Teo. “When Two Is Better Than One–Product Recommendation with Dual Information Processing Strategies.” In International Conference on HCI in Business, 775–786. Springer, 2014.
  4. Tintarev, Nava, and Judith Masthoff. “Effective Explanations of Recommendations: User-Centered Design.” In Proceedings of the 2007 ACM Conference on Recommender Systems, 153–156. ACM, 2007.
  5. ———. “The Effectiveness of Personalized Movie Explanations: An Experiment Using Commercial Meta-Data.” In International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, 204–213. Springer, 2008.

Framework

  1. Ben-Elazar, Shay, and Noam Koenigstein. “A Hybrid Explanations Framework for Collaborative Filtering Recommender Systems.” In RecSys Posters. Citeseer, 2014.
  2. Berner, Christopher Eric Shogo, Jeremy Ryan Schiff, Corey Layne Reese, and Paul Kenneth Twohey. Recommendation Engine That Processes Data Including User Data to Provide Recommendations and Explanations for the Recommendations to a User. Google Patents, 2013.
  3. Charissiadis, Andreas, and Nikos Karacapilidis. “Strengthening the Rationale of Recommendations Through a Hybrid Explanations Building Framework.” In Intelligent Decision Technologies, 311–323. Springer, 2015.
  4. Chen, Wei, Wynne Hsu, and Mong Li Lee. “Tagcloud-Based Explanation with Feedback for Recommender Systems.” In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, 945–948. ACM, 2013.
  5. Chen, Yu-Chih, Yu-Shi Lin, Yu-Chun Shen, and Shou-De Lin. “A Modified Random Walk Framework for Handling Negative Ratings and Generating Explanations.” ACM Transactions on Intelligent Systems and Technology (TIST) 4, no. 1 (2013): 12.
  6. Du, Zhao, Lantao Hu, Xiaolong Fu, and Yongqi Liu. “Scalable and Explainable Friend Recommendation in Campus Social Network System.” In Frontier and Future Development of Information Technology in Medicine and Education, 457–466. Springer, 2014.
  7. El Aouad, Sara, Christophe Dupuy, Renata Teixeira, Christophe Diot, and Francis Bach. “Exploiting Crowd Sourced Reviews to Explain Movie Recommendation.” In 2nd Workshop on Recommendation Systems for Television and Online Video, 2015.
  8. Jameson, Anthony, Martijn C Willemsen, Alexander Felfernig, Marco de Gemmis, Pasquale Lops, Giovanni Semeraro, and Li Chen. “Human Decision Making and Recommender Systems.” In Recommender Systems Handbook, 611–648. Springer, 2015.
  9. Lamche, Béatrice, Ugur Adıgüzel, and Wolfgang Wörndl. “Interactive Explanations in Mobile Shopping Recommender Systems.” In Proc. Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS 2014), ACM Conference on Recommender Systems, Foster City, USA, 2014.
  10. Lawlor, Aonghus, Khalil Muhammad, Rachael Rafter, and Barry Smyth. “Opinionated Explanations for Recommendation Systems.” In Research and Development in Intelligent Systems XXXII, 331–344. Springer, 2015.
  11. Muhammad, Khalil. “Opinionated Explanations of Recommendations from Product Reviews,” 2015.
  12. Nagulendra, Sayooran, and Julita Vassileva. “Providing Awareness, Explanation and Control of Personalized Filtering in a Social Networking Site.” Information Systems Frontiers 18, no. 1 (2016): 145–158.
  13. Schaffer, James, Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and John O’Donovan. “Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis.” In Proceedings of the 20th International Conference on Intelligent User Interfaces, 345–356. ACM, 2015.
  14. Tintarev, Nava. “Explanations of Recommendations.” In Proceedings of the 2007 ACM Conference on Recommender Systems, 203–206. ACM, 2007.
  15. Tintarev, Nava, and Judith Masthoff. “Explaining Recommendations: Design and Evaluation.” In Recommender Systems Handbook, 353–382. Springer, 2015.
  16. Vig, Jesse, Shilad Sen, and John Riedl. “Tagsplanations: Explaining Recommendations Using Tags.” In Proceedings of the 14th International Conference on Intelligent User Interfaces, 47–56. ACM, 2009.
  17. Zanker, Markus, and Daniel Ninaus. “Knowledgeable Explanations for Recommender Systems.” In Web Intelligence and Intelligent Agent Technology (WI-IAT), 2010 IEEE/WIC/ACM International Conference on, 1:657–660. IEEE, 2010.

Issues

  1. Bunt, Andrea, Matthew Lount, and Catherine Lauzon. “Are Explanations Always Important?: A Study of Deployed, Low-Cost Intelligent Interactive Systems.” In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces, 169–178. ACM, 2012.
  2. Burke, Brian, and Kevin Quealy. “How Coaches and the NYT 4th Down Bot Compare.” New York Times, 2013. http://www.nytimes.com/newsgraphics/2013/11/28/fourth-downs/post.html.
  3. Diakopoulos, Nicholas. “Accountability in Algorithmic Decision-Making.” Queue 13, no. 9 (2015): 50.
  4. ———. “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.” Digital Journalism 3, no. 3 (2015): 398–415.
  5. Lokot, Tetyana, and Nicholas Diakopoulos. “News Bots: Automating News and Information Dissemination on Twitter.” Digital Journalism, 2015, 1–18.

Perception

  1. Gkika, Sofia, and George Lekakos. “The Persuasive Role of Explanations in Recommender Systems.” In 2nd Intl. Workshop on Behavior Change Support Systems (BCSS 2014), 1153:59–68, 2014.
  2. Hijikata, Yoshinori, Yuki Kai, and Shogo Nishida. “The Relation between User Intervention and User Satisfaction for Information Recommendation.” In Proceedings of the 27th Annual ACM Symposium on Applied Computing, 2002–2007. ACM, 2012.
  3. Kulesza, Todd, Simone Stumpf, Margaret Burnett, and Irwin Kwan. “Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–10. ACM, 2012.
  4. Kulesza, Todd, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan, and Weng-Keen Wong. “Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models.” In 2013 IEEE Symposium on Visual Languages and Human Centric Computing, 3–10. IEEE, 2013.
  5. Valdez, André Calero, Simon Bruns, Christoph Greven, Ulrik Schroeder, and Martina Ziefle. “What Do My Colleagues Know? Dealing with Cognitive Complexity in Organizations Through Visualizations.” In International Conference on Learning and Collaboration Technologies, 449–459. Springer, 2015.
  6. Zanker, Markus. “The Influence of Knowledgeable Explanations on Users’ Perception of a Recommender System.” In Proceedings of the Sixth ACM Conference on Recommender Systems, 269–272. ACM, 2012.

Survey

  1. Al-Taie, Mohammed Z. “Explanations in Recommender Systems: Overview and Research Approaches.” In Proceedings of the 14th International Arab Conference on Information Technology, Khartoum, Sudan, ACIT, Vol. 13, 2013.
  2. Buder, Jürgen, and Christina Schwind. “Learning with Personalized Recommender Systems: A Psychological View.” Computers in Human Behavior 28, no. 1 (2012): 207–216.
  3. Cleger, Sergio, Juan M Fernández-Luna, and Juan F Huete. “Learning from Explanations in Recommender Systems.” Information Sciences 287 (2014): 90–108.
  4. Gedikli, Fatih, Dietmar Jannach, and Mouzhi Ge. “How Should I Explain? A Comparison of Different Explanation Types for Recommender Systems.” International Journal of Human-Computer Studies 72, no. 4 (2014): 367–382.
  5. Papadimitriou, Alexis, Panagiotis Symeonidis, and Yannis Manolopoulos. “A Generalized Taxonomy of Explanations Styles for Traditional and Social Recommender Systems.” Data Mining and Knowledge Discovery 24, no. 3 (2012): 555–583.
  6. Scheel, Christian, Angel Castellanos, Thebin Lee, and Ernesto William De Luca. “The Reason Why: A Survey of Explanations for Recommender Systems.” In International Workshop on Adaptive Multimedia Retrieval, 67–84. Springer, 2012.
  7. Tintarev, Nava, and Judith Masthoff. “A Survey of Explanations in Recommender Systems.” In Data Engineering Workshop, 2007 IEEE 23rd International Conference on, 801–810. IEEE, 2007.

Transparency

  1. El-Arini, Khalid, Ulrich Paquet, Ralf Herbrich, Jurgen Van Gael, and Blaise Agüera y Arcas. “Transparent User Models for Personalization.” In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 678–686. ACM, 2012.
  2. Hebrado, Januel L, Hong Joo Lee, and Jaewon Choi. “Influences of Transparency and Feedback on Customer Intention to Reuse Online Recommender Systems.” Journal of Society for E-Business Studies 18, no. 2 (2013).
  3. Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395. ACM, 2016.
  4. Radmacher, Mike. “Design Criteria for Transparent Mobile Event Recommendations.” AMCIS 2008 Proceedings, 2008, 304.
  5. Sinha, Rashmi, and Kirsten Swearingen. “The Role of Transparency in Recommender Systems.” In CHI’02 Extended Abstracts on Human Factors in Computing Systems, 830–831. ACM, 2002.

Trust

  1. Biran, Or, and Kathleen McKeown. “Generating Justifications of Machine Learning Predictions.” In 1st International Workshop on Data-to-Text Generation, Edinburgh, 2015.
  2. Cleger-Tamayo, Sergio, Juan M Fernández-Luna, Juan F Huete, and Nava Tintarev. “Being Confident about the Quality of the Predictions in Recommender Systems.” In European Conference on Information Retrieval, 411–422. Springer, 2013.
  3. Kang, Byungkyu, Tobias Höllerer, and John O’Donovan. “Believe It or Not? Analyzing Information Credibility in Microblogs.” In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, 611–616. ACM, 2015.
  4. Katarya, Rahul, Ivy Jain, and Hitesh Hasija. “An Interactive Interface for Instilling Trust and Providing Diverse Recommendations.” In Computer and Communication Technology (ICCCT), 2014 International Conference on, 17–22. IEEE, 2014.
  5. Muhammad, Khalil, Aonghus Lawlor, and Barry Smyth. “On the Use of Opinionated Explanations to Rank and Justify Recommendations.” In The Twenty-Ninth International Flairs Conference, 2016.
  6. O’Donovan, John, and Barry Smyth. “Trust in Recommender Systems.” In Proceedings of the 10th International Conference on Intelligent User Interfaces, 167–174. ACM, 2005.
  7. Shani, Guy, Lior Rokach, Bracha Shapira, Sarit Hadash, and Moran Tangi. “Investigating Confidence Displays for Top-N Recommendations.” Journal of the American Society for Information Science and Technology 64, no. 12 (2013): 2548–2563.

Summary of Hypertext/UMAP 2016 conference.


Summary

This conference combined Hypertext 2016 and UMAP 2016.

I presented a short paper at this conference on a personalized recommender system for local businesses using the Yelp dataset [1]. It was nice to get feedback from the audience. Here are the main points raised:

  1. If I define business relationships based on user reviews, how can I be sure the shopping behavior is sequential, say, a user going from a restaurant to an ice cream shop? I think this question is critical. I pre-filter the data to a daily basis, i.e., same-day shopping patterns, which should cover most sequential visits between any two businesses that share the same group of users.
  2. Two attendees asked about data pre-processing: how can I be sure that visits to any two businesses form a sequential shopping trip? This relates to the previous question; again, the reviews are pre-filtered to a daily basis (a rough sketch of this step follows below).
  3. The long-distance shopping pattern between Las Vegas and Phoenix. Some of the audience liked the idea of seeing commercial patterns across cities.
  4. For future work, the system in this paper requires a user study with customers. I may send questionnaires to customers of different businesses to see whether the recommendation results fit their shopping preferences.

Besides my presentation, I talked to many attendees at the conference. I believe these will be meaningful connections for future collaboration.
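Below is a minimal sketch of how the same-day pre-filtering could look in code. It is only my illustration: the DataFrame and its `user_id`, `business_id`, and `date` columns are assumptions, not the paper's actual schema or pipeline.

```python
import pandas as pd
from itertools import combinations
from collections import Counter

def same_day_business_pairs(reviews: pd.DataFrame) -> Counter:
    """Count how often two businesses are reviewed by the same user on the same day.

    Assumes `reviews` has columns user_id, business_id, date; these names are
    illustrative and not taken from the paper.
    """
    pair_counts = Counter()
    # Group reviews by user and calendar day, then count business co-occurrences.
    for (_, _), group in reviews.groupby(["user_id", "date"]):
        businesses = sorted(set(group["business_id"]))
        for pair in combinations(businesses, 2):
            pair_counts[pair] += 1
    return pair_counts
```

The resulting pair counts could then serve as the business-to-business relationships fed into the recommender, though the actual weighting used in the paper may differ.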

  1. Tsai, Chun-Hua. "A Fuzzy-Based Personalized Recommender System for Local Businesses." Proceedings of the 27th ACM Conference on Hypertext and Social Media. ACM, 2016.

Sunday, April 17, 2016

Summary about the 2016 big scholarly workshop (WWW companion)

Summary for 2016 Big Scholar Workshop

This was a great event for scholarly-data-oriented researchers to share ideas and interact with each other. The talks were very informative and were given by scholars with strong academic reputations. The keynote speakers were Dr. C. Lee Giles (Pennsylvania State University) and Dr. Jie Tang (Tsinghua University). Dr. Giles shared his work on the CiteSeerX system and the plan to open and share its dataset with other researchers; he also mentioned some interesting work, e.g., estimating the number of scholarly documents on the web. Dr. Tang presented his work on AMiner, another well-known scholarly portal and recommendation system project. His talk focused on the details of system implementation, e.g., how to collect papers and parse them into structured text, how to maintain user profiles for data accuracy, and how to find domain experts based on the collected scholarly data. The AMiner system is now focusing on the expert recommendation task. I raised a question about what percentage of users actually maintain or interact with the system. Dr. Tang replied that although everyone can edit or modify any profile, few users actually do so; to ensure data accuracy, they focus on manually maintaining the profiles of selected domain experts.

Several other well-known scholars joined the workshop, including Dr. Jevin West (University of Washington), Dr. Feng Xia (Dalian University of Technology), Dr. Huan Liu (Arizona State University), Dr. Kuansan Wang (Microsoft Research), and Dr. Philip S. Yu (University of Illinois at Chicago). It was nice to hear their presentations and to get feedback on my work from them. Their feedback included: 1) why not include venue information in the prediction model? 2) can the model predict a junior scholar's future productivity? 3) how do you sample the positive/negative set sizes for model evaluation? (this is a critical point for the performance of the classification problem); and 4) how do you define a junior scholar's age? All of this feedback is valuable for refining my future work. [1]

Many projects work on scholarly data analysis, e.g., Google Scholar, Microsoft Academic Search, CiteSeerX, AMiner, and more (including Conference Navigator). The closing remarks discussed a platform to utilize data from these different sources and to build a community for scholarly data research; the research topics could extend into data science, education, health, and more. Dr. Giles and Dr. Wang are in fact planning to grow the next workshop or conference to a broader scope. Dr. Wang, as a representative from industry, agreed to provide some infrastructure support for scholars working on big scholarly-data projects, and Dr. Giles is willing to open his dataset for further collaboration. I believe this is a promising direction for future studies.


  1. Tsai, C. H. and Lin, Y.-R. (2016), Tracing and Predicting Collaboration for Junior Scholars. WWW 2016 Proceedings (workshop paper)

Thursday, March 31, 2016

Bringing social networks into a physical space - social sensing computing

Summary 

This paper used RFID technology to recognize a user's social cluster from the DBLP database. The authors set up a large display screen at a conference with which participants could explore interpersonal connections. The paper combines several interesting elements: 1) the RFID sensor, which recognizes a user approaching the display screen and shows that user's personalized social network graph; 2) graph exploration, where the display provides zoom in/out functions so users can explore interpersonal relationships; and 3) a heterogeneous network, where the displayed graph combines conference, co-citation, and co-authorship features, letting users discover hidden relationships or knowledge inside the network.

The experimental results showed positive feedback from users, but some issues caught my eye: 1) limited usage: only a few of the conference participants actually used the system, and even fewer used it more than once; 2) the cold-start problem: some junior scholars may not have publications, or their publications may not be listed in the DBLP database; 3) the privacy issue of displaying one's personal social network on a public display screen; and 4) the cost and purpose of deploying RFID: the RFID tags are not strictly necessary for this application.

The idea of making a conference more fun and encouraging people to explore social interactions during the event is appealing. But I think this tool should be more personalized and more privacy-preserving, for instance by displaying the results on a personal device such as a cell phone. Also, the graph design does not highlight the meaningful information for users; showing everything on screen amounts to showing almost nothing, which might be why the return rate is low. These issues would be valuable to address in future applications.

Beyond that, the RFID tag points to a research topic around social sensing computing. With the development of wearable devices, there are more and more sensors, tags, and devices with the potential to carry user data for various computing tasks. For research purposes, gathering all of this data is critical but also difficult. However, there are many alternative ways to "simulate" these scenarios, for instance a QR code, a reference number, or built-in phone functions (e.g., GPS, Bluetooth). All of these technologies are interesting, but they also require considerable development effort. I am not sure whether it is within our expertise to build all of this on our own.

Reference

  1. Konomi, Shin'ichi, et al. "Supporting colocated interactions using RFID and social network displays." IEEE Pervasive Computing 5.3 (2006): 48-56.

Summary for two papers of facilitate the conference by social games

Summary 

Helping conference attendees gain social capital or expand their social networks is always an interesting research topic. One direction is "social games." The papers [1] and [2] both designed a social game to facilitate social interaction at a conference. In [1], the authors designed a game that requires 2-6 people to communicate while matching balls to hole locations on a puzzle; it serves as an "ice-breaking" tool for conference attendees through the teamwork process. In [2], the authors discussed an approach to improving community retention through a social game. They built a cell phone app that supports collaborative tasks; users can solve problems inside the game together with others, as a community. The authors argue that this app encourages users to improve their network connectivity, and furthermore that it could be used in class to increase the retention rate of minority groups.

Both papers propose interesting ideas about social games. However, I think evaluation is an issue for supporting the claims above. In [1], the authors gave the game players a questionnaire; based on the responses, users indicated that the communication and teamwork functions were the most important elements for them, but the evidence that the game helps users "make friends fast" is not strong. In [2], the authors only present a plan for evaluating how the social game helps build a community; the experimental data are still lacking.

I think the research questions can be classified more specifically as: 1) ice-breaking; 2) social interaction and engagement; 3) social recommendation; 4) social networking; 5) teamwork and communication; and 6) community formation and retention. The experimental design should vary for each of them, and for some aspects it is hard to find ground truth to prove the effectiveness of the model/game/app. For instance, do users talk to each other more because of the app? It is not easy to compare conversation frequency before and after the game play. Hence, an experiment design tailored to the specific research question is critical. Some ideas: 1) A/B testing with different groups of users (a rough sketch follows below); 2) quick questionnaires/feedback inside the game; 3) analysis of clicking/bookmarking/friending behavior, etc.
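As a rough illustration of idea 1), here is a minimal A/B-testing sketch: randomly split users into a control and a treatment group (with vs. without the social game) and compare a simple interaction measure between the groups. All names and the interaction measure are hypothetical, not taken from either paper.

```python
import random
import statistics

def assign_groups(user_ids: list, seed: int = 42) -> dict:
    """Randomly split users into control ('A') and treatment ('B') groups."""
    rng = random.Random(seed)
    return {uid: rng.choice(["A", "B"]) for uid in user_ids}

def mean_interactions(interaction_counts: dict, groups: dict, group: str) -> float:
    """Average per-user interaction count (e.g., clicks, messages) within one group."""
    values = [c for uid, c in interaction_counts.items() if groups.get(uid) == group]
    return statistics.mean(values) if values else 0.0
```

In practice one would also want a significance test and, as noted above, a before/after baseline for each group.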

Reference

[1] Evie Powell, Rachel Brinkman, Tiffany Barnes, and Veronica Catete. 2012. Table tilt: making friends fast. In Proceedings of the International Conference on the Foundations of Digital Games (FDG '12). ACM, New York, NY, USA, 242-245. DOI=10.1145/2282338.2282386 http://doi.acm.org/10.1145/2282338.2282386

[2] Samantha L. Finkelstein, Eve Powell, Andrew Hicks, Katelyn Doran, Sandhya Rani Charugulla, and Tiffany Barnes. 2010. SNAG: using social networking games to increase student retention in computer science. In Proceedings of the fifteenth annual conference on Innovation and technology in computer science education (ITiCSE '10). ACM, New York, NY, USA, 142-146. DOI=http://dx.doi.org/10.1145/1822090.1822131


Monday, January 25, 2016

ExcUseMe: Asking Users to Help in Item Cold-Start Recommendations


Summary

This paper is about solving the item cold-start problem in real-world recommender systems. In an online recommender system, the cold-start problem is caused by a lack of historical data with which to generate meaningful suggestions; in other words, it concerns users or items that have newly entered the system. In general, the cold-start problem is addressed by including new context or content features. In this paper, however, the authors focus on a solution specific to collaborative filtering (CF). Their idea is to conduct a small-scale experiment with users who might be interested in the new item; the user interactions within the experiment can then serve as a reference when recommending the new item.

The major challenge behind this idea is: who are the target users? The authors propose a novel algorithm, "ExcUseMe." They assume users visit the online recommender system at random, and within this continuing stream of users the system must decide who should be included in the new-item experiment. The approach has two stages: 1) a learning phase and 2) a selection phase. In stage 1), the algorithm estimates the likelihood that a user will provide feedback on the item. In stage 2), users are selected by their likelihood ranking, and the vector similarity between users is also taken into account in a semi-greedy manner.
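The following is a rough sketch of the semi-greedy selection idea as I read it, not the authors' actual ExcUseMe algorithm: score candidate users by an (already estimated) likelihood of giving feedback on the new item, then pick users one at a time while penalizing candidates whose vectors are too similar to users already selected. The scoring rule, the cosine penalty, and all parameter names are my assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two user vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_users(user_vectors: np.ndarray, feedback_likelihood: np.ndarray,
                 budget: int, diversity_penalty: float = 0.5) -> list:
    """Semi-greedy selection sketch (illustrative, not the paper's formulation)."""
    selected = []
    for _ in range(min(budget, len(user_vectors))):
        best_user, best_score = None, -np.inf
        for u in range(len(user_vectors)):
            if u in selected:
                continue
            score = feedback_likelihood[u]
            if selected:
                # Penalize users who look like those already chosen, to diversify the pool.
                score -= diversity_penalty * max(cosine(user_vectors[u], user_vectors[s])
                                                 for s in selected)
            if score > best_score:
                best_user, best_score = u, score
        selected.append(best_user)
    return selected
```

In the actual paper the feedback likelihood is learned in the first phase; here it is simply taken as an input array.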

The experiments simulate a real-world online setting by sampling n% of the users as the selection pool. Performance was evaluated with the RMSE (root-mean-square error) metric on three large-scale datasets, against Random, Frequent Users, Distance, and Anava baselines. The results indicate that the proposed ExcUseMe algorithm outperforms all the baseline models, moreover at a lower computational cost. The results also point out the importance and contribution of positive user feedback; in other words, the key to making this method produce meaningful output is the participating users.
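For reference, the RMSE metric mentioned above has the standard definition on predicted vs. observed ratings; a one-line version (not specific to the paper) looks like this:

```python
import numpy as np

def rmse(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Root-mean-square error between predicted and observed ratings."""
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))
```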

This paper highlights a new approach for CF to solve the cold-start problem, and the approach is practical enough to be adopted in real-world online systems. The main contributions are 1) the real-world user selection method, 2) an efficient algorithm, and 3) strong model performance in most scenarios. Some comments on this paper: 1) the algorithm is most valuable in an online experimental setting, but the experiments simulate it with offline settings and datasets; an experiment closer to a live deployment would better demonstrate the advantages of this approach; 2) the data filtering limits users to those with between 20 and 300 ratings, which reduces the chance of providing useful suggestions for long-tail items; 3) in a real-world recommender system, a small amount of user feedback drives the resulting recommendations, so there might be incentives to manipulate the experiment results in some way.

Reference


Michal Aharon, Oren Anava, Noa Avigdor-Elgrabli, Dana Drachsler-Cohen, Shahar Golan, and Oren Somekh. 2015. ExcUseMe: Asking Users to Help in Item Cold-Start Recommendations. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys '15). ACM, New York, NY, USA, 83-90. DOI=http://dx.doi.org/10.1145/2792838.2800183


Tuesday, January 19, 2016

Relescope: An Experiment in Accelerating Relationships

Reading Summary

Helping people digest information is always an interesting research question. In this paper, the authors produce a short 1-2 page report for each conference attendee based on their previous work. The report provides a meaningful view of the relationship network at the conference and aims to help attendees choose suitable social activities during the event, for example recognizing and talking to particular people. The authors sent out a questionnaire to examine the validity of the report, and the survey showed the effectiveness of this application. Moreover, newcomers to the conference benefited more than senior participants.

This points to a fundamental aspect of how humans digest information: the "device" that delivers personalized information to users. The paper was published in 2005; at that time, since cell phones were not yet widespread, the authors distributed the application's output on paper. A small piece of paper can be useful and convenient for people to access and carry information during a conference, which is why, even today, paper handouts remain a staple of academic conferences.

The paper-based approach has natural limitations: the information cannot be updated and there is no interaction. It is also hard to collect user behavior or feedback with this approach. With the popularity of mobile and wireless technology, it is now possible for people to stay online all day, and social media, e.g., Facebook, Twitter, and LinkedIn, have started to pull participants' attention away from the physical environment. People now interact in both physical and virtual spaces, and the exploration of social community behavior is a popular research topic today, for instance human behavior and interaction on Twitter during a conference.

Future work might focus on personalized recommendation; for example, Conference Navigator tries to provide a personalized application interface for conference participants. However, there is still an open question of how to help newcomers integrate into the conference environment and support their future academic careers, or more precisely, how to promote meaningful social connections. How to evaluate the effectiveness of such social connections remains a challenge.

Reference:
  1. Farrell, S., Campbell, C., and Myagmar, S. (2005) Relescope: an experiment in accelerating relationships. In: Proceedings of CHI '05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, USA, ACM, pp. 1363-1366, also available at http://dx.doi.org/10.1145/1056808.1056917.