Tuesday, October 20, 2015

A summary review of three recommendation systems for academia.


Summary

A recommendation system is a way to help users better retrieve and digest the enormous amount of information available. In [1], the authors built a committee-candidate recommendation system to help conference organizers. This is one sub-domain of expert-finding research. They used the social network of the program committee (PC), publication history, and topical expertise matching to generate a list of potential committee members. They found that all three prediction features contribute to useful recommendation results.
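One way to think about combining several prediction features (social network, publication history, topical match) is a weighted linear combination of per-candidate scores. The sketch below is purely illustrative; the feature names, values, and equal weights are assumptions, not the model from [1].

```python
def combine_scores(candidates, weights):
    """Rank candidates by a linear combination of feature scores.
    Features ("social", "history", "topic") and weights here are
    illustrative stand-ins, not the paper's actual model."""
    ranked = sorted(
        candidates,
        key=lambda c: sum(weights[f] * c[f] for f in weights),
        reverse=True)
    return [c["name"] for c in ranked]

# hypothetical candidates with normalized per-feature scores
candidates = [
    {"name": "alice", "social": 0.9, "history": 0.2, "topic": 0.4},
    {"name": "bob",   "social": 0.3, "history": 0.8, "topic": 0.9},
]
weights = {"social": 1.0, "history": 1.0, "topic": 1.0}
print(combine_scores(candidates, weights))  # ['bob', 'alice']
```

In practice the weights would be tuned against held-out data rather than fixed by hand.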

In [2], the authors aimed to help scholars choose a suitable journal for their work. They built a system that asks users to input their paper's title, abstract, and domain tags. Based on this input, they proposed an information retrieval model that generates a list of high-similarity journals. This is essentially a content-based approach using the BM25 algorithm. The study focuses on Elsevier journals, which might favor papers in the publisher's disciplines. When enough sample papers are available, the recommendation performance is reasonable.
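The content-based matching can be sketched with a plain Okapi BM25 scorer over tokenized "journal profiles". The toy journal names and token lists below are invented for illustration and are not from the Elsevier system.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

# toy "journal profiles" built from past titles/abstracts (hypothetical)
journals = {
    "J. Machine Learning": "neural network learning classification model".split(),
    "J. Databases":        "query index transaction storage relational".split(),
}
query = "deep neural network classification".split()
names = list(journals)
ranked = sorted(zip(names, bm25_scores(query, [journals[n] for n in names])),
                key=lambda x: -x[1])
print(ranked[0][0])  # the machine-learning journal ranks first
```

A real system would tokenize the submitted title and abstract, score against every journal's profile, and return the top matches.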

In [3], the paper helps users search for relevant research publications based on a reference list. The idea was to go beyond keyword-based search, so it could be treated as an extension of information retrieval research. They proposed a system that asks the user to input a list of reference papers and builds a citation graph from them. This is a graph-based approach to recommendation. The idea behind this system suggests the potential of secondary (or deeper) search in other applications.
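A minimal version of the graph-based idea is to rank non-seed papers by how many citation links connect them to the user's reference list. This neighbor-count heuristic is a simplified stand-in for the more sophisticated graph algorithms such systems actually use (e.g., random walks); the toy graph is invented for illustration.

```python
from collections import defaultdict

def recommend_from_references(edges, seeds, top_k=3):
    """Rank non-seed papers by the number of citation links
    (in either direction) connecting them to the seed set."""
    neighbors = defaultdict(set)
    for src, dst in edges:              # src cites dst
        neighbors[src].add(dst)
        neighbors[dst].add(src)
    seed_set = set(seeds)
    scores = {p: len(neighbors[p] & seed_set)
              for p in neighbors if p not in seed_set}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# toy citation graph: (citing paper, cited paper)
edges = [("A", "X"), ("B", "X"), ("A", "Y"), ("C", "Z"), ("B", "W")]
print(recommend_from_references(edges, seeds=["A", "B"]))
# "X" is linked to both seed papers, so it ranks first
```

The same skeleton extends naturally to weighted edges or multi-hop scoring.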

Thoughts

The idea of a recommendation system can be applied to several research questions. However, it is quite difficult to evaluate the effectiveness of recommendation results. There are three main directions to address this issue: 1) ground truth [1][2]; 2) user study [3]; 3) domain experts (e.g., knowledge ontologies, expert review).

  1. The ground truth approach is widely used in many data mining studies: the proposed model is compared against previously collected, real-world, user-generated data. This is one way to claim the effectiveness of a model or system. However, there are two issues. First, for some research topics it is quite hard to obtain the ground truth (or it may not be reproducible). Second, this encourages research to "fit" a model to the existing dataset; the model might not generalize to newly arriving data or features.
  2. The user study approach is another widely used approach in cross-domain studies. For example, psychologists studying human behavior hire participants to run experiments in a closed, controlled environment. Alternatively, a deployed recommendation system can record user feedback to evaluate or improve its effectiveness. However, the cost of a user study is high, whether hiring participants or building a system with a large number of users. Moreover, a closed, controlled experiment might lack ecological validity, so the conclusions might not carry over to the real world.
  3. The domain expert approach is used more often in the social sciences. Researchers invite domain experts to verify the results, relying on the experts' credibility to establish the effectiveness of the findings. Alternatively, some research focuses on building ontologies of domain knowledge, which is a way to transfer domain knowledge into a recommendation model. However, the cost of inviting domain experts is high, and the experts' comments may be inconsistent or even contradictory. The same issue exists in domain-knowledge building: constructing a complete and logically rigorous ontology is still a challenging research problem.
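For the ground-truth direction, a common metric is precision@k: how many of the top-k recommendations appear in the observed real-world data (e.g., the PC members who actually served, as in [1]). The names below are made up for illustration.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations found in the
    ground-truth relevant set."""
    top = recommended[:k]
    return sum(1 for item in top if item in relevant) / k

# hypothetical example: ranked candidates vs. actual PC members
recommended = ["alice", "bob", "carol", "dave", "erin"]
actual_pc   = {"bob", "erin", "frank"}
print(precision_at_k(recommended, actual_pc, k=5))  # 0.4
```

Recall@k and mean average precision follow the same pattern, trading off how much of the ground truth the list covers.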

    References:
    1. Han, Shuguang, Jiepu Jiang, Zhen Yue, and Daqing He. “Recommending Program Committee Candidates for Academic Conferences.” In Proceedings of the 2013 Workshop on Computational Scientometrics: Theory & Applications, 1–6. CompSci ’13. New York, NY, USA: ACM, 2013. doi:10.1145/2508497.2508498.
    2. Kang, Ning, Marius A. Doornenbal, and Robert J.A. Schijvenaars. “Elsevier Journal Finder: Recommending Journals for Your Paper.” In Proceedings of the 9th ACM Conference on Recommender Systems, 261–64. RecSys ’15. New York, NY, USA: ACM, 2015. doi:10.1145/2792838.2799663.
    3. Küçüktunç, Onur, Erik Saule, Kamer Kaya, and Ümit V. Çatalyürek. “TheAdvisor: A Webservice for Academic Recommendation.” In Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries, 433–34. JCDL ’13. New York, NY, USA: ACM, 2013. doi:10.1145/2467696.2467752.
