Tuesday, January 17, 2017

About Entropy


  1. A more complete second law of thermodynamics (更完備的熱力學第二定律)
  2. Entropy (熵): Wiki
  3. Choon Hui Teo, Houssam Nassif, Daniel Hill, Sriram Srinivasan, Mitchell Goodman, Vijai Mohan, and S.V.N. Vishwanathan. 2016. Adaptive, Personalized Diversity for Visual Discovery. In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16). ACM, New York, NY, USA, 35-38. DOI: http://dx.doi.org/10.1145/2959100.2959171
  4. Discuss How to model Cognitive Biases
  5. Entropy-Based Decision Tree Induction (see the entropy sketch after this list)
  6. Special Issue "Machine Learning and Entropy: Discover Unknown Unknowns in Complex Data Sets"
  7. A simple note of my thinking
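
A minimal sketch of the two quantities behind item 5 (entropy-based decision tree induction): Shannon entropy and information gain. The toy weather/temperature dataset below is made up purely for illustration.

```python
# Shannon entropy H(Y) = -sum_i p_i * log2(p_i), and information gain
# IG(Y, A) = H(Y) - sum_v (|Y_v| / |Y|) * H(Y_v), the split criterion used in
# entropy-based decision tree induction (e.g. ID3). The toy data is made up.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Reduction in entropy obtained by splitting on one attribute."""
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Made-up dataset: (weather, temperature) -> play?
rows = [("sunny", "hot"), ("sunny", "cool"), ("rainy", "cool"), ("rainy", "hot")]
labels = ["no", "yes", "yes", "no"]
print([information_gain(rows, labels, i) for i in range(2)])  # [0.0, 1.0]
```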


Thursday, January 5, 2017

Leap Year Workshop: note

Leap Year Workshop: the research opportunity.
@ Department of Biomedical Informatics, University of Pittsburgh

The order, titles, and notes for the presentations.

  • Balaji Palanisamy: Protecting Time-varying Privacy with Self-emerging Data
  • Kayhan Batmanghelich: An Exciting New Horizon: Medical Image Computing Meets EHR
  • Daniel Mosse: OCCAM: define, run, curate, visualize experiments for your group, your class, your organization and/or the world
    • Store and archive all the research data in one place
  • Daqing He: Intelligent Access and Deep Representation for Medical Tasks
    • Deep learning with NLP
  • Rami Melhem: Distributed Graph Analytics
    • big graph analysis platform
  • Hochheiser, Harry: Interactive graphical tools for robust and reproducible data interpretation
    • detect and avoid bias using visualization interface
    • What is the bias in the medical environment?
  • Michael Becich: Towards a Pitt Data Commons
    • Potential Pitt funding and grants.
    • Big data mailing list.
  • Don Taylor: How to factor industry into academic commercial translation
    • Support from the university leadership.
    • UPMC is one example of commercialization.
  • Peter Brusilovsky:  Data Driven Education
    • Use the proposed system on the Pitt campus?
  • Madhavi Ganapathiraju: Computational and collective intelligence for translating protein interaction predictions
    • Identify the highest-impact protein interaction.
    • Using visualization techniques.
  • Greg Cooper: technology and workforce
    • Computerization and employment
    • How to help people adapt to an increasingly computerized environment?
  • Richard Boyce: Bridging islands of information to establish an integrated knowledge base of drugs and health outcomes of interest.
    • A control panel that integrates medical records and research publications.
    • Idea: put all the reading material in Google Drive together with reading notes. (Try Blogger maybe?)
  • Dmitriy Babichenko: Designing the Model Patient: Data-Driven Virtual Patients in Health Sciences Education
    • How to model the case? What is the effort?
  • Xinghua Lu: From big data to bed side: A machine learning approach
    • Personalized medical treatment.
    • Cancer pathway detection
  • Yu-Ru Lin: Mining Insights from Disasters Using Social Sensors
    • Computational focus groups.  
  • David Boone: Pipeline into computational research: educational outreach internships
    • Internship for high school and undergraduate students.
    • Any interested students? To be a mentor?
    • UPCI academy
  • Milos Hauskrecht: Real-time EHR data analysis, monitoring, and alerting.
    • Presentation of many data types
    • Preparing the data so that it is suitable for machine learning
    • Bedside medical machine learning
  • Liz Lyon: Research transparency: don't just talk the talk, walk the walk
    • Put transparency into the research cycle.
  • Songjin Liu: Efficient exact algorithms and high-performance computing for Bioinformatics
    • NP-hard problems in biological systems and research.
    • The approximation algorithm for NP-hard problems.



Tuesday, January 3, 2017

Attention and visual memory in visualization and computer graphics

Note

A survey paper that discusses attention and visual memory in visualization and computer graphics. It first discusses the effect of preattentive processing, which is quick, "pop-out", parallel processing (versus serial processing); a minimal pop-out sketch follows the list of theories below. The theories of preattentive processing include:

  • Feature Integration Theory: selective perception; individual preattentive features are detected in parallel, while feature conjunctions require serial attention.
  • Texton Theory: textons include elongated blobs (lines, rectangles, ellipses, etc.), terminators (ends of line segments), and crossings of line segments.
  • Similarity Theory: structural units that share a common property; given limited short-term visual memory, the more similar a structure is to the target, the more processing it requires.
  • Guided Search Theory: visual search guided by both top-down and bottom-up information.
  • Boolean Map Theory: takes the location of information into account; a pattern is processed and held in memory as a boolean map while searching for the target.
  • Ensemble Coding: guides attention in a large scene by capturing ensemble (summary) differences.
  • Feature Hierarchy: the most important data should be highlighted by color or other dominant visual features.
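
Below is a minimal, illustrative sketch of the color "pop-out" effect described above, using a matplotlib scatter plot with a random layout (not taken from the paper): one red target among many blue distractors is detected almost immediately, regardless of how many distractors there are.

```python
# Minimal sketch of preattentive color "pop-out": a single red target among
# blue distractors is found in parallel, roughly independent of set size.
# The random layout below is purely illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
distractors = rng.uniform(0, 1, size=(200, 2))   # many blue distractors
target = rng.uniform(0, 1, size=2)               # one red target

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(distractors[:, 0], distractors[:, 1], c="steelblue", s=30, label="distractors")
ax.scatter(target[0], target[1], c="crimson", s=60, label="target (pops out)")
ax.set_xticks([]); ax.set_yticks([])
ax.legend(loc="upper right")
plt.show()
```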


The second section of the paper discusses visual expectation and memory.

  • Eye Tracking: analysis of eye gaze patterns; the eye repeatedly scans the visual information if no preattentive feature pops out.
  • Postattentive Amnesia: conjunction features with no preattentive effect, i.e. they cannot be semantically recognized and remembered; once attention moves on, searching for them again is no faster than a traditional serial search.
  • Attention guided by memory and prediction: a viewer finds a target more rapidly in a subset of the display that is presented repeatedly; there is also an unconscious tendency of a viewer to look for targets in novel locations in the display.
  • Change blindness: changes that users cannot detect even when actively searching for them, e.g. comparing two pictures, one of which has been modified.
  • Inattentional blindness: users can completely fail to perceive visually salient objects or activities, e.g. the "invisible gorilla" experiment.
  • Attentional Blink: the limited ability of users to process information that arrives in quick succession, even when that information is presented at a single location in space.
The vision models: 
  • Visual Attention: perceptual salience (e.g. number of colors; does the visualization perform as expected?), predicting attention (where a viewer will focus their attention), and directing attention (drawing the viewer's eye).
  • Visual Memory: make sure users do not miss important information, to avoid the change blindness and inattentional blindness effects.
Current challenges:
  • Visual Acuity: what is the information-processing capacity of the visual system?
  • Aesthetics: understand the perception of aesthetics
  • Engagement: consider the factors of visual interaction and decision making.


Reference
  1. Healey, Christopher, and James Enns. "Attention and visual memory in visualization and computer graphics." IEEE Transactions on Visualization and Computer Graphics 18.7 (2012): 1170-1188.

Monday, January 2, 2017

Empirical studies in information visualization: Seven scenarios

Note

A useful reference for visual tool evaluation. The paper provides seven scenarios that researchers can easily follow to conduct user studies.

  1. Understand Environments and Work Practices (UWP)
  2. Evaluating Visual Data Analysis and Reasoning (VDAR)
  3. Evaluating Communication Through Visualization (CTV)
  4. Evaluating Collaborative Data Analysis (CDA)
  5. Evaluating User Performance (UP)
  6. Evaluating User Experience (UE)
  7. Evaluating Visualization Algorithm (VA)
Reference
  1. Lam, Heidi, et al. "Empirical studies in information visualization: Seven scenarios." IEEE Transactions on Visualization and Computer Graphics 18.9 (2012): 1520-1536.

A nested model for visualization design and validation

Note

A four-layer nested model to analyze and evaluate visualization designs. The layers are:

  1. Domain problem and data characterization: the designer should follow the "vocabulary" in each domain, e.g. business or biology.  
  2. Operation and data type abstraction: data type transformation
  3. Visual encoding and interaction design: the cost of interaction
  4. Algorithm Design: run-time speed and memory performance

To evaluate:

  1. Vocabulary: to discuss the terminology in different domains
  2. Interactive Loops and Rapid Prototyping: looping and refining. 
  3. Domain Threats: mischaracterized problem
  4. Abstraction Threats: the abstraction does not solve the characterized problem of the target users.
  5. Encoding and Interaction Threats: the encoding does not communicate effectively.
  6. Algorithm Threats: memory performance.

Reference
  1. Munzner, Tamara. "A nested model for visualization design and validation." IEEE Transactions on Visualization and Computer Graphics 15.6 (2009): 921-928.

A design space of visualization tasks

Note

A taxonomy of data visualization tasks. The authors define the design space dimensions as:

  • Goal: Exploratory Analysis (e.g. undirected search), Confirmatory Analysis (directed search), Presentation (exhibiting confirmed analysis results) 
  • Means: Navigation (e.g. browsing or searching), (Re-)organization (e.g. extraction, abstraction), Relation (e.g. variations, discrepancies)
  • Characteristics: Low-level (e.g. values, objects) & High-level (e.g. trends, outliers, clusters, frequency, distribution, correlation, etc.) data characteristics
  • Target: Attribute Relations (e.g. Temporal and Spatial relations), Structural relation (e.g. causal relations, topological relations)
  • Cardinality: Single (highlight detail), Multiple (putting data into context), and All Instances (getting the overview). 


The classification can be expressed as a semantic tuple, e.g. (exploratory, search, trend, attrib(variable), all). Such a tuple can then be used to determine suitable techniques; a hedged sketch of this matching idea follows.
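
A hedged sketch of how such a semantic tuple might be represented and matched against a catalogue of techniques. The Task fields follow the five dimensions above, but the catalogue entries and the scoring rule are invented for illustration and are not the paper's actual matching procedure.

```python
# Hedged sketch: represent a task as a tuple over the five design-space
# dimensions and rank techniques from a made-up catalogue by how many of
# their stated requirements the task satisfies.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    goal: str            # e.g. "exploratory", "confirmatory", "presentation"
    means: str           # e.g. "search", "navigation", "reorganization"
    characteristic: str  # e.g. "trend", "outlier", "distribution"
    target: str          # e.g. "attrib(variable)", "structural"
    cardinality: str     # e.g. "single", "multiple", "all"

# Hypothetical catalogue: each technique lists the tuple values it supports.
CATALOGUE = {
    "line chart":      {"goal": {"exploratory", "presentation"}, "characteristic": {"trend"}},
    "scatter plot":    {"goal": {"exploratory"}, "characteristic": {"correlation", "outlier"}},
    "node-link graph": {"target": {"structural"}, "characteristic": {"cluster"}},
}

def score(task: Task, requirements: dict) -> int:
    """Count how many stated requirements the task tuple satisfies."""
    return sum(getattr(task, dim) in allowed for dim, allowed in requirements.items())

task = Task("exploratory", "search", "trend", "attrib(variable)", "all")
ranked = sorted(CATALOGUE, key=lambda t: score(task, CATALOGUE[t]), reverse=True)
print(ranked)  # "line chart" ranks first for this example tuple
```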

Reference
  1. Schulz, Hans-Jörg, et al. "A design space of visualization tasks." IEEE Transactions on Visualization and Computer Graphics 19.12 (2013): 2366-2375.

Interactive dynamics for visual analysis

Note

A taxonomy of tools that support the fluent and flexible use of visualizations.

Pay more attention to the Coordinate and Organize sections.

Reference
  1. Heer, Jeffrey, and Ben Shneiderman. "Interactive dynamics for visual analysis." Queue 10.2 (2012): 30.

Task taxonomy for graph visualization

Note

Graph-specific visualization objects consist of nodes, links, paths, graphs, connected components, clusters, and groups. This paper discusses the possible tasks for examining a tool based on these objects.

The low-level tasks include:

  • Retrieve value
  • Filter
  • Compute the Derived Value
  • Find Extremum
  • Sort
  • Determine Range
  • Characterize Distribution
  • Find Anomalies
  • Cluster
  • Correlate
Tasks commonly encountered while analyzing graph data:
  • Topology-based Tasks: adjacency (direct connection), accessibility (direct or indirect connection), common connection, connectivity (see the sketch after this list)
  • Attribute-based Tasks: On the Nodes, On the Links
  • Browsing Tasks: Follow path, Revisit
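
A minimal sketch of the first three topology-based tasks (adjacency, accessibility, common connection) on a toy adjacency-list graph; the graph itself is made up.

```python
# Minimal sketch of three topology-based tasks on an undirected graph stored
# as an adjacency list: adjacency (direct connection), accessibility (direct
# or indirect connection), and common connection. The toy graph is made up.
from collections import deque

graph = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A"},
    "D": {"B"},
    "E": set(),          # isolated node
}

def adjacent(u, v):
    """Direct connection."""
    return v in graph[u]

def accessible(u, v):
    """Direct or indirect connection (breadth-first search)."""
    seen, queue = {u}, deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

def common_connections(u, v):
    """Nodes directly connected to both u and v."""
    return graph[u] & graph[v]

print(adjacent("A", "D"))            # False
print(accessible("A", "D"))          # True, via B
print(common_connections("C", "B"))  # {'A'}
```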
Some more high-level tasks: 
  • Compare two web graphs for differences, e.g. two recipe graphs.
  • Node duplication
  • Some tasks need the user's interpretation
Reference
  1. Lee, Bongshin, et al. "Task taxonomy for graph visualization." Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization. ACM, 2006.

Sunday, January 1, 2017

Design considerations for collaborative visual analytics.

Note

This paper discusses the factors that make up a collaborative visual analytics environment. Some of the theory overlaps with online community operation. A successful collaboration requires an effective division of labor among participants, and the authors argue for three factors here: modularity, granularity, and cost of integration. In other words, the tasks should be split, carried out, and integrated at a reasonable cost; if any of these factors is too expensive, a successful collaboration is unlikely. For the modularity factor, the authors provide an information visualization reference model, which helps decompose the visualization process into data acquisition and representation, visual encoding, display, and interaction. Each of these components can be a reasonable module from which to start collaborative work. For the granularity factor, the authors discuss the sensemaking model; for instance, in cooperative scenarios, collaborators can benefit immediately from the actions of others. It is hard to facilitate cooperation if incentives are lacking.

The grounding and sensemaking principles are listed below:

  • Discussion models, awareness
  • Reference & deixis, pointing
  • Incentives & engagement, personal relevance, social-psychological incentives, gameplay
  • Identity & trust & reputation, identity presentation
  • Group dynamics, management, size, diversity
  • Consensus and decision making, information distribution & presentation

A good reference for considering collaboration theory in different scenarios, e.g. business intelligence systems. For social data analysis, see the extended reading at [2].

Reference
  1. Heer, Jeffrey, and Maneesh Agrawala. "Design considerations for collaborative visual analytics." Information Visualization 7.1 (2008): 49-62.
  2. Wattenberg, Martin, and Jesse Kriss. "Designing for social data analysis." IEEE Transactions on Visualization and Computer Graphics 12.4 (2006): 549-557.

egoSlider: Visual analysis of egocentric network evolution.

Note

This paper proposes a tool to visualize the dynamic and temporal information of ego-networks. The primary goal of the tool is to support the study of exploratory patterns across domains. For instance, how does an ego-network change over time in relation to personal health? The contribution lies in three layers: 1) macroscopic: summarizing the entire set of ego-networks; 2) mesoscopic: overviewing particular individuals' ego-network evolution; 3) microscopic: displaying detailed temporal information of egos and their alters.



The visualization idea may come from a different discipline; e.g. sociology research may focus more on social interaction grounded in well-developed social theory. It could be a great contribution to design such a tool to help those researchers better facilitate, utilize, and digest the generated data.

Reference
  1. Wu, Yanhong, et al. "egoSlider: Visual analysis of egocentric network evolution." IEEE Transactions on Visualization and Computer Graphics 22.1 (2016): 260-269.

Reducing snapshots to points: A visual analytics approach to dynamic network exploration.

Note

This paper uses a dimensionality reduction technique to reduce complex, multi-dimensional graph snapshots to points on a 2-dimensional plot. The plot shows patterns as distinct clusters, and the user can further explore the generated points to see the details of the underlying network.
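
A hedged sketch of the general idea: describe each network snapshot with a small feature vector and project the snapshots to 2D points. The features (node count, edge count, density, clustering) and the use of MDS are assumptions for illustration; the paper's actual snapshot descriptors and projection may differ.

```python
# Sketch: reduce a sequence of network snapshots to 2D points by computing a
# feature vector per snapshot and projecting with MDS. Features and projection
# method are illustrative assumptions, not the paper's exact pipeline.
import networkx as nx
import numpy as np
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

def snapshot_features(g):
    return [g.number_of_nodes(), g.number_of_edges(),
            nx.density(g), nx.average_clustering(g)]

# Toy sequence of snapshots: a random graph that gradually densifies.
snapshots = [nx.gnp_random_graph(30, p, seed=i)
             for i, p in enumerate(np.linspace(0.05, 0.4, 10))]

X = StandardScaler().fit_transform([snapshot_features(g) for g in snapshots])
points = MDS(n_components=2, random_state=0).fit_transform(X)
print(points.shape)  # (10, 2): one 2D point per snapshot, ready to plot or cluster
```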



This may help the user understand deep learning with neural networks, in particular the feature extraction process. But the challenge remains of how to explain/label the projected clusters; there is no guarantee of a meaningful (or at least human-understandable) pattern in each round of exploration.

Reference
  1. van den Elzen, Stef, et al. "Reducing snapshots to points: A visual analytics approach to dynamic network exploration." IEEE Transactions on Visualization and Computer Graphics 22.1 (2016): 1-10.

Information visualization and visual data mining


Note

A good survey paper for following the trends of data visualization and visual data mining. The paper provides a clear classification of visual data mining work. The author describes the visual data exploration process as a hypothesis generation process: a visualization interface provides the user with an overview of the dataset, and based on the resulting insight, the user can explore/filter/verify the findings to answer the hypothesis; the hypothesis itself can be generated by the user, by statistics, or by machine learning. On the other hand, visual data exploration usually follows a three-step looping process: overview, filter, and details-on-demand. Different insights will pop up as the user explores the data through the designed interface.

Visual data mining consists of three components: 1) the data type to be visualized: 1D, 2D, multi-dimensional (ND), text and hypertext, and algorithm/software data; 2) the visualization technique: standard 2D/3D displays, geometrically transformed displays, icon-based displays, dense pixel displays, and stacked displays; 3) the interaction and distortion technique: projection, filtering, zooming, interactive distortion, and linking and brushing. Each category comes with reference papers that are worth further reading.
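
A minimal sketch of the overview/filter/details-on-demand loop on a tabular dataset, assuming pandas; the column names, data, and threshold are hypothetical.

```python
# Minimal sketch of the "overview first, filter, then details-on-demand" loop
# on a tabular dataset. The column names, data, and threshold are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["east", "west", "north"], size=500),
    "sales": rng.gamma(shape=2.0, scale=100.0, size=500),
})

# 1) Overview: aggregate summary of the whole dataset.
print(df.groupby("region")["sales"].describe())

# 2) Filter: zoom in on a subset suggested by the overview (a hypothesis).
subset = df[(df["region"] == "west") & (df["sales"] > 300)]

# 3) Details-on-demand: inspect the individual records behind the pattern.
print(subset.sort_values("sales", ascending=False).head(10))
```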

Reference
  1. Keim, Daniel A. "Information visualization and visual data mining." IEEE Transactions on Visualization and Computer Graphics 8.1 (2002): 1-8.