
Tuesday, January 3, 2017

Attention and visual memory in visualization and computer graphics

Note

This survey paper discusses attention and visual memory in visualization and computer graphics. It first covers preattentive processing: fast, parallel "pop-out" detection of visual features (as opposed to slow, serial search). The theories of preattentive processing include:

  • Feature Integration Theory: early vision extracts a fixed set of basic features in parallel into separate feature maps; a single feature can be detected preattentively, while conjunctions of features require serial, focused attention.
  • Texton Theory: the basic "textons" are elongated blobs (lines, rectangles, ellipses, etc.), terminators (ends of line segments), and crossings of line segments; differences in textons are detected preattentively.
  • Similarity Theory: search operates over structural units that share a common property; with limited short-term visual memory, the more similar the distractors are to the target, the more information there is to process and the harder the search.
  • Guided Search Theory: visual search is guided by combining bottom-up feature activation with top-down, goal-driven expectations.
  • Boolean Map Theory: the visual system attends to the locations of one feature value at a time, holding that pattern (a "boolean map") in memory while searching for the target.
  • Ensemble Coding: viewers rapidly extract summary statistics of a large scene, which guide attention toward regions whose ensemble differs.
  • Feature Hierarchy: some visual features dominate others (e.g. color over shape), so the most important data should be highlighted with the dominant feature.


The second section of the paper discusses visual expectation and memory:

  • Eye Tracking: analysis of eye-gaze patterns; when no preattentive feature pops out, the eye repeatedly scans over the visual information.
  • Postattentive Amnesia: once attention leaves an object, little visual detail is remembered; conjunction targets with no preattentive effect must therefore be searched anew each time, so prior viewing does not speed the search.
  • Attention guided by memory and prediction: first, a viewer finds a target more rapidly in a subset of the display that is presented repeatedly; second, viewers show an unconscious tendency to look for targets in novel locations in the display.
  • Change Blindness: large changes in a scene can go undetected even when the user actively searches for them, e.g. when comparing two pictures, one of which has been modified.
  • Inattentional Blindness: the user can completely fail to perceive visually salient objects or activities when attention is engaged elsewhere, e.g. the "invisible gorilla" experiment.
  • Attentional Blink: the limited ability of users to process information that arrives in quick succession, even when that information is presented at a single location in space.
The vision models:
  • Visual Attention: perceptual salience (e.g. number of colors; does the visualization perform as expected?), predicting attention (predicting where a viewer will focus), and directing attention (drawing the eye to important items).
  • Visual Memory: making sure the user does not miss important information, to avoid the change blindness and inattentional blindness effects.
Current challenges:
  • Visual Acuity: what is the information-processing capacity of the visual system?
  • Aesthetics: understanding the perception of aesthetics.
  • Engagement: accounting for visual interaction and decision-making.


Reference
  1. Healey, Christopher, and James Enns. "Attention and visual memory in visualization and computer graphics." IEEE Transactions on Visualization and Computer Graphics 18.7 (2012): 1170-1188.

Monday, January 2, 2017

Empirical studies in information visualization: Seven scenarios

Note

A useful reference for evaluating visualization tools. The paper provides seven scenarios that researchers can follow to conduct user studies:

  1. Understand Environments and Work Practices (UWP)
  2. Evaluating Visual Data Analysis and Reasoning (VDAR)
  3. Evaluating Communication Through Visualization (CTV)
  4. Evaluating Collaborative Data Analysis (CDA)
  5. Evaluating User Performance (UP)
  6. Evaluating User Experience (UE)
  7. Evaluating Visualization Algorithms (VA)
Reference
  1. Lam, Heidi, et al. "Empirical studies in information visualization: Seven scenarios." IEEE Transactions on Visualization and Computer Graphics 18.9 (2012): 1520-1536.

A nested model for visualization design and validation

Note

A four-layer nested model to analyze and evaluate visualization designs. The layers are:

  1. Domain problem and data characterization: the designer should adopt the "vocabulary" of each domain, e.g. business or biology.
  2. Operation and data type abstraction: mapping the domain problem into generic operations and data types.
  3. Visual encoding and interaction design: choosing encodings and weighing the cost of interaction.
  4. Algorithm design: run-time speed and memory.

For evaluation:

  1. Vocabulary: discussing the terminology of different domains.
  2. Interactive loops and rapid prototyping: looping and refining the design.
  3. Domain Threats: the problem is mischaracterized.
  4. Abstraction Threats: the abstraction does not solve the characterized problem of the target users.
  5. Encoding and Interaction Threats: the design does not communicate effectively.
  6. Algorithm Threats: poor time or memory performance.

Reference
  1. Munzner, Tamara. "A nested model for visualization design and validation." IEEE transactions on visualization and computer graphics 15.6 (2009): 921-928.

A design space of visualization tasks

Note

A taxonomy of data visualization tasks. The authors define the design space along five dimensions:

  • Goal: Exploratory Analysis (e.g. undirected search), Confirmatory Analysis (directed search), Presentation (exhibiting confirmed analysis results) 
  • Means: Navigation (e.g. browsing or searching), (Re-)organization (e.g. extraction, abstraction), Relation (e.g. variations, discrepancies)
  • Characteristics: Low-level (e.g. values, objects) & High-level (e.g. trends, outliers, clusters, frequency, distribution, correlation, etc.) data characteristics
  • Target: Attribute Relations (e.g. Temporal and Spatial relations), Structural relation (e.g. causal relations, topological relations)
  • Cardinality: Single (highlight detail), Multiple (putting data into context), and All Instances (getting the overview). 


The classification can be expressed as a semantic tuple, e.g. (exploratory, search, trend, attrib(variable), all). Such tuples can then be used to look up suitable visualization techniques.
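As a rough illustration of how such tuples could drive technique selection (the catalogue entries and task-technique pairings below are invented, not from the paper), a wildcard match over the five dimensions is enough:

```python
# Hypothetical sketch: Schulz et al.-style task tuples of the form
# (goal, means, characteristics, target, cardinality), matched against
# a tiny made-up catalogue of techniques. "*" is a wildcard.
CATALOGUE = [
    (("exploratory", "search", "trend", "*", "all"), "line chart small multiples"),
    (("confirmatory", "relation", "correlation", "*", "all"), "scatterplot matrix"),
    (("presentation", "*", "*", "*", "single"), "annotated detail view"),
]

def matches(pattern, task):
    """True if every slot of the pattern equals the task slot or is a wildcard."""
    return all(p == "*" or p == t for p, t in zip(pattern, task))

def suitable_techniques(task):
    """Return every catalogued technique whose tuple matches the task."""
    return [name for pattern, name in CATALOGUE if matches(pattern, task)]

task = ("exploratory", "search", "trend", "attrib(variable)", "all")
print(suitable_techniques(task))  # → ['line chart small multiples']
```

The per-dimension wildcard keeps the lookup simple; a real mapping would likely score partial matches rather than filter exactly.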

Reference
  1. Schulz, Hans-Jörg, et al. "A design space of visualization tasks." IEEE Transactions on Visualization and Computer Graphics 19.12 (2013): 2366-2375.

Interactive dynamics for visual analysis

Note

A taxonomy of tools that support the fluent and flexible use of visualizations.

Pay particular attention to the Coordinate and Organize sections.

Reference
  1. Heer, Jeffrey, and Ben Shneiderman. "Interactive dynamics for visual analysis." Queue 10.2 (2012): 30.

Task taxonomy for graph visualization

Note

A graph-specific visualization consists of nodes, links, paths, graphs, connected components, clusters, and groups. This paper discusses the possible tasks for examining a tool based on these objects.

The low-level tasks include:

  • Retrieve value
  • Filter
  • Compute Derived Value
  • Find Extremum
  • Sort
  • Determine Range
  • Characterize Distribution
  • Find Anomalies
  • Cluster
  • Correlate
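A few of these low-level tasks can be sketched on a toy node table (the data and field names here are invented for illustration):

```python
# Toy node-attribute table illustrating Retrieve Value, Filter,
# Find Extremum, Sort, Determine Range, and Compute Derived Value.
nodes = [
    {"id": "A", "degree": 3},
    {"id": "B", "degree": 7},
    {"id": "C", "degree": 1},
    {"id": "D", "degree": 5},
]

retrieve = next(n for n in nodes if n["id"] == "B")      # Retrieve Value
filtered = [n for n in nodes if n["degree"] >= 3]        # Filter
extremum = max(nodes, key=lambda n: n["degree"])         # Find Extremum
ordered = sorted(nodes, key=lambda n: n["degree"])       # Sort
degrees = [n["degree"] for n in nodes]
value_range = (min(degrees), max(degrees))               # Determine Range
mean_degree = sum(degrees) / len(degrees)                # Compute Derived Value

print(extremum["id"], value_range, mean_degree)  # → B (1, 7) 4.0
```

Characterize Distribution, Find Anomalies, Cluster, and Correlate need statistics beyond one-liners, but follow the same pattern of operating on the attribute table.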
Tasks commonly encountered while analyzing graph data:
  • Topology-based Tasks: adjacency (direct connection), accessibility (direct or indirect connection), common connection, connectivity
  • Attribute-based Tasks: On the Nodes, On the Links
  • Browsing Tasks: Follow path, Revisit
Some higher-level tasks:
  • compare two web graphs for differences, e.g. two recipe graphs
  • detect duplicate nodes
  • some tasks that require the users' interpretation
Reference
  1. Lee, Bongshin, et al. "Task taxonomy for graph visualization." Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization. ACM, 2006.

Sunday, January 1, 2017

Design considerations for collaborative visual analytics.

Note

This paper discusses the factors that constitute a collaborative visual analytics environment; some of the theory overlaps with the operation of online communities. A successful collaboration requires an effective division of labor among participants, and the authors argue for three factors here: modularity, granularity, and cost of integration. In other words, the tasks should be split, carried out, and integrated at a reasonable cost; if any of these factors is too expensive, the scenario is unlikely to become a successful collaboration. For the modularity factor, the authors draw on the information visualization reference model, which decomposes the visualization process into data acquisition and representation, visual encoding, display, and interaction; each component can be a reasonable module at which to start collaborative work. For the granularity factor, the authors discuss the sensemaking model: in cooperative scenarios, collaborators should benefit immediately from the actions of others, because cooperation is hard to facilitate without such incentives.

The remaining design considerations are listed below:

  • Common ground & awareness: discussion models
  • Reference & deixis: pointing
  • Incentives & engagement: personal relevance, social-psychological incentives, gameplay
  • Identity, trust & reputation: identity presentation
  • Group dynamics: management, size, diversity
  • Consensus and decision making: information distribution & presentation

A good reference for applying collaboration theory in different scenarios, e.g. business intelligence systems. For social data analysis, see the extended reading at [2].

Reference
  1. Heer, Jeffrey, and Maneesh Agrawala. "Design considerations for collaborative visual analytics." Information visualization 7.1 (2008): 49-62.
  2. Wattenberg, Martin, and Jesse Kriss. "Designing for social data analysis." IEEE transactions on visualization and computer graphics 12.4 (2006): 549-557.

egoSlider: Visual analysis of egocentric network evolution.

Note

This paper proposes a tool to visualize the dynamic, temporal information of ego-networks. The primary goal of the tool is to support the study of exploratory patterns across domains. For instance, how does an ego-network change over time in relation to personal health? The contributions lie at three levels: 1) macroscopic: summarizing the entire ego-network data; 2) mesoscopic: overviewing a particular individual's ego-network evolution; 3) microscopic: displaying detailed temporal information about egos and their alters.



The visualization ideas may come from different disciplines; sociology research, for instance, may focus more on social interaction backed by developed social theory. It could be a great contribution to design such tools to help those researchers better facilitate, utilize, and digest the generated data.

Reference
  1. Wu, Yanhong, et al. "egoSlider: Visual analysis of egocentric network evolution." IEEE transactions on visualization and computer graphics 22.1 (2016): 260-269.

Reducing snapshots to points: A visual analytics approach to dynamic network exploration.

Note

This paper uses dimensionality-reduction techniques to reduce complex, multi-dimensional graph snapshots to points in a 2D plot. Patterns show up as distinct clusters, and the user can further explore the generated points to see the details of the network.



This may also help users understand the feature-extraction process in deep neural networks. But the challenge remains of how to explain/label the projected clusters: there is no guarantee of a meaningful (or at least human-understandable) pattern in each round of exploration.
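The paper's pipeline is richer than this, but the core snapshot-to-point idea (flattening each snapshot's adjacency matrix into a vector and projecting the vectors to 2D) can be sketched with a plain PCA on invented snapshots:

```python
import numpy as np

# Invented toy data: four 4-node network snapshots, two "dense" and two
# "sparse", flattened to vectors and projected to 2D with PCA. The paper
# uses a more elaborate pipeline; this only illustrates the basic idea.
dense = np.ones((4, 4)) - np.eye(4)   # fully connected snapshot
sparse = np.zeros((4, 4))
sparse[0, 1] = sparse[1, 0] = 1       # a single edge

snapshots = [dense, dense, sparse, sparse]
X = np.array([s.flatten() for s in snapshots])  # one row per snapshot

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
points = Xc @ Vt[:2].T                # 2D coordinates, one per snapshot

# Identical snapshots land on the same point, so the dense and sparse
# regimes separate into two clusters along the first axis.
print(np.round(points, 2))
```

On real data the clusters are of course fuzzier, and labeling what each cluster means is exactly the open challenge noted above.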

Reference
  1. van den Elzen, Stef, et al. "Reducing snapshots to points: A visual analytics approach to dynamic network exploration." IEEE transactions on visualization and computer graphics 22.1 (2016): 1-10.

Information visualization and visual data mining


Note

A good survey paper for following the trends of data visualization and visual data mining. It provides a clear classification of visual data mining work. The author describes the visual data exploration process as a hypothesis-generation process: a visualization interface gives the user an overview of the dataset; based on the insights gained, the user can explore/filter/verify findings to answer the hypothesis, which may be generated by the user, by statistics, or by machine learning. Visual data exploration usually follows a three-step loop: overview first, zoom and filter, then details-on-demand. Different insights jump out as the user explores the data through the designed interface.

Visual data mining is classified along three dimensions: 1) the data type to be visualized: one-dimensional, two-dimensional, multi-dimensional, text & hypertext, hierarchies & graphs, and algorithms & software; 2) the visualization technique: standard 2D/3D displays, geometrically transformed displays, icon-based displays, dense pixel displays, and stacked displays; 3) the interaction and distortion technique: projection, filtering, zooming, interactive distortion, and linking & brushing. Each category comes with reference papers that are worth further reading.

Reference
  1. Keim, Daniel A. "Information visualization and visual data mining." IEEE transactions on Visualization and Computer Graphics 8.1 (2002): 1-8.

Wednesday, December 28, 2016

Collaborative visual analysis with RCloud

Note

This paper discusses a collaborative visual analysis environment for teamwork. In a data-science project, it is very common to design, analyze, and deliver results to a target audience, be it a colleague, a customer, or your boss; this is a process of exploratory data analysis (EDA). The paper argues that such work is usually done with disjoint tools, i.e. coding in a scripting language while designing the interface with web techniques. This makes collaborative work very difficult, due to the lack of discoverability (code reuse), technology transfer (collaboration), and coexistence (with interactive visualization tools). Hence, the paper proposes a framework, RCloud, which uses R to integrate back-end analysis and front-end display behind a RESTful API; the basic idea is that every application natively presents its results to users through a web browser. The framework reuses and couples existing R packages.

Points: for small teams with low churn in project requirements, I think this framework would work well. However, as more and more projects (usually small, with immature results) go live, search and reuse may create extra workload for developers. Moreover, R packages may not be suitable for every practical problem, e.g. large-scale data storage or distributed computing tasks. Besides, there are more framework options that better facilitate collaboration between developers and designers, e.g. MVC frameworks. I think a good framework should stand apart from any specific language or technique, so that it can generally support dynamic real-world requirements.

I actually like this idea; it shows the value of delivering beta work to users. It would be good to put research findings or preliminary results on the web for potential collaboration, public exposure, and self-promotion. Another trend is using Scala to bundle analysis, implementation, and production.

Reference
  1. North, Stephen, et al. "Collaborative visual analysis with RCloud." Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on. IEEE, 2015.

EgoNetCloud: Event-based egocentric dynamic network visualization

Note

A quality work on network visualization: this paper proposes a visual analytics tool to display the structure and temporal dynamics of an egocentric dynamic network [1,3]. It considers three important design factors: 1) network simplification: showing all the links in the network graph is meaningless and overloads the user, so a reasonable way to "prune" the graph and highlight the important nodes is necessary. The authors first define a weighting function based on co-author count and ordering; they then try four different approaches to pruning nodes, so as to maximize an efficiency function, i.e. the total weight retained in the sub-graph. 2) temporal network: temporal information is presented as a horizon graph along a time axis, which makes it a simple task to identify the distribution over time; 3) graph layout: the layout is designed in 2D space. Because of the temporal relationships, the chart divides into several sub-graphs that are hard to fit with a regular force-directed layout, so the authors extend the stress model to compute the layout [2].
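The paper's weighting function and four pruning strategies are specific to the system; as a simplified stand-in, a greedy variant that keeps the ego plus the k highest-weight nodes looks like this (all names and weights below are invented):

```python
# Illustrative only: generic greedy pruning that keeps the ego plus the
# k heaviest nodes, maximizing total retained weight. The paper's actual
# weighting function (co-author counts and ordering) and its four
# pruning strategies differ from this sketch.
def prune_ego_network(ego, weights, edges, k):
    """Keep the ego and its k heaviest nodes; drop edges outside the kept set."""
    others = sorted((n for n in weights if n != ego),
                    key=lambda n: weights[n], reverse=True)
    kept = {ego, *others[:k]}
    kept_edges = [(a, b) for a, b in edges if a in kept and b in kept]
    return kept, kept_edges

weights = {"ego": 10, "alice": 5, "bob": 3, "carol": 1}
edges = [("ego", "alice"), ("ego", "bob"), ("ego", "carol"), ("alice", "bob")]
kept, kept_edges = prune_ego_network("ego", weights, edges, k=2)
print(sorted(kept))   # → ['alice', 'bob', 'ego']
print(kept_edges)     # → [('ego', 'alice'), ('ego', 'bob'), ('alice', 'bob')]
```

Greedy selection by node weight is the simplest policy; comparing several such policies against the retained-weight objective is essentially what the authors' four-strategy comparison does at a more sophisticated level.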

Points: 1) the research methodology of visual analytics, from design and implementation through case study to user study; the user-study design is a useful reference for my research. 2) treating a single publication as an event to form the egocentric network; this may support multiple use cases, e.g. urban computing, conferences, news events, etc. The system is suitable for exploring the relationships in a given dataset for temporal, egocentric tasks. 3) the sliders over time and weighting items are useful for a user exploring the content, and may help a user understand the deeper relationships of a given person. This idea may also link to the explanation function in recommender systems.

A citation worth reading: [4].

Reference
  1. Liu, Qingsong, et al. "EgoNetCloud: Event-based egocentric dynamic network visualization." Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on. IEEE, 2015.
  2. Gansner, Emden R., Yehuda Koren, and Stephen North. "Graph drawing by stress majorization." International Symposium on Graph Drawing. Springer Berlin Heidelberg, 2004.
  3. Shi, Lei, et al. "1.5 d egocentric dynamic network visualization." IEEE transactions on visualization and computer graphics 21.5 (2015): 624-637.
  4. Zheng, Yixian, et al. "Visual Analytics in Urban Computing: An Overview." IEEE Transactions on Big Data 2.3 (2016): 276-296.