Context visualization research batch

Date: 2026-04-11
Collector: Meridian (Hermes)
Method: local wiki grounding plus direct terminal fetches against arXiv, ACL Anthology, DOI/Crossref, product docs, and project documentation when web_search / web_extract were unreliable.

Scope

  • Better ways to show users what context is being assembled, from where, and under which trust conditions.
  • Honest UI for source provenance, trust state, and influence.
  • What can and cannot be shown about “what the model is attending to now”.

Provenance and trust visualization anchors

  1. PROV-DM (W3C)

    • URL: https://www.w3.org/TR/prov-dm/
    • Notes: provenance should model entities, activities, agents, derivation, attribution, revision, and invalidation as first-class relations.
  2. in-toto (2019)

  3. Provenance and Annotation for Visual Exploration Systems (2006)

  4. Provenance Semirings (2007)

  5. ModelTracker (2015)

  6. The What-If Tool (2019)

  7. From Common Operating Picture to Situational Awareness (2014)

  8. Design of a role-based trust-management framework (2002/2005 record)

  9. ClaimChain (2017)

  10. W3C Verifiable Credentials Data Model v2.0

  11. Evaluation of Filesystem Provenance Visualization Tools (2013)

  12. AVOCADO: Visualization of Workflow-Derived Data Provenance for Reproducible Biomedical Research (2016)

  13. Characterizing Provenance in Visualization and Data Analysis (2016)

  14. Uncertainty as a Form of Transparency (2021)

  15. Augmenting Web Pages and Search Results to Support Credibility Assessment (2011)

Attention and attribution anchors

  1. Attention is not Explanation (2019)

  2. Attention is not not Explanation (2019)

  3. What Does BERT Look at? An Analysis of BERT’s Attention (2019)

  4. Quantifying Attention Flow in Transformers (2020)

    • URL: https://arxiv.org/abs/2005.00928
    • Notes: attention rollout / flow are better input-level summaries than raw attention alone, but still derived heuristics.
  5. A Multiscale Visualization of Attention in the Transformer Model / BertViz (2019)

  6. Ecco

  7. LIT (Learning Interpretability Tool)

  8. MIRAGE: Model Internals-based Answer Attribution for Trustworthy RAG (2024)

    • URL: https://arxiv.org/abs/2406.13663
    • Notes: answer-to-document attribution using model internals; useful for “which retrieved source supported this answer span?”
  9. VISA: Retrieval Augmented Generation with Visual Source Attribution (2025)

  10. Source Attribution in Retrieval-Augmented Generation (2025)

  11. TokenShapley (2025)

  12. Lost in the Middle (2023)
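For the rollout note under item 4 (Quantifying Attention Flow), a minimal NumPy sketch of attention rollout in the style of Abnar & Zuidema (2020): average heads, add the identity for the residual connection, row-normalize, and multiply per-layer matrices. This is a derived heuristic summary, as the note says, not a ground-truth influence measure.

```python
import numpy as np

def attention_rollout(attentions: list[np.ndarray]) -> np.ndarray:
    """Attention rollout: attentions is a list of (heads, seq, seq)
    arrays, one per layer, ordered from first to last layer.
    Returns a (seq, seq) matrix where entry [i, j] estimates the
    attention flow from input token j to output position i."""
    rollout = None
    for layer_att in attentions:
        a = layer_att.mean(axis=0)             # average over heads
        a = a + np.eye(a.shape[0])             # account for residual stream
        a = a / a.sum(axis=-1, keepdims=True)  # row-normalize
        rollout = a if rollout is None else a @ rollout
    return rollout
```

Because each layer matrix is row-stochastic, the rollout stays row-stochastic, so rows can be read as input-token saliency distributions per output position; that convenience is exactly why it is tempting to over-read it as "what the model is attending to."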

Context-assembly UX anchors

  1. Sourcegraph Cody Context

  2. NotebookLM Sources panel

  3. LangGraph Studio

  4. LangSmith observability

  5. Arize Phoenix

  6. TruLens

  7. Whyline

  8. Jigsaw

  9. Analyst’s Workspace

Provisional synthesis

  • The UI should separate three layers that products often blur:
    1. exact assembly provenance
    2. trust / verification / uncertainty state
    3. influence or salience estimates
  • A source’s selection (retrieval) score is not the same thing as its actual influence on the answer.
  • “What the model is attending to now” should usually be framed as a selected-token diagnostic for open-weight models, or as a proxy estimate for API-only models.
  • One canonical context/provenance substrate should back a shared common operating picture and role-specific situational panes.
  • Trust should be displayed as evidence state vectors and receipts, not as a scalar reputation score.
  • The smallest serious UI should show Included / Candidate / Dropped source states, “why” chips explaining selection, answer-to-source traversal, and a with/without-source comparison affordance.