Jeffrey Heer (@jeffrey_heer) 's Twitter Profile
Jeffrey Heer

@jeffrey_heer

UW Computer Science Professor. Data, visualization & interaction. he/him. @uwdata @uwdub @vega_vis ex-@trifacta

ID: 247943631

Website: http://idl.cs.washington.edu · Joined: 05-02-2011 22:33:03

1.1K Tweets

12.12K Followers

797 Following

Jeffrey Heer (@jeffrey_heer) 's Twitter Profile Photo

I've had fun playing with Observable Framework to deploy dashboards and web apps. Here's an example site integrating Interactive Data Lab Mosaic and @DuckDB for scalable visualization: uwdata.github.io/mosaic-framewo…

Mike Bostock (@mbostock) 's Twitter Profile Photo

If you’re excited about Observable Framework’s new SQL cells, wait till you see our Mosaic integration! The code:
---
sql:
  gaia: ./gaia-sample.parquet
---

# Hello, vgplot

```js echo
vg.plot(vg.raster(vg.from("gaia"), {x: "ra", y: "dec", fill: "density"}))
```
Dominik Moritz (@domoritz) 's Twitter Profile Photo

I've always been excited about the hybrid client/server execution model in MotherDuck but seeing how well the WebAssembly client works with Interactive Data Lab Mosaic is even more exciting. In the video below I am looking at 10m data points in my browser.
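The client-side setup Dominik describes can be sketched as an Observable Framework cell in the same style as the example above. This is a hedged sketch, assuming the `@uwdata/vgplot` API names (`vg.coordinator`, `vg.wasmConnector`, `vg.loadParquet`) as ambient Framework globals; the file `points-10m.parquet` and its columns are hypothetical stand-ins, so verify against the Mosaic documentation rather than treating this as the demo's actual code.

```js echo
// Route Mosaic queries to DuckDB-WASM running in the browser,
// so all scans happen client-side with no server round-trips.
vg.coordinator().databaseConnector(vg.wasmConnector());

// Register a (hypothetical) Parquet file as a table, then render
// a density raster over millions of rows.
await vg.coordinator().exec(vg.loadParquet("points", "points-10m.parquet"));

display(
  vg.plot(vg.raster(vg.from("points"), {x: "x", y: "y", fill: "density"}))
);
```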

Mike Bostock (@mbostock) 's Twitter Profile Photo

Observable Framework 1.3 🆕 integrates Interactive Data Lab’s Mosaic vgplot, which concisely expresses performant coordinated views of millions of data points. observablehq.com/framework/lib/…

Vega, Vega-Lite & Altair (@vega_vis) 's Twitter Profile Photo

The Vega Project is happy to announce the release of version 5.3.0 of the Vega-Altair Python visualization library. This release has been 4 months in the making and includes enhancements, fixes, and documentation improvements from 11 contributors. Highlights in 🧵

Michelle Lam (@michelle123lam) 's Twitter Profile Photo

“Can we get a new text analysis tool?” “No—we have Topic Model at home” Topic Model at home: outputs vague keywords; needs constant parameter fiddling🫠 Is there a better way? We introduce LLooM, a concept induction tool to explore text data in terms of interpretable concepts🧵

Jeffrey Heer (@jeffrey_heer) 's Twitter Profile Photo

Excited to analyze text at the level of *interpretable concepts*, addressing many weaknesses of topic models we’ve found in the past - and with more control & revision by analysts. Also a fun collaboration between Interactive Data Lab and Stanford Human-Computer Interaction Group, led by the impressive Michelle Lam!

Madeleine Grunde-McLaughlin (@madeleinegrunde) 's Twitter Profile Photo

Chaining LLM calls can improve output quality, but navigating the massive space of task decompositions is challenging. Revisiting the established field of crowdsourcing, we distill strategies for effective LLM chain design and identify opportunities for future research. [1/11]

Michelle Lam (@michelle123lam) 's Twitter Profile Photo

Frustrated with topic models? Wish emergent concepts were interpretable, steerable, and able to classify new data? Check out our #CHI2024 talk on Tues 9:45am in 316C (Politics of Datasets)! Or try LLooM, our open-sourced tool :) ✨ github.com/michelle123lam…

Dominik Moritz (@domoritz) 's Twitter Profile Photo

I'm super happy that Hex donated VegaFusion to the Vega org, bringing it together with Altair, which is also now under the Vega org. It's been amazing to collaborate with Jon Mease and I look forward to continuing to do so.

Ken Gu (@kenqgu) 's Twitter Profile Photo

🚀 New Benchmark Alert: ⚙️ BLADE — A benchmark with expert-annotated analyses, evaluating how #LLM agents tackle open-ended, data-driven scientific research questions. Results show top models reach only ~44% F1. 📄 arxiv.org/abs/2408.09667 👨🏻‍💻 github.com/behavioral-dat… 🧵1/9

Interactive Data Lab (@uwdata) 's Twitter Profile Photo

Congratulations to IDL alum Dominik Moritz for winning a VGTC Significant New Researcher award!! A premier honor for early career researchers in visualization!

Interactive Data Lab (@uwdata) 's Twitter Profile Photo

With DracoGPT, Will Wang shows how to extract and model visualization design preferences from generative AI systems — enabling new ways to quantify, evaluate, and efficiently reuse LLM-based chart recommendations. #ieeevis idl.uw.edu/papers/dracogpt