LLMs for Inductive Coding
I prototype playful workflows where language models act as collaborators, critics, and note-taking buddies for social scientists. I like messy transcripts, political debates, and making AI explain itself with codes and graphs humans can trust.
How I use LLMs to make qualitative social science a bit less painful and a bit more fun.
Designing ensemble pipelines where multiple LLMs propose open codes for qualitative data and a moderator model refines them into compact, human-interpretable label sets.
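A minimal sketch of the proposer/moderator idea, under my own assumptions rather than the actual pipeline: several models each suggest open codes for an excerpt, and a moderator model collapses them into a short label set. `call_llm` is a placeholder for whichever chat-completion client you use, and the model names in the usage comment are invented.

```python
import json


def call_llm(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text response."""
    raise NotImplementedError("plug in your LLM client here")


def propose_codes(excerpt: str, proposer_models: list[str]) -> list[str]:
    """Collect open codes from each proposer model (one JSON list per model)."""
    codes: list[str] = []
    for model in proposer_models:
        prompt = (
            "You are doing inductive qualitative coding.\n"
            f"Excerpt:\n{excerpt}\n\n"
            'Return a JSON list of 3-6 short open codes, e.g. ["distrust of elites"].'
        )
        codes.extend(json.loads(call_llm(model, prompt)))
    return codes


def moderate_codes(codes: list[str], moderator_model: str, max_labels: int = 8) -> list[str]:
    """Ask a moderator model to merge near-duplicates into a compact label set."""
    prompt = (
        f"Merge the following open codes into at most {max_labels} distinct, "
        "human-readable labels. Return a JSON list.\n" + json.dumps(sorted(set(codes)))
    )
    return json.loads(call_llm(moderator_model, prompt))


# Usage (illustrative model names):
# raw = propose_codes(excerpt, ["model-a", "model-b", "model-c"])
# labels = moderate_codes(raw, "moderator-model")
```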
Simulating groups of models that discuss and negotiate codes for text, to study patterns of agreement, influence, bias, and “social” behaviour in artificial collectives.
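One way such a negotiation loop could look, as a hedged sketch rather than the actual setup: each "agent" model revises its codes after seeing the group's current proposals, and mean pairwise overlap is tracked round by round as a crude agreement signal. `call_llm` is again a stand-in for a real client.

```python
import json
from itertools import combinations


def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two code sets (1.0 if both are empty)."""
    return len(a & b) / len(a | b) if a | b else 1.0


def negotiate(excerpt: str, agents: list[str], rounds: int = 3) -> dict[str, set[str]]:
    """Run a few revision rounds and report mean pairwise agreement after each."""
    state: dict[str, set[str]] = {agent: set() for agent in agents}
    for r in range(rounds):
        for agent in agents:
            others = {a: sorted(c) for a, c in state.items() if a != agent}
            prompt = (
                f"Excerpt:\n{excerpt}\n\n"
                f"Other coders currently propose: {json.dumps(others)}\n"
                "Return your own revised codes as a JSON list."
            )
            state[agent] = set(json.loads(call_llm(agent, prompt)))
        pairs = list(combinations(agents, 2))
        mean_agreement = sum(jaccard(state[a], state[b]) for a, b in pairs) / len(pairs)
        print(f"round {r + 1}: mean pairwise Jaccard = {mean_agreement:.2f}")
    return state
```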
Building hierarchical concept graphs from parliamentary speech, connecting utterances, codes, and higher-order themes to explore political cleavages and narratives.
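To make the hierarchy concrete, here is a toy sketch of the utterance → code → theme structure as a directed graph in networkx; the example rows are invented, and real input would come from coded parliamentary transcripts.

```python
import networkx as nx

# (speaker, utterance, codes, theme) - toy rows standing in for coded speech data
rows = [
    ("MP A", "We must cap energy prices now.", ["price caps"], "economic intervention"),
    ("MP B", "Subsidies distort the market.", ["market distortion"], "economic liberalism"),
]

G = nx.DiGraph()
for speaker, utterance, codes, theme in rows:
    G.add_node(utterance, kind="utterance", speaker=speaker)
    G.add_node(theme, kind="theme")
    for code in codes:
        G.add_node(code, kind="code")
        G.add_edge(utterance, code)   # utterance is evidence for a code
        G.add_edge(code, theme)       # code rolls up into a higher-order theme

# All utterances that ultimately support a given theme:
theme = "economic intervention"
supporting = [
    n for n in G.nodes
    if G.nodes[n]["kind"] == "utterance" and theme in nx.descendants(G, n)
]
print(supporting)
```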
A few experiments that keep my browser tabs open at night.
Pinned GitHub repo for visualizing topic models as interactive graphs: a clean way to explore themes, connections, and dominant terms without living inside notebooks.
Pinned GitHub repo for brain-signal decoding experiments: model baselines, data prep, and exploratory notebooks aimed at mapping neural signals to interpretable representations.
Synced from my Google Scholar page so I don’t have to keep copy-pasting titles.
For collaborations, talks, or sending me cursed plots.