For verifying claims, she turned to Anchor, a fact-tracking tool that cross-checked statements against primary sources and flagged studies that relied on small samples or self-reported data. Anchor chimed a soft alert when it found a paper that had been retracted—something Mai might have missed in a hurried skim. It linked to the retraction notice and summarized the reason in one line.
Mai still needed to test a hypothesis of her own: did people retain information better when AI tools highlighted structure? For that she built a small experiment with Loom—an easy survey-and-task builder. Loom randomized participants into two groups, recorded time-on-task, and produced clean CSV exports for analysis.
Before submission, Mai ran her references through Beacon, a tool that scanned for missing DOIs, inconsistent author names, and inconsistent journal title formatting. Beacon found three missing DOIs and a misspelled coauthor name—small fixes that made the bibliography sing.
Weeks later, at the small symposium where she presented her findings, an older researcher asked how she’d managed to handle so many sources so fast. Mai smiled and named the tools—Prism, Scribe, Anchor, Loom, Argus, Verity, Beacon—but also said something more important: "They helped, but I was always the one deciding what mattered."
The end.