This is a monthly meeting hosted by the Wikimedia Foundation to demo things like new wiki gadgets, 10% time projects, and works in progress. Presentations are given by individuals or teams from the Community, Reading, Editing, Discovery, Infrastructure, and Technology departments.
1. David Chan: Multi-user editing in VisualEditor
Wikipedia's VisualEditor has always been designed with multi-user collaborative editing in mind.
While the big question of how to implement multi-user editing on MediaWiki is still open, we can demonstrate
the core technology with just a few changes to our standalone editor to make an Etherpad-like product.
2. Ed Sanders: VisualEditor source mode
VisualEditor has become the preferred editor for many Wikipedia editors; however, some tasks still occasionally require editing the wikitext directly. Currently you can switch
between VE and the wikitext editor mid-edit, but this results in a page refresh and a jarring
change to a six-year-old UI that is missing many of the tools VE users are used to
(automatic citations, the link inspector, media insertion, undo/redo, etc.).
We aim to fix this by building a source mode editor into VE, giving the user a consistent UI,
faster switching, and a suite of powerful editing tools.
3. Erik Bernhardson: Phan integration for MediaWiki core
Static analysis of PHP code. Phan can detect mistyped class and method names, missing use statements, and more.
4. Stephen Niedzielski: Automated unit testing of Android views across locale, accessibility, theme, screen and other configurations
The standard benefits of unit testing (detect regressions automatically, identify corner case bugs, fearless refactoring, self-documentation, and more modular designs) but for Android views. This implementation is also noteworthy for how views are tested across device configurations.
5. Erik Bernhardson: Search Click Models
Estimates the relevance of a (query, article) pair based on user click behaviour. This makes it possible to generate large amounts of
relevance data for tuning search or training machine learning models from implicit user feedback, rather than the
explicit user feedback model of Discernatron. Compared to Discernatron this approach has some significant downsides, especially
with respect to measuring recall, but a large upside in the amount of data that can be labeled and then evaluated.
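To illustrate the general idea (not the specific model used in this work), here is a minimal sketch of a position-based click model in Python: clicks are attributed to (query, article) pairs, discounted by a hypothetical probability that the user examined each result position. The session data, article names, and examination probabilities are all illustrative assumptions; production click models are considerably more sophisticated.

```python
from collections import defaultdict

# Hypothetical examination probabilities by result position (rank 0 = top).
# Under the examination hypothesis, P(click) = P(examined) * relevance,
# so relevance can be estimated as clicks / expected examinations.
EXAM_PROB = [1.0, 0.7, 0.5, 0.35, 0.25]

def estimate_relevance(sessions):
    """Estimate relevance of (query, article) pairs from click logs.

    sessions: iterable of (query, results, clicked), where results is the
    ranked list of article titles shown and clicked is the set of titles
    the user clicked in that session.
    """
    exams = defaultdict(float)  # expected examinations per pair
    clicks = defaultdict(int)   # observed clicks per pair
    for query, results, clicked in sessions:
        for rank, article in enumerate(results[:len(EXAM_PROB)]):
            exams[(query, article)] += EXAM_PROB[rank]
            if article in clicked:
                clicks[(query, article)] += 1
    # Relevance estimate, clipped to [0, 1].
    return {pair: min(1.0, clicks[pair] / e) for pair, e in exams.items()}

# Illustrative click log: three sessions for the same query and ranking.
sessions = [
    ("cats", ["Cat", "Felidae", "Lion"], {"Felidae"}),
    ("cats", ["Cat", "Felidae", "Lion"], {"Cat"}),
    ("cats", ["Cat", "Felidae", "Lion"], {"Felidae"}),
]
relevance = estimate_relevance(sessions)
```

Note how position bias is handled: "Felidae" at rank 1 gets a higher relevance estimate than "Cat" at rank 0 despite being shown lower, because its clicks are divided by a smaller expected examination count.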