Wikiq

Wikiq is our tool for building tabular datasets from raw MediaWiki edit data. MediaWiki outputs XML dump files, but these files are not easy to work with, particularly because they contain the full text of every revision to every page. This makes it quite computationally expensive to process large wikis and leads to other technical problems. Wikiq efficiently processes MediaWiki XML dumps to produce much smaller datasets that contain only the variables that will be useful in our research. Nate is working this summer on improvements to wikiq. Let him know if you have any requests!

New feature for Wikiq 2019
We have recently added a general-purpose pattern matching (regular expressions) feature to wikiq. The design doc can be seen here, and more information is given in the Command Line Arguments and Codebook sections below.

See Also: CommunityData:Dataset_And_Tools_Release_2018

Setting up Wikiq
Wikiq is a Python 3 program with several dependencies. To run it on Hyak, for now, you will need to install the dependencies yourself (for example, with pip).
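As a hedged sketch only (this assumes the wikiq repository ships a requirements.txt listing its dependencies, which you should verify), a user-level install would look something like:

 pip3 install --user -r requirements.txt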

Command Line Arguments
Some important command line flags control the behavior of wikiq and change which variables are output.

This option url-encodes some text columns to safely handle text which might contain Unicode characters that conflict with other parsing systems. You will probably want to url-decode these columns when you read them.
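A minimal sketch of reading such output (assuming tab-separated wikiq output; the filename and the list of url-encoded columns below are assumptions you should adjust):

 import pandas as pd
 from urllib.parse import unquote

 # read one wikiq output file (tab-separated, per the assumption above)
 edits = pd.read_csv("enwiki_part0.tsv", sep="\t")

 # decode the text columns that were url-encoded when wikiq ran with the url-encode option
 for col in ["editor", "title"]:
     if col in edits.columns:
         edits[col] = edits[col].map(lambda v: unquote(v) if isinstance(v, str) else v)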

Persistence is somewhat costly and slow to compute. You can specify the segment or sequence method of calculating persistence. Segment is the default and recommended method, but it is somewhat slower than sequence. Segment persistence is a faster, but marginally less accurate, version of the algorithm presented in this [https://arxiv.org/abs/1703.08244 paper].
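A hedged usage sketch, assuming the persistence option is spelled --persistence and takes the method name as an optional value (check wikiq's help output to confirm; the paths are placeholders):

 wikiq --persistence segment -o <output-dir> <dump-file>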

This can be useful for addressing issues with text persistence measures.



ID of a namespace to include. Can be specified more than once. For some wikis (e.g., large Wikipedias) computing persistence for the project namespace can be extremely slow.
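For example, the sample script in the Samples section below passes -n 0 -n 1 to restrict processing to the article (0) and talk (1) namespaces; with placeholder paths, such an invocation looks like:

 wikiq -n 0 -n 1 -o <output-dir> <dump-file>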

Pattern matching arguments
Users can now search for patterns in edit revision text, with a list of matches for each edit output in columns (one column for each pattern given via the pattern arguments below). Users may provide multiple revision patterns and accompanying labels. The patterns and the labels must be provided in the same order for wikiq to correctly label the output columns.





In addition to revision text, wikiq also supports pattern matching against revision summaries (comments). There are corresponding command line arguments for this; an example invocation is shown below.





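For example, the sample script in the Samples section below includes an invocation that uses the comment pattern arguments to record, for each edit, any mention of misinformation or disinformation in the edit comment (output and dump paths are shown as placeholders here):

 wikiq -u -CP '.*(misinf|disinf).*' -CPl comment -n 0 -n 1 -o <output-dir> <dump-file>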
A note on named capture groups in pattern matching
The regular expressions in the revision and comment pattern arguments may include one or more named capture groups. If the `pattern` matches, it will then also capture values for each named capture group. If a `pattern` has one or more named capture groups, wikiq will output a new column for each named capture group to store these values. Since a `pattern` can match a revision more than once, it is possible for more than one value to go into such a column (whether or not named capture groups are used).

For cases in which a revision or comment pattern has more than one named capture group and part of the searched string matches more than one capture group, only the first matching capture group will record the match, because matching consumes characters in Python. For example, if two capture groups could each match the substring '500' in the searched text, the capture group listed first consumes '500' when it matches, so only that group's column will contain the match while the other group's column will not. As a result, one should consider the order of capture groups or create separate regular expression and label pairs.
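To illustrate this consuming behavior in plain Python (the pattern and test string below are hypothetical examples, not anything built into wikiq):

 import re

 # two named capture groups joined by alternation; whichever alternative matches
 # first consumes the characters, so the other group gets no value at that position
 pattern = re.compile(r"(?P<exact>500)|(?P<number>\d+)")

 for m in pattern.finditer("error 500 occurred 3 times"):
     print(m.group("exact"), m.group("number"))

 # prints:
 # 500 None   ('500' was consumed by the first group, so 'number' is empty here)
 # None 3

If the group order were reversed, only the other column would record '500'; splitting the groups into separate pattern/label pairs avoids the conflict entirely.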

Codebook
The current version of wikiq provides one row for each edit (unless --collapse-user is passed, in which case each row corresponds to a run of consecutive edits by the same editor), with columns for the following variables: anon, articleid, collapsed_revs, date_time, deleted, editor, editor_id, minor, namespace, revert, reverteds, revid, sha1, text_chars, title, token_revs, tokens_added, tokens_removed, tokens_window

The meaning of the variables is:















For example, User:mkross (see https://www.mediawiki.org/wiki/Manual:Namespace#Built-in_namespaces)













The following variables refer to persistent word revisions (PWR) and are only provided when wikiq is called with the persistence argument:

This is the key PWR variable.







The following variables are output when wikiq is called with the  argument:



The following variables are output when wikiq is called with the pattern matching arguments:

A list of the matches of the pattern given for this label, found in that edit's revision text or comment (whichever was specified). If none are found, None.

If none found, None.

Bugs

 * Not all anonymous edits get flagged as anon. Checking whether the editor name is an IP address seems to work as a workaround (not confirmed). (Note: I've never seen a bug with this and I've done a lot of work with anon edits. -kc)

Samples
Kaylea likes to use a script-generating script for wikiq.

Step 1: Create a script-generating script like this:

#!/usr/bin/env python3
# this script makes wikiq scripts for a given dump path

from os import path
import os
import stat
import glob

dumpHome = '/gscratch/comdata/raw_data/'
outPath = '/gscratch/comdata/output/'
langDump = dumpHome + 'enwiki_20230401' # customize if needed

outPath = outPath + "wikiq_enwiki_name_this_something_useful/" # customize output path

archives = glob.glob(langDump + "/*pages-meta-hist*.7z") # makes a list of all the files, about 800 of them

if not os.path.exists(outPath): # makes the dir for storing the output
    os.makedirs(outPath)

with open('run_wikiq.sh', 'w') as fh: # creates a script
    for item in archives:
        # select options to customize the below as needed
        # as you see above, wikiq has a ton of options.
        # note that -o requires the next field to be outPath; if more cmdline args are added, place them before the -o.
        # if you wanted to regex match misinf or disinf in the edit comment field, this is how you'd do it:
        # fh.write(f"wikiq -u -CP '.*(misinf|disinf).*' -CPl comment -n 0 -n 1 -o {outPath} {item}\n")
        # a more normal wikiq invocation is this:
        fh.write(f"wikiq --collapse-user -u -o {outPath} {item}\n")

Step 2: use the split command to turn your giant run_wikiq.sh script into a bunch of smaller files, automatically named things like xaa, xab, xac. For example, to make 40 lines per smaller script, do: split -l 40 run_wikiq.sh. After running split, if you type ls, you'll see the autonamed files, each containing part of your run_wikiq.sh script.

Step 3: you can now run the subchunks of your script, e.g. use tmux to log in to the same node 10-15 times, running sh xaa in the first one, sh xab in the second one, and so on. This is more hands-on and not really a proper batch approach, but it lets you sail through certain kinds of disruptions while still getting your output quickly.