CommunityData:Introduction to CDSC Resources

If you're new to the group, welcome!

This is an introduction to the various technical tools we use in our research work (and we use many). It may be helpful to read before diving in and starting your research with the group. You can find any of the resources mentioned below on the Resources page, which generally lists more resources than this introduction covers.

To start, here's some common shorthand that members might use.

Communication Channels
We communicate on multiple channels.


 * You can contact specific members directly.
 * We chat much more frequently on IRC.
 * We use email lists to communicate things relevant to the entire group or a subgroup, like upcoming events or circulating papers for feedback: CDSC - Email
 * For weekly meetings and other video calls, we use Jitsi. There are a lot of us, which can make calls a little hectic, so please keep in mind some Jitsi etiquette.
 * We also have a calendar of group-wide events, such as the retreats: CDSC Calendar

Shared Resources

 * We maintain a large shared Zotero library that is really helpful for finding relevant papers and smooths the process of collaboration, since you can see the papers and sources stored by your collaborators. Please review the Zotero etiquette described in the "Adding and Organizing References" and "Tips and Tricks" sections of the Zotero page before using the shared folder.
 * We also have a Git repository with some shared resources (both technical and non-technical) on it:
 * CommunityData:Git — Getting set up on the git server
 * CommunityData:Code — List of software projects maintained by the collective.

Servers and Data Stuff
Much of our work is quantitative and involves large datasets. We have multiple computing resources and servers. For any given project, you may not need all of them.


 * Hyak: Hyak is a supercomputer system hosted at UW that the whole group uses for statistical analysis and data processing. Hyak is necessary if you need large amounts of storage (e.g., tens of terabytes) or large amounts of computational resources (e.g., CPU time, memory, etc.). Servers in Hyak do not have direct access to the Internet, which means that Hyak is not useful for collecting data from APIs, etc. Access requires a UW NetID, but one will be sponsored for you. You can learn more at CommunityData:Hyak, which links to various tutorials and documentation as well.


 * In order to use Hyak, you need to get an account set up. This is documented at CommunityData:Hyak setup.


 * Kibo: Kibo is a research server hosted at Northwestern that came online in 2018-2019. Kibo is only a single machine, but it is very powerful and is connected to the Internet. It has several dozen terabytes of space, a large amount of memory, and many CPUs. We use it primarily for (a) collecting data from APIs and (b) publishing large datasets like the data from the CDSC COVID-19 Digital Observatory. Access requires an NU NetID, but one will be sponsored for you. More details are on CommunityData:Kibo.


 * Nada: Nada is a server at UW that is used primarily for infrastructure. It runs the blogs, mailing lists, git repositories, and so on. We back up all of Nada, and these backups can be very expensive. Before you download or use data on Nada, please read the page CommunityData:Backups (nada), which provides details on what is, and what isn't, backed up from Nada.


 * Asha: Asha is a server at UW that is used for storing and analyzing Scratch data. Only people on the IRB protocol for Scratch have access.

When using servers, these pages might be helpful:
 * CommunityData:Tmux — You can use tmux (a terminal multiplexer) to keep a persistent session on a server even after you disconnect. This is especially helpful when you SSH to a server and run a long job but can't stay logged in the whole time. Check out the tmux git repo or its Wikipedia page for more information.
 * CommunityData:Hyak Spark — Spark is a powerful framework for building programs that process large datasets.

Wiki Data

 * CommunityData:ORES — Using ORES with Wikipedia data
 * CommunityData:Wikia data — Documents how to get and validate Wikia dumps.
 * CommunityData:Wikiq — Wikiq is a handy tool we use to process Wikipedia XML dumps, outputting them as TSV files (which can then be easily processed with the very powerful Spark).
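The CommunityData:Wikiq page has the real details; as a rough illustration of working with TSV output like wikiq's, here is a minimal Python sketch. The column names ("title", "editor", "date_time") are assumptions for illustration only, not the actual wikiq schema, so check the header row of your own output:

```python
# Minimal sketch of parsing a wikiq-style TSV file in Python.
# NOTE: the column names below are illustrative assumptions,
# not the actual wikiq output schema.
import csv
import io

# Stand-in for a real wikiq output file on disk.
sample_tsv = (
    "title\teditor\tdate_time\n"
    "Main Page\tAlice\t2020-01-01T00:00:00\n"
)

# csv.DictReader with delimiter="\t" handles tab-separated values.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
for row in rows:
    print(row["title"], row["editor"])
```

For real wikiq output you would open the file with `open(path, newline="")` instead of wrapping a string; for very large dumps, this is where Spark becomes useful.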

Planning
You can develop a research plan in whatever way works best for you, but one thing that may be useful is the outline of a Matsuzaki-style planning document. You can see a detailed outline description here to help guide the planning process. If you scroll to the bottom, you'll see who to contact to get some good examples of planning documents.

Some of the readings in this course, taught by Aaron to PhD students, might also be helpful in developing a research plan: Practice of Scholarship (SP19).

Building papers
We typically write papers in LaTeX. One option is the web-based Overleaf. Another option, using the CDSC TeX templates, is detailed here. These come with some assumptions about your workflow, which you can learn about here: CommunityData:Build papers.

If you're creating graphs and tables or formatting numbers in R that you want to put into a TeX document, you should look at the knitr package.

Some more specific things that might crop up in building the LaTeX document:
 * CommunityData:Embedding fonts in PDFs — LaTeX sometimes creates PDFs with fonts that are not embedded, which, in turn, causes the ACM to bounce our papers back. This page describes how to fix it.

Building presentation slides
Below are some options for creating presentation slides (though feel free to use whatever you want and are most comfortable with):
 * CommunityData:Beamer — Beamer is a LaTeX document class for creating presentation slides. This is a link to installing/using Mako's beamer templates.
 * Again, like the CDSC TeX templates, these Beamer templates come with some assumptions about your workflow, which are described at CommunityData:Build papers.


 * CommunityData:reveal.js — Using RMarkdown to create reveal.js HTML presentations

Technical

 * CommunityData:Exporting from Python to R
 * CommunityData:Northwestern VPN - How to use the Northwestern VPN
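The CommunityData:Exporting from Python to R page covers this properly; as a minimal sketch of one portable option, you can write a plain CSV from Python's standard library and load it in R with read.csv. The filename and column names here are made up for illustration:

```python
# Minimal sketch: write a CSV from Python that R can load with read.csv().
# The filename and column names are illustrative, not from the wiki page.
import csv
import os
import tempfile

rows = [{"user": "alice", "edits": 12}, {"user": "bob", "edits": 7}]

path = os.path.join(tempfile.gettempdir(), "edits.csv")
with open(path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["user", "edits"])
    writer.writeheader()   # read.csv() expects a header row by default
    writer.writerows(rows)

# Then, in R:  df <- read.csv(path)  # "edits" is parsed as integers
```

CSV is lossy for types like dates and factors, so for larger or more complex data the formats recommended on the wiki page are worth checking.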

Non-technical

 * CommunityData:Advice on writing a background section to an academic paper
 * See some past and upcoming lab retreats [here].