CommunityData:Introduction to CDSC Resources
If you're new to the group, welcome!
This is an introduction to some of the tools we use (and we use many!) in our research work. It may be helpful to look at before diving in and starting your research with the group. You can find additional information on the resources mentioned below, as well as more resources and details, on the Resources page.
To start, here's some common shorthand that members might use. It's a little outdated but has some acronyms, names of things, etc. that might pop up in conversation.
There's a video introduction from June 29, 2020 online here. It's hosted in Mako's cdsc_only video repository, so it requires a username and password; anybody in the group should be able to find these by searching their email for "cdsc_only".
Communication Channels
We communicate on multiple channels!
- We communicate (chat) frequently on IRC
- We use email lists to communicate things relevant to the entire group or subgroup, like upcoming events or circulating papers for feedback: CDSC - Email
- You can also contact specific members directly.
- For weekly meetings and other (video)calls, we typically use Jitsi. There are a lot of us, which can make calls a little hectic, so please keep in mind some Jitsi etiquette.
- We also have a calendar of group-wide events, such as the retreats: CDSC Calendar.
We also have some public facing channels:
- We have a variety of social media accounts, including the Community Data Science blog, Twitter, YouTube, Mastodon, and so on. That page has details about how to get accounts.
Collaboration tools
- This wiki: The CDSC Wiki includes group resources, as well as things like research project pages and course websites. It is highly recommended that you create an account and then reach out to someone else in the group to make you an admin; this will help you avoid having your edits reverted.
- Bibliographic references: We maintain a large shared Zotero library that is really helpful for finding relevant papers and smooths collaboration, since you can see the papers and sources your collaborators have stored. Please review the Zotero etiquette described in the "Adding and Organizing References" and "Tips and Tricks" sections of Zotero before using the shared folder.
- LaTeX authoring: Some of us work on papers and presentations together in Overleaf. See additional info about this below. You can get a free account to join a project or two and use Overleaf's basic functionality. If you expect more sustained use or need more features, you should probably join the CDSC account or another paid account. We don't have a CDSC Overleaf info page (yet), but if you think you need to join the group account, contact Aaron.
- Meeting poll tools: We use When Is Good for a lot of our meeting polls. Here are some tips, tricks, and norms about filling out meeting polls.
- Version control: We also maintain Git repositories with shared resources (both technical and non-technical):
- Git repositories: CommunityData:Git — How to get set up on the git server to create, clone, work on/in shared git repositories we maintain.
- Software projects: CommunityData:Code — List of software projects maintained by the collective.
Computation: Servers, data, and more
Much of our work is pretty computational/quantitative and involves large datasets. We have multiple computing resources and servers.
- Hyak
- Hyak is a supercomputer system hosted at UW that the whole group uses for statistical analysis and data processing. Hyak is necessary if you need large amounts of storage (e.g., tens of terabytes) or large amounts of computational resources (e.g., CPU time, memory, etc.). Servers in Hyak do not have direct access to the Internet, which means Hyak is not useful for collecting data from APIs, etc. Access requires a UW NetID, but one will be sponsored for you. You can learn more at CommunityData:Hyak, which also links to various tutorials and documentation.
- In order to use Hyak, you need to get an account set up. This is documented on CommunityData:Hyak setup.
- Kibo
- Kibo is a server hosted at Northwestern that we use for research; it came online in 2018-2019. Kibo is only a single machine, but it is very powerful and is connected to the Internet. It has several dozen terabytes of space, a large amount of memory, and many CPUs. We use it primarily for (a) data collection from APIs and (b) publication of large datasets like the data from the CDSC COVID-19 Digital Observatory. Access requires an NU NetID, but one will be sponsored for you. More details are on CommunityData:Kibo.
- Nada
- Nada is a server at UW that is used primarily for infrastructure. It runs the blogs, mailing lists, git repositories, and so on. We back up all of nada, and these backups can be very expensive. Before you download or use data on Nada, please read the page CommunityData:Backups (nada), which provides details on what is, and what isn't, backed up from nada.
- Asha
- Asha is a server at UW that is used for storing and analyzing Scratch data. Only people on the IRB protocol for Scratch have access.
When using servers, these pages might be helpful:
- CommunityData:Tmux — You can use tmux (a terminal multiplexer) to keep a persistent session on a server, even when you're not logged in. This is especially helpful when you ssh to a server and run a long job but can't stay logged in the whole time. Check out the tmux git repo or its Wikipedia page for more information.
- CommunityData:Hyak Spark — Spark is a powerful tool for building programs that deal with large datasets. It's great for Wikimedia data dumps; a minimal sketch of what Spark code looks like is below.
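To give a rough flavor of Spark usage, here is a hedged PySpark sketch that reads a tab-separated file (like the wikiq output described in the next section) and counts rows per editor. The file path and the "editor" column name are hypothetical placeholders; see CommunityData:Hyak Spark for how Spark is actually set up and run on Hyak.

```python
# Minimal PySpark sketch: read a tab-separated file and count rows per editor.
# The file path and the "editor" column name are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tsv-example").getOrCreate()

revisions = spark.read.csv(
    "revisions.tsv",   # hypothetical wikiq-style output file
    sep="\t",
    header=True,
    inferSchema=True,
)

# Count edits per editor and show the ten most active accounts.
(revisions
    .groupBy("editor")
    .count()
    .orderBy("count", ascending=False)
    .show(10))

spark.stop()
```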
Wiki Data in particular
Multiple people in the group work on large datasets gathered from Wikipedia, Wikia (Fandom), or other projects running MediaWiki software. We have some specific resources and tools for these kinds of data:
- CommunityData:ORES - Using ORES with Wikipedia data (a small example of querying the ORES API appears after this list).
- CommunityData:Wikia data — How to get and validate wikia dumps.
- CommunityData:Wikiq - Processing MediaWiki XML dumps, outputting parsed dumps as tsv (which can then be processed by the very powerful Spark).
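For a sense of what working with ORES can look like, here is a hedged sketch that queries the public ORES v3 REST API for "damaging" scores on a couple of revisions. The revision IDs are placeholders, and this is just one common way to use ORES, not necessarily the workflow the CommunityData:ORES page describes.

```python
# Hedged sketch: ask the public ORES v3 API for "damaging" scores on a few
# English Wikipedia revisions. The revision IDs below are placeholders.
import requests

revids = [123456789, 987654321]  # hypothetical revision IDs
url = "https://ores.wikimedia.org/v3/scores/enwiki/"
params = {
    "models": "damaging",
    "revids": "|".join(str(r) for r in revids),
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

# The v3 response nests results under the wiki name and then each revision ID.
scores = response.json()["enwiki"]["scores"]
for revid, result in scores.items():
    prediction = result["damaging"]["score"]["prediction"]
    print(revid, "predicted damaging:", prediction)
```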
Creating Documents and Presentations
Planning
You can develop a research plan in whatever way works best, but one thing that may be useful is the outline of a Matsuzaki-style planning document. You can see a detailed outline description here to help guide the planning process. If you scroll to the bottom, you'll see who to contact to get some good examples of planning documents.
Some of the readings in this course taught by Aaron to PhD students might also be helpful in developing a research plan: Practice of Scholarship (SP19).
Paper building
We typically write papers in LaTeX. One option is to use the web-based Overleaf. Another option, using the CDSC TeX templates, is detailed here. These come with some assumptions about your workflow, which you can learn about here: CommunityData:Build papers.
If you're creating graphs and tables or formatting numbers in R that you want to put into a TeX document, you should look at the knitr package.
Some more specific things that might crop up in building the LaTeX document:
- CommunityData:Embedding fonts in PDFs — ggplot2 creates PDFs with fonts that are not embedded, which, in turn, causes the ACM to bounce our papers back. This page describes how to fix it.
Building presentation slides
Below are some options for creating presentation slides (though feel free to use whatever you want and are most comfortable with):
- CommunityData:Beamer — Beamer is a LaTeX document class for creating presentation slides. This is a link to installing/using Mako's beamer templates.
- Again, like the CDSC TeX templates, these Beamer templates also come with some assumptions about your workflow, which you can learn about here (again): CommunityData:Build papers.
- CommunityData:reveal.js — Using RMarkdown to create reveal.js HTML presentations
A few additional resources
Technical
- CommunityData:Exporting from Python to R (a minimal sketch of one way to do this follows this list)
- CommunityData:Northwestern VPN - How to use the Northwestern VPN
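As one hedged illustration of the Python-to-R handoff (Feather is just one common interchange format, not necessarily what the CommunityData:Exporting from Python to R page recommends), you could write a data frame from Python and then read it from R with the arrow package:

```python
# Hedged sketch: write a pandas DataFrame to a Feather file that R can read
# with arrow::read_feather(). Requires the pyarrow package alongside pandas.
import pandas as pd

df = pd.DataFrame({
    "wiki": ["enwiki", "eswiki", "dewiki"],  # toy placeholder data
    "edits": [120, 45, 87],
})

df.to_feather("example.feather")

# Then, from R:
#   library(arrow)
#   df <- read_feather("example.feather")
```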
Non-technical
- CommunityData:Advice on writing a background section to an academic paper
- See some past and upcoming lab retreats [here].