Jul 28-29, 2016
9:00 am - 4:00 pm
Instructors: Kim Pham, Leanne Trimble, Greg Wilson, Nich Worby, Thomas Guignard
Helpers: Nancy Fong, Leslie Barnes, Bella Ban, Sean Zhao, Stephanie Pegg, Andy Wagner
Software Carpentry's mission is to help scientists and engineers get more research done in less time and with less pain by teaching them basic lab skills for scientific computing. Library Carpentry offers comparable training to help librarians, archivists, museum professionals, and other information professionals gain computing skills relevant to their profession. Participants will be encouraged to help one another and to apply what they have learned to their own problem sets.
Who: The course is aimed at librarians, archivists, museum professionals, and other information professionals. You don't need to have any previous knowledge of the tools that will be presented at the workshop. However, we do recommend that attendees have some exposure to, or familiarity with, the following concepts: programming logic, boolean expressions, and XML.
Where: University of Toronto Robarts Library - 4th Floor - Blackburn Room. Get directions with OpenStreetMap or Google Maps.
Requirements: Participants must bring a laptop with a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.) on which they have administrative privileges. They should have a few specific software packages installed (listed below). They are also required to abide by Software Carpentry's Code of Conduct.
Accessibility: We are committed to making this workshop accessible to everybody.
Materials will be provided in advance of the workshop, and large-print handouts are available if needed by notifying the organizers in advance. If there is anything we can do to make learning easier for you (e.g. sign-language interpreters, lactation facilities), please get in touch and we will attempt to provide it.
Contact: Please email kim.pham@utoronto.ca for more information.
Day 1 (July 28)
09:00 | Welcome & Introduction
09:15 | Regular Expressions (Regex)
10:15 | Break
10:30 | XQuery / XPath
12:00 | Lunch break
13:00 | Introduction to OpenRefine
14:30 | Coffee
16:00 | Wrap-up

Day 2 (July 29)
09:00 | Introduction to Python
10:15 | Break
10:30 | Python and APIs
12:00 | Lunch break
13:00 | Introduction to Web scraping
14:30 | Coffee
16:00 | Wrap-up
Etherpad: http://pad.software-carpentry.org/2016-07-2829-librarycarpentry.
We will use this Etherpad for chatting, taking notes, and sharing URLs and bits of code.
To participate in this Library Carpentry workshop, you will need access to the software described below. In addition, you will need an up-to-date web browser.
We maintain a list of common issues that occur during installation on the Configuration Problems and Solutions wiki page, which may be a useful reference.
Bash is a commonly-used shell that gives you the power to do simple tasks more quickly.
Install Git for Windows by downloading and running the installer; this will provide you with both Git and Bash in the Git Bash program. Then open a command prompt (click on the Start menu, type cmd, and press [Enter]). In the command prompt, type setx HOME "%USERPROFILE%" and press [Enter]; you should see the message SUCCESS: Specified value was saved. Finally, quit the command prompt by typing exit and then pressing [Enter].
The default shell in all versions of Mac OS X is Bash, so no need to install anything. You access Bash from the Terminal (found in /Applications/Utilities). See the Git installation video tutorial for an example of how to open the Terminal. You may want to keep Terminal in your dock for this workshop.
The default shell is usually Bash, but if your machine is set up differently you can run it by opening a terminal and typing bash. There is no need to install anything.
No installation required -- we will be using a browser-based tool during the workshop.
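If you would like a preview of what a regular expression looks like before the workshop, below is a minimal sketch using Python's built-in re module. The pattern and test strings are invented for illustration; in class we will practise in the browser-based tool instead.

import re

# A made-up pattern, for illustration only: one or two capital letters followed by digits
pattern = re.compile(r"^[A-Z]{1,2}[0-9]+")

for text in ["QA76", "Z699", "not a call number"]:
    if pattern.match(text):
        print(text, "matches")
    else:
        print(text, "does not match")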
Install BaseX. You may need to install a JDK to get it to run. Once you've installed the program, type the command basexgui in your command prompt or terminal to start it.
Download the Windows Installer.
A Linux distribution of BaseX is available here.
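For a taste of the kind of path expressions covered in the XQuery / XPath lesson, here is a minimal sketch using Python's built-in xml.etree.ElementTree module. The XML snippet and element names are invented for illustration; during the workshop we will run queries inside BaseX itself rather than Python.

import xml.etree.ElementTree as ET

# An invented XML snippet, just to illustrate XPath syntax
xml = """<records>
  <record><title>Data Cleaning 101</title><year>2015</year></record>
  <record><title>XQuery for Librarians</title><year>2016</year></record>
</records>"""

root = ET.fromstring(xml)

# ".//title" selects every <title> element anywhere below the root
for title in root.findall(".//title"):
    print(title.text)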
OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data. Please look at the Installation Instructions from the OpenRefine project for more details on how to run OpenRefine on your machine. The instructions below are adapted from this link. We will be using version 2.6-rc2 during the workshop. A Java Runtime Environment (JRE) is required to run OpenRefine. If the installation procedure below fails, make sure you have a working JRE installed on your computer.
To stop OpenRefine on Windows, press Ctrl-C in the command window that is running OpenRefine. Wait until there's a message that says the shutdown is complete. That window might close automatically, or you can close it yourself; if you get asked "Terminate all batch processes? Y/N", just press Y.
On Mac OS X, the first time you launch OpenRefine you may need to hold down the ctrl (Control) key and click on the app icon to open it.
On Linux, start OpenRefine by running ./refine from the directory where you extracted it, and stop it by pressing Ctrl-C in the shell that is running OpenRefine.
OpenRefine is operated from within a web browser (such as Chrome or Firefox). If your browser doesn't open automatically when you start OpenRefine (see above), navigate to http://127.0.0.1:3333/ in your favourite browser to open the OpenRefine window. Please note that even though you use a browser to operate OpenRefine, it is still run locally on your machine, and not on the web.
When you're writing code, it's nice to have a text editor that is optimized for writing code, with features like automatic color-coding of key words. The default text editor on Mac OS X and Linux is usually set to Vim, which is not famous for being intuitive. If you accidentally find yourself stuck in it, try typing the escape key, followed by :q! (colon, lower-case 'q', exclamation mark), then hitting Return to return to the shell.
nano is a basic editor and the default that instructors use in the workshop. To install it, download the Software Carpentry Windows installer and double click on the file to run it. This installer requires an active internet connection.
Other editors that you can use are Notepad++ and Sublime Text. Be aware that you must add the editor's installation directory to your system path. Please ask your instructor to help you do this.
nano is a basic editor and the default that instructors use in the workshop. See the Git installation video tutorial for an example on how to open nano. It should be pre-installed.
Other editors that you can use are Text Wrangler and Sublime Text.
nano is a basic editor and the default that instructors use in the workshop. It should be pre-installed.
Other editors that you can use are Gedit, Kate, and Sublime Text.
Python is a popular language for scientific computing, and great for general-purpose programming as well. Installing all of its scientific packages individually can be a bit difficult, so we recommend Anaconda, an all-in-one installer.
Regardless of how you choose to install it, please make sure you install Python version 3.x (e.g., 3.4 is fine).
We will teach Python using the IPython notebook, a programming environment that runs in a web browser. For this to work you will need a reasonably up-to-date browser. The current versions of the Chrome, Safari and Firefox browsers are all supported (some older browsers, including Internet Explorer version 9 and below, are not).
Open a terminal window, type bash Anaconda3- and then press tab. The name of the file you just downloaded should appear; press enter to run the installer. Type yes and press enter to approve the license. Press enter to approve the default location for the files. Type yes and press enter to prepend Anaconda to your PATH (this makes the Anaconda distribution the default Python).
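Once the installation finishes, a quick sanity check (a minimal sketch, assuming a standard Anaconda 3.x install; the requests package ships with Anaconda but is not strictly required for the workshop) is to start Python, or a notebook cell, and run:

import sys
print(sys.version)   # should report a version starting with "3."

import requests      # included with Anaconda; commonly used when talking to web APIs
print("Python looks ready for the workshop")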
In the first part of the Web scraping lesson, we will use a Chrome browser extension to get started with web scraping. Please ensure you have a working copy of the Chrome browser, as well as the Scraper extension. We will also use OpenRefine to clean up extracted data.
In the second part of the lesson, we will use the Scrapy framework to build a web scraper in Python. This requires a working installation of Python; please refer to the section on installing Python for details.
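To give you an idea of what the Scrapy part of the lesson builds towards, here is a minimal sketch of a Scrapy spider. The spider name, the practice site (quotes.toscrape.com, a public scraping sandbox), and the CSS selectors are placeholders for illustration, not necessarily the exact example used in class.

import scrapy

class QuotesSpider(scrapy.Spider):
    # A placeholder spider: it visits one page and yields one item per quote it finds
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").extract_first(),
                "author": quote.css("small.author::text").extract_first(),
            }

Once Scrapy is installed (see below), you could save this as quotes_spider.py and run it with scrapy runspider quotes_spider.py -o quotes.json.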
Make sure you have a working copy of the Google Chrome browser on your machine. Install the Scraper extension.
We will also need a command-line tool to run Scrapy. See the section on installing Git Bash if you haven't done so already. If you already have another shell installed, such as Cygwin, that should be fine too.
If you installed Python using Anaconda as recommended above, do the following:
conda install -c scrapinghub scrapy
If you have another install of Python, you should be able to use the pip package manager to install Scrapy:
pip install Scrapy
If you run into issues, refer to the official Scrapy install guide or get in touch with Thomas (@timtomch).
Make sure you have a working copy of the Google Chrome browser on your machine. Install the Scraper extension.
If you installed Python using Anaconda as recommended above, do the following:
conda install scrapy
If you have another install of Python, you should be able to use the pip package manager to install Scrapy:
pip install Scrapy
If you run into issues, refer to the official Scrapy install guide or get in touch with Thomas (@timtomch).
Make sure you have a working copy of the Google Chrome browser on your machine. Install the Scraper extension.
If you installed Python using Anaconda as recommended above, do the following:
conda install scrapy
If you have another install of Python, you should be able to use the pip package manager to install Scrapy:
pip install Scrapy
If you run into issues, refer to the official Scrapy install guide or get in touch with Thomas (@timtomch).