Let’s Scrape a Blog! (Part 1)

One thing I’ve been considering lately is what kind of intelligence you could gain from scraping a blog and analyzing the data. To test this out, this is the first in a series of posts where I’ll scrape a blog and try to squeeze out every last bit of useful or interesting intelligence I possibly can.

I’ll start off simple, but down the road I plan to use more advanced machine learning and natural language processing techniques to see what additional information these tools can uncover. I’m keeping all my analysis in a Jupyter notebook you can find on Github here.

The target site I’ll be using for my analysis is my all-time favourite blog: Marginal Revolution. I have been following this blog pretty much daily since 2005 when I started my undergraduate degree. It’s run by the economists Tyler Cowen and Alex Tabarrok, who are personal heroes of mine.

Why scrape a blog?

For me, scraping Marginal Revolution was just something I did for kicks. Since I’m so interested in the content of the blog, I want to be able to do very customized searches of blog posts that would not be possible through the blog’s built-in search feature.

But there are reasons other than “just for fun” that you might want to scrape a blog. For example, maybe the blog is a competitor or in an industry you’re researching. Maybe you want to find out:

  • Roughly how many people read / comment on the blog
  • The blogging strategy in terms of the number, type, and timing of posts
  • Which types of posts produce the most discussion / comments / controversy
  • Which notable people read the blog (i.e. whether they show up in the comments section)
  • Whether trends over time show that things have changed

…and I’m sure there are more possibilities.

Very brief overview of how the scraper works

My goal with the scraper was to get each individual post from the Marginal Revolution website. Marginal Revolution was fairly easy to scrape since the list of posts by month provided a predictable URL structure that made it possible to gather the links for each individual post across the entire website. With the full list of links, it was then simply a matter of making a request to each of these URLs and saving the resulting blog post HTML to disk. The scraper ultimately gathered 23,342 posts.
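As a rough sketch, the crawl loop looked something like the code below. The monthly archive URL pattern, the link selector, and the exact delay are assumptions for illustration rather than the precise values in my scraper:

import os
import time
import requests
from bs4 import BeautifulSoup

BASE = 'http://marginalrevolution.com/marginalrevolution'

def get_post_links(year, month):
    # Each monthly archive page lists links to that month's individual posts
    archive_url = '{}/{}/{:02d}'.format(BASE, year, month)
    soup = BeautifulSoup(requests.get(archive_url).text, 'html.parser')
    return {a['href'] for a in soup.select('h2 a[href]')}

def save_posts(links, folder='posts'):
    os.makedirs(folder, exist_ok=True)
    for url in links:
        # The post slug at the end of the URL becomes the filename on disk
        filename = url.rstrip('/').split('/')[-1]
        with open(os.path.join(folder, filename), 'w', encoding='utf-8') as f:
            f.write(requests.get(url).text)
        time.sleep(5)  # generous delay so the site isn't burdened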

The final step was to extract the relevant information from each HTML file and do some data cleaning. I did this with the Python BeautifulSoup library to parse the HTML, and then pandas for further data cleaning and feature generation. The final result was a nice CSV file:
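The extraction step, in outline, looks something like this; the CSS selectors and column names are assumptions about the page structure rather than the exact ones I used:

import glob
import pandas as pd
from bs4 import BeautifulSoup

rows = []
for path in glob.glob('posts/*.html'):
    with open(path, encoding='utf-8') as f:
        soup = BeautifulSoup(f.read(), 'html.parser')
    rows.append({
        'title': soup.select_one('h1').get_text(strip=True),
        'author': soup.select_one('.author').get_text(strip=True),
        'date': soup.select_one('time')['datetime'],
        'num_comments': len(soup.select('.comment')),
    })

df = pd.DataFrame(rows)
df['date'] = pd.to_datetime(df['date'])
df.to_csv('mr_posts.csv', index=False)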

My scraper had a generous delay between requests so I didn’t create a burden on the website. As you would expect, it took a very long time to gather all the posts – I ran it slowly over a period of about 3 weeks.

Initial Analysis

Oftentimes when reading Marginal Revolution, I would want to search in ways that the built-in search feature wouldn’t allow. For example, I know that Marginal Revolution has had a few guest posts over the years, but they are difficult to find with the search feature because of the sheer volume of posts. Also, many guest posters are frequently mentioned in the regular daily posts by Tyler and Alex, which further complicates the search.

With all the posts scraped, figuring out everyone who has posted on the site and how many posts they’ve written was easy:
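Behind that plot is just a value_counts call in the notebook (the file and column names come from my cleaning step, so treat them as assumptions):

import pandas as pd

df = pd.read_csv('mr_posts.csv')
df['author'].value_counts().plot(kind='barh')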

Obviously it’s totally dominated by Tyler Cowen and Alex Tabarrok, as any reader of the blog would expect, but the plot reveals some interesting authors that I had no idea posted on Marginal Revolution.

Now, say I want to look at all the posts by Tim Harford. It’s just a simple filter operation to get all the links and check them out (15 of them):

http://marginalrevolution.com/marginalrevolution/2005/07/dear_economist_-2.html
http://marginalrevolution.com/marginalrevolution/2005/07/using_cartoons_.html
http://marginalrevolution.com/marginalrevolution/2005/07/marginal_revolu-2.html
http://marginalrevolution.com/marginalrevolution/2005/07/we_shall_see_ho.html
http://marginalrevolution.com/marginalrevolution/2005/07/markets_in_ever_6-2.html
http://marginalrevolution.com/marginalrevolution/2005/12/seasonal_advice.html
http://marginalrevolution.com/marginalrevolution/2005/12/seasonal_advice_2.html
http://marginalrevolution.com/marginalrevolution/2005/07/john_kay_on_cli.html
http://marginalrevolution.com/marginalrevolution/2005/12/seasonal_advice_1.html
http://marginalrevolution.com/marginalrevolution/2005/07/choosing_whethe.html
http://marginalrevolution.com/marginalrevolution/2005/07/what_is_the_rig-2.html
http://marginalrevolution.com/marginalrevolution/2005/07/a_critic_on_cri.html
http://marginalrevolution.com/marginalrevolution/2005/07/red_tape_and_ho.html
http://marginalrevolution.com/marginalrevolution/2005/07/should_londoner.html
http://marginalrevolution.com/marginalrevolution/2005/07/risky_business_1.html
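The filter behind that list is a one-liner in pandas, assuming author and link columns in the cleaned CSV:

harford = df[df['author'] == 'Tim Harford']
print(harford['link'].to_string(index=False))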

I also looked at the amount of discussion generated by each author:

Note that some of these authors only posted once or twice, which would skew their results. Also, some posted in the blog’s early years, when there appear to be few comments (e.g. Tim Harford in 2005). It’s interesting to see that Alex’s posts on average seem to generate slightly more comments. Of course, the total amount of discussion / engagement is way higher for Tyler, given that he posts about 5 times as much as Alex.

Another easy thing to do is examine the time of post to get an idea of the blogging habits of each of the authors. Each blog post includes the time of publication down to the minute. 

Looking at the time of the post reveals some clear patterns. Tyler Cowen is most likely to post in the morning, around 7 am, although he is also likely to post in the early afternoon. 

Alex Tabarrok clearly has a much more rigid blogging schedule. Almost all of his posts are published around 7 am.
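The underlying breakdown is just a group-by on the hour of publication, roughly like this (column names are my assumptions):

df['hour'] = pd.to_datetime(df['date']).dt.hour
(df[df['author'].isin(['Tyler Cowen', 'Alex Tabarrok'])]
   .groupby(['author', 'hour']).size()
   .unstack('author')
   .plot(kind='bar'))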

You can also get an idea of the writing techniques and writing habits of the blog authors. I’m barely scratching the surface of what’s possible here, but as a start I simply looked at the number of characters in the headline. The headline is the most important part of a blog post as it determines whether the reader will continue to read.
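The headline-length calculation is only a couple of lines in the notebook (column names assumed):

df['headline_length'] = df['title'].str.len()
df.groupby('author')['headline_length'].mean().sort_values(ascending=False)  # average length per author
df.nlargest(10, 'headline_length')['title']                                  # the ten longest headlines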

The longest headline in Marginal Revolution is 117 characters: “The Icelandic Stock Exchange fell by 76% in early trading as it re-opened after closing for two days last week.” The table below shows the average headline length for each of the blog authors. Tyler tends to use longer headlines than Alex.

Interestingly, when I read through the top 10 longest headlines, I noticed one called: “Browse every book hyperlink ever posted on Marginal Revolution (is this the second best website ever?)” Clearly I’m not the first person to have scraped Marginal Revolution!

My goal now is to figure out what to do with this data to make the 3rd best website ever…

Addendum: In the comments, the creator of Marginal Revolution Books points to the github repository for his website.

Adventures in Open Data Hacking – Winnipeg Tree Data!

As a data guy, I’m pretty excited to see my city making a strong commitment to open data. The other day, I was sifting through some of this stuff to see what I could play with and what kind of interesting data mash-ups I could create with it. Very soon, my prayers were answered, with the Winnipeg tree inventory – yes, that’s right, the City of Winnipeg keeps a detailed database of all 300,000+ public trees located in the City, including botanical name, common name, tree size (diameter), and precise GPS coordinates!

After chuckling to myself in amusement for a while about how awesome this is, I dug into the data. Read on to see the results. 

You can find the code I used to create the visualizations here. For those curious, I used Python, along with a couple of amazing geographic mapping packages: GeoPandas (incorporates geographic data types into the Pandas package) and Folium (allows you to write Leaflet.js maps using Python code).
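The basic setup looks roughly like the sketch below. The file names and column names are my assumptions about the open data exports, not the exact files on the City’s portal:

import geopandas as gpd
import folium

trees = gpd.read_file('tree_inventory.geojson')        # one point per public tree
neighbourhoods = gpd.read_file('neighbourhoods.geojson')

# Interactive Leaflet base map centred roughly on Winnipeg
m = folium.Map(location=[49.8951, -97.1384], zoom_start=11)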

Most common trees

Turns out that, by far, the most common trees in the city are Green Ash and American Elm. These two types represent almost half of the trees in the database. Check out the plot below to see the top 10 tree types in the City.

Rarest trees

As shown in the previous plot, there are a few types of trees that totally dominate. Looking at the other end of the spectrum, there are quite a few tree types that are extremely rare, with only one or two of them found across the entire city. The map below shows the location of the 50 rarest trees (click on the tree to see the tree type). A valuable resource for all you rare tree hunters.
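The selection and the map boil down to a few lines of pandas and Folium, roughly as sketched below (the common_name column and output file name are assumptions):

# Rank each tree by how common its type is, then keep the 50 rarest individual trees
counts = trees['common_name'].value_counts()
trees['type_count'] = trees['common_name'].map(counts)
rare_trees = trees.nsmallest(50, 'type_count')

for _, tree in rare_trees.iterrows():
    folium.Marker(location=[tree.geometry.y, tree.geometry.x],
                  popup=tree['common_name']).add_to(m)
m.save('rare_trees.html')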

Biggest trees

The map below shows the 50 biggest trees in Winnipeg (by diameter). The size of each of the green dots represents the size of the tree. As you can see in the map, apparently there is a monster American Elm in Transcona that I may have to check out next time I’m in the area.

Neighbourhoods with the most trees

Next, I thought I would mash up the tree data with the neighbourhood boundary data (also available from the City of Winnipeg website) to see what the tree situation is like in each neighbourhood.

Looking at the total number of trees, Pulberry is in the lead, followed by Kildonan Park, River Park South, and Linden Woods.

Here are the 10 neighbourhoods with the fewest trees.

I also put together a choropleth map to visualize at a glance the total number of trees in each neighbourhood.

These measurements aren’t totally fair, since bigger neighbourhoods will naturally have more trees. A better measure of how tree-filled a neighbourhood is would be its tree density: the number of trees per square kilometre. The map below shows the tree density of all the neighbourhoods in the city.
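The density calculation itself is a spatial join plus an area computation, roughly as follows (the projection and column names are my assumptions):

# Project to a metric CRS (UTM zone 14N covers Winnipeg) so areas come out in square metres
nb = neighbourhoods.to_crs(epsg=32614)
nb['area_km2'] = nb.geometry.area / 1e6

# Count the trees that fall inside each neighbourhood
# (the 'predicate' argument is called 'op' in older GeoPandas versions)
joined = gpd.sjoin(trees.to_crs(nb.crs), nb, how='inner', predicate='within')
tree_counts = joined.groupby('index_right').size()

nb['tree_density'] = (tree_counts / nb['area_km2']).fillna(0)  # trees per square km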

Looking at the top ten neighbourhoods in terms of tree density, many are (not surprisingly) parks (Kildonan Park is the most densely treed, by far).

The plot below shows the 10 neighbourhoods with the lowest tree densities. Interestingly, Assiniboine Park is near the bottom. The only explanation I can think of is that the trees there are privately owned (the data only includes public trees).

Other ideas?

There are a lot of other ways you could slice, dice, and mash up the data with other sources to get more interesting results. Here are a few things I thought of:

  • Zooming in on one neighbourhood and showing a dot map or heat map of all the trees to show the distribution.
  • Looking at the percentage of trees that come from parks and filtering out park trees.
  • Developing a “tree index” for each neighbourhood (like something you would see in a real estate ad to describe the neighbourhood).
  • Examining the most common types of trees in each neighbourhood.

Comment below if you have any suggestions / requests!

Creating a Commonplace Book with Google Drive

Here’s a problem I’ve had for a long time: I would invest a lot of time into reading a great book, then inevitably as time passed the insights I gained would slowly disappear from my mind. This was pretty discouraging to me and seemed to defeat the purpose of reading the book in the first place.

So, I was very excited to come across this blog post by Ryan Holiday on keeping a “commonplace book”: your own personal repository of important insights from the various sources you encounter throughout your life.

These insights can come from anywhere: books, blogs, speeches, interactions you’ve had with others, interesting situations you’ve been in, jokes, personal life stories, ideas, and so on. I’ve been doing it for about a year and have added 220 notes and counting.

There are a bunch of benefits from keeping a commonplace book:

  • You can easily review it and go back to categories of ideas as you encounter challenges in your life, with all the most important insights at your disposal, ready to go. For example, if you have a challenge with one of your kids, your “parenting” category is there to provide you with support.
  • You improve your reading skills by consolidating and condensing the most important and relevant material from your sources, as it forces you to think more carefully about what you’re reading and what it means.
  • By reviewing and reflecting on your commonplace book entries, you improve your writing skills and increase your memory and comprehension about the materials you’ve read.
  • You can use the quotes from your commonplace book to enhance your own writing. For example, I’ve noticed that Ryan Holiday’s writing liberally uses quotes from other sources. These come from an extremely wide variety of sources and are really effective at supporting the points he is making. This is clearly the result of his voracious reading habits and commonplace book note-taking.

A commonplace book is like an investment that grows and grows over time. Much like a stock or bond, the earlier you invest, the bigger the payoff.

My commonplace book system

There are a million different ways that you can develop a commonplace book. There’s obviously no one “correct” way to do it, but hopefully my personal commonplace book system gives you some inspiration.

My system uses Google Docs, using a template designed to maximize my retention and reflection on the commonplace book notes and make the commonplace book easily searchable. The system also uses the Google Drive API to send myself daily emails each morning with commonplace book notes to review.

As much as possible, I’ve designed the system to take advantage of scientifically proven learning and retention techniques, including testing and recalling, spaced repetition, varied practice, and elaboration.

Aside: I highly recommend the book Make it Stick, which outlines the best evidence-supported study and learning methods and debunks a lot of common misconceptions. For example, re-reading passages and highlighting are horribly inefficient learning techniques.

My Template for Notes

Here is the template that I use for each of my commonplace book notes:

I make one of these notes for each important point or insight that I come across. For a good book packed with useful information, I’ll probably create about 10-20 of these notes. Each of the components of this template has a specific purpose:

  • Title of Note: This typically labels the content of the note in some way to trigger my memory about what the passage is about. I try to make it somewhat vague so I don’t give away all the content. Then, when I review my notes, I’ll read only the title and then look away to try to recall what the passage is about. This is a way of incorporating testing and recall into my review. It helps improve retention and memory of the note and makes it more likely that I’ll have it in mind, ready to apply when the time is right.
  • Content: The main content of the note – usually a quotation, but not always.
  • Notes: A place where I can connect the content with existing knowledge and add any personal ideas or insights. Doing this kind of elaboration helps with understanding and retention.
  • Citation information: The author, source, page, and url so I know exactly where each passage came from and can look it up or cite it easily if necessary. This also makes the notes way more searchable. Google Drive has great built-in search features (as you would expect), making it easy to find notes from a particular author, book, or tag.

Here’s an example of a note I took from the business and management book The Effective Executive by Peter Drucker. I added this quote to my commonplace book because it had actually never really occurred to me that a job could be poorly designed and unfit for humans. Seemed like a good insight to keep in mind as a prospective employee and if I’m ever in the position of creating job positions myself. 

The Folder System

I divide my notes into folders and subfolders related to the topic. Often, one note could fit in more than one folder. To solve this problem, I write tags in the filename and then pick one of the relevant folders to put the note in; the tags let me rely on search for notes that apply to multiple topics. So if something belongs to both “Business” and “Productivity”, I can just add it to the Productivity folder and make sure both the business and productivity tags are in the filename.

The great thing about having your notes in Google Drive is that you can take advantage of Google’s powerful search feature to find exactly what you need. For example, you can see below the options available in the search feature. You can search by file type, folder location, filename, and contents. With the structured note templates, you can find pretty much anything you need at the snap of a finger. For example, finding all the notes from Peter Drucker is simply a matter of typing “Author: Peter Drucker” into the “Item Name” option shown below.

Daily Reviews Using Python and the Google Drive API

If you don’t have any programming knowledge, this commonplace book system will still serve you well and you don’t have to read on. But since this is a blog about hacking APIs and open data for the purposes of automation and competitive intelligence, there is more to my commonplace book system than simply adding documents to a Google Drive folder.

The commonplace book notes aren’t of much use if they’re just sitting in Google Docs unused, so I wanted to create a system of regular and automated review. Specifically, I wanted to receive an email every morning with 5 randomly selected notes from my commonplace book and review these notes as part of my daily routine. This helps incorporate the learning techniques of testing, spaced repetition, and varied practice. Regular review emails also give me a chance to edit notes if there are things I want to change or add.

You can find all of the code for this review system here. Here is an overview of what it does:

  • Selects 5 documents at random across all the files in your commonplace book folder and subfolders (see the sketch after this list)
  • Builds an email template with links to the five randomly selected commonplace book notes. It does this using the Jinja2 template engine.
  • Sends the email of commonplace book notes to review to yourself (and any other recipients you want). See this previous blog post on how to write programs to send automated email updates.
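For the curious, here is a rough sketch of that file-selection step. It assumes a Drive API v3 service object built as in the quick-start guide; pagination is omitted for brevity, and the exact traversal in my repository may differ:

import random
from googleapiclient.discovery import build

def list_files(service, folder_id):
    # Recursively gather every non-folder file under the commonplace book folder
    query = "'{}' in parents and trashed = false".format(folder_id)
    response = service.files().list(
        q=query, fields='files(id, name, mimeType, webViewLink)').execute()
    files = []
    for f in response.get('files', []):
        if f['mimeType'] == 'application/vnd.google-apps.folder':
            files.extend(list_files(service, f['id']))
        else:
            files.append(f)
    return files

# service = build('drive', 'v3', credentials=creds)  # creds come from the quick-start setup
# notes_to_review = random.sample(list_files(service, COMMONPLACE_BOOK_FOLDER_ID), 5)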

I run the code on a DigitalOcean droplet, set on a cron job to run build_email.py at 7 am every day.

Before you try to run the code, you should follow steps 1 and 2 in this Python quick-start guide to turn on the Google Drive API and install the Google Drive client library. This will produce a client_secret.json file that you will need to place in the top directory of the code.

You’ll also need to make a few substitutions to placeholders in the code:

  • Enter in your Gmail email and password in the file email_user_pass.json.
  • Enter in the email that will be sending the email update and the list of recipients in the file emails.json.
  • In build_email.py, you need to provide a value for COMMONPLACE_BOOK_FOLDER_ID. You can find this by looking in the URL when you navigate to your commonplace book folder in Google Drive.
  • Install any required packages imported in build_email.py

You can customize the way that the email looks by modifying templates/email.html.

Note that this code is useful not just for this commonplace book system, but any system where you need to receive automated email updates that randomly select files in your Google Drive.

Just do it

Start your commonplace book now. Even if you don’t know Python and can’t do the automated email update stuff, it doesn’t matter. That’s just icing on the cake, and there are other ways you can review your commonplace book notes.

Trust me, you won’t regret taking the time to do this. The only thing you’ll regret is the fact that you didn’t start doing it 20 years ago.

How to Build an Automated, Large-Scale Fax Survey Campaign using Python, docx-mailmerge, and Phaxio

You can find the full code for this project here:
https://github.com/marknagelberg/fax-survey-with-phaxio

For most people, fax is an antiquated, old-school technology that has basically no relevance to their lives at all. Most people nowadays probably don’t even know how to use a fax machine and would be annoyed if they were forced to.

But fax is by no means dead: a 2017 poll by Spiceworks suggests that approximately 89% of companies still use fax machines, including 62% that still use physical fax devices. Fax is still very popular in particular industries, such as medicine. If you need to reach these businesses, fax is something to consider.

One of the main problems with physical fax machines is they are labour intensive. This can become an issue in scenarios where you need to send a lot of faxes or you need to send them in some automated way.

For example, suppose you’re conducting a survey of businesses and the only contact information is a fax number. Let’s say you have 5,000 people in your sample and each survey has unique information for each respondent (for example, their name or a unique ID value to track responses).

What do you do? At first glance, it looks like the only option is to print out each survey for each person in the sample, and then feed each survey by hand into the fax machine. If each survey takes a minute to fax, getting these out would mean 83 hours of work, or 2 full work weeks for a single person, plus printing costs (not to mention low morale). Not good!

Luckily, if you have some programming skills, there is a much better alternative: programmable fax APIs. I’m going to take you through an example of using one of these services called Phaxio to send out faxes for a hypothetical survey campaign. Here’s the scenario:

  • You have a sample of potential respondents that you want to fax, in a CSV file that includes each person’s fax number, name, address, and ID.
  • You have a template in a word document of the survey that you want to send. The survey needs to be slightly different for each person in the sample, including their particular name, address, and ID.

Solving this problem involves two main steps: 1) creating the documents to fax and 2) faxing them out.

Creating the documents to fax (using docx-mailmerge in Python)

Let’s say you have two people in the sample like this (it’s straightforward to generalize this to 500 or 5000 or higher sample sizes):


And suppose the template of the survey you want to fax looks something like this:


To get the documents ready to fax in Phaxio, we need to create separate documents for each person in the sample, each with the appropriate variable information (i.e. ID, Address, and Name) filled in.

As a first step, you need to open up your fax template document in Microsoft Word and add the appropriate mail merge fields to the document. This actually was not as straightforward as I expected, but this guide from the Practical Business Python blog was helpful. First, select the “Insert Field” button, which can be found within the “Insert” tab:


Then, you’ll be taken to a pop-up where you’ll want to select the category “Mail Merge” and the field name “MergeField”, and write the desired name of your field. Repeat this step for all fields you need (in our case, three times: once for Name, Address, and ID).


This produces the mergeable fields that you can copy and paste throughout the document as needed. You can tell it worked when a greyed-out box appears as you click your cursor over the field:


Now that your Word document template is prepped, you are ready to do the mail merge to create a unique document for each person in the sample. Since we need to separate each merged document into a separate file, this cannot be done in Microsoft Word (the standard mail merge in Word combines all the documents into one big file).

To do this, there is a handy little library called docx-mailmerge built exactly for the purpose of doing custom mail merges in Microsoft Word with Python. To install the library, type:

$ pip install docx-mailmerge

Using docx-mailmerge, loading the CSV sample values into the Word document is simply a matter of calling the merge function, which takes each merge field in the Word document as a keyword argument and lets you assign these fields the string values of the sample information you want to fill in:

from mailmerge import MailMerge
import pandas as pd
import os

def merge_documents(sample_file, template_file, folder):
    df = pd.read_csv(sample_file)

    for index, row in df.iterrows():
        # Fill the template's merge fields with this respondent's values
        with MailMerge(template_file) as document:
            document.merge(Name = str(row['Name']),
                           Address = str(row['Address']),
                           ID = str(row['ID']))
            # One output document per respondent, named after their fax number
            document.write(os.path.join(folder,
                                        str(row['Fax Number']) + '.docx'))

This produces a folder full of all the documents that you want to fax with the variable information filled and the fax number included as the filename.

Faxing the documents out with Phaxio

To get started with Phaxio, you first need to create an account (https://www.phaxio.com/). Once you’ve done that, install Phaxio’s Python client library, which makes it easy to interact with its API:

$ pip install phaxio

The code below for the send_fax function takes in your personal Phaxio key and secret (provided by Phaxio when you create your account), the fax number, and the file to fax. It sets a time delay of 1.5 seconds in between faxes (the Phaxio rate limit is 1 request per second).

from phaxio import PhaxioApi
import time

def send_fax(key, secret, fax_number, filename):
    time.sleep(1.5)  # stay under Phaxio's rate limit of 1 request per second
    api = PhaxioApi(key, secret)
    response = api.Fax.send(to = fax_number,
                            files = filename)

With this function, sending out your faxes is just a matter of iterating through the Word files you created in the mail merge and passing them to send_fax (along with the fax_number, which appears in the filename).
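Putting the two steps together is then a short loop; the folder name and the key/secret variables below are placeholders:

import glob
import os

for path in glob.glob(os.path.join('merged_surveys', '*.docx')):
    # The fax number was baked into the filename during the mail merge step
    fax_number = os.path.splitext(os.path.basename(path))[0]
    send_fax(PHAXIO_KEY, PHAXIO_SECRET, fax_number, path)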

To Come: Using the Twilio Programmable Fax API

Although Phaxio is a fantastic tool, it is considerably more expensive than the fax API provided by Twilio (https://www.twilio.com/fax): the Phaxio API costs 7 cents per page, while the Twilio fax API only costs 1 cent per page. Although this seems like a minor difference given the low numbers involved, this really adds up if you are sending a lot of faxes. For example, say you are sending out a survey via fax to 5,000 potential respondents and the survey is 5 pages long. Using Phaxio, this would cost $1,750, while Twilio would cost only $250. The more documents you fax, the wider this price gap grows.

However, you do pay a price for using Twilio in added complexity: Twilio does not let you pass binary files into its API – you must pass in a URL to the file you want to fax and cannot just point to the documents on your local disk. This would be a great feature for Twilio to have (take note, Twilio!), but since it doesn’t yet exist, you have to configure a server that provides the documents you want to fax to Twilio. Doing this securely is by no means a straightforward task, but I’ll cover it in a future post…Stay tuned!

Setting up Email Updates for your Scraper using Python and a Gmail Account

Very often when building web scrapers (and lots of other scripts), you’ll run into one of these situations:

  • You want to send the program’s results to someone else
  • You’re running the script on a remote server and you want automatic, real-time reports on results (e.g. updates on price information from an online retailer, an update indicating a competing company has made changes to their job openings site)

One easy and effective solution is to have your web scraping scripts automatically email their results to you (or anyone else that’s interested).

It turns out this is extremely easy to do in Python. All you need is a Gmail account and you can piggyback on Google’s Simple Mail Transfer Protocol (SMTP) servers. I’ve found this technique really useful, especially for a recent project I created to send myself and my family monthly financial updates from a program that does some customized calculations on our Mint account data.

The first step is importing the built-in Python packages that will do most of the work for us:

import smtplib
from email.mime.text import MIMEText

smtplib is the built-in Python SMTP protocol client that allows us to connect to our email account and send mail via SMTP.

The MIMEText class is used to define the contents of the email. MIME (Multipurpose Internet Mail Extensions) is a standard for formatting files to be sent over the internet so they can be viewed in a browser or email application. It’s been around for ages and it basically allows you to send stuff other than ASCII text over email, such as audio, video, images, and other good stuff. The example below is for sending an email that contains HTML.

Here is example code to build your MIME email:

sender = 'your_email@email.com'
receivers = ['recipient1@recipient.com', 'recipient2@recipient.com']
body_of_email = 'String of html to display in the email'
msg = MIMEText(body_of_email, 'html')
msg['Subject'] = 'Subject line goes here'
msg['From'] = sender
msg['To'] = ','.join(receivers)

The MIMEText object takes in the email message as a string and also specifies that the message has an html “subtype”. See this site for a useful list of MIME media types and the corresponding subtypes. Check out the Python email.mime docs for other classes available to send other types of MIME messages (e.g. MIMEAudio, MIMEImage).

Next, we connect to the Gmail SMTP server with host ‘smtp.gmail.com’ and port 465, log in with your Gmail account credentials, and send it off:

s = smtplib.SMTP_SSL(host = 'smtp.gmail.com', port = 465)
s.login(user = 'your_username', password = 'your_password')
s.sendmail(sender, receivers, msg.as_string())
s.quit()

Heads up: notice that the list of email recipients needs to be expressed as a single comma-separated string in the assignment to msg[‘To’], and expressed as a Python list when passed to s.sendmail(sender, receivers, msg.as_string()). (For quite a while, I was banging my head against the wall trying to figure out why the message was only sending to the first recipient or not sending at all, and this was the source of the error. I finally came across this StackExchange post which solved the problem.)

As a last step, you need to change your Gmail account settings to allow access to “less secure apps” so your Python script can access your account and send emails from it (see instructions here). A scraper running on your computer or another machine is considered “less secure” because your application is considered a third party and it is sending your credentials directly to Gmail to gain access. Instead, third party applications should be using an authorization mechanism like OAuth to gain access to aspects of your account (see discussion here).

Of course, you don’t have to worry about your own application accessing your account since you know it isn’t acting maliciously. However, if other untrusted applications can do this, they may store your login credentials without telling you or do other nasty things. So, allowing access from less secure apps makes your Gmail account a little less secure.

If you’re not comfortable turning on access to less secure apps on your personal Gmail account, one option is to create a second Gmail account solely for the purpose of sending emails from your applications. That way, if that account is compromised for some reason due to less secure app access being turned on, the attacker would only be able to see sent mail from the scraper.