Blog

The right way to use requests in parallel in Python

Today I was working on getting as many YouTube comments out of the internets as possible. I'm sure my code has a long way to go, but here's one speed-up that a naive first day out with multiprocessing and requests produced.

```python
import requests
import multiprocessing

BASE_URI = 'http://placewherestuff.is/?q='

def internet_resource_getter(stuff_to_get):
    session = requests.Session()
    stuff_got = []
    for thing in stuff_to_get:
        response = session.get(BASE_URI + thing)
        stuff_got.append(response.json())
    return stuff_got

stuff_that_needs_getting = ['a', 'b', 'c']
pool = multiprocessing.
```
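The excerpt cuts off at the pool creation. A typical completion of this pattern uses `multiprocessing.Pool` with `map` over chunks of work; the chunking and pool size below are guesses, and a local stand-in replaces the network call so the sketch runs offline:

```python
import multiprocessing

# Stand-in for the excerpt's internet_resource_getter: same shape, but it
# fabricates one result per key instead of hitting the network, so the
# sketch runs without an internet connection.
def internet_resource_getter(stuff_to_get):
    return [{'q': thing} for thing in stuff_to_get]

stuff_that_needs_getting = ['a', 'b', 'c', 'd']

if __name__ == '__main__':
    # Each worker process handles one chunk of keys (the chunk size and
    # process count are assumptions; the excerpt stops before configuring
    # the pool).
    pool = multiprocessing.Pool(processes=2)
    chunks = [stuff_that_needs_getting[:2], stuff_that_needs_getting[2:]]
    results = pool.map(internet_resource_getter, chunks)
    pool.close()
    pool.join()
    # Flatten the per-chunk lists back into one list of responses.
    flat = [item for chunk in results for item in chunk]
```

Giving each worker its own chunk (and, in the real version, its own `requests.Session`) avoids sharing connection state across processes.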

Some updates

It’s been a while, again, since I’ve blogged, and I’m somewhat concerned about how to fix that. Do I commit to a post a day? There’s a bit of pop psychology floating around that says you should never tell anyone your goals. Doing so only gives you a dopamine dose of self-satisfaction that actually reduces your likelihood of completing the project. Why go for delayed gratification at all if you can get your hit of happy by telling all your friends what you plan to do?

30 days of iPython

See the GitHub repo here. I suck at Python. I write Python like I’m still 10 years old, programming in QBASIC. I don’t even need to be a better programmer in my line of work (I’m a music student), but it’s something I’ve wanted to work on for a while, and I know the only way to improve is to write, write, write. I love IPython Notebook (a.k.a. Jupyter with a Python 2 kernel) because it allows me to mess up, fix my mistakes, and run the code again.

Getting Eulerian Video Magnification set up on Ubuntu 14.10

Download this version (R2012b, a.k.a. v80) of the Matlab Compiler Runtime. Follow the instructions carefully and make sure to set the LD_LIBRARY_PATH and XAPPLRESDIR environment variables appropriately; these changes can be made permanent in your shell startup profiles. Trusty Tahr (14.04) doesn’t usually ship with the right codecs for the Matlab Compiler Runtime to do its thing. These packages did the trick for me: ubuntu-restricted-extras, and then the PPA ppa:mc3man/trusty-media, which provides gstreamer0.
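The environment-variable setup usually looks something like the following, assuming the v80 runtime landed in the default `/usr/local/MATLAB/MATLAB_Compiler_Runtime` location (adjust `MCR_ROOT` if yours went elsewhere):

```shell
# Assumed default MCR install root -- adjust to your actual install path.
MCR_ROOT=/usr/local/MATLAB/MATLAB_Compiler_Runtime/v80

# Library search path for the runtime's shared objects (64-bit Linux).
export LD_LIBRARY_PATH="$MCR_ROOT/runtime/glnxa64:$MCR_ROOT/bin/glnxa64:$MCR_ROOT/sys/os/glnxa64:$LD_LIBRARY_PATH"

# X11 application defaults used by MCR-built GUIs.
export XAPPLRESDIR="$MCR_ROOT/X11/app-defaults"
```

Appending these lines to `~/.bashrc` (or your shell's equivalent) makes them permanent.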

This is not what hyperlinks are for

Allow me a little rant. I was reading this FastCo article about a Spotify webapp that seemed interesting to me. Here’s a screencap of the relevant part. See the hyperlinked words “playlist tool”, underlined in yellow? You’d think that this would link to the webapp in question. But no, it resolves to a category/tag-explorer page with the URI http://www.fastcompany.com/explore/playlist-tool. What about “web app”? Nope: http://www.fastcompany.com/explore/web-app. Does the article link to the tool at all?

How I hacked scheduling class meetings

As a preface, I think this merits the label hack not because it’s particularly clever or well implemented, but simply because it was the fastest way for me to arrive at an optimal solution to a well-defined problem. Problem statement: split a class of $$k$$ students into $$n$$ disjoint meetings (‘sections’), each meeting once a week on a pre-determined day of the week, and find a mutually convenient time for each section based on the availability of each student.
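The excerpt stops before the method, but to illustrate the flavor of the problem: given a fixed split into sections, picking each section's time is just a vote over the students' free hours. The names, the availability encoding, and the fixed split below are all invented stand-ins, not the post's actual approach:

```python
# Hypothetical availability data: student -> set of free hours on a
# section's meeting day (names and encoding invented for illustration).
availability = {
    'ann':  {9, 10},
    'bob':  {10, 11},
    'carl': {11, 14},
    'dana': {9, 14},
}

def best_hour(section):
    """Pick the hour at which the most students in `section` are free."""
    hours = set().union(*(availability[s] for s in section))
    return max(hours, key=lambda h: sum(h in availability[s] for s in section))

# A fixed split into n = 2 sections; searching over possible splits is the
# harder combinatorial part of the problem.
sections = [['ann', 'bob'], ['carl', 'dana']]
times = [best_hour(sec) for sec in sections]
```

Each section lands on the hour where attendance is maximized; the real task also has to choose the partition itself, which is where brute force or cleverness comes in.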

GitHub pages subdirectory hassle

This blog is hosted on GitHub Pages. It is automatically generated from Markdown source files by Jekyll every time a commit is pushed to the gh-pages branch of the GitHub repo corresponding to the blog. I have a private repo called ‘blog’, and under normal circumstances its ‘project page’ (actually my blog) would appear at, say, http://myusername.github.io/blog. GitHub’s Jekyll process seems to be clever enough to handle this and ensure that HTML links in the source are rendered correctly relative to this base URL.
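The usual Jekyll knob for a project page served from a subpath is `baseurl`; a minimal `_config.yml` fragment for a repo named ‘blog’ (assuming the standard Jekyll configuration scheme, not necessarily what this site uses) would be:

```yaml
# _config.yml -- serve the site under /blog rather than at the domain root.
baseurl: "/blog"
```

Templates then prefix internal links with it, e.g. `{{ site.baseurl }}/about.html`, so they resolve under http://myusername.github.io/blog.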

Scraping great music taste

I’m a sometime listener of John Schaefer’s New Sounds podcast. His particularly eclectic taste is of wide renown. Sometimes a recording of the show is posted online, but far from always. However, the blog post corresponding to each show includes its tracklist as an HTML element, so it is trivial to write a scraper that iterates through the back-catalog of tracklists. The scraper outputs a CSV file.
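The shape of such a scraper, using only the standard library, might look like this. The markup, tag names, and show identifier are invented stand-ins; the real New Sounds pages will have different structure, and a real run would fetch each post over HTTP rather than parse a hard-coded string:

```python
import csv
import io
from html.parser import HTMLParser

class TracklistParser(HTMLParser):
    """Collect the text of <li> items from a (hypothetical) tracklist list."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.tracks = []

    def handle_starttag(self, tag, attrs):
        if tag == 'li':
            self.in_item = True
            self.tracks.append('')

    def handle_endtag(self, tag):
        if tag == 'li':
            self.in_item = False

    def handle_data(self, data):
        if self.in_item:
            self.tracks[-1] += data

# Stand-in for one show's blog post; the real scraper would download this.
sample_html = '<ul><li>Artist One - Track A</li><li>Artist Two - Track B</li></ul>'

parser = TracklistParser()
parser.feed(sample_html)

# One CSV row per track (in-memory here; a file in the real scraper).
buf = io.StringIO()
writer = csv.writer(buf)
for track in parser.tracks:
    writer.writerow(['show-123', track])  # hypothetical show identifier
csv_text = buf.getvalue()
```

Looping this over the back-catalog of post URLs and appending to one file yields the CSV the post describes.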

Back in New York, again.

You have to go to come back, I suppose. And so I have done both of these things. I spent this summer mostly in and around Ireland, and partly in Dresden, Germany. Trying different combinations of transport from the airport, I took the LIRR to Penn Station and got a cab with my gigantic luggage (I wasn’t fancying the subway on a Friday afternoon). After taking the West Side Highway, the taxi driver wisely bailed out at around 79th Street at the sight of congestion there and (comparatively) sailed up West End Avenue, mitigating the Manhattan midday madness somewhat.

Listening to Thunder (as)

A couple of nights ago I had the dubious pleasure of a musical performance at 4 a.m., awake when I ought to have been sleeping, roused precisely by the auditory phenomenon that captured my attention. A system of several gigantic thunderstorms, larger than any I’d ever experienced in my life, trundled over and around Dresden for about half an hour, bringing with them an unignorable musical event. The sheer volume of each thunderclap was such that I felt it not only in my “ears” (whatever that means) but also in my head, my chest, my whole body.