When I was a kid I loved to scour my local library for books about computers. Whether because their computer-related offerings were thin or because I wasn’t very resourceful, though, the only one I ever found was The Hacker Crackdown: Law and Disorder on the Electronic Frontier by Bruce Sterling. Released in 1992, the book documents the first interactions between the U.S. law-enforcement community and the phone phreaks and other renegades who were testing the legal and technological limits of the nascent internet.
The book was released under an unusual licensing setup: the rights to the printed book were owned by the publisher, Bantam Books, as usual, but Sterling maintained control over the digital version and released it online for free. “Since Bantam has seen fit to peaceably agree to this scheme of mine,” he writes in the digital version’s preface, “Bantam Books is not going to fuss about this. Provided you don’t try to sell the book, they are not going to bother you for what you do with the electronic copy of this book.”
Over the last week I’ve put together a new “edition” (for lack of a better word) of this ebook. I couldn’t find a copy online that had all the niceties I wanted, like italics and curly quotes, so I took this HTML-formatted version and started beautifying it.1 The result is hosted on GitHub and, thanks to the magic of Pandoc, available as HTML, EPUB, and Markdown. (If you’re up for tweaking the Makefile you can ask Pandoc for any of twenty other formats too.)
The book content itself is stored as HTML, a format which allows for semantically styled text and which has a huge ecosystem around it. Keeping the contents on GitHub means that it’s very easy for people to fix typos or change the CSS to their liking. I hope that this little bit of effort allows others to get the same enjoyment out of this book that I did!
This version was translated to HTML by Bryan O’Sullivan, one of the authors of the Haskell text package, which is used by Pandoc. ↩︎
California, United States
I start new zsh instances all the time, so long startup times become distracting quickly. Here’s how you can figure out which parts of your zshrc or zshenv are taking the most time to run.
Instrumenting your zshrc
Add this code to the top of your zshrc:
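A sketch of that top-of-file code, following the xtrace approach from the Stack Overflow answer credited below (the exact log-file naming is an assumption on my part):

```shell
# Log every subsequent command to a uniquely named file, prefixed with a
# high-resolution UNIX timestamp. (zsh-specific; goes at the top of zshrc.)
zmodload zsh/datetime              # provides $EPOCHREALTIME
setopt PROMPT_SUBST                # allow $-expansion in the PS4 prompt
PS4='+$EPOCHREALTIME %N:%i> '      # timestamp, then script name:line number

logfile=$(mktemp zsh_profile.XXXXXXXX)  # created in the current directory
echo "Profiling, log file is $logfile"
exec 3>&2 2>"$logfile"             # send trace output to the log file
setopt XTRACE                      # start tracing every command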
And add this code to the bottom:
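The bottom-of-file code just needs to turn tracing back off and restore the original stderr; roughly:

```shell
# Stop tracing and put stderr back where it was.
# (Goes at the bottom of zshrc.)
unsetopt XTRACE
exec 2>&3 3>&-
```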
Each time you start zsh, this code will create a new log file in the current directory (typically your home directory) and display the name of this log file. Each command will be written to the log file along with its start time in UNIX epoch format. This logging is turned off when your zshrc is done running. (I took this approach from a Stack Overflow answer by Dennis Williamson.)
You can profile your zshenv instead of your zshrc by adding the same commands to that file.
Now if you create a new zsh instance you’ll have a file called something like zsh_profile.abcd1234 in your home directory. Copy the following script into a file called sort_timings.zsh and make it executable:
You can also just right-click and download this link. The script is hosted on Gist, where there is much more detail about exactly how the script works.1
Run your log file through this script with
./sort_timings.zsh zsh_profile.abcd1234 | head
Interpreting the result
This will list all of the commands that were executed in the course of running your zshrc, longest-running first. For example, here’s what the beginning of the output looked like for me:
The first column shows the approximate amount of time, in microseconds, taken by each command. The next column shows the filename and line number of the zsh script that was being run. The rest of the line shows the command that was run.
Here are two things I learned from profiling my zshrc and zshenv:
Loading NVM, the Node Version Manager, added nearly two seconds to my shell startup time. This is a great tool if you’re working in Node all day, but I’m not anymore, so I moved the NVM loading code out of the main part of my zshrc into a function that I can call when needed.
The “right way” to get the path to a Homebrew-installed package is brew --prefix. For example, if you’ve installed the GNU Coreutils with Homebrew and you want to make them available on your PATH, you could do
path+=$(brew --prefix coreutils)/libexec/gnubin
However, running this brew command takes nearly a second! It’s a much better idea just to hardcode the path. If Homebrew’s directory structure changes you can update the path in your zshenv and you’re probably still saving time overall.
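For instance, on Intel Macs of that era Homebrew kept formulae under /usr/local/opt, so the hardcoded version would look something like this (the exact prefix is an assumption; check it once with brew --prefix on your machine):

```shell
# Hardcoded replacement for $(brew --prefix coreutils); /usr/local/opt is
# the usual formula prefix on Intel Macs (/opt/homebrew/opt on Apple
# Silicon), so verify it before relying on it.
path+=/usr/local/opt/coreutils/libexec/gnubin
```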
This profiling code actually slows down zsh as it starts up, so once you’ve gotten the timing information and made some changes you’ll probably want to take it out again. (See the next section for a less detailed but easier way to measure the total zsh startup time.)
This method of profiling only measures individual commands; it doesn’t make a fancy tree that shows you, for example, the total time spent sourcing some file. This means that if you load a file that contains a million commands that each take ten microseconds, those commands are all going to show up at the bottom of this list even though in aggregate they’re taking a long time to run.
A faster way to measure the total zsh startup time
By the way, if you just want a measurement of the total time it takes to launch zsh, run
time zsh -i -c echo
The time labeled with “total” is the total shell startup time in seconds. The -i flag forces zsh to load all of the files, like your zshrc, that it would usually load for an interactive session. The -c echo tells zsh to run a dummy command—print a blank line—and immediately exit. This is handy if you want to make a change to your zshrc or zshenv and you just want to know the overall effect on the startup time.
I would usually write this kind of script in Awk, but writing it in portable Awk would have been clumsy, and hey, everyone in the target audience has zsh installed… ↩︎
California, United States
There’s nothing more lovely
There’s nothing more profound
Than the certainty
Than the certainty
That all of this will end
OK Go’s music videos are sort of a known quantity at this point. They come up with a concept somewhere between “zany” and “outrageous” and then execute on it unbelievably well. The concepts are different enough that each new video is a revelation.
I’d suggest that their 2016 video for “The One Moment” succeeds not only in making you say “holy crap!” but also, by subverting your expectations, succeeds in a new, emotional dimension too.
The video begins with a couple seconds of whirlwind action—it’s too quick to follow, so all you retain is the last shot, of some guitars exploding. An intertitle spells out the video’s premise: you are about to watch the same bit of action again, but slowed down enough to be watchable.
Slowed down, it turns out, that whirlwind becomes a typically atypical OK Go video. We pan across Tim Nordwind, Andy Ross, and Dan Konopka as they cause various sorts of brightly-colored things to explode. Damian Kulash is soaked by some water balloons in slow motion. The last things to explode are the guitars that the camera lingered on earlier.
The expectation now is that the video is over. After all, each OK Go video is—while certainly not simple in execution—at least simple in concept, and we’ve been given the concept of this one explicitly, in text on the screen. We saw some things happening very quickly and then we watched those things happen more slowly. We ended with some exploding guitars the first time around and now we’re back to the guitars again.
But this isn’t where it ends.
Instead, the camera pans back down to a soaking wet Kulash. He brushes his hair off of his forehead and grins knowingly. As the music softens, he looks directly into the camera and sings, “this will be / the one moment that matters…” It makes the hairs on the back of my neck stand up.
OK Go’s videos are all impressive technically, but despite the fact that the songs themselves touch on doomed relationships, breakups, and dealing with grief, the videos are usually so focused on execution that they only address this emotional piece abstractly. But in “The One Moment”—a song about impermanence, about taking advantage of our limited time by finding a connection to someone else—for Damian Kulash to suddenly break out of the mold of our expectations, to look right at us and smile—that one moment is downright magical.
Mozilla made a big marketing push last month for Firefox Quantum, the newest version of Firefox. I’d been using Safari for a solid five years but decided to give Firefox another shot. A big reason was that I was juggling thirty or so tabs in Safari, which has horrible UI for dealing with that situation: the tabs far from the current one are displayed as stubs with no identifying icon or text at all.1 I was intrigued by eevee’s mention of the Firefox-only Tree Style Tab extension, which seemed like it might be perfect for me. (This extension replaces the usual horizontal tab bar with a vertical, hierarchically-nested list of tabs in the browser’s sidebar.) It turns out that I like the extension quite a lot, although it’s not a “can’t live without it” feature for me yet.
Switching browsers didn’t take too much time—I keep passwords in 1Password and bookmarks in Pinboard, so there wasn’t much to migrate. I do wish that Firefox could have imported the list of currently-open tabs and that it could have pulled my old Favorites into the Top Sites list. (I also wish that it had been easier to move the data for my browser extensions, like Stylish, but that isn’t Mozilla’s fault.)
My main complaint so far is how Firefox chews through my battery. This problem was severe until I installed an ad blocker; now it’s merely annoying. Battery life and performance are the top priority for Apple’s Safari team and that’s really been hammered home to me by using a different browser. If I switch back to Safari this is likely to be why.
Firefox has a more Mac-like interface now than I remember from the mid-2000s. Two things still stick out as non-native: the appearance of tall <select> elements and the lack of system text shortcuts.
On macOS, dropdowns will take up most of the available screen height before they start scrolling. When that happens, arrows appear at the top and/or bottom of the list to indicate that more items are available in that direction. Firefox makes these boxes much shorter and uses a scroll bar to indicate that there has been overflow. We can debate the merits of these two approaches, but I don’t think the Firefox behavior is better enough to justify replacing the system-standard behavior.
Text shortcuts in macOS let you define short pieces of text that will, when typed, expand into other (presumably longer or harder-to-type) pieces of text. These are supported in all native text fields on macOS2 but they don’t work in Firefox. I really miss the ability to type “bbb” and have it expand into my email address.
Finally, one feature that I wish Firefox would outright steal from Safari: If you open a link in a new tab and then press Command-[ to go back, Safari will just close the current tab. This might sound weird but it makes total sense when you try it: it means that the behavior of Command-[ is now the same regardless of whether you opened a new tab when you navigated to the current page. You no longer have to remember whether you need to press Command-[ or Command-W to go back.
Mozilla employee (and IndieWeb guru) Tantek Çelik asked which CSS features people want to see next. Two that I’ve missed recently—things that are available in Safari—are CSS Initial Letter for fancy drop caps and CSS Backdrop Filter for attractively blurred backgrounds. These are, admittedly, minor features, though. I’m gratified at just how much functionality the leading browsers share these days.
You can get a nice visual overview of your tabs in Safari by choosing View → Show All Tabs or pressing Command-Shift-Backslash. This shows you a thumbnail of each tab and groups consecutive tabs from the same domain. Inexplicably, though, you can’t rearrange tabs in this view, and the tabs’ “close” buttons are so small that I often accidentally click on something else instead. ↩︎
The only things I own that still take AA or AAA batteries are a wireless mouse and a pair of noise-cancelling headphones. My other devices either use proprietary batteries (like my camera) or don’t have user-replaceable batteries at all (like my phone and my laptop).
When I was growing up in the late 90s and early 2000s, I needed those standard-sized batteries for a bunch of things:
By doubling the character limit, Twitter has eliminated what made them unique. Yes, there were many trade-offs with the 140-character limit, both pros and cons. But one of the pros is it made Twitter unique. Twitter timelines now look more like Facebook — but Facebook is already there for Facebook-like timelines.
He quotes others, including J.K. Rowling and Stephen King, who are also down on the change. One of the uniquely appealing things about Twitter—especially, I think, to long-time users like Gruber—is the art of fitting a complete thought or joke into 140 characters. I enjoy writing with this constraint sometimes, and I definitely like reading well-done examples.
But here’s another way to think about it: Twitter is no longer an obscure forum for tech-industry early adopters. It has become a super-mainstream social network. The bulk of its users now are concerned not with crafting 140-character bons mots but with the ordinary, everyday sharing of information and opinions.1 For that application, the looser limit may be a boon. More characters mean room for more detail, more nuance, more subtlety on a platform that is rife with misinformation and clipped, tense exchanges. Does this mean people will begin to have more thoughtful conversations? Not universally. But Twitter admits that possibility now more than it did before, and for a site that is becoming an all-purpose public medium, I think the extra space could be a good thing.
(The elephant in the room, of course, is that Twitter voluntarily gives a platform to Nazis. In this context, the change from 140 to 280 is going to do little to cut down on abuse, and might well make it worse in some situations. Enforced terseness was causing some degree of social ill on Twitter, but it was nothing compared to the effects of Twitter’s refusal to address harassment in a meaningful way.)
I’m talking about what Twitter is, not some notion of what it “should be.” ↩︎
California, United States
I haven’t used AIM in years. The last conversation I had with another human—not a spam bot or “aolsystemmsg”—was in 2012. Like a lot of people, I suppose, I got a smartphone in 2011 and never looked back.
When I was in fourth grade or so I was under the impression that a “chat” was when you sent an email to someone and they sent an email right back, because they were online right now! One of my mom’s college students, a guy named Chris, introduced us to AOL Instant Messenger in probably 1999. Compared to exchanging emails, this was a step further: it really felt like you were talking to the other person in real time.
Other parts of the country used MSN Messenger—presumably the same parts that drank “pop” and got their Girl Scout cookies from the other bakery—but AIM was ubiquitous in my school. Most of my free hours were spent in front of the computer and for most of those hours I had the AIM client (and later, Adium) open on the side. I rarely talked to my friends on the phone; it was all online and text-based.1 High school was a heady, hormonal time, which is why, fifteen years later, I still get a little thrill when I hear this sound:
Who’s this? A classmate, a friend, a crush, a secret admirer? Are they asking for homework help or spreading some choice gossip? AIM conversations never started with any subtlety. It was this mechanistic fanfare, every time.
Now instead of “instant messaging” we just have “messaging.” AIM profiles have been replaced with Facebook profiles. Status updates have taken the place of away messages, and who can really be “away” from the internet now anyway? Realistically, AIM has been irrelevant for a while now. It was the social glue of my formative years, though, and purely for nostalgia’s sake I’ll miss it.
Still true today, although “online” no longer means “sitting in front of a desktop computer and monopolizing your parents’ phone line.” ↩︎
California, United States
Inspired by Homebrew Website Club, which I’ve been going to recently, I created a web app. Microformats2 on a Map reads webpages that have been marked up according to the microformats2 standard, extracts locations from them, and displays the locations on—yes—a map. You can try it out here.
The project isn’t complete yet; the UI is unpolished and nested microformats aren’t yet supported. See the GitHub project page for more caveats, usage information, and technical details. The source code is available there under the GNU GPL, version 3. Pull requests are welcome! Bug reports are also welcome but slightly less so.
As part of my endless process of tinkering with this website instead of actually writing posts for it, I went back and added location data to all of my blog entries. Each one now shows the city and state whence it was posted. This information is marked up with the microformats2 h-adr standard to make it machine-readable.
After going through the effort of figuring out where I posted each entry from, I wanted to be able to see all of these locations on a map. It’s a form of gamification: I feel a sense of accomplishment in general when I post a blog entry, and seeing all of my posts laid out visually just reinforces that feeling. (Instagram used to have a similar feature but it seems that they’ve discontinued it… bummer!)
The same-origin policy bit me while I was making this. My original plan was to offer an interface where users could put in multiple URLs and then the app would retrieve each page and extract all of its locations. I was also hoping to make a completely client-side web app so that I could host a few static files but not worry about any server-side logic. But I realized that the same-origin policy would prevent a page hosted on my server from downloading pages from other servers.1
My solution was an awkward compromise. I added a mode in which the user could just put in some HTML and locations would be extracted from that. I also created a server with an endpoint that would take in a URL, retrieve it, and return its contents—the world’s simplest proxy. If someone downloads Microformats2 on a Map and runs the server, the client-side code detects that and offers the possibility to enter a list of URLs. (The pages are retrieved through this proxy, which is allowed by the same-origin policy since that server and the client-side code are coming from the same port on localhost.)
This solution adds a little bit of complexity but it allows me to host a copy of the application without being responsible for a server that could probably be made to DOS someone. It also provides for the ability—if you run the server piece—to map multiple webpages’ locations, which I think is much more useful than just mapping the locations from a single page.
Standing on the node_modules of giants
It’s been said that modern web development consists mostly of hooking together other people’s libraries. In this case, that’s totally true. I pulled in Leaflet for the maps, Leaflet.markercluster to gracefully handle markers that were visually close together, and microformat-node to extract Microformats data from HTML. The app uses OpenStreetMap for map tiles and OpenStreetMap’s Nominatim service for geocoding (turning addresses into latitudes and longitudes). My contribution was a paltry 255 lines of code on the client side and 32 for the server.
It would work fine if the other server had the appropriate CORS headers set, but most sites don’t (and I think this is the right default for servers). ↩︎
California, United States
The kibibyte and mebibyte and their ilk are ridiculous. Introduced in 1998, these units are related to each other by factors of 1,024, a number which is almost, but not quite, the same as 1,000, the basis of the metric system. The only reason these units exist is to work around the ambiguity of some people using “kilobyte” to mean 1,000 bytes and others—understandably but unwisely—using “kilobyte” to mean 1,024 bytes. The sole advantage of the newfangled binary-based units is that they’re unambiguous.
For that reason, though, everyone should measure things in kibibytes, mebibytes, and so on when at all possible. After a decade or two of such sanity maybe we’ll be able to ease back into using “kilobyte” when we mean “kilobyte” and not have to worry about being misinterpreted.
California, United States
If the column command is not available on your system,1 you can replace it with
| sed -e "s/;/\t/g"
for a similar effect. Note also that you will need Git 2.13 (released in May 2017) or later.
Using the Jekyll repository as an example,2 the output will look like
6 years ago Tom Preston-Werner book
4 years, 4 months ago Parker Moore 0.12.1-release
4 years ago Matt Rogers 1.0-branch
3 years, 11 months ago Matt Rogers 1.2_branch
3 years, 1 month ago Parker Moore v1-stable
12 months ago Ben Balter pages-as-documents
10 months ago Jordon Bedwell make-jekyll-parallel
6 months ago Pat Hawks to_integer
5 months ago Parker Moore 3.4-stable-backport-5920
4 months ago Parker Moore yajl-ruby-2-4-patch
4 weeks ago Parker Moore 3.4-stable
3 weeks ago Parker Moore rouge-1-and-2
19 hours ago jekyllbot master
My most recent project at work had several contributors from multiple teams. I took it upon myself to periodically prune our branches, which meant that I needed to know who was responsible for each branch. BitBucket didn’t seem to show that information anywhere so I rigged up this command.
(By the way, I highly recommend GitUp for macOS if you’re interested in a novel way of visualizing your branches:
Be sure to turn on the options to show stale branch tips and remote branch tips.)
How it works
The git command lists all of the branches on the server,3 ordered from least recently edited to most recently edited. For each branch, it prints the relative timestamp of the latest commit; the name of the author of the latest commit; and the branch name. The grep command removes the “HEAD” pointer from the list, since it’s probably just pointing to one of the other branches in the list and we don’t need to show that branch twice. Finally, the column command puts the information into a nice tabular form.
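Putting those pieces together, the command being described is presumably something like the following reconstruction (not necessarily the exact original; the semicolon delimiter matches the sed fallback mentioned in the footnote):

```shell
# Remote branches, oldest first: when each was last touched, by whom, and
# the branch name. Semicolons separate the fields so column can align them.
git for-each-ref --sort=committerdate refs/remotes \
    --format='%(committerdate:relative);%(authorname);%(refname:short)' \
  | grep -v HEAD \
  | column -t -s ';'
```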
column is part of BSD, so it’s available on macOS. It’s available under Ubuntu if the “bsdmainutils” package is installed, which it seems to be by default. ↩︎
I’ve omitted some of Jekyll’s branches for brevity. ↩︎
To be precise, it lists all of the remote-tracking branches. If your local copy of the repo is up to date then this is the same as “all of the branches on the server.” ↩︎
California, United States