Centralisious

About Productivity, Social Networks and everything else I'm interested in

Tag: Dr. Drang

Notes on notes

http://farm9.staticflickr.com/8208/8186577909_d9863447f2_z.jpg

Shortly after the 24:00 mark in the latest episode of Gabe Weatherhead’s Generational podcast, Gabe’s guest, Walton Jones, starts talking about his system for annotating and summarizing academic papers. If you can listen to that 8- to 10-minute stretch without being inspired to improve your own methods for managing the flood of information in your job, then you’re dead to me.

Walton’s system

To be sure, Walton’s system is highly tuned to the specifics of his profession. As a scientist, a good portion of his time is spent analyzing and synthesizing the research of others. That research comes to him in the form of PDFs of journal papers. He adds color-coded annotations to the PDFs as he reads them: red for summaries, green for references, yellow for results, and so on. This may sound like nothing more than a digital version of Post-it notes, but Walton has an amazing trick up his sleeve. When he’s done reading a paper, he runs an AppleScript that goes through the PDF and creates a Markdown document with all the paper’s annotations listed by page number and organized according to category (summary, reference, result, etc.). The Markdown is then turned into a new page in a VoodooPad document.
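Walton’s AppleScript itself isn’t reproduced here, but the grouping step is easy to picture. Below is a minimal Python sketch, assuming the annotations have already been pulled out of the PDF as (page, color, text) tuples; the color-to-category mapping is illustrative, not Walton’s actual scheme.

```python
# Hypothetical color-to-category mapping; Walton's actual scheme may differ.
CATEGORIES = {'red': 'Summary', 'green': 'References', 'yellow': 'Results'}

def annotations_to_markdown(title, annotations):
    """Group (page, color, text) annotations by category and emit a
    Markdown summary with the notes listed by page number."""
    lines = ['# ' + title]
    for color, heading in CATEGORIES.items():
        notes = sorted((page, text) for page, c, text in annotations if c == color)
        if notes:
            lines.append('\n## ' + heading)
            lines.extend('- p. %d: %s' % note for note in notes)
    return '\n'.join(lines)
```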

So he has this VoodooPad document with his notes on the papers he’s read, which is nice, but that’s not the end. Each individual note in VoodooPad is linked to the page of the PDF to which it refers. The power of this system is that he can search through his VoodooPad document, which has his notes and therefore uses terminology that comes naturally to him while searching, and when he finds what he’s looking for, he can click a link and be taken immediately to the right spot in the right paper. This is so much better than simply searching through abstracts or lists of keywords, all of which are words chosen by others.

But don’t just go by my description; read Walton’s own explanation of his system.

My system

While I don’t pore over research papers anymore, I do deal with a menagerie of documents—drawings, photographs, videos, test reports, deposition testimony, presentation slides, email trails—that are increasingly in some sort of electronic format. I try to organize this mess by turning everything except the photos and videos into PDFs. Like Walton, I make notes on these documents as I go through them, but I don’t do it the way he does.

My system is based on talking. Long ago, I talked into a voice recorder. Later, I started talking into my iPhone, using Griffin’s iTalk app. With both of these systems, I’d replay the recording to myself and type up the notes, usually cleaning up the sentence structure as I went along. For the past two months, though, I’ve had a much better system: Siri.

Say all the mean things you want about Siri; for me, she’s a great dictation transcriber. The individual notes I make as I read through a document are typically one or two sentences long, which is just about the perfect length for Siri. In Notesy, I tap the microphone button on the keyboard, say my one or two sentences, and tap Done. A few seconds later, the note appears. Unless I’ve hemmed and hawed or there’s a peculiar word, the transcription needs no editing and I move on.

I have a particular format I prefer, with the page number on a line of its own, then the note itself, then a blank line. A typical session would be me saying something like

Fifty-two. New line. A solid or liquid to a change in direction will be as great as a ton per square inch. Period. There are many transformations of motion. Period. New paragraph.

which comes out in Notesy as

Siri dictation in Notesy

Notesy syncs to Dropbox, so the file will be on my Mac when I’m done making notes. The format is not exactly Markdown, but it’s easy to run a global search-and-replace to add a pair of space characters after each page number to provide the line breaks I want in the output. Marked then turns the text file into a PDF.

This is a pretty good system, but what’s missing—and what Walton inspired me to add—are links from my notes to the page numbers in the original documents. Since I keep my summaries in the same directory as the original documents, the links could be added this way in Markdown:

[52](example-report.pdf#page=52)
A solid or liquid to a change in direction will be as
great as a ton per square inch. Period. There are many
transformations of motion.
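The transformation itself amounts to one multiline regex substitution. Here’s a minimal sketch, not the script I’m working on; the file name passed in is just a placeholder:

```python
import re

def link_notes(notes, pdf_name):
    """Replace each bare page-number line with a Markdown link to that
    page of the PDF. The two trailing spaces force a Markdown line break."""
    repl = r'[\1](%s#page=\1)  ' % pdf_name
    return re.sub(r'^(\d+)$', repl, notes, flags=re.MULTILINE)
```

Running `link_notes(text, 'example-report.pdf')` turns a bare `52` line into `[52](example-report.pdf#page=52)` followed by the two space characters.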

I’m currently working on a script that’ll do this. It works, but it isn’t especially robust and there’s too much “by hand” work in turning the Markdown into a PDF. I’ll do a complete post when I get those problems solved.

I should mention here that neither Preview (under Lion) nor PDFpenPro handles page number links correctly. Preview opens the original document (sometimes—other times it refuses and says I don’t have permission to open it, which is probably some kind of sandboxing stupidity) but won’t go to the linked page number. PDFpenPro doesn’t even get that far; it opens a blank document that it claims in the title bar is the original document.

Skim, on the other hand, handles page number links like a boss. This was a little surprising to me, because Walton says in another post that it doesn’t and that he had to write a script to work around that limitation. All I can say is that Skim has worked fine in all my tests so far. I just need to get that script working so I can start using summaries with links.

via And now it’s all this http://www.leancrew.com/all-this/2012/11/notes-on-notes/

Saving browser tab sets

http://farm9.staticflickr.com/8046/8117740799_383791233f_z.jpg

I often find myself in the middle of some online research with several browser tabs open. I need to stop and move on to something else, but I want to be able to return to my current browser state a day or two from now. I’m going to be using the browser for other things in the meantime, so I can’t just flip the setting that allows me to quit and then relaunch to the same state.

Safari launch preference

There are a few options for saving the browser state, and unsurprisingly, I’ve chosen one that involves scripting. But let’s look over a few others, too.

Pinboard tab sets

First, there’s the remote solution. Pinboard has a Save Tab Sets extension1 that allows you to create a set of bookmarks from all your open tabs that can be accessed through a single name.

Pinboard tab set

Days later, when you want to return to the previous state, go to Pinboard, choose that tab set, and open all the bookmarks.

Open saved tab set in Pinboard

This is a quick, clean solution, but I’m starting to favor local storage instead of the cloud, and it’s often very nice to have all the research links for a project stored with all my other work in the project folder. I’m sure there’s a way to export these links from Pinboard into an OPML file that could be saved to the project folder, but that complicates what should be a simple process. Let’s look at something else.

Safari and Chrome bookmark folders

A simple local solution in Safari is to choose the Add Bookmarks for These n Tabs… command from the Bookmarks menu.

Save tabs as bookmarks in Safari

Chrome has a similar Bookmark All Tabs… command.

Both of these commands create a folder with bookmarks to every tab. All the bookmarks can be opened at a later date with a single menu selection.

Open folder of tabs in Safari

This doesn’t store the bookmarks in the project folder, but that can be done by choosing the Show All Bookmarks command (or clicking the little book icon in the bookmarks bar) and dragging the folder of links from the Safari window into the project folder. That creates a folder of .webloc files that can be double-clicked to open the web pages. At some point, it’ll be necessary to go back and delete these bookmarks from Safari to keep the Bookmarks menu clean.

Bookmarks window in Safari

What I don’t like about this solution—apart from the need to clean up the Bookmarks menu—is that .webloc files aren’t plain text. It’s not especially hard to extract the plain-text URLs from them (they can, for example, be dragged into a text file), but I’d rather they be stored in a more universal format.
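For what it’s worth, a .webloc file is just a property list with a single URL key, so pulling the plain-text URL out programmatically is a one-liner in current Python, whose plistlib reads both the XML and binary plist variants:

```python
import plistlib

def webloc_url(path):
    """Return the plain-text URL stored in a .webloc file."""
    with open(path, 'rb') as f:
        return plistlib.load(f)['URL']
```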

Oddly enough, dragging a folder of bookmarks from Chrome into the Finder creates a single .textclipping file with all the URLs. This can be dragged into a text file, which is nice, but won’t open the web pages when double-clicked.

Scripts that write scripts are the luckiest scripts in the world

Which leads to my little AppleScript, “Save Tabset.” It creates a short executable bash script file that looks like this:

bash:
#!/bin/bash

open -g http://hints.macworld.com/article.php?story=20100112100027790
open -g http://www.chipwreck.de/blog/
open -g http://daringfireball.net/2004/02/setting_empty_file_and_creator_types
open -g http://www.google.com/search?client=safari&rls=en&q=pinboard+save+tab+sets&ie=UTF-8&oe=UTF-8
open -g http://www.flickr.com/photos/drdrang/8117740799/

The URLs are all there in plain text, and when the file is double-clicked, it launches Terminal (which I always have running anyway) and opens all the web pages in the default browser.

When invoked, which I do through FastScripts, “Save Tabset” puts up the standard save file dialog box, which allows the user to save the shell script with any name in any folder. As a first guess, it assumes you want the script saved in the folder of the frontmost Finder window.

Save Tabset

Here’s the AppleScript:

applescript:
 1:  -- Assume the frontmost Finder window (or the Desktop)
 2:  -- is where we want to store the script.
 3:  try
 4:    tell application "Finder" to set defaultFolder to the folder of the front window
 5:  on error
 6:    set defaultFolder to (path to desktop)
 7:  end try
 8:  
 9:  -- Initialize the text of the script.
10:  set cmd to "#!/bin/bash" & linefeed & linefeed
11:  
12:  -- Add commands to open all the tabs.
13:  tell application "Safari"
14:    set n to count of tabs in front window
15:    repeat with i from 1 to n
16:      set cmd to cmd & "open -g " & URL of tab i of front window & linefeed
17:    end repeat
18:  end tell
19:  
20:  -- Open/create a file and save the script.
21:  set scriptAlias to choose file name default name "tabset" default location (defaultFolder as alias)
22:  set scriptPath to POSIX path of scriptAlias
23:  set scriptFile to open for access scriptAlias with write permission
24:  set eof scriptFile to 0
25:  write cmd to scriptFile starting at eof
26:  close access scriptFile
27:  
28:  -- Change the file attributes to make it double-clickable.
29:  do shell script "chmod 777 " & scriptPath
30:  do shell script "xattr -wx com.apple.FinderInfo '00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00' " & scriptPath

Lines 1-7 set the default folder for saving the tabset script—the frontmost Finder window if there is one and the Desktop if there isn’t. Nothing precludes the user from changing the folder when the dialog appears.

Lines 9-18 generate the text of the tabset script by cycling through the tabs of the frontmost Safari window and adding an open -g <url> line for each one. When the tabset script is run, the -g option tells the open command not to bring the browser to the foreground. This makes it easier to dismiss the Terminal window that appears when the tabset script is double-clicked. If you’re a Chrome user, just change Line 13 to

applescript:
13:  tell application "Google Chrome"

Lines 20-26 open a file and write the script out to it.

Lines 28-30 are, admittedly, weird. Line 29 makes the tabset script universally readable, writable, and executable. Line 30 sets both the file type and file creator to null. Both of these are necessary for the script to be taken as a Unix Executable File that gets opened and run in the Terminal when double-clicked. (This may be a Lion-related bug; I still haven’t upgraded to Mountain Lion.)

You may be wondering why I don’t use the AppleScript commands set file type and set file creator, as John Gruber showed in this Daring Fireball post from way back in ’04. The reason is they don’t work. I don’t know why they don’t work, but I tried and they don’t. I found the xattr solution in this Mac OS X Hint. That’s 32 pairs of zeros between the single quotes.

With this AppleScript, I get a single command that does everything I want:

  • Saves the tab URLs in plain text.
  • Saves them to a project folder on my computer.
  • Provides a single double-clickable file that launches the browser and opens all the pages in tabs.

  1. The tab set feature is also included in the Pinbar extension, but I don’t like the Pinbar, because it adds another toolbar across my browser window. 

via And now it’s all this http://www.leancrew.com/all-this/2012/10/saving-browser-tab-sets/

Tidying Markdown reference links


Oscar Wilde—who would have been great on Twitter—said “I couldn’t help it. I can resist everything except temptation.” That’s my excuse for this post.

Several days ago I got an email from a reader, asking if I knew of a script that would tidy up Markdown reference links in a document. She wanted them reordered and renumbered at the end of the document to match the order in which they appear in the body of the text. I didn’t know of one1 and suggested she write it herself and let me know when it’s done. I’ve been getting progress reports, but her script isn’t finished yet.

There’s certainly no need to tidy the links up that way. Markdown doesn’t care what order the reference links appear in or what labels are assigned to them. I’ve written dozens of posts in which the order of the references at the end of the Markdown source was way off from the order of the links in the body. But…

But there is an attraction to putting everything in apple pie order, even when no one but me will ever see it. Last night I succumbed and wrote a script to tidy up the links. Sorry, Phaedra.

Here’s an example of a short Markdown document with out-of-order reference links:

Species and their hybrids, How simply are these facts! How
strange that the pollen of each But we may thus have
[succeeded][2] in selecting so many exceptions to this rule.
but the species would not all the same species living on the
White Mountains, in the arctic regions of that large island.
The exceptions which are now large, and triumphant, and
which are known to every naturalist: scarcely a
[single character][3] in the descendants of the Glacial period,
would have been of use to the plants, have been accumulated
and if, in both regions.

Supposed to be extinct and unknown, form. We have seen that
it yields readily, when subjected as [under confinement][4],
to new and improved varieties will have been much
compressed, we may assume that the species, which are
already present in the ordinary spines serve as a prehensile
or snapping apparatus. Thus every gradation, from animals
with true lungs are descended from a marsupial form), "and
if so, there can be followed by which viscid matter, such as
that of making [slaves][1]. Let it be remembered that
selection may be extended--to the stigma of.

[1]: http://daringfireball.net/markdown/
[2]: http://www.google.com/
[3]: http://docs.python.org/library/index.html
[4]: http://www.kungfugrippe.com/

Note that the references are numbered 1, 2, 3, 4 at the bottom of the document, but that they appear in the body in the order 2, 3, 4, 1. The purpose of the script is to change the document to

Species and their hybrids, How simply are these facts! How
strange that the pollen of each But we may thus have
[succeeded][1] in selecting so many exceptions to this rule.
but the species would not all the same species living on the
White Mountains, in the arctic regions of that large island.
The exceptions which are now large, and triumphant, and
which are known to every naturalist: scarcely a
[single character][2] in the descendants of the Glacial period,
would have been of use to the plants, have been accumulated
and if, in both regions.

Supposed to be extinct and unknown, form. We have seen that
it yields readily, when subjected as [under confinement][3],
to new and improved varieties will have been much
compressed, we may assume that the species, which are
already present in the ordinary spines serve as a prehensile
or snapping apparatus. Thus every gradation, from animals
with true lungs are descended from a marsupial form), "and
if so, there can be followed by which viscid matter, such as
that of making [slaves][4]. Let it be remembered that
selection may be extended--to the stigma of.


[1]: http://www.google.com/
[2]: http://docs.python.org/library/index.html
[3]: http://www.kungfugrippe.com/
[4]: http://daringfireball.net/markdown/

Now the links are numbered 1, 2, 3, 4 in both the text and the end references. The HTML produced when this document is run through a Markdown processor will be the same as the previous one—the links will still go to the right places—but the Markdown source looks better.

Here’s the script that does it:

python:
 1:  #!/usr/bin/python
 2:  
 3:  import sys
 4:  import re
 5:  
 6:  '''Read a Markdown file via standard input and tidy its
 7:  reference links. The reference links will be numbered in
 8:  the order they appear in the text and placed at the bottom
 9:  of the file.'''
10:  
11:  # The regex for finding reference links in the text. Don't find
12:  # footnotes by mistake.
13:  link = re.compile(r'\[([^\]]+)\]\[([^^\]]+)\]')
14:  
15:  # The regex for finding the label. Again, don't find footnotes
16:  # by mistake.
17:  label = re.compile(r'^\[([^^\]]+)\]:\s+(.+)$', re.MULTILINE)
18:  
19:  def refrepl(m):
20:    'Rewrite reference links with the reordered link numbers.'
21:    return '[%s][%d]' % (m.group(1), order.index(m.group(2)) + 1)
22:  
23:  # Read in the file and find all the links and references.
24:  text = sys.stdin.read()
25:  links = link.findall(text)
26:  labels = dict(label.findall(text))
27:  
28:  # Determine the order of the links in the text. If a link is used
29:  # more than once, its order is its first position.
30:  order = []
31:  for i in links:
32:    if order.count(i[1]) == 0:
33:      order.append(i[1])
34:  
35:  # Make a list of the references in order of appearance.
36:  newlabels = [ '[%d]: %s' % (i + 1, labels[j]) for (i, j) in enumerate(order) ]
37:  
38:  # Remove the old references and put the new ones at the end of the text.
39:  text = label.sub('', text).rstrip() + '\n'*3 + '\n'.join(newlabels)
40:  
41:  # Rewrite the links with the new reference numbers.
42:  text = link.sub(refrepl, text)
43:  
44:  print text

The regular expressions in Lines 13 and 17 are fairly easy to understand. The first one looks for the links in the body of the text and the second looks for the labels.
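The footnote exclusion is the only subtle part: the [^^\]] character class rejects any label containing a caret, so MultiMarkdown-style footnotes like [^1] and their definitions are left alone. A quick illustrative check:

```python
import re

# The same two patterns used in the script above.
link = re.compile(r'\[([^\]]+)\]\[([^^\]]+)\]')
label = re.compile(r'^\[([^^\]]+)\]:\s+(.+)$', re.MULTILINE)

sample = ('A [real link][2] in the text.[^1]\n\n'
          '[2]: http://www.google.com/\n'
          '[^1]: A footnote, not a reference link.\n')

link.findall(sample)    # [('real link', '2')] -- the footnote is skipped
label.findall(sample)   # [('2', 'http://www.google.com/')]
```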

The key to the script is its four data structures: links, labels, order, and newlabels. For our example document, links is the list of tuples

[('succeeded', '2'),
 ('single character', '3'),
 ('under confinement', '4'),
 ('slaves', '1')]

labels is the dictionary

{'1': 'http://daringfireball.net/markdown/',
 '3': 'http://docs.python.org/library/index.html',
 '2': 'http://www.google.com/',
 '4': 'http://www.kungfugrippe.com/'}

order is the list

['2', '3', '4', '1']

and newlabels is the list of strings

['[1]: http://www.google.com/',
 '[2]: http://docs.python.org/library/index.html',
 '[3]: http://www.kungfugrippe.com/',
 '[4]: http://daringfireball.net/markdown/']

links and labels are built via the regex findall method in Lines 25-26. links is the direct output of the method and maintains the order in which the links appear in the text. labels is that same output, but converted to a dictionary. Its order, which we don’t care about, is lost in the conversion, but the dictionary makes it easy to look up the URL for each link label.

order is the order in which the link labels first appear in the text. The if statement in Line 32 ensures that repeated links don’t overwrite each other.

newlabels is built from labels and order in Line 36. It’s the list of labels after the renumbering. Line 39 deletes the original label lines and puts the new ones at the end of the document.

Finally, Line 42 replaces all the link labels in the body of the text with the new values. Rather than a replacement string, it uses a simple replacement function defined in Lines 19-21 to do so.

Barring any bugs I haven’t found yet, this script (or filter) will work on any Markdown document and can be used either directly from the command line or through whatever system your text editor uses to call external scripts. I have it stored in BBEdit’s Text Filters folder under the name “Tidy Markdown Reference Links.py,” so I can call it from the Text ‣ Apply Text Filter submenu.

I should mention that although this script is fairly compact and simple, it didn’t spring from my head fully formed. There were starts and stops as I figured out which data structures were needed and how they could be built. Each little subsection of the script was tested as I went along. The order list was originally a list of tuples; it wasn’t until I had a working version of the entire script that I realized that it could be simplified down to a list of link labels. That change shortened the script by five lines or so and, more importantly, clarified its logic.

Despite these improvements, the script is hardly foolproof. The Markdown source of this very post confuses the hell out of it. Not only does it think there are links in the sample document (which you’d probably guess), it also thinks the [%s][%d] in Line 21 of the script is a link (and the one in this sentence, too). And why wouldn’t it? To distinguish between real links and things that look like links in embedded source code, the script would have to be able to parse Markdown, not just match a couple of short regular expressions. This is a variant on what Hamish Sanderson said in the comments on an earlier post.

At the moment, I’m not willing to sacrifice the simplicity of the Tidy script to get it to handle weird posts like this one. But if I find that it fails often with the kind of input I commonly give it, I’ll have to revisit that decision.

As Wilde also said, “Experience is the name everyone gives to their mistakes.”


  1. I didn’t think Seth Brown’s formd did that, but this tweet from Brett Terpstra says I was wrong about that. 

via And now it’s all this http://www.leancrew.com/all-this/2012/09/tidying-markdown-reference-links/

Implementing PubSubHubbub


I mentioned in last night’s post that I wanted to implement Nathan Grigg’s system for instant updates to the site’s feed at Google Reader. I managed to get it done, but I ran into a couple of problems along the way. One I was able to solve cleanly; the other required an underhanded trick.

Even if you never visit the Google Reader site itself, there’s a good chance the feed readers you do use—NetNewsWire, Reeder, Vienna, whatever—use Google Reader to sync the status of feed subscriptions across your devices. And if you’re a blogger, the same is true of most of the subscribers to your site’s feed—Google Reader has become almost everyone’s master subscription list.

In the first paragraph of his post, Nathan lays out why bloggers might want to exercise some control over when this master subscription list gets updated:

Google Reader fetches my RSS feed about once every hour. So if I publish a new post, it will be 30 minutes, on average, before the post appears there. If I notice a typo but Google Reader already cached the feed, then I have to wait patiently until the Feed Fetcher returns. In the mean time, everyone reads and makes fun of my mistake.

As someone who’s always finding (or being told about) typos in his just-published posts, I’m mortified to know that subscribers may be seeing my stupid mistakes for as long as an hour after I fix them. I found the ability to control when Google Reader’s cache for ANIAT was updated very appealing. So I followed Nathan’s instructions to implement the PubSubHubbub protocol here.

PubSubHubbub is an intermediary between a publisher’s site and Google Reader.1 Instead of Google’s Feed Fetcher checking the site periodically to see if the feed has changed, the publisher tells PubSubHubbub when the feed has changed and PubSubHubbub then pushes those changes to Google Reader. Reader updates its cache of the site’s feed almost instantly and no longer needs to poll the site periodically.

The two tasks a publisher must complete to implement PubSubHubbub are:

  1. Tell Google Reader to look for updates to come from PubSubHubbub.
  2. Ping PubSubHubbub whenever the feed changes.

Task 1 requires a line or two to be added to the site’s feed. Because ANIAT is a WordPress site and most of my readers subscribe to the RSS2 feed, I added a line to wp-includes/feed-rss2.php:2

xml:
23:  <channel>
24:    <atom:link href="http://pubsubhubbub.appspot.com/" rel="hub" />
25:      <title><?php bloginfo_rss('name'); wp_title_rss(); ?></title>

Line 24 is the new line. After making this change, the next time Google Reader polled the site, it learned that PubSubHubbub was now the intermediate hub from which it would get future updates.

In my first attempt to get PubSubHubbub working, I misinterpreted Nathan’s instructions and put Line 24 in the wrong place. I thought the line was supposed to go after the entire channel element, and therefore after the </channel> end tag. But as you can see, the proper place for the line is as a child of the channel element—putting it immediately after the opening <channel> tag does the trick. This was the clean solution I described in the opening paragraph.

Task 2 can be accomplished in several ways. Because I use a Python script to publish posts (and to republish them after editing), I simply added these lines to the end of the script to ping the PubSubHubbub server:

python:
104: # Ping PubSubHubbub so Google Reader knows to update its feed cache.
105: data = urllib.urlencode({'hub.mode': 'publish',
106:                          'hub.url': 'http://www.leancrew.com/all-this/feed/'})
107: psh = httplib.HTTPConnection('pubsubhubbub.appspot.com')
108: psh.request('POST', '', data)

Lines 105-106 define the data that needs to be POSTed to the hub server. Lines 107-108 make the connection to the server and POST the data. There are, of course, import httplib and import urllib lines at the top of the script.

Nathan does his pinging through the curl command. I could have done that, too, by calling curl from within my script. But I thought using an httplib request was more Pythonic.
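For anyone on Python 3, the same ping looks like this with the renamed modules, http.client and urllib.parse. The '/' request path and the explicit Content-Type header are my assumptions here, not something taken from the script above:

```python
from http.client import HTTPConnection
from urllib.parse import urlencode

HUB = 'pubsubhubbub.appspot.com'

def ping_hub(feed_url):
    """POST a 'publish' notification so the hub refetches the feed."""
    data = urlencode({'hub.mode': 'publish', 'hub.url': feed_url})
    conn = HTTPConnection(HUB)
    conn.request('POST', '/', data,
                 {'Content-Type': 'application/x-www-form-urlencoded'})
    return conn.getresponse().status
```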

With these two tasks complete, the Google Reader cache of the RSS2 feed now updates within seconds of my publishing or republishing a post.

To accomplish the same thing with the Atom feed, I had to resort to dirty tricks. There’s a simple line to add to wp-includes/feed-atom.php that should work the same as Line 24 above, but for reasons I can’t explain, it never did. Despite many attempts and rereadings of the Discovery section of the PubSubHubbub spec, I just couldn’t get Google Reader to update its cache of the Atom feed.

Luckily, there are only a handful of readers who subscribe via the Atom feed, and I don’t think any of them really care whether they get Atom or RSS2. So I cheated, redirecting requests for the Atom feed to the RSS2 feed by adding this line to the blog’s .htaccess file:

RewriteRule ^feed/atom/$ feed/ [R,L]

Already present at the beginning of the file were the lines:

RewriteEngine On
RewriteBase /all-this

which allowed me to do the rewriting without having to write out long URLs.

Now I have a blog publishing system that’s more tolerant of my errors and doesn’t keep broadcasting them after I make the fixes.


  1. Or other subscription services, but we’re focusing on Google Reader. 

  2. There’s a similar addition to be made to the Atom feed, but I’ll discuss that later in the post. 

via And now it’s all this http://www.leancrew.com/all-this/2012/09/implementing-pubsubhubbub/

Linkety


As a rule, I don’t do link blogging. It’s a pretty crowded field, and I’m way too slow on the draw. It’s better for me to stick with the failure analysis of torsion springs, where I have the field to myself.

Today, though, I ran across three things that I have little to add to but are deserving of more than the brief comment I can give to a link in my Twitter stream.

First is Nathan Grigg’s method for forcing Google Reader to refetch your blog’s feed immediately. This is a godsend for those of us who can’t seem to find our syntactical and typographical errors until after our posts are published. I’ll be adding his header lines to my template and incorporating his pinging code into my post publishing script this weekend.

Next up is Clark Goble’s variation on blackbird.py, the script I use to embed tweets in web pages. In addition to making a few stylistic changes,1 Clark has created a Quickeys macro with which he can generate the HTML code from the selected tweet in Tweetbot. This, to me, is what scripting is all about—not just using others’ work, but recasting it to fit your own needs.

Finally, Gabe Weatherhead from Macdrifter has joined the ranks of podcasters with Generational on the 70 Decibels network. Its topic is “living with technology and trying to make it all work together.” I hope he has an episode about how to find time to listen to all the cool podcasts that are out now. Then maybe I’ll be able to return to whatever show I decide to drop to make room for Gabe.

Oh, wait! There’s one more thing. The In Our Time podcast from the BBC—a show I’ll never drop—returned from hiatus this week with an episode on The Cell. Starts a bit creakily, but gets moving after about 20 minutes or so. My only real complaint is that one of the guests referred to Robert Hooke as a “biologist.” Nonsense.


  1. Notice how gracefully I’m ignoring Clark’s dig that his embedded tweets have “a more subdued and less garish appearance” than mine. I’m classy that way. 

via And now it’s all this http://www.leancrew.com/all-this/2012/09/linkety/