I can’t figure out how to play local iTunes U content on my iPhone - it just doesn’t appear on my device.
I like to download tutorial and training material (from sites like Railscasts, iDeveloper TV, PeepCode and Mijingo) and I’m one of these people who likes to hold onto the material just in case I ever need to watch it again. For convenience I dump it into iTunes on my Mac Mini that serves up content to a number of Apple TV and iOS devices. Because I’ve got a thing for organization, I like to mark the videos as being iTunes U material so that I can get to it easily through the Apple TV menus.
Sometimes I’m not using the Apple TV and I want to watch a quick tutorial on my iPhone. This happened the other night when I decided to watch a Railscast. So I opened up the iTunes U app on the iPhone. It gives me the option to get official material from the catalog, but doesn’t let me browse my local Home Sharing machine.
So I tried the Videos app - it does let you choose Shared libraries, but only displays the categories Movies, TV Shows, and Music Videos. No sign of iTunes U content.
In desperation I tried the Music (formerly iPod) app but it didn’t include the option for iTunes U content (and hasn’t since iTunes U was split off into its own app a while back).
This seemed weird - I couldn’t find a way to play iTunes U material stored on my local server. Strangely, when I picked up my iPad mini the Videos app there has a section for iTunes U material (and I was able to watch the Railscast absolutely fine - “panic” over).
Seems like quite an oversight that there is no way to carry out this (admittedly edge) use-case on an iPhone. Time for a Radar…
Mountain Lion has been with us for several months now and has been a pretty solid release with some nice new features. One of those features is Power Nap - the ability for certain Macs to sleep with one eye open. It allows for the updating of data (fetching mail, contact updates, etc) and the performing of housekeeping tasks (carrying out backups and downloading software updates) while your Mac gives the impression of being asleep.
I have a nice shiny 2012 MacBook Air so I’ve been getting these Power Nap features for free since upgrading to Mountain Lion. Generally I don’t notice it much - I have notifications from Messages already onscreen when I open my machine in the morning, software updates are quick to apply because they have already been downloaded and, most usefully, I can see new entries in my mailbox that arrived overnight.
So far, so good. There’s just one gripe with the way the mail messages arrive. I have an IMAP account that gets updated with new messages. But if I have viewed those messages on my iPhone/iPad or on another Mac, they don’t get marked as read via Power Nap. This seems like a minor inconvenience as the next refresh of the IMAP account should update their read/unread status.
Nope. Checking for new mail in all accounts doesn’t change the status. In fact, basic testing has shown that once woken from Power Nap I can’t even pick up new items in the IMAP mailbox. A restart of Mail.app is required before any updates to that mailbox are detected. It just seems to be the one IMAP account though - I have 3 other mail accounts (2 GMail and 1 iCloud) that update fine.
“It’s been a great little science experiment since Netbot came out about the importance of apps versus web clients”
Gruber, and his guest Brent Simmons, said a lot of sensible things in this discussion, but the implication here is that App.net was treading water prior to the release of Netbot because there were no native clients.
Many App.net users will be well aware that Netbot was not the first native iOS client to be unleashed. There were numerous native clients for iOS available prior to this (I personally used AppApp, Felix, Rhino and Rivr) so the native client requisite for success was already satisfied. The real reason why App.net usage took off after the release of Netbot was more likely due to a large volume of subscribers (existing and potential) who were too timid to truly embrace the platform until a big-name client was launched.
I’m glad that the release of Netbot for iOS gave a much needed boost to the App.net population and I hope we see a similar boost when Netbot for Mac is released. I’d also like to see some acknowledgement given to the original native clients who put in the hard yards in the early days. After all it was they who helped cultivate the existing population that made it viable for a bigger developer like Tapbots to invest in the platform.
Having recently moved to an Octopress blog I’ve been revelling in the geekery of managing my posts from the command line. I was previously using MarsEdit to manage my posting to Tumblr, but that’s not as viable an option any more. This is a shame because I like MarsEdit and I especially liked the ability to work with local draft posts, something that Octopress (and the underlying Jekyll framework) does not natively provide.
Judging by a quick Google search on the subject I’m not the only person who has been thinking about this. Even the mighty Brett Terpstra recently posted to App.net saying that he’d implemented a rake draft option (among other goodies) that stores drafts to a _drafts folder. This seems to be an avenue that other people are going down but I’m not sure about the need for this separation.
Jekyll already offers the facility to mark a post as published using an attribute in the YAML front matter. When published: false is specified the post will not be output to the public folder as part of the rake generate task. It therefore will not be included in a deploy operation, which is exactly as we would expect, but makes it harder to see the draft posts in the context of the blog.
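For illustration, a draft post’s front matter might look something like this (the layout and comments fields follow Octopress defaults; the title and date are placeholders):

```yaml
---
layout: post
title: "My draft post"
date: 2012-10-15 09:00
comments: true
published: false
---
```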
Interestingly though, when using the rake watch or rake preview tasks, posts with an attribute of published: false are included in the generation of posts. When you point your browser to localhost:4000 (or go via a Pow configuration) you’ll see your unpublished posts included as though they were published. For me, this negates the need to create a dedicated drafts folder.
There are still failings with this approach: drafts are mixed in with published posts requiring file inspection to manually separate them, and the date given to the post (crucially updating both the YAML front matter and the filename) will be the creation date rather than the publishing date.
To facilitate this workflow I updated the Rakefile to include a few more default YAML attributes for a new post (defaulting published to false). Most importantly it includes a Rake task called publish_draft that lists all the posts with a published status of false and allows the user to select a post. The post can then be published using the current date (updating the YAML and the filename) or it can simply be published with the original date. You can find this Rake task in the Gist below - please take care as it has not been made 100% foolproof at this stage.
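For those curious about the shape of the logic, here’s a rough, self-contained Ruby sketch - not the actual Gist, and the `source/_posts` path and the helper names (`draft_posts`, `publish_draft`) are purely illustrative:

```ruby
require 'fileutils'

# Default Octopress posts directory (adjust to your setup).
POSTS_DIR = 'source/_posts'

# Find posts whose YAML front matter contains `published: false`.
def draft_posts(dir = POSTS_DIR)
  Dir.glob(File.join(dir, '*.markdown')).select do |path|
    File.read(path) =~ /^published:\s*false\s*$/
  end
end

# Publish a draft: flip the flag and, optionally, stamp today's date
# into both the YAML front matter and the filename.
def publish_draft(path, update_date: true)
  content = File.read(path)
  content.sub!(/^published:\s*false\s*$/, 'published: true')
  if update_date
    today = Time.now.strftime('%Y-%m-%d')
    content.sub!(/^date:\s*\S+/, "date: #{today}")
    new_path = File.join(File.dirname(path),
                         File.basename(path).sub(/\A\d{4}-\d{2}-\d{2}/, today))
  else
    new_path = path
  end
  File.write(path, content)
  FileUtils.mv(path, new_path) unless new_path == path
  new_path
end
```

The real task would wrap this in an interactive prompt that lists the drafts and asks whether to keep the original date.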
After what seemed like an agonising wait for the Mac App Store review period to complete, Tweetbot for Mac has now been released. The app went through a long (and strangely quite public1) alpha testing period and as a result there is very little to surprise the user in the finished product. Except, that is, the price.
The sale price of $20 has ruffled a few feathers amongst the potential customers for Tweetbot. This is not surprising as there are few Twitter clients in the Mac App Store that sell for more than $5. Additionally, Tweetbot for iPhone and Tweetbot for iPad are both priced at $2.99, making the $20 selling price roughly 666% of the iOS price for an app that does pretty much the same thing in a consumer’s eyes.
Personally, I feel that the $20 price tag for this piece of software is steep, but still justifiable. Tapbots have put a lot of effort into this application and they deserve some compensation. Additionally, they have to factor in the cost of Twitter user tokens meaning that they can’t rely on an ever-expanding user base in order to re-coup costs. I can understand some of the backlash to the price increase but this is probably more of a case of the previous clients being undervalued, as opposed to this client being overvalued.
I see this as a positive step, not just for Tapbots, but for App Store pricing in general. The long-discussed “race to the bottom” in app pricing can make it difficult for all but the most popular apps to survive. It is good to see a “headline” app take on realistic pricing and hopefully some of the smaller developers will gain the confidence to increase their prices in future. Higher prices will ultimately make for a more sustainable business model that allows small development companies to survive.
I’m holding onto my conspiracy theory that Tweetbot for Mac went to alpha very early because Tapbots were tipped off somehow. The fact that the app was around for a while before user token limits were put in place gave Tapbots a chance to accrue more user tokens than they would have done otherwise.↩
The Mac and iOS community got in all of a tizzy late last week following the announcement of Marco Arment’s latest project - The Magazine, a foray into the world of publishing through the channel of the iOS Newsstand application. The basic premise is simple: a monthly subscription of $1.99 delivers you two issues per month direct to your iOS device. Each issue contains 4 editorials produced by a variety of writers.
The reaction to the launch has been favourable; not just for the actual product itself but the concept as well. Federico Viticci (Macstories) thinks that “…The Magazine is a promising and notable initiative…”. Matthew Panzarino (The Next Web) described it as “…a template for the future of lightweight Newsstand publications.” Numerous comments across Twitter and App.net praised the initiative and suggested that it was an experiment in publishing.
Marco himself has described The Magazine as “new and experimental” and has set himself a limit for determining the success or failure for the “experiment”:
If it doesn’t turn a profit within two months — just four issues — I’ll shut it down.
I think the praise is deserved - the application is a great reading environment, the writing is of a high quality, and the pricing is favourable. And while I agree that this could be a new business model for publishing I do not agree that Marco’s venture is how we should determine if the model can truly be successful.
A Controlled Environment
To truly judge the success of an experiment or trial it must be carried out in a controlled manner and The Magazine is far from being launched in such a manner. Marco Arment is best known for being the creator of Instapaper, and a co-founder of Tumblr. He has a very successful podcast called Build and Analyze on the 5by5 network. He writes a successful blog for which he can command sponsorship revenues. Marco is justifiably a bit of a celebrity in the Apple world and arguably in wider technology circles.
Like any astute businessman, Marco has been drumming up interest in his new venture for some time now, primarily through his podcast and to his many Twitter followers. It may not be a multi-million dollar advertising campaign, but Marco has enough of a loyal following to all but guarantee the success of The Magazine, in the short term at least.
That is not the only ace The Magazine has up its sleeve: the opening lineup of writers includes well-established players such as Jason Snell, Michael Lopp and Guy English. Marco has invited submissions from other writers with openings starting from issue four, suggesting that the roster of big-name writers may be set up for most of his four-issue trial period.
The Real Experiment
The Magazine is great for the readers, great for the writers and ultimately great for Marco; it should be a huge success. It’s not so great for other budding publishers waiting to see if the business model works, especially if they don’t have the following that Marco has, or the access to top writing talent.
It reminds me of artists like Radiohead or Nine Inch Nails experimenting with self-publishing and distribution. They declare it a success when they sell a lot of copies and state it to be the future of music publishing. It’s not so easy for the up-and-coming artist who doesn’t have the established fanbase or production talent.
In my opinion the real experiment here is not the success or failure of The Magazine. It is instead in the success or failure of Marco Arment to diversify from his roots as a developer of software into a curator, an editor and a publisher. He has expressed a dissatisfaction with writing software in the past and his forays into podcasting, writing and public speaking have indicated a desire to get away from the coal face.
I hope that The Magazine succeeds because it places an emphasis on content rather than advertising. I hope it kick starts a new business model for publishing. But as someone who likes to see an individual succeed I’m more interested in seeing how Marco’s experiment with his career pans out.
The brouhaha over third-party Twitter clients continues with the news of the release of a beta version of Tweetbot for Mac. Just 4 days ago the alpha version was withdrawn with Tapbots citing as a reason the “finite limit on the number of user tokens we can get for Tweetbot for Mac”.
This was a fair move by the developers - any Mac user could download the alpha, sign in and obtain a limited Tweetbot for Mac token. If they subsequently decided they were not interested in buying the application they could discard it without releasing the token they used. It was a problem with only one resolution - withdraw the app and preserve the tokens.
Today Tapbots came back with a beta version and some new rules:
new users can install the beta but they’ll not be able to add a Twitter account;
users of the alpha version can use the same Twitter account they used before;
users of the alpha version cannot add extra Twitter accounts;
users of the alpha who revoke their access token on the Twitter site cannot add their account again.
All of this makes perfect sense from the point of view of token preservation. Unfortunately in this post-announcement landscape these once common items are now a scarce resource. They’ve become the fossil fuels of the technology world, and like fossil fuels they’re limited and easily wasted. They used to be very disposable - a quick look at my Apps section on twitter.com lists 34 tokens on one account alone, and that’s after a prune earlier this week.
Efficient Token Usage
Like many people I have a number of Twitter accounts, and on each of my clients I like to have all of my accounts signed in. This was never a problem before but now it just exacerbates the issues faced by app developers. They have a limited number of access tokens that can be used with their apps and users like me with multiple accounts end up using more tokens than other people.
To make an app commercially viable developers need to squeeze as much money per token as possible. One way to do this would be to increase the cost of their apps to cover the average number of accounts per user. Alternatively they could keep the initial purchase price the same while limiting the number of accounts that can be signed in through the app. Power users could then be offered the option to use in-app purchases to effectively buy extra access tokens.
There has been a lot of discussion recently on the devaluing of software through bargain basement prices and this is really going to hit home to Twitter client developers now. Maybe the scarcity of access tokens will encourage users to value the software higher and pay more for it. Remember, you’re not just paying for the software anymore - you’re also paying for the privilege of connecting to the service through your favourite client.
During a recent episode of Hypercritical, host John Siracusa mentioned that he had received feedback from readers of his OS X Mountain Lion review commenting about the failings of full screen mode in Mountain Lion (and previously in Lion), particularly with respect to multiple monitor support.
I see where these comments have come from and have sympathy. When I first installed Lion I was eager to get stuck into this new world of full screen apps. Pressing the little button on the title bar unleashed a magical zoom animation. Some apps just stretched the existing interface, but others (such as iPhoto) changed their interface to better suit the full screen experience. It was very exciting for about 5 minutes, but it wasn’t long before I shared the frustrations aired by Siracusa and his readers:
No, I wanna use the other screens, I wanna put stuff there, I wanna see my other windows there, I wanna see my web browser there, my Twitter feed there - whatever the heck I have over there. And you can put palettes and stuff from the application that you made go full screen, you can put them on the other screens but you can’t use them for other applications. It was like, it was almost like it was being mean to you, it’s like “I know you want to see your Twitter client, but I’m putting linen over that whole screen - nah!” and that pissed people off.
I admit that I was that reader initially and I found the lack of flexibility when using multiple displays was exceptionally frustrating. The app running in full screen mode would take over the primary display but the secondary display would only contain that boring linen background. Yes, it was possible to place extra palettes or inspectors on the secondary display but there didn’t seem to be any way to truly take advantage of the additional real estate.
It sure would have been nice if users could have placed two independent apps side-by-side. Or if developers could have created UI layouts that took advantage of full screen mode with multiple displays, akin to the way full screen editing was available in many previous versions of Aperture. John, being the sensible guy that he is, tried to reason it out, and I think he almost nailed it1 when he said that full screen is for people who need to concentrate.
One use case I think he glossed over though is the fact that full screen is also for people who are working on portable machines. People who are working on one (possibly small) screen and need to squeeze every last pixel out of their displays. It’s not for the power user who wants to work with multiple apps at once (or with a multitude of windows/palettes/inspectors from a single app). That’s what regular OS X is for and has been all along.
I’m going to say he nailed it, because he came to pretty much the same conclusion that I did.↩
Recently I’ve been experimenting with text-editor-du-jour Sublime Text 2 in parallel with coming late to the Rails party. Sublime Text 2 has a great plugin ecosystem that seems to be growing by the day, and one plugin (or package in ST2 parlance) that I find indispensable is SublimeLinter.
As a developer I’ve always written code in an iterative fashion - write a couple of lines, compile, edit those lines to fix the bugs, compile, write a few more lines, and so on. In my career this has occasionally been problematic, as I’ve sometimes had to work with unwieldy and inefficient make-based build systems that unnecessarily rebuilt large swathes of code instead of a single source file. In recent years I’ve been spoilt by some excellent IDEs (such as Eclipse/STS and Xcode) that have been kind enough to alert me to syntactical errors before I invest time in a build.
While playing with Rails I found that I was regularly coming across syntactical errors only when it came to testing out the application. It wasn’t the horrible build systems of yore, but it wasn’t exactly the convenience of a nice IDE. So I hunted around for a linter that I could integrate with ST2 and quickly came across SublimeLinter. A couple of seconds later (thanks to the ridiculously useful Package Control) I had it up and running, and I was getting lots of notifications about errors in my Ruby code.
Unfortunately I soon realised that I was getting too many errors, to the extent that I was getting errors on perfectly valid code. The code ran fine in my Rails app, but still SublimeLinter was bitching and moaning about it. I had a look around the SublimeLinter documentation, and noticed that they recommended using the full path to the ruby executable if using a Ruby interpreter provided by rvm. On OS X this comes down to the fact that the default Ruby is 1.8.7 whereas most people will be using code written against Ruby 1.9.X, which they’ve usually installed themselves.
Thankfully SublimeLinter makes it easy to customise the Ruby interpreter to use, so if you’re having trouble with syntax errors in your Ruby code, just choose the menu item Sublime Text 2 -> Preferences -> Package Settings -> SublimeLinter -> Settings - User. This will open up the user preferences file for SublimeLinter and you can enter the following snippet (substituting in your own username of course, and remembering to merge the JSON with any settings you may already have in there):
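The snippet looked something like the following - note that the sublimelinter_executable_map key is recalled from the SublimeLinter settings of that era, and the rvm path is an example that depends entirely on your own install:

```json
{
  "sublimelinter_executable_map": {
    "ruby": "/Users/username/.rvm/bin/ruby"
  }
}
```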
I’m stuck between a rock and a hard place. I have a beautiful home of timberframe construction where every internal wall is a stud wall. What this really means is that every wall is a candidate for being stuffed full of cables. My biggest home-improvement desire is to get every room wired with Gigabit Ethernet, but if possible I’d like to get every type of cable under the sun in there. If it’s possible to generate enough heat from those cables to remove the need for central heating then I’ll consider that to be a bonus.
The problem is that all those lovely wall cavities are hiding behind plasterboard that my wife does not want me to remove. I’m convinced that I could do it in a surgical manner - in and out with minimal disruption and a nice new shiny coat of paint in every room. She’s convinced that I’ll leave gaping holes running down the walls that never get patched up, and that the painting won’t get beyond the colour-picking stage. She’s probably right.
Enter stage left: 802.11n WiFi
With the right hardware 802.11n WiFi could very easily have been the answer to my (or my wife’s) prayers. The theoretical maximum is 600Mb/s - it’s not quite Gigabit Ethernet, but is still capable of delivering a large amount of data quickly. Theoretically. With the right hardware.
At the minute I only have one weak spot in the home network - the link between a Gigabit network upstairs and a Gigabit network downstairs in the office. I invested in the latest Apple Airport Extreme base station that I paired up with a 2nd generation Time Capsule. The Airport Extreme was configured to establish a 5GHz 802.11n network using the wide-band settings.
I had hoped to see a pretty fast link between the two base stations, but unfortunately the data rate reported by the Extreme was just 180Mb/s. Testing the network for “real-world” performance resulted in just 93.43Mb/s (using zPerf as a test tool). This isn’t a bad speed, but I do tend to fling a large amount of data around the network. It also gets extremely frustrating to be limited to Fast Ethernet speeds when I know there’s a Gigabit network up there just waiting to be saturated.
Enter stage right: powerline networking adapters
I’d considered powerline adapters in the past, but had always balked at the relatively low speeds they offered. Having discovered just how poor the WiFi connection was I decided that the newer 500Mb/s models might be worth a blast - even at 50% of the theoretical maximum I could still get 2.5 times the performance of the WiFi link. When an offer for Maplin powerline network adapters came up they were hastily purchased. Once received I got to testing, and the results were extremely disappointing.
I know that 500Mb/s is an ideal scenario and many things will conspire to reduce this figure in the real world: old wiring, interference from other devices and the use of extension cables, amongst other factors. Our home is fairly modern, and has wiring that is only 8 years old. I tried to improve things by avoiding extension cables, and plugging directly into the mains sockets at both ends, but to no avail. The best rate I could get was a paltry 53.21Mb/s. WiFi it is then.
Not wanting to return the product without trying to find a use for it, I decided to try using the powerline adapters to connect my ADSL modem/router from downstairs to the upstairs network. We only get 5.5-6Mb/s from our ADSL connection so 50+Mb/s is plenty of bandwidth. Unfortunately since doing so, I noticed regular breaks in the connection from the home network to the ADSL router.
The only new factor there is the powerline adapters, so they’ve had to come out and are currently on their way back to Maplin. Rather than post them, my kind wife agreed to drop them into the store as she was going to be passing it. She’ll probably be away for a few hours at least. Now to find my hammer, some Cat6, and the crimping tool.