Tweetbot for Mac Pricing

After what seemed like an agonising wait for the Mac App Store review period to complete, Tweetbot for Mac has now been released. The app went through a long (and strangely quite public[1]) alpha testing period, and as a result there is very little in the finished product to surprise the user. Except, that is, the price.

The sale price of $20 has ruffled a few feathers amongst potential customers for Tweetbot. This is not surprising, as there are few Twitter clients in the Mac App Store that sell for more than $5. Additionally, Tweetbot for iPhone and Tweetbot for iPad are both priced at $2.99, making the $20 selling price an increase of nearly 570% for an app that does pretty much the same thing in a consumer’s eyes.

Personally, I feel that the $20 price tag for this piece of software is steep, but still justifiable. Tapbots have put a lot of effort into this application and they deserve some compensation. Additionally, they have to factor in the cost of Twitter user tokens, meaning that they can’t rely on an ever-expanding user base in order to recoup costs. I can understand some of the backlash to the price increase, but this is probably more a case of the previous clients being undervalued than of this client being overvalued.

I see this as a positive step, not just for Tapbots, but for App Store pricing in general. The long-discussed “race to the bottom” in app pricing can make it difficult for all but the most popular apps to survive. It is good to see a “headline” app take on realistic pricing and hopefully some of the smaller developers will gain the confidence to increase their prices in future. Higher prices will ultimately make for a more sustainable business model that allows small development companies to survive.

  1. I’m holding onto my conspiracy theory that Tweetbot for Mac went to alpha very early because Tapbots were somehow tipped off. The fact that the app was around for a while before user token limits were put in place gave Tapbots a chance to accrue more user tokens than they would have done otherwise.

The Magazine Is Not the Experiment

The Mac and iOS community got all in a tizzy late last week following the announcement of Marco Arment’s latest project - The Magazine, a foray into the world of publishing through the channel of the iOS Newsstand application. The basic premise is simple: a monthly subscription of $1.99 delivers two issues per month direct to your iOS device. Each issue contains four editorials produced by a variety of writers.

The reaction to the launch has been favourable, not just for the product itself but for the concept as well. Federico Viticci (MacStories) thinks that “…The Magazine is a promising and notable initiative…”. Matthew Panzarino (The Next Web) described it as “…a template for the future of lightweight Newsstand publications.” Numerous comments across Twitter and App.net praised the initiative and suggested that it was an experiment in publishing.

Marco himself has described The Magazine as “new and experimental” and has set himself a limit for determining the success or failure for the “experiment”:

If it doesn’t turn a profit within two months — just four issues — I’ll shut it down.

I think the praise is deserved - the application is a great reading environment, the writing is of a high quality, and the pricing is favourable. And while I agree that this could be a new business model for publishing, I do not agree that Marco’s venture is how we should determine whether the model can truly be successful.

A Controlled Environment

To truly judge the success of an experiment or trial it must be carried out under controlled conditions, and The Magazine is far from a controlled launch. Marco Arment is best known for being the creator of Instapaper and a co-founder of Tumblr. He has a very successful podcast called Build and Analyze on the 5by5 network. He writes a successful blog for which he can command sponsorship revenues. Marco is justifiably a bit of a celebrity in the Apple world, and arguably in wider technology circles.

Like any astute businessman, Marco has been drumming up interest in his new venture for some time now, primarily through his podcast and to his many Twitter followers. It may not be a multi-million dollar advertising campaign, but Marco has enough of a loyal following to all but guarantee the success of The Magazine, in the short term at least.

That is not the only ace The Magazine has up its sleeve: the opening lineup of writers includes well-established players such as Jason Snell, Michael Lopp and Guy English. Marco has invited submissions from other writers with openings starting from issue four, suggesting that the roster of big-name writers may be locked in for most of his four-issue trial period.

The Real Experiment

The Magazine is great for the readers, great for the writers and ultimately great for Marco; it should be a huge success. It’s not so great for other budding publishers waiting to see if the business model works, especially if they don’t have the following that Marco has, or access to top writing talent.

It reminds me of artists like Radiohead or Nine Inch Nails experimenting with self-publishing and distribution. They declare it a success when they sell a lot of copies and state it to be the future of music publishing. It’s not so easy for the up-and-coming artist who doesn’t have the established fanbase or production talent.

In my opinion the real experiment here is not the success or failure of The Magazine. It is instead the success or failure of Marco Arment in diversifying from his roots as a software developer into a curator, an editor and a publisher. He has expressed dissatisfaction with writing software in the past, and his forays into podcasting, writing and public speaking have indicated a desire to get away from the coal face.

I hope that The Magazine succeeds because it places an emphasis on content rather than advertising. I hope it kick-starts a new business model for publishing. But as someone who likes to see an individual succeed, I’m more interested in seeing how Marco’s experiment with his career pans out.

Maximising Bang Per Token

The brouhaha over third-party Twitter clients continues with the news of the release of a beta version of Tweetbot for Mac. Just four days ago the alpha version was withdrawn, with Tapbots citing the “finite limit on the number of user tokens we can get for Tweetbot for Mac” as the reason.

This was a fair move by the developers - any Mac user could download the alpha, sign in, and obtain one of the limited Tweetbot for Mac tokens. If they subsequently decided they were not interested in buying the application they could discard it without releasing the token they had used. It was a problem with only one resolution - withdraw the app and preserve the tokens.

Today Tapbots came back with a beta version and some new rules, which boil down to the gating logic sketched after the list:

  • new users can install the beta but they’ll not be able to add a Twitter account;
  • users of the alpha version can use the same Twitter account they used before;
  • users of the alpha version cannot add extra Twitter accounts;
  • users of the alpha who revoke their access token on the Twitter site cannot add their account again.
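
Read together, the rules amount to a simple gate on adding accounts. Here’s a minimal sketch of my reading of that logic in Ruby - the names and structure are mine, not Tapbots’ actual code:

# A sketch of my reading of the beta's gating rules - not Tapbots' code.
AlphaUser = Struct.new(:used_alpha, :alpha_accounts)

def can_add_account?(user, account, token_revoked = false)
  return false unless user.used_alpha                        # new users can't add accounts at all
  return false unless user.alpha_accounts.include?(account)  # no accounts beyond the alpha ones
  return false if token_revoked                              # a revoked token can't be re-added
  true
end

veteran = AlphaUser.new(true, ["@myaccount"])
can_add_account?(veteran, "@myaccount")        # => true
can_add_account?(veteran, "@otheraccount")     # => false
can_add_account?(veteran, "@myaccount", true)  # => false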

All of this makes perfect sense from the point of view of token preservation. Unfortunately, in this post-announcement landscape these once-common items are now a scarce resource. They’ve become the fossil fuels of the technology world, and like fossil fuels they’re limited and easily wasted. They used to be very disposable - a quick look at the Apps section on twitter.com lists 34 tokens for one account alone, and that’s after a prune earlier this week.

Efficient Token Usage

Like many people I have a number of Twitter accounts, and in each of my clients I like to have all of my accounts signed in. This was never a problem before, but now it just exacerbates the issues faced by app developers. They have a limited number of access tokens that can be used with their apps, and users like me with multiple accounts end up consuming more tokens than other people.

To make an app commercially viable, developers need to squeeze as much money per token as possible. One way to do this would be to increase the cost of their apps to cover the average number of accounts per user. Alternatively they could keep the initial purchase price the same while limiting the number of accounts that can be signed in through the app. Power users could then be offered the option to use in-app purchases to effectively buy extra access tokens.
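
As a rough back-of-the-envelope comparison - the prices and the average accounts-per-user figure here are my own assumptions, not real data - the two models work out something like this:

# Illustrative numbers only - a sketch of revenue per access token.
price        = 20.00 # up-front cost of the app
iap_price    = 5.00  # hypothetical in-app purchase per extra account
avg_accounts = 1.5   # assumed average number of Twitter accounts per user

# Model 1: flat price, unlimited accounts - every extra account
# consumes a token without earning anything more.
flat = price / avg_accounts

# Model 2: the purchase covers one account; each additional account
# (and therefore each additional token) is bought via IAP.
iap = (price + iap_price * (avg_accounts - 1)) / avg_accounts

puts format("Flat pricing: $%.2f per token", flat) # => $13.33
puts format("IAP pricing:  $%.2f per token", iap)  # => $15.00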

There has been a lot of discussion recently about the devaluing of software through bargain-basement prices, and this is really going to hit home for Twitter client developers now. Maybe the scarcity of access tokens will encourage users to place a higher value on the software and pay more for it. Remember, you’re not just paying for the software anymore - you’re also paying for the privilege of connecting to the service through your favourite client.

Full Screen Mode Is Not for You

During a recent episode of Hypercritical, host John Siracusa mentioned that he had received feedback from readers of his OS X Mountain Lion review commenting about the failings of full screen mode in Mountain Lion (and previously in Lion), particularly with respect to multiple monitor support.

I see where these comments have come from and have sympathy. When I first installed Lion I was eager to get stuck into this new world of full screen apps. Pressing the little button on the title bar unleashed a magical zoom animation. Some apps just stretched the existing interface, but others (such as iPhoto) changed their interface to better suit the full screen experience. It was very exciting for about 5 minutes, but it wasn’t long before I shared the frustrations aired by Siracusa and his readers:

No, I wanna use the other screens, I wanna put stuff there, I wanna see my other windows there, I wanna see my web browser there, my Twitter feed there - whatever the heck I have over there. And you can put palettes and stuff from the application that you made go full screen, you can put them on the other screens but you can’t use them for other applications. It was like, it was almost like it was being mean to you, it’s like “I know you want to see your Twitter client, but I’m putting linen over that whole screen - nah!” and that pissed people off.

I admit that I was initially that reader, and I found the lack of flexibility when using multiple displays exceptionally frustrating. The app running in full screen mode would take over the primary display, but the secondary display would contain only that boring linen background. Yes, it was possible to place extra palettes or inspectors on the secondary display, but there didn’t seem to be any way to truly take advantage of the additional real estate.

It sure would have been nice if users could have placed two independent apps side by side. Or if developers could have created UI layouts that took advantage of full screen mode with multiple displays, akin to the way full screen editing was available in many previous versions of Aperture. John, being the sensible guy that he is, tried to reason it out, and I think he almost nailed it[1] when he said that full screen is for people who need to concentrate.

One use case I think he glossed over, though, is that full screen is also for people working on portable machines. People who are working on one (possibly small) screen and need to squeeze every last pixel out of their displays. It’s not for the power user who wants to work with multiple apps at once (or with a multitude of windows/palettes/inspectors from a single app). That’s what regular OS X is for, and has been all along.

  1. I’m going to say he nailed it, because he came to pretty much the same conclusion that I did.

SublimeLinter - an Enabler for Lazy Programming Techniques

Recently I’ve been experimenting with text-editor-du-jour Sublime Text 2 in parallel with coming late to the Rails party. Sublime Text 2 has a great plugin ecosystem that seems to be growing by the day, and one plugin (or package, in ST2 parlance) that I find indispensable is SublimeLinter.

As a developer I’ve always written code in an iterative fashion - write a couple of lines, compile, edit those lines to fix the bugs, compile, write a few more lines, and so on. In my career this has occasionally been problematic, as I’ve sometimes had to work with unwieldy and inefficient make-based build systems that unnecessarily rebuilt large swathes of code instead of a single source file. In recent years I’ve been spoilt by some excellent IDEs (such as Eclipse/STS and Xcode) that have been kind enough to alert me to syntactical errors before I invest time in a build.

While playing with Rails I found that I was regularly coming across syntactical errors only when it came to testing out the application. It wasn’t the horrible build systems of yore, but it wasn’t exactly the convenience of a nice IDE either. So I hunted around for a linter that I could integrate with ST2 and quickly came across SublimeLinter. A couple of seconds later (thanks to the ridiculously useful Package Control) I had it up and running, and I was getting lots of notifications about errors in my Ruby code.

Unfortunately I soon realised that I was getting too many errors - errors even on perfectly valid code. The code ran fine in my Rails app, but still SublimeLinter was bitching and moaning about it. I had a look around the SublimeLinter documentation and noticed that it recommends using the full path to the ruby executable if you’re using a Ruby interpreter provided by rvm. On OS X this comes down to the fact that the default Ruby is 1.8.7, whereas most people will be writing code against Ruby 1.9.x, which they’ve usually installed themselves.
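
To illustrate the kind of false positive this causes (my own minimal example, not one from the SublimeLinter docs): the hash syntax introduced in Ruby 1.9 is a hard syntax error to the 1.8.7 interpreter, so a linter running against the system Ruby will flag perfectly good 1.9 code:

# Valid in Ruby 1.9, but a syntax error to the 1.8.7 that ships with OS X.
user = { name: "Alice", admin: true }

# The 1.8-compatible equivalent, which both interpreters accept.
user = { :name => "Alice", :admin => true }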

Thankfully SublimeLinter makes it easy to customise which Ruby interpreter to use, so if you’re having trouble with syntax errors in your Ruby code, just choose the menu item Sublime Text 2 -> Preferences -> Package Settings -> SublimeLinter -> Settings - User. This will open up the user preferences file for SublimeLinter, and you can enter the following snippet (substituting in your own username of course, and remembering to merge the JSON with any settings you may already have in there):

{
    "sublimelinter_executable_map": {
        "ruby": "/Users/username/.rvm/rubies/default/bin/ruby"
    }
}

May you continue to be a lazy, iterative developer like me.

Maplin 500Mb/s Powerline Adapters - a Rambling Review

I’m stuck between a rock and a hard place. I have a beautiful home of timber-frame construction where every internal wall is a stud wall. What this really means is that every wall is a candidate for being stuffed full of cables. My biggest home-improvement desire is to get every room wired with Gigabit Ethernet, but if possible I’d like to get every type of cable under the sun in there. If it’s possible to generate enough heat from those cables to remove the need for central heating then I’ll consider that a bonus.

The problem is that all those lovely wall cavities are hiding behind plasterboard that my wife does not want me to remove. I’m convinced that I could do it in a surgical manner - in and out with minimal disruption and a nice new shiny coat of paint in every room. She’s convinced that I’ll leave gaping holes running down the walls that never get patched up, and that the painting won’t get beyond the colour-picking stage. She’s probably right.

Enter stage left: 802.11n WiFi

With the right hardware 802.11n WiFi could very easily have been the answer to my (or my wife’s) prayers. The theoretical maximum is 600Mb/s - it’s not quite Gigabit Ethernet, but is still capable of delivering a large amount of data quickly. Theoretically. With the right hardware.

At the minute I have only one weak spot in the home network - the link between a Gigabit network upstairs and a Gigabit network downstairs in the office. I invested in the latest Apple AirPort Extreme base station, which I paired with a 2nd generation Time Capsule. The AirPort Extreme was configured to establish a 5GHz 802.11n network using the wide-band settings.

I had hoped to see a pretty fast link between the two base stations, but unfortunately the data rate reported by the Extreme was just 180Mb/s. Testing the network for “real-world” performance resulted in just 93.43Mb/s (using zPerf as a test tool). This isn’t a bad speed, but I do tend to fling a large amount of data around the network. It also gets extremely frustrating to be limited to Fast Ethernet speeds when I know there’s a Gigabit network up there just waiting to be saturated.
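
To put that figure in context, here’s a quick bit of arithmetic (the percentages are just my own framing of the numbers above):

measured    = 93.43   # Mb/s, as measured with zPerf
link_rate   = 180.0   # Mb/s, the rate reported by the AirPort Extreme
theoretical = 600.0   # Mb/s, the 802.11n theoretical maximum
gigabit     = 1000.0  # Mb/s, what the wired networks already do

puts format("%.0f%% of the negotiated link rate", 100 * measured / link_rate) # => 52%
puts format("%.0f%% of the 802.11n maximum", 100 * measured / theoretical)    # => 16%
puts format("%.0f%% of Gigabit Ethernet", 100 * measured / gigabit)           # => 9%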

Enter stage right: powerline networking adapters

I’d considered powerline adapters in the past, but had always balked at the relatively low speeds they offered. Having discovered just how poor the WiFi connection was, I decided that the newer 500Mb/s models might be worth a blast - even at 50% of the theoretical maximum I could still get over 2.5 times the performance of the WiFi link. When an offer for Maplin powerline network adapters came up they were hastily purchased. Once they arrived I got to testing, and the results were extremely disappointing.

I know that 500Mb/s is an ideal scenario and that many things will conspire to reduce this figure in the real world: old wiring, interference from other devices, and the use of extension cables, amongst other factors. Our home is fairly modern, with wiring that is only 8 years old. I tried to improve things by avoiding extension cables and plugging directly into the mains sockets at both ends, but to no avail. The best rate I could get was a paltry 53.21Mb/s. WiFi it is, then.

Not wanting to return the product without trying to find a use for it, I decided to try using the powerline adapters to connect my ADSL modem/router downstairs to the upstairs network. We only get 5.5-6Mb/s from our ADSL connection, so 50+Mb/s is plenty of bandwidth. Unfortunately, since doing so I’ve noticed regular breaks in the connection from the home network to the ADSL router.

The only new factor there is the powerline adapters, so they’ve had to come out and are currently on their way back to Maplin. Rather than post them, my kind wife agreed to drop them into the store as she was going to be passing it. She’ll probably be away for a few hours at least. Now to find my hammer, some Cat6, and the crimping tool.

Why App.net?

App.net is a hot topic of conversation at the minute in the intersection between the Twitter and Apple worlds that I like to inhabit[1]. Just over a month ago, Dalton Caldwell wrote a blog post entitled “What Twitter Could Have Been”. While this initial post could be viewed as a bit of a complaint or a moan, it was also something of a catalyst. He soon followed up with a post that may go down in Internet history: “Announcing an Audacious Proposal”.

In this post Dalton essentially proposed building a new Twitter. A Twitter that was funded directly by the users instead of by advertisers. He and his company established a Kickstarter-style campaign to raise $500k by 13th August in order to fund this new social network. As of the time of writing, there are just under two days remaining, and only $69,200 left to be raised.

I’ve decided to back this project because, like Dalton, I’m concerned about what I perceive as a “degradation” of Twitter. It’s not a degradation for many people - in fact Twitter is probably going from strength to strength in the eyes of many of its users. But I’m an atypical user. I’m not on Twitter to follow celebrities, to track mainstream news, or to keep up with my friends. I’m mainly on Twitter to receive tech news, to use innovative apps, and to communicate with other people in the tech industry - the industry that is both my livelihood and my passion.

For me, Twitter is degrading. For me, it is becoming less vital, and is in danger of becoming unusable. I may be over-reacting, but Twitter’s recent rumblings about third-party clients have not inspired confidence. I use Tweetbot for iOS (and now for Mac) fanatically. I don’t want to go back to Twitter’s first-party clients, partly because Twitter are not putting in the effort to maintain them, and partly because I don’t want to use apps that are really just conduits for the advertising model that will become essential to Twitter’s survival. Like many others, I would be happy to pay for access to Twitter, but we’re not being offered this opportunity.

So Why App.net?

I don’t know if App.net can work but I’m happy to put my money where my mouth is, and give it a chance to succeed.

I’m keen to join a social network at the beginning and help it grow. I joined Twitter in 2009 and while I’ve enjoyed my time on it, I sometimes feel like I missed the halcyon days when it was a closer, more tight-knit community.

I’m happy to pay to be part of a smaller community with a more focussed group of users. Part of the problem with Twitter is that there is no barrier to entry. Yes, this is me being a little bit elitist, but I feel that it is a valid point. Every day on Twitter I am subjected to random followers who have no interest in what I have to say. I hope that the App.net community will be more focussed and less prone to such behaviour. I’m also hoping that the App.net ecosystem will be subject to more moderation - where “spam followers” can be reported and ultimately warned and/or banned.

I’m hoping that the requirement to pay a fee will weed out a large number of the people who try to use Twitter as a conduit for spam. Twitter is like email, in that there is very little barrier to entry for spammers. It’s effectively free to send spam emails or tweets if you can cover your tracks. Twitter spam is a huge problem in my eyes - I’m subjected to daily @reply spam. I’m hoping that the requirement to pay for the service will reduce the number of spammers who target App.net, or at the very least make it easier to track and remove them.

I’m interested in a service that transcends Twitter and, because it is paid for by the users, will listen to those users in implementing the features that they need, or want. If done right, it will be directed by the needs of the people who use the service, not the people who wish to sell advertising on the service.

Huge Responsibilities

App.net is an ambitious venture. Just getting to $500k does not guarantee any level of success. And while the developer community, or the hacker community, or the whatever-we-are community can drive the success of many things, even we can’t make it succeed through goodwill alone. The threshold may be met, but unless we embrace the resulting service, building the apps, tools, and sites to make it succeed, it will be doomed to failure.

We’re also putting a lot of faith in the ability of Dalton Caldwell and his team. The fact that funding comes directly from the users means a lot, but this is still not some kind of democracy where every paying customer gets a vote. We’re not shareholders. We don’t belong to a board of directors. We are putting our hard-earned cash on the line, and trusting App.net to make the right decisions, to implement the necessary features, and to implement them on time.

So let’s all pull together. App.net, who will build the infrastructure. The developers, who will build applications to use the infrastructure. And finally the community, who will use the applications. We all have a responsibility for the success or failure of this venture.

  1. I’m sure it’s a hot topic in other areas too, but I can’t really say much about that.

Q. When Is Apple News Not Actually News?

A. When you report old “news” as though it just “finally” happened.

Mike Schramm at TUAW posted the following last night:

Apple has finished smoothing out its Xcode releases with version 4.4.1, which finally brings Xcode out as a standalone app.

For some bizarre reason, Mike seems to have completely missed the Xcode 4.3 release, which first introduced the standalone Xcode. (He missed the 4.3.x point releases as well.) Seems like sloppy “reporting” to me.

Unfortunately, as Apple has grown in stature, the number of websites covering Apple news has grown with it. Venerable Apple news sites like TUAW are competing with upstarts, and there seems to be a scramble to report the tiniest details as news.

The (Mountain) Lion and the Mouse

Mountain Lion has been running really well on almost all the machines I’ve installed it on, with just one exception - my (employer’s) 17” MacBook Pro. This is unfortunate, as it’s the machine that I spend the most time on - if it’s not running smoothly I feel the “pain” all day, every day.

Specifically, I was intermittently experiencing severe cursor lag when using the mouse (or trackpad). It wasn’t caused by any one application that I could isolate, but some apps did seem to trigger it more frequently. I couldn’t see anything in Activity Monitor to indicate what the problem could be - there were no CPU or I/O spikes. I resolved to just sit it out and wait for the inevitable 10.8.1.

Then I experienced a bout of constant lag. At first it seemed like a curse, but I realised that I had a recently booted system with few applications running - this is as close as I get to laboratory conditions. A quick glance at the process list showed that only one process was running a little high - Dropbox[1]. It was only consuming 17% of a single core, but that still seemed a little high. When I finally got the cursor under control I was able to quit Dropbox, and the lag ceased immediately. I’m guessing it might have had something to do with the way Dropbox monitors the filesystem for changes.

A quick inspection of the Account pane in the Dropbox preferences revealed that I was still running version 1.4.7, rather than the Mountain Lion-compatible 1.4.12. This wasn’t all sloppiness on my part - as usual Dropbox have said that the client should auto-update, but, also as usual, Dropbox’s auto-update facility is woefully behind the times.

All good fables need a moral, so here goes: “If your Dropbox client hasn’t updated itself, then you might want to do so manually.”

  1. If you’ve not yet used Dropbox and would like to sign up, here’s a referral link.

Php-mcrypt on Mountain Lion

I was having a few PHP problems following the Mountain Lion upgrade. Specifically, I couldn’t get any sites working that utilised mcrypt with PHP.

Executing php at the command line told me the following:

PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/local/Cellar/php53-mcrypt/5.3.13/mcrypt.so' - dlopen(/usr/local/Cellar/php53-mcrypt/5.3.13/mcrypt.so, 9): Library not loaded: /usr/lib/libltdl.7.dylib
  Referenced from: /usr/local/Cellar/php53-mcrypt/5.3.13/mcrypt.so
  Reason: image not found in Unknown on line 0

I decided to reinstall Jose Gonzalez’s convenient php53-mcrypt formula. Post-install, PHP was able to start up without error, and a quick otool -L on mcrypt.so revealed that it no longer had a dependency on libltdl.7.dylib.