The Wearable

One of my favorite quotes from Jony Ive is this one, from a 2012 interview with the London Evening Standard:

Q: How do you know consumers will want your products?

Ive: We don’t do focus groups - that is the job of the designer. It’s unfair to ask people who don’t have a sense of the opportunities of tomorrow from the context of today to design.

Ive expands upon this idea further during a 2013 interview with Charlie Rose (emphasis added):

I don’t think it’s the user or consumer’s job to imagine what the future could be, because what the future can be is so often afforded by technology. Something new can be new, because there’s some new technology or some new process. So unless you’re aware of what those processes and technologies are, how can you possibly know what’s possible?

Think back to January 8th, 2007. What were you expecting Apple to announce at Macworld? I think John Gruber’s prediction post sums it up nicely:

  • IPOD MOBILE PHONE — Even just a few days ago, I did not expect to see Apple announce a phone this week. But over the weekend I flip-flopped, and I now think it’s more likely than not. Not a VOIP phone that depends on Wi-Fi or anything like that, but an honest-to-god mobile phone. It seems like there has to be some sort of “Wow, I thought maybe Apple would announce a phone but I didn’t think they’d do it like this!” factor, but damned if anyone knows what it is. My wild unlikely-but-wouldn’t-it-be-cool-as-shit guess: that it’s not an iPod phone, but rather the introduction of a new mobile device OS.

It’s safe to say that we were all expecting, as Gruber calls it, an “iPod phone”1. Just watch the keynote, and pay attention to the audience reaction when Steve Jobs introduces the iPhone. Both the “iPod” and “phone” slides receive uproarious applause, while “internet communicator” gets a relatively tepid response. Ironically, it was the “internet communicator” (Multi-touch UI + ubiquitous networking + App Store) component that went on to define the modern computing era and leave a lasting impact on society. It was something we didn’t expect, because we didn’t realize it was possible.

Turns out, the “wild unlikely-but-wouldn’t-it-be-cool-as-shit guess” panned out.

It’s for this reason that I’ve been hesitant to speculate on Apple’s wearable plans. Much of the public thinking this year has positioned it as a “fitness tracking, heart rate sensing, notification delivering smartwatch and activity tracker”, but I feel that’s as myopic as expecting an “iPod phone”, before the iPhone. I don’t know exactly what Apple has up its sleeve and I have little knowledge as to what they could have developed, particularly as it pertains to breakthroughs in hardware engineering2.

Fortunately, more details have surfaced over the last few weeks and I think I’m better equipped to talk about this. Suffice it to say, I think there’s more going on than just health tracking and notifications.

Over the last few weeks, the rumor mill has thrown out a few more use cases for the wearable device, in addition to health tracking. By my estimation, they are:

  • HomeKit Integration
  • Payments
  • Continuity (Handoff seems very interesting here: the wearable could store and transport your state as you physically move between devices)

You’re probably thinking “so what, a smartphone can do all of these things”. And you’re right, a smartphone can do all of these things, but that doesn’t mean it’s the best way to do all of these things, in the same way that a smartphone isn’t the best device to write a report or watch a video on.

In the introduction video for the iPhone 5, Jony Ive says:

When you think about your iPhone, it’s probably the object that you use most in your life. It’s the product that you have with you all the time.

He, of course, is correct. The smartphone is easily the most personal device we own today. It is our life in our pockets. It is our connection to the rest of the world. We take them with us and use them wherever we can.

Yet there is still a gap; there is still some friction. There are moments when we are without our phones. We can lose them. They can be stolen. On the less sinister side, we give them to other people to look at something, or to hold for us.

There is simply no guarantee that the person using a smartphone is the person to whom it belongs. That’s problematic when you look at use cases which benefit from stronger ties between a user’s identity and presence in the physical world. Use cases like health tracking, home automation, payments, and transitioning between devices. These use cases benefit from something better, something more omnipresent to the user than a smartphone.

So what is more personal than a smartphone? What is more omnipresent than your life in your pocket that goes with you almost everywhere? What device would literally go with you everywhere until you took it off? What device would be better for these use cases?

It’s a wearable device. A wearable device has the potential to be better at those key tasks than a smartphone because it’s literally with you all the time. It’s way more omnipresent than a smartphone. And I don’t think it’s a coincidence that all of these use cases have been rumored for the iWatch3.

As we approach tomorrow’s event, I can’t help but think about the iPad introduction, the last time Apple entered a new product category. Specifically, the part of the keynote where Steve Jobs makes the core case for the iPad:

The bar is pretty high. In order to really create a new category of devices, those devices are going to have to be far better at doing some key tasks. They’re going to have to be far better at doing some really important things. Better than the laptop, better than the smartphone.

Jobs enumerates those tasks and then says:

If there’s going to be a third category device, it’s going to have to be better at these kinds of tasks than a laptop or a smartphone. Otherwise, it has no reason for being.

Those last two sentences, to me, are the table stakes for the success of a new product category. If a product is not clearly, obviously better at some key tasks, then it has no reason to exist4. And that’s the sense I get from most of the existing wearables on the market today. These products5 don’t really consider or use the omnipresence of the form factor to do things better than a smartphone. As a result, the general public is voting “no” with its wallet and approaching the entire product category with a lot of skepticism. In other words, wearables have become a product category that no one thinks they want or need.

Call it wishful thinking, but I don’t think that will be the case much longer.

  1. By which I mean an iPod, with a click wheel, that could make phone calls. 

  2. Apple’s innovation on the hardware front (batteries, radios, sensors, input devices, etc.) has always seemed undervalued to me. The multi-touch UI on the iPhone is as much a triumph of hardware as it is software, and that took years of development to get right. 

  3. So why all the emphasis on health tracking? I’d argue it’s the most obvious use case where a wearable beats a smartphone hands down, no questions asked. 

  4. I could go on and say “Now, some people have thought that’s a smart watch. The problem is smartwatches aren’t better at anything. They’re slow, they have tiny displays, and they have clunky smartphone software and user interfaces, so they’re not better than a smartphone at anything, they’re just smaller. They’re just smaller, less capable smartphones.” 

  5. For the record, I think fitness bands do deliver on this front. A FitBit/Jawbone is a better fitness tracker than an iPhone 5s because it’s always with you, while an iPhone isn’t. 

Apple + IBM

Apple PR:

CUPERTINO, California and ARMONK, New York—July 15, 2014—Apple® and IBM (NYSE: IBM) today announced an exclusive partnership that teams the market-leading strengths of each company to transform enterprise mobility through a new class of business apps—bringing IBM’s big data and analytics capabilities to iPhone® and iPad®.

The landmark partnership aims to redefine the way work will get done, address key industry mobility challenges and spark true mobile-led business change—grounded in four core capabilities:

  • a new class of more than 100 industry-specific enterprise solutions including native apps, developed exclusively from the ground up, for iPhone and iPad;
  • unique IBM cloud services optimized for iOS, including device management, security, analytics and mobile integration;
  • new AppleCare® service and support offering tailored to the needs of the enterprise; and
  • new packaged offerings from IBM for device activation, supply and management.

This is a huge deal. Some interesting quotes from Tim Cook1, from Arik Hesseldahl’s report at Re/code:

“If you were building a puzzle they would fit nicely together with no overlap,” Cook said of the relationship. “We do not compete on anything. And when you do that you end up with something better than either of you could produce yourself.”


Apple has never made much noise about its enterprise sales, and has famously shied away from having a dedicated enterprise sales force. In teaming up with IBM, Cook said, Apple is getting the best of both worlds. Were Apple to fully embrace its potential opportunity in the enterprise it might have to build a new division to the company. In teaming up with IBM it won’t have to go that far.

“We’re good at building a simple experience and in building devices,” Cook said. “The kind of deep industry expertise you would need to really transform the enterprise isn’t in our DNA. But it is in IBM’s.”

When I look at this move, in context with the announcements at WWDC this year, I think of the closing of John Gruber’s post “Only Apple”:

New Apple didn’t need a reset. New Apple needed to grow up. To stop behaving like an insular underdog on the margins and start acting like the industry leader and cultural force it so clearly has become.

Apple has never been more successful, powerful, or influential than it is today. They’ve thus never been in a better position to succumb to their worst instincts and act imperiously and capriciously.

Instead, they’ve begun to act more magnanimously. They’ve given third-party developers more of what we have been asking for than ever before, including things we never thought they’d do. Panic’s Cabel Sasser tweeted:

My 2¢: for the past few years it’s felt like Apple’s only goal was to put us in our place. Now it feels like they might want to be friends.

Apple in a position of strength, not weakness. I’m impressed not just by what Apple can do, but by what it wants to do.

I also think of Apple’s philosophy of “a thousand no’s for every yes” and how that seems to have subtly changed.

During the Steve Jobs era (particularly with regards to iOS), “no” seemed to mean “we can’t do this alone, we aren’t going to do this, no one else is going to be able to do this on our platforms”. In the Tim Cook era, “no” seems to mean “we can’t do this alone, we aren’t going to do this alone, maybe someone can help us do this”. And that takes maturity and humility.

The times they are a-changin’, indeed.

  1. It’s worth noting that Tim Cook spent 12 years of his career at IBM. 

Amazon Turns to Authors in Hachette Dispute

David Streitfeld, reporting for the New York Times:

The confrontation between Amazon and Hachette is growing louder and meaner, as the combatants drop all pretense that this is a reasonable dispute among reasonable people.

Amazon has proposed giving Hachette’s authors all the revenue from their e-book sales on Amazon as the parties continue to negotiate a new contract. Hachette’s response on Tuesday was to suggest that the retailer was trying to make it commit suicide.


With its newest proposal, Amazon is trying to break the impasse by getting Hachette’s writers to switch allegiances.

That would take the heat off Amazon, which has never suffered as much sustained criticism as it is getting now. Which might also be the reason Hachette summarily rejected it.

The offer came in a letter to a few writers and agents from David Naggar, an Amazon executive who works with publishers and independent authors. It proposes “a big windfall for authors” by taking them “out of the middle” of the conflict. On Tuesday, Amazon sent the proposal to Hachette itself.

This is a very bold move by Amazon, in that it paints Hachette as an intermediary that doesn’t add much value for authors. Streitfeld’s report continues:

Amazon wanted to cut the publisher off entirely from “its own revenue from e-books sold by Amazon, which would be a suicidal action,” Hachette said. “Once again, Amazon acknowledges that their unilateral actions, in trying to extract much higher terms from Hachette, are harming authors.”

Nonsense, Amazon responded.

“Hachette is part of a $10 billion global conglomerate,” said Amazon, which will achieve $100 billion in revenue this year. “It wouldn’t be ‘suicide.’ They can afford it.”

What’s interesting about all of this is how it more or less falls in line with a post Ben Thompson wrote back in May, called “Publishers’ Deal With The Devil”1:

The publishers need Amazon because they need the Kindle’s DRM, because they know without that artificial friction their contribution to a book’s fixed costs would become untenable. As George Packer recounted in his anti-Amazon article Cheap Words:

Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.

I’ve worked with publishers, and here’s the thing: Amazon is right. It’s not that publishers don’t add value, but rather that their economics are wholly incompatible with the reality of the Internet. If publishers are to have a future free of Amazon, that future will be as a service with upside directly tied to a book’s success.


The problem with publishers is that, due to their own incompetence and (understandable) unwillingness to change, they gave the keys to the castle to Amazon, and it’s no surprise they are now paying the price; the devil always has its due.

Given this, I find it quite ironic that Hachette claims Amazon is “trying to make it commit suicide”. This whole ordeal seems to be something Hachette inadvertently brought on itself2.

  1. I swear I almost titled this post “Does Jeff Bezos read Stratechery?” 

  2. That’s not to say I agree with what Amazon’s doing. 

Console Gaming is Obsolete

Ben Thompson, writing at Stratechery:

Over the last two generations of consoles, however, prices have actually risen, and today a Playstation 4 or Xbox One is nearly the same price as an average PC.

In some respects, this makes no sense: why hasn’t Moore’s law had the same impact on consoles as it has had on PCs? Moreover, when you consider that consoles now compete with a whole host of new time-wasters like phones, tablets, social networks, dramatically expanded TV offerings, the Internet, etc., it’s downright bizarre.

I think the answer lies in a specific part of disruption theory. Specifically, incumbents are driven by their best customers to add more and more features that drive up the price, causing the incumbents’ product to move further and further away from the average customer’s needs (needs which have actually been decreasing as more entertainment options become available):

Thompson then goes on to pitch an Apple TV product that could disrupt the existing console model followed by Sony and Microsoft:

The net result is that traditional consoles are about as far removed from average consumers as they could be. There is clearly a core gamer market, and Sony and Microsoft are fighting ferociously for it, but no one is growing the pie. I think there is an opening.

Imagine a new TV product, with two models:

  • $99 with a full set of entertainment options, but no gaming
  • $179 with a full set of entertainment options, plus gaming

This TV product would be on an annual release cycle; average consumers would only upgrade every few years (the core OS and most games would support 3 generations), while more serious gamers would upgrade every year providing a nice bit of recurring revenue (this would be much more feasible today, as developers have long since developed the expertise to make games available across multiple architectures). Video games would be delivered not as packaged goods, but rather through an app store. Prices would likely be significantly lower than traditional consoles, but the aforementioned serious gamers would support a higher-price tier for AAA titles and ambitious indies. This console would also integrate seamlessly with the devices carried by many of its potential customers: video and photos could easily be transferred wirelessly, and you could even share screens or use the TV for video calling.


I’ve gone back and forth on the Apple TV as a console; there is certainly a strategic incentive to own the TV, and the way to do that is by doing the jobs TV does. Still, though, the timing needs to be right, and now the tech is there, the APIs are there, and more importantly, I believe the market is there:

Meanwhile, Sony and Microsoft will be stuck with increasingly old consoles that are too expensive and, sooner rather than later, less capable than the continually upgraded Apple TV. At that point they will lose the high end gamers as well, and the textbook disruption will be complete.

When I think of disruption in the console gaming space, I think of the Nintendo Wii. From Horace Dediu’s 2012 post on Asymco, “The parable of Nintendo”:

When the Wii launched in late 2006 Nintendo had been facing the simultaneous attack from the “seventh generation” Xbox 360 which launched a year earlier as well as the PlayStation 3, both of which set as their bases of competition 3D graphics at HD resolutions. Many wrote off the company and called the console market a two horse race.

Then, in what seemed a desperate downward leap, the Wii was launched into a different trajectory. It addressed non-consumers with a new, more intuitive controller and standard resolution rather than competing for hardcore gamers with more power and richer graphics.

Because the Wii was asymmetric and addressed non-consumption rather than trying to be a “better” console in what was becoming a horsepower race, Nintendo expanded the console market. Its innovation was interesting enough that many hard core gamers used the console in addition to a PS3 or Xbox and many non-gamers bought it as their first console.

The result is that 95 million Wii’s have been sold vs. 66 million Xbox 360′s and 62 million PS3′s.

This sounds a lot like what Thompson believes Apple TV could do to address low-end customer needs.

What’s notable is how quickly Nintendo’s fortunes changed for the worse as mobile devices began to take off. Dediu continues:

The graph above shows that the growth from Wii has stopped. In fact, the console has been in decline since early 2009.


The data paints a bleak picture for Nintendo. The Wii was a success due to its competitiveness vs. non-consumption of gaming. But another low end disruption took hold in 2007. Good enough games on phones and mobile computers are taking consumption away from dedicated consoles, both fixed and mobile.

In addition, the rate at which game experiences are improving on iOS imply that they will overtake the “quality” of consoles in the not too distant future. Differences in distribution models also guarantee lower pricing for software for end users and enabling a far larger catalog of titles.

Did the Wii get disrupted by mobile devices? Is Apple TV going to disrupt Sony and Microsoft the way the Wii did a generation ago? I think there is more at play here.


In a Stratechery post titled “Obsoletive”, Thompson says:

Disruption is low-end; a disruptive product is worse than the incumbent technology on the vectors that the incumbent’s customers care about. But, it’s cheaper, and better on other vectors that different customers care about. And, eventually, as the new technology improves, it takes the incumbent’s market.

This is not what happened in cell phones.

The problem for Nokia and BlackBerry was that their specialties – calling, messaging, and email – were simply apps: one function on a general-purpose computer. A dedicated device that only did calls, or messages, or email, was simply obsolete.

Even a cursory examination of tech history makes it clear that “obsoletion” – where a cheaper, single-purpose product is replaced by a more expensive, general purpose product – is just as common as “disruption” – even more so, in fact. Just a few examples (think about it – you’ll come up with a bunch more):

  • The typewriter and word processor were obsoleted by the PC
  • Typesetting was obsoleted by the Mac and desktop publishing
  • The newspaper was obsoleted by the Internet
  • The CD player was obsoleted by the iPod
  • The iPod was obsoleted by the iPhone

So how does the idea of “obsoletion” apply here? The obvious answer is that smartphones and tablets are obsoleting consoles. Games no longer require dedicated hardware: they’re just another app on the home screen. But this is only part of the larger story.

What are PlayStation, Xbox and Nintendo? To the consumer, they are simply products that let them play games. In actuality, they are platforms: integrated ecosystems comprising hardware, software, and services specifically for the creation and consumption of video games. This has been the model for console gaming since the 1980s.

iOS and Android are general purpose computing ecosystems which do everything from games to productivity. Games are just another app on the home screen, one component of a larger experience that transcends devices and form-factors. What Apple TV does is extend the iOS ecosystem to the TV. Today, that’s largely limited to non-interactive entertainment content. It’s not a stretch to imagine that video games are next, especially when you consider what was announced last week at WWDC.

To me, it’s not that the Wii was merely disrupted by iOS. Instead, it’s that the discrete gaming ecosystem (Nintendo, Xbox, PlayStation) is being obsoleted by the general purpose computing ecosystem (iOS, Android). It started in 2008 with the launch of the App Store and it shows no signs of slowing down, thanks to smartphones and tablets driving widespread adoption of iOS and Android. It’s not a matter of “if”, but a matter of “when” it hits the living room.

Unfortunately for Nintendo, Microsoft and Sony, “when” appears to be soon.