At The New York Times Company’s R&D Lab, the group’s collective of technologists, artists, and journalists talks a lot about “information shadows” — the auras of data that surround us in our daily lives. Tracking and processing the info trails we leave, the thinking goes, allows for deeper insights into ourselves — and it can also help media organisations to provide their users with news consumption experiences as intimate as they are relevant. We tend to emphasise the “self” aspects of “the quantified self”; the R&D Lab is exploring what it means to be a part of a quantified community — and, for the Times, what it means to be a provider of information to that community.
I recently had the chance to visit the R&D Lab, which is housed on the 28th floor of the Times building in New York. In the video above, Matt Boggie, The Times Co.’s Media & Technology Strategist for R&D, demonstrates the Times’ screen-top version of a kitchen table. It’s based on Microsoft’s Surface technology, modified by the R&D Lab to create a Times-oriented user experience that reimagines the old “around the breakfast table” reading of the paper. You’ll notice that, in the demo, news is both highly personal and highly social — and that the line between “consumer” and “news consumer” is a thin one. Ads look pretty much the way we’re used to them looking, but they’re also integrated into the tabletop flow of information.
And news itself, in the same way, collapses into the broader universe of information. We’re used to thinking of “the news” as its own category, as something to be consumed primarily during commutes or post-work relaxation in the evening. But news is becoming more pervasive (there’s evidence that many people, at the moment, consume the bulk of their news during the day, integrated into their work), and the R&D platforms reflect its ubiquity. The prototypes on display at the R&D Lab consider how news can be used, in particular, in the home, woven into the intimate contexts of the morning coffee, the family dinner, the daily getting-ready routine. They explore what it means to brush your teeth with the Times.
It’s been two years since the last time the (Nieman Journalism) Lab visited the R&D Lab. In a series of videos back then, Josh and Zach offered glimpses into the group’s experiments with the next version of the newspaper, the intra-platform shuffling of news content, the living room of the future, lifestream integration for the news, and advertising innovations.
Since those videos were shot, the R&D team has been experimenting with more new platforms that project — and guess at, and hint at — how news will be used in the next several years. (We wrote about one of those platforms — a Twitter visualisation system that can be used to track stories’ social lives on the web — in April.) The future isn’t static, Michael Zimbalist, the Times Company’s vice president of R&D, told me — so the Lab can’t be, either.
We’ll post the other video we recorded during our latest Lab visit — of a “magic mirror” that integrates information consumption into the daily routine of getting ready in the morning — tomorrow. Meanwhile, should you prefer to read about the table rather than watch it, here’s the transcript.
So the first thing you’ll notice here is that we have changed the way that the layout works. We’ve gotten rid of that sort of broadsheet design of columns and headlines in favour of a more tactile experience. Working at a table, you expect to be able to manipulate physical objects. So we’ve represented each section as a stack of the photographs from the articles that live therein.
If you’re looking for a section and you can’t find it here, you can scroll each of these — each is a scrollable column — and the idea here is that you can share this space, as well. You might be sitting across from someone and sharing the paper with them. So you can turn these columns so that they’re facing the opposite direction.
What I can then do is open up any of these sections and pull up a carousel of articles. As you can see, we’ve left space for advertising, again, to work with our partners and continue to make this a viable business. And then what we can do, when we open them up, is page through them just like any other reader application. You can just swipe to the next page. And if you’re here by yourself, you can unfold the paper as you would with a regular paper and take up a little more room here at the table.
Here in the reader view, though, the photography tends to take a bit of a backseat as compared to the navigation. So what we’ve been able to do is make the photos tappable: for any article, the photos sort of spring out of the template. And now we can take them, move them around, scale them up, and show them to our partner across the table.
And then once we’re done with the article, the photos themselves can continue to live on in this space, making the table a little messy and a little more playful.
In addition to that, we wanted to make sure we kept those social features of being able to share an article and send it to a friend. So what I can do there is, again, open up a carousel, pick something, I can leave a note on any of these different areas … so we can leave little notes on any of these articles. Here, just sort of a quick “did you see this?” typed hastily, with typos. And then what I can do is, I can share that. I can share that with people who work here at the Lab, or at Facebook, or on Twitter.
But then that raises the question: How can I announce my presence to the table? What’s my feature for logging in? It didn’t seem right to walk up and type in a login, or have it scan your hand, necessarily. But, you know, typically, when you get home at the end of the day, you throw your keys, your phone, your bag onto the kitchen table. That gives us an opportunity to recognise that I’m here. I put my phone down, I get these little red radials coming out of it. And now I’m presented with a list of those articles that have been shared with me. I can tap that last one that’s been left, and then it comes right back. I can take a look at that and see what’s been left for me. And that could have naturally spawned some sort of an alert on my phone, or on my laptop at work, in a couple of different ways.
And certainly we can have the table react to other objects, as well — you know, it is a table, first and foremost. So you might be eating your breakfast or having a cup of coffee. That gives us an opportunity to be a little playful with the ad experience, as well.
So then the next thing that we do here with this table surface is to talk a little bit about the way we think technology, particularly consumer technology, will be changing the experience of consuming news and creating news. So what we’ve done here is to use this application as an opportunity to learn a bit more about different devices. I can take a device — for example, this is a 3-D camera from Panasonic — and rather than describing all of its features myself, I can use the table as a tool for this portion of the presentation, providing a richer view into the device. I get a price tag that shows what kind of device this is, what its model number is, a range of prices that we were able to find on the Internet, as well as a range of reviews that we found.
And then we can attach content to it, as well. For example, this is an article that was written back in January that compared the Panasonic camera here to another Sony camera that was similar and came out at the same time. And it shows us, in a couple of different ways, how New York Times content will be finding its way into experiences that we don’t necessarily own or control. And that’s by design.
We’ll be doing a lot of work in tagging our articles with different locations, or people, or concepts, and opening that up to APIs and developers where they can build them into their own experiences. So this is one example of where that might happen.
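To make the tagging idea concrete, here is a minimal sketch — not the Times’ actual system, and with invented article data — of how content tagged with locations, people, and concepts could be indexed and queried, roughly the way a developer consuming such an API might match articles to a recognised object:

```python
from collections import defaultdict

# Toy articles with tag sets, standing in for the kind of metadata
# (locations, people, concepts) described above. All data is invented.
ARTICLES = [
    {"title": "3-D Camera Face-Off", "tags": {"Panasonic", "Sony", "3-D video"}},
    {"title": "Tablets at the Breakfast Table", "tags": {"tablets", "news consumption"}},
    {"title": "Tracking the Quantified Self", "tags": {"sensors", "3-D video"}},
]

def build_tag_index(articles):
    """Map each tag to the titles of the articles that carry it."""
    index = defaultdict(list)
    for article in articles:
        for tag in article["tags"]:
            index[tag].append(article["title"])
    return index

def articles_for(index, tag):
    """Return titles tagged with `tag` — e.g. to attach stories to a device on the table."""
    return index.get(tag, [])

index = build_tag_index(ARTICLES)
print(articles_for(index, "Panasonic"))  # prints ['3-D Camera Face-Off']
```

Placing a tagged camera on the table, in this sketch, amounts to one index lookup; a real service would resolve the object to its tags first and rank the results.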
And then the last thing I’ll show you, here, is, to the question of, “Great, we’ve got all these devices in here; how do we as people begin to interact, as well?” — well, we’re instrumenting ourselves increasingly. For example, this is the docking station for a Fitbit. (I’ll get that out of your way.) I’ve been wearing one of these for a while here in the Lab — it’s just a simple electronic pedometer. It’ll track how many steps you’ve taken, and it’ll use this dock to sync it up or to charge it and bring that information into a service that shows you how much activity you’ve had over the course of the last few days, broken down into 15-minute intervals.
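The 15-minute activity breakdown can be illustrated with a small sketch; the timestamps and step counts below are invented, and this is not Fitbit’s actual data format, just the generic bucketing idea:

```python
from collections import Counter
from datetime import datetime

def bucket_steps(events, interval_minutes=15):
    """Sum step counts into fixed-width time intervals, like a pedometer's
    activity chart. `events` is a list of (ISO 8601 timestamp, steps) pairs."""
    totals = Counter()
    for stamp, steps in events:
        t = datetime.fromisoformat(stamp)
        # Snap the timestamp down to the start of its interval.
        slot = t.replace(minute=(t.minute // interval_minutes) * interval_minutes,
                         second=0, microsecond=0)
        totals[slot.isoformat()] += steps
    return dict(totals)

events = [("2011-06-27T08:03:00", 120),
          ("2011-06-27T08:11:00", 80),
          ("2011-06-27T08:20:00", 200)]
print(bucket_steps(events))
# prints {'2011-06-27T08:00:00': 200, '2011-06-27T08:15:00': 200}
```

The same aggregation works for any sensor stream; only the interval width and the event source change.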
The trouble, though, is that sitting here at the table may not be where you want to be presented with that information. But there are some places within the house where that kind of data and context makes a lot more sense. So, for example, getting ready in the morning. You might be weighing yourself, checking out your figure, seeing if your clothes fit really well. Presenting you with this kind of information there might provide you with a sort of behavioural cue.
So what we wanted to do is build that experience. And unlike the table here, where we were able to use a commercial product, we actually had to build that ourselves. So we built a “magic mirror,” which we’ll show you next.