Facebook CEO Mark Zuckerberg delivered some dramatic news on his
Q3 earnings call with analysts yesterday afternoon: Facebook has a new artificial intelligence unit and, separately, the company is working on a new speech recognition product.
(That news was largely overshadowed by the revelation that for the first time the company had seen a slight decline in usage by U.S. teens.)
It’s not clear how the AI and speech projects are linked. But the traditional “Turing test” for artificial intelligence is how well a machine holds up its end of a conversation, so it would not be unexpected for the two projects to be developed in tandem. Also, Zuckerberg talked about them one after the other on the call.
In almost the same breath, Zuckerberg talked about Facebook’s search developments, “Post Search” and “Graph Search.” He noted that Facebook now has an index of 1.2 trillion Facebook posts, and they are all searchable. He implied that, somehow, the AI product would be driven by the post index:
In the last quarter, we started testing what we call Post Search, which allows you to search all the unstructured text and posts that people have ever made on Facebook. That’s about 1.2 trillion posts. The folks on the team who have worked on web search engines in the past tell me that the Graph Search corpus is bigger than any other web search index out there. It’s still early for Graph Search, because it’s still in beta, only in English and we haven’t launched our mobile version yet, but it’s something I am really excited about.
So that’s the context. A little later, Zuckerberg discussed the launch of the AI project:
In September, we formed the Facebook AI Group to do world-class artificial intelligence research using all the knowledge that people have shared on Facebook. The goal here is to use new approaches in AI to help make sense of all the content that people share so we can generate new insights about the world to answer people’s questions.
This description of Facebook AI being based on “all the content people share” seems like a reference to the 1.2 trillion posts he mentioned a bit earlier. Next, he made a statement that seems to sew these things together:
We started assembling a team of some of the best people in the field to work on these problems. We also announced the acquisition of Mobile Technologies, a speech recognition and machine translation company that will help expand our work in the field beyond just photo recognition to voice. Over time, I think it is going to be possible to build services that are much more natural to interact with and can help solve many more problems than any existing technology today. I am excited that we are working on this problem and I am looking forward to doing a lot more here.
This idea that services should be “natural” to interact with dovetails neatly with AI and speech recognition. What could be more natural than asking Facebook a question and having it deliver the answer in a seamless conversation?
It’s notable that Zuckerberg mentioned this development after he talked about the 1.2 trillion-post searchable index Facebook has created: As we reported on Oct. 6, Facebook, Google and Apple have all shown signs that they believe the future of search will be about conquering “latent,” abstract or conversational queries, not typed keywords. That future is being driven by the increasing number of devices that are voice activated, like Siri, Google’s voice assistant … and now an as-yet unnamed speech recognition product deep within Facebook.
Disclosure: The author owns Facebook stock.