Professor Yann LeCun is a force to be reckoned with in the world of artificial intelligence.
With approximately 20 years in the field, the director of Facebook’s AI lab has authored over 180 academic papers.
One of his most notable pieces of work came in 1988, when he helped develop a piece of AI software known as a “convolutional neural network” that could recognise handwritten numbers. Trained on a data set of thousands of characters, this revolutionary system detected visual patterns directly from the pixels. Essentially, it let computers open their eyes and use data to learn on their own.
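The core operation behind that pattern detection can be sketched in a few lines. This is a toy illustration only — the filter values below are made up for demonstration and are not LeCun's actual learned weights — but it shows how sliding a small filter over raw pixels produces a strong response wherever a local pattern (here, a vertical stroke) appears:

```python
# Toy sketch of the core operation in a convolutional network:
# sliding a small filter over raw pixels to detect a local pattern.
# The filter values are illustrative, not learned weights.

def convolve2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most deep learning libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 5x5 "image" with a vertical stroke of ink (1s) down the middle,
# like a crude handwritten digit "1".
image = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# A 3x3 vertical-edge filter: it responds strongly where a bright
# column of pixels sits between two dark columns.
kernel = [
    [-1, 2, -1],
    [-1, 2, -1],
    [-1, 2, -1],
]

feature_map = convolve2d(image, kernel)
for row in feature_map:
    print(row)  # the centre column lights up where the stroke is
```

In a real convolutional network the filter values are learned from thousands of training examples rather than hand-written, and many such filters are stacked in layers — but the sliding-window idea is the same.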
The technology has since evolved to help improve everything from speech recognition to defeating the world’s top player at the game of Go. Importantly for Facebook, it’s also been used to ensure your social media feed only shows you things you care about.
We caught up with the Parisian in London last week where he was presented with the Lifetime Achievement Award by The Lovie Awards.
The interview has been lightly edited for the sake of clarity.
Tell me about your department’s role at Facebook.
The role of FAIR [Facebook Artificial Intelligence Research] — we call ourselves the “FAIRies” — is to advance the science and the technology of AI and do experiments that demonstrate that technology for new applications like computer vision, dialogue systems, virtual assistants, speech recognition, natural language understanding, translation, things like that.
There’s a lot of basic science behind it which is not particularly geared towards an application, it’s more about making progress and understanding intelligence and AI.
Then we work very closely with another group which is about twice our size called applied machine learning [AML]. They turn the science into visible technology and build platforms for the company that product groups can use to deploy AI-based services in the company. So there’s a lot of people working on AI in the company, it’s not just the FAIRies.
Can you give me an idea on how many people are working in your lab?
Yes. There’s about 75 people at FAIR. As I said, about twice as many at AML. And then a lot of people within the company using that [AI] technology.
Is FAIR purely in California or is it spread around the world?
No, it’s spread between New York, where I’m based, Menlo Park [where Facebook is headquartered], and Paris. There’s a small group also in Seattle.
Which part of Facebook do you think can be improved the most with AI?
One of the big questions, or problems, that Facebook needs to solve every microsecond is [how to] pick the best content to show to people. For that you have to understand content and you have to understand people and match the interests of people with the right content. So that’s one very important aspect. That’s what decides what you see in your News Feed every time you connect to Facebook.
Long term, beyond that, there is a need to build really intelligent machines with which you can hold a dialogue [so] they can answer any questions [and] help you in your daily lives. And that’s a very, very challenging area of AI at the moment. Dialogue systems, natural language understanding. One of the big scientific problems that underlies all of this is the problem of getting machines to learn common sense. We don’t really know how to do it. We have lots of ideas.
Say, for example, I’m a Facebook user and I’m fed up with seeing people’s babies crop up in my News Feed — is Facebook looking at allowing people to use natural language to tell Facebook that?
That would be one way but it’s actually a fairly inconvenient way for people to tell Facebook about this. So once we have those dialogue systems, you can tell the dialogue system: “Please don’t show me baby pictures anymore.” But right now the way it’s done is Facebook knows that you’re not looking at those pictures. Or you are looking at them very quickly or clicking on them and not responding and things like this. So there is some sense of what your interests are based on how you interact with content.
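The mechanism LeCun describes — inferring interest from how quickly you skip or engage with content — can be illustrated with a toy scoring loop. This is purely an illustrative sketch and reflects nothing about Facebook's actual News Feed ranking; the thresholds and topic labels are invented for the example:

```python
# Illustrative-only sketch of inferring interest from interaction signals:
# content a user skips quickly gets demoted, content they linger on or
# click gets boosted. Not Facebook's actual ranking logic.

from collections import defaultdict

def update_interest(scores, topic, dwell_seconds, clicked):
    """Nudge a per-topic score up for engagement, down for quick skips."""
    if clicked or dwell_seconds >= 5:
        scores[topic] += 1.0   # engaged: boost the topic
    elif dwell_seconds < 1:
        scores[topic] -= 1.0   # skipped almost instantly: demote it
    return scores

scores = defaultdict(float)

# Simulated session: the user lingers on tech posts, skips baby photos.
events = [
    ("baby_photos", 0.5, False),
    ("tech_news", 8.0, True),
    ("baby_photos", 0.3, False),
    ("tech_news", 6.0, False),
]
for topic, dwell, clicked in events:
    update_interest(scores, topic, dwell, clicked)

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

After this session, "tech_news" ranks above "baby_photos" without the user ever stating a preference explicitly — which is the behaviour-based signal LeCun contrasts with a future conversational interface.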
Which company do you think is winning the global AI race?
There’s no company that’s winning. There are a lot of companies that have a lot of R&D efforts in AI. And there’s big competition for talent. But there is no company that is really far ahead in the sense that whatever new technique appears it can be reproduced by another company within three months.
I would say the three or four main companies that are very present on this are Facebook, Google — particularly with DeepMind, I mean it’s Alphabet really, it’s not part of Google anymore — and Microsoft, who has traditionally had quite a big effort in this area. IBM is also ramping up a lot. And then various other companies.
You mentioned Google and DeepMind there. DeepMind (acquired by Google for a reported £400 million) built the AlphaGo agent, which was able to beat the world champion of the board game Go. I’m aware that Facebook was also in that race. Were you disappointed when Facebook didn’t beat DeepMind?
No, no, not disappointed, because it’s a great success story for the field as a whole. Some of our former students and postdocs participated in this project who work at DeepMind.
The system that analyses the Go board and decides on the set of moves to play is actually a convolutional network, which is one of my inventions. So we build on each other’s ideas.
We had a very small effort at Facebook, it was basically two people, to play with Go. Mostly as a vehicle for research in planning and exploration. We worked on this. It was working OK and we open sourced it. But it wasn’t a big effort like DeepMind’s.
In terms of talent, how do you go about ensuring you get the best AI people at Facebook?
Well you know, it’s nurturing relationships with academic laboratories that have a track record of producing interesting students [and] offering researchers the possibility of doing open research, which is publishing their work.
In fact, at FAIR, it’s not just a possibility, it’s a requirement. So, you’re a researcher, you assume that you’re going to publish your work. It’s very important for a scientist because the currency of the career of a scientist is the intellectual impact. So you can’t tell people “come work for us, but you can’t tell people what you’re doing,” because you basically ruin their career. That’s a big element, which I think we pioneered within this context.
And then the interaction with the best colleagues in the world. There’s sort of various things but a lot of it is nurturing relationships with academics.
How much does salary come into it?
Oh, it’s important. Particularly when there is a competitive situation with Microsoft, DeepMind, Google, other parts of Google etc. But the other fundamentals have to be right. If they’re not right, people are just not even considering coming to work for you.
Can you give me an idea as to how much the top AI people earn?
No I can’t.
What’s going on with your chatbot “M” at the moment?
Initially M was an experiment to figure out how people would use a human-level intelligent virtual assistant. It was an experiment in which most of the tasks were actually fulfilled by humans, which means you can’t scale it up to millions of users. That’s why we restricted it to a small number.
And then as time progressed, we kind of learned more about what people ask and started automating a lot of the tasks that the people do and building slightly more specialised bots for particular domains. That’s where it’s going at the moment. So relatively specialised bots that are experts in movies, restaurants and whatever. And then basic research on trying to get machines that can answer any question whose answer is somewhere on the web, maybe on Wikipedia. That’s still at the research level.
Beyond the Partnership on AI initiative, what is Facebook doing to ensure that the artificial intelligence it develops is ethical and safe?
There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this, or some of the scenarios that are described in Nick Bostrom’s book, “Superintelligence,” of you know, let’s build a machine to build paperclips and it will transform the entire galaxy into paperclips — those are not things that we’re worried about because we just don’t have the technology to build machines like that. And we’re not going to have it for another several decades. Although there are meetings, symposiums, discussions, papers discussing the long term ethical problems of developing super intelligent machines.
What we are working on in the context of the Partnership on AI is how to properly deploy AI systems that don’t reflect any biases, for example, that could be present in the data. So for example, we’re not building self-driving cars, but if you’re building a machine learning system that essentially drives cars by itself you want to test it really thoroughly, and the best practices for this are not entirely clear, so there’s a lot of work to do there.
There are rumours that Facebook at one point was thinking about acquiring DeepMind. Do you think DeepMind would have been a good acquisition for Facebook?
You know, things played out the way they played out. There’s a lot of very good people at DeepMind. I think the nature of DeepMind eventually would have been quite a bit different from what it is now if DeepMind had been acquired by a different company than Google.
The challenge I think that DeepMind has is that it’s geographically separated from the mothership in California, and that makes it very difficult to build technology that can be used in products. So it pushes DeepMind to some extent to try to survive on its own. Maybe start activities, development activities, for example, in healthcare applications of AI. They put a big emphasis on communication and public relations because that’s important for the group as a whole, particularly if it’s more difficult for them to produce products. So, it’s a challenge for them.
It’s very nice to have several companies that work on this space in an open fashion because we build on each other’s ideas. So whenever we come up with an idea, very often DeepMind will build on top of it and do something that’s better and vice versa. Sometimes within days or months of each other. We work on the same team. They hire half of my students basically.
They [DeepMind] have the backing of Sergey Brin and Eric Schmidt. So the fact that some of Google’s most powerful people are strongly behind DeepMind, surely that will help them to get their tech into Google’s products, no?
You have to have support from the management for any kind of advanced research effort because the impact of research is in the long run. It’s very indirect. You plant a seed, and then technology is developed around it, and then eventually some entire product line appears or the business is completely transformed.
You have support from the top because this effect is long term. It’s not how much do I invest now and how much am I going to get in six months. So it requires forward looking visionaries at the top, and there are such people at Google, there are such people at Facebook.