Hacker News
Facebook's AI lab (facebook.com)
238 points by vkhuc on Dec 9, 2013 | hide | past | favorite | 91 comments


The AI Facebook has today is already freaky to me at times. While I love any new advancement in AI, these advancements will probably not be used in my interest.


The value of Facebook consists of two things: you and your data. Hurting either is bad for Facebook's long-term strategy.

Here are some things where smart AI could help:

1. Better filtering of spam bots.

2. Better filtering of ads that would significantly improve ad quality.

3. A smart assistant that learns your habits and makes smart suggestions: which events to attend, your writing style, and which immediate notifications to surface based on a personalized importance score.

Or Facebook could exploit machine learning/AI techniques simply to drive up the click-through rate on their ads.


>3. Smart assistant that learns your habits and makes smart suggestions such as: ...

Recognize your romantic partner using the FB network [0]

[0] https://news.ycombinator.com/item?id=6759866


The "which events to attend" thing is already up and running


"It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter." -- Nathaniel Borenstein


Completely agreed. Not looking that forward to this.


Companies like Facebook and Google do a lot though to promote these areas by building great generic tools. Their work in, say, map-reduce has been invaluable to a wide range of applications.


Again, these applications are important to their customers (advertisers), not to the users. And while the research might be reused somewhere else, these companies are so big that the negative impact to users will be much larger.


It looks like they don't have anything particularly taxing compared to the likes of Google. There is only so much they can apply to the FB feed. Graph Search, while cool, still isn't that advanced.


There's plenty of data on the world wide web that pertains to you but isn't linked to Facebook - yet. AI that can do context aware authorship analysis could have huge returns for Facebook.


Yann deals with Machine Learning. His specialty (simplified) was pattern recognition in images (first handwriting then objects).

I'd imagine projects that group similar topics (i.e. these people are all in front of the Great Pyramids or these people are all taking pictures of their babies) which probably will help group/filter content
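To make the grouping idea concrete, here's a toy sketch (my own illustration, not anything Facebook has described): given feature vectors extracted from photos, greedily bucket images whose vectors point in nearly the same direction.

```python
import numpy as np

def group_by_similarity(features, threshold=0.9):
    """Greedily group items whose feature vectors have cosine
    similarity above `threshold` with a group's first member."""
    # Normalize each feature vector to unit length.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / norms
    groups = []  # each group is a list of item indices
    for i, vec in enumerate(unit):
        placed = False
        for group in groups:
            # Compare against the group's representative (first member).
            if unit[group[0]] @ vec >= threshold:
                group.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups

# Toy example: two "pyramid-like" photos and one "baby-like" photo.
feats = np.array([[1.0, 0.1], [0.9, 0.12], [0.05, 1.0]])
print(group_by_similarity(feats))  # → [[0, 1], [2]]
```

In practice the feature vectors would come from a trained network rather than hand-made numbers, and a real system would use proper clustering instead of this greedy pass.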


Although I agree, it might be worth considering that it's creepy because of the uncanny valley.


Uncanny Valley doesn't really happen with AI on the internet.

People are already used to people saying and doing dumb things online, so something that is nearly "smart human like" is a big advance (on average).


Am I the only one who is skeptical about this announcement? I am sure the amount of data Facebook has will be a huge asset to any sort of AI development, but precisely because of the amount and the kinds of data they have, it's just scary what this could be used for...

AI that has personal information of 500M+ people, using it to manipulate people...(first to click on advertisements, and then much more). In the hands of government, I shudder to think what's possible. With NSA already snooping around, perhaps it's not all that distant.


The announcement of an organization whose revenue is based on advertising doing research into AI deeply scares me. Take a look at this MIT study. Each day our corporation-based government takes a step closer to 1984, and each day people become more apathetic and quicker to change the subject the moment the conversation comes up.

http://web.mit.edu/people/amliu/Papers/PentlandLiu_NeuralCom...


Umm, what do you think Google's revenue is based on?


Google's motto is "Don't be evil". They have since crossed this line, and will again. I do not trust Google either.


If you consider selling ads and complying with the appropriate authorities when compelled to "evil", that is.


That was published in 1999 and we still don't have pre-crime reporting security cameras flagging me everywhere I go.

Just saying.


Technology takes time. We'll have them soon, just wait.


Would facebook's data actually be useful to the NSA? I think they would get a lot more actionable data out of emails, SMSes, phone transcripts, accounting software, project management software, etc. The only thing I can imagine facebook being good for is perhaps knowing who they want to start looking at in the future. Search engine history might be more effective for that, though.


The point is, this data exists. The NSA has all of it. What are they doing with it? Probably nothing good.


"We were foolish to think Google was the entity that would become Skynet, when it was Facebook all along." - John Connor


AI has always inspired me to become better at programming and to learn new languages, methods, and ideas. And even with the currently available tools and technologies, it amazes me how few of these dreams actually exist or are being worked on.

EDIT: Just came across an article: "The Mother of All Demos" is 45 today; Douglas Engelbart did exactly what I'm hoping someone today will be able to pull off.


Link to the article?



Wow. With Hinton at Google and LeCun at Facebook, industry has definitely been pulling some big names away from academia.


I beg to differ. They still publish papers, and big names in academia are often consultants for, or receiving grants from, private industry. According to LeCun, he will remain a professor at NYU and run his research lab from there, while serving as director and lead researcher at Facebook.


Oh man, that guy was my machine learning professor at NYU :) He sparked my interest in Lisp, though now I recall that he wrote all the problem examples in his own language... Lush. Even so, all my knowledge of Lisp syntax and neural nets stems from his bad-ass lectures.


I started taking that class but had to drop it -- tried it way too early in my CS career. Having to use his language definitely didn't help! But you're right, I absolutely loved the lectures, even if a bunch of it went over my head.


i had the good fortune of just auditing ;) i was there for music DSP, not CS


Forcing folks to learn his own language is a pretty big conceit. You think he'll be able to get away with that at Fbook?


It's just a Lisp variant with a procedural/matrix-mathy bend. It's super easy to pick up if you've written any Lisp.


Yeah, it was rough at the time, but looking back, if I'd known Lisp at all beforehand it wouldn't have been so bad. It was directly listed by many as a turn-off, though, and is a big part of why a lot of people from my dep't just audited.


ahh - got it. My dept was very heavy on Scheme. Reasonable to have a LISP-variant for AI.


He switched it up to Torch last year (a framework built on top of Lua).

I don't think the neural net library was quite what it was in Lush, but it was getting there.


You are mistaken.

LeCun is now running a high profile lab for a $100B company. He's got the personal attention of the founder. He's only going to be at NYU as a talent conduit. It's not the same as Hinton's arrangement at all.


I am not mistaken. Clearly I never said he would be teaching all the time. He even said he was going to remain as professor on a part time basis.

Pulling someone into industry doesn't stop them from contributing back to the field. This is what they do, what people actually care about. In fact, if you are a big player in a field, you spend most of your time conducting your research and running a lab, not teaching classes. This is the norm, and half of the famous professors I had barely ever came to lecture; only the TA would show up (some schools are strict on this, though, like MIT). And I often tell people: if you want a really solid education, especially as an undergraduate, don't attend the big player's lecture. Your TA or your average not-so-well-known professor can be more helpful.


I beg to differ.

How many people are they teaching in person these days?


He dropped the neural networks class he was slated to teach in the spring, as far as I know.

I wouldn't be surprised if, at this point, his work split 90% FB, 9% NYU Center for Data Science, 1% NYU CS.


Really changes the phrase "those who can't do, teach".


That sassy phrase ("Those who can, do; those who can't, teach.") is irritating to me. Now, I'm not casting judgment on anybody who _does_ find it entertaining or useful, but for what it's worth, the _original_ phrase is:

Those who know, do; those who understand, teach.

-Aristotle.


How do you feel about this?[1] I scoured the internet looking for something to the contrary, but couldn't find anything that positively attributed your quote to Aristotle, or even anything that suggests it predated Shaw's.

[1] http://answers.yahoo.com/question/index?qid=20100823190727AA...


"Those who can't do, teach. And those who can't teach, teach gym.” - Woody Allen

Obviously it's an iterating/evolving meme.


Interesting, I didn't know that's where it originated, which I suppose is your point.


This is huge. LeCun is one of the leading AI experts in the world. With the resources of Facebook, I expect great things. I also see this as a long-term technology bet by Facebook.

I own Facebook stock, so I might be biased.


Will "great things" be "better ads"? Or maybe something worthwhile...



I'm not sure what I should think of this. What are your thoughts with regard to the amount of personal information that Facebook possesses? Regardless of whether they are good or evil, what do you think the potential implications are? I'm curious about HN's thoughts.


>> What are your thoughts with regard to the amount of personal information that Facebook possesses?

"Fucking disgusting"

>> Regardless of whether they are good or evil, what do you think the potential implications are?

Less privacy, more data in the hands of governments, more police state.


The FB data is being given to them freely by their users


Define: freely

I think Moxie Marlinspike's talk at Defcon 18 (2010) was a really good one: http://www.youtube.com/watch?v=eG0KrT6pBPk

He talks about how we do not really have a free choice when it comes to things like having a Facebook account or having a mobile phone. Namely due to the network effect, you don't really have a choice if you want to be a normal member of society.


How is any of that Facebook's fault?


That's exactly their goal. Get as many people to use it as possible so it will be painful to not use. Then exploit the information they gain as much as possible because the users have no alternative.


No alternative? No one is being forced to use Facebook.


There is an increasing amount of peer pressure though.


"How is any of that Facebook's fault?"

As repeated by resonator several posts up.


None of the previous replies addressed this question, so I think it's unfair to downvote this.

As I see it, Facebook could at least attempt to act like they have the user's best interest at heart. This means actively educating them about privacy, risks, and not setting defaults to public. Also not everything on private as this would confuse some people, but at least use non-public and reasonable settings. And/or have a public discussion about it.

Facebook is becoming ubiquitous enough that it's almost something like "internet" or "calling" in general. For such services we have made laws, e.g. in the Netherlands there is a net neutrality law, and sending spam is illegal. But for individual websites we have no laws, nor do I think we can make any in the near future without a whole lot of fuss. I'm not saying governments should make laws for Facebook to follow; I'm just saying it won't happen any time soon, if at all.

So then how do we make sure such a big service acts in our best interest? I think Facebook should start acting less like a for-profit company and more like a government would. Not with elections per se, but at least act in the public's interest instead of working against the users.

And that is how I'd answer the question of "how is this Facebook's fault?". Being a popular for-profit company is not a 'fault', but with so many users they have a responsibility, and I think that it's fair if we, hackers that understand the technology, try to make them act responsibly.


What don't you get? It's exactly facebook's goal to get critical mass to get the network peer pressure.

If it's my goal to create a product that forces people via social pressure to do something, it's definitely my fault when it works.

Your response is the same as people saying "don't fly" when they complain about the TSA. 'Someone had an unpleasant experience with the TSA? How is that the TSA's fault? That person didn't have to fly.'


NO. It depends. Let's reduce your argument to the absurd. If I make it a goal of mine that <foo> happens by next week, and I do <bar> in the meantime, and in fact <foo> happens, whether or not <foo> is my fault depends on what <bar> I did. If <bar> = nothing, then obviously <foo> was not my fault.

Facebook has designed a killer product that makes it really easy to use for social interaction. But Facebook is not forcing you to use it, Facebook is not forcing your friends to use it, Facebook is not forcing social peer pressure on you, and if in fact someone's friends are wholesale excluding that person because they're not on Facebook, Facebook is not responsible for the fact that someone has shitty friends that are too lazy to include that person.


just like nobody is being forced to speak english, or go to university, or work some terrible shitty job?

This is one thing that people often misunderstand: just because there is no explicit threat of retaliation does not mean systematic coercion does not exist.


I took LeCun's class in undergrad at NYU and it definitely wasn't easy. He's a fantastic speaker, definitely made it more enjoyable.


Same here. His Machine Learning course was one of the hardest but most rewarding classes I've ever taken. Definitely glad to see him expanding his reach outside of Academia.


This is all about finding patterns for businesses to better target and sell to people. Valuable information is likely NOT going to reach, or become usable by, say, government organizations or other organizations trying to solve big problems, at least not affordably, because they'll be competing with for-profit businesses bidding on access to those users.


Star power at NIPS:

"Facebook CEO Mark Zuckerberg, CTO Michael Schroepfer and I are at the Neural Information Processing Systems Conference in Lake Tahoe today. Mark will announce the news during his presentation at the NIPS Workshop on Deep Learning later today."

It seems Zuckerberg will have an apres-ski Q&A, and be in a panel discussion at day's end (https://sites.google.com/site/deeplearningworkshopnips2013/s...). Hope the room is big enough.


At NIPS now. There have been rumors going around that Zuckerberg might have just been interested in attending, giving a casual Q&A, not announcing anything like this - but with this announcement, I foresee them having to take down all of the dividers in Harveys ;)


Could you post some bullet points of what was said there? Would be really appreciated by everyone I'm sure!


Unfortunately I didn't make it to his talks - was in other workshops where it would have been impolite to leave. Did get to see him as he was walking off stage though! More relevantly, though, many of my collaborators did attend the deep learning workshop, so I'll grill them about the details over dinner and post about it!


Thanks! Would be interesting to hear what areas FB is trying to apply deep learning to? One obvious area is recognition of objects and people in photos, but I wonder if they plan to use it on other type of data like say text for sentiment analysis or for recommendation engines of various kind?


That would be interesting. I used to be involved with NIPS in the late 1990s, but have crossed paths with LeCun on and off since then. He's an amazing talent, with tremendous breadth.


In the eyes of Facebook, we are all undressed. Now it will be like they are throwing in a free colonoscopy while they're at it.

I bet the guys over at Fort Meade are beside themselves with happiness over this :-)


Seeing Peter Norvig congratulate him in the status gave me chills. Great times to be living around these titans indeed.


When Google announces AI research it makes sense, because self-driving cars and trying to understand what a user means by typing stuff in a search box.

But Facebook? They could create a million virtual DanBCs and then A/B test something against them, and then present me with the irresistible ad, perfectly pitched to draw me in?


"X Labs"-type sub-divisions rarely do things of immediate practical interest to X.

Microsoft Research, for example, doesn't restrict itself to the domain of features for Windows, Office, or XBox. Instead, you get giant touchscreens (which you could say inspired Windows 8's unified tablet-PC approach, but didn't really) and PhotoDNA.

Likewise, I'd expect the things coming out of a Facebook AI Lab to be more of the "new ideas that could replace our entire business model when this one runs dry" than "better ways to advertise." Personal, private social agents that live in the cloud and continuously execute on your goals without requiring direct instruction, say (I'm sure there's a speculative-fiction name for these, but all that comes to mind is "NetNavis.")


And what makes you think Google can't or won't do the same? Their latest aggressive push of Google+ everywhere including the forced Youtube integration indicates that they have the capability or have been doing exactly what you suggest.

Not to mention the huge amounts of data they collect through searches and Gmail that they already have been using to tailor ads to each visitor.


Oh, I totally agree.

I see Google announcing AI research, and I think "self driving car" or "understanding search terms", but I have no doubt that it's also "serve better ads" or "slurp more data".

You're right, I should have put that in.


Perhaps they want to understand what users mean too.


Can you imagine the datasets FB has to work with? Statuses, pictures, locations, social graphs. It's incredible. It's hard to blame LeCun/Hinton/Ng for moving towards industry with data like that. I'd bet that the only place with more data than FB/Google is our good friends at the NSA.


My friend used to work at NSA research. He said they don't have much ready-to-use data yet.


Apparently, it's because NSA researchers are spending all their time playing WoW instead.


Nice try, NSA.


This is awesome and incredibly creepy at the same time.


The combination of AI and social networks reminds me of Friendship is Optimal[1].

It's a story about an AI who has the job "to satisfy everybody's values through friendship and ponies" in an MMORPG but breaks out and starts optimizing the real world.

[1] http://www.fimfiction.net/story/62074/


Google has won the search indexing game. Who will win the next level, when robots understand the content?


I hope these labs are as productive as Google or the old DEC labs: http://www.hpl.hp.com/techreports/Compaq-DEC/


Are any of the conference videos available online, or is that only for paying customers?


Cool! Will soon see online/offline AI in action!


random thoughts on this announcement

I took Yann's ML class during the first semester of my program at NYU two years ago. It was a terrible experience, given that I hadn't worked with linear algebra or mathematical statistics since about 2008. I could barely write Python at the time, so my Lush code was some ungodly non-functional crap. That said, I'm very happy I took the class in 2011 despite hyperventilating my way through the final--it was only offered once more, and I suspect it will never be offered again given today's news.

Yann really knows his stuff. His convolutional nets (and their application to MNIST and image segment classification) represented a significant improvement in computer vision, and he demoed some incredible low-latency image segmentation stuff for us. He runs one of the best neural net labs in the world (up there with Hinton's and Bengio's), and he has some incredible students at NYU. I can see this shaping up as a delayed acquihire of sorts...he will not struggle to find excellent candidates.
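For anyone curious what the core operation of those convolutional nets looks like, here's a minimal "valid" 2D convolution in plain NumPy (a toy illustration of the idea, not LeCun's implementation, which actually uses the flipped-kernel convention in places):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as convnet libraries
    compute it): slide the kernel over the image, taking dot products."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # Elementwise product of the kernel with the patch under it.
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# A vertical-edge detector responds only where intensity changes
# left-to-right, e.g. at the boundary of a dark and a bright region.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1.0, 1.0]])
print(conv2d(img, edge))
```

A convnet learns the kernel values by gradient descent instead of hand-coding them, and stacks many such layers with pooling in between; that's the part that made MNIST digit recognition work so well.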

There's a lot of discussion around the privacy implications, but I think everyone's rehashing old points. Facebook already hires excellent researchers and data scientists -- John Myles White and Sean Taylor just moved out west -- but they don't focus on images just yet, from what I know. This hiring represents an investment in image analysis on Facebook's behalf that matches what they put into unstructured textual data and graphical inference. If you've already stayed with Facebook through the "graph search" announcement, this shouldn't surprise you either.

As someone interested in this type of work, it's an exciting time to live in New York City. Finance and adtech have been here forever, but things have expanded. Many startups (Foursquare, Tumblr, Knewton, Etsy) have invested in applied statistics and machine learning, hiring excellent researchers and engineers. Columbia and NYU have announced data science initiatives in the last 6 months. Very smart people (and others, like me ;) ) are very active in the community here.

There are some obvious applications of Yann's work to Facebook's advertising goals:

> Identifying strong friendships through co-occurrence in photos

> Digit or character recognition applied to marketing in photos (shop signs, brands, etc)

> Image segment classification (e.g. beach, park, road) for use in predicting a photo's location (for those uploaded after the fact)

> All the Instagram photos. I mean seriously. They're committed to making ads seamless--why not use image segmentation + likes to identify photographic structures people are attracted to?
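The first bullet is easy to sketch: score friendship strength as a simple tag co-occurrence count (a toy illustration with made-up names, not Facebook's actual scoring):

```python
from collections import Counter
from itertools import combinations

def tie_strength(photo_tags):
    """Score friendship strength as the number of photos in which
    a pair of people are tagged together."""
    counts = Counter()
    for tagged in photo_tags:
        # Count each unordered pair of people in this photo once.
        for pair in combinations(sorted(set(tagged)), 2):
            counts[pair] += 1
    return counts

photos = [["ann", "bob"], ["ann", "bob", "eve"], ["bob", "eve"]]
scores = tie_strength(photos)
print(scores[("ann", "bob")])  # 2
```

The actual signal would presumably come from face recognition rather than explicit tags, weighted by recency and photo context, but the co-occurrence counting is the skeleton.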

Can anyone think of others?


Facebook has a known, established history of carrying out large-scale, intrusive personal surveillance. Imagine if I said "NSA already hires excellent researchers and data scientists ..." We don't legitimize NSA that way, so why do we legitimize it for Facebook? Does something become morally acceptable just because it is done for money?


Difference is, Facebook can't send me to jail. Or kill me.


Looks awesome for once!



