Facebook has struggled to hire talent since the Cambridge Analytica scandal (cnbc.com)
430 points by Despegar 8 hours ago | 279 comments





I just spent three months hiring in NYC, and now that I think about it, I haven't seen a single person mention they were considering counteroffers from Facebook. For context, Facebook and Google are the two largest tech companies with a significant NYC presence. It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

> Usually half of the close is done for recruiters with the brand Facebook has

I'm also finding that company brand plays a huge role in closing candidates. Our company's brand is generally pretty strong, and I've found one of the things candidates respond to most is the story we tell about our company's past, present, and future. Facebook's story has become "we were founded by a jerk who didn't care about privacy, our not caring about privacy has had massive consequences for American and global society, and our promises to improve our approach to privacy in the future have proven to be disingenuous smokescreens."

It's no wonder the substantial portion of people who care about their employer's ethics are turned off.


There is also an 'evaporative' effect. If no one who works there is seen as ethical, then you'd expect the people who do work there to be unethical or dubious. So trying to get a promotion becomes more cut-throat, the lunch crew has a few more jerks, HR is a bit more biting, etc. Your hackles get raised and you grow more suspicious of the motivations (however benign) of others. Better to just not get involved.

Sheryl hired the swiftboat campaigners to fend off Congress. Finding that out made me assume that over time most of their employees would trend away from the optimistic end of the spectrum.

I wonder if the exec team realizes that the ad-tech industry has had its own Great Financial Crisis. Nobody is in love with them anymore. They'll get about as warm a reception from politicians as Wall Street did when Congress passed Dodd-Frank. Banks don't earn much more than their cost of capital anymore.

> Banks don't earn much more than their cost of capital anymore.

This doesn't sound right. Can you add some citation or detail?


Goldman Sachs's return on equity used to average 20-30% before the crisis. Now, a decade after the crisis, they're glad to be doing more than 10%.

https://i.imgur.com/gtE05WX.png


I'm on mobile so I can't read the numbers on the Excel screenshot you provided, but the historic high return on average equity for banks[1] (not including brokerages) was 16.29% in 1999. At last measure it was 11.85%. Dodd-Frank was merely a speed bump. The vast majority of banks have long since recovered from the crisis.

[1] https://fred.stlouisfed.org/series/USROE


It isn't right. The majority of banks are profitable; the industry makes somewhere around $200B/year in profit (I think that's just retail banking, not including brokerages and such).

This is historically high.


There's also the reputation impact. When all of the bad things at Uber eventually became public, Uber engineers started reporting difficulty getting new jobs. Apparently hiring managers assumed, perhaps correctly, that anyone who stuck it out at a toxic place that long might have been a source of the toxicity themselves.

> "It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook."

interesting anecdote. google is a bigger concern for privacy and personal liberty, yet job seekers are shunning facebook because of its more wide-ranging negative press.


>google is a bigger concern for privacy and personal liberty

Big claim. Any proof?

With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns, no matter who the candidates are.

That's extremely worrisome if you prefer to have people elected through a democratic process based on discussion. This is not about understanding what people want or how they think through big-data analysis; it's about manufacturing it.

Sure, provocations and lies are not new; they have always been part of politics. But with social media, everything happens at scale and at a violent pace.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns, no matter who the candidates are.

(EU citizen here): I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process. If that's not feasible, then the second best option is, indeed, that they provide the same service to all candidates, no matter who those candidates are.

Am I getting it right that you'd prefer they pick some candidates to help, to the detriment of others? Because that option does not sound very healthy to me, personally.


My ideal system is to eliminate all private money from elections. You as a candidate are given a stipend by the FEC at the beginning of the campaign season, the same amount as any other candidate for the same office, and you are free to spend it. You ran out? Tough. See you next election cycle.

Nobody is allowed to give you money or anything else of value (including free airtime) -- not individuals, not companies, just the FEC. Anything else is bribery, and a crime. That way, you won't do things just to please your benefactors and gain an edge over your opponent next time, since your war budget is already accounted for. Instead you can focus on doing what's best for the people.


If you really want money out of politics, then you replace elections with sortition (assemblies of the people). Being drawn at random, these assemblies are representative of the whole population, and since they are not elected, they don't need to campaign. Like politicians, they need to be supported by experts and advisors on the specific topic they are working on. I'd rather trust a group of random people deliberating than a bunch of professional liars. Sortition is a way for people to participate in democracy more than voting once every couple of years and posting on FB.

OK, what if people with money want to spend that money on political speech _without_ coordinating with the candidate? That's the Citizens United problem.

There's no quid pro quo bribery, but if the NRA spends a bazillion dollars attacking your opponent and not you, it'd be hard to say there's no influence on your decision-making process. At the same time, it's really tricky to ban. Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?


Well you'd have to toss out the 1st Amendment to get that idea off the ground.

And what if a candidate wants to spend that stipend on Facebook ads? Is Facebook not allowed to have a salesperson take that money and sell ads to the candidate?

Speaking as a U.S. citizen, your thoughts were exactly my reaction to that comment.

https://energycommerce.house.gov/sites/democrats.energycomme...

https://www.eff.org/deeplinks/2019/04/googles-sensorvault-ca...

Sensorvault:

“includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”

US law enforcement has been regularly accessing Sensorvault user data in a dragnet-like fashion to obtain location details for hundreds or thousands of users.


With Facebook, it's easy for an average individual to leave the platform for good: stop using Fb/Insta/Whatsapp and install something like Privacy Badger to avoid tracking on all the other sites that have some form of Fb integration.

Leaving Google, by contrast, is far more difficult: their ecosystem reaches literally every corner of the web, and you have to deal with it even if you don't consciously use any Google product. For example, if reCAPTCHA doesn't like you, everyday online tasks like paying public school fees [1] or signing up for an online forum become much harder. Another example is AMP, where the fact that you are reading an article hosted on Google infrastructure is often hidden from you. There are many more examples. Trying to quit Google feels like that episode of Black Mirror where a woman is ostracised by everyone because she doesn't have the cybernetic implant that everyone else is using. Just because Google hasn't been caught in any scandal comparable to Cambridge Analytica doesn't mean it's OK for them to have so much unchecked power.

[1] see my submission history for details


A bigger problem to me is Google's search bias and subtle manipulation. The same goes for Facebook's news curation algorithm. These things can directly impact our democracy, yet they're much harder to tackle or even investigate, because the whole thing is so elusive and subjective.

To me, privacy already seems to be a lost cause. We've lost it, and there's little hope of taking it back. Privacy violation is also a relatively easy problem to understand. For bias and manipulation, however, we don't even know where to start.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns, no matter who the candidates are.

I have become more and more convinced that this is Facebook's real business model; that enabling Cambridge Analytica-style operations is the purpose the company actually exists for.


>>google is a bigger concern for privacy and personal liberty

>Big claim. Any proof?

Well. How about the following facts?

-- Android's market share is about 75% of smartphone users. Do non-smartphone users even exist anymore?

-- It's probably safe to say that it's much, much easier to consciously avoid Facebook's services than Google's.

-- Android's hard-coded fallback DNS server is a Google DNS server, and the vast majority of people don't use a VPN to get around this (until Android 9's Private DNS setting, a VPN was the only way to change the DNS server). I haven't looked it up, but it's probably safe to assume that Chromebooks also use Google's DNS by default. My limited networking knowledge tells me this means Google knows who's using which Android/Google device at what time (by matching IP to Google login to device), who's visiting which website at what time, and who's using which app at what time (via requests to the app's server and to Google's servers for location, payment, auth and other info).

-- A lot of people use Gmail. Google can literally read all Gmail email, including messages sent to Gmail from outside Google's servers.

-- The vast majority of Android users have enabled "Google location services". This is a one-time "click OK to continue" dialog that permanently enables location services until they are manually disabled again in settings (nobody does this). Weather apps, Tinder, navigation apps, etc. almost all require it. This means those people are continually sending location data points at sub-second (~500 ms) resolution to Google's servers, even when those apps are turned "off" (meaning running in the background; "off" doesn't exist in Android). Google can literally know how long you poop, who your secret girlfriend is, and what specialist you visited at the hospital. The NYT had a huge piece about this: https://www.nytimes.com/interactive/2018/12/10/business/loca...

-- People use Chrome, with automatic Google login, meaning Google knows everything they do online.

-- I'm not even going to go into the other popular Google apps; we all know them: YouTube, etc. All accounts nicely and automatically linked to one another.

-- All of the above information, and probably much more, can be used to profile users. I think it's safe to say that Google knows absolutely everything about its users at this point.

Then there's the following:

-- Law enforcement (internationally?) can request any data on any Google user in their country, i.e., they can request to look into all the details of someone's life.

-- The NSA can secretly request access to any of Google's data. Google is not allowed to disclose this access. By law. That's what they can legally do. Snowden has told us what they illegally do: anything they want. The NSA literally knows everything there is to know about everybody. In the world. And Google, by law, has to help them with that. In secret. This obviously has political consequences for the simple fact that "information = power". Those political consequences are not in favor of democracy.

I rest my case?


It's getting a little wearying to have to rehearse the ways in which Google is a threat to privacy. But let's get the band together one more time:

Google runs search and email for essentially the entire web, controls the market dominant browser and mobile OS, has tracking scripts on >75% of the top million websites and runs a fair amount of the internet's infrastructure. It is the senior partner in the online advertising duopoly (together with Facebook) and runs one of the three major cloud computing services. It has also become the de facto standards authority for the internet and runs a massive continuous operation to collect photos of every street on the planet, which it is now expanding into interior spaces. It sells always-on microphones for the home, as well as a line of internet-connected home appliances. It does so much invasive stuff that I've probably forgotten half of it here.

So it's neither a big nor a controversial claim in 2019 to point out that Google has a unique breadth of visibility into both the physical world and anything that touches a connected device.


No one disputes that Google has its fingers in many pies -- and definitely needs to be kept on a leash. But the claim was that Google is not just a matter of concern, but somehow a clearly bigger threat than FB.

Can anyone provide substantiation for that claim?


That seems obvious. If you don't use Facebook you're pretty much outside of the Facebook tracking network with a few exceptions wrt Facebook cookie tracking which you can kill with a browser plugin like Facebook Disconnect. With Google, the tracking surface area is orders of magnitude more ubiquitous - everything from Search to YouTube to Chrome to Email to Android and on and on. Facebook is almost (but not quite) negligible in comparison.

Google also sees pretty much every website visit for every website in the world, through Google Analytics.

Has Google ever disclosed exactly what data they collect, what they do with it, who can look at it, etc? We "know" that Google takes privacy "seriously", but that is a faith based position.

Actually, Google has. [0]

And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent about which data is collected and stored; papers like the NYT have covered it extensively - see [1].

And Google also gives you clear ways to delete this data, as referenced in that NYT article [2].

And moreover, Google has been consistently on track to store less private data. Example: location data is going to be auto-deleted for users that want that, as of this month[3]. Maps now gets an incognito mode[4].

>but that is a faith based position.

Hope the links I referenced will help dispel this notion. Google does take privacy seriously.

(Disclaimer: I work for Google. The opinions expressed here are mine and not of my employer; etc - what I said is public knowledge.).

[0]https://policies.google.com/technologies/retention?hl=en-US

[1]https://www.nytimes.com/2019/04/13/technology/google-sensorv...

[2]https://support.google.com/accounts/answer/3118687?hl=en

[3]https://mashable.com/article/google-auto-delete-location-his...

[4]https://www.theverge.com/2019/5/7/18535657/google-incognito-...


> And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].

How did you read that article and come away with the conclusion that Google has been "pretty transparent"? The story was written after more than a year of other news outlets reporting on law enforcement using Google's location data to fish for suspects. Google had been providing this data for at least two years before the Times reported on it [0].

> And moreover, Google has been consistently on track to store less private data.

Such as credit card transaction data collected without most people's knowledge [1] or location data after you've explicitly told it not to [2]?

Technology companies need to understand that both words "informed consent" are important. We currently have very little in the way of choices when it comes to data collection. It is simply not possible to opt-out anymore without tremendous effort and personal cost. I like this quote from Maciej Ceglowski:

"A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus. Similarly, while it is possible in principle to throw one's laptop into the sea and renounce all technology, it is no longer possible to opt out of a surveillance society."

[0]: https://www.wral.com/Raleigh-police-search-google-location-h...

[1]: https://www.cnbc.com/2017/05/24/google-can-now-track-your-of...

[2]: https://www.apnews.com/828aefab64d4411bac257a07c1af0ecb


Facebook has its privacy policy too. So what? Even if all the listed policies are followed, even if they don't have loopholes (and they almost certainly do), Google still collects and retains a metric fuckton of information that isn't necessary to provide the services it actually provides. The NYT article is a great demonstration. And there is very little oversight around this.

Pre-Disclaimer: I don't mean to only pick on Google here, it applies to any company that collects such a vast amount of personal data on users. Also.. nothing personal :)

>Actually, Google has.

In extremely vague terms, yes. I want to see an itemized list.

E.g., at Company X, this is what we collect:

1) Your name, age, location, DOB.

2) Your location is sent to a Company X server every 10 minutes.

3) Your IP is tracked per session.

4) All this data is linked to your profile.

5) Anything you type in the search bar is sent to a Company X server.

6) After anonymizing (if we do it), this is what your data looks like.

7) We never delete any of the above, for the following reasons, etc.

>And moreover, Google has been consistently on track to store less private data.

The default should be zero, or as little collection of data as possible. From what you've said, it seems people can opt out of some data collection, but it's vague as to exactly what data is still being collected versus what isn't.

>Hope the links I referenced will help dispel this notion. Google does take privacy seriously.

Unfortunately they don't. I won't dispute your second claim.


Far better than an itemized list: you can download all your data from Google.

https://support.google.com/accounts/answer/3024190?hl=en

> The default should be zero/as little as possible collection of data.

Really? What about telemetry for self-driving cars? Is it immoral to develop a system that leads to less blunt trauma and death on roads? We (HN users; I don't work for any of these companies) can define your term "as little as possible" about as loosely as you seem to define the parent's term "seriously". The point being that such adjectives are difficult to pin down but also difficult to avoid. Define "difficult" however you see fit.


> What about telemetry for self-driving cars?

They own the cars so they can track them all they want.

Tracking me all over the place after I click the "Do Not Track Me" button isn't acceptable.

> Is it immoral to develop a system that leads to less blunt trauma and death on roads?

It could well be. Just as we humans decided not to use the scientific research the Nazis generated on unwilling human subjects, there are definite limits to what is acceptable even if the overall benefits are huge.


Collectively, we did no such thing. Many individual researchers and journals refused to use Nazi research, but many felt that it was unethical not to use it if it could save lives. In particular, I believe the results of Nazi hypothermia experiments were extensively used after the war. It's certainly not a cut-and-dried problem with an obvious ethical answer.

It's all here, and you can delete it (including batch delete by period or source): https://myactivity.google.com/

This page includes other types of data (e.g videos you upload to youtube or mails in Gmail): https://policies.google.com/privacy


Thanks for linking to the policy document. They have this convenient line that allows them to do anything.

"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures."

>It's all here, and you can delete it (including batch delete by period or source)

That scratches the surface, but an iceberg hides underneath. For one, how do we know it's all the data? For another, there is no indication as to who has seen it or how Google uses it. That is my point; Google has never detailed those things, I suppose for legal reasons. A user has a right to know exactly what they are trading with Google in exchange for free services. They can then make up their own mind about whether it's worth it. I'm just picking on Google here because it's a soft target, but this should apply to any service. We need new privacy regulations to formalize this.


Sounds like they just needed to spin up one "affiliate" and provide the data to that for data mining / etc purposes.

Anyone deleting the data "Google" holds would have zero effect on the affiliate, while giving some people the feeling Google was doing the right thing.


>>google is a bigger concern for privacy and personal liberty

>Big claim. Any proof?

All this skepticism around Google's capacity to abuse their troves of data and invasive services is a clear indicator that this discussion has very little to do with real privacy. It is mostly a playground for various corporate, political and media shills.


Just look at the amount of personal information Google knows/records about you. Your search history, web stats through Chrome, location history through Android, with whom you exchange emails if you're using GMail, which sites you visit and how long you stay on them through Google Analytics, probably online purchases with a combination of AdSense/AdWords & Analytics, everything you watch on YouTube etc.

They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.


> The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.

This is the whole point though, is it not? As far as we know, Google treats the data they collect more thoughtfully and responsibly than Facebook. And so they are (rightly or not) viewed as less of a threat to the public good.

Of course, they could just be better at hiding their abuse of our data... But that's a conspiracy theory, not a matter of public record like the Cambridge Analytica scandal.


Is that really any better? Google is so monolithic and all-encompassing that data collected by their services can be shipped around internally instead of having to be sold to third parties.

> be shipped around internally

How is that a problem? The issue at hand is the irresponsible handling of data (especially wrt 3rd parties), not the general handling of 1st-party data competently within an internal network.

So yes, it's a LOT better.


No, it's not. They share the data indirectly by allowing companies to target individuals for advertising purposes based on that data. You search for shoes on Google and then ads about shoes follow you all over the web. So while you can't download users' posts like CA did in order to profile them for their political affiliation you can surely target them for whatever product you want to sell. If it was just about ads on Google everything would be hunky dory. But it's not. Just because they're nice and cool doesn't mean we have to give them a free pass to our personal lives.

>They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.

And that is something much more relevant to many users. I don't mind sharing a lot of my data as long as I know where it actually ends up. If Google uses my data to improve their ad algorithm, I'm fine with it; if my Facebook data ends up in the hands of some election-manipulation company, I'm not fine with it, no matter how much data it is.


> I know where my data actually ends up

And how do you know what Google does with it? AFAIK Google has never officially stated in specific detail what data they collect, what they do with it, who can access it, etc.

Their Privacy Policy gives them a giant escape hatch to essentially do anything with it -

"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures. "

https://policies.google.com/privacy?hl=en-US#infosharing


I think it's quite disingenuous that you didn't quote the rest of that sentence:

>"...For example, we use service providers to help us with customer support"

As far as I'm aware there is no evidence that Google shares my personal information, without my explicit consent, with third parties like Cambridge Analytica, which collected tens of millions of individual user profiles.


Sorry, how is it disingenuous? I didn't consider the example relevant to the policy itself, and I provided a link to the source material for anyone to read. Giving a benign example is meant to downplay the fact that Google can do anything they want with your data.


Anecdote time.

My wife and I typically donate to a few non profits, such as the ACLU and Trout Unlimited. They occasionally mail us, but we did give them our address so that’s ok.

But one day she donated to the Environmental Defense Fund. Since then, the number of surveys and donation requests from random non-profits has exploded to 3-4 a week, including weird ones like evangelical surveys and pro-Israel mailings. My wife is pissed at the EDF and will never give them another dollar.

The point? We were both fine with the non-profits having our address and using it, but knowing that one of them sold that data really pissed her off.


Google has your data and uses it for themselves. Facebook has your data and gives it (or leaks it) to anyone with money.

They both sell ads and offer advanced targeting options. Very similar businesses.

Google makes a lot of genuinely useful products and services. We've all got to wrestle with the privacy tradeoffs of "free" maps, "free" email, "free" Android, etc. But at least for many people, the satisfaction of using well-built tools to accomplish more is enough of an offset.

Facebook is much more likely to be seen as a guilty pleasure, or a marvelous time-waster, or something else that's a bit farther down the utility curve.


Perhaps in some countries/demographics. I'd bet Instagram, and WhatsApp for free basic communication, are seen as much higher utility by a significant portion of the global population.

Facebook makes products that many people find genuinely useful, it just makes fewer of them.

I wonder how much of it is that Facebook isn't all that much fun anymore, and working there would provoke all sorts of "I don't use it anymore" comments from one's peers.

Or maybe prospects don't use Facebook either and it seems odd to contribute to a website you don't even visit.

Instagram and WhatsApp are more popular than ever, though.

Google collects more data, but they're much less free-wheeling with how they share it around. Pick your poison I guess.

If you have time, can you elaborate why you believe Google is a larger threat to personal liberty and privacy?

well, it's the nature of the advertising business to defy privacy and liberty. competition occurs around how well you know consumers and how well you can manipulate those consumers into actions favorable to you (i.e., exerting power over you). further, online advertising is basically a duopoly of google and facebook, with google being twice as big as facebook and much more invasive.

google's, or more broadly, alphabet's, only competitive advantage is a thin lead on what might be called data intelligence (or surveillance, for the more cynical). they collect data across all internet ingresses/egresses, on not just those who opt-in, but even those who actively avoid google (through android, gmail, google apps, analytics, dns, internet access, etc.). and that data is super-valuable--alphabet had $30B in profits on $137B in revenue (an extraordinary margin).

to be clear, i'm not attempting to judge or disparage individual engineers at google. i'm sure most are mighty fine folks.

but for the foreseeable future, google really has no choice in the matter, not until it finds a different massive market from which to derive revenues. it's the nature of the business. and in the meantime, it's also under assault from intelligence, paramilitary, corporate, and governmental organizations from across the globe.

at least for americans, privacy and liberty are fundamental and inalienable rights. even though the constitution explicitly forbids only governmental interference in those rights, they apply more broadly to any entity, particularly global corporations, attempting to exert power over individuals. and while those rights are inalienable, citizens still have a duty to be vigilant against infringements.


I too was curious about this weighting. FB slurps in all of the data that users voluntarily post; Google mostly learns things about users through inference, whereas FB gets data posted directly by the user. Seems to me that FB is able to be way more invasive.

> FB slurps in all of the data that users voluntarily post.

That seems likely to be a grand understatement. FB has the opportunity to collect a great deal of data about their users beyond what they explicitly post -- for example, data about when and how they use Facebook mobile apps, how they interact with the Facebook web site, and what external web sites they visit which contain Facebook Like widgets.
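That last point is worth spelling out: an embedded widget is just a resource fetched from the widget host's servers, so every page view automatically ships the visitor's cookie (a persistent ID) and a Referer header (the page being read) to that host. A minimal stdlib-only sketch of the idea, with a hypothetical tracker server that is not Facebook's actual code:

```python
# Minimal sketch of third-party "widget" tracking (hypothetical server).
# Any page that embeds a resource from this server leaks, on every view,
# the visitor's cookie (identity) and Referer (the page being read).
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        visitor = self.headers.get("Cookie", "first visit")
        page = self.headers.get("Referer", "unknown page")
        print(f"{visitor} viewed {page}")  # the "tracking": identity + page
        self.send_response(200)
        # Set a long-lived ID so the same visitor is recognized across
        # every site that embeds this widget.
        self.send_header("Set-Cookie", "uid=12345; Max-Age=31536000")
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # stand-in for the widget's content

# To run: HTTPServer(("127.0.0.1", 8080), TrackerHandler).serve_forever()
```

This is why blocking third-party cookies (or the widget requests themselves, as Privacy Badger does) cuts off this particular channel: without the cookie, each request is anonymous.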


But Google does all of this as well with its api/fonts/analytics/etc being used.

Don't forget offline credit card transactions, FB and Google both.

https://www.bbc.com/news/technology-45368040


On the other hand, FB is inherently social. I assume everything I give to FB has a chance of being public one day. I have some private conversations, but in the back of my mind is that time the UI was deceptive and made seemingly direct messages public. FB is for sharing things. Google runs my phone, my work and personal email, my calendar, and more. I think they have a better attitude toward it, hence my willingness to trust them so far, but from the standpoint of ability to be invasive, Google blows everyone else out of the water on my devices.

Do you not consider Gmail data posted voluntarily by the user? How about search queries or calendar entries?

I can see your concern about messages via email, but I know for me personally, email is just not a thing anymore. Forgetting plain SPAM, corporations/marketing/etc have ruined email into this signal that has such a low S/N ratio that it's just not useful. What percentage of internet users actually use email for communication anymore? Sure, some, but it's not my largest attack vector (I consider Google/FB as attacking me).

Anything serious goes through email, and this is the data I’d be most worried about leaking: anything from security-related stuff like login/ID confirmation to receipts, confirmations, sensitive data, and professional communication.

Waaay more valuable than FB scraping my phonebook and photos


This may be true for personal communication but any sort of business deal is going to be happening over email. Mortgages, selling your company, large sales... All of the contracts are going to end up in your inbox.

Google has far greater potential for invading an individual's privacy than Facebook does. Google has Android, Chrome, search, maps and gmail (and now photos). Those are all very critical pieces to a person's real world life. Facebook has FB, Instagram and WhatsApp. Yes, some private communication but it's limited to "social networking". Your taxes and utility bills don't get mailed to FB Messenger. You don't search for cancer research on Instagram and Facebook can't tell what other apps are installed on your phone.

I’ve been getting ads on Instagram for health issues I discussed on reddit.

Social networks live by the sword of voyeurism and die by the sword of voyeurism. This is not something Google has to worry about.

Has there been any evidence of abuse and misdirection on the part of Google at the same level as Facebook?

This is more than just negative press, this is a question of how data collection has been misused, and what lies executives have told about current and future plans surrounding privacy and data abuse.

But, personally, I'm staying away from FB, Google, Amazon, Snapchat, et al for the reasons you've mentioned; negative press or no, I cannot ethically work for companies that are haphazardly building the foundations of a potential technocratic dystopia in their chase for profitability.


And, yet, here in the bay - my company (a startup) sent out two offers to candidates quite recently and they both went to FB instead.

There is no shortage of people joining FB because there's no shortage of people wanting to join a big company. Maybe if they're all comparing offers between big companies then they'll join some other big co but if the difference is startup vs Facebook... FB wins.


It seems like your company should consider remote workers. I live in Denver and have told Facebook recruiters that I’m specifically not interested in working at Facebook, but I would consider a remote position at a startup. I’m sure as hell not relocating to the Bay Area is all.

If we wanted remote workers then we'd hire people in Romania. Like the last place I worked at did.

Are you offering a competitive salary?

I mean... does any startup when compared to FAANG? Salaries are basically the same but the total compensation is, obviously, wildly different since expected value for startup stock is horrible.

> my company (a startup)

ad-tech startup?


Nah. FinTech.

It is not only ethics.

With SO much negative press, I feel that Facebook has lost its mission among the wider public. If it is a net bad for society, or even just perceived as one, it is hard to hire someone who shares that vision with you; you only get mercenaries.

Good people are weird, though. They work for money, like everyone else, but not just money.


I agree with everything you've said until the last part. Google is only marginally better than fb when it comes to some of these issues of privacy. The issue people have with facebook is that it has a reputation for being a pressure cooker.

In other words, Facebook now has no redeeming qualities.

I've gotten a bunch of pings from them over the last few months, and I just chuckle, say "hahahano", delete it and move on. I don't know if it's a coincidence that the pings happened after the scandal or if they have gotten into 'look under every rock' mode.


I think it is the latter. All the pings I see now are so mundane and banal (most of mine are friend suggestions for people I’ve never met). They really must be scraping the metaphorical bottom of the barrel.

Anecdotal, but in the past year, I had tons of recruiters from Google/Amazon/etc. knocking on my LinkedIn box. However, not a single one from Facebook. Maybe they just simply didn’t fund recruiting efforts as much as the other tech companies or weren’t hiring as aggressively.

IIRC, Google places a much higher emphasis on making counteroffers in the first place, as well as making those counteroffers hard to refuse.

Definitely not true.

> It's no wonder the substantial portion of people who care about their employer's ethics are turned off

Nope. The people I know who turned down FB offers did so purely because they see it as a less stable company and worry that the stock will keep falling. No one wants to wake up a month later to find out that their signing bonus just got reduced by 10% due to a bad news cycle. I would estimate that less than 10% of people turn down an employer due to privacy-related ethics. Also, on a side note, FB has jacked up stock bonuses for existing employees. Their attrition rate is virtually unaffected despite all the bad news.


I'm not sure how the journalist fact checked this, but in 2016 CMU sent 12 people to Facebook[1]. In 2018 CMU sent 27 people to Facebook[2].

[1] https://www.cmu.edu/career/documents/2016_one_pagers/scs/scs... [2] https://www.cmu.edu/career/documents/2018_one_pagers/scs/1-P...


Or 29 if you include WhatsApp.

> After the publication of this story, Harrison contacted CNBC to say “these numbers are totally wrong.”

> “Facebook regularly ranks high on industry lists of most attractive employers,” Harrison said in a statement. “For example, in the last year we were rated as #1 on Indeed’s Top Rated Workplaces, #2 on LinkedIn’s Top Companies, and #7 on Glassdoor’s Best Places to Work. Our annual intern survey showed exceptionally strong sentiment and intent to return and we continue to see strong acceptance rates across University Recruiting.”

Perhaps it's best not to take a couple ex-recruiters word as blanket truth about company wide trends.

Of course, the article simply mentions this then goes straight back to asserting company wide morale problems, which is an interesting narrative to pursue, when that's not really what the majority of employees are feeling (which is further reflected by strong hiring numbers and low engineer attrition).


This is not a surprising headline. If you have values about privacy, decency, civil discourse, honesty or integrity you wouldn’t want to work there. Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it’d be a big fat “no” to working there. And it’s not just our democracy that is undermined by FB. There’s a litany of abuses that they have either been horribly naive to or downright negligent in addressing.

If you are bright-eyed optimistic about Facebook I'd be interested to hear your counterpoint to all of the scandal. I don't think there is any company in the FAANG that is an altruistic enterprise but it isn't surprising that FB would have a decline in hiring.


> I don't think there is any company in the FAANG that is an altruistic enterprise

I feel like Google started that way, and then lost its way sometime between 2009-2012.

Projects like Google Scholar, Google Books, Google Summer of Code, Google Reader, Google Open Source, Google.org, and pulling out of China didn't really have much of a business justification, but were simply something good that they could do. Unfortunately they're a public company, and when you start struggling to meet analysts' (perpetually inflating) estimates, being good - or at least not evil - is usually the first thing on the chopping block.


Google never figured out how to make serious bank outside of the marketing department.

The fact that they kept the wheels on as long as they did, I gotta give them some respect for that. But they were always destined to end up being amoral at best and a cesspool at worst.

If you are starting a company and think you want to be proud of it for the rest of your life, sell a real product, not your users.


Not true: they make billions from cloud services.

Re: “you are the product” meme. I guess it’s a mechanism for raising awareness of privacy violation, but I really don’t like it. If you were literally the product, you would be a slave. You’re not. What they sell is your attention.

A big reason for not liking “you are the product” memes is it misses the key aspect of manipulation, which phrases like “the attention economy” capture. You are being manipulated into giving up more of your time and attention.


If by "marketing department" you mean "advertising business" you would be correct. But I am skeptical you meant that.

I don't think that it has anything to do with altruism. Back then, it was not the right time to optimise for profit as new and exciting things were happening daily, it was time to explore not to exploit.

These days the exciting things are happening in other areas, so for the Internet giants, it's time to optimize for profit.


Also, as a smaller player, they stood more to benefit from open source projects (Android and Chrome) and open standards (the web and email). Now that they're on top, the most rational strategy is to secure their position by destroying the bridges they used to get there, locking down those open technologies.

> it was time to explore not to exploit.

Is that not a pretty good definition of not being evil? IMO it is still the time to explore, not exploit, even if that's not what they're doing.


Google had been receiving shit from people for violating privacy ever since they had the novel idea to release a free email service that scanned your emails to deliver you targeted ads. The consequent centralization of email (ISP-provided email pretty much died after) was subsequently used to allow the NSA to scan a huge amount of people's personal information.

I think in the last few years is when things tipped to me distrusting Google more than the boogeyman of older times - Microsoft.


> and then lost its way sometime between 2009-2012.

Tahrir Square was the high water mark of the old school techies. The failure of tech to effect real and lasting change really hasn't been understood by the techies, even still. That optimism about the future and tech's role in it, is gone.


Based on discussions I’ve had with Egyptians, Facebook was used to track down dissidents after the counter-revolution that brought Sisi into power. Not sure if it was Tahrir-era posts that got them into trouble, or criticism of the Sisi government.

The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs, and blinding Obama-era officials to the way these sites could be turned into tools of disinformation and repression.


"The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs"

When the media talked about the "Twitter revolution" I still remember thinking that there were people risking their lives on the streets and how ridiculous it was that some social media guys drinking lattes in their offices got the credit.


When you spend enough time in the future, you forget all the shitty things about the past that tech has changed and only notice the problems that stand out today. Not sure if you're specifically referencing Tahrir Square with your second sentence, but tech has definitely led to real, lasting and immensely positive change worldwide.

Forget the election, just what social networking is doing to young people's minds. They're making money by making a lot of people miserable - just not how I'd want to make a living.

Making people miserable and unable to understand the world outside these addictive platforms. I know so so many 20 somethings who genuinely don't understand the facade that is social media. They're giving up their youth in pursuit of a drug and they don't even realize it.

I have no money and a very shitty laptop, and thanks to Google Colab's free, hosted Jupyter Notebooks I'm having a blast learning Keras.

I'm not saying they're saints, but they've given me something free that's improved my life. Maybe it's ultimately greedy in the sense that later if I need a cloud platform I'll definitely use GCP. But I think that kind of mutualism is actually better in practice than altruism.


Don't forget about WhatsApp. It was the main channel for the dissemination of fake news in the Brazilian election. Now we have a global warming denier in the presidency, and Amazon deforestation is reaching record levels.

Sure, there isn't any company in FAANG that is an altruistic enterprise, but the only purely evil one is Facebook.

What really impresses me is that there's still a lot of talented people working there.


Any communication platform that is easy to use and easy to reach people on, and will therefore be popular, is a great channel for the dissemination of fake news. Well, guess what, it is also a great channel for communicating non-fake news, and talking to people that matter to you, and sharing your interests with like-minded people, and...

Blaming the platform for carrying fake news seems disingenuous. Fake news has been spreading over any available channel ever since humans learned to talk and figured out that they can tell lies to each other. Blame people for believing most of anything they're told.


I think one needs to be very intentionally oblivious to not notice the qualitative difference between fake news of the past and fake news right now.

Fake news in the past always had an identifiable source, because there was still an institution, a company, or someone with their name on the door between reader and publisher. As it stands, no such barrier exists any more. Things can be inserted by malicious actors into the debate, and they spread automatically simply because they have the tendency to 'go viral', something entirely absent in the past. That has added a completely new set of problems.

>Blame people for believing most of anything they're told

Precisely because it is very much in everyone's nature to suffer from these mechanisms it makes no sense to blame ' the people'. What does this imply, a great re-education of everyone? Obviously the only thing we can change is the companies, institutions and rules that determine how we consume the news, not how human brains disseminate them.


I asked one, who pointed out he'd like to make change from within rather than blog about them being evil from the outside.

I don't know which way is right.


FB's core business model is the root problem. "Working from within" is vain, naive, and futile; only Zuckerberg has the power to change the business model, and we all know thats not happening.

Unpopular opinion:

I don't have a problem with business model (targeted ads) but I have a massive problem with lack of honesty, and this is the distinction between Facebook and Google for me. Google tells you what they collect and gives you the controls to delete it. This is enough for me.

Facebook struggles to remember that I want my timeline kept private.

I also believe that Cambridge Analytica was no accident. FB knew what they were doing, and they decided to throw them under the bus when the media turned on them.

Trust is hard to build up and can be shattered in a day.


> Don't forget about WhatsApp. It was the main channel of dissemination of fake news in Brazilian Election.

How would you solve this problem?


>Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it’d be a big fat “no” to working there. And it’s not just our democracy that is undermined by FB

Come on. According to FB the IRA had 80000 posts over a two year period. In the same period there were 33 trillion FB posts. What moron still believes this garbage?

FB was hung out to dry by Congressional democrats too spineless to own up to their own pathetic failure to defeat Trump.


I don't think you can discount that a concerted effort to create viral content will spread much farther than arbitrary wall posts by individuals. There are statistical methods Facebook could use to figure out how much of an impact that they had and I have not seen any such analysis yet.

It's also worth noting that when something goes viral, it's often not contained on one social network, and it becomes impossible for the platform to measure its reach and impact.

How could twitter, for example, really measure the impact of something like that video of the Covington High School kids, which was amplified on twitter (shared by a fake account, IIRC), picked up by the media, and then talked about incessantly for weeks, all over the place?


That's only one reason this whole thing is bullshit. The other is that the alleged content is just random gibberish with no obvious intent or means to subvert anything. It's only by assuming that every post had its maximum theoretical pernicious effect (and that a pernicious effect was the intent in the first place, which is just supposition) that this whole thing becomes meaningful.

It is the desire to make this assumption (that Russia subverted the campaign) that drives the conclusion more than anything else. None of which is to say FB is innocent of blame. But their crime is hooking up an ad network to the social network, not colluding with Russians.


How many ads were there? If trillions of messages are needed to influence behavior, then Facebook ads would have no value.

As much as everyone wants to believe this is because all the applicants are suddenly taking strong ethical stances, I bet it has more to do with Facebook simply not being considered cool or exciting anymore.

Sure, but one of the biggest reasons it isn't considered cool or exiting anymore is all the negative press.

Really? Obvious data privacy issues becoming mainstream is what is finally convincing programmers not to want to work there?

Hasn’t all of this stuff been obvious forever to programmers?


It's possible that now that this information has gone mainstream, programmers worry about how their non-tech friends view them for working there.

> Hasn’t all of this stuff been obvious forever to programmers?

Yes, but it wasn't at the "oh crap elections were manipulated, democracies toppled, dissidents tracked down, and genocides enabled" level.

The fact that Apple, the richest company in the world, now has a mainstream marketing campaign around privacy tells you it is now officially mainstream mainstream, not just programmer mainstream.


Yes, I think we're finally getting to the state of engineering and medicine in the 1800s, where bridges and buildings were collapsing, snake oil salesmen and physicians were indistinguishable to the layman, etc. Enough catastrophe will eventually motivate society to regulate the upstarts.

I think this story is submarine PR paid for by Facebook to garner sympathy.

Agreed. I would argue that Facebook is not considered cool as a direct result of all the outrage surrounding it.

Facebook was uncool before the outrage really took off. It's bloated and fewer young people from each cohort take to it each year.

And the root cause of its suddenly "not being considered cool or exciting anymore" would be?

... not really innovating? its main product is still centered on social signaling and gossip ... just like day 1. Also the social craze is not so crazy anymore (I wonder, how are the other social apps doing?)

Limited upside for the stock?

Its ML research is exciting. I would like to work with Yann LeCun.

Yeah, it's seen as the platform your parents (or worse, grandparents) use. Pretty much a step above Next Door. Why would you want to work for that over some of the other companies out there?

Facebook is also instagram and whatsapp, two platforms used by young people

My teenagers have pretty much moved on from those.

> My teenagers have pretty much moved on from those.

Out of curiosity what have they moved on to?


TikTok is pretty popular these days, for one.

Recently I think the scandals haven't been the single biggest factor when deciding between Facebook and other firms.

The common reason I heard from most of my friends who turned down FB or quit FB was that the working culture is too demanding and high-pressure. Google, on the other hand, is more laid back and family-friendly. So people who have started building a family will prefer Google over FB. The nice thing is FB tends to offer a higher level than Google, so in some cases, if you get matched, it works out pretty well.

I have a friend who worked at FB. After he came back from paternity leave, his manager told him he had been slacking (his reviews were always "meets all"/"exceeding" before) and it was time to put in more work. He quit after a month.


Personal anecdote: I had a job offer from Facebook and a couple other big tech companies. The Facebook offer was substantially better fiscally than the other ones and it was clear to me that they were having trouble hiring. Their initial equity grant has no cliff and the signing bonus was massive for somebody two years out of school: $75,000 cash in first paycheck.

However I ultimately turned it down because of ethical concerns about working there combined with a sense that people would not approve of my job choice. I.e. even if I don't find what they're doing ethically questionable (and I do, although I don't think they're so bad), I didn't want to have to explain myself or defend them to everybody when I mentioned where I worked. Just my two cents as somebody who was one of the 50% of candidates who turned down the job.


> Their initial equity grant has no cliff and the signing bonus was massive for somebody two years out of school: $75,000 cash in first paycheck.

* Google got rid of the cliff too.
* The $75K sign-on bonus is nothing new.

These are not signals that we're having trouble hiring now.

> However I ultimately turned it down because of ethical concerns about working there combined with a sense that people would not approve of my job choice. I.e. even if I don't find what they're doing ethically questionable (and I do, although I don't think they're so bad), I didn't want to have to explain myself or defend them to everybody when I mentioned where I worked. Just my two cents as somebody who was one of the 50% of candidates who turned down the job.

Honestly, working at FB as a SWE is awesome. Like beyond awesome. If impressing other people is what you're optimizing for, you do you, but just know that you're missing out big time.


As a current student, I'm actually surprised by this. Maybe I just hang out with evil people, but I don't get the impression most young programmers care that much about ethics. Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over. Perhaps there's other reasons?

I do joke with my friend who works at Bloomberg that the "evil" finance view has now flipped completely. Bloomberg is a pretty ethical company compared to Facebook, Google, etc.


Ironically working for a bank is probably often more moral / does more good than Google. Keeping people's money safe is a very real service. People complain about CC interchange fees, but it costs ~1-2% to process cash as well. (Safes, hiring armored trucks to pick it up, etc).

Not all of finance is swapping debt like a commodity til the economy crashes and foreclosing on people who had some bad luck.

Allowing people to use their money conveniently and securely is arguably bringing more value to the world than helping run psyops to convince people to buy things they don't actually need. People need checking, savings, and credit card accounts and they need them to be secure and reliable


Age is a proxy for putting value into such concepts as "ethics". When you're more or less senior and have your basic needs met, then you can afford to be picky about what you work on. Fresh grads might not care because it takes knowing evil to know good (see the story of the original sin and the Tree of the knowledge of good and evil), and work experience is like a separate life experience.

Heck, I've heard a theory that you should only be counting programming years as life years, i.e. if they haven't been programming for 18 years, then they aren't adults in the world of software. And the funny thing is, once you're at least a teenager by this definition, then you start thinking that they might actually be onto something...


> When you're more or less senior and have your basic needs met, then you can afford to be picky on what you work.

If you start from there, it follows that countries with poor social support systems will tend to generate less ethical workers and less ethical companies.

I grew up in Canada. When I came to California, I had zero debt. That made it relatively easy for me to choose ethical (even altruistic!) places to work starting from very early in my career.

Imagine how different the entire technology landscape would be if most American computer science grads had the same flexibility.


What you think you want while in classes and the shifting realities once you enter the workforce can be pretty night-and-day.

Personally I would have entertained the idea at working at a "Big 4"-type company out of school knowing that they were ethically opposed to me. I guess mostly because the name on a resume is worth multiple other jobs in some scenarios and gives you an advantage over your peers.

Just a few years later, a 6-figure salary doesn't seem that outrageous and unique to those big corporations. Now that the difference is only in the $10s of thousands, these decisions become a little more nuanced and uncertain. Besides that, once you hit the high 5-figures, money becomes less and less of a driver in your life unless you are in desperate need due to your circumstances.

My point is that I think fresh graduates will put aside their ethical qualms because they don't yet know their worth and place in the workforce. That can change pretty quick.

That being said, plenty of people just don't care, and that's why these companies still have thousands of developers. It's easy to be blissfully ignorant of your contributions to privacy degeneration and corporate takeovers of our lives when "all you do" is write some React components or speed up some data pipelines. The executives and managers are the ones who will really need to reckon with their consciences, knowing they implemented all these nasty programs.


I work in finance (a trading firm, not a bank). In a way it's freeing to admit to yourself and to your coworkers that all you really care about is making money. No one's deluded thinking they're changing the world by selling options like FAANGs think they are by harvesting personal data.

It's not like we're making the world better but we're not actively harming it.


> Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over.

But the issue is that top recruits can get a 6-figure salary, cushy benefits and a signing bonus from Netflix or Google or Amazon or AirBnB etc. It's still easy to at least pretend to be moral if you can shun Facebook as an employer but don't need to give anything up to do so.


Yeah, but you're assuming they have a counteroffer that's Netflix, Google or Amazon. Sure, some people get 2-3 offers, but I'd bet it's more common to get 1-2. Not out of skill or anything, just because the FAANG hiring process is random and arbitrary.

It’s not just ethics - they have shit wlb and cut-throat culture. If you top it off with mountains of tech debt due to above and brain-dead hiring practices it’s not that surprising they have trouble hiring despite huge compensation.

Why do you hang out with people who don't care about ethics?

Fair question. I'd like to think my close friends are ethical people, although it's not like I've seen them do the trolley problem. I'd categorize the people who would accept a Facebook (or even Palantir) offer in a second as casual acquaintances. As for why I'd be casual acquaintances with people who don't care about ethics, well, sometimes you can hang out with people who you don't trust completely.

I've known several people that would no longer work for Facebook, but the Cambridge Analytica scandal isn't the biggest concern. It's the fact that they are censoring people, even within private groups.

I have a friend that jokingly said (in a private group) that men are vile pigs. We knew she was joking - it was good natured. Yet, Facebook issued her a warning and removed her post and threatened her with a ban. First they came for Alex Jones and I said nothing because I don't like Alex Jones (and think he's insane), but now that the precedent is set that Facebook is the speech police, it will expand to us all (especially with their machine learning advancements that are here and yet to come).

The EFF has a really important article about this that I implore everyone to read[1].

[1] https://www.eff.org/deeplinks/2018/01/private-censorship-not...


For a detailed nuanced piece about how FB handles some of this complexity check this out: https://www.vanityfair.com/news/2019/02/men-are-scum-inside-...

FB has its problems, but I generally find the negative press overstated and wonder if Zuck's approach of engaging with the press and Congress actually backfires (compared to the other companies, which largely ignore them). I appreciate how often he talks to the press to explain what they're trying to do, though.

I also see the Cambridge Analytica scandal as what it is - permissive APIs that were abused and then locked down. Cambridge Analytica is to blame in this for abusing TOS and behaving badly, FB is arguably negligent - but I think the reaction is extreme.

Plus from people I know inside FB there really is a huge funded effort to stop abuse and manipulation via 'integrity' teams. It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.


It’s really easy to win the crowd by calling everyone else biased. When FB is one of the biggest lobbyists in Washington I highly doubt the press has ever been critical enough.

They got by on being the new darling child startup and built up a gargantuan pile of moral debt which they are now fairly paying for.


Wow that article was incredibly relevant and super interesting. Thanks so much for the link!

I have seen cold emails from Facebook and Instagram recruiters recently and they all start on the defensive about privacy, how it's "Zuck's" big thing and how he's taking it seriously. Seems a little desperate.

Does anyone actually believe Zuck's new found interest in privacy? His entire company is built on the sharing of data (even in ways users don't understand).

Anyone who trusts his newfound interest in privacy is indeed a Dumb Fuck.

Honestly I think it's more than just that. Facebook is no longer the cool startup building the world's favorite website. They're a multinational advertising mega-corp, and TBH most people just don't want to work there.

But Apple, Google, Microsoft aren't the cool start-ups either and they are mega-corps, yet people still want to work there. So I don't believe that line of reasoning holds up.

Totally agree. There are dozens of companies that aggregate user information (Google amongst them). Like @bognition said, FB lost its cool a while ago.

A long while ago. In Australia I'd say that 2009 was probably the end of FB being perceived as (at least somewhat) cool. 2008/2009 was when I remember being encouraged to join up by peers. My joke way of deflecting that (because I've never wanted to join) was to say that I'd applied but been rejected for not being cool enough. This reply would often leave people nonplussed: I was ironically mocking the idea of FB being cool (at the time), but I don't think most people got the joke.

Since 2014 yes

Is anyone?

I can only think of SpaceX, but they're not a website.

Yes, but you haven't heard of them yet.

They're not "the cool startup building the world's favorite website" until we've heard of them.

My good dude, once you've heard of them, they are not cool anymore. Cool is exclusive. Cool is mysterious. Cool doesn't care what is popular. When cool goes mainstream, it's not cool anymore! The trendsetters have already moved on.

>Facebook candidates are asking much tougher questions about the company’s approach to privacy, according to multiple former recruiters.

This narrative is highly suspicious.

Zuckerberg openly and repeatedly said that he doesn't care about anyone's privacy for well over a decade[1]. The whole company is built around collecting and selling private information. Why would people who care about privacy interview with Facebook in the first place?

[1] https://www.theguardian.com/technology/2010/jan/11/facebook-...


> Why would people who care about privacy interview with Facebook in the first place?

I believe this may come from the responses to cold emails. A recruiter working for FB presents an offer; the candidate wants to tell them to GTFO, so they highlight the privacy concerns in the response as a way to say thanks, but no thanks.

At least that's what I do when a recruiter working for a company I find morally incompatible approaches me. I reply with something like "The tech stack looks great and my professional experience aligns with what the job description requires. However I don't think I'd feel comfortable working for a <short-term loans | kids gambling | personal data mining> company, but I'm open to hear about similar positions in other areas if you had any in the future."


Mark Zuckerberg does not say he doesn't care about anyone's privacy in the article you cited "openly and repeatedly" or otherwise. I suppose you can infer that from the actions of the company he runs, but your citation does not support what you've said here. Someone reading your comment without reading the entire Guardian article could come away with an incorrect impression of what he's publicly said.

I think that over the past couple of years in particular, the real-world consequences of all of this have really come into the spotlight.

It's one thing to hear a tech CEO talk about something you may not agree with -- many people just categorize it as "a Facebook thing" (as in, huh, maybe I'll try to use their products less) and move on with their day. It's quite another to come to the realization that non-trivial parts of (what many see as) seriously negative political consequences have come from these products and, being fully aware of these, the CEO/company still hasn't meaningfully acted.

And with all the recent publicity (there's a difference between being mentioned in the technology section of a paper, and giving a congressional testimony), pretty much no one can say anymore that they aren't aware of it, or haven't thought about it.


I believe this is pretty true, because I was interviewing with FB and I brought up some of those questions.

Before the scandal broke, I didn't really know much about Zuckerberg's views on privacy. The scandal definitely raised my awareness of the topic.

But the question for me is less about FB's approach to privacy and more about how much Zuckerberg is dictating the company. In other words, how much are FB employees empowered to do what's right and to fix their problems? Empowerment and autonomy are very important to tech talent, and FB is not presenting itself well in this respect.


Maybe they’re just more aware of what Zuckerberg is saying now than what he has said and done in the past.

Perhaps developers now value privacy more on average than they did back then.

Yeah, I think it's really common to downplay this as an “everyone knows” situation but there seemed to be a sea-change where it went from “maybe people see more spam than they used to” to “major world events are impacted”.

They pay well, it's a good brand name to have on your resume, but on principle, I ignore any recruiter from Facebook.

It's a glorified MySpace that exists to build de facto detailed psychological profiles on unsuspecting participants, and it's specifically engineered to manipulate their behavior. No thanks.


Same feeling about the recruiters from Tesla.

It sounds horrible to work there, there are lots of companies that make cars.


"In general, Facebook candidates are asking much tougher questions about the company’s approach to privacy, according to multiple former recruiters."

That made me smile <3


Facebook's brand is tainted enough now that smart engineers don't want any of the bleedover into their personal brand that would come from working there.

How many engineers, in hiring positions, do you know that have a positive opinion of FB?


Ok, but how many engineers use React, Yarn, or GraphQL? Facebook is still leading the way in front-end development, and it's still a net positive to have that name on your resume. Their brand isn't any more tainted than Google, Microsoft, or Amazon.

I will happily profit from the work FB has poured into all three of those technologies and smile knowing they won't make a single cent from me.

Facebook is not lacking for problems in their web tech either. Look at how terribly they're maintaining (or failing to maintain) Flow. It's buggy as hell, and I wish I hadn't chosen it a couple of years ago for my projects.

And as for React, the head of React left FB because of a hostile working environment earlier this year. https://www.cnbc.com/2019/01/17/facebook-manager-quits-after...

Facebook led the way in front end. But for future candidates, the important question is whether they have faith that FB will continue to lead the way.


Sure but no company is going to be perfect and I’d say their track record of maintaining tech is better than Google’s right now.

That is a fair point, but that's not what the topic is about. Just because they contributed to front-end development doesn't make them any less questionable. Not saying I wouldn't want to work there myself; I'm sure they do some good work there.

There's still a much greater chance that you will end up working on some ad tech at Facebook rather than cutting-edge open-source development. As for the brand being no more tainted than the other tech giants, I think the article shows the exact opposite.

I've got good news for you. You can contribute to these projects (and put your contributions on your résumé) without having the Facebook taint. e.g.:

* https://github.com/facebook/react/pulls


It’s a lot easier when you’re getting paid for your time and are surrounded by other people working on the project.

Open source libraries like React should not be attributed to companies. It just so happened that Jordan Walke worked for FB when he wrote the library.

It's not like any of these companies specifically target the creation of such tools as part of their business goals.


Those companies have teams that maintain them and they get tons of work done internally before they’re released. Microsoft has been great for Typescript just because of the brand that’s behind it and the teams that get paid to maintain it. React is the same way.

I think your point about the personal aspect of work is most relevant. I personally would be mortified to work at a company that was always in the news for various scandals and generally being full of shit. Not that I'd assume anyone who works there to be full of shit - just that I wouldn't want my friends making fun of me for working for the Zuck. Facebook just isn't cool anymore in my neck of the woods and there's no social capital in using it or working there as far as I can see.

Yes, I actually have an overall positive opinion of Facebook in terms of engineering talent. Facebook has made some questionable decisions, but I would not automatically assume that everyone who works there is so immoral that you shouldn't hire them.

Facebook AI research also has some of the top AI researchers.


I would hire an engineer from Facebook any day of the week. For the most part the devs are excellent. I would not let my personal opinion of a product stop me from hiring a great candidate.

I know lots of engineers, in hiring positions, that have a positive opinion of FB.

They still do great engineering and someone working there will learn a lot and take those skills to their next job.


There are engineers who were already working at Facebook when all the negative press started piling up.

Then there are the engineers who went to work for them in spite of that bad press.

Given the revelations that have come to light, that is a distinction worth considering.


What I know from friends at Facebook is that there is a mass exodus at the moment, at the same time mass hiring to compensate. Overall pretty uncomfortable so most of them consider leaving to other places.

FWIW, I think this past year everyone has been expecting Uber, Lyft, Airbnb and Pinterest to unlock a flood of money. Anecdotally, a researcher friend of mine turned down FB to work in Snap Inc. research, and someone else's wife also turned down FB to work at Lyft, specifically to get in before the IPO and for no other real reason. So as much as it makes a good story about ethics or privacy concerns, I think it's lots of things, but probably very little about Cambridge Analytica.

If US "Defense" is any indicator, just pay more money.

Someone is willing to kill people for more money.


The general feel I get from most of my engineer friends is that Facebook's product is definitely not inspiring and doesn't "make the world a better place" (whatever that means).

But the consensus is that given the right amount of money, they would all accept an offer from them. And Facebook is known to pay very well.


It's pretty true that pay trumps all. But Google is generally pretty good at matching Facebook's offers, so when choosing between FB and Google, or any of FAANG, pay is usually less of a concern. Being considered "not inspiring" and "evil" is definitely not helping FB close candidates.

AMZN<FB<GOOG<Wall Street in my experience for AI skills. #OneDatapoint

You seem to be lost. Twitter is down the hall, second door on the left.

W/r to comp, that's my data point, don't like? Then don't like...

FB has a salary cap below GOOG, AMZN is just cheap and prefers people leaving to countering. Then AMZN makes a big deal of hiring such people back for what they should have countered, proudly calling them "boomerangs."


This is the most encouraging thing I've heard recently. We might be doing better culturally, and more resilient, than it sometimes seems.

Wow! Every so often, I see something that makes me feel hopeful about the future. This is one of those times.

I feel bad for the Facebook employees below middle management level, but I also really would like some harsh penalties directed at Zuckerberg et al.

I hate that huge corporations can just kind of shrug and say "oops" to egregious crimes, without any meaningful consequences. Same with corporations not paying tax while still benefiting from the stable society created by those taxes.


If you're referring to the Cambridge Analytica Scandal, Facebook didn't break any US laws. While it may have been morally wrong (however, if you accept that Facebook's business otherwise wrt collecting user information is okay, I don't see how you could see this use of that information as any less okay), there weren't yet any US laws on the books to throw at them.

There are real signs of a moral change in this upcoming generation. I suspect a lot of people are going to be very surprised when that wave comes in and breaks.

The only thing that I see is a complete loss of faith in the rule of law. Hence the Twitter lynch mobs.

It's a double edged sword. They seem to live in thicker bubbles. They don't seem particularly tolerant or forgiving.

While I share the same hope, don't get too excited. There are billions in the rising ranks who don't care, and many may be from countries that sit lower on Maslow's hierarchy of needs than first-world countries.

I wouldn't get too optimistic. If the types of candidates that can make a positive social impact on the direction of the company don't join the company, that leaves Facebook with only those candidates joining that will continue to make it commercially successful without the positive social impact.

Basically, if you're unhappy with a Facebook with talented do-gooder employees, just wait until we've got a Facebook with only talented employees that don't care about do-gooding.


Wahoo! This is great for people who are interested in working for FB who can now have more bargaining room.

So had AOL, Friendster and MySpace, I hear. Facebook has different DNA than Google: it's built on a foundation of voyeurism, with text entry boxes cut and pasted from MySpace, who cut and pasted them from Friendster, and the list goes on. You don't cut and paste algorithmic search technology.

My career is and has always been defined by a refusal to work for companies I'm not ethically comfortable with. Hopefully it becomes more of a norm.

This is a luxury and a privilege many people can't afford, and I am right there with you in exercising it consciously.

Correction: Facebook has struggled to hire talent [from top universities] since the Cambridge Analytica scandal.

Everyone knows if you didn’t go to a top school you don’t matter /s.

Anecdotally nobody who went to my undergrad got a new grad offer and then declined it at FB. Because most of them just can’t get offers there, and if they do they can’t get equivalent ones.


I tell everyone I can to avoid them, just based on my own personal experience. It was by far the worst process of any company I interviewed with.

Most companies go through this transition as they fill their niche, the raw eng talent starts to get hidden under an empire-building layer of middle management.

They've been able to hire some middle management to mind what remains. Replacing change-makers with reject managers from Netflix, etc. Not necessarily bad folks, just not going to be doing a lot of work.


A manager from fb reached out about a position, I told him to take a good look at what he is supporting with his work, and that I might reconsider if the leadership ever sees the light.

While many people may not care about a company's morals, they definitely will care about the reputation of a company in terms of how the company is perceived by the general population and others in the industry.

Facebook has begun scaling back on hiring since the Cambridge Analytica scandal.

An alternative headline that would work just as well. People are still applying en masse to work at Facebook.

Among top schools, Facebook’s acceptance rate for full-time positions offered to new graduates has fallen from an average of 85% for the 2017-2018 school year to between 35% and 55% as of December.

A fall in acceptance rates may mean saturation in needed roles at Facebook.


I think you’re misinterpreting the term acceptance rate. It’s the number of extended offers that were accepted, not the number of applicants who were extended offers.

> A fall in acceptance rates may mean saturation in needed roles at Facebook.

No. If Facebook didn't need those people, they wouldn't have extended an offer to them in the first place, so it wouldn't have affected acceptance rates.


Then Facebook shouldn't be giving out offers or interviewing so many people?

I think these are primarily offers given to interns. Otherwise, why go through the interview process if you don't want to work there? It may be a better place to go for internship than for full time.

Then again, maybe their compensation packages are no longer competitive. It's possible.


> their compensation packages may no longer be competitive

They're competitive cash-wise. But the career hit is unmistakable.

(My friends who had other options eventually got tired of carrying around reputation that comes with that place. If you have no other options, that's one thing. But not everyone loves broadcasting that fact.)


There is no career hit associated with Facebook. That’s just ludicrous wishful thinking. People love to complain about Goldman Sachs too, but that still looks good on a resume.

> There is no career hit associated with Facebook

Having witnessed it personally, yes there is. It's not evenly distributed. And if you do something amazing later on, people will look past it. But it's there in a way it isn't for e.g. Apple or Google or Amazon.

> People love to complain about Goldman Sachs too, but that still looks good on a resume

Goldman is universally respected within finance. Facebook is not universally respected within tech.

(Neither Goldman nor Facebook care about public opinion, outside of managing political risk, since neither sells its product to the mass market [1].)

[1] Goldman recently started doing this, though


Yes, I've seen it too. As you say, it's not evenly distributed, and it also depends on other factors such as when you started working for them and what your role was.

But there's definitely some amount of stigma associated with having Facebook on your resume. The interesting question is whether that's temporary, or whether it will become even more pronounced over time.

I could see it going either way.


Is there a scenario similar to Facebook's in finance, right now or historically?

Like Wells Fargo, or any of the folks in this list: https://www.investopedia.com/articles/investing/101515/3-big...

I'd be hesitant about someone who'd work for Madoff.


Personally, I wouldn't overlook someone who worked at Facebook. I don't really like them as a company, but it doesn't mean everyone who works there is a bad actor.

Yea, that’s a dismal stat. Roughly, it means smart people don’t want to work for you.

In the current climate of a talent shortage in tech, and especially in machine learning/deep learning, there is really no excuse for going to FB.

I’ve recently left a CTO job and am going to a large software thing. Facebook was the one company I didn’t consider.

Facebook being “the past” played an equal role to “Facebook being evil” in my decision.


If there will be fewer people who go to work at Facebook who care about privacy, that seems like bad news?

I don't think that people who care about privacy could possibly affect what Facebook does by working there.

Why not? Tech firms work collaboratively and even new employees participate in (some) decision-making.

Making a decision that limits the growth of the core product means that you are negatively impacting revenue, i.e., career suicide. This isn't a government (ideally) trying to prevent conflict and unrest; this is just a company making money.

It's the eternal debate: what has more impact? Employees joining the company and making changes from the inside? There, you always run the risk of the employer saying: "See! No need to change! There's no problem, people are still applying and we're filling all our positions easily!"

Or the employer realizing they can't hire anyone good anymore and decide to make real changes in order to be able to attract the right talent?

It's hard to say which one has the most impact. At the end of the day, management needs to be on board to make changes.


Facebook's entire business model depends on being very intrusive into people's privacy. It is extraordinarily rare for non-board members to be able to affect a company to the extent of actually altering its business model.

Hmm... the headline is in direct contradiction with what I read on Blind. People are still flocking to Facebook's gates for an offer.

Ok, but Blind is filled with the type of people who would fit right in at FB.

Ouch, there's honestly no worse insult than being compared to the average Blind user.

Ha, the first thing I thought of when I saw a "Facebook struggling to hire" post was Blind. If any one reason is causing people not to interview with Facebook, I would argue it was the weekly burnout thread that was happening on Blind last year.

Oh, interviewing is one thing. I would advise everyone to interview with Facebook, especially when you don't want to work there (less pressure). If you get an offer, it would most likely be pretty generous so you can further use it as leverage at the companies you want to work.

But yeah, Facebook is the new Amazon when it comes to working people to death.


The difference being, the very best, with multiple offers, are choosing another path.

It's like the very best high school students with offers from Harvard, Princeton and Yale: suddenly Facebook is being treated as if it's Dartmouth or Cornell instead of Princeton, left with the leavings of the very best instead of having its pick. Still very talented people, to be sure, but not the "best."


> The difference being, the very best,...

... at doing LeetCode all day.


how do we know that the ones no longer seeking employment are the very best and not just the ones that are most woke?

> Among top schools, such as Stanford, Carnegie Mellon and Ivy League universities, Facebook’s acceptance rate for full-time positions offered to new graduates has fallen from an average of 85% for the 2017-2018 school year to between 35% and 55% as of December, according to former Facebook recruiters. The biggest decline came from Carnegie Mellon University, where the acceptance rate for new recruits dropped to 35%.

It's entirely possible that the 35-55% of those that are still accepting offers at FB are the most talented of the people that have interviewed at FB and gotten offers.

I know a lot of great engineers, woke and politically indifferent. The most effective ones that I've worked with are the least politically engaged because they are more heavily engaged with building things than politics.

