Ex-Google-Search engineer here, having also done some projects since leaving that involve data-mining publicly-available web documents.
This proposal won't do very much. Indexing is the (relatively) easy part of building a search engine. CommonCrawl already indexes the top 3B+ pages on the web and makes it freely available on AWS. It costs about $50 to grep over it, $800 or so to run a moderately complex Hadoop job.
(For comparison, when I was at Google nearly all research & new features were done on the top 4B pages, and the remaining 150B+ pages were only consulted if no results in the top 4B turned up. Running a MapReduce over that corpus was actually a little harder than running a Hadoop job over CommonCrawl, because there's less documentation available.)
The comments here that PageRank is Google's secret sauce also aren't really true - Google hasn't used PageRank since 2006. The ones about the search & clickthrough data being important are closer, but I suspect that if you made those public you still wouldn't have an effective Google competitor.
The real reason Google's still on top is that consumer habits are hard to change, and once people have 20 years of practice solving a problem one way, most of them are not going to switch unless the alternative isn't just better, it's way, way better. Same reason I still buy Quilted Northern toilet paper despite knowing that it supports the Koch brothers and their abhorrent political views, or drink Coca-Cola despite knowing how unhealthy it is.
If you really want to open the search-engine space to competition, you'd have to break Google up and then forbid any of the baby-Googles from using the Google brand or google.com domain name. (Needless to say, you'd also need to get rid of Chrome & Toolbar integration.) Same with all the other monopolies that plague the American business landscape. Once you get to a certain age, the majority of the business value is in the brand, and so the only way to keep the monopoly from dominating its industry again is to take away the brand and distribute the productive capacity to successor companies on relatively even footing.
I think it is possible to make a way, way better search engine, because Google Search is no longer as good as it used to be, at least for me.
I can no longer find anything remotely good quality, I discover new and quality stuff from social media like Twitter and HN.
The search results seem too general and too mainstream. Nothing new to discover, just a shortcut to a few websites: Reddit and StackOverflow for more techie things, and Wikipedia and a few mainstream news websites for the rest.
I usually end up searching HN, Reddit, or StackOverflow directly, as the result quality is better and I can easily get specific. Getting specific is harder on Google because it quite often omits or misinterprets my search query keywords.
I see comments like this all the time. Am I alone in that search results, for me, have gotten significantly _better_ over the last couple of years?
I can't help but think it's partially due to people using tools _specifically designed_ to make Google's job harder (FF SandBoxes, uBlock, etc) and not understanding the implications of using them... and then blaming Google for returning "bad" results.
The reason for that is that Google's building for a mainstream audience, because the mainstream (by definition) is much bigger than any niche. They increase aggregate happiness (though not your specific happiness) a lot more by doing so.
It's probably possible to build a search engine for a specific vertical that's better than Google. However, you face a few really big problems that make this not worthwhile:
1) Speaking from experience, it's very difficult to define what "better" means when you don't have exemplars of what queries are likely and what the results should be. The reason search engines are a product is that they let us find things we didn't know existed before; if we don't know they exist, how can we tweak the search engine to return them?
2) People go to a search engine because it has the answers for their question, no matter what their question is. If you had a specific search engine for games, and another for celebrities, and another for flights, and another for hotels, and another for books, and another for power tools, and another for current events, and another for technical documentation, and another for punditry, and another for history, and another to settle arguments on the Internet, then pretty soon you'd need a search engine to find the appropriate search engine. We call this "Google", and as a consumer, it's really convenient if they just give us the answer directly rather than directing us to another search engine where we need to refine our query again.
3) Google makes basically 80% of their revenue from searches for commercial products or services (insurance, lawyers, therapists, SaaS, flowers, etc.) The remainder is split between AdSense, Cloud, Android, Google Play, GFiber, YouTube, DoubleClick, etc. (may be a bit higher now). Many queries don't even run ads at all - when was the last time you saw an ad on a technical programming query, or a navigational query like [facebook login]? All of these are cross-subsidized by the commercial queries, because there's a benefit to Google from it being the one place you go to look for answers. If you build a niche site just to give good answers to programming queries or celebrity searches or current events, there's no business model there.
> It's probably possible to build a search engine for a specific vertical that's better than Google.
Funny, I don't disagree with this, but my perception has been that Google seems to detect when I've switched roles from one type of programmer to another. I don't know if that's organic from the topics I'm looking up or not, but if I'm looking up a generic string search, it seems to return whatever language I've been searching for recently. (very recently in fact)
My point is, it seems like the search engine intuitively understands my "vertical" already. Maybe it's just because developer searches are probably pretty optimized.
I think it's totally possible; two examples already:
Google Ads lets (or used to let?) you target by "behaviour" vs "in-market". They can tell the difference between someone who is passionate about beds, maybe involved in the bed business (behaviour), and the people who are making the once-in-a-decade purchase of a bed (in-market).
Google can tell devices apart on the same Google account and keep separate search threads coherent. I might be programming on my desktop making engineering searches while at the same time I'm googling memes on my phone, both logged into the same account.
Sure, they have a business reason to do exactly what they do, but I think as people grow up they specialize, and the general stuff that fits everybody becomes useless. Google tries to personalize search results, but so far that has yielded echo chambers, not personalized discoveries.
I can't find better products by searching Google; I can only find the best-spammed or most-promoted products.
The fact that I am getting low-quality service while Google is printing money means that there is a place for a good service, and if that service cannot emerge due to Google's practices, it probably means that the regulators need to take action.
Or maybe the search is dead, long live social media.
The gist is, I am not happy with a service, but the company that makes that product makes a lot of money. I can't tell if I am an anomaly or if other people feel the same way, because Google is a monopoly; maybe the regulators should make it possible to compete with Google and see if there's space for a better service.
Yes yes, I am the product but I am the product only if I am happy with the stuff I'm getting in return.
If there were viable alternatives, people would shift over time.
If I type in “<name> Pentagon” on Google, the first link is LinkedIn. DuckDuckGo doesn’t even list it at all. There are countless examples where DuckDuckGo just can’t find basic information. DDG is just unreliable, beyond its silly name.
I'm always confused by this. I have ddg as the default on my home computer and Google is the default on my work. So I'm constantly using both. There aren't really any apparent differences to me in results. I'm not sure what everyone else is searching, but I search everything from how to spell a word that I should definitely know all the way to niche topics in physics.
Maybe it's because I don't have tracking enabled in Google (I'm not logged into my account when at work) and opt out of tracking where I can. Maybe this is the difference between the lack of difference I see and the huge difference so many others see. But I still don't see it as an issue because I generally find what I'm looking for with one search. Might be the third item, but that's not an issue to me.
I hear this so often that I assume something has to be different. I'm curious if others have ideas as to what it might be, or if I correctly identified them.
I try to use and like DDG, but the results just aren't as good. For example, it seems to be completely unaware of Docker Hub. Like, pages from that entire subdomain never show up. I can search "Docker hub" and it doesn't even show up.
I agree; unfortunately the search is really, really sub-par and, like others said, frequently doesn’t find basic things no matter how specific the keywords I use are.
I feel it might even have been better at one stage?
Unless you're searching in Russian, DDG is mostly a skin for Bing search results anyways. The major players in the search engine space are Google, Bing/MSN, Yandex, and Baidu - with the latter two being mostly language-specific.
I find DDG has pretty acceptable or even good results most of the time.
The real power is in the "bangs", though; you can use the `!` to immediately jump to the first search result without seeing a search page, or use `!g` to switch to Google for this particular query, among others. It enables a sort of power-user usage that one wouldn't get with Google.
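Bang dispatch is conceptually simple. Here is a minimal sketch of the idea in Python (the URL templates and the tiny bang table are assumptions for illustration, not DDG's actual implementation):

```python
# Hypothetical bang-style query dispatch: if a known "!bang" token
# appears anywhere in the query, redirect the remaining terms to
# that engine; otherwise fall through to normal search.
BANGS = {
    "!g": "https://www.google.com/search?q={}",
    "!w": "https://en.wikipedia.org/wiki/Special:Search?search={}",
}

def dispatch(query):
    parts = query.split()
    for i, tok in enumerate(parts):
        if tok in BANGS:
            rest = " ".join(parts[:i] + parts[i + 1:])
            return BANGS[tok].format(rest.replace(" ", "+"))
    return None  # no bang found: handle as a regular search

print(dispatch("!w search engine"))
```

The bang can appear at the start or end of the query; real DDG supports thousands of bangs, but the dispatch logic is this simple at heart.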
I'm saying that DDG can be "good enough", and that not having to click around on a results page can save you time if you know what you're doing.
I understand that for some people that's not enough of a time savings to make a difference, but I know DDG well enough to be able to `!` things and almost always immediately get to a successful result. I treat it as an extension of my brain at this point.
The !bang feature I use the most is !w for Wikipedia; however, I don't use Wikipedia enough to justify making it my default search engine in the nav bar.
Neither DDG nor Google returns any LinkedIn results for me unless I also add LinkedIn to the search, in which case I get the same results from both search engines.
Google knows what you want before you even ask. You might find that convenient, I find it unsettling.
I guess it’s not as bad as Facebook; at least Google doesn’t spoon feed you.
I've been using Bing for the past few months; it's not great or terrible but is it "viable" enough for people to shift to over time? Or is it not viable because it's backed by a major corporation?
I'm sure there are search quirks with each engine but I've seen issues with Google too and yet it's the "devil we know" ... so people unconsciously work around them.
I've used Bing for years now. The only time I go back to Google is if I'm searching for something super specific (normally programming related). Bing takes care of most of my search needs.
The data was about 55TB of compressed HTML last I looked, so that's about 70 r5a.24xlarge instances (768 GiB RAM each), each going for $5.424/hour, so about $380/hour or $275K/month. That's not cheap, and definitely not something you'd put on your personal credit card, but it's well within the range of a seed-funded startup. Sizes may vary a bit depending upon the exact index format, but that should be a rough ballpark. With batch jobs being so cheap, you could experiment a bit with your own finances and then seek funding once you can demonstrate a few queries where your results are better than Google's. If you actually have a credible threat to Google, you'll have investors breathing down your neck, because it's a $130B market.
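As a sanity check on that back-of-envelope, here's the arithmetic spelled out (the instance count, RAM size, and on-demand rate come from the estimate above; AWS pricing varies by region and over time, so this is illustrative only):

```python
# Back-of-envelope check of the in-memory index cost estimate.
CORPUS_TB = 55          # compressed HTML in the crawl
RAM_GIB = 768           # r5a.24xlarge memory per instance
INSTANCES = 70
HOURLY_RATE = 5.424     # assumed on-demand $/hour per instance

total_ram_tib = INSTANCES * RAM_GIB / 1024     # enough RAM to hold ~55 TB compressed
hourly_cost = INSTANCES * HOURLY_RATE
monthly_cost = hourly_cost * 24 * 30

print(f"{total_ram_tib:.1f} TiB RAM, "
      f"${hourly_cost:,.0f}/hour, ${monthly_cost:,.0f}/month")
```

The 70-instance figure falls out of dividing the corpus size by per-instance RAM; the monthly figure assumes you keep the fleet up continuously, which is exactly why batch experiments are so much cheaper than serving.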
API access to either the unranked or ranked index in memory wouldn't do anything useful, BTW. To have a viable startup you need something a lot better than Google, which means that you need algorithms that do something fundamentally different from Google, which means you need to be able to touch memory yourself and not go through an API for every document you might need to examine. Remember, search touches (nearly) every indexed document on every query - if you throw in 200ms request latency for 4B documents your request will take roughly 25 years to complete.
Knowledge Graph is already public - it was an open dataset before it was bought by Google, and a snapshot of its state at the point Google closed it to further additions is still hosted by Google.
"Remember, search touches (nearly) every indexed document on every query" - wait, why does that happen?
Doesn't it only touch ones with at least one of the search terms in, or stemmed/varied words relating to some of the terms? And does that via an index?
I struggled with how to word that in a way that's true, understandable, and doesn't give away any proprietary information. I added "indexed" to clarify, but I didn't fix up the numbers, so they're likely an overestimate.
Basically, yes, it uses an index and touches only documents that appear in one of the relevant posting lists. However, after stemming, spell-correcting, synonyms, and a number of other expansions I'm not at liberty to discuss, there can be a lot of query terms that it needs to look through, covering a significant portion of the index. Each one of these needs to be scored (well, sorta - there are various tricks you can use to avoid scoring some docs, which again I'm not at liberty to discuss), and it's usually beneficial to merge the scores only after they have been computed for all query terms, because you have more information about context available then.
There's a reason Google uses an in-memory index: it gives you a lot more flexibility about what information you can use to score documents at query time, which in turn lets you use more of the query as context. With an on-disk index you basically have to precompute scores for each term and can only merge them with simple arithmetic formulas.
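A toy illustration of that two-phase shape - score every candidate document per term, then merge only once all per-term scores exist, so the merge can see cross-term context. Everything here (the postings layout, the expansion list, the coverage bonus) is invented for illustration and is not Google's actual scoring:

```python
from collections import defaultdict

# Hypothetical tiny index: term -> {doc_id: term_frequency}.
# Real postings carry far richer per-hit data than a frequency.
postings = {
    "search":  {1: 3, 2: 1},
    "engine":  {1: 2, 3: 4},
    "engines": {3: 1},        # pretend expansion of "engine"
}

def score_query(terms, expansions):
    # Phase 1: a per-term score for every candidate doc, folding in
    # expanded variants (stems, synonyms, spell-corrections, ...).
    per_term = defaultdict(dict)              # doc -> {term: score}
    for term in terms:
        for variant in [term] + expansions.get(term, []):
            for doc, tf in postings.get(variant, {}).items():
                per_term[doc][term] = max(per_term[doc].get(term, 0), tf)
    # Phase 2: merge only after all terms are scored, so the merge
    # can use context (here, a crude bonus for matching every term).
    results = {}
    for doc, scores in per_term.items():
        coverage_bonus = 2.0 if len(scores) == len(terms) else 1.0
        results[doc] = coverage_bonus * sum(scores.values())
    return sorted(results.items(), key=lambda kv: -kv[1])

print(score_query(["search", "engine"], {"engine": ["engines"]}))
```

With an on-disk index, phase 2 would be limited to whatever per-term numbers you precomputed; keeping everything in memory is what makes context-aware merging affordable at query time.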
> Each one of these needs to be scored (well, sorta - there are various tricks you can use to avoid scoring some docs, which again I'm not at liberty to discuss)
Google simply has the best search product. They invest in it like crazy.
I’ve tried bing multiple times. It’s slow, it spams msn ads in your face on the homepage. Microsoft just doesn’t get the value of a clean UX.
DuckDuckGo results were pretty irrelevant the last time I tried them. Nothing else comes close to Google's usability. To make the switchover, it has to be much, much better than Google. Chances are that if something is, Google will buy them.
I disagree. It works great for me. Maybe once every few days I will use !g when I can't find something, but I rarely end up finding it on Google either.
I read somewhere that someone used a skin to make ddg look identical to Google. After doing that, they never even thought about using Google again.
One thing to keep in mind when comparing DuckDuckGo to Google is that people do not use Google with an alternative backup in mind. When you DDG something and it fails, you can always switch to google.
But what about when Google fails? Unlike DDG, there is no culture of switching between search engines when googling. Typically, you'll just rewrite the query for google. And as rewriting the query is an entrenched part of googling, you are less likely to notice this as a failure. It is this training that's the core advantage nostrademons points out.
This right here is why I don't understand people who complain about DDG's search results. If you simply make the commitment to not use Google, for whatever reason that may be, then using DDG becomes exactly the same process of rewriting search queries until you get the thing you're looking for.
I've been using DDG exclusively since I was a contractor at Google years ago and have never had a problem finding things with it...
I've definitely noticed a decline in quality of Google results over the past few years in particular. I don't know if that's because SEO has taken control of the results or if Google's algo is shoving lower quality up higher for revenue, but it's become difficult.
Using a bit of Google-fu I'm usually able to find what I need quickly but it's still more of a hassle than it used to be.
There's exponentially more background noise than there used to be.
It's easier to return the most relevant 10 results when there's only 10 thousand options than when there's 10 trillion options with 10 thousand new ones created every day.
My guess is that it's because Google Search now also has to cater to queries from Assistant. Being required to handle web, mobile, and assistant probably necessitated tradeoffs in quality of one over another.
More generally I feel like as the company gets bigger it just gets much harder to handle all the complexity and keep things focused.
I don't know why you're getting downvoted, because the quality has 100% tanked over the last few years. I agree that there may be some selection bias between us, but it's at least got some of my normie non-technical friends commenting about it, so it's not completely without merit. I have a couple of theories, one of them is also a warning.
First, I think search results at Google have gotten worse because people are not actually good at finding the best example of what they're looking for. People go with whatever query result exceeds some minimum threshold. This means when Google looks at what people "land on" (e.g. something like the last link of 5 they clicked from the search page, and then which they spend the most time on according to that page's Google Analytics or whatever), they aren't optimizing for what's best, they're optimizing for what is the minimum acceptable result. And so what's happening is years and years of cumulative "Well, I suppose that's good enough" culminating in a perceptible drop in search result quality overall.
Second, Google has clearly been giving greater weight to results that are more recent. You'd think this would improve the quality of the results which "survive the test of time" but again, Google isn't optimizing for "best" results, they're optimizing for "the result which sucks the least among the top 3-5 actual non-ad results people might manage to look at before they are satisfied". So this has the effect of crowding out older results which are actually better, but which don't get shown as much because newer results have temporal weight.
My warning is this, too, which you've surely noticed: Google search has created a "consciousness" of the internet. In the 90s, digitizing something was kind of like "it'll be here forever", and for some reason people still today think putting something online gives it some kind of temporal longevity, which it absolutely does not have.

I did a big research project at the end of the last decade, and I was looking for links specifically from the turn of the century. Even in 2009, they were incredibly hard to find and suffered immensely from bitrot, with links not working, and I had to lean heavily on archive.org.

Google has been and is amplifying this tremendously, by twiddling the knob to give more recent results a positive weight in search. Google makes a shitload of money from mass-media content companies (e.g. Buzzfeed) and whatever other sources meet the minimum-acceptable threshold for some query, versus linking to some old university or personal blog site which has no ads whatsoever. So the span of accessible knowledge has greatly shrunk over the last few years. Not only has the playing field of mass media and social media companies shrunk, but the older stuff isn't even accessible anymore. So we're being forced once more into a "television" kind of attention span, by Google, because of ads.
I find the single hardest thing to search for these days is anything more than a few months old on YouTube... They hate older videos, it feels like. Beyond that, I keep seeing suggestions on new content from years ago... it's just weird.
I know it's not Google proper, but I'd guess a significant number of their searches are specific to YouTube.
I believe they try to put newer content first in order to make the distribution of views fairer. If you order results by popularity on YT, you will see that it uses just an "order by view count desc" (no relationship to like/dislike ratio), which is bad because it keeps some not-so-good-quality videos from YouTube's early years popular.
I don't necessarily agree. The hard part of search is building the index and differentiating _real_ promotion from the _fake_. There's a lot of SEO manipulation that Google does a good job avoiding.
Webspam is a really big problem, yes. It's very unlikely that you'd be able to catch up or keep up in that regard without Google's resources.
Building the index itself is relatively easy. There are some subtleties that most people don't think about (eg. dupe detection and redirects are surprisingly complicated, and CJK segmentation is a pre-req for tokenizing), but things like tokenizing, building posting lists, and finding backlinks are trivial - a competent programmer could get basic English-only implementations of all three running in a day.
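As a rough illustration of how basic those three pieces are, here are minimal English-only versions of each - tokenizing, posting lists, and backlinks - with exactly the caveats above (no dedup, no redirects, no CJK segmentation):

```python
import re
from collections import defaultdict

def tokenize(text):
    # Naive English-only tokenizer: lowercase alphanumeric runs.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_postings(docs):
    # docs: {doc_id: text} -> {term: sorted list of doc_ids}
    postings = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            postings[term].add(doc_id)
    return {t: sorted(ids) for t, ids in postings.items()}

def build_backlinks(links):
    # links: {source: [target, ...]} -> {target: [sources]}
    backlinks = defaultdict(list)
    for src, targets in links.items():
        for tgt in targets:
            backlinks[tgt].append(src)
    return dict(backlinks)

docs = {1: "Search engines index the web", 2: "The web is big"}
print(build_postings(docs)["web"])                      # [1, 2]
print(build_backlinks({"a": ["b"], "c": ["b"]})["b"])   # ['a', 'c']
```

That's the whole "day's work" in miniature; the hard parts are everything this leaves out.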
I am not even that good of a programmer, and I agree with you that the index is relatively trivial. Other major issues, besides fighting spam:
- Hardware infrastructure and data center presence for extremely fast search from anywhere in the world.
- Near real-time search suggestions.
- Personalized search results based on past searches + geolocation.
- Search that gets instant results without having to go to a website.
Just to name a few. Google Search is the gold standard of a search engine, not because it's Google or because they have been around for a long time and the brand name sticks (I am sure that helps too), but for the simple fact that no search engine is even remotely close to being as good as Google. I have tried them all, more or less, and given them a shot. They are just not good at all.
I also don't understand the hate towards Google being in charge of so many products so many people use, e.g., Mail, Maps, Chrome, Android, Docs (to name a few). It's simply because they are damn good at it. If it's a crime to make a product so good that people continue to use it, then I don't know what else people are supposed to do. It's as if we are asking Google to make shit products; I just don't understand the reasoning.
It has nothing to do with the number of products, it’s what they do with their influence over the market. See AMP and incompatibilities between Gmail & IMAP, for example.
You're concentrating on the literal interpretation of the phrase “give access to the index”. This is a non-technical article that didn't go into details; just read it as “give access to the index & ranking”.
>The comments here that PageRank is Google's secret sauce also aren't really true - Google hasn't used PageRank since 2006.
That's quite a claim considering they were reporting PageRank in their toolbar until 2016, and toolbar PageRank was visible in Google Directory until 2011.
Are you talking about PageRank from the original patent?
Actually the omnibox made it really easy to switch to ddg. With an occasional fallback to google.
I have no problem with advertising etc., but the tracking and selling of data is such an idiotic thing. We as consumers should have a global internet law, and be reimbursed for data leaks or usage outside the scope of the application.
By no problem with ads I mean the original Google ads: it was very clear they were ads, and they were not intermingled with the results. Having to scroll down past ads for the results is nuts. I will click ads if they’re relevant, regardless of whether they’re on the right or in the results. So please stop supporting this fraud against advertisers.
I think that the "fallback to Google" might actually tend to diminish consumer confidence in DDG. Every time you use it, you basically say to yourself "$newRiskyStrategy fails sometimes, we still need $oldReliableStrategy".
Instead, what might help DDG is a plugin that detects when you go past the first or second page of Google search results, and suggests that you might get better results on DDG. It's a little intrusive, but the mental nudge becomes "$oldReliableStrategy has flaws, try $newRiskyStrategy". You get a positive emotional interaction with DDG rather than "forcing" yourself to use it all of the time and "failing back" to Google.
PageRank is a synonym for link juice. So when you say Google hasn't used PageRank since 2006, can you confirm that you are talking about link juice as opposed to the old toolbar representation of PageRank? And assuming you do mean link juice, why do links still work so well for SEO?
>"Indexing is the (relatively) easy part of building a search engine. CommonCrawl already indexes the top 3B+ pages on the web and makes it freely available on AWS."
Interesting I would have thought that crawling at this scale and finishing in a reasonable amount of time would still be somewhat challenging. Might you have any suggested reading for how this is done in practice?
>"It costs about $50 to grep over it, $800 or so to run a moderately complex Hadoop job." Curious what type of Hadoop job you might be referring to here. Would this be building smaller, more specific indexes or simply sharding a master index?
>"Google hasn't used PageRank since 2006." Wow that's a long time now. What did they replace it with? Might you have any links regarding this?
Crawling is tricky but it's been commoditized. CommonCrawl does it for free for you. If you need pages that aren't in the index then you need to deal with all the crawling issues, but its index is about as big as the one most Google research was done on when I was there.
$50 gets you basically a Hadoop job that can run a regular expression over the plain text in a reasonably efficient programming language (I tested with both Kotlin and Rust and they were in that ballpark). $800 was for a custom MapReduce I wrote that did something moderately complex - it would look at an arbitrary website, determine if it was a forum page, and then develop a strategy for extracting parsed & dated posts from the page and crawling it in the future.
A straight inverted index (where you tokenize the plaintext and store a posting list of documents for each term) would likely be more towards the $50 end of the spectrum - this is a classic information retrieval exercise that's both pretty easy to program (you can do it in a half day or so) and not very computationally intensive. It's also pretty useless for a real consumer search engine - there's a reason Google replaced all the keyword-based search engines we used in the 80s. There's also no reason you would do it today, when you have open-source products like ElasticSearch that'd do it for you and have a lot more linguistic smarts built in. (Straight ElasticSearch with no ranking tweaks is also nowhere near as good as Google.)
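For concreteness, here is that classic exercise in miniature - an AND query answered by intersecting sorted posting lists. Note there is no ranking at all, which is exactly why a plain inverted index isn't a consumer search engine on its own (the tiny postings table is made up for illustration):

```python
def intersect(a, b):
    # Linear merge of two sorted posting lists.
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# Toy index: term -> sorted list of doc_ids containing it.
postings = {
    "search": [1, 2, 5, 8],
    "engine": [2, 3, 5, 9],
    "open":   [5, 9],
}

def and_query(terms):
    # Intersect shortest-first to keep intermediate results small.
    lists = sorted((postings.get(t, []) for t in terms), key=len)
    result = lists[0]
    for lst in lists[1:]:
        result = intersect(result, lst)
    return result

print(and_query(["search", "engine", "open"]))  # [5]
```

Every document that survives the intersection is returned unordered; deciding which of them to show first is the part that took Google twenty years.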
Okay, this is a relatively serious proposal to require Google to allow API access to its search index, with the premise that it would democratize the search engine ecosystem. There are some issues with the regulations he proposes (you have to allow throttling to prevent DDoS attacks, and you can't let anyone with API access add content to prevent garbage results), but it's roughly feasible.
The main problem is, I think the author is wrong about what Google's "crown jewel" is. Yes, Google has a huge index, but most queries aren't in the long tail. Indexing the top billion pages or so won't take as long as people think.
The things that Google has that are truly unique are 1) a record of searches and user clicks for the past 20 years and 2) 20 years of experience fighting SEO spam. 1 is especially hard to beat, because that's presumably the data Google uses to optimize the parameters of its search algorithm. 2 seems doable, but would take a giant up-front investment for a new search engine to achieve. Bing had the money and persistence to make that investment, but how many others will?
> 1) a record of searches and user clicks for the past 20 years
From what I can tell, Google cares a lot more about recency.
When I switch over to a new framework or language, search results are pretty bad for the first week, horrible actually as Google thinks I am still using /other language/. I have to keep appending the language / framework name to my queries.
After a week or so? The results are pure magic. I can search for something sort of describing what I want and Google returns the correct answer. If I search for 'array length' Google is going to tell me how to find the length of an array in whatever language I am currently immersed in!
As much as I try to use Duck Duck Go, Google is just too magic.
But I don't think it is because they have my complete search history.
Also people forget that the creepy stuff Google does is super useful.
For example, whatever framework I am using, Google will start pushing news updates to my Google Now (or whatever it is called on my phone) about new releases to that framework. I get a constant stream of learning resources, valuable blog posts, and best practices delivered to me every morning!
> Also people forget that the creepy stuff Google does is super useful.
For the same reasons you’re exalting them, I have non-technical friends who asked me how Google knows so much about them (and suggestions on how to avoid it) because they found it too creepy.
I don’t think people forget Google’s results are useful; some just think they’re more creepy than valuable. You seem to have picked your side in that (im)balance, and other people prefer the other side.
There’s also the relevant consideration that no matter how useful they may be, they should have no right to impose themselves on you. By this I mean that one should be free to refuse their creepiness, understanding the price is their usefulness. Yet, Google is the subject of privacy violations all the time, and they are caught time and again lying about what they collect on users.
I don’t think people forget Google’s results are useful; some just think they’re more creepy than valuable. You seem to have picked your side in that (im)balance, and other people prefer the other side.
Just as a general observation without taking either side:
People routinely fail to recognize both sides of a particular thing. It's why we have sayings like "You don't know what you've got til it's gone."
I wish interfaces were more straight up about their intentions and made it easier to implement account level partitions. For work I love Google's magic tracking effects, but at 1 am, hell no.
Right. I'm just saying it should be clearer. Ex: I want to have a list of accounts, Netflix-style, that I'm presented with in an empty Chrome window. If in fact multiple identities don't merge data implicitly in any way, then this is just a UI issue.
But I have a hard time believing google truly partitions everything in a multi account setup.
It would be immensely useful if Google understood that normal people have multiple facades that they use in different contexts. Probably several professional (which project / component was I working on again), private but family friendly (planning gifts for relatives, etc), and private but clearly out there (stuff you don't want to shock 60 year old parents / young kids / etc with) profiles.
Also, for incognito stuff, it'd be nice to have read-only sessions based on stock profiles related to various activities or people.
It is actually possible to operate without relying on Google or any other big tech firm. Who is forcing you into these privacy dilemmas? All of their services are a choice you are making. You don't need to accept any of it if you don't want to.
> You don't need to accept any of it if you don't want to.
Tell that to the people who had their privacy violated by Street View[1]. And the people who specifically disabled location services on their Android devices but were still tracked[2]. Or all the people who have no idea what Google Analytics is and never consented to it, but are profiled by it every day.
> All of their services are a choice you are making.
I do my best to avoid privacy invading companies, and as a technical user I find it tiring and know I deal with consequences (e.g. broken websites). It perplexes me that comments like yours still pop up. We’re not the only segment of the population that exists; non-technical users are the majority, and they have the same right to privacy as we do, with a modicum of transparency. If even technical people are regularly tripped by privacy invasions we didn’t know about, what chances do non-technical users have?
Street View is debatably invasive. I understand this might seem hand-wavy to someone really concerned about privacy issues, but:

1. Generally speaking, I would think VERY few people care about an image of their property being on Street View.

2. It's not really illegal to take pictures, so even from a legal standpoint it seems like a gray area.

3. I understand there can be individual reasons for not wanting this, but it seems to be a very large net positive.

And I would apply that statement to most other tracking and data policies they have.
If they are lying about how their services track people, that is definitely grounds for concern. The transparency can definitely be improved, but still, these are people with Android phones and people using Google Analytics. No one is forced to use these things; they are free to use any other service or create their own.
And my attitude is out of pragmatism and how I think privacy issues should be handled. I don't have any problem with the way Google uses my data so I don't care to fix a non problem. And I don't see it as their responsibility to change a way of business when anyone is free to use any other service or create their own, since I don't find it offensive.
The first sentence of the linked New York Times story:
> Google on Tuesday acknowledged to state officials that it had violated people’s privacy during its Street View mapping project when it casually scooped up passwords, e-mail and other personal information from unsuspecting computer users.
That answers your first three paragraphs. There’s no “if” to their lying and privacy invasions. They’ve been caught and admitted their actions time and again.
> No one is forced to use these things they are free to use any other service or create their own.
It is here I will respectfully give up on continuing the conversation with you. You’re either ignoring my main point or truly don’t care for the majority of users. Most people don’t understand the ramifications of these choices, and for good reason; they are hard to understand. By suggesting non-technical users create their own services and devices, I’m now wondering if you’re trolling me.
> And my attitude is out of pragmatism (…) I don't have any problem with the way Google uses my data
Which is valid, but irrelevant. I’ve already mentioned in the top post different people make different choices. I presented another side and used facts to justify it. If you’re going to answer with mere opinion, you’re not adding to the points made by the original poster.
What? You seem to be misunderstanding my statements.
My first points were about the Street View product. Scooping up passwords is obviously not the intent of that product; maybe that was an error, or they changed the core product at some point? I can't read the paywalled article.
I'm not suggesting non-technical users create products... you're reading so far out of context. Just because user X can't create a new product does not mean that we should place sanctions on company Y. I'm glad you used facts somewhere else because in this post you just illogically connect a bunch of dots.
Yes some of it is my opinion and a lot of this is yours. But a fact is still no one is forcing you to use these products, then you went off about stolen passwords and trolling and resigned yourself from the argument. That sounds like a rationality of a completely one-sided biased individual in itself, respectfully.
Yes everyone agrees transparency is good and lying is bad. Google is not Evil Or Benevolent. They're just people...
Haha you are so ridiculous. This was your first post:
There’s also the relevant consideration that no matter how useful they may be, they should have no right to impose themselves on you.
Then you say you don't know why I bring up that you don't need to use Google's services... C'mon man, get real. That's why the point about using alternatives or creating new ones is very relevant, and this entire thread is about sanctions. Don't start a convo you can't participate in and then just claim you won and leave; that's childish behavior.
> Just because user X can't create a new product does not mean that we should place sanctions on company Y. (…) in this post you just illogically connect a bunch of dots.
That is an insane extrapolation, and the reason I don’t want to continue the conversation with you: you’re answering points I’m not making. I haven’t even hinted at sanctions; I have no idea where you’re getting that from.
> But a fact is still no one is forcing you to use these products
And I don’t use them. I hoped that by continuing to mention non-technical users you’d get it, but this was never about me. You keep bringing up that argument, but read what you replied to in the first post — I recounted the experience of non-technical people I know, not my experience. Stop telling me I have a choice; the point is not us, it’s non-technical users who don’t have the knowledge to make informed choices!
> That sounds like a rationality of a completely one-sided biased individual in itself, respectfully.
Believe what you want. I just don’t want to keep wasting my night arguing with someone that started a discussion but refuses to address the points originally made. Why reply, then?
Maybe I’m not explaining myself well enough, or in the correct way for you to understand, or maybe you’re the one not grasping what I mean. It doesn’t really matter where the problem lies, just that it’s clearly not working.
Maybe if we ever meet in person we can resume this conversation, but tonight it’s not being productive, so I genuinely wish you a good week and sign out here.
I was curious about this, as I work in multiple languages every day. I almost never use Google though except as last resort if other engines can't find anything. So the result I got for array length was for Javascript. Which is quite high on the hype cycle now, but I only very rarely use it and search anything about it even less frequently.
So I wonder how much of the magic you perceive might be just your interests matching the interests of most other people using Google. In other words, it's not Google magically guessing you're into Javascript (for example) now; Javascript is popular, and that is the cause of both Google returning matches for it and you starting to use it. Did you ever do a clean experiment, e.g. try to learn APL or some other relatively obscure language, and have Google return all results about APL and none about Javascript?
Going back to OP's point: Google is really good at associating a search query to a search result. Every time you search and click on something, Google learns that association.
So it could very well be that as more users adopt the new language/framework in the first couple of weeks they have taught google those associations.
Google isn’t a search company. They are a distributed machine learning company that make most of their money from learning what people want and showing relevant ads to them.
They have ads to show first; figuring out what people want comes after that, and knowing what people wanted matters only insofar as it makes the second easier and serves the first.
Really good or really bad only exists if there is something else to compare it to.
I always see posts like this here, and then I try it, and I get a page full of "array length" results for Javascript, while everything in the last year that I've searched for has been Java or Kotlin...
Same when I owned a Pixel after hearing about Google Now and their ML magic there. Nothing more magical than an iPhone in terms of suggestions. The camera was amazing, but not all this supposed contextual stuff.
Wild guess: in a surge of privacy consciousness you told Google to stay the heck away from your data. These checkboxes stick forever, and a couple of years down the line some magic feature won't be able to learn from your data. E.g. despite working there, I still haven't figured out how to let Photos recognize people in my pictures, something that definitely is on by default.
For many people it is enough to be totally creeped out about Google.
Also, that Google remembers context can be handy but it is not essential. Without context, I am sure you would be equally capable of finding what you are looking for, although it might take a little more typing since you'll have to supply the context yourself. Imho, convenience is not a good argument for giving away your personal information.
Yeah, I must echo your sentiments wrt their Google Now product; it is great. Not only does it provide relevant content, but some of it is very new and/or obscure, which I really appreciate. I have linked people to videos I pulled off my Google Now feed and they are amazed that I know about a video on our very specific shared interest that is less than a couple hours old and has only a few hundred views.
The flip side of this is that it makes it harder for you to stumble upon something related, but new, outside of the filter bubble Google is making for you.
There's no arguing what you're describing is useful, but it's nice to keep in mind that there are downsides even if you ignore the privacy argument (which, IMO, shouldn't be ignored).
> Yes, Google has a huge index, but most queries aren't in the long tail.
I'm not quite sure about that. 15% of Google searches per day are unique, as in, Google has never seen them before. [1]. That's quite an insane number.
Sharing for anyone who didn't know: there is a very good dataset you can use now. If you don't have an NVMe SSD in your computer, I highly recommend getting one for fast I/O.
[edit] In my experience YaCy works really well. You have it crawl the sites you frequently visit and their external links, and it quickly accumulates to something more accurate than Google.
Wow, 15% unique searches is indeed quite an interesting figure. With that said, what OP said is definitely not disproved. Just because 15% of searches are unique, that doesn't mean the most relevant result is buried in the tail end. I mean I can think of loads of my own searches that are probably unique or rare, but lead to the same popular results because of typos, improper wording etc.
Without some clear numbers on that from a major search engine, I think this might be very difficult to infer.
Heh, yes, they do. Which is a reminder that devs are not "typical" users.
As a developer, I search using keywords; for example, if I was looking for property for sale in Inverness, I might search for "property Inverness", whereas I've seen and heard "typical" users use something like "find me a 2 bedroom house with a garden for sale in the North of Inverness" - much more verbose, and containing stop words and phrases unlikely to help (I think!).
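To make the contrast concrete, here is a toy sketch of reducing the verbose query to keywords; the stop-word list is entirely made up for illustration, not what any real engine uses:

```python
# Hypothetical stop-word list, for illustration only.
STOP = {"find", "me", "a", "with", "for", "in", "the", "of"}

query = "find me a 2 bedroom house with a garden for sale in the North of Inverness"
keywords = [w for w in query.lower().split() if w not in STOP]
print(" ".join(keywords))  # 2 bedroom house garden sale north inverness
```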
I do the same as you, but was just thinking that if most users search using full sentences then Google will spend most effort optimizing for that, so maybe we're the ones getting the worse results?
No, the optimization they do for the low-quality query is more than balanced out by the higher clarity and relevance of a well-phrased query. There are often extraneous words that aren't simple stop words, and they're not 100% successful at removing these extraneous ones.
I almost always search keywords while my girlfriend uses sentences and we often get quite different results. If I'm having trouble finding a good result there's a pretty good chance she will find something quickly. Surprisingly this holds true even for programming questions on topics that I know well and she's never heard of before.
What does it matter whether it came from an assistant or not?
Natural language is likely the preferred search input method for kids under a certain age, who cannot yet type fluently. My kids formulate very long, complex queries verbally. The other day my son asked Alexa why the machine gun is such a deadly weapon. She replied with a snippet from Wikipedia that was surprisingly relevant.
I search full sentences (questions) from the keyboard. I figure I'm not the only to have had the question before, so I ask. Also, I find that blog posts, etc. tend to match well for full sentences.
Does that actually work? I must be old school, I always delete such IDs before searching, but then again I used Google back when it actually did what you told it instead of misinterpreting everything for you.
It doesn't seem to have any particular effect on the results that come up. I always used to delete them, and still do sometimes but Google seems to pretty much ignore them in practice.
Could this be explained by supposing that people are just searching for current events, sometimes national, sometimes international, sometimes very local? If so, you really wouldn't need much indexed to handle those queries. I imagine many queries are also just overly verbose and sentence-length, which artificially inflates the number of unique queries which are actually seeking roughly the same pages.
Good point, and 15% is indeed a lot, but the question would be what "unique" means. If it means that the exact same character sequence appeared for the first time, it doesn't mean that the user searched for a term that has never been searched for.
I mean, with the newest advances like machine learning it's more and more possible to _semantically_ link queries. If that's the case, those 15% could become 5% truly unique searches or even less.
"how dumb is trump" and "how dumb is donald trump" are two different searches but they semantically belong together because they mean the same.
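As a toy illustration (not how any real engine does it), even a crude token-overlap score already puts those two queries close together:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two queries: 1.0 means identical token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# The two example queries share 4 of their 5 distinct tokens.
print(jaccard("how dumb is trump", "how dumb is donald trump"))  # 0.8
```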
Probably quite a few. New things happen. Politics, wars, famous folks, movies, music, diseases, scientific studies, products, brands, model numbers for products, fads and slang. I'm guessing there are other things as well.
Some of the new things are probably variation as well - as others have mentioned, sentences and voice commands can give lots of new stuff.
I would think it’s pretty common. For a lot of people Google is the internet. Or at least the reference. If Google isn't working, it’s almost certain the problem is on your end. I don’t think anyone else has that reputation for availability amongst the general public.
> 15% of Google searches per day are unique, as in, Google has never seen them before.
That is impossible, and therefore wrong (I'm wrong, please see below). To know if a search is unique, as in Google has never seen them before, Google must be able to decide if a query it receives was seen before or not. Even if we assume Google needed only one bit for each message it has ever seen, and assuming it only saw 15% of new messages each day since its creation more than 20 years ago, it would need to store more than 2^1471 bits.
What could be true is that each day 15% of all searches are unique on that day.
Edit: I'm wrong. The 15% of completely unique messages per day are in regards to the messages per day, and not in regards to all messages it has ever seen, therefore exponential growth doesn't apply. To see that, assume Google just received one search query each day for 20 years but it was unique random gibberish, then Google could easily save that even though 100% of all messages per day are unique.
This is a somewhat faulty analysis. One could easily use a high-accuracy Bloom filter to record whether a search has definitely not been seen before, and that would give a lower-bound estimate of the unique count, with a known error margin.
It is roughly 1.15^(365*20). That it is wrong was clear from its size. I wanted to use its falseness to show that the assumptions are incorrect. Which they are, just not how I understood initially.
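For reference, this is how the (mistaken) compounding model produces a number of that size; the 15%-per-day growth assumption is the one the parent comment retracts:

```python
import math

# If the set of ever-seen queries grew 15% per day for 20 years,
# its size (at one bit per query) would be about 1.15**(365*20) bits.
# 1.15**7300 overflows a float, so work in log space instead.
days = 365 * 20
log2_size = days * math.log2(1.15)
print(round(log2_size))  # 1472, i.e. roughly 2^1472
```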
How are you computing that number? It's definitely wrong.
Assume Google receives 1 trillion queries per year, and has been around for 20 years. Using a bloom filter you can achieve a 1% error rate with ~10 bits per item. So a roughly 25-terabyte (200-terabit) bloom filter would be more than sufficient to estimate the number of unique queries.
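The ~10 bits per item comes from the standard Bloom filter sizing formula, m = -n ln(p) / (ln 2)^2. A quick sanity check of the numbers above (the query volume is the parent comment's assumption):

```python
import math

def bloom_bits(n_items: int, fp_rate: float) -> float:
    # Optimal Bloom filter size in bits for a target false-positive rate:
    # m = -n * ln(p) / (ln 2)^2
    return -n_items * math.log(fp_rate) / math.log(2) ** 2

n = 20 * 10**12              # 1 trillion queries/year * 20 years (assumed)
m = bloom_bits(n, 0.01)      # 1% false-positive rate
print(round(m / n, 1))       # ~9.6 bits per item
print(round(m / 8 / 1e12))   # ~24 terabytes total
```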
If you have a list of 20 trillion query strings, and each query string is on average < 100 bytes, you're looking at a three line MapReduce and < 1 PiB of disk to create a table which has the frequency of every query ever issued. Add a counter to your final reduce to count how often the # times seen is 1.
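An in-memory miniature of that frequency count, for flavor (the query strings are invented):

```python
from collections import Counter

queries = [
    "array length", "pear", "array length",
    "how dumb is trump", "property inverness",
]
freq = Counter(queries)  # the "reduce" step: query -> number of times seen
seen_once = sum(1 for count in freq.values() if count == 1)
print(seen_once, "of", len(queries), "queries were only ever issued once")
```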
I don't think it's necessarily impossible to calculate. Using probabilistic data structures arranged in a clever way, it's likely possible to calculate with some degree of accuracy.
I haven't thought this through, but take all the queries as they're made and create a bloom filter for every hour of searches. Depending when this process was started, an analytics group could then take a day of unique searches, and run them against this probabilistic history, and get a reasonable estimation with low error. Although the people who work on this sort of thing probably know it far better than I.
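A minimal version of that scheme might look like the sketch below; the sizes, hash construction, and hour bucketing are all arbitrary choices for illustration:

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: false positives possible, false negatives are not."""

    def __init__(self, m_bits: int = 10_000, k: int = 3):
        self.m, self.k = m_bits, k
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# One filter per hour of traffic; a query missing from every past filter
# is definitely new, giving the lower-bound estimate described earlier.
last_hour = Bloom()
for q in ["pear", "array length"]:
    last_hour.add(q)
definitely_new = [q for q in ["pear", "street view"] if q not in last_hour]
```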
The real question though might be: assuming the 15% is right, do we care about those 15%? Are they typos that don't merge, are they semantically different, are they bots searching for dates or hashes, etc.?
I believe that they're unique in a sense that nobody has typed in that exact query previously.
Of course, Google knows better than to treat every search query literally. Slight deviations and synonyms work for the majority of people, even if us techies highly oppose them and look for alternative solutions (like DDG) that still treat our searches quite literally.
Tangential - but does anyone else feel that google results are useless a lot of the time? If you search for something, you will get 100% SEO optimized shitty ad-ridden blog/commercial pages giving surface level info about what you searched about. I find for programming/IT topics its pretty good, but for other topics it is horrible. Unless you are very specific with your searches, "good" resources don't really percolate to the top. There isn't nearly enough filtering of "trash".
Yes, I feel like Google search results have very gradually become more irrelevant and spammy over the past decade or so.
There are 2 issues, I think.
Firstly, the SE-optimised spam, which has become very good at masquerading as genuine content.
Secondly, Google has dumbed search syntax down a bit, and often seems to outright ignore double quoted phrases, presumably thinking it knows better than I what I want.
As a dev, I do accept I may be an outlier though - with the incredible wealth of search history and location data that Google holds, it seems likely things have actually improved for typical users.
Seeing as google has my search history for the past 14 years, they should be able to KNOW that I'm a slightly more technical user and can take advantage of power user features instead of treating me like an idiot
Google signed an armistice in the Great Spamsite War some time around '08 or '09, to the effect that spam can have all the search results aside from those pointing at a few top, trusted sites, so long as they provide any content at all. Bad content is fine. Farmed content is fine. Content that was probably machine-generated is fine. Just content. Play the game, make sure your markov chain article generator or mechanical turks post every day, throw some Google ads on your page, and G will happily put your spamsite garbage at result #3.
There’s a reason for this; click through rate on ads is higher on pages that don’t achieve the user goal.
I suspect that the AI models powering the search results develop a sort of symbiotic relationship with the spam: if the user actually finds what they are looking for by clicking through an ad on an otherwise spammy page, everyone “wins”; the user found what they were looking for with minimum effort, Google got their ad revenue, and the spammy page got a little cut for generating content that best approximates the local minimum linking the user's keywords to actual intent...
I agree with this. Most searches give me almost a whole page of ads and stuff up top before the things I’m interested in start showing up way down at the bottom of the page, and even then the results are often spam.
I’ve been using DuckDuckGo and have found I have this problem less. I don’t always find what I mean on DDG, as of now I’d say Google is still better if you’re not sure exactly what you’re looking for is called, but if you know the keywords you need DDG is often better.
Someone linked to an interesting site talking about how to make homemade hot sauce here on HN. I partly read it and thought it was a great, clean site and something I wanted to try. Later, going back to find it again, I literally spent hours searching, even though I'm pretty sure I remembered some of the exact phrases. For some reason recipe-related search results are really, really terrible on both Google and Bing.
Sometimes sites get dropped from the results because they are malware hosts. It’s more likely to happen to small independent sites. They are also more likely to just pack it up and shut down their sites.
Yeah, this is why I still use and like myactivity.google.com, as creepy as it is. It's helped me re-find so many interesting half-remembered sites and videos and songs I'd previously come across.
Yes, at least half the time I search about a particular topic, it seems the first few pages are written by some contractor in the Philippines probably getting paid $2 / hr who just spent the prior 30 minutes researching the topic.
It has gotten better over the years in some ways, even if it feels like it also got worse. I recall pages of "ads and useful-looking search result keywords" being more common in the past.
100% agree. For technical queries, as long as a StackExchange comes up, Google is still okay.
But for increasingly more basic searches about a product I'm interested in or a medication or anything else non-complicated that would have gotten me a clean list of decent, non-paid results even 5 years ago, I'm now getting half a page of sponsored BS and then another half a page of 'created content' written by a bot or shyster explicitly for gaming Google's SEO.
Not only has Google lost almost all their good will (i.e. Don't be evil), but their products aren't even that good anymore, at least not so much better than alternatives where the negatives of using Google outweigh the difference in quality.
You're not alone. From my perspective, the value of google search results has been dropping for years. And the quality of their search results seems to be dropping in a way I suspect is profitable for google. Most of the results I get back from google these days are trying to sell me something I have no interest in buying.
For example, suppose I do a google image search for "pear", because I want images of pears obviously. The first result is indeed a pear, good job google! Except the first search result just happens to come from Amazon, and also happens to be a pretty shitty thumbnail quality photograph (355x336). It's a pear alright, but why is this particular image of a pear first? Google didn't try to give me the best image of a pear, they tried to give me the pear image they thought most likely to induce a financial transaction. Or alternatively, google let itself get cheaply manipulated by Amazon's SEO. Neither is a good look.
A much better pear image, 3758x3336 from Wikipedia, is further down the search results. So it's not like Google was unable to find good pictures of pears. And a non-image search for "pear" returns the Wikipedia page first, so it's not like Google failed to notice the relevancy of the Wikipedia article about pears. Yet the shitty Amazon thumbnail of a pear shows up higher in the image search results than a high-resolution photograph of a pear from Wikipedia.
What do folks even mean by "Google's index"? Google results combine tons of signals, including personal histories for each user. Sharing metadata for the top billion URLs wouldn't cover half the functionality, or make a competitive engine. And on the other hand, there may not be a single other organization in the world prepared to manage a replica of the entire data plane that impacts search. The proposal is somewhere between underspecified and nonsense.
Thanks, this is mainly what I came here to say. And I just don't see even the vaguely defined "index" as the crown jewel. If anything, it's "relevant results", which is something quite different.
I would assess Google (& FB's) "crown jewel" as, ultimately, their market share, which is related to your points... and causation runs both ways.
The user data helps/helped Google create the superior UX, as you say. The reach is what makes Google & FB valuable to advertisers. A search engine with 0.1% of Google's user volume cannot charge advertisers enough to earn 0.1% of Google's ad revenue. Returns to scale/reach/market-share are very substantial in online advertising.
I'm glad we're talking though. Those tech giants are too powerful.
Ultimately, the old antitrust toolkit is near useless today for dealing with tech monopolies. It's not obvious what "break up Google" even means. There are strong network effects and other returns to scale. It's a zero-marginal-cost business, which was rare enough in the past that economists largely ignored it.
We need fresh thinking, a new vocabulary, new tools, but we do need to deal with it.
You could, for example, split Google into:

* an Office suite / enterprise company (Google Cloud + Docs + Gmail + Business)
* a phone company (Android)
* a search company (Google Search + Advertisement)
* and a media company (Google Play Movies, Music, Books and YouTube)
The names would probably become different in time, but you get the gist.
Amazon and Microsoft could be broken up much the same way, in neat categorical 'silos'. Facebook should be trisected into Facebook, WhatsApp and Instagram again. I have no idea how you would break Apple up without utterly destroying their core principle, vertical integration. There is no way to do what Apple does with MacBooks or iPhones if they don't control the entire stack. I'm not saying they shouldn't be, I just see no way.
(2) is one of the most important points. We have to stop Google from cross-financing new products from other revenue streams so they can no longer undercut or buy all competitors. Google Maps is a good example. They ran it for super cheap for a long time to drive out competitors and now jack up the prices.
In contrast to most people here, I think breaking up Amazon is far more important than breaking up Facebook, Microsoft, Apple and many other tech companies. Only Google is as bad.
But you have to acknowledge that without the cross-financing those "markets" wouldn't even exist.
Before Google Maps we had a few online map services and they were terrible. Google Maps redefined what it means to have free access to web-based interactive global maps; it changed how people find things, and it was all paid for by the ad business. Later on some monetizing efforts were made for it and competitors started to appear, mostly trying to catch up and copy what Google Maps did, but without the huge cash infusion of the ad business none of this would have happened.
A decade later, people take these things for granted and just want to split services up. I guess it makes sense from their point of view but to me it's not that clear what should happen while still allowing for the type of creativity and speed of development that allowed things like Google Maps to appear because I'm afraid "the next big" thing that could redefine our lives (and improve them) would be slowed down or simply made non-feasible.
> "Before Google Maps we had a few online map services and they were terrible. Google Maps redefined what it means to have free access to web based interactive global maps"
This is not true. MapQuest revolutionized things almost 10 years earlier than Google Maps. Google search is what allowed Google Maps to overtake MapQuest. Also, Android providing real-time traffic data of all their users gave them the winning formula.
You are right that traffic was revolutionary, and that's why Google Maps became the de facto standard. However, in the context of the original post, this is exactly why it's unfair. Google has Android, which gives them user location data, which they then use as a competitive advantage in another space to eliminate all competition. If Android were one business and Google Maps another, then people like MapQuest could also negotiate deals with Android to get user data, and then it's a matter of who has the best platform. That's what is best for the consumer as well.

In the current structure, there is no way that a small business like MapQuest could build a smartphone to obtain user data, and nor should they have to. They should only have to build the best map application to succeed in the online mapping space. Having to also succeed in location-data aggregation eliminates competition. It's designed so the giants can eat the small guys at will without them being able to fight back.
Yes, my thought was that by breaking everything off from everything else, these silo'd services would suddenly have to compete with the rest of their market at fair terms, instead of being propped up massively by other division(s), and thus would lose marketshare to a multitude of fresh and established competitors.
You are right though, it doesn't deal with the dominance of the search directly. My hope is a complimentary effect to the above also happens: Google no longer gets gobs of personal data from its other services, allowing other search engines to approach its efficacy.
As is clear I'm not really a fan of direct intervention in a single market, I see it as more of a problem when these giants muscle their way and control more and more markets, creating a vicious feedback loop.
> Yes, my thought was that by breaking everything off from everything else, these silo'd services would suddenly have to compete with the rest of their market at fair terms
I think it's instructive to look at the rest of the market. How is Mozilla funded? Basically a single gigantic contract with Google. Even Apple accepts payment from Google to become the default, and it's not cheap: https://fortune.com/2018/09/29/google-apple-safari-search-en... The same logic applies to pretty much anything Alphabet spins off -- there's little difference between ownership and those contracts.
About the only competition this setup produces is the ability for Mozilla to walk away to a competitor bid, which they did for like a year before bailing out at the first opportunity. There's a huge incumbency bias in these contracts. The first parallel that comes to mind is employer provided health insurance. Everyone gets to bid, but the incumbent knows the claims history far better than the competition and we'd only expect them to lose bids to companies overly optimistic about that history. Google knows how valuable various traffic sources are, but their competitors have to guess, and only when their guess is higher than Google's does it pay off. Does anyone think Yahoo winning Firefox was a good deal? I haven't seen any analysis to support that.
> My hope is a complementary effect to the above also happens: Google no longer gets gobs of personal data from its other services, allowing other search engines to approach its efficacy.
Wouldn't the most profitable thing for these broken-up companies be to sell their slice of the personal-data pie to as many parties as possible? This seems like a net loss for privacy. How much extra would it be worth to set up an exclusive arrangement?
>You are right though, it doesn't deal with the dominance of the search directly. My hope is a complementary effect to the above also happens: Google no longer gets gobs of personal data from its other services, allowing other search engines to approach its efficacy.
I'm still not sure how this would work on Apple though, since their main differentiator is their design sensibilities and integration rather than their platform monopolies.
I guess iMessage and the App Store do rely on monopoly rents, but I can't think of any way to sever those links without making the iOS platform less secure.
I'm not sure how much of an impact breaking Google up would have, and I say this as someone who has built a product that competes with Google's G Suite. I want there to be a more level playing field, sure. But each of these siloed businesses would still be a monopoly in its own right.
Most of those products don't make money by themselves, they exist to keep people in the ecosystem, providing more data for the real moneymaker.
The biggest blow to Google wouldn't be to break it up into lots of small companies; you just need to separate the advertising business from everything else, and you've effectively neutered the monopoly. Google's genius isn't in hiring the best engineers to provide a ton of services; it's in convincing people that they're not an advertising company. That's where Facebook has been falling out of favor recently (I'm guessing that's why they bought Instagram, and why Google bought YouTube).
This is a supposition - while it may seem to make sense, seem true, "must be true", that doesn't mean it is true!
Unless you worked on search quality at Google, you really aren't in a position to know whether, say, Google Cloud or Android provides useful signals to search (beyond the signals they'd collect anyway if they were separate companies).
One thing people are obscuring is just how crazily effective AdWords is. It works for the advertisers, and it earns Google 70+% of its revenue - confirmed via SEC filings, which do break that out. Go play with creating an AdWords campaign and try to infer just how much data Google really needs to deliver those ads - it's less than you'd think.
In short: this overall move is more wishful thinking than solidly reasoned. Surveying the field of streaming video, given the amount of studio-driven consolidation, are there really tons of competitors being held down who will spring up? I am skeptical.
That's an interesting thought. I agree with you that most of those products are loss leaders for data mining and thus advertisement.
But my thinking was that if you simply cut off advertising, all the products still have massive market shares and could lean on each other, as long as some succeed. Not to mention investors would probably be willing to prop up such a massive aggregate market share (one only has to look at Uber).
If you 'silo' them, success of one division of previously-Google won't lead to all of them dominating.
Apple's already on its descent, and at most you'd break off their cloud services, which would immediately die without the support line from the hardware.
I'd rather cleave them all vertically anyway than be left with a bunch of mini horizontal monopolies.
Granted, most of your examples wouldn't be, except for search, but it still seems more interesting to me to just have a bunch of mini-Googles made by cleaving off teams. Certainly that would make for some crazier competition.
Breaking up companies like Google, Amazon, and Microsoft is just not gonna happen in 2019, when huge, global mega-corps are the only way to compete outside of small local markets.
Even though a lot of these corporations build offices, hire non-Americans, and pay tons of foreign taxes in countries in which they do business, the main executives and talent still live in the US, the IP is developed here, and the majority of profits end up back in the home country.
It's better for everyone who actually matters - shareholders, intel agencies, government officials, associated businesses, etc - that these companies remain large and globally dominant, even if it screws over US citizens by having to pay the monopoly taxes and suffer the privacy invasions. We're an insignificant sacrifice in the decision-makers' minds.
> Indexing the top billion pages or so won't take as long as people think.
This is what makes me wonder why we don't have a LOT of competing search engines. Perhaps I'm vastly underestimating the technology and difficulty (I could well be - it's not my domain), but surely it can't be THAT hard to spawn Google-like weighted crawl-based search results?
It's a long-since-solved problem - heck, PageRank's first iteration recently came out of patent protection - it could just be copy-pasted. Why aren't all the big companies doing search?
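For what it's worth, the core of the original PageRank idea really is small enough to copy-paste; a minimal power-iteration sketch might look like the following (the toy graph, damping factor, and iteration count are illustrative, not anything Google-specific):

```python
# Minimal PageRank by power iteration on a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a baseline (1 - damping) / n of rank mass.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
ranks = pagerank(graph)
# "c" gathers the most inbound link weight, so it ends up ranked highest.
```

Of course, as the replies below point out, this was never the hard part.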
I did a search earlier today on Google for "north face glacier" - turns out that the company North Face has a Glacier product so as far as I can tell that's all the search results contain.
Searching for "north face glaciation" did help as the first page of search results did have one entry on the topic I was actually searching on!
Maybe they should have a "I'm not buying anything" flag!
This has been the problem with results for the past few years. E-commerce gets priority in all things and you have to wade through pages of useless links if you want actual content about what you are searching for.
It's not just ML, but the people that provide the labeling for the ML.
Google pays some large number of people to do searches and grade the various results they get, to see if the answers are good, which then helps feed back into the ML.
Heck, according to this article[0], Google has been paying people to evaluate its search results since 2004.
It doesn't feed back into the ML directly, according to Google. Instead they use it to evaluate changes to search algorithms. If they get an increase in thumbs up back from the Quality Raters then their changes were positive. If not, they figure out why.
I feel that for certain topics, especially anything to do with tutorials or coding, even Google falls foul of SEO content. Just Google 'android custom ROM <phone model>', for instance. There are stock pages for all of them, identical save for the phone model, and clearly not applicable.
PageRank was an innovation at the time, but modern search engines require training models on lots of query logs to get good performance. It's expensive to make a really good search engine.
It is because people usually just stick with whatever works best for them instead of using a variety of search engines. It becomes rather winner-takes-all.
Google for general search. DuckDuckGo for general search if you want something a bit more private but aren't extreme enough to run your own spiders. Bing mostly for porn search - not being snarky, some people do consider it to have better results.
"Indexing" is only part of the problem; it's a batch job. I find being able to respond to searches across a huge data set on the order of milliseconds (while having planet-scale failover) to be a lot more challenging to implement.
Querying an index isn't a solved problem, building it is.
It's easy to gather the necessary data, but it's hard to know which parts of that data are the most relevant for finding good content and avoiding bad content. Is it more relevant if key words show up in links or titles than in the body of the text? If so, SEO spam sites will include a bunch of keywords in links and titles. Is it more relevant if keywords show up in the first 200 visible words of the page? If so, spam pages will make tons of pages with relevant keywords at the top.
The hard part about building a search engine isn't indexing the internet, it's adapting to spam. Spammers are continually adapting to changes in the algorithm, so the algorithm needs to adapt as well. And the more popular your search engine is, the more money you make and the more able you are to adapt to spam (and the more spammers focus on your engine).
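As a toy illustration of that arms race: suppose a naive engine weights title matches three times as heavily as body matches (the weighting and both pages here are invented for illustration). A keyword-stuffed page immediately outranks the genuinely relevant one:

```python
# Naive ranking signal: keyword hits in the title count 3x as much as
# hits in the body. Weights and documents are made up for illustration.
def score(doc, keyword):
    kw = keyword.lower()
    title_hits = doc["title"].lower().count(kw)
    body_hits = doc["body"].lower().count(kw)
    return 3 * title_hits + body_hits

relevant = {
    "title": "Glacier travel techniques",
    "body": "A practical guide to safe glacier travel and crevasse rescue.",
}
spam = {
    "title": "glacier glacier glacier best glacier deals",
    "body": "Buy now! Limited time offer.",
}

# The stuffed page wins 12 to 4, despite having no real content.
```

Any fixed, published weighting invites exactly this kind of gaming, which is why the ranking function has to keep moving.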
So, the problem isn't that Google has a better index (though I'm sure it does), the problem is that nobody else has the will to spend the money necessary to tune the search algorithm to stay on top of spammers. When Google started, companies didn't care as much about improving their index and instead focused on building their other content (Yahoo, MSN, etc). Google saw the value of search and got a lead on everyone else in terms of curating results, and now they have the momentum to stay in front and have shifted to building content to improve monetization. Nobody else has the monetization network for search that Google has, so they'll continue having the problem that other companies had (Microsoft wants to point you to their other services, DuckDuckGo is limited by their commitment to privacy, etc).
In short, Google wins because:
- it was better when it mattered
- it makes money directly from search
- its other services improve their ability to understand what users want, which improves search quality and ad relevance
You can't make a better algorithm by being clever, you make a better algorithm by having better data, and that's hard to come by these days. The only way I can think of a competitor stepping in is if they target an underserved demographic and focus data collection and monetization there, and DuckDuckGo is close by targeting privacy conscious power users.
> The only way I can think of a competitor stepping in is if they target an underserved demographic and focus data collection and monetization there, and DuckDuckGo is close by targeting privacy conscious power users.
The irony there is that DuckDuckGo can't collect much of that data precisely because of their privacy focus.
Most likely answer: lack of diversity in revenue models.
Outside of ad revenue, search has always been seen as something of a "charity" effort for the internet. It's "boring" infrastructure work that can be critically useful but doesn't really make money directly on its own. No one wants to pay a "search toll" and there's no government agency in the world that the internet would trust as a neutral index to run it as actual tax-basis infrastructure.
Aside from the quality issues that others have already mentioned, I think that simply gaining traction for a new search engine is incredibly difficult - people typically use whatever is the default in their browser, or/and Google/Baidu/Yandex (which are surely the best known in their respective regions).
Consider DuckDuckGo, which sells itself on privacy, but after more than a decade has only 0.18% market share. Without the power to make it the default in an OS or browser, you'd have to have a really strong value proposition to convince people to switch.
I don't think this is correct. For years, the #3 search query on Bing in the US was "Google", and globally it used to be a double-digit percentage of all Bing queries. That suggests to me that people with a default Bing search engine had learned in droves to click their way to the preferred engine regardless of what the default was, and did so without being technically skilled enough to change the default once and for all. I don't know how large a group the latter is, but it seems hard to argue that the two together are small.
It's so weird how about 1/3 of the time on DuckDuckGo, I add a !g in frustration. Half the time I still get nothing and end up posting on Stack Overflow, but half the time I get a little more useful information.
Google custom tailors results for each and every machine. Even if you're not signed in, Google uses your browser fingerprint, the OS it's reporting and location/IP data to custom fit results. There is no "stock" google result.
This is something DuckDuckGo et al. can't do if they want to stick to a privacy model. DDG does offer location-specific searches, which can be helpful.
It's not the 'raw' search itself. It's the billions (trillions) of queries they've captured: Person X searches for query Y and clicks on result Z.
This is far more valuable than the general page rank algorithms that were initially developed and have already been duplicated many times in academia and business.
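A toy sketch of why that log is so valuable (all log entries here are invented for illustration): even a trivial ranker that orders results by historical click share for a query captures intent signal that no amount of document analysis can recover.

```python
# Rank URLs for a query by observed click counts in a
# (query, clicked-URL) log. Entries are invented for illustration.
from collections import Counter, defaultdict

click_log = [
    ("north face glacier", "thenorthface.com/glacier-jacket"),
    ("north face glacier", "en.wikipedia.org/wiki/Glacier"),
    ("north face glacier", "thenorthface.com/glacier-jacket"),
    ("eiger north face", "en.wikipedia.org/wiki/Eiger"),
]

clicks = defaultdict(Counter)
for query, url in click_log:
    clicks[query][url] += 1

def rank(query):
    # Most-clicked result first; a real engine blends this with many
    # other signals, but the log alone already disambiguates intent.
    return [url for url, _ in clicks[query].most_common()]
```

Twenty years of this data at Google's query volume is what a new entrant can't replicate, not the crawl.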
Pretty much, and the potential for criminal activity is astronomical if you give them access to an open index. Things like every website on the web being hit with the same zero-day on the same day for maximum profit. Build your own best kiddie pron site evah! with direct access to the index and your own ranking system. What, your admin pushed a config that left the admin pages open? Go time!
As someone who was operationally responsible for a search index (formerly VP Ops at Blekko), the kinds of things crooks tried to do were pretty instructive about how they use search in advancing their efforts.
> 1) a record of searches and user clicks for the past 20 years
If a government was serious about getting more players in the search industry, they would force Google (and all other players) to make this data public.
Simply say "All user-behaviour data used to improve the service must be freely published".
Make the law apply to any web service with more than 20 million users globally so small businesses aren't burdened.
If the data cannot be published for privacy reasons, the private parts must be separated out and not used by Google or its competitors.
> If the data cannot be published for privacy reasons, the private parts must be separated out and not used by Google or its competitors.
As a user that notices the impact of this data: please no, thanks though.
Have you ever visited youtube's home page in incognito mode? It's... bad. Really bad. Not allowing any company to use this (obviously very private) information in ranking would simply make their products suck, horribly, compared to today.
>Have you ever visited youtube's home page in incognito mode?
Do you like the personalized recommendations because of channel subscriptions?
I always get the "anonymous default" home page with YouTube and don't care. The home page is just a wasted load before I can start typing in the search bar. As a bonus, staying incognito means all the videos on the right-side panel are related to the current video. Not related to a music video I have playing in another tab.
> Bing had the money and persistence to make that investment, but how many others will?
I hypothesized once with an ex-Microsoft higher-up that it probably took $10B to launch Bing. He said I was almost exactly on the nose.
Also this is a ridiculous thing to ask for. How much money do you think Google pays for the bandwidth to crawl the web? How much do you think it costs to run the machines that create indexes out of that? How do you value the IP involved in the process?
Google should give away the fruits of that labor for free, plus invest in a reasonable API to download that index? Plus the bandwidth of sharing that index with third parties? It’s probably not even feasible aside from putting disks or tapes on multiple semis to send to clients. The index is 100 petabytes according to [0]. With dual fiber lines, and no latency for mind bending numbers of API calls, that would take 12.6 YEARS to download a single snapshot.
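The back-of-the-envelope math behind that figure, assuming a 100 PB index and reading "dual fiber lines" as two fully saturated 1 Gbps links (both numbers are this comment's assumptions, not measured values):

```python
# Time to transfer a 100 PB index snapshot over two 1 Gbps fiber lines.
index_bytes = 100e15          # 100 petabytes (decimal)
bandwidth_bps = 2 * 1e9       # dual 1 Gbps links, fully saturated

seconds = index_bytes * 8 / bandwidth_bps
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))        # roughly 12.7 years for a single snapshot
```

With these round numbers it comes out to about 12.7 years, in line with the figure above.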
The hacker news guidelines specifically advise against this kind of comment.
'Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."'
'Be kind. Don't be snarky. Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.'
Via API access you'd be effectively getting access to the index _plus_ the derivative search quality improvements _based on_ user data, even if you're not getting user data itself. That would certainly open the door to competition, especially on a niche basis e.g. you want to build a platform dedicated to drones - you can combine drone reviews and news with videos plus e-commerce results. The result could be awesome in sparking all kinds of small business building on Google's API.
> 2) 20 years of experience fighting SEO spam.
That's probably a key issue here though. Providing an API potentially makes it easier for spammers to identify ways to boost their content in a well automated manner.
> That's probably a key issue here though. Providing an API potentially makes it easier for spammers to identify ways to boost their content in a well automated manner.
How so? Unless you give reasoning for the scores, or provide live updates, etc., just putting an API on search wouldn't change much - you can APIfy search now; there are multiple services offering it as a service. Granted, at some point it gets expensive, but for SEO research you're probably not running a million queries.
Totally agree. Google's golden egg is not the index but the datasets containing searches done by users (together with location data from Android and Maps, and speech data from Assistant).
As far as I remember, Google is actually shrinking its index in terms of the number of indexed websites, because 90% of the internet is irrelevant for the majority of searches. Basically "quality over quantity", if you can say that.
> Basically "quality over quantity" if you can say that.
This is even more depressing. Google was such a wonderful tool for us nerds because we could finally find those usenet posts, personal blogs, tech mail lists, etc. of all the esoteric subjects that had been hard to find previously. Before Google, you'd use lists of curated links (e.g. Yahoo) for a given topic that had been traded back and forth between various sites and other interested netizens.
It's apparent that Google is becoming worse and worse for these types of searches while it concentrates on more popular queries like "When is the next <my show> on" or "What is the current sports-ball score" or "How big are Kim Kardashian's boobs".
Just like Craig of Craigslist recently came out with an article saying the internet has actually made the news media worse, not better, at informing citizens - something he did not predict correctly - it's apparent that Google is pushing us in the same negative direction in the ability to find quality information on non-consumer knowledge.
But that's where differentiation occurs. Every search engine will get short tail results correct. We go back to Google because it also performs with the weird queries.
I agree that algorithmic superiority will probably perpetuate Google's dominance. But making its index public is (a) legally precedented, (b) conceptually simple and (c) a small step in the right direction.
Gotta say my experience is very varied with long-tail-type queries. I usually try DuckDuckGo and, if that fails, search Google. They find very different things: DDG tends to be less filtered in terms of spam sites and fake news, but it also finds results of a dubious copyright nature, for example.
I've had the same experience with DDG, which I use as my primary search engine. If I'm looking for something specific, e.g. a scientific paper or a recent news article, it doesn't have it, and I run the search through Google. That's purely an indexing problem.
On the other hand, if I have a health-related search, I run it through Google. DDG has the proper content; it's just that it prioritizes the blog spam. That's an algorithm problem.
Relieving the former, as the author's proposal would do, makes DDG more competitive. As a second-order effect, it would also let DDG prioritize resources towards the second problem, making them more competitive still.
I'd wager any startup that tries to crawl sites like Amazon, Yelp, LinkedIn, etc. will be blocked. Google, however, gets a pass because they're Google. So yes, I believe their huge index and ability to crawl any site at will are a huge, huge advantage for them.
Amazon lets anyone crawl them; Yelp has a whitelist and no, you can't get on it; LinkedIn has a whitelist and no, you can't get on it; Facebook has a whitelist and no, you can't get on it.
Storage and bandwidth are cheaper than ever before, people scrape a billion pages for much more mundane purposes these days, even for academic papers.
Having a full text index on that is more involved but hardly impossible. You're completely right that it's not at all Google's secret sauce. Bing has clearly indexed much more than that, plus invested a ton in actually returning good results from their index. And still nearly nobody cares. It's just not easy to make a better Google, and the people most likely to figure out how to do that already work there.
The Common Crawl corpus is already available and stored on S3 - so analyzing billions of web pages is literally already available with an AWS account and a simple map reduce job.
I'd actually advocate for making public an anonymized list of actual search queries.
Domain-specific search engines could evolve based on the demand shown by what has already been searched for.
It depends which sense of "better" you mean. It's nearly trivial to make an ethically superior search engine by just not building the spyware bits of Google.
It's difficult to make a search engine that's "better" along the dimensions of speed, profitability, etc.
That exists, it's called DuckDuckGo, and even fewer people care about it than Bing. For the most part, people don't actually care about Google collecting their entire search history and combining it with their other data on you. We may live to regret that in a hypothetical future where the government turns more authoritarian and requisitions that data for evil.
Some argue (not necessarily me) that Google isn't necessarily purely optimizing for quality using that 20 year click-and-search log, that they're accepting some inefficiency by biasing for political (left-leaning) gain or "censorship by obscurity". If competitors could more easily build alternatives, which, say, didn't have those biases, then arguably that'd put more competitive pressure on Google to not use their monopoly for bad stuff.
Well, considering the complaints about Google's search quality going down that I read from users on HN all the time, I have a theory that highly technical users are adversely affected by the search "improvements", so an improved search engine targeting that group would essentially be one that searches on what you actually typed.
I also happen to think that's the search engine I would prefer. I think I could build that pretty quickly if I had the API access.
It is the crown jewel because people choose Google precisely because they are understood to have the largest index. It's comparable to Verizon marketing 'the largest network,' but with many more benefits accrued to the company who is believed to have the largest search index.
More importantly, Google's core competency is PageRank. Sharing the index != sharing PageRank. As time goes on, others will use inferior algorithms, and become worse. This scheme will not accomplish what it intends to do. Also, you can't just force people to give away their property.
I think we've reached an equilibrium state on this that has significantly degraded the educational quality of search engine results.
The total garbage SEO spam we used to get is gone, which is nice, but what it's been replaced with is technically relevant but mostly manipulative advertising. Product searches will basically give you a bunch of no-name blogs who are almost definitely paid off by one vendor or another.
Even actual inquiries are inundated with search results that do answer the question, but do so in an extremely cursory and incomplete way. Or, in the case of recipes, Google seems to prioritize results that give you long, meandering narratives before they actually get to their recipes. It has some very weird ideas about what people actually want when they search.
One of the most annoying things is how impossible it is to actually find the website of a local business, especially a restaurant, by Googling. Your hits are always Google's own cobbled-together dossier on the restaurant first, then some combination of Yelp, Grubhub, Postmates, AllMenus, etc. pages. If the restaurant has a website, you can't tell, and it's probably way at the bottom or on a second page of results.
In the past it was a handful of very decent results amidst a sea of total garbage SEO spam. Now it's a sea of mediocre content-farm stuff, and it ranges from difficult to impossible to actually dig into detail on things anymore. The old spam we could at least dismiss as crap within a fraction of a second of seeing it. The new spam you have to actually read most of before you realize it doesn't have what you're looking for.
Since the author compares the proposed API to what startpage.com does, I'm guessing he's not talking about "index" as in "raw documents", but basically Search as an API with all the sorting and ranking done.
Robert Epstein (born June 19, 1953) is an American psychologist, professor, author, and journalist. He earned his Ph.D. in psychology at Harvard University in 1981 and was editor in chief of Psychology Today.
He has also made some questionable claims about google manipulating search results to favor Hillary Clinton.
His research is based entirely on his own experience
“It is somewhat difficult to get the Google search bar to suggest negative searches related to Mrs. Clinton or to make any Clinton-related suggestions when one types a negative search term,” writes Dr. Robert Epstein, Senior Research Psychologist at the American Institute for Behavioral Research and Technology.
Google's claim that the algorithm is generic is demonstrably false. Type in "hillary clinton e" and there is no suggestion for "email"; type "donald trump e" and email is the first suggestion. Given the news content that we know is out there, that can only be the result of adjusting the results for Clinton specifically (if anything, we would not expect "email" to be autocompleted for Trump). This is not research that tells us exactly what Google is doing, but you cannot deny the example.
This is not "research", period. Using one arbitrary search comparison to draw conclusions about the nature of a system that processes billions of queries a day is pretty weak. Additionally, I don't get the same results you do: "hillary clinton e" does not bring up emails, nor does "donald trump e" (the first results I see are election, education, england visit, ex wife).
I'm not ruling out the possibility that google actually is manipulating search results, but this is not proof of that.
Try "hillary clinton emai". From a fresh chrome session in NYC I get nothing, not a single autocomplete result. On the other hand "donald trump emai" gets:
* donald trump email
* donald trump email address
* donald trump email list
* donald trump email newsletter
* donald trump email list signup
And just to drive the point home I tried "root_axis emai" and got "root_axis email". Try anyone else and you get similar results, 'barack obama emai', 'george bush emai', etc etc. So yes, this is proof that the results are scrubbed for Clinton email.
I got curious and tried the names of a bunch of public figures. Some of "<first and last name> e" yielded "email" as the suggestion. But these did not: elizabeth holmes, tom jones, tom cruise, brad pitt, gwyneth paltrow, roger federer, will smith, jimmy carter.
Since Hillary Clinton is not unique, then it's not proof that her results are treated differently.
Why would you expect "brad pitt email" to be something that autocompletes? You would, on the other hand, expect "hillary clinton email" to autocomplete, because there was a huge controversy about it.
I'm not saying Google is manipulating autocomplete intentionally (though they might be); I'm just saying your counterexamples are irrelevant.
It would be like "donald trump russia" NOT autocompleting, and then someone saying "but neither does 'taylor swift russia', so we're good."
The poster claimed Hillary Clinton was unique, meaning she was the only person that applied to. For me, she was not. Since she's not unique, her being unique can't be used as evidence.
Claiming that she's unusual, since you expect it to work for her based on stories written about her is a different claim.
1. I said "only for Clinton" in the context of Trump vs. Clinton. Then I compared to other U.S. politicians, where the example still holds. The intended meaning is perfectly clear.
2. Obviously the fact that Clinton was the topic of a scandal involving email is the assumed context here. That's why I said "if anything, we would not expect "email" to be autocompleted for trump" (implied: but for Clinton, email is a more relevant search term based on published news, etc.).
None of those other people were involved in major stories with email in the headline. If you look at the actual results, you'll see that it doesn't make any sense to not get suggestions for "Hillary Clinton email".
Seems to go both ways, though, as I'm not getting any autocompletion results for "donald trump stormy daniels" either. I'm guessing they scrub things that are highly sensationalized in the news.
"stormy daniels" doesn't get any autocomplete results for me even without trump, my guess is this is more about adult search terms getting blacklisted rather than political. For example "donald trump e j" gets "donald trump e jean carroll", E Jean Carroll recently accused Trump of a serious sexual assault and this is autocompleting.
When I type "Donald Trump R" (or Ru or Russ etc) no autocomplete results contain the word Russia despite plenty of news coverage. When I type "Donald Trump Epstein" no autocomplete results. Must be a conspiracy by google to protect the president... or drawing conclusions based on individually cherry-picked autocomplete results is like drawing conclusions from numerology.
The same thing happens with Russia for "barack obama rus" "hillary clinton rus", "george bush rus", etc. - none autocomplete for me despite the fact that there was lots of news items relating those politicians and russia during their careers. However Clinton is the only one that doesn't appear to autocomplete for email, suggesting that it is specific to her. When I type "donald trump epstei" I get "theo epstein donald trump" (for some reason the word order is flipped), which would suggest to me that epstein is not blocked in the way you suggest, and it's just that not enough people are searching that term, or that the autocomplete algorithm hasn't caught up to the latest news on epstein and trump yet. However "donald trump e j" does autocomplete to "donald trump e jean carroll", which is relating to a serious scandal for the president. This isn't cherry picked I'm afraid, it really does look like intentional blocking of "hillary clinton email" from autocomplete.
None of those people had a gigantic Russia scandal though, Trump did, so you must still account for this unexplained aberration. If anything, the Trump/Russia scandal had more coverage than the Hillary e-mail scandal, so it's an even more difficult aberration to explain.
Also, if I type "hillary e" I get "Hillary Emails PDF" as an autocomplete suggestion. If I type "clinton e" I get several email suggestions "clinton email PDF" ,"clinton email film", "clinton email FOIA", "clinton email download".
Please include these confounding results in your analysis
I can believe Google is attempting to de-emphasize scandals on both sides. As an aside, I do get "russia investigation" as autosuggestion for trump.
I think the bigger problem is that these adjustments appear to be done manually. We know that's what they're doing to avoid racist autocomplete phrases (and reasonably so; I think most people would agree). Having such judgments made on specific political topics/events/scandals will inevitably result in political bias, especially in an organization whose workforce is so politically skewed to one side.
My only point is that autocomplete results for arbitrary one-off queries aren't instructive, especially because many people see different autocomplete suggestions. As I've already pointed out, it's possible to find strange results for anything if you look hard enough.
If you type "Donald Trump Ru" into an incognito search you will not get the Russian Investigation. To GP's point, it's numerology to keep focusing on this and drawing conclusions of political bias.
Actually both Obama and Clinton had major stories relating to Russia, including the failed 'reset', the annexation of Crimea during the Obama administration, Obama's "red line" on the use of chemical weapons in Syria, which was circumvented by Russia, and Obama telling Mitt Romney that Russia was not a threat in the 2012 presidential debates. So yes, they have huge stories relating to Russia. And your autocomplete results just bolster the evidence that 'hillary clinton email' has been made a special case and is not organic!
Those stories barely saw a fraction of media coverage compared to Trump/Russia. The Obama/redline thing was primarily reported as a story about Syria, not Russia. If you type "obama red line" you get plenty of Syria suggestions which makes sense.
This is the problem with search query anecdotes, it ultimately produces a subjective and pointless debate about how one should interpret search suggestions for arbitrarily selected one-off queries. There is no methodology here, and we don't even know how widespread any particular suggestion results are. Any person with an agenda will be able to cherry-pick search queries that confirm their narrative.
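For what it's worth, the tabulation step of a less anecdotal methodology is easy to write down: fix the subjects and the target term in advance, collect each subject's suggestions the same way (fresh sessions, same time window), and record presence/absence. A sketch of that step with made-up suggestion lists (the subject names and data here are purely hypothetical):

```python
def tabulate(suggestions_by_subject, target):
    """Record, per subject, whether any collected autocomplete
    suggestion contains the target term (case-insensitive)."""
    return {
        subject: any(target.lower() in s.lower() for s in suggestions)
        for subject, suggestions in suggestions_by_subject.items()
    }

# Hypothetical collected data; a real study would gather these from
# many clean browser profiles at the same time, then aggregate.
collected = {
    "politician a": ["politician a email", "politician a net worth"],
    "politician b": ["politician b age", "politician b news"],
}
print(tabulate(collected, "email"))
```

Pre-registering the query list is what separates this from cherry-picking: the queries are chosen before anyone sees the results.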
That's the point: this is not research, but whatever is going on at Google, the explanation has to account for examples like these. It's simply one observation that you cannot discount.
I just tried searching again a few times with new private windows, and "email" alternates between first and fourth suggestion for trump. But the more important point is the absence of the suggestion for clinton: we know it's been in the news extensively, we know people searched for this phrase a lot, and now "email" has been removed from the suggestions only for Clinton. I tried searching a few more U.S. politicians, and for all of them "e" autosuggests "email" somewhere between first and fourth place. So the complete absence for Clinton does not look like a generic algorithm change.
Yes we have. Further observation shows Google removes autocomplete for controversial items, like "Russian Investigation" for Donald Trump. If that example doesn't answer your question then you have confirmation bias.
The question doesn't need answering any more than any other arbitrarily selected individual query needs an answer. Why does "trump helsinki" have zero suggestions? Why does "bill oreilly sex" have zero suggestions? Why does "alex jones sandyhook" have zero suggestions? I used right-wing examples because I presume any celebrity or left-wing examples will be considered evidence in favor of your position, but there are plenty of examples all over the place.
When I type "Donald Trump R" I don't see any autocomplete for "Donald Trump Russia" despite plenty of news coverage on this topic. So what? This isn't proof of anything. I can indeed discount "one observation" because it is literally a single search query used to draw a conclusion about an insanely complex system that processes billions of queries a day. I am open to the possibility that google is manipulating search results, but to demonstrate this you need to account for many other possible search queries that produce seemingly unexpected results. Dissecting one politically charged query and claiming it is proof of google's malfeasance doesn't make sense. Anyone can string together a couple strange query results to support their own subjective narrative about what should appear and the supposed sinister machinations behind the query results.
"hillary clinton emails" is a topic that is widely published around the web. Autocomplete should pick up on this and recommend it. I even went to the trouble of typing "hillary clinton emai" and no autocomplete suggestions were brought up.
It is a rather suspicious result. Suspicious enough that it is hard not to imagine a deliberate act is behind it. I admit that I don't have any proof.
This is not the most scientific test, since previous searches are generally taken into account. Was this test conducted from a system that mostly searches for / clicks on pro-trump or anti-trump content?
Well, that's kind of the point: it's not scientific, but it's relevant. I believe this was also the example that was recently used in a Project Veritas video, with the same results.
I searched from a Firefox private window over a VPN from the Netherlands. But since the results are the same (regarding presence/absence of "email" as an autocomplete term) I don't think it matters much.
How is autocomplete a sign of bias? Take an average voter... not very plugged in... he types in "hillary clinton e" and gets no autocomplete suggestions... he thinks Hillary Clinton didn't do anything wrong with her e-mail server? Do you seriously believe this?
Curious... Did you get any results for "hillary clinton e-mails"?
I'm not sure this answers the question. Clearly Vox has a progressive (maybe originally neo-liberal?) editorial bias, but that may or may not mean that they have dishonestly distorted facts. There is a big difference between editorial bias and dishonest reporting!
>Overall, we rate Vox Left Biased due to wording and story selection that favors the left and High for factual reporting based on only one failed fact check and appropriately issuing a correction to a second. (5/15/2016) Updated (M. Huitsing 5/30/2019)
Speaking of cherry-picking, I wouldn't use autocomplete suggestions as a source for bias. Has anyone on this thread claimed bias in the search results?
I've seen plenty of politifact and snopes fact-checks that go against the conspiracies that you guys seem to think are underway in those types of organizations.
tomweingarten can't see past the tip of his ideological nose. It's gonna be such a shocker to him when his megacorp gets shattered into a million little pieces.
Here is the trend data where google autocompletes two results while not autocompleting the other two. The trend data groups the results pretty clearly; however, the autocomplete engine does not agree.
If the trend data is not being used for the autocomplete results, what is? And why?
Just FYI the completion results in the omnibox have little to do with the search engine results. Clearly the search engine produces millions of hits for “Hillary Clinton emails”. The completions are a completely separate system based on what people type in the box, not what’s in the index, and it’s laser-focused on producing interactive results.
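To make that separation concrete: a toy completer in this style ranks candidates purely by how often users have typed them, and never consults the document index at all. A minimal sketch (the data is invented; this illustrates the architecture, not Google's actual system):

```python
from collections import Counter

class QueryCompleter:
    """Toy autocomplete: completions come from a log of past typed
    queries, ranked by frequency -- the document index is never used."""

    def __init__(self, query_log):
        self.counts = Counter(query_log)

    def complete(self, prefix, k=4):
        # Most-typed past queries that start with the prefix,
        # ties broken alphabetically.
        matches = [(-n, q) for q, n in self.counts.items() if q.startswith(prefix)]
        return [q for _, q in sorted(matches)[:k]]

log = ["weather today"] * 5 + ["weather radar"] * 3 + ["web fonts"] * 2
ac = QueryCompleter(log)
print(ac.complete("we"))
```

So a phrase can produce millions of search hits yet never appear as a completion, simply because it isn't prominent in (or has been filtered from) the typed-query log.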
they were the only ones who would publish the findings without edits.
They only let him do that because it fits with their agenda. The real test is whether they would let him publish an article on Putin's corruption... without edits.
> He has also made some questionable claims about google manipulating search results to favor Hillary Clinton.
Despite it being off topic, can we define why those claims are questionable? Is their data proving those claims wrong? Because with all the Google political controversies over the past few years, and given the political donation history of Google employees, it’s highly plausible that search results are manipulated to favor certain politics over others.
If the “questionable claims” have been disproven or are inaccurate, then it would seem that you’d provide some proof. Essentially, if you are to claim the search engine was not biased towards Clinton, certainly there would be some proof of that? It’s more reasonable to suspect Google of manipulating search results than not, given the political environment at Google.
The real “questionable claim” is that Google is neutral in any way — which is kind of the entire premise of the article. If Google were completely neutral, then why would their monopoly on search need to be broken?
In short, and ignoring my ad hominem attack on his motivations, I encourage you to read/skim his two "studies" [1][2] and see how absurd they are. You might dismiss my claims and summaries as biased, but I think I was pretty open-minded towards his conclusions until I read them.
What about Project Veritas? People claim the statements by Google employees were taken out of context, but I've gone back, listened to them, looked at the videos, and it's hard to think in what context anything they said is acceptable.
Even if the specific engineers and managers in the video clips don't have the level of authority to make the changes they're talking about, it's still chilling that their attitude could be common in Google and they see political ends of their great power as being some kind of great responsibility, instead of respecting the idea of equal/diversity of opinion.
Project Veritas has historically operated by guiding people into saying something ridiculous, either by themselves acting ridiculous (and convincing the person they're talking to that they're crazy) or just driving the conversation to ridiculous areas.
Maybe this Google 'expose' is the first time they're not guilty of that, but anyone who still finds them credible after their last several blatant mischaracterizations is far more forgiving than I would be.
" ... Project Veritas has edited the video to make it seem that I am a powerful executive who was confirming that Google is working to alter the 2020 election. On both counts, this is absolute, unadulterated nonsense, of course. In a casual restaurant setting, I was explaining how Google’s Trust and Safety team (a team I used to work on) is working to help prevent the types of online foreign interference that happened in 2016. Google has been very public about the work that our teams have done since 2016 on this, so it’s hardly a revelation. The video then goes on to stitch together a series of debunked conspiracy theories about our search results, and our other products. ... "
> instead of respecting the idea of equal/diversity of opinion.
Going to go out on a limb and say I don't respect "Hitler was a bad man" and "Hitler did nothing wrong" equally. Individual employees are allowed to have opinions...even opinions I don't agree with.
>Despite it being off topic, can we define why those claims are questionable?
The claims are questionable because his methodology is questionable. If he claims Google is biased, he should have a good peer-reviewable study that proves this... not "Google is biased because it didn't auto-prompt me with 'created AIDS' when I typed in Hillary Clinton".
And he's the one making the claim that google is biased...The burden of proof is on HIM.
This is a forum for people in the tech world, right? Shouldn't we question N=1 "studies"?
The "Manipulating instant search results in favor of Hillary Clinton" claim has been independently debunked, and anyone still standing behind it is only signalling their technical illiteracy and/or political agenda by playing a victim card. [1][2][3]
That's not really off-topic - The fact that the author still supports the claim calls into question their ability to make further claims about the subject.
None of those blog posts debunk the idea that Google manipulates search results to favor particular political parties. Mashable has a statement from Google (the other two don't) saying that they don't, but why would they admit it if they were?
None of these debunk the claims made by Google employees in the Project Veritas videos.
Except you're wrong, in that you can't logically prove something doesn't exist. You can't prove that pink unicorns don't exist, just as you can't prove that political bias in the search results doesn't exist.
All you can do is disprove the claims of their existence. Someone claims there's a pink unicorn in the garage; you can check the garage and say that it is pink-unicorn-free. Someone claims political bias exists for Hillary Clinton in instant search results, and these articles disprove that claim by showing it relied on cherry-picked evidence.
Now, if you suspect that the instant search results are politically biased, then the burden of proof is on you to provide evidence of that existence - preferably without cherry-picking evidence to fit an agenda yourself. Otherwise it's just hand-waving and clickbait.
> Now, if you suspect that the instant search results are politically biased, then the burden of proof is on you to provide evidence of that existence
The proof is a senior Google employee admitting to bias and manipulating results in the Project Veritas video. There's also plenty of anecdotal evidence you can see for yourself as a user. In addition to that I know many people who work at Google and the vast majority of them have extreme political bias.
The "proof" is extremely shaky. Veritas has a record of creating these types of videos where they draw conclusions out of thin air. Here's what the person in particular has to say:
This is precisely the hand-waving I was talking about. An employee rambling in a bar does not constitute evidence of search result manipulation, especially when the person recording it is known for stretching information, inciting people to commit voter fraud, and crossing the U.S.-Mexico border dressed as the deceased Osama bin Laden to prove some point.
Likewise, the handful of people you know having political views, in an organization of 85,000 people, is not evidence that that organization's search results are biased.
If there is so much anecdotal evidence then it should be easy for you to prove, right? Or are you afraid of being disproven?
You asked for proof and I gave it to you. I'm sorry you don't like the source, but unless you want to address what Jen Gennai said, I don't think you have much of a point.
>Project Veritas has edited the video to make it seem that I am a powerful executive who was confirming that Google is working to alter the 2020 election. On both counts, this is absolute, unadulterated nonsense, of course. In a casual restaurant setting, I was explaining how Google’s Trust and Safety team (a team I used to work on) is working to help prevent the types of online foreign interference that happened in 2016. Google has been very public about the work that our teams have done since 2016 on this, so it’s hardly a revelation.