Genuinely curious - what do people want to see from a new/different rendering engine?
The web is crazy complex these days because it is an entire app platform.
The incentive for anyone building a browser is to use the platform that gives you the best web compat especially at the outset when you don’t have enough users of your app to be able to make big changes to the platform. Even Chrome didn’t start from scratch - it used WebKit!
The Chromium community has built an excellent open platform that everyone can use. We are fortunate to be able to use it.
> trusted with big, critical open source projects.
You talk as if the community has appointed Google to take care of these projects. Google is spending $$$ writing code and open sourcing it. Not the other way around.
And as with anything open source, if you don't like the direction of the code, fork it.
If I have an open source project, you don't say 'bitpush can't be trusted with the project'.
The Play Store services are not a critical open source project, though. The AOSP is still intact and maintained in accordance with the licensing.
The application signing backtrack is an issue, but more of a political problem than a technical one. America's lesson here has been written on the wall for years: regulate your tech businesses, or your tech businesses will regulate you.
> Genuinely curious - what do people want to see from a new/different rendering engine?
It should be fast when rendering HTML/CSS. I don't really care about JavaScript performance, because where possible I switch it off anyways.
It should be customizable and configurable, more than Firefox was before Electrolysis and certainly much more than Chrome.
It should support addons that can change, override, mangle, basically do everything imaginable to site content. But with configurable permissions per site.
It should support saving the current state of a website including the exact rendering at that moment for archiving. It should also support annotations (like comments, emphasis, corrections) for that. And it should support diffs for those saved states.
And if you include "the browser" in that:
I want a properly usable bookmarks manager, not the crap that current browsers have. Every bookmark should include (optionally, but easily) the exact page state at the time of bookmarking. Same for history.
Sync everything to a configurable git repo: config, bookmarks, history, open windows/tabs, annotations and saved website snapshots.
I want easily usable mass operations, like "save me every PDF from this tab group", "save all the pictures and name them sometopic-somewebsite-date-id.jpg" or "print all tabs that started with this search and all sites visited from there as PDF printouts into the documentation folder".
I want the ability to watch a website for changes, so the browser visits it in the background and notifies me if anything relevant is different (this could be a really hard thing to get right, I guess... see the rough sketch after this list).
I want "network perspectives" (for lack of a better word): show me this website as it would look from my local address, over this VPN, with my language set to Portuguese, ..., easily switchable per tab.
I want completely configurable keybindings for everything, like vimperator, but also for the bookmark manager, settings, really everything.
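On the change-watching idea a few items up: a minimal sketch of the naive polling approach, assuming a Node/TypeScript environment with the built-in fetch (the URL and interval are placeholders, and none of this is tied to any real browser API), might look like this:

    import { createHash } from "node:crypto";

    // Hypothetical sketch: poll a page and report when its content hash changes.
    const url = "https://example.com/changelog";   // placeholder URL to watch
    let lastHash: string | undefined;

    async function checkOnce(): Promise<void> {
      const response = await fetch(url);
      const body = await response.text();
      const hash = createHash("sha256").update(body).digest("hex");
      if (lastHash !== undefined && hash !== lastHash) {
        console.log(`${url} changed at ${new Date().toISOString()}`);
      }
      lastHash = hash;
    }

    checkOnce();                             // establish the baseline
    setInterval(checkOnce, 15 * 60 * 1000);  // re-check every 15 minutes

The hard part is deciding what counts as "relevant": a raw content hash like this flags every rotating ad and timestamp, so a real implementation would have to normalize or select the parts of the page it compares.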
> The web is crazy complex these days because it is an entire app platform.
I'd prefer something that's not crazy complex, that's not "an entire app platform" designed and implemented by Google. Google essentially controls the W3C (Mozilla would vanish if Google stopped funding it), and controls the monopoly rendering engine.
Half of websites are better without JavaScript and web fonts, and 99% are just text, images, and videos with maybe a few simple controls. For the other 1% I can fire up Google Chrome and suffer the whole platform.
I want a web rendering engine for the 1%, that does the simple stuff quickly and isn't a giant attack surface around 30 years of technical debt and unwanted features calling itself an "application platform."
This actually reminds me that early in the HTML5 era one of its key selling points was that you could play videos using just the <video> element. There would not be a need for Flash, Silverlight or JS. However these days it is extremely rare to come across a site that can successfully play videos with JS turned off. Complicated JS has de facto become a requirement for videos but it doesn't have to be.
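For what it's worth, the markup-only path still works wherever a site chooses to use it; a minimal example (placeholder source URL) needs no script at all:

    <!-- Plays with JavaScript disabled; the browser supplies the controls. -->
    <video controls width="640">
      <source src="https://example.com/talk.mp4" type="video/mp4">
      Your browser does not support the video element.
    </video>

What pushes sites toward script-heavy players is usually DRM, adaptive streaming and analytics, not any limitation of the element itself.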
I too have nostalgia for a time when prices were reasonable, politicians didn't philander and children respected their elders.
And yet here we are :-)
For what it's worth, despite it being /en vogue/ to rag on Google, the Chrome team has some of the most talented and dedicated folks focused on building a vibrant and interesting web for most people in the world.
I think the concerns are not about feature requests but about leveraging embrace-extend-extinguish dynamics to push the web as a whole closer to being locked into dependence on Google as a platform. There are mountains of articles on the topic, ranging from ad blockers to privacy to DRM. But the critiques are old news to anyone who's been following the topic for a while.
Incognito clearly states how it works every time you start it, including what it doesn't protect against.
If we're saying that developers can't clearly and obviously state how things work, and are instead bound by however people think they work based on not reading anything at all, we're in a lot of trouble.
Though... can we at least then get rid of every intrusive TOS screen and cookie banner in existence? Because people click past all of those too without reading them.
I'm about a half hour into this, and listening to Marc talk about newsgroups brings strong pangs of nostalgia. These days I'm a bit of a greybeard (salt-n-pepper beard?) of web browsing, but I remember getting started in the late days of Netscape, as a teenage open source hacker discovering all the Netscape engineers sitting on the npm.* newsgroups... how wild it was to be able to turn up there with a question about the browser you used every day and have someone working on it answer! Netscape didn't survive, but what a legacy.
That world lived on for quite a while through different mediums. I remember joining the webkit IRC channel in the early days and being full of wonder that folks like Hyatt were just hanging out willing to chat with me and answer questions.
There's something really special about the community and openness of folks who work on web browsers. Maybe it traces its way back to the newsgroups.
The hierarchy there was basically a reflection of the company's browser team org chart. You could find a group for every team working on the browser where many of them were having their regular technical conversations.
Just now I am realizing that Slack is a lot more like a Usenet client than it is like an IRC client.
I mean. It’s still very far from actually being NNTP, and it’s not decentralized like Usenet or anything like that.
But all this time I’ve been thinking of Slack as “better IRC, with images and links and threads”.
When really Slack is more like “fancy Usenet service with client that renders images and other attachments”. (Although on the protocol and server and client implementation level it is very different from NNTP.)
Well. At least we don’t have to inefficiently yEnc-encode attachments or split them into a bunch of pieces with par2 files. So there’s that.
node.js and Netscape are about 20 years apart ;) I also don't remember an npm. newsgroup hierarchy. As a teenager during that time I recall some binary newsgroups though :)
Have a BMW X5 with this auto-steer nonsense and have had several incidents where it abruptly turned the wheel; if I hadn't been holding it, it would have caused an accident. Ended up disabling the assistant system.
Two years ago a rental Audi A2 nearly crashed me into the tunnel wall on the right side several times. It was a rainy night, and sometimes when I drove into a tunnel the car steered really hard to the right.
Most Americans don't travel abroad. Those that are accustomed to frequent travel for business or leisure are acutely aware of the current situation because it's already blown up their year.
Quality can be worse in a project with a long release cycle:
- Engineers are motivated to slam their feature in, because if they miss the train the next one's not for 12 months.
- You get one moment per year to connect with your customers & understand how well/not well your changes worked. This means either riskier things happen or that innovation slows to a crawl.
My 2 cents, speaking from some experience working on both long-cycle and shorter-cycle projects.
Firefox has been on a six-week cycle for a few years now; clearly they found that short releases suited them, and that six weeks was, if anything, too long. Different strokes and all that.
It's not like features go straight from master -> release in 4 weeks. Changes have to go through developer-edition and beta channel first before landing in the release channel.
Why cycles at all? Why not look at what features have been integrated, whether they make up a set that you want to release, and then release?
Neither long cycles nor short cycles make any sense. Some features take a long time to develop, some take a short time to develop. Sometimes features that take a long time to develop aren't user-facing enough to be worth releasing for, and sometimes a quick fix has a huge impact that's worth a release, like a patch for a vulnerability that's actively being exploited by spreading malware. Features simply don't line up with a single length of time. The problem isn't long or short cycles, it's cycles.
Generally speaking, you can release based on the calendar or based on whenever you think the feature set warrants it. You are advocating the latter which works well on low traffic projects. The former is a better idea on high traffic projects where there's always something worth shipping whether it's a new translation, a bug fix, or new feature.
It depends on the project, but in larger projects the calendar approach means politics takes a back seat as no one can hold back the release if their feature hasn't been merged yet due to blocking issues. And it helps keep the change-set small, and hence lower risk, if you release more often.
Also, Firefox already has a nightly release stream. I use the Aurora stream which releases a few times a week and have almost never had an issue with this frequency. I don't think a monthly release cycle is going to be an issue.
Because regular, predictable releases mean that developers know they can always "catch the next train", and users know they can plan around predictable upgrade schedules.
> Because regular, predictable releases mean that developers know they can always "catch the next train"
This is an argument for frequent releases, not regular, predictable releases.
> users know they can plan around predictable upgrade schedules.
I'm not sure this is actually how users plan upgrades.
The majority of individuals probably never turn off the auto-update flag. Planning doesn't enter the equation.
For organizations, my guess is that most organizations will try to build their upgrade process around security, but the reality will rarely be so clean. When I worked in IT we'd get computers into our shop that hadn't been updated. Period. We'd upgrade our provisioning images when there was a notable security patch, and besides that, we just would run updates on every machine every week at 2am Sunday night: that way it didn't interfere with users, but if something went wrong, we were on it with the full team first thing Monday morning. But if machines were turned off or whatever, they wouldn't run the updates. At no point did we ever even check the release schedule of a piece of software: the updates happened on our time, and theirs was irrelevant.
I didn't work in IT for very long, though, so someone with more IT experience should correct me if I'm wrong.
Is "releasing when it's ready" basically what was done in the past for e.g. CD-distributed software?
I imagine that could work well in some cases, but it also allows corporate bureaucracy and/or marketing teams to determine when things get released at larger scales and that might not be so ideal.
Is anyone concerned that, even in the face of re-training, two aircraft could find themselves in this position within the first year of operations?
Let's say you were on one of these aircraft and the pilots were able to recover. How terrifying would that be? This is the designed behavior?
It takes zero effort in IntelliJ (pictured above).
I remember reading a suggestion about proportional fonts in a discussion on HN about code editor preferences. Switched to them several years ago and never looked back.
Chrome team co-founder/engineer/etc. here. Glad you found it useful! Some components like our net stack are particularly cleanly factored. Others have more room for improvement.
I would say that at the beginning of the project (2006-2008) we didn't have so much of a focus on platform design, just on shipping a browser as quickly as possible. Some of the abstractions from that era haven't stood the test of time as the project has scaled to many platforms, features etc.
Over the course of time we've had various refactoring projects to try and pay down some of the technical debt. The first major one was the "content refactor" from 2011. This led to the separation of the multi-process browser shell from the UI layer, which has allowed for other chromium-derived browser apps to emerge.
Today, we've observed that even this layer is a bit too complicated, so we're running more projects to try and modularize it a bit more. My mental model is that the browser is kind of like a set of system services for an ephemeral app runtime, and it's good to imagine what the APIs & separation between those things should be. To aid this we've developed a new suite of IPC tools which are way more useful than the original stuff we have used for much of the lifetime of Chrome.
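To make that "system services" mental model a bit more concrete, here's a purely illustrative sketch (my own simplification in TypeScript for this comment, not actual Chromium or Mojo code) of a narrow service interface, a broker that hands out implementations, and calling code that never sees the concrete network stack:

    // Invented names, for illustration only.
    interface NetworkService {
      fetchResource(url: string): Promise<Uint8Array>;
    }

    interface ServiceBroker {
      // In a real multi-process browser this would hand back a handle to a
      // sandboxed process over IPC; here it just returns an in-process object.
      bind<T>(serviceName: string): T;
    }

    class InProcessBroker implements ServiceBroker {
      private registry = new Map<string, unknown>();

      register(serviceName: string, impl: unknown): void {
        this.registry.set(serviceName, impl);
      }

      bind<T>(serviceName: string): T {
        const impl = this.registry.get(serviceName);
        if (impl === undefined) {
          throw new Error(`no implementation registered for ${serviceName}`);
        }
        return impl as T;
      }
    }

    // The "renderer-side" code depends only on the interface.
    const networkImpl: NetworkService = {
      async fetchResource(url: string): Promise<Uint8Array> {
        const response = await fetch(url);
        return new Uint8Array(await response.arrayBuffer());
      },
    };

    const broker = new InProcessBroker();
    broker.register("network", networkImpl);

    const network = broker.bind<NetworkService>("network");
    network.fetchResource("https://example.com/").then((bytes) => {
      console.log(`fetched ${bytes.length} bytes`);
    });

The shape is the point: small interfaces, a broker in the middle, and implementations that are free to move into their own sandboxed processes behind IPC without the callers changing.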
Anyway this kind of thing requires an ongoing investment and a set of people who thrive on the art of API design and in grungy, challenging refactoring work. I probably have many more thoughts on this topic but this'll do for right now :-)
Absolutely stoked to read your response. Thank you sir.
There is a dearth of quality conversations on the internet about good code in a real-world messy context, mostly because the people who're doing serious work don't have the time to talk about it. It would be a good thing if you wrote more. In fact you folks should be writing books!
In May 2015, a few of us Chrome old-timers, reacting to the complexity of the code, decided we should try and build something new. This is nothing new for me personally (having worked on Netscape, Firefox and Chrome, and various false starts along the way). We decided to design a browser based on a service-oriented architecture, using our new IPC and bindings tool (Mojo). This project was called Mandoline. We got a shell up and running that could complete some of Chrome's telemetry test suite. Performance was good. The architecture was clean. The problem was, the browser didn't do all that much. While a team of 6-7 people might have been able to build a browser 10-15 years ago, today browsers are just too complex (in feature requirements).
So our options were to try and convince the Chrome team (huge org) that they should drop everything and help us build this prototype into something real (unlikely, many past examples of failure of this kind of thing - see: Gecko transition/Netscape 6), or to find a way to bring this architecture into Chrome. The first not being a real option, we settled on the latter.
OK so you might ask - why labor under this delusion of building a new browser from scratch at all? Why not just stay within the confines of Chrome from the start, and look at the incremental projects that can be done to pragmatically improve things? My answer is that incrementalism should not be a destination or a goal. It's a tool that helps you get somewhere interesting. If you don't know where you're going, you're lost and incrementalism is just a delusion to trick you into thinking you're making meaningful progress. In the suffocating confines of a massive codebase, it can sometimes be hard to see the forest for the trees. It can be very valuable to step aside and try something else. Stepping aside can be creating a branch and hacking away liberally, or creating something entirely new. The other benefit of doing this is that it doesn't distract or further complicate the shipping product. But then bring the learnings back into the main line. And hopefully in your project you have leads on the main line willing to learn from such discoveries. On the Chrome team we're fortunate that we do.
So this is what we're doing now. We're bringing a service-oriented architecture to Chrome. A few of us have a pretty good mental picture in our heads of what the end state looks like (roughly) and we're using incrementalism to nudge the Chromium codebase there over a few years. The value in this approach is we get to validate our ideas against all the different platforms & features Chrome supports and test it on all of Chrome's users. It means if our changes land & stick that they really are by definition "good". By the end (if there ever really is one) we will have rebuilt much of the system architecture, while shipping every 6 weeks.
I was interested recently to read a similar story here about the plans in Firefox to integrate some of Servo into Gecko. The rationale was very similar. The reality is that you can't burn your user base by neglecting them while you build the massive new thing, or by expecting them to switch to something else. Instead you have to embrace the complexity & figure out how to work within it, while not giving up your dreams.
wow, that was amazing. Really nice to hear interesting stories like this.
Your take on incrementalism is interesting. I remember Joel Spolsky used to say to never rewrite software from scratch, as you will introduce new bugs. But I guess a well-balanced approach is always beneficial.
And yea, modularising Firefox and slowly replacing part by part with Servo is indeed a great idea.
I'd like to add my two cents here: I can agree with the "don't rewrite from scratch" rule, but only depending on the context -- and only when looking at all the details of a given project.
When bengoodger says "reacting to the complexity of the code", I doubt this is as messy as what can be found in some private codebases produced over a decade by less-than-Google-level employees who have since left. So while the never-rewrite myth has some value in at least somewhat good code bases, and depending on the angle of view (even to judge whether an approach constitutes a rewrite or not), it also has a rather big cost when managers try to use it on projects far beyond the point of no return, to justify forcing engineers to swim in the septic tank instead of doing any valuable work.
After all, for each Netscape (which, BTW, involved quite a mix of technical and commercial factors, and even then Mozilla is still with us today...) I could ask why a corresponding NT never evolved from consumer Windows... (some parts were shared, but again: the angle of view...)
Not a counter-point at all, just an aside: consumer Windows and NT were fundamentally different software. The "Showstopper! The Breakneck Race to Create Windows NT and the Next Generation at Microsoft" book is a packed account of what happened at Microsoft at the time and covers the differences. Very interesting read.