> But you, Jeff and Susam have all tried to say that SSGs are generally better, easier, safer or faster.
I'd just like to clarify that this is not what I was trying to say. In fact, once a website needs server-side programming to support interactivity (comments, subscription, etc.), I don't think SSGs are necessarily better. I think we both agree there to an extent. I think there are tradeoffs.
There is a certain inherent complexity in addressing concerns like performance, portability, etc. that, as waterbed theory suggests, just gets pushed to different places depending on whether we choose an SSG-based solution or a non-SSG-based solution. What I have been trying to do in my other comments is to explain why I made the arguably questionable choice to use an SSG for content pages while relying on server-side programming for comments and subscriber forms.
I certainly see the appeal of serving the entire site, including both content pages and comment forms, using server side programming. It does simplify the 'architecture' in some ways. However, since I maintain my personal website as a hobby, I need to make the tradeoffs that feel right to me. I realise this is not a very satisfying answer to the question of whether a hybrid solution like mine (SSG plus server side programming) has merits. But I was never trying to argue its merits in the first place.
This is all entirely fair, thank you. I understand, and I didn't mean to misrepresent your position.
> However, since I maintain my personal website as a hobby, I need to make the tradeoffs that feel right to me. I realise this is not a very satisfying answer to the question of whether a hybrid solution like mine (SSG plus server side programming) has merits. But I was never trying to argue its merits in the first place.
Hey, I'm a firm believer in the idea that the best solution for a personal website is the one that's the most fun for the owner to maintain! The reminder that it's not all about technical merit makes yours one of the more satisfying answers - thanks again for the considered responses :)
> SSGs are good for static sites with no interactivity or feedback. If you want interactivity or feedback, someone (you or a 3rd party service provider) is going to have to run a server.
For my website, I do both. Static HTML pages are generated with a static site generator. Comments are accepted using a server-side program I have written using Common Lisp and Hunchentoot.
I have always had a comments section on my website since its early days. Originally, my website was written as a set of PHP pages. Back then, I had a PHP page that served as the comment form. So later when I switched to Common Lisp, I rewrote the comment form in it.
It's a single, self-contained server-side program that fits in a single file [1]. It runs as a service [2] on the web server, serves the comment and subscriber forms, accepts the form submissions and writes them to text files on the web server.
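For readers curious what that shape looks like, here is a minimal sketch (not the actual program linked above) of a Hunchentoot handler that accepts a comment form POST and appends it to a text file for review. The field names, file path and port are hypothetical.

    ;; Assumes Hunchentoot is loaded, e.g. via (ql:quickload "hunchentoot").
    ;; All names, paths and fields below are made up for illustration;
    ;; real code would also validate and sanitise the inputs.
    (defun save-comment (post name text)
      "Append one submitted comment to a per-post text file."
      (with-open-file (out (format nil "/var/data/comments/~a.txt" post)
                           :direction :output
                           :if-exists :append
                           :if-does-not-exist :create)
        (format out "~a|~a|~a~%" (get-universal-time) name text)))

    (hunchentoot:define-easy-handler (submit-comment :uri "/comment") (post name text)
      (setf (hunchentoot:content-type*) "text/plain")
      (save-comment post name text)
      "Thank you! Your comment has been recorded for review.")

    (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 8080))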
Nice! So you weren't forced to rewrite a comments solution when you shifted to an SSG, you just coincidentally had to do them at the same time?
It looks like you did exactly what Jeff did: got fed up with big, excessive server sides, went the opposite way, and wrote and deployed your own minimal server-side solutions instead.
There's nothing wrong with that, but what problem were you solving with the SSG part of that solution? Why would you choose to pregenerate a bunch of stuff which might never get used any time anyone comments or updates your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
The common selling points for SSGs are often:
- SSGs are easier (doesn't apply to you because you had to rewrite all your comment stuff anyway)
- cheaper (doesn't apply to you since you're already running a server for comments, and markdown SSR on top would be minimal)
- fewer dependencies (doesn't apply to you, the SSG you use is an added dependency to your existing server)
This largely applies to Jeff's site too.
Don't get me wrong, from a curious nerd perspective, SSGs presented the fun challenge of trying to make them interactive. But now, in 2026, they seem architecturally inappropriate for all but the most static of leaflet sites.
> [...] what problem were you solving with the SSG part of that solution? Why would you choose to pregenerate a bunch of stuff which might never get used any time anyone comments or updates your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
I was not trying to solve a specific problem. This is a hobby project and my choices were driven mostly by personal preference and my sense of aesthetics.
Moving to a fully static website made the stack simpler and more enjoyable to work with. I did not like having to run a local web server just to preview posts. Recomputing identical HTML on every request also felt wasteful (no matter how trivially) when the output never changes between requests. Some people solve this with caching but I prefer fewer moving parts, not more. This is a hobby project, after all.
There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
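As a rough illustration of the rate-limiting point (not the author's actual configuration; the zone name, numbers and paths are made up), Nginx can cap per-client request rates on static files with something like:

    # Goes in the http block: allow a generous per-IP rate, since Nginx can
    # serve static files cheaply. Numbers here are arbitrary examples.
    limit_req_zone $binary_remote_addr zone=static:10m rate=50r/s;

    server {
        listen 80;
        location / {
            root /srv/www/html;                      # hypothetical document root
            limit_req zone=static burst=100 nodelay; # cap each client's rate
        }
    }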
An added bonus is portability. The entire site can be browsed locally without a server. In fact, I use relative internal links in all my HTML (e.g., '../../foo/bar.html' instead of '/foo/bar.html') so I can browse the whole site directly from the local filesystem, from any directory, without spinning up a web server. Because everything is static, the site can also be mirrored trivially to hosts that do not support server-side programming, such as https://susam.github.io/ and https://susam.codeberg.page/, in addition to https://susam.net/. I could have achieved this by crawling a dynamic site and snapshotting it, which would also be a perfectly acceptable solution. Static site generation is simply another acceptable solution; one that I enjoy working with.
> That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times.
This, definitely.
I think until you experience your first few DDoSes, you don't think about the kind of gains you get from going completely static (or heavily caching, sometimes at the expense of site functionality).
Jeff, you said in another thread that you don't get DDoS protection from the SSG side on Jeffgeerling, but from the Cloudflare side. My whole point is that SSGs impact interactive site functionality because of how they work (basically like a cache), and you've just said here that you accept that caching often comes at the expense of functionality.
I feel like I'm going crazy here: you're both advocating for SSGs, but when pressed, it sounds like the only benefits you ever saw were to problems from many years ago, or problems which you already have alternate and more flexible solutions to.
Regardless, I'm going to hunt you down and badger you both with this thread in a few years to see where we all stand on this! Thanks again :)
Thanks for the considered response, I really appreciate you taking the time to respond. I'm hyper aware my position feels controversial and at odds with people like you and Jeff, for whom I have immense respect.
Your post boils down to "I evolved into this from problems I had in the 2010 - 2020 period".
> There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
I really appreciate this explanation. It mirrors my experiences. But it's literally saying you did it for performance reasons at the time, and that doesn't matter now. You then say it allowed you to avoid caching, and that's a success because caching is extra moving parts which you want to avoid.
The SSG is an extra moving part, and it basically is a cache, just at the origin rather than the boundary.
Portability is a good point. My preferred/suggested alternative to SSG is dynamic rendering and serving of markdown content, and that gives me the same portability. Most markdown editors now respect certain formats of relative wiki links.
> Static site generation is simply another acceptable solution; one that I enjoy working with.
You are right. Fun is the primary reason why I'm being so vocal about this, because I spent 5 - 10 years saying and thinking and feeling all the things SSG advocates are saying and thinking and feeling about SSGs. I spent a few years with Jekyll, then Hugo, a brief stint with 11ty, and also Quartz. But when I wanted to start from scratch and did a modern, frank, practical analysis for greenfielding from a bunch of markdown files last year, I realised SSGs don't make sense for 99% of sites, but are recommended to 99% of people. If you already build, run and maintain a server side, SSG is just an extra step which complicates interactivity.
Having said all that, I don't really share my stuff or get any traffic though, so whilst I might be having fun, you and Jeff both have the benefit of modern battle-testing of your solutions! My staging subdomain is currently running a handcrafted SSR markdown renderer. I've been having fun combining it with fedify to make my stuff accessible over ActivityPub using the same markdown files as the source of truth. It might not work well or at all (I don't even use Mastodon or similar) but it's so, so much fun to mess around with compared to SSG stuff. If fun coding is your motivator, you should definitely at least entertain the throw-out-the-SSG way of thinking, y'know, for fun :)
> I'm hyper aware my position feels controversial and at odds with people like you and Jeff, for whom I have immense respect.
Thank you for the kind words. I don't find your position to be controversial at all. Preferring a server-side solution to serve a website, especially when one is going to do server-side programming for interactivity anyway, sounds like a perfectly reasonable position to me. If anything, I think it is my preference for two different approaches for the content and the comment forms that requires some defence and that's what I attempted in my previous comment.
> But it's literally saying you did it for performance reasons at the time, and that doesn't matter now.
Actually, it still matters today because it's hard to know when the next DDoS attack might come.
> The SSG is an extra moving part, and it basically is a cache, just at the origin rather than the boundary.
I agree and I think that is a very nice way to put it. I realise that the 'fewer moving parts, not more' argument I offered earlier is incorrect. You are right that I am willing to incur the cost of an SSG as an additional moving part while being less willing to add a caching layer as an additional moving part. So in the end it really does come down to personal preferences. The SSG happens to be a moving part I enjoy and like using for some of the benefits (serverless local browsing, easy mirroring, etc.) I mentioned in my earlier comment. While mentioning those benefits, I also acknowledged that there are perfectly good non-SSG ways of attaining the same benefits too.
> My preferred/suggested alternative to SSG is dynamic rendering and serving of markdown content, and that gives me the same portability. Most markdown editors now respect certain formats of relative wiki links.
Yes, sounds like a good solution to me.
> If you already build, run and maintain a server side, SSG is just an extra step which complicates interactivity.
Yes, I agree with this as well. For me personally, the server-side program that runs the comment form feels like a burden. But I keep it because I do find value in the exchanges that happen in the comments. I have occasionally received good feedback and corrections there. Sometimes commenters share their own insights and knowledge which has helped me learn new things. So I keep the comments around. While some people might prefer one consolidated moving part, such as a server-side program that both generates the pages on demand and handles interactivity, I lean the other way. I prefer an SSG and then reluctantly incur an additional moving part in the form of a server-side program to handle comment forms. Since I lean towards the SSG approach, I have restricted the scope of server-side programming to comment forms only.
> If fun coding is your motivator, you should definitely at least entertain the throw-out-the-SSG way of thinking, y'know, for fun :)
I certainly do entertain it. I hope I have not given the impression that I am recommending SSGs to others. In threads like this, I am simply sharing my experience of how I approach these problems, not suggesting that my solution is better than anyone else's. Running a personal website is a labour of love and passion, and my intention here is to share that love for this hobby. The solution I have chosen is just one solution. It works for me and suits my preferences but I do not mean to imply that it is superior to other approaches. There are certainly other equally good, and in many cases better, solutions.
A few years ago, I decided to migrate my personal website to a Common Lisp (CL) based static site generator that I wrote myself. In hindsight, it is one of the best decisions I have made for my website. It started out at around 850 lines of code and has gradually grown to roughly 1500 lines. It statically renders blog posts, arbitrary pages, a guestbook, comment pages, tag listings, per-tag RSS feeds, a consolidated RSS feed, directory listing pages and so on.
I have found it an absolute joy to maintain this little piece of 'machinery' for my website. The best part is that I understand every line of code in it. Every line of code, including all the HTML and CSS, is handcrafted. This gives me two benefits. It helps me maintain my sense of aesthetics in every byte that makes up the website. Further, adding a new feature or section to the site is usually quite quick.
I built the generator as a set of layered, reusable functions, so most new features amount to writing a tiny higher level function that calls the existing ones. For example, last month I wanted to add a 'backlinks' page listing other pages on the web that link to my posts and it took me only about 40 lines of new CL code and less than 15 minutes from wishing for it to publishing it.
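To make the 'layered functions' idea concrete, here is a purely hypothetical sketch of what such a higher-level function might look like; none of these names come from the actual generator.

    ;; Lower layers: tiny reusable helpers.
    (defun render-page (title body)
      "Wrap BODY in a minimal HTML page."
      (format nil "<!DOCTYPE html><html><head><title>~a</title></head><body>~a</body></html>"
              title body))

    (defun write-page (path html)
      "Write the finished HTML string to PATH."
      (with-open-file (out path :direction :output
                                :if-exists :supersede :if-does-not-exist :create)
        (write-string html out)))

    ;; Higher layer: a new feature is a small function composing the helpers.
    (defun make-backlinks-page (links output-path)
      "LINKS is a list of (url title) pairs; emit a static backlinks page."
      (let ((body (format nil "<h1>Backlinks</h1><ul>~{<li><a href=\"~a\">~a</a></li>~}</ul>"
                          (loop for (url title) in links append (list url title)))))
        (write-page output-path (render-page "Backlinks" body))))

    ;; Example:
    ;; (make-backlinks-page '(("https://example.com/a" "Example A")) "backlinks.html")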
Over the years this little hobby project has become quite stable and no longer needs much tinkering. It mostly stays out of the way and lets me focus on writing, which I think is what really matters.
Only problem I find with self-hosted blogs and certain personalities like mine is that I spend more time tinkering with the blog engine than actually blogging.
I ended up migrating back to a hosted solution explicitly because it doesn't allow me such control, so the only thing I can do is write instead of endlessly tinkering with the site.
> Only problem I find with self-hosted blogs and certain personalities like mine is that I spend more time tinkering with the blog engine than actually blogging.
I ended up separating out a "plumbing" blog, from the "real" blogs, with no discussion of the tinkering allowed on the real ones - so the plumbing blog grew in details but didn't "count" for the non-meta blogging I was trying to accomplish. A little bit of sleight-of-hand but it worked for me...
In my case it was less about the discussion of the tinkering and more the tinkering itself. I'd spend all my blogging time tinkering with the site, to the point where it's never ready and never actually deployed. As of right now, in my projects folder I have an (actually finished and usable) Ghost theme and a handful of Wagtail blog projects in various states of functionality. None of them have actually been deployed. (At least I learnt enough Wagtail to be dangerous, so I guess that's a win.)
I ended up subscribing to Bear Blog and calling it a day. In fact I need to delete those half-baked attempts so I am never tempted to get back to them.
Honestly this is so true. I have a few blogs for various reasons, and the hosted ones are where I post most because it’s so effortless to do. There’s so much less inertia. You can go even further and post by email (I use Pagecord) which removes virtually all barriers to posting.
That said, building your own static site and faffing with all the tech is generally an enjoyable distraction for most techies
I'm also happy with the freedom and stability of a single-purpose static site generator. My previous project, Tclssg, was public and reusable from the start. This had big upsides: I learned to work with users and was compelled to implement features I wouldn't have otherwise. I actually wrote documentation. Seeing others use it was one of the best parts of the work. However, it also put constraints on what I could do. I couldn't easily throw away or radically change features, like how templates are rendered by default. With an SSG that's only for my site, I can.
If I were maintaining multiple large sites or working with many collaborators, I'd rely on something standard or extract and publish my SSG. For a personal site, I believe custom is often better.
The current generator is around 900 SLOC of Python and 700 of Pandoc Lua. The biggest threats to stability have been my own rewrites and experimentation, like porting from Clojure to Python. I have documented its history on my site: https://dbohdan.com/about#technical-history.
I did the same thing, but implemented my site generator in Go.
My site has grown by a lot over the years, but I can still build it from scratch (from MD files, HTML snippets and static files) in less than one second!
It also has an RSS feed generator and it can highlight code in most programming languages, which is important to me as I write posts on many languages.
I did try Hugo before I went on to implement my own, and I got a few things from Hugo into mine, but Hugo just looked far too overengineered for what I wanted (essentially, easy templating with markdown as the main language but able to include content from other files either in raw HTML or also markdown, with each file being able to define variables that can be used in the templating language, which has support for the usual "expression language" constructs). I used the Go built-in parser for the expression language so it was super easy to implement it!
Ha. Well, https://taoofmac.com was ported to Hy (https://github.com/rcarmo/sushy) in a week, then I eventually rewrote that in plain Python to do the current static site generator - so I completely get it.
I am now slowly rebuilding it in TypeScript/Bun and still finding a lot of LISP-isms, so it’s been a fun exercise and a reminder that we still don’t have a nice, fast, batteries-included LISP able to do HTML/XML transforms neatly (I tried Fennel, Julia, etc., and even added Markdown support to Joker over the years, but none of them felt quite right, and Babashka carries too much baggage).
If anyone knows about a good lightweight LISP/Scheme dialect that has baked in SQLite and HTML parsing support, can compile to native code and isn’t on https://taoofmac.com/space/dev/lisp, I’d love to know.
The comment form is implemented as a server-side program using Common Lisp and Hunchentoot. So this is the only part of the website that is not static. The server-side program accepts each comment and writes it to a text file for manual review. Then I review the comments and add them to my blog.
In the end, the comments live like normal content files in my source code directory just like the other blog posts and HTML pages do. My static site generator renders the comment pages along with the rest of the website. So in effect, my static site generator also has a static comment pages generator within it.
Not the poster, but what I did was to have a CGI script which would receive incoming comments and write them to "/srv/data/blog/comments/XXX/TIMESTAMP.txt" or similar.
The next time I rebuilt the blogs the page "XXX" would render a loop of all the comments, ordered by timestamp, if anything were present.
The CGI would send a "thanks for your comment" reply to the submitter and an email to myself. If the comment were spam I'd just delete the static file.
You could have a _somewhat_ static blog and incorporate something like Webmentions[0] for comments or replies. For example, Molly White's microblog[1] shows the following text below the post:
Have you responded to this post on your own site? Send a webmention[0]! Note: Webmentions are moderated for anti-spam purposes, so they will not appear immediately.
I find this method to be a sweet spot: you generate content at your own pace while still allowing other people to "post" to your website, without relying on a third-party service like Disqus.
On mine, I don't. Any interactivity is too much hassle for me to worry about wrt moderation etc. I also don't particularly care what random people have to say. If my friends like what I wrote, they can tell me on Signal or comment on the Bluesky post when I share the link.
> I decided to migrate my website to a Common Lisp based static site generator that I wrote myself.
Many programmers' first impulse when they start[0] to blog is to write their own blog engine. Props to you for not falling into that particular rabbit hole and actually using - as opposed to just tinkering on - that engine.
[0] you said you migrated it, implying you already had the habit of blogging, but still,
Your wife’s Python version is quite impressive. It wouldn’t have occurred to me to do the simple thing and just do some string-replacement targeted at a narrow use-case instead of using a complicated templating engine.
> just do some string-replacement targeted at a narrow use-case instead of using a complicated templating engine.
A neat little middle ground between "string replacements" and "full-blown templating" is doing something like what hiccup introduced, basically using built-in data structures as the template. Hiccup looks something like this:
(h/html [:span {:class "foo"} "bar"])
And you get the power of templates, something simpler than a "templating engine", with the extra benefit of being able to use your normal programming language functions to build the "templates".
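For illustration only, the same 'data as template' idea can be hand-rolled in a few lines of Common Lisp (this is neither hiccup nor niccup; the node format (tag attr-plist children...) is invented here):

    ;; Render a nested-list tree such as (:span (:class "foo") "bar") to HTML.
    ;; Attributes are a plist and must always be present (use () for none).
    (defun render (node)
      (if (stringp node)
          node
          (destructuring-bind (tag attrs &rest children) node
            (format nil "<~(~a~)~{ ~(~a~)=\"~a\"~}>~{~a~}</~(~a~)>"
                    tag attrs (mapcar #'render children) tag))))

    ;; (render '(:p () "Hello, " (:em () "world") "!"))
    ;; => "<p>Hello, <em>world</em>!</p>"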
I also implemented something similar myself (called niccup) that also does the whole "data to html" shebang but with Nix and only built-in Nix types. So for my own website/blog, I basically do things like this:
Similar to my "Go 101" books website, about 1000 line of Go code (started from 500 lines at 9 years ago). The whole website can be built into a single Go binary.
Writing a blog generator is not only fun but also grants ultimate control, such as static syntax highlighting, equation rendering and custom build steps. Highly recommend!
Thank you. My current home page has about 70 entries. The HTML size is about 7 kB and the compressed transfer size is about 3 kB.
I created a test page with 2000 randomly generated entries here: <https://susam.net/code/test/2k.html>. Its actual size is about 240 kB and the compressed transfer size is about 140 kB.
It doesn't seem too bad, so I'll likely not introduce pagination, even in the unlikely event that I manage to write over a thousand posts. One benefit of having everything listed on the same page is that I can easily do a string search to find my old posts and visit them.
How large does the canvas need to get before pagination makes sense?
Modern websites are enormous in terms of how much needs to be loaded into memory. Sure, not all of it is part of the rendered document, but is there a limit to the canvas size?
I'm thinking you could probably get 100,000+ entries and still be able to use CTRL+F on the site in a responsive way, since even at 100,000+ entries you're still only about 10% of Facebook's "wall" application page. (Without additional "infinite scroll" entries)
I started blogging with emacs and an org-based solution, and it was horrid.
I had a vision of what I wanted the site to look like, but the org exporter had a default style it wanted. I spent more time ripping out all the cruft that the default org-html exporter insisted on adding than it would have taken to just write a new blog engine from scratch and I wish I had.
There's a way to set a custom export template, but I couldn't figure it out from the docs. I found and still do find the emacs/org docs to be poorly written for someone who doesn't already understand the emacs internals, and I wasn't willing to spend the time to become an emacs internals expert just to write a blog.
So I lived with a half-baked org->pandoc->html solution for a while but now I'm on Jekyll and much happier with my blogging experience.
My blog moved from #99 in 2024 to #41 in 2025. Although I have never had any intention of blogging consistently, it is nice to know that I had a good blogging year. :)
This is pretty much how I began developing websites too. Except it was 2001 instead of 2026. And it was ASP (the classic ASP that predates ASP.NET) instead of Python. And I had a Windows 98 machine in my dorm room with Personal Web Server (PWS) running on it instead of GCP.
It could easily have been a static website, but I happened to stumble across PWS, which came bundled with a default ASP website. That is how I got started. I replaced the default index.asp with my own and began building from there. A nice bonus of this approach was that the default website included a server-side guestbook application that stored comments in an MS Access database. Reading through its source code taught me server-side scripting. I used that newfound knowledge to write my own server-side applications.
Of course, this was a long time ago. That website still exists but today most of it is just a collection of static HTML files generated by a Common Lisp program I wrote for myself. The only parts that are not static are the guestbook and comment forms, which are implemented in CL using Hunchentoot.
I remember ASP (application service provider, before cloud became synonymous with hosting); you are making me nostalgic. Back then I was in sales, selling real time inventory control, CRM and point of sale systems distributed over Citrix Metaframe in a secure datacenter. Businesses were just starting to get broadband connections. I would have to take customers to the datacenter to motivate them to let us host their data. Eight years later, Google bought the building for $1.8b and eventually bought adjacent buildings as well.
We are talking about different ASPs. I am referring to Active Server Pages (ASP), the server-side scripting language supported by Personal Web Server (PWS) and Internet Information Services (IIS) on Windows. It is similar to PHP: Hypertext Preprocessor (PHP) and JavaServer Pages (JSP) but for the Windows world. I began developing websites with ASP. Over the years, I dabbled with CGI, PHP, JSP, Python, etc. before settling on Common Lisp as my preferred choice for server-side programming.
Also, don't forget to set up an RSS or Atom feed for your website. Contrary to the recurring claim that RSS is dead, most of the traffic to my website still comes from RSS feeds, even in 2̶0̶2̶5̶ 2026! In fact, one of my silly little games became moderately popular because someone found it in my RSS feed and shared it on HN. [1]
From the referer (sic) data in my web server logs (which is not completely reliable but still offers some insight), the three largest sources of traffic to my website are:
1. RSS feeds - People using RSS aggregator services as well as local RSS reader tools.
2. Newsletters - I was surprised to discover just how many tech newsletters there are on the Web and how active their user bases are. Once in a while, a newsletter picks up one of my silly or quirky posts, which then brings a large number of visits from its followers.
3. Search engines - Traffic from Google, DuckDuckGo, Bing and similar search engines. This is usually for specific tools, games and HOWTO posts available on my website that some visitors tend to return to repeatedly.
RSS is my preferred way to consume blog posts. I also find blogs that have an RSS feed to be more interested in actually writing interesting content rather than just trying to get views/advertise. I guess this makes sense—hard to monetize views through an RSS reader.
It's funny: back in the Google Reader days, monetizing via RSS was quite common. You'd publish the truncated version to RSS and force someone to visit the site for the whole version, usually just in exchange for ad views. Honestly, while it wasn't the greatest use of RSS, it was better than most paid blogs today being ad-wall pop-up pay-gate nightmares of UX.
Even the short snippets are better if one wants to aggregate interesting topics and then read what seems interesting. Not just endlessly scroll each site individually.
Please also enable CORS[1] for your RSS feed. (If your whole site is a static site, then please just enable CORS site-wide. This is how GitHub Pages works. There's pretty much no reason not to.)
Not having CORS set up for your RSS feed means that browser-based feed readers won't be able to fetch your feed to parse it (without running a proxy).
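For a feed served by Nginx, that boils down to one extra response header; a minimal sketch (the path is a placeholder):

    # Allow browser-based feed readers on other origins to fetch the feed.
    location = /feed.xml {
        add_header Access-Control-Allow-Origin "*" always;
    }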
If you want to get a red line, you need to use red ink. If you use blue ink, you'll get blue lines. And I can draw you a cat. (I'm no artist, but I can give it a try.) But it won't be a line anymore. A line and a cat: those are two different things.
Now that browser developers did their best to kill RSS/Atom...
Does a Web site practically need to do anything to advertise their feed to the diehard RSS/Atom users, other than use the `link` element?
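For reference, the `link` element in question is the usual feed autodiscovery tag in the page head, along these lines (the href is a placeholder):

    <!-- Feed autodiscovery: feed readers and some browsers look for this. -->
    <link rel="alternate" type="application/atom+xml"
          title="Atom feed" href="/feed.xml">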
Is there a worthwhile convention for advertising RSS/Atom visually in the page, too?
(On one site, I tried adding an "RSS" icon, linking to the Atom feed XML, alongside all the usual awful social media site icons. But then I removed it, because I was afraid it would confuse visitors who weren't very Web savvy, and maybe get their browser displaying XML or showing them an error message about the MIME content type.)
I use RSS Style[1] to make the RSS and Atom feeds for my blog human readable. It styles the XML feeds and inserts a message at the top about the feed being meant for news readers, not people, thus technically making it "safe" for less tech-savvy people.
Browsers really should have embraced XSLT rather than abandoned it. Now we're stuck trying yet again to reinvent solutions already handled by REST [1].
XSLT is the solution of domain specialists and philosophers. Abandoning it is the vote of the market and market interests, the wisdom of crowds at work. This is the era of scale, not expertise; enjoy the fruits.
Effectively no one was using XSLT at any point (certain document pipelines or Paul Ford-like indie hackers being the exceptions that proved the rule). Browsers keep all kinds of legacy features, of course, and they could well have kept this one, and doing so would've been a decision with merit. But they didn't, and the market will ratify their decision. Just like effectively no one was using XSLT, effectively no one will change their choice of browser over its absence.
It's hard to judge usage when browsers stopped maintaining XSLT at the 1.0 spec. V1.0 was very lacking in features and is difficult to use.
Browsers also never added some of the most fundamental features needed to support XSLT well. Page transitions and loading state are particularly rough in XSLT in my experience.
Blizzard used to use it for their entire WoW Armory website to look people up. They converted off it years ago, but for a while they used XML/XSLT to display the entire page.
RSS.style is my site. I'm currently testing a JavaScript-based workaround that should look just like the current XSLT version. It will not require the XSLT polyfill (which sort-of works, but seems fragile).
One bonus is that it will be easier to customize for people that know JavaScript but don't know XSLT (which is a lot of people, including me).
You'll still need to add a line to the feed source code.
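With the XSLT approach, that line is usually an xml-stylesheet processing instruction near the top of the feed, something like this (the href is a placeholder):

    <?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>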
> message at the top about the feed being meant for news readers
There's no real reason to take this position. A styled XML document is just another page.
For example, if you're using a static site generator where the front page of your /blog.html shows the most recent N posts, and the /blog/feed.xml shows the most recent N posts, then...?
Shout out to Vivaldi, which renders RSS feeds with a nice default "card per post" style. Not to mention that it also has a feed reader built in as well.
Isn't it ironic that browsers do like 10,000 things nowadays, but Vivaldi (successor to Opera) is the only one that does the handful of things users actually want?
I don't use it myself because my computer is too slow (I think they built it in node.js or something). But it makes me happy that someone is carrying the torch forward...
With the lack of styling, I'm sorry to say I didn't notice the RSS icon at first at all. Adding the typical orange background to the icon would fix that.
For a personal site, I'd probably just do that. (My friends are generally savvy and principled enough not to do most social media, so no need for me to endorse it by syndicating there.)
But for a commercial marketing site that must be on the awful social media, I'm wondering about quietly supporting RSS/Atom without compromising the experience for the masses.
Is there any reason today to use RSS over Atom? Atom sounds like it has all the advantages, except maybe compatibility with some old or stubborn clients?
Based on my own personal usage, it makes total sense that RSS feeds still get a surprising number of hits. I have a small collection of blogs that I follow and it's much easier to have them all loaded up in my RSS reader of choice than it is to regularly stop by each blog in my browser, especially for blogs that seldom post (and are easy to forget about).
Readers come with some nice bonus features, too. All of them have style normalization for example and native reader apps support offline reading.
If only there were purpose-built open standards and client apps for other types of web content…
This is what I use. It’s on macOS too and amazing on both. Super fast, focused, and efficient.
It’s by far the best I’ve tried. Most other macOS readers aren’t memory managing their webviews properly which leads to really bad memory leaks when they’re open for long periods.
iCloud sync is a nice feature too. I use the Mac app mostly for adding feeds and the iOS app for reading. Anytime I read an interesting web post, I pop its URL into the app to see if it has an RSS feed.
Same question, but for Android and desktop / laptop too. I've hardly used RSS before, in fact, and I don't know why, even though I first learned about it many years ago. But after reading this thread, I want to.
The question is, do you have this traffic because of RSS client crawlers that pre-loaded the content, or from real users? I'm not pro killing RSS by the way, but genuinely doubtful.
> The question is, do you have this traffic because of RSS client crawlers that pre-loaded the content, or from real users?
I have never seen RSS clients or crawlers preload actual HTML pages. I've only seen them fetching the XML feed and presenting its contents to the users.
When I talk about visitors arriving at my website from RSS feeds, I am not counting requests from feed aggregators or readers identified by their 'User-Agent' strings. Those are just software tools fetching the XML feed. I'm not talking about them. What I am referring to are visits to HTML pages on my website where the 'Referer' header indicates that the client came from an RSS aggregator service or feed reader.
It is entirely possible that many more people read my posts directly in their feed readers without ever visiting my site, and I will never be aware of them, as it should be. For the subset of readers who do click through from their feed reader and land on my website, those visits are recorded in my web server logs. My conclusions are based on that data.
> I have never seen RSS clients or crawlers preload actual HTML pages
Some setups like ttrss with the mercury plugin will do that to restore full articles to the feed, but it's either on-demand or manually enabled per feed. Personally I don't run it on many feeds other than a few more commercial platforms that heavily limit their feed's default contents.
Presumably some of the more app-based RSS readers have such a feature, but I wouldn't know for certain.
I do not deliberately measure traffic. And I certainly never put UTM parameters in URLs as a sibling comment mentioned, because I find them ugly. My personal website is a passion project and I care about its aesthetics, including the aesthetics of its URLs, so I would never add something like UTM parameters to them.
I only occasionally look at the HTTP 'Referer' header in my web server logs and filter them, out of curiosity. That is where I find that a large portion of my daily traffic comes via RSS feeds. For example, if the 'Referer' header indicates that the client landed on my website from, say, <https://www.inoreader.com/>, then that is a good indication that the client found my new post via the RSS feed shown in their feed aggregator account (Inoreader in this example).
Also, if the logs show that a client IP address with the 'User-Agent' header set to something like 'Emacs Elfeed 3.4.2' fetches my '/feed.xml' and then the same client IP address later visits a new post on my website, that is a good indication that the client found my new post in their local feed reader (Elfeed in this example).
Aside: Since I'm talking about POSIX here, it's worth mentioning that process substitution using '<(commands)' is not specified in POSIX, but it's supported in bash, zsh, ksh93, etc.
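For example, in bash or zsh (not POSIX sh):

    # Compare the sorted contents of two files without temporary files.
    diff <(sort a.txt) <(sort b.txt)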
+1 Whenever I create a favicon.png, I always run it through ImageOptim and I consistently get an optimised PNG that is about 30% to 40% smaller than the original.