Fun fact: both HN and (no doubt not coincidentally) paulgraham.com ship no DOCTYPE and are rendered in Quirks Mode. You can see this in devtools by evaluating `document.compatMode`.
I ran into this because I have a little userscript I inject everywhere that helps me copy text in hovered elements (not just links). It does:
[...document.querySelectorAll(":hover")].at(-1)
to grab the innermost hovered element. It works fine on standards-mode pages, but it's flaky on quirks-mode pages.
Question: is there any straightforward & clean way as a user to force a quirks-mode page to render in standards mode? I know you can do something like:
document.write("<!DOCTYPE html>" + document.documentElement.innerHTML);
but that blows away the entire document & introduces a ton of problems. Is there a cleaner trick?
I wish `dang` would take some time to go through the website and make some usability updates. HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.
At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:
https://github.com/wting/hackernews/blob/5a3296417d23d1ecc90...
No kidding. I set the zoom level so long ago that I never noticed, but if I reset it on HN the letters are only about 2mm wide on my standard HD, 21" display.
I trust dang a lot; but in general I am scared of websites making "usability updates."
Modern design trends are going backwards. Tons of spacing around everything, super low information density, designed for touch first (i.e. giant hit-targets), and tons of other things that were considered bad practice just ten years ago.
So HN has its quirks, but I'd take what it is over what most 20-something designers would turn it into. See old.reddit Vs. new.reddit or even their app.
Overall I would agree but I also agree with the above commenter. It’s ok for mobile but on a desktop view it’s very small when viewed at anything larger than 1080p. Zoom works but doesn’t stick. A simple change to the font size in css will make it legible for mobile, desktop, terminal, or space… font-size:2vw or something that scales.
Please don’t. HN has just the right information density with its small default font size. In most browsers it is adjustable. And you can pinch-zoom if you’re having trouble hitting the right link.
None of the "content needs white space and large fonts to breathe" stuff or having to click to see a reply like on other sites. That just complicates interactions.
And I am posting this on an iPhone SE while my sight has started to degrade from age.
Yeah, I'm really asking for tons of whitespace and everything to breathe sooooo much by asking for the default font size to be a browser default (16px) and updated to match most modern display resolutions in 2025, not 2006 when it was created.
HN is the only site I have to increase the zoom level, and others below are doing the same thing as me. But it must be us with the issues. Obviously PG knew best in 2006 for decades to come.
On the flipside, HN is the only site I don't have to zoom out of to keep it comfortable. Most sit at 90% with a rare few at 80%.
16px is just massive.
> HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.
On what devices (or browsers) does it render "insanely small" for you? CSS pixels are not physical pixels; they're nominally scaled to 1/96th of an inch on desktop computers, and for smartphones etc. the scaling takes into account the shorter typical distance between your eyes and the screen (to keep the angular size roughly the same), so one CSS pixel can span multiple physical pixels on a high-PPI display. A font size specified in px should look roughly the same on various devices. HN's font size feels the same to me on my 32" 4k display (137 PPI), my 24" display with 94 PPI, and on my smartphone (416 PPI).
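You can see the mapping on your own machine in the devtools console (the values in the comments are just examples; devicePixelRatio also reflects page zoom):

// How many device pixels one CSS pixel spans at the current display scaling/zoom
window.devicePixelRatio;        // e.g. 2 on a 200%-scaled laptop display
// So a 12px font is drawn with roughly this many device pixels of height:
12 * window.devicePixelRatio;   // e.g. 24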
On my MacBook it's not "insanely small", but I zoom to 120% for a much better experience. I can read it just fine at the default.
I'm sure they accept PRs, although it can be tricky to evaluate the effect a CSS change will have on a broad range of devices.
I find it exactly the right size on both PC and phone.
There's a trend to make fonts bigger but I never understood why. Do people really have trouble reading it?
I prefer seeing more information at the same time, when I used Discord (on PC), I even switched to IRC mode and made the font smaller so that more text would fit.
I'm assuming you have a rather low-resolution display? On a 27" 4k display scaled to 150%, the font is quite tiny, to the point where the textarea I'm currently typing this in (which uses the browser's default font size) renders at about 3 times the apparent size of the HN comment text itself.
Agreed. I'm on an Apple Thunderbolt Display (2560x1440) and I'm also scaled up to 150%.
I'm not asking for some major, crazy redesign. 16px is the browser default and most websites aren't using tiny, small font sizes like 12px any longer.
The only reason HN is using it is because `pg` set it that way in 2006, at a time when it was normal and made sense.
Yup, and these days we have relative units in CSS (em, rem) such that we no longer need to hardcode pixels, so everyone wins. That way people get usability according to the browser's defaults, which makes the whole thing user-configurable.
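A minimal sketch of that idea (not HN's actual stylesheet; the class name is made up), sizing from the user's default instead of hardcoded pixels:

/* Inherit the browser/user default (usually 16px) and scale from it */
html { font-size: 100%; }
body { font-size: 1rem; }
.comment { font-size: 0.875rem; } /* ~14px at default settings, but follows user preferences */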
1920x1080 and 24 inches
Maybe the issue is not scaling according to DPI?
OTOH, people with 30+ inch screens probably sit a bit further away to be able to see everything without moving their head so it makes sense that even sites which take DPI into account use larger fonts because it's not really about how large something is physically on the screen but about the angular size relative to the eye.
Yeah, one of the other cousin comments mentions 36 inches away. I don't think they realize just how far outliers they are. Of course you have to make everything huge when your screen is so much further away than normal.
I have HN zoomed to 150% on my screens that are between 32 and 36 inches from my eyeballs when sitting upright at my desk.
I don't really have to do the same elsewhere, so I think the 12px font might be just a bit too small for modern 4k devices.
I have mild vision issues and have to blow up the default font size quite a bit to read comfortably. Everyone has different eyes, and vision can change a lot with age.
Even better: it scales nicely with the browser’s zoom setting.
A uBlock filter can do it: `||news.ycombinator.com/*$replace=/<html/<!DOCTYPE html><html/`
Could also use Tampermonkey to do that, and to perform the same function as the OP's userscript.
There is a better option, but generally the answer is "no"; the best solution would be for WHATWG to define document.compatMode to be writable property instead of readonly.
The better option is to create and hold a reference to the old nodes (as easy as `var old = document.documentElement`) and then after blowing everything away with document.write (with an empty* html element; don't serialize the whole tree), re-insert them under the new document.documentElement.
* Note that your approach doesn't preserve the attributes on the html element; you can fix this by either pro-actively removing the child nodes before the document.write call and rely on document.documentElement.outerHTML to serialize the attributes just as in the original, or you can iterate through the old element's attributes and re-set them one-by-one.
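A rough, untested sketch of that approach for a userscript (it assumes the same Document object survives document.open(), which is how modern browsers behave, and uses the attribute-by-attribute variant):

// Keep references to the old root element and its attributes
var oldRoot = document.documentElement;
var oldAttrs = [...oldRoot.attributes];

// Blow the document away, this time with a doctype and an empty <html>
document.open();
document.write("<!DOCTYPE html><html></html>");
document.close();

// Re-apply the original <html> attributes one by one
var newRoot = document.documentElement;
for (var a of oldAttrs) newRoot.setAttribute(a.name, a.value);

// Re-insert the old children (head, body, ...) under the new root
newRoot.replaceChildren(...oldRoot.childNodes);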
I'm not a web developer, so could someone please enlighten me: Why does this site, and so many "modern" sites like it, have it so that the actual content of the site takes up only 20% of my screen?
My browser window is 2560x1487. 80% of the screen is blank. I have to zoom to 170% to read the content. With older blogs, I don't have this issue, it just works. Is it on purpose or is it bad CSS? Given the title of the post, I think this is somewhat relevant.
Often times that is to create a comfortable reading width. (https://ux.stackexchange.com/questions/108801/what-is-the-be...)
Probably to not have incredibly wide paragraphs. I will say though, I set my browser to always display HN at like 150% zoom or something like that. They definitely could make the default font size larger. On mobile it looks fine though.
I have HN on 170% zoom too. This is a bad design pattern. I shouldn't have to zoom in on every site. Either increasing the font or making sure the content is always at least 50% of the page would be great for me.
I'm not sure, but when I was working with UX years ago, they designed everything for a fixed width and centered it in the screen.
Kinda like how HackerNews is, it's centered and doesn't scale to my full width of the monitor.
I understand not using the full width, but unless you zoom in, it feels like I'm viewing tiny text on a smart phone in portrait mode.
You would think browsers themselves would handle the rest, if the website simply specified "center the content div with 60% width" or something like that.
I know this was a joke, but I feel there is a last tag missing: <main>. That will ensure screen readers skip all your page "chrome" and make life much easier for a lot of folks. As a bonus, mark any navigation elements inside main using <nav> (or role="navigation").

I'm not a blind person, but I was curious about it once when I tried to make a hyper-optimized website. It seemed like the best way to please screen readers was to have the navigation HTML come last, but style it so it visually comes first (top nav bar on phones, left nav menu on wider screens).
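For reference, the landmark markup being described looks roughly like this (a sketch; the labels and content are placeholders):

<header>… logo, search, account chrome …</header>
<main>
  <nav aria-label="In-page navigation">…</nav>
  … the content screen readers should land on …
</main>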
Props to you for taking the time to test with a screen reader, as opposed to simply reading about best practices. Not enough people do this. Each screen reader does things a bit differently, so testing real behavior is important. It's also worth noting that a lot of alternative input/output devices use the same screen reader protocols, so it's not only blind people you are helping, but anyone with a non-traditional setup.
Navigation should come early in document and tab order. Screen readers have shortcuts for quickly jumping around the page and skipping things. It's a normal part of the user experience. Some screen readers and settings de-prioritize navigation elements in favor of reading headings quickly, so if you don't hear the navigation right away, it's not necessarily a bug, and there's a shortcut to get to it. The most important thing to test is whether the screen reader says what you expect it to for dynamic and complex components, such as buttons and forms, e.g. does it communicate progress, errors, and success? It's usually pretty easy to implement, but this is where many apps mess up.
Just to say, that makes your site more usable in text browsers too, and easier to interact with the keyboard.
I remember HTML has a way to create global shortcuts inside a page, so you press a key combination and the cursor moves directly to a pre-defined place. If I remember right, it's recommended to add some of those pointing to the menu, the main content, and whatever other relevant areas you have.
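That's the accesskey global attribute; the modifier needed to trigger it varies by browser and OS (often Alt, Alt+Shift, or Ctrl+Option plus the key), so treat this as a sketch with made-up targets:

<a href="#menu" accesskey="m">Menu</a>
<a href="#content" accesskey="c">Main content</a>

<nav id="menu">…</nav>
<main id="content">…</main>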
Wouldn’t that run afoul of other rules like keeping visual order and tab order the same? Screen reader users are used to skip links & other standard navigation techniques.
You want a hidden "jump to content" link as the first element available to tab to.
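Something like this (a sketch; the class and id names are made up), paired with CSS that keeps the link off-screen until it receives keyboard focus:

<body>
  <a class="skip-link" href="#content">Jump to content</a>
  <nav>…</nav>
  <main id="content">…</main>
</body>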
TFA itself has an incorrect DOCTYPE. It’s missing the whitespace between "DOCTYPE" and "html". Also, all spaces between HTML attributes were removed, although the HTML spec says: "If an attribute using the double-quoted attribute syntax is to be followed by another attribute, then there must be ASCII whitespace separating the two." (https://html.spec.whatwg.org/multipage/syntax.html#attribute...) I guess the browser handles it anyway. This was probably done automatically by an HTML minifier. Actually, the minifier could have generated fewer bytes by using the unquoted attribute value syntax (`lang=en-us id=top` rather than `lang="en-us"id="top"`).
Edit: In the `minify-html` Rust crate you can specify "enable_possibly_noncompliant", which leads to such things. They are exploiting the fact that HTML parsers have to accept this per the (parsing) spec even though it's not valid HTML according to the (authoring) spec.
Anyone else prefer to use web components without bundling?
I probably should not admit this, but I have been using Lit Elements with raw JavaScript code, because I stopped using autocomplete a while ago.
I guess not using TypeScript at this point is basically the equivalent for many people these days of saying that I use punch cards.
37signals [0] famously uses their own Stimulus [1] framework on most of their products. Their CEO is a proponent of the whole no-build approach, both because of the additional complexity a build step adds and because minified bundles make it difficult for people to pop open your code and learn from it.
[0]: https://basecamp.com/ [1]: https://stimulus.hotwired.dev/
It's impossible to look at a Stimulus based site (or any similar SSR/hypermedia app) and learn anything useful beyond superficial web design from them because all of the meaningful work is being done on the other side of the network calls. Seeing a "data-action" or a "hx-swap" in the author's original text doesn't really help anyone learn anything without server code in hand. That basically means the point is moot because if it's an internal team member or open source wanting to learn from it, the original source vs. minified source would also be available.
It's also more complex to do JS builds in Ruby when Ruby isn't up to the task of doing builds performantly and the only good option is calling out to other binaries. That can also be viewed from the outside as "we painted ourselves into a corner, and now we will discuss the virtues of standing in corners". Compared to Bun, this feels like a dated perspective.
DHH has had a lot of opinions, he's not wrong on many things but he's also not universally right for all scenarios either and the world moved past him back in like 2010.
Dunno. You can build without minifying if you want it to be (mostly) readable. I wouldn’t want to give up static typing again in my career.
Even with TS, if I’m doing web components rather than a full framework, I prefer not bundling. That way I can have each page load the exact components it needs. And with HTTP/2 I’m happy to have each file separate. Just hash them and set an immutable cache header so that even when I make changes, users only have to pull the new version of things that actually changed.
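A sketch of that setup (the file names, hashes, and paths here are invented): the content hash lives in the file name, so the URL changes whenever the file does, and everything else stays cached.

<script type="module" src="/components/user-card.3f9a2c.js"></script>
<script type="module" src="/components/data-table.b81e77.js"></script>
<!-- each served with: Cache-Control: public, max-age=31536000, immutable -->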
God yes, as little tool chain as I can get away with.
Can't say I generally agree with dropping TS for JS but I suppose it's easier to argue when you are working on smaller projects. But here is someone that agrees with you with less qualification than that https://world.hey.com/dhh/turbo-8-is-dropping-typescript-701...
I was introduced to this decision from the Lex Fridman/DHH podcast. He talked a lot about typescript making meta programming very hard. I can see how that would be the case but I don't really understand what sort of meta programming you can do with JS. The general dynamic-ness of it I get.
Luckily Lit supports typescript so you wouldn't need to drop it.
> Anyone else prefer to use web components without bundling?
Yes! not only that but without ShadowDOM as well.
For clarity and conformity, while optional these days, I insist on placing meta information within <head>.
I often reach for the HTML5 boilerplate for things like this:
https://github.com/h5bp/html5-boilerplate/blob/main/dist/ind...
There is some irony in then-Facebook's proprietary metadata lines being in there (the "og:..." lines). Now with their name being "Meta", it looks even more proprietary than before.
Maybe the name was never about the Metaverse at all...
Are they proprietary? How? Isn't open graph a standard and widely implemented by many parties, including many open source softwares?
They're not, at all. It was invented by Facebook, but it's literally just a few lines of metadata that applications can choose to read if they want.
Being invented by $company does not preclude it from being a standard.
https://en.wikipedia.org/wiki/Technical_standard
> A technical standard may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc.
PDF is now an international standard (ISO 32000) but it was invented by Adobe. HTML was invented at CERN and is now controlled by the W3C (a private consortium). OpenGL was created by SGI and is maintained by the Khronos Group.
All had different "ownership" paths and yet I'd say all of them are standards.
Did you mean to type "does not" in that first sentence? Otherwise, the rest of your comment acts as evidence against it.
Yep, it was a typo. Thanks! Fixed.
how do you find this when you need it?
Note that <html> and <body> auto-close and don't need to be terminated.
Also, wrapping the <head> tags in an actual <head></head> is optional.
You also don't need the quotes as long as the attribute value doesn't have spaces or the like; <html lang=en> is OK.
(kind of pointless as the average website fetches a bazillion bytes of javascript for every page load nowadays, but sometimes slimming things down as much as possible can be fun and satisfying)
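Putting those omissions together, a complete and valid HTML5 document can be as short as something like this (a sketch with placeholder content):

<!doctype html>
<html lang=en>
<meta charset=utf-8>
<title>Tiny page</title>
<p>Hello.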
This kind of thing will always just feel shoddy to me. It is not much work to properly close a tag. The number of bytes saved is negligible compared to basically any other aspect of a website. Avoiding unneeded div spam alone would save more. Or, for example, making sure CSS is not bloated. And of course avoiding downloading 3MB of JS.
What this achieves is making the syntax more irregular and harder to parse. I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient. It would greatly simplify browser code and HTML spec.
> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.
Who would want to use a browser which would prevent many currently valid pages from being shown?
Implicit elements and end tags have been a part of HTML since the very beginning. They introduce zero ambiguity to the language, they’re very widely used, and any parser incapable of handling them violates the spec and would be incapable of handling piles of real‐world strict, standards‐compliant HTML.
> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.
They (W3C) tried that with XHTML. It was soundly rejected by webpage authors and by browser vendors. Nobody wants the Yellow Screen of Death. https://en.wikipedia.org/wiki/File:Yellow_screen_of_death.pn...
> They introduce zero ambiguity to the language
Well, for machines parsing it, yes, but for humans writing and reading it they are helpful. For example, if you have

<p> foo
<p> bar

and change it to

<div> foo
<div> bar

suddenly you've got a syntax error (or some quirks-mode rendering with nested divs). The "redundancy" of closing the tags acts basically like a checksum protecting against the "background radiation" of human editing.

And if you're writing raw HTML without an editor that can autocomplete the closing tags then you're doing it wrong anyway. Yes, that used to be common before, and yes, it's a useful backwards compatibility / newbie-friendly feature of the language, but that doesn't mean you should use it if you know what you're doing.
It sounds like you're headed towards XHTML. The rise and fall of XHTML is well documented and you can binge the whole thing if you're so inclined.
But my summary is that the reason it doesn't work is that strict document specs are too strict for humans. And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
The merits and drawbacks of XHTML has already been discussed elsewhere in the thread and I am well aware of it.
> And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
Yes, my point is that there is no reason to still write "invalid" code just because it's supported for backwards compatibility reasons.
I didn't have a problem with XHTML back in the day; it took a while to unlearn it; I would instinctively close those tags: <br/>, etc.
It was actually the XHTML 2.0 specification [1], which discarded backwards compatibility with HTML 4, that was the straw that broke the camel's back. No more forms as we knew them, for example; we were supposed to use XForms.
That's when WHATWG was formed and broke with the W3C and created HTML5.
Thank goodness.
[1]: https://en.wikipedia.org/wiki/XHTML#XHTML_2.0
> It would greatly simplify browser code and HTML spec.
I doubt it would make a dent - e.g. in the "skipping <head>" case, you'd be replacing the error recovery mechanism of "jump to the next insertion mode" with "display an error", but a) you'd still need the code path to handle it, b) now you're in the business of producing good error messages which is notoriously difficult.
Something that would actually make the parser a lot simpler is removing document.write, which has been obsolete ever since the introduction of the DOM and whose main remaining real world use-case seems to be ad delivery. (If it's not clear why this would help, consider that document.write can write scripts that call document.write, etc.)
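A contrived sketch of why that's painful for the parser: it has to pause tokenizing, run the script, and feed whatever the script writes back into itself, which can recurse.

<script>
  // The parser stops here, executes this, then must parse the markup it produces,
  // including the nested script, which itself writes more markup.
  document.write('<script>document.write("<p>written by a written script");<\/script>');
</script>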
You're not alone, this is called XHTML and it was tried but not enough people wanted to use it
Yeah, I remember, when I was at school and first learning HTML and this kind of stuff. When I stumbled upon XHTML, I right away adapted my approach to verify my page as valid XHTML. Guess I was always on this side of things. Maybe machine empathy? Or also human empathy, because someone needs to write those parsers and the logic to process this stuff.
oh man, I wish XHTML had won the war. But so many people (and CMSes) were creating dodgy markup that simply rendered yellow screens of doom, that no-one wanted it :(
i'm glad it never caught on. the case sensitivity (especially for css), having to remember the xmlns namespace URI in the root element, CDATA sections for inline scripts, and insane ideas from companies about extending it further with more xml namespaced elements... it was madness.
It had too much unnecessary metadata yes, but case insensitivity is always the wrong way to do stuff in programming (e.g. case insensitive file system paths). The only reason you'd want it is for real-world stuff like person names and addresses etc. There's no reason you'd mix the case of your CSS classes anyway, and if you want that, why not also automatically match camelCase with snake_case with kebab-case?
I'll copy what I wrote a few days ago:
The fact XHTML didn't gain traction is a mistake we've been paying off for decades.
Browser engines could've been simpler; web development tools could've been more robust and powerful much earlier; we would be able to rely on XSLT and invent other ways of processing and consuming web content; we would have proper XHTML modules, instead of the half-baked Web Components we have today. Etc.
Instead, we got standards built on poorly specified conventions, and we still have to rely on 3rd-party frameworks to build anything beyond a toy web site.
Stricter web documents wouldn't have fixed all our problems, but they would have certainly made a big impact for the better.
And add:
Yes, there were some initial usability quirks, but those could've been ironed out over time. Trading the potential of a strict markup standard for what we have today was a colossal mistake.
There's no way it could have gained traction. Consider two browsers. One follows the spec strictly, and one goes into "best-effort" mode on encountering invalid markup. End users aren't going to care about the philosophical reasoning for why Browser A doesn't show them their school dance recital schedule.
Consider JSON and CSV. Both have formal specs. But in the wild, most parsers are more lenient than the spec.
I agree for sure, but that's a problem with the spec, not the website. If there are multiple ways of doing something you might as well do the minimal one. The parser will always have to be able to handle all the edge cases no matter what anyway.
You might want to always consistently terminate all tags and such for aesthetic or human-centered (reduced cognitive load, easier scanning) reasons though; I'd accept that.
<html>, <head> and <body> start and end tags are all optional. In practice, you shouldn’t omit the <html> start tag because of the lang attribute, but the others never need any attributes. (If you’re putting attributes or classes on the body element, consider whether the html element is more appropriate.) It’s a long time since I wrote <head>, </head>, <body>, </body> or </html>.
I'm not sure I'd call keeping the <body> tag open satisfying but it is a fun fact.
Didn't know you can omit <head> .. </head>, but I prefer to keep them for clarity.
Do you also spell out the implicit <tbody> in all your tables for clarity?
I do.
`<thead>` and `<tfoot>`, too, if they're needed. I try to use all the free stuff that HTML gives you without needing to reach for JS. It's a surprising amount. Coupled with CSS and you can get pretty far without needing anything. Even just having `<template>` with minimal JS enables a ton of 'interactivity'.
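As a small example of the `<template>` pattern mentioned above (a sketch; the ids and fields are made up):

<table>
  <thead><tr><th>Name</th><th>Score</th></tr></thead>
  <tbody id="rows"></tbody>
</table>

<template id="row-tmpl">
  <tr><td class="name"></td><td class="score"></td></tr>
</template>

<script>
  // Clone the inert template content, fill it in, and append it; no framework needed.
  const row = document.getElementById("row-tmpl").content.cloneNode(true);
  row.querySelector(".name").textContent = "Ada";
  row.querySelector(".score").textContent = "42";
  document.getElementById("rows").appendChild(row);
</script>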
Yes. Explicit is almost always better than implicit, in my experience.
> `<html lang="en">`
The author might consider instead:
`<html lang="en-US">`
Interesting.
From what I can tell this allows some screen readers to select specific accents. Also the browser can select the appropriate spell checker (US English vs British English).
I still don’t understand what people think they’re accomplishing with the lang attribute. It’s trivial to determine the language, and in the cases where it isn’t, it’s not trivial for the reader, either.
Doesn't it state this in the article?
> Browsers, search engines, assistive technologies, etc. can leverage it to:
> - Get pronunciation and voice right for screen readers
> - Improve indexing and translation accuracy
> - Apply locale-specific tools (e.g. spell-checking)
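The attribute also cascades, so you can override it on embedded foreign-language content, which is what lets a screen reader switch voices mid-page (a small sketch):

<html lang="en-US">
<p>The French phrase <span lang="fr">joie de vivre</span> should get French pronunciation from a screen reader.</p>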
Quirks-mode quirks aside, there are other ways to tame old markup...
If a site won't update itself you can... use a user stylesheet or extension to fix things like font sizes and colors without waiting for the maintainer...
BUT for scripts that rely on CSS behaviors there is a simple check... test document.compatMode and bail when it's not what you expect... sometimes adding a wrapper element and extracting the contents with a Range keeps the page intact...
ALSO adding semantic elements and ARIA roles goes a long way for accessibility... it costs little and helps screen readers navigate...
Would love to see more community hacks that improve usability without rewriting the whole thing...
> <html lange="en">
s/lange/lang/
> <meta name="viewport" content="width=device-width,initial-scale=1.0">
Don’t need the “.0”. In fact, the atrocious incomplete spec of this stuff <https://www.w3.org/TR/css-viewport-1/> specifies using strtod to parse the number, which is locale dependent, so in theory on a locale that uses a different decimal separator (e.g. French), the “.0” will be ignored.
I have yet to test whether <meta name="viewport" content="width=device-width,initial-scale=1.5"> misbehaves (parsing as 1 instead of 1½) with LC_NUMERIC=fr_FR.UTF-8 on any user agents.
Wow. This reminds me of Google Sheets formulas, where function parameters are separated with , or ; depending on locale.
Not to mention the functions are also translated into the other language. I think both of these are the fault of Excel, to be honest. I had this problem long before Google came around.
And it's really irritating when you have the computer read something out to you that contains numbers. 53.1 km reads like you expect but 53,1 km becomes "fifty-three (long pause) one kilometer".
> Not to mention the functions are also translated to the other language.
This makes a lot of sense when you recognize that Excel formulas, unlike proper programming languages, aren't necessarily written by people with a sufficient grasp of the English language, especially when it comes to more abstract mathematical concepts, which aren't taught in secondary English language classes at school, but in their native-language mathematics classes.
Not sure if this still is the case, but Excel used to fail to open CSV files correctly if the locale used another list separator than ',' – for example ';'.
I’m happy to report it still fails and causes me great pain.
Really? LibreOffice at least has a File > Open dialog that allows you to specify the separator and other CSV settings, like the quote character.
You have to be inside Excel and use the data import tools. You cannot double-click to open; it puts everything in one cell…
Sometimes you double click and it opens everything just fine and silently corrupts and changes and drops data without warning or notification and gives you no way to prevent it.
The day I found that Intellij has a built in CSV tabular editor and viewer was the best day.
Excel has that too. But you can't just double-click a CSV file to open it.
Given that the world is about evenly split on the decimal separator [0] (and correspondingly on the thousands grouping separator), it’s hard to avoid. You could standardize on “;” as the argument separator, but “1,000” would still remain ambiguous.
[0] https://en.wikipedia.org/wiki/Decimal_separator#Conventions_...
Try Apple Numbers, where even function names are translated and you can’t copy & paste without an error if your locale is, say, German.
Aha, in Microsoft Excel they even translate the shortcuts. In the Brazilian version Ctrl-S is "Underline" instead of "Save". Every sheet of mine ends up with a lot of underlined cells :-)
Same as Excel and LibreOffice surely?
Yes
Oh, good to know that it depends on locale, I always wondered about that behavior!
The behaviour predates Google Sheets and likely comes from Excel (whose behavior Sheets emulate/reverse engineer in many places). And I wouldn't be surprised if Excel got it from Lotus.
I wish I could use this one day again to make my HTML work as expected.
<bgsound src="test.mid" loop=3>
> <!doctype html> is what you want for consistent rendering. Or <!DOCTYPE HTML> if you prefer writing markup like it’s 1998. Or even <!doCTypE HTml> if you eschew all societal norms. It’s case-insensitive so they’ll all work.
And <!DOCTYPE html> if you want polyglot (X)HTML.
We need HTML Sophisticated - <!Dr. Type, HtML, PhD>
I hate how because of iPhone and subsequent mobile phones we have bad defaults for webpages so we're stuck with that viewport meta forever.
If only we had UTF-8 as a default encoding in HTML5 specs too.
I came here to say the same regarding UTF-8. What a huge miss and long overdue.
I’ve had my default encoding set to UTF-8 for probably 20 years at this point, so I often miss some encoding bugs, but then hit others.
outerHTML is an attribute of Element and DocumentFragment is not an Element.
Where do the standards say it ought to work?
Nice, the basics again, very good to see. But then:
I know what you’re thinking, I forgot the most important snippet of them all for writing HTML:
<div id="root"></div> <script src="bundle.js"></script>
Lol.
-> Ok, thanx, now I do feel like I'm missing an inside joke.
It's a typical pattern in, say, React, to have just this scaffolding in the HTML and let some front end framework build the UI.
The "without meta utf-8" part of course depends on your browser's default encoding.
What mainstream browsers aren't defaulting to utf-8 in 2025?
I spent about half an hour trying to figure out why some JSON in my browser was rendering è incorrectly, despite the output code and downloaded files being seemingly perfect. I came to the conclusion that the browsers (Safari and Chrome) don't use UTF-8 as the default renderer for everything and moved on.
This should be fixed, though.
I wouldn’t be surprised if they don’t for pages loaded from local file URIs.
html5 does not even allow any other values in <meta charset=>. I think you need to use a different doctype to get what the screenshot shows.
While true, they also require user agents to support other encodings specified that way: https://html.spec.whatwg.org/multipage/parsing.html#characte...
Another funny thing here is that they say “but not limited to” (the listed encodings), but then say “must not support other encodings” (than the listed ones).
It says
> the encodings defined in Encoding, including, but not limited to
where "Encoding" refers to https://encoding.spec.whatwg.org (probably that should be a link.) So it just means "the other spec defines at least these, but maybe others too." (e.g. EUC-JP is included in Encoding but not listed in HTML.)
All of them, pretty much.
It's 2025, the end of it. Is this really necessary to share?
Yes. Knowledge is not equally distributed.
When sharing this post on his social media accounts, Jim prefixed the link with: 'Sometimes its cathartic to just blog about really basic, (probably?) obvious stuff'
Feels even more important to share honestly. It's unexamined boilerplate at this point.
Every day you can expect 10000 people learning a thing you thought everyone knew: https://xkcd.com/1053/
To quote the alt text: "Saying 'what kind of an idiot doesn't know about the Yellowstone supervolcano' is so much more boring than telling someone about the Yellowstone supervolcano for the first time."
And here I was, thinking everybody already knew XKCD 1053 ...
XKCD 1053 is a way of life. I think about it all the time, and it has made me a better human being.