10 users for mobile notifications is a non-starter for me. I’d rather host XMPP then, I guess. Or a Matrix server, which seems to allow mobile notifications.
Have you worked with both Matrix and Zulip? Looking at both for a small team and wondering which way to go. Matrix seems more complex to set up and less tailored to being a Slack alternative. What has been your experience?
Unfortunately, not yet. I tried all of them locally earlier this year, but very briefly, maybe 15 minutes each. We use Matrix as our family messenger, which is not self-hosted. It’s mainly for occasional, brief text communication, so I’m not too motivated to go self-hosted, though I have had plans to.
The biggest problem with mdBook is the lack of good PDF and ePub export[1]. This is why Quarto[2] (based on Pandoc) is a better choice if you need both online and offline rendered documentation/text. And Typst (or LaTeX for more conservative folks) for offline documentation/articles/books.
I get where you're coming from, but I'm not sure PDF generation was a goal for mdBook to begin with.
I'm pretty sure they're working on it now, but I'd stick to a domain-specific tool; personally I find Typst excellent for generating documents for print.
For simple HTML docs mdbook is also excellent. I don't know if you could combine these two domains into one tool nicely. To me they're just too different.
Although I have made presentations with mdBook using custom-written preprocessors and a custom renderer (all of which were extremely crude, but did the job).
The concept is widely covered in the amazing book Silence on the Wire[1] by Michal Zalewski. I wish he or someone else would write a modern equivalent (or at least a new, updated edition) of the book.
I believe he has found new solace: woodwork[1][2]. Given the state of security, and of things in general in IT, I definitely understand. Especially with the influx of AI slop. His blog is quite interesting to read, though; highly recommend.
The list is incomplete without some other high-quality channels already mentioned in other threads, but also Anton Petrov's[1] channel, Sixty Symbols[2], ScienceClic[3], and Artem Kirsanov[4]. Animalogic[5] is certainly worth mentioning. And the absolutely stunning Journey to Microcosmos[6].
I want to hijack this comment to plug AlphaPhoenix (https://youtube.com/@alphaphoenixchannel), who is responsible for by far the best explanations of electricity that I've encountered in any format. For instance, this one that clarifies what happens when you try to send electricity on an open circuit, and (IIRC this is the right one) what impedance mismatch is actually about: https://youtu.be/2AXv49dDQJw
ScienceClic has extremely good animations. And it's run by some guy in his early 20s. He knows the physics, he can create these amazing animations and he comes up with genuinely novel metaphors and perspectives.
That would be easier if GPU and display manufacturers weren't eschewing newer DisplayPort versions in favor of older versions with DSC (which is not lossless, despite claims of being "visually lossless"), while building in newer HDMI versions with greater performance.
To be fair, the DisplayPort 2.0/2.1 standardisation process was riddled with delays and they ended up landing years after HDMI 2.1 did. It stands to reason that hardware manufacturers picked up the earlier spec first.
What resolution is it that you can drive with "newer HDMI versions" but cannot drive with DisplayPort 1.4 without DSC? The bandwidth difference is not really that much in practice, and "newer HDMI versions" also rely on DSC, or worse, chroma subsampling (objectively and subjectively worse).
I mean, one has been able to drive 5K, 4K@120Hz, etc. for almost a decade with DP 1.4; for the same resolutions you need literally the latest version of HDMI (the non-TMDS one). It's no wonder that displays _have_ to use the latest version of HDMI, because otherwise they cannot be driven from a single HDMI port at all.
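A rough back-of-the-envelope comparison of the link budgets involved (a sketch only: it counts active pixels and ignores blanking intervals, so real requirements run somewhat higher; the lane rates are the nominal HBR3/TMDS/FRL figures):

```python
# Back-of-envelope video link-budget check (active pixels only; real
# signals also need blanking intervals, so actual needs are higher).

def gbps(width, height, hz, bits_per_pixel):
    """Uncompressed video data rate in Gbit/s (active pixels only)."""
    return width * height * hz * bits_per_pixel / 1e9

# Effective payload rates after line coding:
DP_1_4   = 4 * 8.1 * 8 / 10    # HBR3, 8b/10b  -> 25.92 Gbit/s
HDMI_2_0 = 3 * 6.0 * 8 / 10    # TMDS, 8b/10b  -> 14.4  Gbit/s
HDMI_2_1 = 4 * 12.0 * 16 / 18  # FRL,  16b/18b -> ~42.7 Gbit/s

print(f"5K@60   8-bit RGB: {gbps(5120, 2880, 60, 24):.1f} Gbit/s")   # ~21.2, fits DP 1.4
print(f"4K@120  8-bit RGB: {gbps(3840, 2160, 120, 24):.1f} Gbit/s")  # ~23.9, fits DP 1.4
print(f"4K@240 10-bit RGB: {gbps(3840, 2160, 240, 30):.1f} Gbit/s")  # ~59.7, exceeds even HDMI 2.1
```

Which is roughly the point above: the classic 5K/4K@120 targets fit DP 1.4 without DSC, while the very newest combinations exceed even HDMI 2.1 FRL uncompressed.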
Monitors that supported their native resolution through DP but not HDMI used to be a thing until very recently.
There are a lot of PC boards where the iGPU only has an HDMI 2.1 output, or only DP 1.4. But DP 1.4 doesn't support some of the resolution/refresh combinations that HDMI 2.1 does. Normally this doesn't matter, but it can if you have, for example, the Samsung 57-inch dual-4K ultrawide.
The iGPU on my 9950X is perfectly capable of driving my Dell U4025QW 5k2k ultrawide. Yeah it would suck for any modern 3D games, but for productivity or light gaming it's fine.
It requires that I use the DisplayPort output on Linux, because I can't use HDMI 2.1. And since the motherboard has only one DisplayPort and one HDMI port, this limits my second screen.
It works fine with Intel and AMD iGPUs. They won't run many games at the native resolution, though. That doesn't really matter to me, as the iGPUs are in work laptops, so 60 Hz or better passes for "adequate".
Even a Raspberry Pi 4 or newer has dual 4K outputs that can fill the entire screen at native resolution. Macs have been the worst to use with it so far.
"Just don't support the majority of consumer displays" isn't really an acceptable solution for an organization attempting to be a player in the home entertainment industry.
The problem only affects a subset of HDMI 2.1 features, not HDMI 2.0.
But the Steam Machine isn't really super powerful (fast enough for a lot of games, and faster than what a lot of Steam customers have, sure, but still not that fast). So most of the HDMI 2.1 features it can't use aren't that relevant. Sure, you don't get >60fps@4K, but you already need a good amount of FSR to get to 60fps@4K.
Just because the Steam Machine isn't powerful enough to support high framerates in modern AAA games doesn't mean it can't do so with older or less graphically-intensive games.
VRR and HDR are presumably the biggest issues, because HDMI 2.0 should already have enough bandwidth to support 8-bit 2160p120 with 4:2:0 chroma subsampling, which should work fine for most SDR games, and 144 Hz vs 120 Hz is, in my experience at least, not noticeably different enough to be worth fussing over.
Some people will want to use their Steam Machine as a general-purpose desktop, of course, where RGB or 4:2:2 is nonnegotiable. Though in this case 120 Hz — or 120,000/1001 Hz, thanks NTSC — is, again in my experience, superior to 144 Hz as it avoids frame pacing issues with 30/60 Hz video.
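A quick back-of-the-envelope check of that bandwidth claim (a sketch only: it counts active pixels and ignores blanking, so treat it as an approximation rather than a spec citation):

```python
# HDMI 2.0: 3 TMDS lanes x 6 Gbit/s, 8b/10b coding -> 14.4 Gbit/s payload
hdmi_2_0_payload = 3 * 6.0 * 8 / 10

# 2160p120 with 8-bit 4:2:0 subsampling averages 12 bits per pixel
# (8 bits luma per pixel, plus 8+8 bits chroma shared across 4 pixels)
needed = 3840 * 2160 * 120 * 12 / 1e9  # active pixels only

print(f"needed ~{needed:.1f} Gbit/s vs {hdmi_2_0_payload:.1f} Gbit/s available")
```

So roughly 11.9 Gbit/s against 14.4 Gbit/s of payload, leaving headroom for blanking, which is consistent with the claim that 8-bit 2160p120 4:2:0 fits in HDMI 2.0.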
> "Just don't support the majority of consumer displays" isn't really an acceptable solution for an organization attempting to be a player in the home entertainment industry.
I would recommend that Valve create an official list of consumer displays ("certified by Valve") that have proper support for the most recent version of DisplayPort, with all features relevant to gaming.
That way gamers know which display to buy next, and display vendors get free advertising for their efforts, circulated to an audience that is very willing to buy a display in the near future.
Aren't DP-HDMI adapters good enough for the majority of consumers? On my ancient (2017) PC with integrated graphics I can't tell a difference between the DP out vs the HDMI out.
The article mentions that the Club3D adapters (the popular ones) don't exist anymore, only off-brand alternatives. VRR is not officially supported via adapters, which is a big problem for a gaming device.
I have one, and it does not. Well, it may. It depends on the firmware you install on the cable; depending on the firmware, different things will be broken. I tried them all. There's no version that will consistently support 2160p@120 and 4:4:4/RGB and HDR and VRR without random handshake issues.
I frequently see comments that say the TV companies are the ones getting the royalties, so I looked it up.
According to Gemini, the royalties go to the _original_ HDMI founders. That includes Sony, Panasonic, Philips, and Toshiba. It does not include Samsung or LG.
There's no financial incentive. No mass-market consumer device besides PCs uses DisplayPort; heck, even PCs generally have an HDMI port. So the percentage of TV buyers who actually need DisplayPort (basically Linux users) would be a very, very small minority.
I'd assume that if they aren't part of the HDMI cartel, as the above post suggests, they are paying patent fees for this garbage.
And they are in a good position to unblock this situation by increasing adoption of patent-free alternatives, so I don't see why they wouldn't have an incentive to avoid paying.
So I'd rather see them as somehow complicit, instead of as having no incentive in this case.
They have to pay the fees regardless, since no TV would sell if it didn't have an HDMI port. So unless the TV manufacturers can also convince set-top box makers, game console manufacturers, Blu-Ray makers etc to include DisplayPort, they'll need to continue including an HDMI port.
So this needs to be an industry-wide switch, not just TV makers.
For now, but that doesn't stop them from doing their part to nudge things toward HDMI becoming obsolete. It's not an instant thing, but each step in that direction helps, and they can take a pretty significant one.
So the argument of no incentives just doesn't make sense; it's a gradual process to get there. Unless their bean counters only understand super-short-term incentives. In that case, they should be blamed too for why things aren't improving in this regard.
The incentive seems very thin. Pay extra now to push DP adoption and hope that in ~10-15 years you can drop the HDMI port? Meanwhile you still pay the cartel, and it invests your money directly against your interests. And it all hinges on predicting consumer adoption, which is nearly impossible. I honestly don't see how they could justify making a step in that direction, let alone a significant one.
That's a catch-22 / circular argument that can always be used to excuse inaction, but it's not a real argument. Yes, it's a long-term problem to solve with many moving parts. But if they don't solve their part, they only slow it down even more. Any contribution moves things forward, and the lack of one delays things.
I.e. "we feed the cartel, let's not do anything about it, since doing anything will only potentially help later, so we still need to feed the cartel in the interim" doesn't really hold up. Feeding the cartel and doing nothing is worse than feeding the cartel and doing what you can to stop it over time.
And their piece of this is pretty big (a huge portion of the TV market); that's why they in particular, more than others, should be asked why they aren't doing their part.
It's not so much that it's a catch-22; it's that there's no financial incentive for them. TVs are a low-margin item already, and Samsung/LG get their margin by being brand names and advertising fancy features.
I doubt they would meaningfully save money over investing in DP, and the opportunity cost is greater for them to spend that money on the next "Frame" TV or whatever.
LG, Samsung and Sony are the only actual panel manufacturers and they probably bake those license fees into the panels they sell back to HDMI Forum.
Maybe, but by not solving the problem they become part of the problem, even if they aren't part of the HDMI cartel directly. So it's partly their fault that problems like the above happen.
For DP adoption it's too late. They should push for USB4 / Thunderbolt 4 instead. We are in the phase where about every new laptop has USB4. Connecting your laptop/phone to a TV might be a selling point. I'd love that for hotel TVs.
That doesn't explain why they wouldn't want to get rid of HDMI to avoid paying patent fees for it. Adding USB 4 / DP to their TVs is a major step in that direction.
If you think this is proof of it being true, then I am both worried and astonished. How about looking the information up yourself instead of relying on LLMs? This is HN, I thought?!
Please don't post random LLM slop on HN, there's more than enough of it on the internet as is. The value of HN is the human discussion. Everyone here is capable of using an LLM if they so desire.
They should make pedestrian-only streets in the densest parts of Manhattan and use that money to improve public transportation. Even just a few car-free blocks would make a huge difference for the livability of the city center.
There's a large, long-running movement to do this in lower Manhattan, the most transit-connected area in the US (probably in North America, and definitely up there in the world). It's picking up again.
We use tree-sitter[1] for parsing C declarations in Rizin[2] (see the "td" command, for example). See our custom grammar[3] (a modified mainstream tree-sitter-c). The custom grammar was sadly necessary due to tree-sitter's inability to have alternate roots[4].
Code correctness should be checked automatically with CI and a test suite. New tests should be added. This is exactly what keeps these stupid errors from bothering the reviewer. The same goes for code formatting and documentation.
This discussion makes me think peer reviews need more automated tooling somewhat analogous to what software engineers have long relied on. For example, a tool could use an LLM to check that the citation actually substantiates the claim the paper says it does, or else flags the claim for review.
I'd go one further and say all published papers should come with a clear list of "claimed truths", and one should only be able to cite said paper if they are linking to an explicit truth.
Then you can build a true hierarchy of citation dependencies, checked 'statically', and have better indications of impact if a fundamental truth is disproven, ...
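The "checked statically" part could look something like this (a toy sketch only; the claim names and citation graph are invented for illustration):

```python
from collections import deque

# Each paper exports named claims, and citations point at a specific
# claim rather than the paper as a whole. If a claim is disproven, we
# can find everything downstream that rests on it.

# cites[claim] = claims whose truth it depends on (all names invented)
cites = {
    "A.thm1": [],
    "B.lemma2": ["A.thm1"],
    "C.result": ["B.lemma2"],
    "D.result": ["A.thm1"],
    "E.result": ["C.result", "D.result"],
}

def affected_by(disproven, cites):
    """All claims transitively resting on a disproven claim (BFS)."""
    dependents = {}
    for claim, deps in cites.items():
        for d in deps:
            dependents.setdefault(d, []).append(claim)
    hit, queue = set(), deque([disproven])
    while queue:
        c = queue.popleft()
        for dep in dependents.get(c, []):
            if dep not in hit:
                hit.add(dep)
                queue.append(dep)
    return hit

print(sorted(affected_by("A.thm1", cites)))
# everything else in this toy graph ultimately rests on A.thm1
```

Disproving a foundational claim then gives you exactly the "indication of impact" mentioned above: the transitive set of results that need re-examination.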
Could you provide a proof of concept paper for that sort of thing? Not a toy example, an actual example, derived from messy real-world data, in a non-trivial[1] field?
---
[1] Any field is non-trivial when you get deep enough into it.
I'd say my expectation is that papers should be minimal in their effect, and compounding. If your project proves new facts, either those facts should be clearly enumerable (with as much specificity as possible), or your project/presentation/paper should be broken up to the point where your findings ARE enumerable.
Hey, I'm part of the GPTZero team that built the automated tooling used to get the results in that article!
Totally agree with your thinking here. We can't just hand this to an LLM, because of the need for industry-specific standards for what counts as a hallucination/match, and for how to do the search.
One could submit their bibtex files and expect bibtex citations to be verifiable using a low level checker.
Worst case, if your bibtex citation was a variant of one in the checker database, you'd be asked to correct it to match the canonical version.
However, as others here have stated, hallucinated "citations" are actually the lesser problem. Citing irrelevant papers based on a fly-by reference is a much harder problem; it was present even before LLMs, but it has now become far worse.
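The low-level checker part, at least, is cheap to sketch (the canonical database, the entries, and the title-only matching rule here are all invented for illustration; a real checker would match on DOIs and more fields):

```python
import re

# Toy canonical database: normalised title -> canonical BibTeX entry.
CANONICAL = {
    "attention is all you need":
        '@article{vaswani2017attention, title={Attention Is All You Need}, ...}',
}

def title_of(bibtex_entry):
    """Pull the title field out of a BibTeX entry, if present."""
    m = re.search(r'title\s*=\s*[{"](.+?)[}"]', bibtex_entry, re.IGNORECASE)
    return m.group(1) if m else None

def normalise(title):
    # Collapse whitespace and ignore case for lookup.
    return re.sub(r"\s+", " ", title).strip().lower()

def check(bibtex_entry):
    """Return 'ok', 'variant' (known paper, non-canonical entry), or 'unknown'."""
    title = title_of(bibtex_entry)
    if title is None or normalise(title) not in CANONICAL:
        return "unknown"
    return "ok" if bibtex_entry == CANONICAL[normalise(title)] else "variant"

submitted = '@article{v17, title={Attention is  all you need}, ...}'
print(check(submitted))  # "variant": paper is found, but the entry differs
```

An "unknown" result would flag a possibly hallucinated citation; a "variant" result would trigger the "please correct to the canonical version" request described above.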
Yes, I think verifying mere existence of the cited paper barely moves the needle. I mean, I guess automated verification of that is a cheap rejection criterion, but I don’t think it’s overall very useful.
This is still in beta because it's a much harder problem, for sure, since it's hard to determine whether a 40-page paper supports a claim (if the paper claims X is computationally intractable, does that mean algorithms to compute approximate X are slow?).