Noise is, because of its random nature, inherently less compressible than a predictable signal.
So counterintuitively, noise reduction improves compression ratios. In fact many video codecs are about determining which portion of the video IS noise that can be discarded, and which bits are visually important...
That doesn't make it just a compression algorithm, to me at least.
Or to put it another way, to me it would be similarly disingenuous to describe e.g. dead code elimination or vector path simplification as "just a compression algorithm" because the resultant output is smaller than it would be without. I think part of what has my hackles raised is that it claims to improve video clarity, not to optimise for size. IMO compression algorithms do not and should not make such claims; if an algorithm has the aim (even if secondary) to affect subjective quality, then it has a transformative aspect that requires both disclosure and consent IMO.
> That doesn't make it just a compression algorithm, to me at least
It's in the loop of the compression and decompression algorithm.
Video compression has used tricks like this for years. For example, reducing noise before encoding and then adding it back in after the decode cycle. Visual noise doesn't need to be precise, so removing it before compression and approximating it on the other end saves a lot of bits.
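A minimal sketch of the idea (illustrative Java only; every helper name here is hypothetical, not any real codec's API):

    // Illustrative only: denoise before encoding, carry a tiny statistical grain model
    // in the bitstream, and re-synthesize similar-looking noise after decoding.
    final class GrainAwarePipeline {
        record Frame() {}
        record GrainParams() {}
        record Decoded(Frame frame, GrainParams grain) {}

        byte[] compress(Frame input) {
            Frame clean = denoise(input);                    // hard-to-compress noise is dropped
            GrainParams grain = estimateGrain(input, clean); // only its statistics are kept
            return encode(clean, grain);                     // a few bytes of model vs. megabytes of noise
        }

        Frame decompress(byte[] bitstream) {
            Decoded d = decode(bitstream);
            return synthesizeGrain(d.frame(), d.grain());    // approximate noise is added back
        }

        // Hypothetical stubs standing in for a real encoder/decoder.
        private Frame denoise(Frame f) { return f; }
        private GrainParams estimateGrain(Frame noisy, Frame clean) { return new GrainParams(); }
        private byte[] encode(Frame clean, GrainParams g) { return new byte[0]; }
        private Decoded decode(byte[] b) { return new Decoded(new Frame(), new GrainParams()); }
        private Frame synthesizeGrain(Frame f, GrainParams g) { return f; }
    }

AV1's film grain synthesis works roughly along these lines: the encoder ships grain parameters instead of the grain itself.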
Perhaps it would raise your hackles less if you read the Youtube comment as "improve video clarity at a particular file size", rather than how you presumably read it as "improve video clarity [with no regard for how big the resulting file is]".
I think the first comment is why they would position noise reduction as being both part of their compression and a way to improve video clarity.
Bazzite has a command, `ujust setup-boot-windows-steam`, which when run adds an entry to Steam-in-Bazzite that causes a Windows boot.
It also has a command `ujust regenerate-grub` which adds a Windows entry to the bootloader.
Each of these is a single command which only needs to be run once after install. I suppose it could take a few hours to either do it by hand, or to discover one of these options, but they are both documented, and in particular the guide at https://docs.bazzite.gg/General/Installation_Guide/dual_boot... (which you implied you followed) mentions the latter command.
It works on each of my Bazzite machines without any manual tinkering/intervention. Not sure why it would not Work On Your Machine (TM).
I guess you've never read the English of a Dutch person ;) During my PhD defense I was told I "should have checked with a native speaker." Pre-LLMs, I'd go to my American colleague and she'd mostly remove text and rewrite some bits to make the text much more readable.
Nowadays, often I put my text into the LLM, and say: Make more concise, include all original points, don't be enthusiastic, use business style writing.
And then it will come up with some lines where I think: Yes! That is what I meant!
I can't imagine you'd rather read my Dunglish. Sure, I could have "studied harder", but one is simply much more clever in their native tongue: I know more words, more subtleties, etc. Over time, and I believe due to LLM use, I do get better at it myself! It's a language model after all, not a facts model. I can trust it to make nice sentences.
I am telling you my own preferences, as a native speaker of English. I would rather read my coworkers' original output in their voice than read someone else's writing (including a machine edit of their own text).
I doubt that very strongly and would like to talk to you again after going through 2 versions (with and without LLM) of my 25-pager to UMC management on HPC and Bioinformatics :)
I understand the sentiment, even appreciate it, but there are books that draw you into a story when your eyes hit the paper, and there are books that don't and induce yawning instead (on the same topic). That is a skill issue.
Perhaps I should add that using the LLM does not make me faster in any way, maybe even slower. But it makes the end results so much more pleasant.
"If I Had More Time, I Would Have Written a Shorter Letter". Now I can, but in similar time.
As they said, they are telling you their preference, there is nothing to doubt.
Recently there was a non-native English speaker heavily using an LLM to review their answers on a Show HN post, and it was incredibly annoying. The author did not realize it (because of their lack of skill in the language), but the AI-edited version felt fake and mechanical in tone. In that case, yes, the broken original is better because it preserves the humanity of the original answers, mistakes and all.
Ok, well it depends on the context then and the severity of the AIness (which I always try to reduce in the prompt, sometimes I’ll ask it to maintain my own style for example).
You know maybe it is annoying for native speakers to pick up subtle AI signals, but for non-natives it can be annoying to find the correct words that express what you want to say as precisely as in your mother tongue. So don’t judge too much. It’s an attempt at better communication as well.
The only difference between a `fun doThing: Result<X, SomeError>` and a `fun doThing: X throws SomeError` is that with the checked exception, unpacking of the result is mandatory.
You're still free to wrap the X or SomeError into a tuple after you get one or the other. There is no loss of type specificity. It is no harder to "write functional code" - anything that would go in the left() gets chained off the function call result, and anything that would go in the right() goes into the appropriate catch block.
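To make that concrete, a rough Java-flavoured sketch (names made up for illustration). Both signatures carry the same type information; they differ only in how the caller is forced to unpack it:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // A hand-rolled Result purely for illustration.
    sealed interface Result<T, E> {
        record Ok<T, E>(T value) implements Result<T, E> {}
        record Err<T, E>(E error) implements Result<T, E> {}
    }

    class SignatureDemo {
        // Checked-exception flavour: the error is declared in the signature,
        // and the caller must catch it (or redeclare it).
        static String readConfig(Path p) throws IOException {
            return Files.readString(p);
        }

        // Result flavour: the error is part of the returned value,
        // and the caller must unwrap it to get at the String.
        static Result<String, IOException> readConfigResult(Path p) {
            try {
                return new Result.Ok<>(Files.readString(p));
            } catch (IOException e) {
                return new Result.Err<>(e);
            }
        }
    }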
You've discarded the error type, which trivialised the example. Rust's error propagation keeps the error value (or converts it to the target type).
The difference is that Result is a value, which can be stored and operated on like any other value. Exceptions aren't, and need to be propagated separately. This is more apparent in generic code, which can work with Result without knowing it's a Result. For example, if you have a helper that calls a callback in parallel on every element of an array, the callback can return Result, and the parallel executor doesn't need to care (and returns you an array of results, which you can inspect however you want). OTOH with exceptions, the executor would need to catch the exception and store it somehow in the returned array.
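A sketch of that point (hypothetical code): the generic helper is written once against plain values and works unchanged when those values happen to be Results.

    import java.util.List;
    import java.util.function.Function;

    class ParallelMap {
        // Generic "apply f to every element" helper; it knows nothing about errors.
        static <T, U> List<U> mapAll(List<T> items, Function<T, U> f) {
            return items.parallelStream().map(f).toList();
        }
        // If f returns some Result<U, E>, the caller simply gets a List<Result<U, E>>
        // to inspect however it wants. If f threw a checked exception instead, it
        // couldn't even be passed as a java.util.function.Function: the helper would
        // need its own throwing functional interface plus catch-and-store logic.
    }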
Either works, but now you have two ways of returning errors, and they aren't even mutually exclusive (an Either-returning function can still throw).
Catch-and-wrap doesn't compose in generic code. When every call may throw, it isn't returning its return type T, but actually an Either<T, Exception>, but you lack a type system capable of reasoning about that explicitly. You get an incomplete return type, because the missing information is in function signatures and control flow, not the types they return. It's not immediately obvious how this breaks type systems if you keep throwing, but throwing stops being an option when you want to separate returning errors from the immediate act of changing control flow, like when you collect multiple results without stopping on the first error. Then you need a type for capturing the full result type of a call.
If you write a generic map() function that takes T and returns U, it composes well only if exceptions don't alter the types. If you map A->B, B->C, C->D, it trivially chains to A->D without exceptions. An identity function naturally gives you A->A mapping. This works with Results without special-casing them. It can handle int->Result, Result->int, Result->Result, it's all the same, universally. It works the same whether you map over a single element, or an array of elements.
But if every map callback could throw, then you don't get a clean T->U mapping, only T -> Either<U, Exception>. You don't have an identity function! You end up with Either<Either<Either<... when you chain them, unless you special-case collapsing of Eithers in your map function. The difference is that with Result, any transformation of Result<T, E1> to Result<U, E2> (or any other combo) is done inside the concrete functions, abstracted away from callers. But if a function call throws, the type change and transformation of the type is forced upon the caller. It can't be abstracted away from the caller. The map() needs to know about Either, and have a strategy for wrapping and unwrapping them.
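A sketch of the composition point (again with a hypothetical Result type, just for illustration):

    import java.util.function.Function;

    class ComposeDemo {
        // Plain composition: (A -> B) andThen (B -> C) is just A -> C, no special cases.
        static <A, B, C> Function<A, C> compose(Function<A, B> f, Function<B, C> g) {
            return f.andThen(g);
        }

        sealed interface Result<T, E> {
            record Ok<T, E>(T value) implements Result<T, E> {}
            record Err<T, E>(E error) implements Result<T, E> {}

            // Fallible steps are chained *inside* the Result abstraction, so the nesting
            // never leaks out: Result<T, E> -> Result<U, E>, not Result<Result<U, E>, E>.
            default <U> Result<U, E> andThen(Function<T, Result<U, E>> next) {
                return switch (this) {
                    case Ok<T, E> ok -> next.apply(ok.value());
                    case Err<T, E> err -> new Err<>(err.error());
                };
            }
        }
        // A fallible step A -> Result<B, E> chained with B -> Result<C, E> is still an
        // ordinary value-returning function A -> Result<C, E>, which compose() accepts as-is.
        // A throwing A -> B can't be treated this way: the caller has to add try/catch and
        // decide how to represent the failure before any generic composition can happen.
    }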
catch lets you convert exceptions to values, and throw converts values to exceptions, so in the end you can make it work for any specific use-case, but it's just this extra clunky conversion step you have to keep doing, and you juggle between two competing designs that don't compose well. With Result, you have one way of returning errors that is more general and more composable, and doesn't come with a second, incomplete, less flexible way it has to be converted to/from.
I think you're missing the key point about return types with checked exceptions.
`int thing()` in Java returns type `int`. `int thing() throws AnException` in Java returns type `int | AnException`, with language-mandated destructuring assignment with the `int` going into the normal return path and `AnException` going into a required `catch` block.
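Spelled out as a toy example:

    class AnException extends Exception {}

    class CheckedDemo {
        // Conceptually returns `int | AnException`, split across two control flow paths.
        static int thing() throws AnException {
            if (Math.random() < 0.5) return 42; // normal path: the int
            throw new AnException();            // exceptional path: the other half of the "union"
        }

        static void caller() {
            try {
                int x = thing();              // the int goes into the normal return path
                System.out.println(x);
            } catch (AnException e) {         // AnException must go into a required catch block
                System.out.println("failed"); // (or caller() must itself declare `throws AnException`)
            }
        }
    }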
The argument you're making, that the compiler doesn't know the return type and "you lack a type system capable of reasoning about that explicitly", is false. Just because the function says its return type is `int` doesn't mean the compiler is unaware there are three possible returns, and also doesn't mean the programmer is unaware of that.
The argument you are making applies to UNchecked exceptions and does not apply to CHECKED exceptions.
It's not a single return type T that is a sum type. It's two control flow paths returning one type each, and that's a major difference, because the types and control flow are complected together in a way that poorly interacts with the type system.
It's not `fn(T) -> U` where U may be whatever it wants, including Ok|Exception in some cases. It's `fn(T) -> U throws E`, and the `U throws E` part is not a type on its own. It's part of the function signature, but lacks a directly corresponding type for U|E values. It's a separate not-a-type thing that doesn't exist as a value, but is an effect of control flow changes. It needs to be caught and converted to a real value with a nameable type before it can work like a value. Returning Either<U, E> isn't the `U throws E` thing either. Java's special alternative way of returning either U or E is not a return type, but two control flow paths returning one type each.
The compiler is fully aware of what's happening, but it's not the same mechanism as Result. By focusing on "can this be done at all", you miss the whole point of Result achieving this in a more elegant way, with fewer special non-type things in the language. Being just a regular value with a real type, which simply works everywhere values work without altering control flow, is the main improvement of Result over checked exceptions. Removal of try/catch from the language is the advantage and simplification that Result brings.
Result proves that Java's special-case exception checking is duplicating work of type checking, which needlessly lives half outside of the realm of typed values. Java's checked exceptions could be removed from the language entirely, because it's just a duplicate redundant type checker, with less power and less generality than the type checker for values.
Games run pretty great on Linux, but if you do want a VM, passing through a graphics card to that VM via vfio provides 95%+ of native performance.
Virtual reality headsets with dual 4K screens running at 75Hz+ perform well on a Windows VM done that way. A normal flatscreen game is going to be just fine.
Let's imagine, hypothetically speaking, that demand is perfectly inelastic. The price of a good is $10, and buyers will absolutely refuse to pay more than $10 under any circumstances.
Before a tariff is imposed, the seller sells the good for $10 and keeps $10 in revenue.
If a tariff of $1 is imposed under these hypothetical circumstances, does the buyer pay more? Does the exporter get paid the same as before?
Clearly, it's neither guaranteed that the buyer will "pay more" nor that the exporter will "get paid the same as before". In reality, because demand is neither 100% elastic nor 100% inelastic, what tends to happen is that the cost of the tariff is split in some ratio between the buyer and seller.
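For what it's worth, the standard textbook incidence result (under the usual simplifying assumptions of linear supply and demand, so treat it as a rough guide) makes that split explicit:

    % epsilon_S = price elasticity of supply, epsilon_D = price elasticity of demand
    \text{buyer's share} = \frac{\varepsilon_S}{\varepsilon_S + |\varepsilon_D|},
    \qquad
    \text{seller's share} = \frac{|\varepsilon_D|}{\varepsilon_S + |\varepsilon_D|}

The more inelastic side of the market ends up bearing the larger share.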
I find it mildly amusing that there are so many people claiming that it's 100% on one side or other, when it's trivially easy to see why that can't be GUARANTEED TO BE the case.
You can go into hypotheticals, but unfortunately for you the data exists.
And the data shows that American buyers are not paying their international suppliers less for goods than they were before. In fact, if anything, they are paying slightly more, which may be explained by general inflation and the fact that tariffs mean American buyers are placing smaller orders and therefore getting smaller volume discounts.
That opens up a greater margin for local production. Not everything is elastic, but as long as the producer side cheats in terms of local subsidies, less regulation, slave labor, etc., implementing tariffs seems like a good choice.
You cannot just carbon-tax everything locally and then let the other corner of the world produce at a fraction of the price, polluting the same world, exploiting workers, etc., without wrecking your internal labor market.
What you see as the customer paying more is caused by the government letting this shit go on for too long, and now the correction is ugly. But it's not like it's not needed, and at some point it has to happen, before things reach the breaking point.
I'm not in favor of the current round of tariffs as used by the current administration, which seem like a baseless negotiating tactic, but the effect of outsourcing to bad-faith actors has pushed the working class out of balance; they simply have no way of competing internationally except by accepting a steep downgrade in working and living conditions.
> That opens up greater margin for local production
My country mostly produces pine wood (and other softwood). I like hardwood furniture, but it's all imported because we have very few producers. Putting a tariff on hardwood furniture could be a good idea to increase local production, as long as hardwood itself is not tariffed. If both hardwood and hardwood furniture get taxed, I will have to pay more, and local production will never have a greater margin, as producers will be hit by base-material tariffs.
(To be clear: I live near one of the biggest hardwood harbours in Europe, and buy my wood directly from the sawmill, but my point stands.)
Yeah, and that's where I was going with the last point about tariffs needing to be integrated with the rest of the economic system as a tool, not used arbitrarily as a negotiating tactic. Tariffs are a damper on any economic system and reduce efficiency; they need to be proportional, predictable and non-escalatory (well, as much as possible).
> That opens up greater margin for local production.
It opens up a larger profit margin for local producers, for sure. Production? Maybe. Maybe not, because there is no incentive to produce more or better: the cheap bad-faith actor is gone, and prices can now match the export price or sit just slightly below it.
>but the effect of outsourcing to bad faith actors has pushed the working class out of balance, they simply have no way of competing internationally unless by accepting a step downgrade in working and living conditions
> What you see as customer paying more is cause by government letting this shit go on for too long, and now the correction is ugly. But it not like its not needed, and at some point needs to happen before it reaches the breaking point.
You don't seem to see the contradiction between these two statements. If prices go up and the working class isn't paid any more for its effort, then it is all for naught. The failure hasn't been continuing to outsource; the failure has been not improving wage conditions - because the market was supposed to correct it, or, worst case, it is "socialism" to even try to raise wages.
But as always, people want to test economic theories for themselves, and they should. See if their lives improve under a capitalist government that is going to trample on their rights.
The exporter may sell less to the US, but typically they will then sell the difference into non-US markets, reducing the impost. This is exactly what happened in a lot of (not all) markets a few years ago, when China tried to intimidate Australia with trade restrictions [1]. When China dropped the restrictions, they found that they were now competing with more buyers and so paying higher prices.
Losing sales isn't the same as paying the tariff. The person importing the item pays the tariff. Their item won't be released from customs if they don't pay. They pay to the US government.
The correct thing to say is that the tariff has an effect on demand because of the impact of adding a tariff on top of the price.
The one importing pays the tariffs. If that is a person, say buying directly from AliExpress or some other site, then that person pays.
If it's a company, the company pays and might pass it on.
Edit: to be accurate, the importer is legally responsible for the customs declaration and the tariffs, regardless of who does the declaration and who pays. Typically someone else does the declaration on your behalf, and typically they forward any tariffs to you.
> In reality because demand is neither 100% elastic nor 100% inelastic, what tends to happen is that the cost of the tariff is split in some ratio between the buyer and seller.
That is the argument of the Administration:
>> Kevin Hassett's theory of tariffs: "China has got to sell a lot of stuff to us to maintain political stability. And so if we put a tariff on their stuff, then they cut the price so that our consumer is basically still able to demand as much stuff as they need to sell to be politically stable."
> If he were right, the import price index (which measures pre-tariff prices) would have fallen by enough to offset the sharp tariff hike. It didn't.
This is all true, but in practice end-consumer demand tends to be much less elastic than almost everything else in the chain. You don't get to decide not to buy toothpaste for more than $2.50 when you run out, you need a new phone when your old one breaks (and not when the price goes back down), etc. Consumers buy products to fill needs, and *needs* are the inelastic part.
In particular your "Let's imagine" case is sort of ridiculous. There are no such goods, nor anything even comparable. The very existence of inflation disproves the idea (since if those inelastic goods existed, they'd see demand drop to zero if the price needed to inflate).
>I find it mildly amusing that there are so many people claiming that it's 100% on one side or other, when it's trivially easy to see why that can't be GUARANTEED TO BE the case.
Yup. And it can't be guaranteed that the sun will rise tomorrow.
Finally. It's not as cut and dried as one side or the other. People have lost their minds. It's case by case for every product and every consumer.
Some companies might choose to lose the margin (few, but it's still possible). Some might try to pass some or all of it on in the sale price (which creates all sorts of other dynamics), and finally the customer does not have to buy that product. There are many more breakdowns that all adjust who pays and when they pay.
>I find it mildly amusing that there are so many people claiming that it's 100% on one side or other, when it's trivially easy to see why that can't be GUARANTEED TO BE the case.
To be fair most people on one side think they know better than Adam Smith and the people on the other side usually never opened a book, so it's a tough bargain.
Would it have been better if I had asked for sources instead? How well backed up was all that talk about reality? Apparently what you agree with is reality, and what you don't is out of people's asses, even though everyone was just sharing opinions!
It's pretty clear who pays the tariffs. The buyer pays the manufacturer. The manufacturer ships the product. The product gets held in customs until the buyer pays the tariff.
Yeah, we can literally see it happening in real time. If you have a product with competitors in the market and you are a foreign entity, you will eat some of the cost to try to stay competitive. Your only other option is to leave the market. A good example of this is Brazil, which tariffs a ton of stuff.
Isn't your example actually perfectly elastic? It does not change the conclusion at all, of course.
One problem with this analysis is that I can't imagine Trump doing it, or even understanding it. Well, it's not a problem with the analysis, but with the overall situation.
Let's define "more secure" as "preventing a particular behavior that is against the device owner's conscious or unconscious wishes".
It would be "more secure" to have a per-application firewall that blocks particular apps from outbound traffic over certain networks or to certain destinations. This prevents a malicious app from consuming roaming data.
LineageOS can have that, at the owner's preference. Graphene explicitly forbids it.
It would be "more secure" to allow backing up apps and all their data. This would mitigate the damage of ransomware. Graphene, again, forbids it (following google guidelines prioritizing the wishes of an app's developer over the device owner).
There are many such examples. Lineage is philosophically owned by the person who installed it onto the phone. Graphene is owned by the Graphene devs, NOT the phone owner. Sometimes the Graphene devs purposefully choose to let software on the device restrict the valid owner of that device.
>It would be "more secure" to have a per-application firewall that blocks particular apps from outbound traffic over certain networks or to certain destinations. This prevents a malicious app from consuming roaming data.
LineageOS can have that, at the owner's preference. Graphene explicitly forbids it.
Not sure what is meant by forbidding it? GrapheneOS provides per-app network access control via a user-controllable Network permission, which is not implemented in AOSP or LineageOS afaik. They do not forbid using local firewall/filtering apps like RethinkDNS (to enforce mobile data only or Wi-Fi only, iirc) and InviZible. They only warn that 'blocks particular apps from outbound traffic ..to certain destinations' cannot be enforced once an app has network access, which makes sense to me.
>It would be "more secure" to allow backing up apps and all their data. This would mitigate the damage of ransomware. Graphene, again, forbids it (following google guidelines prioritizing the wishes of an app's developer over the device owner).
Contact scopes, storage scopes, the sensors permission and the network permission are examples that show precisely the opposite (GrapheneOS prioritises the device owner over the application developers). To my understanding, the backup app built-in to GrapheneOS even 'simulates' a device-to-device transfer mode to get around apps not being comfortable with data being exfiltrated to Google Drive. That being said, I understand they have plans to completely revamp the backup experience once they have the resources to do so.
They're referring to the leaky network toggles in LineageOS for different kinds of networks. GrapheneOS won't include that because it doesn't work correctly and gives people the false impression that it's going to stop apps communicating over those networks when it only stops most (not all) direct connections.
LineageOS has the same Seedvault backup system with the same limitations. There are few limitations left since Android 12's API level stopped apps opting out of all backups by redefining it as an opt-out of cloud backups and similarly redefined the file exclusions as only being for cloud backups. The new system supports very explicitly omitting files from device-to-device backups, but it has to be explicitly specified that way and few apps do it. The problems with apps opting out of backups due to not wanting cloud backups for space, bandwidth or privacy reasons have been solved for several years now. It doesn't mean all app data is portable between devices; for example, Signal encrypts its database with a hardware keystore key, making it fundamentally impossible to do backups at a file level for it rather than using their own backup system.
No, I'm specifically referring to iptables-based firewalls (like AFWall), which Graphene does not allow the user to create and Lineage does (via root access).
These are not Android VPN providers, and they allow blocking traffic based on the combination of source app AND DESTINATION SERVER ADDRESS.
> LineageOS can have that, at the owner's preference. Graphene explicitly forbids it.
That's not true.
You can use apps like RethinkDNS providing local monitoring and filtering of connections while still supporting using a VPN on either LineageOS or GrapheneOS. GrapheneOS fixes 5 different kinds of outbound VPN leaks which are still present on LineageOS, which is quite relevant to this. There are no known outbound VPN leaks remaining for GrapheneOS as long as Private DNS is set to Off.
The reason GrapheneOS doesn't include the finer grained network toggles LineageOS does is because they're leaky and do not work correctly. Our Network toggle doesn't have those kinds of leaks. We do plan to split up the Network toggle a bit but doing that correctly is much harder and comes with some limitations since it still has to block generic INTERNET permission access if anything is disabled and only permit cases which are specially handled.
GrapheneOS has Storage Scopes, Contact Scopes, a Network toggle and a Sensors toggle not available on LineageOS along with other app sandbox and permission model improvements. Users have much more control of their apps and data on GrapheneOS.
LineageOS provides privileged access for Google apps while we take a different approach.
> It would be "more secure" to allow backing up apps and all their data. This would mitigate the damage of ransomware. Graphene, again, forbids it (following google guidelines prioritizing the wishes of an app's developer over the device owner).
That's also not true. LineageOS has the same limitations and backup system.
Both GrapheneOS and LineageOS use Seedvault with the same kind of integration. Since the Android 12 API level, apps can only opt-out of cloud backups and existing exclusion files only apply to cloud backups. There's a new exclusion system which can be used to explicitly omit files from device-to-device backups such as Google's device transfer system, but that's rarely used and it exists for good reason due to device-specific data that's not portable.
> There are many such examples. Lineage is philosophically owned by the person who installed it onto the phone. Graphene is owned by the Graphene devs, NOT the phone owner. Sometimes the Graphene devs purposefully choose to let software on the device restrict the valid owner of that device.
You haven't raised any examples of GrapheneOS restricting what can be done in a way that's not done by LineageOS. All you did was bring up a feature approached differently by both operating systems, where the most flexible solutions such as RethinkDNS are available for both. If people want to modify either GrapheneOS or LineageOS, they can do it for each. We provide very good build documentation for production releases with proper signing. We strongly recommend against using Magisk, but people do modify GrapheneOS with that project and use it. Our recommendations are not restrictions on what people can do.
As an example of something lineage allows me to do which graphene forbids: Lineage allows me, the owner of my phone, to use an app of my choice to serve as a location provider.
Graphene requires that I use google services (sandboxed) and does not PERMIT me, the owner of the device, to choose otherwise without compiling my own fork.
I'm using Graphene, but honestly the biggest thing is that Lineage devs wouldn't care if you root, while Graphene devs obviously do, because it defeats the whole point of Graphene.