Hacker News | bdonlan's comments

Fusible traces aren’t uncommon - but they would effectively destroy the device if current limits are exceeded, and they certainly would be if the power supply is non-isolated, so it wouldn’t actually be a solution to the firmware update problem.

The usual solution here is an optical coupling of some kind - optocouplers (a box with a LED, photodiode, and air gap between them) are very common for this purpose, and are an inexpensive and effective option for digital signaling across an isolation boundary.

In this case they’ve basically built a custom optocoupler out of discrete components, which is a bit unusual, but makes sense when you consider the risk of corrosion due to water ingress at the diagnostic ports, and the need to access it while - for example - a dishwasher cycle is running.


It was originally a switching/signalling waypoint, and later started seasonal passenger service for skiers in 1932, before switching to year-round service. Apparently it used to be a popular hiking destination as well, but it became less popular in the 80s after more convenient rail and road routes were established.


Convenient — your hike (or climb with skis) starts immediately at the train platform!


I would rather take stairs instead of a really long escalator; it's scary af (when going down).


This would have been a great opportunity for an Akira Elevator:

https://www.rockpapershotgun.com/why-the-mysterious-love-aff...


You might enjoy this (now 2 part series) - https://www.youtube.com/watch?v=M2oELc61XHE


Really? I've gone down long escalators before and it doesn't feel scary, and I'm scared of heights.


I took this[0] in Kyiv; it was really uncomfortable. Apparently it's the "Deepest Metro station in the world, and a long escalator ride" [1]

[0]: https://dynamic-media-cdn.tripadvisor.com/media/photo-o/15/9...

[1]: https://www.tripadvisor.com/Attraction_Review-g294474-d80742...


I think it has more to do with lighting as well as the height of the ceiling. In NYC the escalator going to Grand Central Madison is very long and yet it doesn't feel uncomfortable at all. It has way more lighting than your picture and the ceiling seems higher too.


Do you feel nothing at all when you're on a very long escalator and you look down? What if someone pushes you, or the person behind you? People would fall like dominoes.


Lots of people will grab the handrail which will dampen the domino effect.


no it doesn't. long escalators are just really fucking scary. regular length are scary too, so it just follows


I'm reading and acknowledging all these comments, but I don't get it. Can one of you describe the fear?


I'm going to guess the person suffers from vertigo, as have I to a small degree - particularly on the metro escalator in Rosslyn, VA (across the river from DC). The sensation occurs going down or up very tall escalators in a tunnel. When it hits, you feel like you are traveling horizontally with some weird tunnel vision. This is terrifying and can cause you to feel like you're falling - even when you know you are going down or up, your eyes are telling you you're traveling horizontally.

I've also gotten this driving a car through long tunnels, going down or up (the Baltimore 895 harbor tunnel can do this).


I’m pretty scared of heights and generally haven’t been triggered by any long escalators into subways, but there’s one on the DC Metro (Adams Morgan maybe?) that kind of freaked me out.


I'd guess that a long narrow escalator could make someone feel "trapped", in a way that regular stairs generally wouldn't.


Yes, really. Avoided if at all avoidable, really. Regular length is terrifying, and anything more than that is no-go territory.


> 486-step staircase

> service for skiers

Can't help but think of trudging up the gondola stairs at Heavenly in ski boots at the end of the day.

this is why snowboarding will win in the end.


In this case, the problem was that the seed was low entropy: a bunch of random numbers generated from a single initialization of the RNG would all be unique, but separate initializations could collide on the seed and replay the exact same stream. It's a tricky scenario to test for if you don't know of the failure mode...
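A minimal Python sketch of that failure mode (the 256-value seed space and the numbers here are made up purely for illustration, not taken from the actual system):

```python
import random

# Hypothetical: imagine the seed is derived from a low-entropy
# source, leaving only 256 possible effective seeds.
def make_rng(raw_seed: int) -> random.Random:
    return random.Random(raw_seed % 256)  # tiny effective seed space

# Within one initialization, the stream looks healthy: 100 draws
# from a billion-value range are all distinct, so a per-run
# uniqueness test passes.
rng_a = make_rng(1000)
values_a = [rng_a.randrange(10**9) for _ in range(100)]
assert len(set(values_a)) == 100

# But two initializations whose raw seeds collide modulo 256
# replay the exact same stream -- the failure a per-run
# uniqueness test can never observe.
rng_b = make_rng(1000 + 256)
values_b = [rng_b.randrange(10**9) for _ in range(100)]
assert values_a == values_b
```

The point is that any test which only inspects one run sees perfectly distinct values; you only catch the bug by comparing streams across independent initializations.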


This is called disgorgement and is indeed frequently used for certain types of crimes (particularly financial crimes).


There are also upcoming changes to paper bills; vending machine operators might be waiting to do both at the same time.


WinRAR supports explicitly selecting the character set to use to decompress a .zip file - super helpful when opening Japanese ZIP files.


Unfortunately that means it's not possible to deploy this without violating the AGPL...


No one cares. It's a two-week violation, and no one is going to hunt down anyone who released this early internally.


Even though this is technically a violation, licenses aren't black & white. The objective and intent of the AGPL is not being violated by delaying release by a couple weeks to give time for security patches to be applied.


Link appears to be broken.


```
All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations.

Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.

Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court’s judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.
```


> While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.

While I like the idea, the reasoning here is kinda silly: even if some developer had sworn such an oath, it is entirely unclear how that would affect an AI. We just don't have good techniques to prevent LLMs from hallucinating.


Not just the developer, but anyone who posted any sort of text in the training data. I assume that includes a pretty wide majority of English speakers. That makes the likelihood of actual agreement with the oath for all “participants” in the AI training low, to say the least.

That point about swearing an oath is a good one. It’s a reasonable position (assuming it’s the court’s position) that an AI (assuming this is an “AI”) being asked to provide testimony or a legal argument is doing so on behalf of all who trained it. Why would “my” training argue in “your” case? What if one doesn’t agree with the oath?


The hallucination issue was a separate clause. The court does not think that so swearing will fix hallucinations.


That's surprisingly well informed about tech for a judge.


One of the hardest, most time-consuming, and most important parts of being a judge is understanding the issues presented before you, be it generative AI or communal animal husbandry.

There are clearly gross examples of judges failing in this duty, but others take that duty extremely seriously and I've read a number of judicial opinions on technology-related cases that demonstrated remarkable, nuanced grokking of the matter at hand.


Works for me, but you can also read the text of the order here: https://reason.com/volokh/2023/05/30/federal-judge-requires-...

(The linked blog post is where I discovered this, but I submitted the original source instead.)


In federal courts, when you file something under your account (often, attorneys have staff do the clicking to file), it is the equivalent of signing. And what you sign, you vouch for.

This standing order reiterates what any practitioner _should_ know. It is a rule which states that you should follow the rules! But as we learned from the Avianca filing, at least one lawyer is missing something very basic about what it means to sign a filing in federal court.


Works for me too. Nice to see some judges have integrity.


The issuing bank for loans isn't necessarily where the company keeps the majority of their assets.


OP's claim was that Stripe Capital's loan underwriter was SVB, which is why they weren't extending loans to SVB-affected customers.

Stripe itself was, and probably still is, using Wells Fargo for US corporate accounting, per https://qr.ae/pvERsZ


Stripe is still saying they are extending loans via Capital with no change to recent terms they've been offering. Confirmed with our account manager this afternoon, just in case.


Any kind of turn costs lift, and that's not something you want to spend when you're trying not to hit something on the ground. As such, turns are generally not performed below a certain altitude.

