w0utert's comments | Hacker News

>> The biggest technical hurdle is the inability to run external processes on iOS and iPadOS. >> Apps on iOS and iPadOS must use Apple’s Javascript interpreter, JavaScriptCore.

> Both of these really suck because they are policy, not technical, decisions.

They are policy decisions that kind of make sense for a device like a tablet or phone though. Even though you could technically allow installing a complete development toolchain on an iPad, I can't imagine what the process would look like in practice. Download and install a complete *nix userland through the app store? Plus a compiler toolchain and each and every tool used in the build phase for your product? Who is going to maintain and distribute all these parts if the whole ecosystem is designed around the idea that apps are sandboxed and distributed through a curated app store? Imagine the customer support burden if you are the maintainer of some app that depends on external tools that can be used in a zillion different build/deploy configurations.

You could of course argue that the iOS ecosystem should not be based around a curated app store and sandboxed applications, but that would make it a MacBook...

Maybe we should put the whole idea of having one device that does everything to rest and accept that there are advantages to having a split between 'real computers' and tablets/phones. That's just my opinion though...

Edit: ah great, an immediate -3 because apparently people here think it is absolutely required to downvote straight away when they disagree with some opinion that is not their own.

Goodbye Hacker News, after ~10 years I'm finally done with the comment sections here and will deactivate my account and ask for it to be deleted


Apple advertises their own iPads as computers now, they certainly don't want their customers to look at iPads and think, "that's great, now I'm going to buy a real computer instead". They want their customers to buy an iPad. The only real way for iPadOS to go is "up", as in, absorbing more "real computer" features.

Also, what you're describing already exists, it's called iSH. It runs an x86 emulator with a copy of Alpine Linux inside. Somehow, they even convinced App Review to allow it (yes, Apple did threaten to remove it at one point, but they backed down). You can use this penalty box to run pretty much any developer tool you like, you can mount file providers inside of the VM, etc. The only limitation is that its x86 emulation is incomplete; I can't get it to run cargo, so I can't compile Rust programs on it yet.


> they certainly don't want their customers to look at iPads and think, "that's great, now I'm going to buy a real computer instead". They want their customers to buy an iPad.

They want their customers to buy both. Apple has nothing to gain by killing off the Mac via the iPad.


> Apple has nothing to gain by killing off the Mac via the iPad.

If tablets are going to replace laptops for consumers in the long term, Apple would prefer that those tablets be iPads. If that shift means the iPad eats the Mac, so be it.

This already worked for them once - the iPhone cannibalized the (then-very profitable) iPod business with Apple’s explicit support, and now the company’s worth a trillion dollars.


They clearly want the iPad to replace Macs for consumption, not coding.


> They clearly want the iPad to replace Macs for consumption, not coding

I don't know what they want, but they're clearly preparing for a future where iPads are the default for consumption and creation.

There are multiple physical keyboard options, the Pencil is consistently refreshed, iPadOS has mouse/trackpad support now, and there's even an official - if very limited - iPad IDE[1] for learning Swift.

1. https://www.apple.com/sg/swift/playgrounds/


Swift Playgrounds does not change their "app console" philosophy, as can be seen from their documents shown in their legal battle with Epic (I'm on mobile so I don't have the link handy)


If we’ve learned anything from Apple’s history, it’s that what they say today has little bearing on what they’ll do tomorrow.


The truest comment I've ever read. Honestly I hope you're right! How cool would that be?


The line is that adding additional functionality requires an informed user's explicit and understood consent. This is a blurry line of policy that cannot be enforced by technology alone. The developers of tools like iSH and Pythonista have to tread carefully.


> Apple advertises their own iPads as computers now, they certainly don't want their customers to look at iPads and think, "that's great, now I'm going to buy a real computer instead".

Just because most of us on HACKERnews write code doesn't mean your average 'computer user' does. iPads work very well as computer replacements for the majority of people who just want to browse the web/do shopping/use their favourite video and other media viewers, etc.

It's an everyday computer for the masses. Not a developer's workstation.


Wow, you can actually use iSH to install PHP, run `php -S localhost:8080` and view index.php or whatever in Safari. I had no idea this was possible on iOS. Thanks!


Yeah, iSH is cool, but x86? Seems odd that they didn't run an ARM version of Linux in a container or VM.


The iSH author had more experience with x86 than ARM.

I would not be surprised if iPadOS 15 ships with virtualization support, since the M1 has ARM EL2. If that is the case and Apple allows iSH to use it, then it probably would make sense to add ARM support to iSH for extra performance.


You cannot download and execute arbitrary files, so they had to have something interpreted.

They could perhaps have picked something easier to emulate, but x86 has benefits from a compatibility standpoint.

Maybe one day there would be a benefit to targeting WASM with a natively implemented syscall API?


We looked into it; the problem is that this makes system calls unbearably slow because they require IPC.


There isn’t any support for this in iOS.


I downvoted you for several reasons: while I do agree with the basic idea (an iPad shouldn't be a MacBook with touch), I think the way you argue for it lacks nuance and doesn't hold up.

First of all, there is no reason why you can't have both *nix tools and a central App Store. Most people don't use *nix tools? Don't install them. This would also work with sandboxing; e.g. I wouldn't care if every app brought its own compilers, even if that wastes memory. But even that is too much in Apple's eyes. The reason I need a MacBook that has the same processor as an iPad to develop for the iPad is completely arbitrary.

Also, ideally I would like to not have to carry around multiple devices, but more importantly, not have to buy them, because that costs money (for some reason this argument rarely comes up, but money matters, especially in developing regions). Lastly, having devices that serve multiple purposes is a good thing for the environment. It's also been the direction things have moved for the last two decades: when was the last time you had a separate MP3 player, a camera, a calculator and a GPS device with you? Why shouldn't my iPad be capable of enabling actual productive work?


"Should" in an ethical sense often conflicts with "does" in a business sense.


> Even though you could technically allow installing a complete development toolchain on an iPad, I can't imagine what the process would look like in practice.

Like every other computer ever.

> Imagine the customer support burden if you are the maintainer of some app that depends on external tools that can be used in a zillion different build/deploy configurations.

Microsoft and Google seem to work just fine. People seem to be able to ship when they have the tools to do so.

> They are policy decisions that kind of make sense for a device like a tablet or phone though.

They are strategic decisions under the guise of policy decisions. Apple is "protecting you from dangerous apps" (read: dangerous apps = competition for Apple).

Apple is anti-competitive.


It's got 8GB of RAM and a 3GHz processor. It runs Photoshop, for God's sake. An iPad is a small computer with touch.


An iPad Pro and a MacBook Air have the same core hardware - even the same M1 CPU. Add a keyboard and they look really similar except that the iPad has a touchscreen!

But there are lots of hybrid tablets and touchscreen laptops. What makes the iPad an amazing device for me is its outstanding software library (e.g. Procreate) and the Apple Pencil.

I could certainly imagine Apple bringing its Pro apps - notably Final Cut, Logic, and Xcode - to the iPad. But I can't imagine Apple opening up iOS any time soon, any more than I would imagine Nintendo opening up the Switch.


With this level of reasoning, SMS also makes sense for phones, and banning messenger apps is no big deal.

People have different needs, and a minority is always pushing the edges, and this pushing needs to happen so that the mainstream can pick and choose from the newly explored territory.


I was looking at some old notes the other day and remembering that I had made a plan for going the other direction, of slaving other devices to my IDE for faster round tripping of UI development.

That’s a very heterogeneous example, but at some point we will be discussing personal clouds, where people have a little cluster of commodity/older ARM hardware that they balance a bunch of services across.

For example, you can download the server part of Don’t Starve Together as a separate app that you can then leave running even if you log off. That should be the standard for coop games, and probably for multiplayer games in general.

We are also overdue for a rethink of CI/CD pipelines, and I don’t mean As A Service.


> You could of course argue that the iOS ecosystem should not be based around a curated app store and sandboxed applications, but that would make it a MacBook...

Exactly what I would argue, and the only thing that would bring me back to iOS at this point.

> Maybe we should put the whole idea of having one device that does everything to rest and accept that there are advantages to having a split between 'real computers' and tablets/phones.

Google "convergence Pinephone", and imagine how powerful that would be with an iPhone running convergent macOS. And how much more powerful having macOS (with a mobile-optimized GUI) on the phone would make it on the go.


I understand your frustration with downvotes, but it's not too bad in general in my experience. It's Apple discussions in particular that are hopeless: you have the rabid fanboys on one side and the rabid haters on the other. I gave up on commenting on these stories, you can try to make a constructive comment only to be immediately grayed out.


Downvoting seems to turn any opinionated discussion into a stupid game/power struggle between upvotes and downvotes. As if you somehow "win" whenever someone with a different perspective is downvoted to grey.

It's bad on HN, and it's much worse on other sites.


I've found this as well. Offering input from a highly specialized experience set (the US IC community) gets downvoted because I share the reality of some things that conflict with how people think things should be.


> They are policy decisions that kind of make sense for a device like a tablet or phone though. Even though you could technically allow installing a complete development toolchain on an iPad, I can't imagine what the process would look like in practice. Download and install a complete *nix userland through the app store? Plus a compiler toolchain and each and every tool used in the build phase for your product? Who is going to maintain and distribute all these parts if the whole ecosystem is designed around the idea that apps are sandboxed and distributed through a curated app store? Imagine the customer support burden if you are the maintainer of some app that depends on external tools that can be used in a zillion different build/deploy configurations.

I've got Termux running on my phone, complete with vim plugins, language server support, several compilers and all kinds of other tools. Combined with a bluetooth keyboard, it can be very useful in a pinch. It'll stop working on Android 11 because of "security concerns", but either thankfully or sadly, my phone has no stable Android 11 release yet. Everything is running inside a sandbox, I don't even have root access, and the binaries are distributed through a normal Linux package manager. With the right software you can even run a normal GUI on it through VNC or Spice, although that's something I haven't explored yet.

No need for other app developers to have any relation with Termux, that's what the sandbox is for. On Android, you could theoretically implement a system for sharing binaries and virtual files quite easily if Termux supported it, but I haven't seen such a need myself.

These tools are maintained by volunteers and the Termux developer, and can be extended by adding repositories made by other people. So "who is going to maintain and distribute all these parts" comes down to the same question as "who is maintaining and distributing all of these Debian packages": the developers who want to make the ecosystem and apps function.

Most users won't use their phone or tablet like this, but I honestly don't see why they shouldn't be allowed to if they wish to. Apple is selling a complete keyboard and display stand for iPads, so these devices are clearly being targeted for productive use. Yet Apple refuses to allow developers to be productive on these devices, because they don't want competition for their crappy mobile browser engine.

As far as hardware is concerned, the touch screen, keyboard and OS are pretty much the only serious differences between the iPad and the MacBook Air. If you prefer a two-in-one tablet/laptop combo (which quite a lot of people do), the iPad is the closest Apple product to fit the description, if only it allowed users more software freedom.

I do see the advantage of the curated app store, but I don't see how banning customers from going outside said app store benefits the end user. You don't _have_ to install any apps from outside the app store, you just get the option to do so if you wish. I don't know any non-technical people who have installed apps from outside the Play Store, so it's not like lifting the restriction will make the ecosystem collapse.

I have a hard time understanding why you would want a company to tell you what you can and cannot use a device for. Their suggestions are always welcome, but why would you be in favour of their restrictions?


> I have a hard time understanding why you would want a company to tell you what you can and cannot use a device for. Their suggestions are always welcome, but why would you be in favour of their restrictions?

1. I'm in favor of locked-down devices for certain classes of users, because it reduces the technical support burden, one that I might otherwise be saddled with!

2. I'm willing to put up with walled gardens that have high-quality software, such as certain iPhone games and music apps, or first-party Nintendo games on the Switch. DRM is irritating, but I can live with it if it doesn't get in my way too much.

3. I'm in favor of several of Apple's developer restrictions that are aligned with my priorities of privacy, security, and battery life, so I'm willing to put up with the others that support Apple's business interests. Sideloading obviously makes such restrictions less enforceable.


I see. I don't think that all of these restrictions are really necessary to achieve the goals we both share for smartphones, but I can understand the rationale better now.

I actually agree with most of those reasons as long as there's a developer mode setting somewhere deep down to turn them off. I've only seen hidden settings being accessed on a large scale once, which was during the Pokemon Go hype, to allow GPS spoofing through the developer options; something that can't be easily done anymore.

With an off switch, normal users are protected and given a nice ecosystem while anyone else can benefit from the freedom of using their device the way they want to. This solves the technical support burden and the software quality issue, because to reach that audience, you still need to go through some form of accepted app store. If you prefer Apple's judgement for whatever reason, you just stick to their store and ignore the existence of any other app out there.

I have to disagree with you on the developer restrictions, though, and I think that's where my lack of understanding your mindset came from. The mandatory 15/30% cut and arbitrary rules (such as the ban on most parental control apps the moment Apple brings out a competitor) make it impossible for me to tolerate the other minor annoyances that come with Apple's decisions. Of course, Google has been going the same route, sadly.

Google has been applying many of the same protections, except for many of the privacy ones, and their platform doesn't suffer a side-loading problem at all. This indicates that the ecosystem would be fine if Apple would loosen up a bit, sticking to their privacy guns but allowing developers to still compete with whatever project they've come up with next.

Neither Google nor Apple have my best interests at heart, but in the case of Google I can at least work around their stupidity. I'll gladly lose access to some "exclusive" content if that's what it takes to install open source apps onto my phone.


Ironically, your comment's score was positive when I read it...


> will deactivate my account and ask for it to be deleted

I don’t think accounts can be deleted? I tried once and was told no. :(


I think they can, but they just refuse to. I’ve seen (a few times) some comments with the username and text as “[deleted]”. But I’m not @dang, so I can’t say for sure.


yes, downvoting hurts, and sometimes it's not fair, i got to feel that too. but it has been said repeatedly that downvoting is a reasonable way to voice disagreement. replying would be better, but not everyone can put their thoughts into words.

try to think about it as a strong disagreement.

(EDIT: i wonder who downvoted this comment now ;-)


I haven't downvoted you, but voicing disagreement via downvoting isn't reasonable, since it tends to keep dissenting opinions from being heard at all. We're here to have a discussion after all, aren't we?


I agree with you, but HN does not. HN specifically says downvoting for disagreeing is a valid and even encouraged use of downvoting on HN. I've been informed of this by Dang when complaining about downvoting before.

https://news.ycombinator.com/item?id=16131314

I wish I could downvote downvoting


well, yes, i used to think like that too, but i changed my mind. even when i received downvotes. they don't say much, but they did tell me that there are people who disagree with me. it is a weak signal, but it is a signal, and so it's neither useless nor unreasonable.

personally, i only downvote if i feel someone says something unreasonable or worse. but not if it is a good argument, even one that i disagree with. in those cases i even counterupvote other downvotes.

as for the downvote on my comment, that was more a rhetorical question. i was actually just laughing at that, given the subject of the message. and the subsequent upvotes show that a lot of people agree with the comment.

(edit: it gets funnier. by now my above comment received at least 8 upvotes and 4 downvotes (or up to 4 people changed their mind))


Why is there voting at all? It's so childish.


Upvoting that.


> replying would be better, but not everyone can put their thoughts into words.

Exactly: downvoting as a way to disagree is the easy way; it's childish, puerile, and ridiculous. But let's put things into perspective. A comment is just an opinion in a sea of random opinions. Opinions, for the most part, are not even personal; people tend to borrow them. To think through something and come up with an original opinion takes a lot of work. A downvote is just an easy dismissal, in a sea of easy dismissals. That's not a proper way to communicate.

Downvoting is imperfect, but that said, I understand how people can find it useful as a curating system. I never downvote comments I disagree with because it doesn’t accomplish anything. It also takes too much energy.


As long as Apple makes money from allowing people to buy "Pro" apps like IDEs, REPLs and other creation apps, then you're wrong; otherwise Apple should reject these apps as not allowed, because the device is not capable of Pro creator usage.


>> This is a company that actively fights right to repair and implements software DRM to lock out non-Apple authorised replacements.

But they do all these things for obvious reasons. Reasons you and I may not agree with or be happy about, but still obvious reasons. In the case of repair/replacement it's just because they want you to use expensive replacement parts, they want to lure you into their Apple stores, and they don't want any liability/accountability for repairs with 'unofficial' parts.

I don't see how providing specifications about how their GPUs work so someone can make a Linux driver out of it hurts their commercial interests or liability though. Yes, people may screw up their system if they install Linux on a Mac and it doesn't boot anymore, but as long as you can still take it into an Apple store and they can restore it to macOS, why would Apple actively fight the extremely small minority of people who want to do that? And even if more people (developers/enthusiasts) would buy M1 hardware and immediately slap Linux on it, why would they care about that? They still made the sale, and these people will still walk around with a machine with a big fat Apple logo on it.

They previously spent a lot of effort accommodating people who wanted to run Windows on Macs using Boot Camp, so why would they be worried about people running Linux on M1 Macs?

Edit: I can imagine Apple want to protect their IP and hence don't want to disclose anything about it, period. Much like Nvidia and most other GPU manufacturers do. But if AMD and Intel can be OSS-friendly, Apple could be too; apparently IP protection does not have to be a deal-breaker.


Very simple. Booting a different OS gets in the way of Secure Boot and other security features. Why can't you install Linux on an iPhone?


Secure boot on macOS differs from iOS precisely because it lets you install Linux on Macs but not iPhones.


You can install Linux on iPhones via bootrom exploits:

https://checkra.in/
https://projectsandcastle.org/


It's even worse when they fall off close to shore. Last year a small number of containers (around 5 I think) fell off a ship not far north of the Netherlands, close to a very vulnerable and quite rare ecosystem of small islands and tidal flats that are partially submerged during high tide. It's a home for all kinds of sea life, birds, a resting area for migrating birds, etc. Some of these containers apparently had stuff like small plastic beads in them, hundreds of thousands of them, which somehow escaped from the containers and washed up on the shore. You can imagine how impossible it is to ever clean this up, and what the risk is to wildlife...


I feel like caring about things like this and prioritizing care for the environment should be one of those few things we all can agree needs to be addressed immediately. Sadly, it isn’t.


It's definitely interesting to see how people's workflows can be so different, I get by with at most ~10 tabs, and close things as soon as I'm done with them. At the end of the working day, I prefer to have at most 2 or 3 left. I sincerely start to experience existential anxiety when the number of tabs goes up too much :-P. Probably related to some subconscious feeling that I need to 'do something' with all these tabs and when they increase in number it starts to feel like I'm 'running behind'. Different people, different workflows, that's perfectly fine.

What I don't really see is why this service needs to exist to solve that particular problem (browser gets slow because too many tabs), because IMO that problem has already been solved very well by most decent browsers. They just swap out the inactive tabs and are able to restore them fast enough even on low-end systems, as long as they have an SSD. Inactive tabs that are not swapped out don't take a lot of CPU resources either. This service sells you a cloud browser with 16GB of RAM, which is pretty much the norm for laptops and desktops now, so it's not going to save you much if 'too many tabs' is causing slowness.


I keep the things I need to do in a separate window. If it gets too crowded I drag some less important ones to a different window. I get anxiety when behind, but also if I forget to live. Switching between topics effectively is hard if you are not used to it, and it definitely eats away at my focus if I don't pay attention.

For a while I used different browsers simultaneously for different things. The session turns out entirely different for some reason, as if one is a different person in a different location. I could see a cloud browser as something like that. I have no idea what would happen. Portability will probably influence the session.

I wish bookmarks were good enough; I use tabs instead to preserve scroll, audio and video offsets, and to have a bunch of tabs for a domain with related tabs next to them. Browsers have poor organization for large numbers of tabs, but bookmarks are even worse.

I have no real idea how the session should be organized but I'm sure there are tons of visualizations out there that would work wonderfully. Perhaps some filters with a flow chart for the entire browsing history. Full text search? I don't know.

The price doesn't really matter as I spend way too much time online. 1 euro per day is nothing.


Two whole frames per second... Not sure if serious :-/

Two fps at 34KB each is ~500kbps by the way, not 60 kbps
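(For the arithmetic: 2 frames/s × 34 KB/frame = 68 KB/s, and 68 KB/s × 8 bits per byte ≈ 544 kbps.)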


Yep I clearly meant 60KBps not bps :)

> Not sure if serious :-/

And I said not really joking so I guess you don't want to believe me XD


Oh I believe you can ‘stream’ stuff at 2 fps over a 500 kbps line alright, the ‘not serious’ part is how anyone could find that acceptable. Even if all you have is 500 kbps...

If you would use your 2fps streaming browser to read, say, hacker news, every scroll operation would be hideously slow and pull in another ~60KB per second, even though the page data itself is only a few KB and never changes. Your ‘streaming solution’ only makes sense if the total amount of data to fetch for the page itself outweighs the total amount of data for all the frames you need to stream while you are using the page. Which is probably almost never, unless you always look at static single-page applications which continuously pull in data on the backend without presenting anything new at the front end. Highly unlikely.
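To put a rough number on it: at ~68 KB of streamed frames per second, a single minute spent on a page already costs around 4 MB, so the page itself would have to weigh several megabytes per minute of viewing before the streamed version comes out ahead.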


Your logic is sound, just some experience seems to be missing.

> the ‘not serious’ part is how anyone could find that acceptable

I guess you don't have a beeline on what everyone finds acceptable. That's normal, you can only share your perspective not everybody's.

> every scroll operation would be hideously slow

I guess you haven't experienced it because what you describe is not how it works.

The two frames per second is not streaming a 60 frame per second source down to you at two frames per second; it's capturing two frames per second from the source and sending them to you, because that's what your bandwidth will permit.

> Your ‘streaming solution’ only makes sense if... Highly unlikely.

Only if the goal is a reduction in bandwidth used viewing the page. There are many other goals where streaming the browser makes a helluvalotta sense.

I get that you had this focus on bandwidth, because I think it's the main obvious focus of this thread, but there's an expanded context in which these things operate. I'm sure you'd appreciate that if you'd experienced it.


Yeah, you can even go back much further in time if you're not too worried about performance. I've been running a fanless Atom J1900 based mini-PC as a home server for ~8 years nonstop now. It's trivial to build such a system but at least back then the cost was literally 10x to 20x the price of a rPI today. I would guess that even though it's an 8-year old CPU it's probably about 2x as fast as an rPI 4B, and for something that's been chugging along for all this time the cost/depreciation over time vs. a much cheaper rPI isn't really an issue for me.


Like the other commenters, I'm confused: pretty much any game I've played for the past few years supports surround sound, most of them even 7.1.

What I don't understand is why Dolby Atmos is not used much more for games. Xbox One and PC support it but only very few titles use it. PS4 and PS5 don't support it at all for games (only for video content), despite all of Sony's bragging about their dedication to PS5 audio. Dolby Atmos seems perfect for games: for developers because it effortlessly maps audio directly to any 3D position in space, and for users because it scales all the way from headphones to soundbars to full 7.4.2 setups.

I was royally pissed off to learn the PS5 would not natively support Dolby Atmos. I have a full 7.4.1 home-theater setup with height speakers, and movies and Dolby Atmos demos sound absolutely awesome. Yet if I play games the best I can get is 7.1, which is nice, but the height speakers go totally unused. It's probably related to licensing costs, but it is extremely disappointing having waited for the PS5 for so long and not seeing any kind of upgrade to the audio.


Sure, if you watch movies in English you are on the lucky side when it comes to support for high-bitrate Atmos audio. Once you watch localized movies you run out of content quickly. Why pay a licensing fee (Dolby) for your games when the audience is such a niche? It's the same as with supporting Linux for games. I had some hopes for that because Stadia requires Linux ports, but I also expected Stadia to fail on a large scale.


I suspect it's due to cost and licensing to use that particular brand and technology.


Sure, but why does the Xbox One support it then, but not the PS5? And wouldn't it be possible to somehow incorporate the licensing cost in the console by means of a paid software upgrade (like on Windows) or a 'premium' version of the console that has it by default?

As far as I know Dolby Atmos can be seamlessly mapped onto 5.1 or 7.1, so from the developer perspective there should be no effort/cost to provide Dolby Atmos audio. I might be wrong about this, but I assume the licensing cost would be for the playback device and not for the 'right' to bundle an Atmos audio track with your game?


It shouldn't even be necessary. If you look at the DirectSound channel listing from ages and ages ago, I think even in Windows XP, there are height channels listed. But I've never seen a sound card that used them or let you select any higher than 7.1 in the Sound control panel.


I do this with QEMU/KVM with passthrough of an RTX 3090, an NVMe SSD, and one of the onboard USB controllers. Works like a charm, though the VM boot time is very high if you allocate a lot of RAM to it (there's some kind of bottleneck in the Linux kernel when pinning huge amounts of consecutive memory pages while using passthrough; I don't fully understand it, but it's a known problem).

Performance is indistinguishable from native, e.g. I can easily drive the screen at the max 144 Hz refresh rate, G-sync works, etc. I did put some effort in figuring out how to pin CPU threads to cores, optimize for the CPU core topology so Windows only gets cores on the same CCX (it's an AMD Zen 3, before that it was Zen 2), etc. But all of this is documented in many places.

Do note that depending on your motherboard not all of this is possible if the chipset & BIOS do not provide enough IOMMU isolation: you might not be able to isolate a USB controller or a separate NVMe drive, or you might get only half the PCIe lanes for the passthrough GPU if you need to use the other full-lane slot for something like a second GPU.


I would pay good money for a ready-to-use distribution/setup that supports this, including making sure it continues to work after updates


The main problem would be to have to maintain the myriad of known working configurations, since they are all different depending on motherboard, BIOS, GPU, CPU, etc. If you are careful about picking the right parts (and assembling them properly), it's actually really easy to configure on any recent Linux distro, you can just click together the VM using virt-manager if you don't need anything special.

I know I had to jump through a lot of hoops to make it work though; my X470 motherboard didn't isolate the USB controller without a BIOS update, for example, and after that the USB controller exhibited USB FLR (function level reset) problems causing it to hang the VM. This required blacklisting its PCI ID from the Linux kernel and patching the kernel to disable FLR (fortunately these changes were later merged into the mainline kernel). I also had problems with the second GPU: if I plugged it into any slot other than the bottom x1 slot, the motherboard BIOS would reshuffle the IOMMU groups, making it impossible to pass through the NVMe and USB controller, or (if I put it in the second x16 slot) it would halve the PCIe bandwidth to the RTX 3080.

All in all it took me the better part of a weekend to get everything working, but if I had to do it again from scratch and did some research into (in particular) the motherboard and BIOS, I would be able to set everything up again in less than an hour.


I was so disappointed that there is no such thing available that's just plug-and-go. It really feels like this setup is breaking ground for some reason.


>> this is seriously going to change my life.

Now I'm curious. Can you give a small code example of the kind of thing this solves and how it will change your life? ;-)


I constantly use both lambdas and structured bindings; without this feature, I am having to constantly redeclare every single not-a-variable I use in every lambda level and then maintain these lists every time I add (or remove, due to warnings I get) a usage. Here is one of my lambdas:

nest_.Hatch([&, &commit = commit, &issued = issued, &nonce = nonce, &v = v, &r = r, &s = s, &amount = amount, &ratio = ratio, &start = start, &range = range, &funder = funder, &recipient = recipient, &reveal = reveal, &winner = winner]() noexcept { return [=]() noexcept -> task<void> { try {

And like, at least there I am able to redeclare them in a "natural" way... I also tend to hide lambdas inside of macros to let me build new scope constructs, and if a structured binding happens to float across one of those boundaries I am just screwed and have to declare adapter references in the enclosing scope (which is the same number of name repetitions, but I can't reuse the original name and it uses more boilerplate).


Ah I see, yes that's horrible.

It's kind of weird that structured bindings were not captured with [=](){} before, actually. I'm still stuck at C++11 for most of my work so I cannot use structured bindings at all, but I would not have expected to have to write that kind of monstrosity in C++17.
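For reference, here is a minimal sketch of what the fix looks like, assuming the feature under discussion is the C++20 change (P1091/P1381) that lets lambdas capture structured bindings directly; the map and names below are made up purely for illustration:

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, int> prices{{"apple", 3}, {"pear", 5}};
        for (const auto& [name, price] : prices) {
            // C++17: structured bindings could not appear in a capture list, so you
            // had to redeclare them as init-captures: [&name = name, &price = price].
            // C++20: they can be captured by reference (or value) like any other local.
            auto print = [&name, &price]() {
                std::printf("%s costs %d\n", name.c_str(), price);
            };
            print();
        }
    }

As far as I understand, the implicit captures [&] and [=] also pick up structured bindings under the C++20 rules, so capture lists like the one in the parent comment shouldn't be needed anymore just to get at the bindings.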


Out of curiosity, what kind of domain is this?


I work on a "streaming" probabilistic nanopayments system that is used for (initially) multihop VPN-like service from randomly selected providers; it is called Orchid.

https://github.com/OrchidTechnologies/orchid


How did you fall into such a niche? I don't mean that as a pejorative. It just seems so specific. And esoteric to me.


It's a VPN Service that uses cryptocurrency as a means of payment.

What seems really esoteric to me is that the 'Orchid' Ethereum Token has a $737,057,000.00 fully diluted market cap, which I'm struggling to understand: https://etherscan.io/token/0x4575f41308ec1483f3d399aa9a2826d...


I dunno... Brian Fox (the developer of bash) got involved, and he tapped me (someone he has worked with before) as a combination networking and security expert? FWIW, if you describe anything with the technical precision I just did, almost anything will sound "esoteric" ;P.


Most semiconductor production processes, like etching, doping, polishing, etc., are done on the full wafer, not on individual images/fields. So there is nothing to be gained there in terms of production efficiency.

The litho step could in theory be optimized by skipping incomplete fields at the edges, but the reduction in exposure time would be relatively small, especially for smaller designs that fit multiple chips within a single image field. I imagine it would also introduce yield risk because of things like uneven wafer stress & temperature, higher variability in stage move time when stepping edge fields vs center fields, etc.

