As far as adoption is concerned, I'm not sure it should be that big of a concern.
After all, D is supported by GCC and Clang and is continually being maintained, and if updates stopped coming at some point in the future, anyone who knows a bit of C, Java, or insert-language-here could easily port it to their language of choice.
Meanwhile, its syntax is more expressive than many other compiled languages, the library is feature-rich and fairly tidy, and for me it's been a joy to use.
GCC usually drops frontends if there are no maintainers around, it already happened to gcj, and I am waiting for the same to happen to gccgo any time now, as it has hardly gotten any updates since Go 1.18.
The team is quite small and mostly volunteers, so there is the question of how long Walter Bright can keep at it, and who will keep it going when he passes the torch.
And several releases were delayed due to personal issues, which is understandable in a small open source team, but it is a problem nonetheless.
So, uh, funny story: I didn't know this a few years back. GDC's Phobos was missing the sqlite interface, which meant the dlang onedrive client and some other packages wouldn't compile with it. The situation stayed like that for years: https://forum.dlang.org/thread/doevjetmiwxovecplksr@forum.dl...
I eventually complained that it was easier to argue with Walter about politics on HN than to get a library fixed in his programming language. Fortunately the right people saw it, and it was fixed soon after: https://bugs.gentoo.org/722094
Problems with scaling have been the biggest timewaster in my career:
1. In some large businesses I've worked in, so many people have been hired that some systems and processes have wound up being controlled by entirely different people from the ones who need them. So coordinating with, and waiting on, people who have little to no incentive to do what they're being asked takes up a large part of the working day.
2. In other businesses, a large fraction, or even a large majority, of the employees have had no discernible job except to talk and write about the work performed by the few people doing an actual job. So a lot of time in these businesses would be spent dodging meeting invitations, rejecting grand ideas about revolutionizing the business with AI on the blockchain, saying no to "if you could X, that'd be great", and generally reminding people that they're not in charge.
The great thing about these problems is that you're not very likely to have them in a small startup, but if you decide to grow the organization later, you'll need to be very vigilant about how you scale.
Many projects have taken longer and been more stressful and had worse outcomes than needed. A lot of the work being done hasn't even been intended to deliver any business value, but to provide an opportunity for one or more people to be seen to be doing something. Actual value creation does occasionally take place as well, but more as a happy accident or a side effect than anything else. I'm very glad I'm not a major shareholder in any of these corporations.
You should be able to make it think you have another card:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
The possible values are said to be:
# gfx1030 = "10.3.0"
# gfx900 = "9.0.0"
# gfx906 = "9.0.6"
# gfx908 = "9.0.8"
# gfx90a = "9.0.a"
Telling ROCm to pretend that your RDNA 3 GPU (gfx1102) is an RDNA 2 GPU (gfx1030) is not going to work. The ISAs are not backwards-compatible like that. You might get away with pretending your gfx1102 GPU is a gfx1100 GPU, but even that depends on the code that you're loading not using any gfx1100-specific features. I would generally recommend against using this override at all for RDNA 3 as those ISAs are all slightly different.
In any case, the possible values can be found in the LLVM documentation [1]. I would recommend looking closely at the notes for the generic ISAs, as they highlight the differences between the ISAs (which is important when you're loading code built for one ISA onto a GPU that implements a different ISA).
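If you do experiment with the override anyway, note that it has to be in the environment before the ROCm runtime initializes. A minimal sketch, assuming a ROCm build of PyTorch and a gfx1102 card pretending to be gfx1100 (the "11.0.0" value is my assumption, following the same version-string pattern as the list above):

    import os

    # Must be set before the ROCm runtime starts, i.e. before importing torch.
    # "11.0.0" = pretend to be gfx1100; only safe if the loaded kernels
    # avoid gfx1100-specific features, as noted above.
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = "11.0.0"

    import torch  # assumes a ROCm build of PyTorch

    print(torch.cuda.is_available())      # True if the runtime accepted the GPU
    print(torch.cuda.get_device_name(0))  # which device ROCm thinks it has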
Recompressing an already lossily compressed file is almost guaranteed to produce information loss, whereas storage media is getting cheaper and cheaper over time. An 18TB hard disk is now within the budget of many people, and they're likely to get cheaper still.
So if your purpose is to archive these files because they're worth keeping, buying a bigger disk may make even more sense.
I'm not considering hard disks now, although I have tons of them. I keep multiple copies of those files, but it's a pain in the ass to distribute the same backup to different disks, simply because their read/write rates are too slow. After transferring the files, I run a validation program to make sure they are all intact. These processes take me a week or so, and I have to do this regularly to ensure errors don't accumulate over time. So now I want SSDs, but the price per TB is still 4x that of HDDs.
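For what it's worth, that validation step is easy to script. A rough sketch (the mount paths are made up) that hashes a source tree and compares each file against a copy:

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Hash in 1 MiB chunks so large video files don't get loaded into RAM.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(src: Path, dst: Path) -> None:
        # Report any file that is missing from the copy or differs from the source.
        for f in src.rglob("*"):
            if f.is_file():
                copy = dst / f.relative_to(src)
                if not copy.is_file() or sha256_of(f) != sha256_of(copy):
                    print("MISMATCH:", f.relative_to(src))

    verify(Path("/mnt/source"), Path("/mnt/backup1"))  # hypothetical mounts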
Slight degradation in quality is not my concern, since I ultimately watch them through realtime upscaling tools. But I don't know exactly how H.265 affects the quality of a video.
By making the file smaller, I can
1) distribute it to other disks faster, 2) validate its correctness faster, and 3) set a higher redundancy level, because now I have more free space.
But the problem is whether H.265 will become obsolete before it becomes established infrastructure. You know AV1 is a better algorithm, and companies are pushing it.
Or will H.265 become unavailable in the future due to, I don't know, royalty issues or something like that?
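If you do go the H.265 route, the usual tool is ffmpeg with libx265. A rough sketch of a batch re-encode driven from Python (the path is made up, and the CRF/preset values are illustrative, not recommendations):

    import subprocess
    from pathlib import Path

    def reencode(src: Path, dst: Path, crf: int = 22) -> None:
        subprocess.run([
            "ffmpeg", "-n", "-i", str(src),       # -n: never overwrite outputs
            "-c:v", "libx265", "-crf", str(crf), "-preset", "slow",
            "-c:a", "copy",                       # leave the audio stream alone
            str(dst),
        ], check=True)

    for f in Path("/mnt/source").glob("*.mp4"):   # hypothetical source dir
        reencode(f, f.with_suffix(".mkv"))

Keep in mind the point made upthread, though: this is a lossy-to-lossy re-encode, so whatever quality is lost here is lost for good.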
The "Open Location Code" is often mentioned on Hacker News, but is sadly neither open, nor a location code.
To pick one example, if you go to 0°06'40.6"S 28°56'27.0"E
(-0.111271, 28.940829) in Google Maps, it'll give the Open Location Code "VWQR+F8W Maipi, Democratic Republic of the Congo", or some variation thereof, depending on your local language.
The most significant bytes, "Maipi, Democratic Republic of the Congo", are obviously not a location code, but a place name, and thus cannot be decoded at all.
Moreover, if you go to OpenStreetMap and look up "Maipi", it returns three places in Indonesia, and none in DR Congo. So even using a location service plus the algorithm could land you on the wrong continent.
The "Open Location Code" is essentially only usable as a search key for Google Maps. "Go look it up on Google" isn't a location code, it's advertising.
Binary search and similar forms of successive approximation. They can be used to solve such a wide array of problems given just a minimal amount of information.
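A minimal generic version in Python: binary search over any monotone predicate, which covers sorted-array lookup, "smallest X that works" capacity problems, integer roots, and so on:

    def bisect_first(pred, lo: int, hi: int) -> int:
        # Smallest x in [lo, hi) with pred(x) True; pred must be monotone
        # (False...False True...True). Returns hi if no such x exists.
        while lo < hi:
            mid = (lo + hi) // 2
            if pred(mid):
                hi = mid        # answer is mid or further left
            else:
                lo = mid + 1    # answer is strictly right of mid
        return lo

    # Example: integer square root by successive approximation.
    n = 10**18
    print(bisect_first(lambda x: x * x >= n, 0, n + 1))  # -> 1000000000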