Sadly, based on the responses I don’t think many people have read the report. Just read how the essay discusses “exfiltration,” for example, and then look at the three places that term shows up in the NIST report. The content of the report and the essay’s portrayal of it are not the same. Alas, our truncated attention spans these days appear to mean a clickbaity web page will win the eye share over a 70-page technical report.
I don't think the majority of humans ever had the attention span to read and properly digest a paper like the NIST report to make up their minds. Before social media, regular media would tell them what to think. 99.99% of the population isn't going to read that NIST report, no matter what decade we're talking about.
Because it isn't just that one report. Every single day we're trying to make our way in the world, and we do not have the capacity to read the source material of every subject that might be of interest. Humans rely on, and have always relied on, authority figures, media, or some other form of message aggregation to get their news of the world, and they form their opinions from that.
And for the record, in no way is this an endorsement of shallow takes, or of forming strong views on this subject or any other from shallow thinking. I disagree with that as much as you do. I'm just stating that this isn't a new phenomenon.
That does happen: foundations will fund specific research, and universities apply and get it. What is often different is that foundations rarely put out open calls outside the areas they are specifically interested in. That is where government funding tends to be better: it covers many more areas than foundations do. There’s nothing stopping foundations from doing the same, but I haven’t seen it very often, other than a couple of calls here and there. I’ve been a researcher chasing money for decades: I’d love it if foundations would fill this role, but alas, so far they don’t. Plus, the scale doesn’t match: if you added up all the private funding that is available, it’s tiny compared to the federal science budgets.
No. Just google NVIDIA and AdaCore to see how alive Ada is in NVIDIA land. Ada is quite a nice language that more or less anticipated a lot of the trends that safe languages like Rust and friends are now following. SPARK is quite a cool piece of work too. I think the perception of oldness is the biggest obstacle for Ada.
Yeah, I saw that and was tempted to say the same thing. OCaml is alive and well, and SML is still in active use. OCaml has a relatively broad application space, whereas SML is more or less limited to the world of theorem provers and related tools (e.g., PolyML is used to build things like HOL4, and CakeML is a pretty active project in the verified-compilers space that implements a substantial subset of SML and is built atop HOL4). SML is likely regarded as irrelevant by industrial programmers (especially those in the communities that frequent HN), but it's still alive. F# is still alive and kicking too, and that's more or less an OCaml variant.
The RandomAccess (or GUPS) benchmark (see: https://ieeexplore.ieee.org/document/4100365) was designed to measure machines on exactly this kind of workload. In high-performance computing this was important for graph calculations, and it was one of the things the Cray (formerly Tera) MTA machine was particularly good at. I suppose this benchmark wouldn’t be very widely known outside HPC circles.
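For anyone who hasn’t seen it, the core of the benchmark is tiny: a stream of pseudo-random read-modify-write updates to a table far larger than any cache, so throughput is bounded by memory latency rather than FLOPS. A minimal sketch in C, with the caveat that the official benchmark specifies a particular polynomial-based random stream and table-sizing rules; the xorshift generator and TABLE_BITS here are illustrative stand-ins:

    #include <stdint.h>

    #define TABLE_BITS 28                     /* 2^28 uint64_t entries = 2 GiB */
    #define TABLE_SIZE (1ULL << TABLE_BITS)

    /* Toy PRNG; the real benchmark uses a specific polynomial stream. */
    static uint64_t xorshift64(uint64_t *s) {
        *s ^= *s << 13;
        *s ^= *s >> 7;
        *s ^= *s << 17;
        return *s;
    }

    void random_access(uint64_t *table, uint64_t n_updates) {
        uint64_t seed = 0x123456789abcdef0ULL;
        for (uint64_t i = 0; i < n_updates; i++) {
            uint64_t r = xorshift64(&seed);
            /* Each update lands on an essentially random cache line,
             * so caches and hardware prefetchers barely help. */
            table[r & (TABLE_SIZE - 1)] ^= r;
        }
    }

The score is reported in GUPS (giga-updates per second): n_updates / seconds / 10^9. Architectures that hide latency with many hardware thread contexts, like the MTA, do disproportionately well here.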
I worked on the MTA architectures for years, among several other HPC systems, but I don’t remember this particular benchmark. I suspect it was superseded by the Graph500 benchmark, which measures something similar and was introduced only a few years after GUPS.
The HPCS benchmarks predated Graph500. They were talked about at SC (the supercomputing conference) for a few years in the early 2000s but mostly faded into the background. It’s hard to dig up the numbers for the MTA on RandomAccess, but the Eldorado paper from ’05 by Feo and friends (https://dl.acm.org/doi/10.1145/1062261.1062268) mentions it, and you can see the MTA beating the other popular architectures of the time in one of the tables.
Feo was a major MTA stan and proponent, even years later. Honestly, it is probably my favorite computing architecture of all time, despite the weaknesses of the implementation. It was extraordinarily efficient in some contexts. Few people could write properly optimized code for it, though, which was an additional problem.
There were proofs of concept by 2010 showing that the latency-hiding mechanics could be implemented in software on commodity CPUs, which, while not as efficient, had the advantage on cost and performance; that was a death knell for the MTA. A few attempts to revive that style of architecture have come and gone. It is very difficult to compete with the economics of mass-scale commodity silicon.
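I don’t know which mechanism those particular proofs of concept used, but a common software rendition of the idea is to keep a batch of independent accesses in flight and prefetch ahead, emulating in software what the MTA did with hardware thread contexts. A rough sketch, assuming GCC or Clang for __builtin_prefetch; BATCH is an illustrative knob and the loop ignores the tail where n isn’t a multiple of it:

    #include <stdint.h>

    #define BATCH 16  /* illustrative; tune to the machine's memory-level parallelism */

    void batched_updates(uint64_t *table, uint64_t mask,
                         const uint64_t *indices, uint64_t n) {
        for (uint64_t i = 0; i + BATCH <= n; i += BATCH) {
            /* Issue prefetches for the whole batch first, so many
             * cache misses are outstanding at once... */
            for (int j = 0; j < BATCH; j++)
                __builtin_prefetch(&table[indices[i + j] & mask], 1);
            /* ...then perform the updates once the lines are (hopefully) arriving. */
            for (int j = 0; j < BATCH; j++)
                table[indices[i + j] & mask] ^= indices[i + j];
        }
    }

It burns instructions and registers on bookkeeping that the MTA did in hardware, but it runs on commodity parts, which was the economic point.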
I hold out hope that a modern barrel processor will become available at some point but I’m not sanguine about it.
Not all of us fell into that trap! My dissertation was written almost entirely using a default document class and a handful of packages, and only towards the end did I apply the university document style to come into compliance. I had more than enough to do on the subject of the PhD and didn’t have the patience to burn time on typesetting or fiddling with macros.
I’ve found in the decades since then that my most productive co-authors have been the ones who don’t think about typesetting and just use the basics. The ones who obsess over TikZ or fancy macros for source layout and the like get annoying fast.
TikZ is misplaced in this list; it is how you make any kind of vector drawing in LaTeX. It's not the only way, but it is perhaps the best-documented and most expressive one. If you have any such drawings in your work, you can't avoid putting some effort into it. That's not comparable with boxed theorems or fancy headings.
I think the annoyance with TikZ is twofold: (1) it tries to do a really hard thing (create a picture with text in a human-writable way), and (2) it is used infrequently enough that it’s hard to learn through occasional use.
That said, nobody makes you use TikZ; fire up Inkscape and do it WYSIWYG.
I’m guessing this isn’t just to optimize for binary size. If you have the resources to avoid third-party dependencies, you eliminate the burden of having to build a trust case for the third-party supply chain. That is the number one reason we sometimes reimplement things instead of using third-party packages where I work: the risk from dependencies, along with the effort required to establish that we can trust them, is sometimes (not always) greater than the cost of just rebuilding the functionality in house.
That was absolutely not what was said. The way it was phrased indicates it only applies to a subset of projects; there were also weasel words suggesting the number may not actually be quite that high, and AI was not explicitly mentioned, so the figure could easily include a lot of traditionally generated code.
I ran across the dashboard where I work that tracks Copilot usage. According to it, 22% of suggestions are accepted. I assume Microsoft is quoting a similar stat. This is VERY misleading: more often than not the suggestion is trash, but it has one thing in it I want to keep as a reference for looking up something that might actually help me. I accept the suggestion, which inflates that stat, but AI didn’t ultimately write the code that went to production.
I took a glance around this project, and it seems to be really high quality Rust. I would be shocked if it was AI-generated to any significant degree, given my own less-than-impressive results trying to get LLMs to write Rust.
Edit: I see the author isn’t very familiar with Rust, which makes it even more impressive.
How much cost reduction does 30% AI-written code translate to? It's easy to imagine that AI doesn't write the most expensive lines of code: if those 30% of lines are mostly the cheap ones (boilerplate, glue, tests) and cost, say, a third as much per line as average to write by hand, then 0.30 × 1/3 ≈ 0.10, so it might correspond to only a 10% cost reduction.
10% is nothing to scoff at, but I don't think it should factor into the decision to rewrite existing packages or trust third parties if you're very security-minded.
AI writing carries the costs of human orchestration, debugging, and review. Code is now cheap to write, but for there to be a net efficiency gain, those other tasks can't bloat too much.
I use it as a C++ alternative on Linux. We ported a substantial code base from C++ to Swift last year and it works great. Performance is better in some places and comparable to C++ in others, and productivity is definitely improved over the C++ codebase. We didn’t use Rust for this project: after evaluating how that migration would impact the internal design, we decided it wasn’t the right way to go. I think the “Swift is only relevant in the Apple ecosystem” view is inaccurate these days. Swift certainly isn’t the answer to every project, just as Rust or any other language isn’t the universal answer, but it’s worth considering where appropriate.
There is this belief that Swift is not really useful outside of the Apple ecosystem, or is somehow clunky there, and that could not be further from the truth. In fact, having written a few backends in Swift, I can say that writing a Swift backend on Linux was much more ergonomic than what I am used to when writing Swift for iOS.
Can this be aimed at Ollama or some other locally hosted model? It wasn’t clear from the docs, since their config examples seem to presume you want to use a third-party hosted API.