How an Engineering Company Chose to Migrate to D (dlang.org)
210 points by pjmlp on June 20, 2018 | 156 comments


While the article states that other languages were evaluated, it seems as though the project was motivated entirely by the author and the evaluation was just paying lip service to what was basically a foregone conclusion. Don't get me wrong, I like D from the small amount of playing I've done with it -- and maybe it is the best tool for their purposes -- but this doesn't strike me as an impartial evaluation.

The need for nested functions bugged me as well. That seems like an odd deal-breaker and hints at code smell, rather than a genuine technical need. If it's just for the sake of one-to-one transpilation, then I can see the point, but a refactoring exercise seems like a better effort to make than language migration. (Also, don't Rust and Go have closures? Surely that would satisfy this requirement.)


>While the article states that other languages were evaluated, it seems as though the project was motivated entirely by the author and the evaluation was just paying lip service to what was basically a foregone conclusion.

Now you understand how all of these "How we chose this language" writeups work.


Yeah, almost all the time it's the inclination of an individual towards a new language that becomes the driving factor of a migration, rather than an actual shortcoming of the current language.


Except that "the inclination of an individual towards a new language" is because of "the actual shortcoming of current language."


It's much easier for the inclination of a single individual to be driven by the perception of just that one individual than it is when the impetus is a team-level inclination.

I've worked on projects where people had gripes about the codebase and/or language, but they often couldn't agree on which language to move to. So a single person taking over based on their own preference is either a bug or a feature: as a bug, it's ignoring everyone else's preferences and disagreements, and possibly moving to something the team is even less happy with. As a feature, it's making a decision instead of endlessly trying to find a perfect decision, and at least ending up somewhere better off than before.

In this case, it sounds like it could be the positive sort of change, if only because they were starting from a basically dead language. It sounds like without that single person's initial effort, they may not have started any effort to migrate to anything.

But this gives me major pause: "My colleagues will continue to develop in Extended Pascal as usual, and once my transpiler is able to translate all or almost all of it, we will make the switch to D overnight." I'd love to fast-forward to see what happens the days after that... and how long that takes to get to, for that matter!


Actual or imaginary would be a better wording...


I see mostly imaginary. As a back end systems guy for over 20 years, I deal with guys all the time wanting to introduce new tools into the mix that add zero value. There is a reason why *nix tools are still around. There is nothing that Python can do better than awk for grabbing data columns and piping them into some other tool.

There is a reason why COBOL still exists, for example, what with its ability to ensure accuracy out to 38 digits. Nothing else comes close w/o tons of extra crap libraries, questionable code mangling, and TRUST. Banks trust COBOL because it has an almost 60-year history of trust.

When kids get all shiny-eyed over golang or Rust or any other "new" language or tool and think it would be a good fit in the financial arena, I start to get a little nervous.


You never said "C", but perhaps you think there is "a reason" why C still exists too. The only reason ineffective tools like C exist is that there are people at the other extreme from those kids: people who won't take any risk but stick with tried-and-wrong tools.


Nope, there are tons of other reasons.

Huge pool of programmers. Huge pool of libraries. Tons of existing code that should continue to run. Excellent documentation. Very fast compilers. Debuggers and profilers a plenty. Top notch vendor support. Choices of IDEs. Works great in embedded. Predictable.

The alternatives being what? E.g. Rust, a 5-year-old language with a single implementation that is still trying to find its place, and that brings extra baggage to the table?

>Not take any risk but stick with tried and wrong tools.

Engineers don't take risks. You wouldn't want risk-takers building your bridges and planes; why would you want them in your OSes and network infrastructure?


Rust, D or Go could be alternatives, but they have limitations in certain domains and types of projects.

C++ doesn't though, and that's why traditional C projects switch. At least in automotive it seems to be the language of choice.


C still has its place. It's not glamorous, but it works. COBOL has no decent replacements. Yet. Some have tried. Almost all have failed. Old does not mean useless. If it works, then it's not wrong.


Only on UNIX clones and among embedded developers who won't take anything else even at gunpoint.

.NET, Java are good COBOL replacements.

http://www.fujitsu.com/global/products/software/developer-to...

https://www.microfocus.com/products/visual-cobol/


>.NET, Java are good COBOL replacements.

Not even close.

As for the links, those are still about COBOL the language.


I guess you need to inform yourself about migration projects that use those products to bring COBOL codebases to modern platforms, where new features are then written in Java/.NET languages, while the old working code is left as is.


Nobody disputed that new features could be "written in Java/.NET languages".

Nor that COBOL codebases could be bridged and the old working code "left as is" alongside those.

The main objection was with the word "good".


> COBOL has no decent replacements.

Except you have your definition of "good" and I have mine. This statement is meaningless without additional information.


“I wanted to move to this language and here’s the blog post I’m using to tell a story to my boss”


Yeah, it looks like the author started playing around with D and writing a Pascal-to-D transpiler for the migration, then did the evaluation with a "requirements" list that seems heavily weighted towards D and towards their approach to translation (a simple recursive walk of the Pascal parse tree, outputting equivalent D), and then decided to go with what they had already started working on.

Rust does support nested functions that don't close over their environment, and it has a separate syntax for closures, which do; closures can either move or borrow their captured variables. This makes it a bit harder to write a translator that produces good, idiomatic results. You could always use a closure that borrows the closed-over variables, which I think matches the Pascal semantics, but that would mean the code would look a little odd in cases where you had a nested function just to factor out some functionality and didn't need the closure.

  fn main() {
      let a = 10;
  
      fn function() {
          println!("Hi!");
          // println!("a is {}", a)
          // fails with:
          //   can't capture dynamic environment in a fn item
          //   use the `|| { ... }` closure form instead
      }
    
      let borrow_closure = || println!("a is: {}", a);
      let move_closure = move || println!("a is still: {}", a);
    
      function();
      borrow_closure();
      move_closure();
  }


D nested functions are defined exactly like regular functions, including all attributes, parameter types, etc. There's nothing new to learn.

My experience as a language designer is that syntax matters, and leveraging the users' existing knowledge is worthwhile.
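
For instance, a minimal sketch (not from the article), assuming nothing beyond standard D:

    void outer()
    {
        import std.stdio : writeln;

        int total = 0;

        // declared with exactly the same syntax as a module-level function,
        // yet it can read and write the enclosing scope directly
        void add(int x) { total += x; }

        add(3);
        add(4);
        writeln(total); // prints 7
    }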


The biggest problem is that they have engineers who have turned their hands to programming, one would expect without formal training, and so have latched onto various antipatterns without knowing any better, because Pascal allowed them to get away with it, painting themselves into a corner.

He mentioned an awful lot of code that would have to be refactored, and it could be that they are just too far down the rabbit hole to get themselves out.


In this case, I'd look into compiling the existing code into .dll/.so files, and just linking between Extended Pascal and ${NEW_LANGUAGE}. Then, convert the code to ${NEW_LANGUAGE} as needed, if needed.

That solution is probably the lowest cost / safest course... Assuming the Extended Pascal compiler they have supports it.

But I suspect choosing D was a fait accompli.
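
To illustrate, a minimal sketch of the D side of such a bridge, assuming the Extended Pascal compiler can export C-compatible symbols (the function name and signature here are hypothetical):

    // hypothetical routine exported from the legacy Pascal .so/.dll
    extern (C) double hullResistance(double speed, double displacement);

    void main()
    {
        import std.stdio : writeln;

        // new code lives in ${NEW_LANGUAGE}; the proven legacy code keeps running as-is
        writeln(hullResistance(12.5, 3200.0));
    }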


This is the approach I took to modernise Fairway (which is just one component of our suite). It doesn't free us from the limitations of the Extended Pascal compiler. It still works, but we want a way out.


Hello, I assume by your username that you're the author!

Transpiling is normally a little risky. Transpiling with a custom transpiler over ~500KLOC is downright terrifying.

I'd be concerned that the cure is worse than the disease.

I understand the allure of this solution. Bringing 30ish years of code into the modern era with a single piece of code sounds like a huge win. But you owe it to your employer (who seems really gracious and trusting) to ask:

1. Are there edge cases? Programmers are a clever bunch, and you can never be too sure about what features/bugs they've used/abused!

2. How will success of the transpile be determined? I'd use unit tests, but that'd take a lot of unit tests!

3. What % of that code path is critical? Can you get away with ensuring the functionality of a subset of the codebase? If so, could you just manually port that?

4. "Blaming the compiler" will be a reasonable excuse for any problem over the next year or two after deployment. I guarantee you'll be spending time proving that other peoples' mistakes are not your fault. How much flak are you and your boss going to take for this?

5. Since the code can be linked, might a more popular language (that will be known by a wider community) with a slow porting plan be reasonable?

6. Python is commonly used in scientific computing. Why does this case preclude it? If you are doing mostly numeric processing, that gets passed directly to the hardware anyway!

You have been given a rare chance to pay down technical debt. From what I understand of your solution, I believe the bespoke transpiler and semi-known language will ultimately increase it. And I know you love D, but if I were brought in as a consultant, I'd be recommending the .dll/.so solution with Python. If something is really time-sensitive, use Python bindings to work with other languages (avoiding premature optimization).

Even if I or my fellow posters made a point that resonated, it might be too late for such discussions. I can't weigh the political costs of changing strategy now. So I'll wish you good luck, and hope that Extended Pascal doesn't have too many sharp edges for you to contend with. I hope to see a postmortem a few years from now detailing the outcome of this plan.


1. Unconsidered edge cases? We'll have to find out... We can always change the _source_ code to eliminate known unhandled edge cases.

2. We ourselves are heavy users of our software, and are likely to be the first to discover "oddities". More serious are subtle numeric differences, and we'll have to create tests against those. One approach to validating the transpiler is instrumenting the code. We could write a Pascal-to-Pascal transpiler that inserts extra code, for example logging the return value of every function. If we do the same in the Pascal-to-D transpiler and do an identical run, we can compare the logs (see the sketch at the end of this comment). And variations of that approach.

3. If a small percentage is disproportionately hard to translate mechanically, we can isolate that and translate it manually.

4. I fail to see what you are getting at.

5. Of course we can link code. We think the community is wide enough for us.

6. You mean it gets passed to C.

Thanks for the suggestion. We have a working dual language situation with C++, we don't see that as an attractive option for our other tools. Why would Python be better than D?
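
As a sketch of the instrumentation idea in point 2, assuming the instrumenting pass simply wraps return values in a log call, the generated D output might look roughly like this (a hypothetical example, not actual transpiler output):

    import std.stdio : writeln;

    double resistance(double v)
    {
        immutable result = 0.5 * v * v;
        writeln("resistance returned ", result); // inserted log line to diff between runs
        return result;
    }

    void main() { resistance(3.0); }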


I think point one is the concern of most people on this page. The most I've ever transpiled is 40kloc (and that was for an R&D project; I'd think twice before doing it in production). ~500kloc is a different matter.

What % of the codebase are you familiar with? What % of the code is known by your team? If you had multiple, conflicting errors pop up at once, how much effort would it take to isolate them (not fix, just identify)? What about in a section where no one has worked in a long time (assuming such a thing exists)?

Like any old codebase, I'm sure there are functions that have "just worked" for years (or in this case, decades). With a single action, every piece of code that's been proven solid over time is about to be suspect.

For point two, I think you have a lot more confidence in your ability to detect problems than I would in your shoes.

For point three, I'd much rather manually port necessary code piece by piece over a period of years. Unit testing like crazy while doing so. Thanks to linking, this is an option... One I'd recommend.

Point four is similar to majormajor's comment about the aftermath of the switch. I'm attempting to project beyond the immediate... This port could become a whipping boy... That could bleed into political fallout. No matter what technology you introduce, people will complain. Doubly so if problems arise. Adding a custom transpiler on top of that increases the risk significantly. The easiest transition path to a ${NEW_LANGUAGE} is through linking.

I'm not sure about Python vs. D. My instincts are towards the former, especially if I'm supporting SEM (STEM minus the T) grads as programmers. I also wouldn't easily write off interpreted languages (I'd rather work with them when possible).

I think you have your heart set on this path. But losing the binary safety net in favour of a bespoke transpiler is not a gamble I'd make with large amounts of production code...

Good luck!


Python has two big issues: performance and dynamic typing.

The solution of using a HLL and dropping down to C for performance strikes me as overly optimistic: the only language where one can legitimately claim that it's pleasant to use C for extra performance is Objective-C. Anything else will be different degrees of painful.

I didn't get the solution of using a dynamic lib and calling/extending that from Python. One would end up with 500kloc of Pascal and a Python one-liner. All the work still happens in Pascal.


I think there is an understanding that the whole situation is less than ideal. I'm proposing a low risk method to transition from the current situation to a healthy one over the long term.

Having binary compatibility with the old version would be a great safety net while transitioning to another language. The main weakness of the author's plan, as the article explains it, is that they will be dealing with a lot of newly generated code at once (as well as an unknown number of unknowns). The plan would be fine if it was only ~10kloc, but ~500kloc is a very different beast!

Come to think of it... It would be wise to have a "Plan B" still written in D, but linked to the binary rather than transpiled. That way, if anything goes wrong after deployment, users can temporarily use that rather than revert to Extended Pascal.

There probably isn't an easy solution for this problem. But maybe the author has found a silver bullet.

I brought up Python because it sounds like SARC is adjacent to scientific computing, and it is likely they'd be able to pick up physics/math/engineering majors that are already familiar with Python. Plus, I bet they'd be able to use some really good Python libraries (possibly replacing some of that 500kloc custom code with open source libraries).

The article makes it clear that naval architects (who are not expected to learn C++) are programming, so I doubt high performance computing is required in 100% of the code (at least I hope not). If there are genuine performance issues, tools alone won't save you, you need to hire programmers that know how to optimize (and they'll be fine writing C/C++ libraries with interfaces to Python).

I strongly suspect the justification that interpreted languages don't meet performance requirements is more ideological than empirical.

Besides, it's likely the existing binary will run faster than its transpiled descendant!


> I strongly suspect the justification that interpreted languages don't meet performance requirements is more ideological than empirical.

Nope, it is based on real-life experience with production code, and on seeing everything crumble with high CPU usage for little work being delivered.

Any programming language without a JIT or AOT compiler as part of its default toolchain is not worth using for anything besides scripting.

> Besides, it's likely the existing binary will run faster than its transpiled descendant!

Given how old Extended Pascal compilers are and that D has three backends, two of them based on GCC and LLVM respectively, I pretty much doubt it.


This approach has always been the one that makes the most sense to me, but yet it also seems to be rare. I couldn't explain why.


Very strange; in a hardcore technical programming context I would have thought they would have used Fortran, which is still being actively developed.

I have never seen a reasoned argument for why C++ is better than Fortran for technical programming.


Given they started in the '80s, Pascal probably had fewer corners to paint oneself into than many languages of the time (and probably of today too).


Indeed!


To be fair, I work with a large number of formally trained engineers who regularly commit far worse antipattern and data-hygiene infractions, and I've had the fortune of refactoring their messes.

I don't think it's solely dependent on having been afforded formal training.

Knowing enough to know that you might not know, and just might need to measure or look something up, and caring enough to do so, is closer to the core of it.


I have a hunch the majority of domain-specific engineering products are at least initially implemented by engineers of the domain and not by software engineers.


> If it's just for the sake of one-to-one transpilation, then I can see the point, but a refactoring exercise seems like a better effort to make than language migration.

I think the author makes it clear that direct conversion from Pascal is the primary goal. I'm guessing that once the codebase is fully transpiled to D, refactoring would be the logical next step.


Confirmed.


> (Also, don't Rust and Go have closures? Surely that would satisfy this requirement.)

Indeed they do. It's completely unclear why they don't satisfy the OP's requirement.


To be frank: I have never used Rust or Go. Nevertheless, I am quite sure closures and goroutines would complicate the translation due to their different syntax from ordinary functions. The same argument is being discussed on reddit: https://www.reddit.com/r/programming/comments/8si75b/how_an_...


Speaking at least for Go, the "different syntax" between closures and functions is that for closures, you don't give a function name. The only way they could be any more similar is if closures did take a function name, which is generally a bad idea because there's no single behavior I've seen that captures what programmers expect to happen in that case. (In general, for the code

    x = func blah() { ... }
there is a conflict between "x is the only new binding we should have as a result of executing that line" and "the function name 'blah' should do some sort of binding".)

If that's enough to throw off your translation process, you've got bigger problems....

(Cards on the table, D is probably a good choice and I'm not advocating for Go here.)


> I am quite sure closures and goroutines would complicate the translation due to different syntax from ordinary functions.

It's unclear where the root of that confidence lies. The translation is just as mechanical in either language as it would be otherwise: see an inner function, replace it with the one thing that can capture variables. Speaking for Rust, it would actually be even easier to do the translation using closures, because Rust closures can have their signatures inferred whereas Rust functions cannot.


I can see at least two issues. I'm sure you're aware of these, but:

- Rust closures don't really support recursion, at least without serious hacks.

- Rust closures borrow the variables they reference as soon as the closure is created; you can never write to those variables again as long as the closure is alive, and if the closure mutates the variable (resulting in a mutable borrow), you can't read from them either, outside the closure itself.

The latter isn't as a big a deal for some other use cases for closures, but if you want to use them to emulate nested functions, you're probably creating the closure at the beginning of the outer function and keeping it in scope for the whole function; this means that what you can do with mutable variables is severely limited.

Both issues could be worked around if the mechanical translator liberally sprinkled the code with Cell/RefCell – which would also be necessary to deal with pointers in the source language. But the result would be highly unidiomatic and verbose Rust code.

(In any case, I'm not sure I'd have recommended Rust to OP as they claim to put a high priority on fast compile times, but that's a separate issue.)


From: https://www.reddit.com/r/programming/comments/8si75b/comment...

"In Rust, only lambdas (unnamed functions) can be closures".

The OP's post seems to show they didn't spend much time understanding whether Go closures could serve their need. Although from my own superficial understanding, D is "closer" to Pascal than Go.


>The need for nested functions bugged me as well. That seems like an odd deal-breaker and hints at code smell

Nested functions were a very common pattern in languages like Pascal, Modula, or Oberon. They are equivalent to code blocks inside a function in C.


The equivalent in C is code blocks connected with gotos. In my conversions of C code to D, I'm able to replace these gotos with nested functions, which makes the code significantly easier to read.
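
A contrived sketch of the pattern (not from the actual conversions): where the C original jumps to a shared error label, the D version factors that into a nested function:

    int parse(const(char)[] input)
    {
        // replaces the C idiom of `goto fail;` ... `fail: report(); return -1;`
        int fail(string reason)
        {
            import std.stdio : stderr;
            stderr.writeln("parse error: ", reason);
            return -1;
        }

        if (input.length == 0)
            return fail("empty input");
        if (input[0] != '{')
            return fail("expected '{'");
        return 0;
    }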


Indeed, it serves encapsulation.


Given the Pascal history and even consideration of Ada, with the focus on expressiveness and ergonomics I'm surprised that Nim didn't make the shortlist. Nim even has an official Pascal-->Nim converter (called pas2nim) as the very early versions of Nim before bootstrapping were in Delphi.


Framing closures as a “requirement” is pretty strange. Depending on the context they can be an ergonomic nice-to-have, but it’s not like you can’t express a program without them.


>Depending on the context they can be an ergonomic nice-to-have, but it’s not like you can’t express a program without them.

When talking about Turing-complete languages, everything is an ergonomic nice-to-have. All language features aspire to be so nice to have that you come back around and call them a necessity. But you can still express a program without them, unless you strip out so much as to lose Turing completeness.


Right, which is why "requirement" is a silly framing that ascribes a sort of objective engineering necessity, embedded in the problem, to what is actually a subjective aesthetic choice between solutions.

(I'm generally pretty skeptical of the whole "write down your requirements with no awareness of available solutions and then choose a solution that meets them" approach to decision-making).


Why do you find it strange? To me, "requirement" signals that there's some sort of pain threshold for writing/maintaining code that the writer isn't willing to go above. Is there anything weird in that?

> (I'm generally pretty skeptical of the whole "write down your requirements with no awareness of available solutions and then choose a solution that meets them" approach to decision-making).

I also don't understand this. If there's a solution that meets all of the requirements, what does it matter that there are other solutions? I mean, yeah, you may be doing a local rather than a global optimization, but as long as it meets the requirements it's probably not a big deal. (The only exception is if the requirements are so broad as to be meaningless, but as long as they're reasonably specific, I don't see the problem.)


A language with only goto and call statements can express any program, too. It's not unreasonable to require higher level programming constructs, because such make programming faster and less prone to error.


If you have a half-million-line codebase that you need to migrate, I would say that every heavily used Extended Pascal feature is required to be easily translatable. If it's merely possible but the translation is not direct or easy, then the maintainers may be better off staying with Extended Pascal than maintaining half a million lines of uglified code.


Well, Microsoft deemed them still relevant enough to add to C# 7.


> Nowadays, whenever D is publicly evaluated, the younger languages Go and Rust are often brought up as alternatives. Here, we need not go into an in-depth comparison of these languages because both Rust and Go lack one feature that we rely on heavily: nested functions with access to variables in their enclosing scope.

Maybe I've misunderstood, but aren't Rust's closures [1] exactly what the author is describing? Specifically, you can capture variables from the environment into functions.

[1] https://doc.rust-lang.org/book/first-edition/closures.html


Closures are, but functions defined using "fn" syntax cannot capture from the environment. This is a useful feature, and I can see why people would want it, but it was considered ultimately not consistent with Rust's aims to be excellent at the low level. See [1] for more discussion of the tradeoffs and why Rust chose not to include the feature.

[1] https://users.rust-lang.org/t/inner-functions-not-closed-ove...


I believe the story here is the same in C++. Functions are just a pointer to some compiled code. Closures are an object that represents all the captured variables, in addition to the code. That object has a size, and a destructor, and the variables it's holding might have pointer lifetimes that you need to worry about. In Rust, quite a lot of type system machinery gets invoked when you create a closure, to make sure it doesn't outlive any of the references it's closing over.


Yes, it's similar to C++. A Rust `fn()` type is similar to a C++ `(*foo)()` type and at the low level is just a pointer to code. A Rust `Fn` trait object (of which there are several variants) is roughly similar to a C++ std::function and is "fat" in that it can contain data as well as code. Most higher level languages don't make the distinction, buying conceptual simplicity at the cost of potential optimizations when a simple code pointer will do.


D nested functions don't 'capture' variables from their enclosing scope. They can access them just like other code in the enclosing function.

This is achieved by adding a hidden parameter that is a pointer to the enclosing function's stack frame. This forms a threaded list so variables in nested enclosing functions can be accessed by walking that list.

This is the same way Pascal does it.

It also means that taking the address of a nested function produces a pair - a pointer to the function, and this hidden parameter to the enclosing stack frame.
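
In D terms that pair is a delegate; a minimal sketch (my example, not from the comment above):

    void outer()
    {
        import std.stdio : writeln;

        int x = 42;

        void nested() { writeln(x); } // reads x through the hidden frame pointer

        // taking the address yields a delegate: a code pointer plus the frame pointer
        void delegate() dg = &nested;
        dg(); // prints 42
    }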


It's the same way that MetaWare implemented nested functions in its C/C++ compiler in the 1980s and 1990s, too. On top of that, MetaWare built iterators and an iterator-driven for mechanism. The yield() in an iterator was actually a nested function callback from the iterator to the body of the invoking for statement, which was turned into a nested function under the covers.

* http://jdebp.info./FGA/metaware-iterator-driven-for.html

In theory, the extra frame pointer mechanisms of the ENTER and LEAVE instructions in the Intel ISA were there for supporting this kind of nested function. MetaWare High C/C++ did not use them, preferring to use a variant calling convention where (if I recall correctly) ECX contained the extra hidden parameter.


The ENTER and LEAVE instructions only supported dynamic linking (pointer to caller's stack frame) not static linking (pointer to the enclosing function's stack frame).


That is not in fact true.

ENTER copies M words from the caller's stack frame. The pointer passed by MetaWare High C/C++ in the register could equally well have been placed at the relevant position in the caller's stack frame, just as other arguments are, and an ENTER N,1 used as part of the perilogue.

The only consideration here is speed of the ENTER instruction versus having the hidden pointer in a register, not that this sort of nested function implementation is somehow impossible with ENTER. It's just another case of ENTER is slow and other instructions do the same thing better.

* http://jdebp.info./FGA/function-perilogues.html#Standardx86

(Part of that consideration is making non-nested functions usable via full function pointers.)


You're right.


That's usually what "capture" means when talking about closures.


At least in Rust, capture granularity is per-variable, so I think closures don't contain a stack frame pointer? Though, I'm not sure why this would matter to a user evaluating Rust's closures vs. D's nested fns?


It’s a pointer to the environment, and a pointer to the function. So yes, not a stack frame pointer.

The environment is a struct containing what is captured; so for example, it would contain an &T if your closure captures a T by reference. You don’t need to walk a set of stack pointers to find it, just follow the reference directly there.


That's distinctly different from D's, which has a pointer to the enclosing stack frame.

If it is the enclosing function, it's just a single pointer to it. The "walking" comes from if you're accessing the enclosing function's enclosing function's frame.


Makes perfect sense, thanks. I'm now wondering about the trade-offs of these two approaches...


Go also certainly supports this. Here's a playground example: https://play.golang.org/p/_BLguocGBRn


Does Go take a copy or a reference?


The following example will provide your answer:

https://play.golang.org/p/NeGuDahW2yP


The captured context persists between calls to the function, as the sibling example demonstrates, or this one:

https://play.golang.org/p/drfx_3g84aE


The interaction between borrow-checking and closures restricts what else you can do while you have a closure in scope that closes over some variables, so (like many rust features) they aren't a drop-in replacement for their counterpart in other langs.


The only difference I see is if you need recursive local functions, which is harder (but not impossible) to do with anonymous closures.


Recursive nested functions in D just work - the hidden parameter to the enclosing frame forms what's called a "static link". A "dynamic link" would be the usual pointer to the previous stack frame (not the enclosing stack frame).
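
A trivial sketch of that:

    int factorialOf(int n)
    {
        // the nested function can call itself by name; the static link is
        // passed along implicitly on each recursive call
        int fact(int k) { return k <= 1 ? 1 : k * fact(k - 1); }
        return fact(n);
    }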


Maybe "access to variables" means write access. I know that in Java variables referenced by a lambda need to be final.


Here's a tweak to the Go example above that demonstrates writing: https://play.golang.org/p/H9vCt93Qz2B

Here's a Rust version that does the same (without heap allocation!): https://play.rust-lang.org/?gist=bb5406062970b62088d58fb8a9f...


In Rust's case, this becomes far less practical as soon as the closure's return type does not implement the Copy trait.


Here's an example that does something similar with Vec's: https://play.rust-lang.org/?gist=7a4ed9098a53831195c5189ba9c...

I think the real nastiness happens if you want to have more than one closure over the same variables, when those variables aren't Copy. Because then when you try to `move` them in, the compiler yells at you. At that point you have to resort to putting them in an `Arc<Mutex>` or something like that. But that's par for the course with Rust: the compiler is preventing you from creating multiple mutable references to the same object.


Write access works. https://play.rust-lang.org/?gist=573c941b91657bb0dbac80c33d0...

Once "non-lexical lifetimes" lands, the inner scope won't be necessary. That may have tripped the OP up.


I’m surprised to see a lot of criticism in the comments.

Generally, I prefer mainstream languages myself; that's why I mostly code in C++ and C#. But "generally" is the key here. For some projects, or some parts of them, picking a non-mainstream language can be a huge win in terms of productivity.

I never used D in production, but I can remember several occasions where I picked unusual languages, both specialized and general-purpose ones. Among others, I've used XSLT, sed, awk, VBScript, VB.NET, R, PowerShell, AutoLISP, and MATLAB.

Theoretically, all modern programming languages are equivalent because they are Turing-complete. Practically, for a particular problem, there can be orders-of-magnitude differences in development time between them. The reasons include the availability of specific libraries, integration with other code, be it third-party or legacy, and specific runtime features.


> I’m surprised to see a lot of criticism in the comments.

I'm not surprised to see criticism of D on HN. While reading HN comments on posts related to D, I've always found pro-Rust/Go comments.


There's a strictly adhered to rule across Reddit/HN that every D article must consist primarily of people asking why the author did not use Rust/Go instead.


This article seems to ignore two of the most important reasons to choose one language over another. Specifically:

1) Expressiveness in the given problem domain. Some languages are really great in particular niches.

2) Availability of devs. The ability to hire and retain a really great team makes or breaks projects, and while a smart, motivated dev can learn any language, it's definitely a factor in their choice to work or not work with you. If you're using a niche language, it will greatly affect people's ability to use external resources (Stack Overflow etc.) when they hit blockers, and some people will be reluctant to railroad their career into a tech that doesn't appear to be headed for the mainstream. On the flipside, if you're lucky you may be able to get really passionate devotees of the language to work for you, just for the chance to use the language where they might not otherwise have been able to.


3) Tooling, ecosystem, and library availability. Picking language X over Y is unwise if it means spending a bunch of time writing the libraries you need, even if X is ultimately more expressive.


This, it turns out, is ultimately the most important thing among the most important things. I invested quite a lot of effort and time in D, back when D1 was around and D2 was a newsgroup post. D1 was everything I wanted out of a better C, while D2 looked like it ought to be a better C++. Two years into heavy D programming, D2-vs-D1 and Tango-vs-Phobos aside, most of my time was spent hunting for things, or spending weeks re-inventing things I could have had in minutes with a more popular language. Now that I'm older and hopefully wiser, I just go for my standard C toolbelt or, more often, Python, and am done with whatever I wanted to do without 'the hunt' (often for unmaintained things, or things outside my knowledge scope, so I can't even re-implement them).

With all that said... Lisp's siren call is still out there, and experience tells me it will be the same. (As it was once before.)


With the existence of dpp[1], you can essentially use any existing C or C++ library by simply including its header file.

1: https://github.com/atilaneves/dpp


While interesting, glancing through the readme/limitations just furthers my (view)point based on experience.

Since then, I do by and large dive into new languages, but I don't use them. What I do is bring back home some of the practices that I think are of benefit to me.


This is definitely key. One startup I worked for was messing around with live voice capture and processing, voice recognition, etc. By and large, the libraries we had access to that could help do tiny manipulations, cutting, and processing of the waveform data were primarily C. We wrote most of our code in C++ because we could call C functions and use header files directly.

This was 15 years ago, but I honestly don't know what language I would use today. It's funny, but sometimes binary data handling isn't a particular language's strong suit simply because it doesn't have the primitives to handle it, or if it does, the APIs are awkward or unnecessarily verbose. It's kinda neat that in C/C++ you can take a 1MB byte buffer and pass it to a function starting at 30% in. In other languages it's like you have to pass in the whole thing plus another parameter to tell it where to start. I get that this is why C/C++ is in the situation it's in (security; that's why I don't use C/C++ to handle web requests), but sometimes tasks need that.


That is how I migrated from Turbo Pascal to C++, around 1994, only touching C when required to do so.

I could make use of a saner type system and nice frameworks (Turbo Vision, OWL), while being able to painlessly create safe wrappers around C libraries.


Exactly; that's why Fortran still has so many advantages for CFD.


0) Tool satisfaction

> motivated dev can learn any language

The key word is "motivated". That's why I'm an expert user of C++98 and D but I can't learn Go, Python, C++11, etc.

> able to get really passionate devotees of the language

That.


>I can't learn Go, Python

I lol'd at this


I sometimes think people use "availability of devs" to mean "cheap, expendable service". If you're working in an industry where you bank more on hitting the ground running than on retaining developers, then this makes sense and is totally valid.

But I've found that lots of places seem to be focused on whether you know X rather than on whether you're a well-versed developer who enjoys growth. And it feels more and more like the reason is that they don't expect anyone to stay there for more than a few years tops.


Maybe I am too timid, but the closing words, saying that once the transpiler is ready they could switch overnight, seem a bit optimistic.

I feel that splitting the project as much as possible into chunks/components and then replacing them one by one sounds more realistic (with an appropriate test harness). Maybe that is the plan, but it is not touched on in the post.

And then there is an issue that maybe isn't that bad, but you are going to have a hard time comparing old_codebase with new_codebase once you've switched. If you have to maintain old branches, porting fixes to recent branches will be more time-consuming than simply doing a merge.


There is precedent for this, even within D. The DMD compiler was originally written in C, and someone went to the effort to write a C to D transpiler, worked on it until it was flawless, then ran it overnight and did the transition all at once: https://github.com/dlang/dmd/pull/4923

I think the Go compiler did something similar.

It certainly requires iteration of continuously attempting the translation until it works, and fixing bugs either in the transpiler or in the original codebase. But once everything seems to be in order, you can do the transition all at once.


The original c2go Go transpiler used to transform the original C compiler to Go wasn't able to produce identically-behaving code, last I heard. It required manual editing. It's no longer maintained, but the fork [1] is much more ambitious, and aims to produce compilable, working programs.

[1] https://github.com/elliotchance/c2go


Daniel Murphy is the one who wrote the translator.


I think "a bit optimistic" doesn't go far enough. Even some excellent transpilers need manual intervention, and most transpilers I've used have weird edge cases that you can only catch through thorough unit testing.

It sounds like the author has never transpiled a large code base into D before.


For real. It is hard enough to port a project between compilers for the _same_ language, let alone transpiling to a different language.


I would love to be able to use D in my day job. It's a fantastic language that is both powerful and fun to work with. It's great to see companies embracing it.


What is your day job?


I work in a small team building APIs with PHP for the real estate industry. In my last job I did manage to use D for a "sequence mining" algorithm that looked for problematic patterns in web user behaviour. PHP would have been waaay too slow for the quantity of data that needed to be evaluated. That was honestly one of the most enjoyable things I've ever coded.

My hobby project languages of choice are currently D and Kotlin. I prefer the semantics and flexibility of D, but Kotlin has the backing of the tried and tested Java ecosystem, which is hard to beat when you just want to get down to it. Also, web servers in Java are vastly superior at the moment. Vert.x is so very fast.


A recent article I wrote about converting C code to D:

https://dlang.org/blog/2018/06/11/dasbetterc-converting-make...


> After this initial pruning, three languages remained on our shortlist: Free Pascal, Ada and D.

I'm really sad to see that the author didn't even consider the Nim programming language. They could have saved themselves a LOT of work: Nim's original compiler was written in Pascal and then translated to Nim using a `pas2nim` tool that is freely available on GitHub[1].

Out of Go, Rust and D, Nim is most similar to Pascal by a long shot. Did the author simply miss Nim completely or did they dismiss it for another reason? It seems to me like it would be absolutely perfect for their project.

1 - https://github.com/nim-lang/pas2nim


I would have ported the application to Free Pascal 15 years ago when their Extended Pascal compiler vendor went out of business. That's a long time to wait! :)

I'm curious what manual translation and rewriting would be necessary when moving to Free Pascal compared to their Pascal2D transpiler and compatibility library.


D has support for nested functions:

I love nested functions. It's good when you need a quick local function, and don't want to expose it to the rest of the code base.

But, how do you systematically unit test nested functions?


"But, how do you systematically unit test nested functions?"

You don't. You test the enclosing function. The nested function is just an implementation detail like any other line of code in that function.


> But, how do you systematically unit test nested functions?

Why, with nested unittests, of course! Just put them in a nested type first:

  void main()
  {
      static int nested(int x)
      {
          return x * 2;
      }

      struct Test
      {
          unittest
          {
              assert(nested(2) == 4);
          }
      }
  }
Well, OK, that's more of a hack than a feature, and you obviously can't test non-static nested functions (as there'd be no way to give values to the outer function's frame variables). But you asked :)


> But, how do you systematically unit test nested functions?

You expose it.

If you want to test a local nested function, you test the enclosing function.


Why would you need to? If you unit test the top level function then you're implicitly testing anything it does, e.g., use of the local function.


Really cool to see companies willing to migrate to a more modern language. I'm part of a team working on C to Rust migration tools (partly inspired by the Corrode translator).

Shameless plug/pre-alpha demo: https://www.c2rust.com

Edit: missing word.


Your SSL certificate works in Chrome, but not in my Safari 11.1.1. SSL Shopper (not sure how reliable this is) also shows warnings.

https://www.sslshopper.com/ssl-checker.html


Thanks so much for letting me know; should be fixed now.

Edit: reflect resolved status of issue.


Go does have closure support: you can assign functions to normal variables and move them around, just like any other primitive data type.

Variables declared earlier in the outer function are available to its inner functions.


Yes, I believe both Rust and certainly Go support closures that can capture variables from the enclosing scope.

Unless there is an obvious example of where this breaks down in Go or Rust, I'd certainly like to know how.


Not directly related, but maybe the lack of generics in Go was a dealbreaker?

Also, I imagine the Qt bindings are better in D.


Rust, D, and Go all pass as a "better C++".

To me, the evaluation comes down solely to ecosystem vs. ecosystem.

There D is ahead, not so much by number of features as by lack of showstopper flaws.


Go's lack of generics makes it a strange choice to replace C++, philosophically rather than functionally.


Personally/syntactically, I've found that coding in D is much more pleasant than C++: a consistent '.' operator for the equivalent '.', '->', and '::' in C++, template metaprogramming for humans, modules, UFCS, etc.

But a killer feature that's available in C++ and not in D is the pure RAII idiom. Sure, D does scoped destruction (via a struct's destructor), but it is in no way equal to the constructor+destructor combination in C++. Structs in D don't have a default constructor, so RAII in D is just incomplete.

Automem[1] is a boon for D, but it is useful only for heap memory resources, not for all kinds of resources (like locks, etc).

On the other hand, D has scope(exit)/scope(failure), which is not available in C++.

[1] https://dlang.org/blog/2017/04/28/automem-hands-free-raii-fo...
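
For what it's worth, a runnable sketch of those scope guards:

    import std.stdio : writeln;

    void update()
    {
        writeln("acquire");
        scope (exit) writeln("release");     // runs on any exit path
        scope (failure) writeln("rollback"); // runs only if an exception escapes

        writeln("work");
    }

    void main() { update(); } // prints: acquire, work, release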


You can disable the default constructor on structs with `@disable this();`. Then the only proper way to initialize the struct is with its parameterized constructor.
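
A sketch of that pattern (the struct and field names are made up):

    struct Resource
    {
        private int handle = -1;

        @disable this();            // forbid default construction
        this(int h) { handle = h; } // acquisition must go through here

        ~this()
        {
            if (handle != -1)
                handle = -1; // release the real resource here
        }
    }

    void main()
    {
        // Resource r;          // error: default construction is disabled
        auto r = Resource(42);  // OK; destructor runs at scope exit
    }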


I wouldn't say Go is at all a better C++. It has very different stated goals and a different philosophy. As for D and Rust, they improve on C++ in very different ways; just because they're both improvements on C++ doesn't mean they're the same. For example:

* D has generally better metaprogramming; however, Rust has real macros

* D has a garbage collector and defaults to code being memory-unsafe. Rust uses ownership semantics and defaults to safe code.

* Rust has only an LLVM-based compiler (although possibly moving to a self-hosted one soon?). D has a self-hosted, GCC-based, and LLVM-based compiler.


I don't understand the "D has generally better metaprogramming; however, Rust has real macros" statement. Or do you mean that even though Rust allows you to work on the AST, it's too clunky compared to D string mixins?


I don't mean the string mixins, I mean more the template metaprogramming.


I think you've got the wrong idea about "self-hosted". That just means the compiler is written in the language itself. Rustc is written in Rust, so it is self-hosted (even if it has an LLVM backend). And likewise, all three D implementations are written in D, despite having either baked-in code generation, a GCC backend, or an LLVM backend. You can check the repos on GitHub: all three of DMD, LDC, and GDC are written in D, so they are self-hosted.


I would add Ada and Swift as well.


> [Go lacks] nested functions with access to variables in their enclosing scope

Surely Go has this?

    func main() {
        a := 1
        func() {
            b := a
            g := func() {
                c := b
                fmt.Println("c is", c)
                a = 7
            }
            g()
        }()
        fmt.Println("a is now", a)
    }

https://play.golang.org/p/zmWwCNm_PuU


I wonder at what stage the companies are where people have the freedom to do these types of explorations. I barely have time to work on core components of my system at a deep level; I usually have to leverage existing work and move on.


Very impressive and ambitious project.


How do they plan to solve the hiring problem? I am curious.


If you mean "hiring D programmers", I think it's somewhat of a nonissue. AFAIK D isn't particularly esoteric, so any new hire should be able to get up to speed quickly.


I'm not so sure about that. For the basics that's definitely true, and no doubt if you're a C++ programmer then it will be an easier transition, but if you're coming from a higher level language like Java, C#, PHP etc then there's a lot to get your head around. That said, it's absolutely worth it :-)


I come from a Perl/JS background and find D quite easy to pick up and be productive in, as opposed to C, C++, or Rust. With Rust and C I always find myself fighting the compiler.


> it will possibly take a year, and probably longer, to migrate to D

The article title is misleading. They still have not migrated.


>How an Engineering Company Chose to Migrate to D

I mean, they have chosen to migrate. That doesn't imply they're done with the implementation.


... and went bankrupt because of not being able to find engineers.


Patently false, and a disservice to software engineering.

There are many companies (e.g. Weka.IO, Sociomantic, etc.) that hire C++ developers and have them happily write D code. I did work at Weka.IO for a while and never met a single D programmer that was unhappy.

Ali Çehreli


...which is why we’re all still programming COBOL, Fortran, and LISP.


Nothing wrong with the languages you mentioned; they do what they do best. COBOL and Fortran are both fantastic and nothing yet truly replaces them. COBOL is accurate out to 38 digits, which is amazing for such an "old" language. Banks still like their COBOL and it just works. No stupid "libraries and/or frameworks of the month" to worry about, it compiles cleanly every time, and it's easy to write and maintain. Modern doesn't always mean best. I still write tons of stuff in sh. If it's under 100 lines, it's shell, awk, or similar. COBOL and Fortran still have tons of life left in them. Every time someone undertakes a massive project to replace COBOL in the financial sector (usually driven by a latte-sipping hipster and his recently graduated ilk), it goes pear-shaped. We still use hammers. They work.


I have nothing against any of them, I just was surprised anyone thinks that a new(ish) language can't find engineers to use it.


From my admittedly cursory look at D, I find nothing persuasive enough to get me to take it on and learn it when the chances of my ever using it are slim. I guess I'm just old school. It seems that every week there is a new language, framework, etc., and most of them are really not doing anything radically different. As a back end guy, I see most of the churn happens in the web dev world. I prefer old, stable, and very little churn, hence my continued love for COBOL in particular. I also like C++ and Python, but there is something fantastic about COBOL, sh, and awk, which are oft-used. I'm getting less and less enthusiastic about systems stuff now that I'm getting older and more into writing useful tools to help my guys. Call it an easy exit...


Developers can learn new languages though if a company lets them.

(Edited to be less glib. bad habit)


I agree, but that costs money for the company and time for the developer.

Airbnb dropping React Native due to a lack of qualified candidates who were fluent in both the native and JavaScript realms is a great example. Something can work flawlessly under the right conditions (for instance, with developers who understand and can implement the path of least resistance across native and React), yet if it's prohibitive to create those conditions, you are back where you started.

If the dominant ideology among developers is that learning language or technology X costs more of their career than it's worth, it dies out. The same goes for the company and its resources.


I'm skeptical about extrapolating too much from a JavaScript example. That ecosystem looks nothing like the rest of the software development world.

Erlang has been around in a niche role for >30 years and still going strong, even stronger now that Elixir has reinvigorated interest in the platform.


My only point is that you should be careful about creating problems building a team and adding ramp-up time to your product when you attempt to optimize. It's a decision that doesn't go away without lots of heartburn and employees losing morale rewriting their own work.


According to their own blog, they dropped React because it was a difficult-to-use piece of technology that was slowing them down and making it difficult to offer a top-notch mobile experience.

Yes, they also had difficulties hiring devs who know both native and React, but that's normal. Most mobile devs wouldn't touch React.


> but that costs money for the company and time for the developer

Quite the opposite: the new tool saves time and money.

The Airbnb example just shows that one should not jump on the newest thing on the market.


A friend (C# dev) just refused a job from a company in the Netherlands, for a D dev position. I don't know if it's the same company.

It's not that they're not capable, but some will not want to risk several years of their career to learn it.


Honestly, if I were hiring and heard this of an applicant (and didn't have reason to believe it was something else, like salary), I would feel that we dodged a bullet.

(That may not be entirely fair; I guess I just tend to be skeptical of people who get hung up on languages.)


I don't agree with you. Provided you could persuade him that the company and tech are worth it, I think he'd be a good addition to most companies.

It’s a legit concern.


And did they refuse the job because of D, low salary, not wanting to move to the Netherlands or some other reason?

I could see a C++ dev giving this a try without hurting their career prospects, but C# devs are not exactly known for trying out non-mainstream technologies. If they switch away for a couple of years, MS will pump out a couple of new frameworks and .NET programming languages in the meantime. Fire and motion, as it were.


He wants to move to the Netherlands and he was quite explicit that he's apprehensive about D. It's true that C# programmers generally don't venture outside the mainstream.


I can understand this. If I had to pick a marketable/future employment language today, I would pick C# over D (I have used both).


Just tell your new hires "here is a book on the language, go learn it", which is what I was told at my first job (world-leading R&D).


I am usually not against learning new stuff, but I would also like my learning from current work to generalize to my future career. From a pragmatic perspective, learning D is probably going to have less ROI.

Either the company focuses solely on finding enthusiasts, or it pays a premium to overcome this.


True up to a point, but any professional developer needs to be able to pick up a new language quickly. I had to learn PL/1G to work on a map-reduce code base.

I was assuming a more rational choice, but then developing serious technical software in Pascal instead of Fortran in the '80s was not a rational choice in the first place.


Ideally, the company switching to D wouldn't affect their current or future engineers at all unless some of the engineers are also programmers.



