youerbt's comments | Hacker News

> That also explains why TDD is more popular in say Ruby or Python vs. Java.

I'd say that TDD being more popular in untyped languages speaks against TDD, as it hints that maybe some of its benefits are covered already by a type system.


You did clarify later a bit, but this cannot stand unchallenged. TDD and tests solve different problems than types do, and so are valuable in their own right. Tests assert that no matter what you change, this one fact remains true. Types assert that you are using the right things in your code.

I don't think a lack of types is what makes untyped languages like TDD (though I miss types a lot). I think it is that there is no way to find out whether a function exists until runtime (most such languages allow self-modifying code of some form, so static analysis can't verify it without solving the halting problem). Though once you know a function exists, the next step of verifying that the right function (or, in some languages, the right overload) is being called does need types.


The biggest proponents of TDD I’ve seen are only capable of writing code that one cannot trust in the absence of tests. Writing tests is good, but striving for 100% coverage contorts code in ways that are detrimental to robustness and correctness.


I like to think I'm better than that. Who knows though.

I'm also against measuring coverage - I've never seen anything useful to do with the measure, so why bother.


Types are just auto-verified logic. TDD just tests logic which cannot be typed in a given type system. In Lean4 one can type a lot (dependent types to test integration shapes, and proofs act as proptests).


It's worth noting that type checking can also verify things that cannot reasonably be verified by tests. Things like exhaustiveness checking ("you handled every possible value of this enum") or, even simpler, "you didn't attempt to access a property that does not exist on this object."
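A minimal sketch in Haskell (the type and names are invented for illustration): with pattern-match warnings enabled, GHC flags the missing constructor at compile time, with no test ever running.

    {-# OPTIONS_GHC -Wincomplete-patterns #-}

    data Color = Red | Green | Blue

    describe :: Color -> String
    describe c = case c of
      Red   -> "warm"
      Green -> "cool"
      -- GHC warns at compile time: patterns of type Color not matched: Blue

Covering the same guarantee with tests would take one test per constructor, and the suite would silently go stale the day someone adds a fourth color.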


TDD also asserts that if you make a change you don't break anything. Most programs are too complex to keep all behavior in your head so sometimes what looks like an obvious change breaks something you forgot about. Types won't tell you because you adjusted the types, but the test will tell you. (if you have the right tests of functionality - a very hard problem outside the scope of this discussion)


The person you're replying to mentioned Lean4. In such a language, types can definitely assert that a change didn't break anything, in the sense that you can write down the property you want as a type, and if there is an implementation (a proof), your code satisfies that property.

Now, proofs can often be devilishly hard to write, whereas tests are easy (because they're just examples), so in practice, types probably won't supplant tests even in dependently typed languages.
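A tiny Lean 4 sketch of that idea (the property is a standard list lemma, picked only for illustration): the property is written as a type, and any implementation that typechecks is a proof that it holds, no matter how the underlying functions get refactored.

    -- "Reversing a list twice is the identity", stated as a type.
    -- The proof below is the implementation that inhabits it; if it
    -- still typechecks after a change, the property still holds.
    theorem reverseTwice (xs : List α) : xs.reverse.reverse = xs := by
      simp

(Here simp discharges the goal using the standard library's lemmas about List.reverse.)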


Proofs are impossible in many cases because nobody really fully understands the requirements; people are sort of working them out as they go (and in any case the requirements will change over time). That might be what you meant by devilishly hard to write.

Tests let you instead say "with this setup here is what happens", and then ensure that whatever else you change you don't break that one thing.

To my knowledge nobody has scaled proofs to very large problems. I still think proofs should have a place in better code, but I can't figure out how to prove anything in my real world code base. (I could probably prove my language's basic containers - something that itself would be valuable!)


> That might be what you meant by devilishly hard to write.

No, that's a separate problem. I agree that we don't always know what we need our code to do very precisely - although I think we could still increase the number of known invariants/properties in many situations - but even when we do, a proof is often hard to write.

Proofs also typically have the problem that they're not refactoring-proof: if you change the implementation of function f (but it still does the same thing), tests won't have to change, but a proof would have to be rewritten (the type would stay the same though).


I am coming from Rust, writing a lot of narrowing wrappers/type states/proptests/const and runtime asserts/expects as stand-ins for possible proofs. I am aiming to do these first.

For big things wiring many things together, I will use normal tests (given that Lean4 allows reflecting on the IO graph, I guess I can have some fun here too).


> runtime asserts/expects

Those are not proofs. If you have a formal proof tool (I'm not aware of one for Rust, but there is a lot of progress in this area), you can feed those in, and sometimes the tool can prove things. Though beware, there are limitations to this approach - sometimes the tool would have to solve the halting problem to say code is correct (though if it says code is wrong, it is right); other times the problem is solvable only on a computer that doesn't exist yet.

Types and const are a form of formal proof - if you don't cast those away (I'm not sure what Rust allows, particularly in unsafe). However, there is a lot more potential in formal proofs. Rust's borrow checker is a formal proof, and there are safe things you might want to do that the borrow checker doesn't allow, because if they were allowed Rust could no longer prove your code memory safe (a trade-off that is probably worth it in most cases).


I agree with what you're saying, but some context:

> I'm not aware of one for rust, but there is a lot of progress in this area

https://github.com/model-checking/kani is probably the best known one, I believe there are a few others.

> (I'm not sure what rust allows, particularly in unsafe).

You can't "cast const away" in Rust, even in unsafe. That is, you can do it in unsafe, but it is always undefined behavior to do so. (I am speaking about &T and &mut T here, const T and mut T exist and you're allowed to cast between those, as they have no real aliasing requirements and are really just a lint.)


It's blatantly obvious that some of the benefits of extensive testing are covered by a type system. Even by a mostly useless one like Java's.

If you look at any well tested program in a dynamic language, almost all the tests check the same properties that a type system would also check by default. If you remove those, usually only a few remain that test non-trivial properties.

EDIT: And I just love that in the time I took to write this, somebody wrote a comment about how it isn't so. No, it is still blatantly obvious.


I'd say if you think tests and types are doing the same thing in the same way, you are badly abusing at least one of them.

One attacks the problem of bugs from the bottom up and the other from the top down. They both have diminishing returns on investment the closer they get to overlapping on covering the same types of bug.

The Haskell bros who think tests don't do anything useful because "a good type system covers all bugs" themselves haven't really delivered anything useful.


> The Haskell bros who think tests don't do anything useful because "a good type system covers all bugs" themselves haven't really delivered anything useful.

Please don't do this. It's not constructive.


I'm a Haskell bro and I love testing. You misunderstand me, though. All I say is that maybe _some_ of those tests deliver value by just making sure that code even runs, which is otherwise covered by types.


When I do TDD (virtually every time I write a line of code), each test scenario isn't just a way to verify that the code is working; it's also a specification - often for a previously unconsidered edge case.

Throwing away the test means throwing away that user story and the value that comes with it.


I believe you (other than tests being specifications; they are examples at best). But that doesn't change the fact that TDD looks more widely adopted in untyped languages, and that deserves an explanation.

Mine is that a lot of potential errors (typos, type mismatches) don't need to be exercised by running code in a typed language.

Yours is... well, you don't really address it.


>I believe you other than tests being specifications

If you're not, that suggests you're not doing them right, which in turn suggests why you might have an issue with them...


How would you make a test a specification?

I suppose you could do something like, enumerate every possible combination of inputs and check that some property holds for all of them. Or, maybe you could instead randomly select a number of combinations of inputs and check that a property holds for each of those random combinations, but that wouldn't be guaranteed to find the inputs for which the specification isn't satisfied.

I guess maybe the test could pass mock values to the function to be tested, such that the function is effectively evaluated symbolically (where any branching that depends on the inputs to the function would have the mocked object specify what the result of the conditional should be, with different tests for different cases)?

Or.. Can you explain how you write tests such that they truly function as specifications?


Good question - and there's been lots of work in this area. See for example property testing and fuzz testing, which can do something similar to what your second paragraph suggests.

You should be able to find a property testing library in your favourite language, such as Hypothesis (Python), QuickCheck (Haskell), fast-check (JS/TypeScript), etc.
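For a flavour of it, here is a minimal QuickCheck sketch (the property is a made-up example): you state a property over all inputs, and the library generates random cases trying to falsify it.

    import Test.QuickCheck

    -- Property: reversing a list twice gives back the original list.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice
    -- typically prints: +++ OK, passed 100 tests.

If the property fails, QuickCheck shrinks the failing input down to a minimal counterexample, which is where much of the practical value comes from.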


Jane logs in, enters her DOB which is 11/5/1998 (X), does Y, the result of which is Z.

Where X, Y and Z are very specific.

These example scenarios work well as a communication medium for discussing intended program behavior, and they also translate well into tests.

>enumerate every possible combination

Whereas if you start doing this you will probably confuse your stakeholders.

Specific examples tend to make better specifications.


A “specification” which is based on just “the desired behavior in a few specific examples” seems to allow undesired behavior without it being clear whether said behavior is allowed by the spec.

That to me doesn’t seem like what a “specification” is.

Now, maybe having good examples is more important for business use cases than having a specification.

But I generally wouldn’t call it one.


That’s exactly what those tests are for. When you no longer have to worry whether you invoked .foo() or .fooTypo(), you’ve eliminated one class of bug: namely, trying to run things that do not exist.

Maybe you meant to invoke .bar(), but at least we know thanks to type checks that the target exists.
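A tiny Haskell illustration of that class of bug (the names are hypothetical): the typo'd call is rejected before anything runs, so no test is needed to catch it.

    data Thing = Thing

    foo :: Thing -> String
    foo _ = "did the thing"

    useIt :: Thing -> String
    useIt t = foo t
    -- useIt t = fooTypo t  -- uncommenting this fails to compile:
    --                         "Variable not in scope: fooTypo"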


> it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID"

I'm not sure I understand how hiding this changes anything. Could you not just skip passing "tenant ID" to the doBusinessLogic function and pass it only to the saveToDatabase function?


That's exactly what they're talking about: "tenantId" shouldn't be in the signature of functions that aren't concerned with the tenant ID, such as business logic.


But they chose a solution (if I understand correctly), where tenant ID is not in the signature of functions that use it, either.
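To make the shape of that solution concrete, a hedged Haskell sketch of one common approach, the ReaderT pattern (TenantId and the function bodies here are hypothetical): business logic stays tenant-free, and the persistence code pulls the tenant from an ambient environment instead of a parameter threaded through every caller.

    import Control.Monad.Reader  -- from the mtl package
    import Control.Monad.IO.Class (liftIO)

    newtype TenantId = TenantId Int

    -- Business logic: no tenant anywhere in the signature.
    doBusinessLogic :: Int -> Int
    doBusinessLogic = (* 2)

    -- Persistence: the tenant comes from the ambient environment,
    -- not from an explicit parameter threaded through every caller.
    saveToDatabase :: Int -> ReaderT TenantId IO ()
    saveToDatabase result = do
      TenantId tid <- ask
      liftIO $ putStrLn ("tenant " ++ show tid ++ ": saving " ++ show result)

    main :: IO ()
    main = runReaderT (saveToDatabase (doBusinessLogic 21)) (TenantId 7)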


> So why hasn't it happened?

4. History. In those types of discussions, there are always "rational" arguments presented, but this one is missing.

> One with lots of persistent mutable state.

You mean like a database? I don't see a problem here. In fact, there is a large group of programs that Haskell fits so nicely that it cannot be 3: REST/HTTP APIs. This is pretty much your data goes in, data goes out.


> > One with lots of persistent mutable state.

> You mean like a database?

No, I mean like a routing switcher for a TV station. You have a set of inputs and a set of outputs, and you have various sources of control, and you have commands to switch outputs to different inputs. And when one source of control makes a change, you have to update all the other sources of control about the change, so that they have a current view of the world. The state of the current connections is the fundamental thing in the program - more even than controlling the hardware is.


Thanks. This does sound like a state machine, though the devil is probably in the details. Yes, here Haskell is probably a bad choice, and something where direct memory manipulation is bread and butter should do better. Which is completely fine; Haskell is a high level language.

But in your example, PHP is also a bad choice, and alas, it dwarfs Haskell in popularity. I can't really think of a niche where PHP is a great fit but Haskell isn't.


Sounds like you want green threads, a concurrent runtime, and best-in-class synchronisation primitives.


HTTP APIs are just an interface. The application behind it can be almost anything.


One exception for me about >>= is, instead of this:

    thing <- getThing
    case thing of
      ...

writing this (with the LambdaCase extension):

    getThing >>= \case
      ...

Not so much because it is less code, but because there are fewer variables to name.


You can pipe into Elixir cases too, and the language I'm developing also allows for this and more.

The difference here seems to be that \case is a lambda?


Elixir doesn’t even need a special syntax — it gets Haskell’s LambdaCase as a natural consequence of case being just another function, and the pipe operator always chaining by the first argument.

Haskell’s >>= is doing something slightly different. ‘getThing >>= \case…’ means roughly ‘perform the action getThing, then match the result and perform the matched branch’.

Whereas ‘getThing |> \case…’ means ‘pattern match on the action getThing, and return the matched action (without performing it)’.

The >>= operator can also be used for anything satisfying the Monad laws, e.g. your actions can be non-deterministic.


Pedantry: >>= can be used for any instance of the Monad typeclass. GHC isn't sufficiently advanced to verify that a given Monad instance satisfies the monad laws; it can verify that the instance satisfies the type definition, but for e.g. the identity law "m >>= return === m" the instance can still return something violating the law (not something substitutable with m).
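To make that concrete, a small sketch (entirely made up for illustration): an instance that satisfies the Monad type definition, which GHC happily accepts, while violating the identity law, because every bind bumps a counter.

    import Control.Monad (ap, liftM)

    -- A writer-like type that counts how many binds were performed.
    newtype Counted a = Counted (Int, a) deriving Show

    instance Functor Counted where
      fmap = liftM

    instance Applicative Counted where
      pure x = Counted (0, x)
      (<*>)  = ap

    instance Monad Counted where
      Counted (n, x) >>= f =
        let Counted (m, y) = f x in Counted (n + m + 1, y)

    -- Counted (0, 'a') >>= return  evaluates to  Counted (1, 'a'),
    -- so m >>= return is not substitutable with m: the identity law
    -- is broken, yet everything typechecks.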


> ...case being just another function, and the pipe operator always chaining by the first argument.

Interesting! I thought Elixir was mostly macros all the way down, like this article shows:

https://evuez.net/posts/cursed-elixir.html

Being able to pipe into def, defp, defmodule, if, etc. is fantastic! But apparently you say at least in the case of case, it's not a macro, and it's just a function which returns a monad - I guess it's the same effect after all? Is that why people say Haskell can get away with the lack of Lisp-style macros because of its type system and laziness?


I was wrong about this. Case is a macro (a special-cased macro even, defined at https://github.com/elixir-lang/elixir/blob/4def31f8abea5fba4...), not a function. It works with pipes because in the AST it's a call, unlike in Haskell where case is its own thing.


Yeah `>>= \case` is a really nice pattern! I use it a lot.


> It's almost never "we just don't have to care" when comparing to most other popular languages.

Struggling with the Haskell type system is not an experience of somebody who has developed an intuition about the Haskell type system. Granted, it is not a binary thing; you can have good intuition about some parts of it and struggle with others.

I think the way you put it is, while technically true, not fair. Those "most other" languages are very similar to one another. It is not C#'s achievement that you don't struggle with its type system coming from Java.

This is like people struggling with Rust because of the burrow checker; well, they have probably never programmed with a burrow checker before.


> burrow

FYI it's "borrow" (as in when someone lends something to you) not "burrow" (which is a tunnel/hole)


Starcraft player detected


i'm sure burroughs liked checker


Good intuition is a gift from god. For the rest of us, notational confusion abounds. Types are great. Syntax (not semantics) and notation can be a major bummer.

I think use makes master: the more you use it, the more mastery you will have and the better intuition will follow.


I struggled with the borrow checker because I’m smarter than it is, not because I haven’t worked with one before. Mainly I say “I’m smarter” because I’ve worked on big projects without it and never had any issues. Granted, I’ve only gotten in a fight with it once before giving up on the language, mainly because it forced me to refactor half the codebase to get it to shut up and I had forgotten why I was doing it in the first place.


> I’m smarter than it is

Given how many important projects maintained by smart people have dealt with bugs that safe Rust makes impossible, I'm inclined to doubt that.


Let's be realistic, rust is great, especially compared to C.

But it swaps the need to memorize undefined behavior for the need to memorize the borrow checker rules.

If you are dealing with common system-level needs like doubly linked lists, Rust adds back in the need for that superhuman-level memory of undefined behavior, because the borrow checker is limited to what static analysis can do.

IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.

Probably just my opinion, and I am not suggesting it is superior, but Zig-style length as part of the type is what would mitigate most of what is problematic with C/C++.

Basically, a char myArray[10]; really being a char *myArray (the length is not part of the type) is the main problem.

Obviously the borrow checker removes that problem, but not once you need deques, treaps, etc...

If I could use Rust as my only or primary language, memorizing the borrow checker rules wouldn't be that bad.

But it becomes problematic for people who need to be polyglots, in a way that even Haskell doesn't.

I really think there are ways for Rust to grow into a larger role.

But at this point it seems that even mentioning the limits is forbidden, and people end up trying to invert interface contracts and leak implementation details when their existing systems are incompatible with the project's dogma.

It is great that they suggest limiting the size of unsafe code blocks etc., but the entire world cannot bend to decisions that ignore the real-world nuances of real systems.

Rust needs to grow into a language that can adjust to very real needs, and many real needs will never fit into what static analysis can do.

Heck, C would be safer if that were the real world.

I really do hope that the project grows into the role, but the number of 'unsafe' blocks points to it not being there today, despite the spin.


> But it swaps the need to memorize undefined behavior for the need to memorize the borrow checker rules.

With one massive difference: the borrow checker will tell you when you're wrong at compile time. Undefined behaviour will make your program compile and it will look like your program is fine until it's not.

I'd take that trade.


> But it swaps the need to memorize undefined behavior for the need to memorize the borrow checker rules.

You have a choice: Fight the borrow checker on the front end until your code compiles, or chase down bugs on the back end from undefined behavior and other safety problems. No single answer there, it depends on what you're doing.

Cognitive load is a big issue. In Rust I find this comes in a couple of phases, the first being figuring out what the borrow checker does and how to appease it. The second is figuring out how to structure your data to avoid overusing clone() and unsafe blocks. It's a bit like learning a functional language, where there's a short-term adaptation to the language, then a longer-term adaptation to how you think about solving problems.

You mention doubly-linked lists, which are a good example of a "non-star" data structure that doesn't nicely fit Rust's model. But: for how many problems is this the ideal data structure, really? They're horrible for memory locality. Those of us who learned from K&R cut our teeth on linked lists but for the last 20 years almost every problem I've seen has a better option.


Intrusive doubly linked lists are very nice in C for some types of problems, and don't have issues with memory locality.


> IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.

rust-analyzer is the tool that does this. Having it flag errors in my editor as I write code keeps me from getting too far down the wrong road. The moment I do something Rust thinks is goofy, I get a red underline and I can pause to consider my options at that moment.


In my particular case, I was absolutely sure it was safe without having to test it. Would it remain so forever? Probably not because software changes over time.


In those cases what you are supposed to do is write your logic using unsafe, and wrap it with lifetime-safe APIs. That way your "smartness" joins efforts with Rust's, and not-so-smart people can use your APIs without having to be as smart.


Did you rewrite the code base into C++ to avoid the borrow checker?


No, I just stopped contributing to the project and never touched Rust again.


I do AoC in SQL; I wish that were true. With Postgres, you have lots of regex/string manipulation functions that make it easy.

For me, the biggest problem was memory. Recursive CTEs are meant to generate tables, so if you are doing some maze traversal, you have to keep every step in memory until you are done.


And the modern web search tools don't even try to be good at searching; they are some engagement-bullshit-here-is-something-that-might-interest-you contraption.

I love listening to bootlegs (recordings of a concert that are not official, mostly done by fans). I happen to be a fan of a band that has a quite dedicated fan base and tons of bootlegs. I remember, and I'm quite sure of it, that I could type "<BAND> <SONG> <YEAR> live" in the YouTube search and get pages upon pages of exactly that: recordings of the song by the band, in the given year.

Today if I type "tool right in two live" I get:

actually what I requested - 12

official audio - 1

cover song - 4

other song by the band - 12

full concert - 4

"reaction" video - 4

And after that there are mostly "reaction" videos, yea, just what I wanted. Try it out, it's actually funny (and sad).


Nearly a 1/3 success rate for the first page of results? Honestly, I thought things were worse.


Yes, I thought I just wanted to chill and listen to a song, but actually cutting-edge AI technology decided that I would have a better time listening to this car mechanic over-reacting to my favorite song, which he surely hears for the first time in his life.

Not to mention that our AI overlords, coming out swinging with a billion dollars of research behind them, couldn't figure out that if I'm looking for a live recording of a song, then maybe, just maybe, I actually know those other songs too and could search for those if I wanted to. Let's give them a few years to get back to the search results they had a few years ago.


That makes no sense to me. If this coder has to access an array by index twenty times a day, then he is going to remember it eventually, no? If it is rare that he has to do it, then why memorize it?

You really think there is more value in remembering how to do something in some arbitrary, shitty, programming language than understanding the concept of doing it? With understanding the idea you can do it in any language, at any time, it is just a few seconds away.


It makes no sense because it indeed makes no sense. People who successfully solve real-world problems understand concepts and ideas and how to apply them; they understand how to iterate and extrapolate.

I've met too many people who can do a specific thing but actually have no idea what's going on for the GP's logic to hold any water at all.


It's not about the value in remembering syntax. It's the value in being able to recall a concept from memory.

Memory is a key part of learning. Understanding is great for learning new concepts, but you want to already know a concept. That way lies knowledge and experience.


> This isn't even hyperbole.

Is there even some big scale mobilization going on in Russia right now?

Or is this just the standard dig at Russia, because the topic is related to Russia?


Yeah, it's been big news for a couple of years now? https://www.cbc.ca/news/world/russia-military-conscription-a...


This news is about conscription they do twice a year, regardless of the war.


https://www.rnbo.gov.ua/en/Diialnist/6714.html "Total mobilisation may follow the 2024 presidential elections."


So there isn't any right now? After the partial mobilization of 2022 I don't think they mobilized more. They just rely on massive monetary incentives to get volunteers iirc. Though yeah maybe that won't last


It's nice, but it comes at a cost. For example, every user of TOML will forever have to put strings in quotes. Why? Because having other types creates ambiguity that is resolved by this one simple trick. But if you don't quote them, then you have "the Norway problem" like in YAML (where an unquoted NO is parsed as the boolean false rather than a string).

