That doesn’t make sense. Any function doing I/O is not pure by definition, and while using an IO monad can shift the impurity around to make a function behave as if it were pure, it is not pure and cannot ever be pure. Can you explain?
Yeah - I think from your explanation I can only deduce that you don't know the actual definition and concept of pure functional programming. Note that the term "functional programming" has been watered down over time and now pretty much means "use .map and .filter instead of a loop" etc. Historically the meaning was different though, see: https://en.wikipedia.org/wiki/Purely_functional_programming
With pure functional programming you essentially treat IO-effects as a first class concept and make them explicit in your code. You never "execute" IO directly but compose small "blueprints" for doing IO into bigger blueprints and so on until you end up with one big blueprint which is your application. You then pass it to the runtime ("main() { ...; return bigBlueprint; }") and then it gets executed.
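A minimal sketch of that blueprint idea in Scala (a toy IO, not the real cats.effect or ZIO API — names and the `unsafeRun` mechanism are made up for illustration):

```scala
// An IO value merely *describes* an effect; nothing runs until the
// runtime interprets it at the very end.
final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B]         = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

object Blueprint extends App {
  val readName: IO[String]          = IO(() => "world") // stand-in for real input
  def greet(name: String): IO[Unit] = IO(() => println(s"hello, $name"))

  // Composing blueprints yields a bigger blueprint; still nothing has run.
  val bigBlueprint: IO[Unit] = readName.flatMap(greet)

  // Only here, at the "end of the world", does the runtime execute it.
  bigBlueprint.unsafeRun()
}
```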
In other words, pure functional programming treats IO as more important than "regular programming" does, and without any IO there would be no need to do so in the first place. But without any IO a program would be meaningless, because you want to at least expose the result of a computation to the outside world.
I do understand the concept of pureness in the appropriate context. However, it appears that you do not.
This is a good example, since it ticks all the boxes: imperative code runs as a side effect of executing the code the compiler outputs, which was generated from declarations written in functional style, including those impure IO-using functions.
Slapping IO on your function is making it explicit that it is impure.
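For illustration, here is what that explicitness looks like in the type signatures (with a hypothetical toy IO wrapper, not the real library type):

```scala
// Toy effect wrapper: holding a thunk defers the effect instead of running it.
final case class IO[A](run: () => A)

object Signatures {
  // The signature hides the effect: nothing tells the caller
  // that this touches the filesystem.
  def readFileImpure(path: String): String =
    scala.io.Source.fromFile(path).mkString

  // The IO in the return type advertises the effect;
  // the body only builds a description of the read.
  def readFile(path: String): IO[String] =
    IO(() => scala.io.Source.fromFile(path).mkString)
}
```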
The concept of pure functional programming is totally independent of what the compiler generates. Of course, in the end, there will always be something running that executes effects and that will be impure instructions. But for the context of pfp only the programming language matters since that is what we are working on. If you work with assembly directly then yeah, that is impure by all means.
> Slapping IO on your function is making it explicit that it is impure.
Not sure what "slapping IO on your function" even means here.
To maybe sum it up and bring the discussion to an end: if you have an expression and can freely duplicate it and have it evaluated anywhere within your code (e.g. by assigning it to a variable) without changing the semantics of your program (performance can get worse, that's okay), then the expression is referentially transparent. If all expressions in your program are referentially transparent, you are doing pure functional programming. That is the simplified definition.
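A small self-contained Scala example of that definition (names are made up for illustration):

```scala
object RefTransparency extends App {
  var log = Vector.empty[String]
  def impure(): Int = { log = log :+ "hit"; 1 } // side effect: mutates log

  // Referentially transparent: substituting `pure` by its definition
  // anywhere changes nothing about the program's meaning.
  val pure = 1 + 1
  val a    = pure + pure // same as (1 + 1) + (1 + 1)

  // NOT referentially transparent: inlining the call changes semantics.
  val x = impure()
  val b = x + x                   // log ends up with ONE entry
  // val b2 = impure() + impure() // would leave TWO entries: different program

  assert(a == 4 && b == 2 && log.size == 1)
}
```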
Seems that thread is moving the goalposts. The way I see it, universal pure functional programming is an academic exercise and cannot practically exist on Von Neumann architectures.
I assume you are genuinely interested in a discussion, so let's get back to the original question and tackle it from a different angle:
> Any function doing I/O is not pure by definition, and while using an IO monad can shift the impureness a bit to make a function behave as if it was pure, it is not pure and cannot ever be pure
Pure functional programming should have a definition that is useful. I already gave my definition. If you think this is not a good definition, then what would be yours? Or more concretely to your example: what does it mean for a function to 1) be pure and 2) to behave pure? And what would it mean if a function is/does neither?
> The way I see it, universal pure functional programming is an academic exercise and cannot practically exist on Von Neumann architectures.
Unlimited memory also cannot practically exist on Von Neumann architectures. But in programming languages we still use concepts like linked lists that have no size limitation whatsoever. In the context of the language, and of reading and understanding the code, this is what matters; what the hardware actually does is irrelevant except for rare edge cases. The same is true for (erased) generics. They simply don't exist in the generated machine code, but they still matter for us humans. Programming languages exist for humans to serialize our thoughts in an easy way while still letting the machine interpret them. So in the context of a style of programming or a programming language feature, I don't see how what you said makes any sense or is related in any way.
> Or more concretely to your example: what does it mean for a function to 1) be pure and 2) to behave pure? And what would it mean if a function is/does neither?
1) function is intrinsically pure in the mathematical sense: it produces the same output for the same input each time. In practical terms, code execution must not have side effects on the system.
2) function has been made pure in the sense that a compiler can reason about its inputs and outputs as if it was actually pure in the mathematical sense. Code execution can have side effects on the system, but this has been neatly abstracted away.
Memory limitations are not a good analogue: I/O requires interrupts.
> 1) function is intrinsically pure in the mathematical sense: it produces the same output for the same input each time. In practical terms, code execution must not have side effects on the system.
Now let's go back to the example and Odersky's quote. Suppose you have a function `foo` that returns `IO[String]`, using e.g. cats.effect.IO or zio.IO, where the String is e.g. the content of a file or something else. Is this function pure or not? Answer: it is. You can call it multiple times and it produces the same output for the same input.
val x = foo
val x2 = foo
val x3 = foo
No matter which of those x you use, the result will always be the same. You can call the function as often as you want. There is no side effect being executed. Hence the function is pure by the definition you just gave, and hence Odersky's claim is incorrect (I think he probably would not say this again nowadays).
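To make those three lines runnable, here is a sketch with a toy IO (not the real cats.effect.IO or zio.IO), with a counter added to show that no effect runs when `foo` is called:

```scala
// Toy IO: wraps a deferred computation; constructing it runs nothing.
final case class IO[A](run: () => A)

object PureFoo extends App {
  var reads = 0 // counts how often the "file" is actually read

  // Stand-in for reading a file: returns a *description* of the read.
  def foo: IO[String] = IO { () =>
    reads += 1
    "file contents"
  }

  val x  = foo
  val x2 = foo
  val x3 = foo

  // Calling foo three times executed no side effect.
  assert(reads == 0)
  // All three values are interchangeable: same result whichever we run.
  assert(x.run() == x2.run() && x2.run() == x3.run())
}
```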
> 2) function has been made pure in the sense that a compiler can reason about its inputs and outputs as if it was actually pure in the mathematical sense. Code execution can have side effects on the system, but this has been neatly abstracted away.
What does "neatly abstracted away" mean? How is such a function different from one in 1), and how is it different from a function that is just impure? Can you give an example?
> Memory limitations are not a good analogue: I/O requires interrupts.
They are a very good analogue, because conceptually they too cannot exist on Von Neumann architectures. Why does the reason matter? I also gave another example: generics. How about those? They don't even depend on any specific physical limitation; they simply vanish. I can come up with more examples, but I don't really see the point. Obviously people use pure functional programming and they call it that. If you say that isn't possible, I think we are now discussing (again) terminology and not practical implications.
I believe looking at the implementation of ”IO” is a sufficient example of ”neatly abstracted away”.
There are practical implications to all IO: interrupts introduce asynchronicity, and they have failure modes. What if a network is down, or a hard drive has to retry a few times to read a sector? Abstractions only go so far. At least your program crashes when it runs out of memory.