I completely agree with the points in this article and have come to the same conclusion after using languages that default to unary curried functions.
> I'd also love to hear if you know any (dis)advantages of curried functions other than the ones mentioned.
I think it fundamentally boils down to the curried style being _implicit_ partial application, whereas a syntax for partial application is _explicit_. And as is often the case, being explicit is clearer. If you see something like
let f = foobinade a b
in a curried language then you don't immediately know if `f` is the result of foobinading `a` and `b` or if `f` is `foobinade` partially applied to some of its arguments. Without currying you'd either write
let f = foobinade(a, b)
or
let f = foobinade(a, b, $) // (using the syntax in the blog post)
and now it's immediately explicitly clear which of the two cases we're in.
This clarity not only helps humans, it also helps compilers give better error messages. In a curried language, if a function is mistakenly applied to too few arguments then the compiler can't always immediately detect the error. For instance, if `foobinade` takes 3 arguments, then `let f = foobinade a b` doesn't give rise to any errors, whereas a compiler can immediately detect the error in `let f = foobinade(a, b)`.
A syntax for partial application offers the same practical benefits as currying without the downsides (albeit losing some of the theoretical simplicity).
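To make the contrast concrete, here's a minimal Python sketch (`foobinade` and its curried encoding are purely illustrative): with fixed arity the mistake fails loudly at the call site, while the curried encoding silently hands back a function.

```python
# Illustrative only: foobinade is the thread's made-up example function.
def foobinade(a, b, c):          # fixed arity: three arguments
    return a + b + c

def foobinade_curried(a):        # curried encoding: one argument at a time
    return lambda b: lambda c: a + b + c

# Fixed arity: a missing argument fails immediately, right where it happens.
try:
    f = foobinade(1, 2)          # TypeError: missing argument
except TypeError:
    f = None

# Curried: the same mistake silently produces a function value; the error
# only surfaces later, when g is used as if it were a number.
g = foobinade_curried(1)(2)      # no error here -- g is a lambda, not a sum
```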
The functional programming take is that “the result of foobinade-ing a and b” IS “foobinade applied to two of its arguments”. The application is not some syntactic pun or homonym that can refer to two different meanings—those are the same meaning.
Let us postulate two functions. One is named foobinade, and it takes three arguments. The other is named foobinadd, and it only takes two arguments. (Yes, I know, shoot anybody who actually names things that way.)
When someone writes
f = foobinade a b
g = foobinadd c d
there is no confusion to the compiler. The problem is the reader. Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.
Whereas with explicit syntax, the parentheses say what the author thinks they're doing, and the compiler will yell at them if they get it wrong.
> Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.
Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results". Or rather, you never have a result that isn't a function; `0` and `lambda: 0` (in Python syntax) are the same thing.
It does, of course, turn out that for many people this isn't a natural way of thinking about things.
> Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results".
Everyone knows that. At least everyone who would click a post titled "A case against currying." The article's author clearly knows that too.
That's not the point. The point is that this distinction is very meaningful in practice, as many functions are only meant to be used in one way. It's extremely rare that you need to (printf "%d %d" foo). The extra freedom provided by currying is useful, but it should be opt-in.
Just because two things are fundamentally equivalent, it doesn't mean it's useless to distinguish them. Mathematics is the art of giving the same name to different things; and engineering is the art of giving different names to the same thing depending on the context.
Not when a language embraces currying fully and then you find that it’s used all the fucking time.
It’s really as simple as that: a language makes the currying syntax easy, and programmers use it all the time; a language disallows currying or makes the currying syntax unwieldy, and programmers avoid it.
If 0 and a function that always returns 0 are the same thing, does that make `lambda: lambda: 0` also the same? I suppose it must, otherwise `0` and `lambda: 0` would not truly be the same.
Fine, it's a regular type. It's still not the type I think it is. If it's an Int -> Int when I think it's an Int, that's still a problem, no matter how much Int -> Int is an "actual result".
And the compiler immediately tells you that you are wrong: your type annotation does not unify with the compiler’s inferred type.
And if you think this is verbose, well, many traditional imperative languages like C have no type deduction, and you need to provide a type for every variable anyway.
I spent the last three years on the receiving end of mass quantities of code written by people who knew what they were writing but didn't do an adequate job of communicating it to readers who didn't already know everything.
What you say is true. And it works, if you're the author and are having trouble keeping it all straight. It doesn't work if the author didn't do it and you are the reader, though.
And that's the more common case, for two reasons. First, code is read more often than it's written. Second, when you're the author, you probably already have it in your head how many parameters foobinade takes when you call it, but when you're the reader, you have to go consult the definition to find out.
But if I was willing to do it, I could go through and annotate the variables like that, and have the compiler tell me everything I got wrong. It would be tedious, but I could do it.
Well, I totally disagree with this. One of the main benefits of currying is the ability to chain function calls together. For example, in F# this is typically done with the |> operator:
let result =
input
|> foobinade a b
|> barbalyze c d
Or, if we really want to name our partial function before applying it, we can use the >> operator instead:
let f = foobinade a b >> barbalyze c d
let result = f input
Requiring an explicit "hole" for this defeats the purpose:
let f = barbalyze(c, d, foobinade(a, b, $))
let result = f(input)
Or, just as bad, you could give up on partial function application entirely and go with:
let result = barbalyze(c, d, foobinade(a, b, input))
Either way, I hope that gives everyone the same "ick" it gives me.
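For reference, the closest one gets in a language without currying is threading through explicit partial application. Here's a rough Python sketch (the `pipe` helper and the foobinade/barbalyze bodies are all made up); note it only reads smoothly because `partial` fills the leading arguments and the data slot happens to come last:

```python
from functools import partial, reduce

# Hypothetical pipe helper: threads a value through one-argument functions,
# mimicking F#'s |> operator.
def pipe(value, *fns):
    return reduce(lambda acc, fn: fn(acc), fns, value)

# Stand-in bodies, invented for the example.
def foobinade(a, b, x):
    return (x + a) * b

def barbalyze(c, d, x):
    return x * c - d

result = pipe(10,
              partial(foobinade, 1, 2),   # fills a and b, leaves the data slot
              partial(barbalyze, 3, 4))   # fills c and d
```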
Yeah, especially in F#, a language that's meant to interoperate with .NET libraries (most of which were not written with a "data input last" mindset). Now I'm quite surprised that F# doesn't have this feature.
This is essentially how Mathematica does it: the sugar `Foo[x,#,z]&` is semantically the same as `Function[{y}, Foo[x,y,z]]`. The `#` marks the hole, and the trailing `&` delimits how much of the expression the hole belongs to.
For pipelines in any language, putting one function call per line often works well. Naming the variables can help readability. It also makes using a debugger easier:
let foos = foobinate(a, b, input)
let bars = barbakize(c, d, foos)
Other languages have method call syntax, which allows some chaining in a way that works well with autocomplete.
A bunch of Scheme implementations define little-known syntax for partial application[0] that lets you put limits on how many arguments have to be provided at each application step. Using the article's add example:
(define ((foo a b c) d)
(do-stuff))
(for-each (foo 1 2 3) '(x y z))
vs
(define (foo a b c)
(lambda (d) (do-stuff)))
(for-each (foo 1 2 3) '(x y z))
It gets tedious with lots of single-argument cases like the above, but in cases where you know you're going to be calling a function a lot with, say, the first three arguments always the same and the fourth varying, it can be cleaner than a function of three arguments that returns an anonymous lambda of one argument.
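The same staging idiom translates straightforwardly to any language with closures; here's a Python sketch (`foo` and its body are placeholders):

```python
# Staged function: the three-argument work happens once, and the returned
# one-argument closure is what gets used repeatedly.
def foo(a, b, c):
    base = a + b + c              # work that depends only on a, b, c
    def apply_to(d):              # plays the role of (lambda (d) ...)
        return base * d
    return apply_to

results = list(map(foo(1, 2, 3), [10, 20, 30]))
```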
There's also a commonly supported placeholder syntax[1]:
(define inc (cut + 1 <>))
(inc 2) ; => 3
(define (foo a b c d) (do-stuff))
(for-each (cut foo 1 2 3 <>) '(x y z))
And assorted ways to define or adapt functions to make fully curried ones when desired. I like the "make it easy to do something complicated or esoteric when needed, but don't make it the default to avoid confusion" approach.
> 3. Better type errors. With currying, writing (f 1 2) instead of (f 1 2 3) silently produces a partial application. The compiler happily infers a function type like :s -> :t and moves on. The real error only surfaces later, when that unexpected function value finally clashes with an incompatible type, often far from the actual mistake. With fixed arity, a missing argument is caught right where it happens.
'Putting things' (multi-argument function calls, in this case) 'in-band doesn't make them go away, but it does successfully hide them from your tooling', part 422.
> Simplicity: Every function takes exactly one input and produces exactly one output. No exceptions. If you didn’t care about the input or output, you used Unit, and we made special syntax for that.
Seems like a disaster to use s-expressions for a language like that. I love s-expressions but they only make sense for variadic languages. The entire point of them is to quickly delimit how many arguments are passed.
In, say, Haskell `f x y z` is the same thing as `(((f x) y) z)`. That is definitely not the case with s-expressions; parentheses don't just delimit, they denote function application. It's like saying that `f(x,y,z)` is the same as `f(x)(y)(z)`, which it really isn't. The point of s-expressions is that you often find yourself calling functions with many arguments that are themselves the result of a function application, at which point `foo(a)(g(a,b), h(x,y))` just becomes easier to parse as `((foo a) (g a b) (h x y))`.
Thanks for sharing, interesting to see that people writing functional languages also experience the same issues in practice. And they give some reasons I didn't think about.
One "feature of currying" in Haskell that isn't mentioned in the fine article is that parts of the function may not depend on the last argument(s), and so only need to be evaluated once over many applications of the last argument(s), which can be very useful when partially applied functions are passed to higher-order functions.
Functions can be explicitly written to do this, or it can be achieved through compiler optimisation.
That's a very good point, I never thought really about how this relates to the execution model & graph reduction and such. Do you have an example of a function where this can make a difference? I might add something to the article about it.
It's also a question of whether this is exclusive to a curried definition or if such an optimization may also apply to partial application with a special operator like in the article. I think it could, but the compiler might need to do some extra work?
One slightly contrived example would be if you had a function that returned the point of a set closest to another given point.
getClosest :: Set Point -> Point -> Point
You could imagine getClosest building a quadtree internally, and that tree wouldn't depend on the second argument. I say slightly contrived because I would probably prefer to make the tree explicit if this was important.
Another example would be if you were wrapping a C-library but were exposing a pure interface. Say you had to create some object and lock a mutex for the first argument but the second was safe. If this was a function intended to be passed to higher-order functions then you might avoid a lot of unnecessary lock contention.
You may be able to achieve something like this with optimisations of your explicit syntax, but argument order is relevant for this. I don't immediately see how it would be achieved without compiling a function for every permutation of the arguments.
I think we need to see a few non-contrived examples, because i think in every case where you might take advantage of currying like this, you actually want to make it explicit, as you say.
The flip side of your example is that people see a function signature like getClosest, and think it's fine to call it many times with a set and a point, and now you're building a fresh quadtree on each call. Making the staging explicit steers them away from this.
Consider a function like ‘match regex str’. While non-lazy languages may offer an alternate API for pre-compiling the regex to speed up matching, partial evaluation makes that unnecessary.
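In a strict language you can recover the same effect by staging explicitly; a small Python sketch using `re` (the `matcher` helper is hypothetical):

```python
import re

# Compile the regex once, return a cheap per-string matcher: the explicit
# version of what partial evaluation gives a lazy curried language for free.
def matcher(pattern):
    compiled = re.compile(pattern)     # expensive step, done once
    def match(s):                      # cheap step, done per string
        return compiled.search(s) is not None
    return match

is_number = matcher(r"^\d+$")
```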
I didn't consider inlining, but I believe you're correct: you could regain the optimisation for this example since the function is non-recursive and the application is shallow. The GHC optimisation I had in mind is like the opposite of inlining; it factors a common part out of a lambda expression that doesn't depend on the variable.
I don't believe inlining can take you to the exact same place though. Thinking about explicit INLINE pragmas, I envision that if you were to implement your partial function application sugar you would have to decide whether the output of your sugar is marked INLINE and either way you choose would be a compromise, right? The compromise with Haskell and curried functions today is that the programmer has to consider the order of arguments, it only works in one direction but on the other hand the optimisation is very dependable.
An example where this is useful is to help inline otherwise recursive functions, by writing the function to take some useful parameters first, then return a recursive function which takes the remaining parameters. This allows the function to be partially inlined, resulting in better performance due to the specialization on the first parameters. For example, foldr:
foldr f z = go
where
go [] = z
go (x : xs) = f x (go xs)
when called with (+) and 0 can be inlined to
go xs = case xs of
[] -> 0
(x : xs) -> x + go xs
which doesn't have to create a closure to pass around the function and zero value, and can subsequently inline (+), etc.
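For what it's worth, the staged shape itself (though not GHC's automatic inlining) can be written in any language with closures; a rough Python analogue:

```python
# Staged foldr: f and z are captured once, and the returned go closure
# recurses only over the list.
def make_foldr(f, z):
    def go(xs):
        if not xs:
            return z
        return f(xs[0], go(xs[1:]))
    return go

sum_list = make_foldr(lambda x, acc: x + acc, 0)
```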
In that case I want the signature of "this function pre-computes, then returns another function" and "this function takes two arguments" to be different, to show intent.
> achieved through compiler optimisation
Haskell is different in that its evaluation ordering allows this. But in strict evaluation languages, this is much harder, or even forbidden by language semantics.
Here's what Yaron Minsky (an OCaml guy) has to say:
> starting from scratch, I’d avoid partial application as the default way of building multi-argument functions.
A benefit to using the currying style is that you can do work in the intermediate steps and use that later. It is not simply a 'cool' way to define functions. Imagine a logging framework:
After each partial application step you can do more and more work narrowing the scope of what you return from subsequent functions.
;; Preprocessing the configuration is possible
;; Imagine all logging is turned off, now you can return a noop
(partial log conf)
;; You can look up the identifier in the configuration to determine what the logger function should look like
(partial log conf id)
;; You could return a noop function if the level is not enabled for the particular id
(partial log conf id level)
;; Pre-parsing the format string is now possible
(partial log conf id level "%time - %id")
In many codebases I've seen a large amount of code is literally just to emulate this process with multiple classes, where you're performing work and then caching it somewhere. In simpler cases you can consolidate all of that in a function call and use partial application. Without some heroic work by the compiler you simply cannot do that in an imperative style.
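A hedged Python sketch of this staged logger (the conf shape, level numbers, and `%msg` format token are all invented for illustration):

```python
# Each stage does its narrowing work once and returns the next stage,
# mirroring the (partial log conf ...) steps above.
def log(conf):
    if not conf.get("enabled", True):
        return lambda id: lambda level: lambda fmt: (lambda *args: None)
    def for_id(id):
        min_level = conf.get("levels", {}).get(id, 0)
        def for_level(level):
            if level < min_level:
                return lambda fmt: (lambda *args: None)   # noop, decided early
            def for_fmt(fmt):
                parts = fmt.split("%msg")                 # "pre-parse" the format
                def emit(msg):
                    return msg.join(parts)
                return emit
            return for_fmt
        return for_level
    return for_id

conf = {"enabled": True, "levels": {"db": 2}}
warn_db = log(conf)("db")(3)("[db] %msg")
```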
1. Such bad examples :( Tuples are data types you have to destruct, in every language. Somebody please show me a language where this doesn't require a tuple-to-function-argument translation:
sayHi name age = "Hi I'm " ++ name ++ " and I'm " ++ show age
people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]
-- ERROR: sayHi is String -> Int -> String, a person is (String, Int)
conversation = intercalate "\n" (map sayHi people)
In Python you have `sayHi(*person)` to unpack the tuple into separate arguments, or pattern matching. In C-like languages you have structs you have to destructure.
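For instance, a direct Python translation of the snippet above, using `itertools.starmap` to do the tuple-to-arguments unpacking:

```python
from itertools import starmap

def say_hi(name, age):
    return f"Hi I'm {name} and I'm {age}"

people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]

# map(say_hi, people) would fail: each element is one tuple, not two arguments.
conversation = "\n".join(starmap(say_hi, people))
```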
2. And performance, you'd think a slow-down affecting every single function call would be high-up on the optimization wish list, right? That's why it's implemented in basically every compiler, including non-fp compilers. Here's GHC authors in 2004 declaring that obviously the optimization is in "any decent compiler". https://simonmar.github.io/bib/papers/eval-apply.pdf
3. Type errors, the only place where currying is actually bad, is not even mentioned directly. Accidentally passing a different number of arguments compared to what you expected will result in a compiler error.
Some very powerful and generic languages will happily support lots of weird code you throw at them instead of erroring out. Others will error out on things you'd expect them to handle just fine.
Here's Haskell supporting something most people would never want to use, giving it a proper type, and causing a confusing type error in any surrounding code when you leave out the parentheses around `+`:
foldl (+) 0 [1,2,3] :: Num a => a
foldl + 0 [1,2,3]
:: (Foldable t, Num a1, Num ((b -> a2 -> b) -> b -> t a2 -> b),
Num ([a1] -> (b -> a2 -> b) -> b -> t a2 -> b)) =>
(b -> a2 -> b) -> b -> t a2 -> b
Is it bad that it has figured out that you (apparently) wanted to add things of type `(b -> a2 -> b) -> b -> t a2 -> b` as if they were numbers, and done what you told it to do? Drop it into any gpt of choice and it'll find the mistake for you right away.
In SML, I believe. I never used SML, but from what I understand, in ML all functions technically take one argument, which may be a tuple. In Haskell and OCaml, all functions technically take one argument and just return a function that takes one argument again.
I never understood why the latter was so popular. It exists just for automatic implicit partial application, which honestly should have explicit syntax. In Scheme one simply uses the `(cut f x y)` operator, which performs a partial application and returns a function that consumes the remaining arguments; that is far more explicit. But since Scheme is dynamically typed, implicit partial application would be a disaster there, and it's not like the error messages in OCaml and Haskell can't be confusing at times.
I don't get simulating it with tuples either, to be honest. Nothing wrong with just letting functions take multiple arguments and that's it. In Rust, oddly, functions take multiple arguments as expected, but they return tuples to simulate returning multiple values, whereas in Scheme functions actually return multiple values. There's a difference between returning one value which is a tuple of multiple values, and actually returning multiple values.
I think automatic implicit partial application, like almost anything “implicit”, is bad. But in Haskell or OCaml or even Rust it would have to be a syntactic macro; it can't just be a normal function, because there are no easy variadic functions, which to be fair are incredibly difficult without dynamic typing, and in practice just passing some kind of sequence is what you really want.
I couldn't agree more. Having spent a lot of time with a language with currying like this recently, it seems very obviously a misfeature.
1. Looking at a function call, you can't tell if it's returning data, or a function from some unknown number of arguments to data, without carefully examining both its declaration and its call site
2. Writing a function call, you can accidentally get a function rather than data if you leave off an argument; coupled with pervasive type inference, this can lead to some really tiresome compiler errors
3. Functions which return functions look just like functions which take more arguments and return data (card-carrying functional programmers might argue these are really the same thing, but semantically, they aren't at all - in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?)
3a. Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function (so make_string_comparator_for_locale has type like Locale -> Function<string -> string -> order>), so now if you actually want to return a function, there's boilerplate at the return and call sites that wouldn't be there in a less 'concise' language!
I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase. I think academic and hobby languages, and so functional languages, are particularly prone to this. I think implicit currying is one of these features.
> in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?
In the sense that "make_string_comparator" is not a useful concept. Being able to make a "string comparator" is inherently a function of being able to compare strings, and carving out a bespoke concept for some variation of this universal idea adds complexity that is neither necessary nor particularly useful. At the extreme, that's how you end up with Enterprise-style OO codebases full of useless nouns like "FooAdapter" and "BarFactory".
The alternative is to have a consistent, systematic way to turn verbs into nouns. In English we have gerunds. I don't have to say "the sport where you ski" and "the activity where you write", I can just say "skiing" and "writing". In functional programming we have lambdas. On top of that, curried functions are just a sort of convenient contraction to make the common case smoother. And hey, maybe the contraction isn't worth the learning curve or usability edge-cases, but the function it's serving is still important!
> Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function
That seems either completely self-inflicted, or a limitation of whatever language you're using. I've worked on a number of codebases in Haskell, OCaml and a couple of Lisps, and I have never seen or wanted anything remotely like this.
> I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase.
That's not the case with Haskell.
Haskell has a tendency to pick up features that have deep theoretical reasoning and "mathematical beauty". Of course, that doesn't always correlate with codebase health very well either, and there's a segment of the community that is very vocal about dropping features because of that.
Anyway, the case here is that a superficial kind of mathematical beauty seems to conflict with a deeper case of it.
I always felt Monads were an utterly disgusting hack that was otherwise quite practical though. It didn't feel like mathematical beauty at all to me, but like a hack to keep the optimizer from sequencing events out of order.
One language that uses the tuple argument convention described in the article is Standard ML. In Standard ML, like OCaml and Haskell, all functions take exactly one argument. However, while OCaml and Haskell prefer to curry the arguments, Standard ML does not.
There is one situation, however, where Standard ML prefers currying: higher-order functions. To take one example, the type signature of `map` (for mapping over lists) is `val map : ('a -> 'b) -> 'a list -> 'b list`. Because the signature is given in this way, one can "stage" the higher-order function argument and represent the function "increment all elements in the list" as `map (fn n => n + 1)`.
That being said, because of the value restriction [0], currying is less powerful because variables defined using partial application cannot be used polymorphically.
I'm biased here since the easy currying is by far my favourite feature in Haskell (it always bothers me that I have to explicitly create a lambda in Lisps), but the arguments in the article don't convince me, what with the syntactic overhead of the "tuple style".
I want to agree, but there is the tension that in business code, what you pass as arguments is very often already named like the parameter, so having to indicate the parameter name in the call leads to a lot of redundancy. And if you’re using domain types judiciously, the types are typically also different, hence (in a statically-typed language) there is already a reduced risk of passing the wrong parameter.
Maybe there could be a rule that parameters have to be named only if their type doesn’t already disambiguate them and if there isn’t some concordance between the naming in the argument expression and the parameter, or something along those lines. But the ergonomics of that might be annoying as well.
This is an issue in Python but less so in languages like JavaScript that support "field name punning", where you pass named arguments via lightweight record construction syntax, and you don't need to duplicate a field name if it's the same as the local variable name you're using for that field's value.
That forces you to name the variable identically to the parameter. For example, you may want to call your variable `loggedInUser` when the fact that the user is logged in is important for the code’s logic, but then you can’t pass it as-is for a field that is only called `user`. Having to name the parameter leads to routinely having to write `foo: blaFoo` because just `blaFoo` wouldn’t match, or else to drop the informative `bla`. That’s part of the tension I was referring to.
OCaml has a neat little feature where it elides the parameter and variable name if they're the same:
let warn_user ~message = ... (* the ~ makes this a named parameter *)
let error = "fatal error!!" in
warn_user ~message:error; (* different names, have to specify both *)
let message = "fatal error!!" in
warn_user ~message; (* same names, elided *)
The elision doesn't always kick in, because sometimes you want the variable to have a different name, but in practice it kicks in a lot, and makes a real difference. In a way, cases when it doesn't kick in are also telling you something, because you're crossing some sort of context boundary where some value is called different things on either side.
I agree with this article. Tuples nicely unified multiple return values and multiple parameters. FWIW Scala and Virgil both support the _ syntax for the placeholder in a partial application.
def add(x: int, y: int) -> int { return x + y; }
def add3 = add(_, 3);
> This feature does have some limitations, for instance when we have multiple nested function calls, but in those cases an explicit lambda expression is always still possible.
The solution is to delimit the level of expression the underscore (or dollar sign suggested in the article) belongs to. In Kotlin they use braces and `it`.
{ add(it, 3) } // Kotiln
add(_, 3) // Scala
Then modifying the "hole in the expression" is easy. Suppose we want to subtract the first argument by 2 before passing that to `add`:
{ add(subtract(it, 2), 3) } // Kotlin
// add(subtract(_, 2), 3) // no, this means adding 3 to the function `add(subtract(_, 2)`
x => { add(subtract(x, 2), 3) } // Scala
There are good ideas in functional languages that other languages have borrowed, but there are bad ideas too: currying, function call syntax without parentheses, Hindley-Milner type inference, and laziness by default (Haskell) are experiments that new languages shouldn’t copy.
I believe one of the main reasons that F# has never really taken off is that Microsoft isn't afraid to borrow the good parts of F# into C#. (They really should've ported discriminated unions though.)
Currently DUs are slated for the next version of C#, releasing at the end of this year. However, last I knew they only come boxed, which at least to me partly defeats the point of having them (being able to have multiple types inline because of the way they share memory and only have a single size, based on compiler optimizations).
I like currying because it's fun and cool, but found myself nodding along throughout the whole article. I've taken for granted that declaring and using curried functions with nice associativity (i.e., avoiding lots of parentheses) is as ergonomic as partial application syntax gets, but I'm glad to have that assumption challenged.
The "hole" syntax for partial application with dollar signs is a really creative alternative that seems much nicer. Does anyone know of any languages that actually do it that way? I'd love to try it out and see if it's actually nicer in practice.
Clojure, and CL as well, have macros that let you thread results from call to call, but you could argue that's cheating because of how flexible Lisp syntax is.
Clojure also has the anonymous function syntax with #(foo a b %) where you essentially get exactly this hole functionality (but with % instead of $). Additionally there’s partial that does partial application, so you could also do (partial foo a b).
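For comparison, roughly the same two spellings in Python (`foo` here is just a stand-in):

```python
from functools import partial

# Stand-in function for the example.
def foo(a, b, x):
    return a * x + b

filled = partial(foo, 2, 3)          # like (partial foo 2 3)
hole = lambda x: foo(2, 3, x)        # like Clojure's #(foo 2 3 %)
```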
I completely agree. Giving the first parameter of a function special treatment only makes sense in a limited subset of cases, while forcing an artificial asymmetry in the general case that I find unergonomic.
With a language like Forth, you know that you can use a stack for data and apply functions to that data. With currying you put functions on a stack instead. This makes it weird. But you also obscure the dataflow.
With the most successful functional programming language, Excel, the dataflow is fully exposed. Which makes it easy.
Certain functional programming languages prefer the passing of just one data-item from one function to the next. One parameter in and one parameter out. And for this to work with more values, it needs to use functions as an output.
It is unnecessary cognitive burden. And APL programmers would love it.
Let's make an apple pie as an example.
You give the apple and butter and flour to the cook.
The cursed curry version would be "use knife for cutting, add cutting board, add apple, stand near table, use hand. Bowl, add table, put, flour, mix, cut, knife butter, mixer, put, press, shape, cut_apple." etc..
Here’s an article I wrote a while ago about a hypothetical language feature I call “folded application”, that makes parameter-list style and folded style equivalent.
I've long been thinking the same thing. In many fields of mathematics the placeholder $ from the OP is often written •, i.e. partial function application is written as f(a, b, •). I've always found it weird that most functional languages, particularly heavily math-inspired ones like Haskell, deviate from that. Yes, there are isomorphisms left and right but at the end of the day you have to settle on one category and one syntax. A function f: A × B -> C is simply not the same thing as a function f: A -> B -> C. Stop treating it like it is.
Mathematically it's quite pretty, and it gives you elegant partial application for free (at least if you want to partially apply the first N arguments).
If you don't find currying essential you haven't done enough pointfree. If you haven't done enough pointfree you haven't picked up equational reasoning yet, and that's the thing that holds you back in your ability to read abstractions easily, which in turn guides your arguments on clarity.
They are isomorphic in the strong sense that their logical interpretations are identical. Applying Curry-Howard, a function type is an implication, so a curried function with type A -> B -> C is equivalent to an implication that says "If A, then if B, then C." Likewise, a tuple is a conjunction, so a non-curried function with type (A, B) -> C is equivalent to the logic statement (A /\ B) -> C, i.e., "If A and B then C." Both logical statements are equivalent, i.e., have the same truth tables.
However, as the article outlines, there are differences (both positive and negative) to using functions with these types. Curried functions allow for partial application, leading to elegant definitions, e.g., in Haskell, we can define a function that sums over lists as sum = foldl (+) 0 where we leave out foldl's final list argument, giving us a function expecting a list that performs the behavior we expect. However, this style of programming can lead to weird games and unwieldy code because of the positional nature of curried functions, e.g., having to use function combinators such as Haskell's flip function (with type (A -> B -> C) -> B -> A -> C) to juggle arguments you do not want to fill to the end of the parameter list.
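The isomorphism itself is easy to write down explicitly; a small Python sketch of `curry` and `uncurry` for the two-argument case:

```python
# curry and uncurry convert between the tuple form and the staged form,
# and compose back to the original function.
def curry(f):                         # ((a, b) -> c)  =>  a -> (b -> c)
    return lambda a: lambda b: f(a, b)

def uncurry(g):                       # a -> (b -> c)  =>  (a, b) -> c
    return lambda a, b: g(a)(b)

def add(a, b):
    return a + b
```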
Please see my other comment below, and maybe re-read the article. I'm not asking what the difference is between curried and non-curried. The article draws a three way distinction, while I'm asking why two of them should be considered distinct, and not the pair you're referring to.
Apologies, I was focused on the usual pairing in this space and not the more subtle one you're talking about. As others have pointed out, there isn't really a semantic difference between the two. Both approaches to function parameters produce the same effect. The differences are purely in "implementation," either theoretically or in terms of systems-building.
From a theoretical perspective, a tuple expresses the idea of "many things" and a multi-argument parameter list expresses the idea of both "many things" and "function arguments." Thus, from a cleanliness perspective for your definitions, you may want to separate the two, i.e., require functions to have exactly one argument and then pass a tuple when multiple arguments are required. This theoretical cleanliness does result in concrete gains: writing down a formalism for single-argument functions is decidedly cleaner (in my opinion) than for multi-argument functions, and implementing a basic interpreter off of this formalism is, subsequently, easier.
From a systems perspective, there is a clear downside in this space. If tuples exist on the heap (as they do for most functional languages), you induce a heap allocation when you want to pass multiple arguments! This pitfall is evident with the semi-common beginner's mistake with OCaml algebraic datatype definitions where the programmer inadvertently wraps the constructor type with parentheses, thereby specifying a constructor of one-argument that is a tuple instead of a multi-argument constructor (see https://stackoverflow.com/questions/67079629/is-a-multiple-a... for more details).
The distinction is mostly semantic so you could say they are the same. But I thought it makes sense to emphasize that the former is a feature of function types, and the latter is still technically single-parameter.
I suppose one real difference is that you cannot feed a tuple into a parameter list function. Like:
Probably just that having parameter-lists as a specific special feature makes them distinct from tuple types. So you may end up with packing/unpacking features to convert between them, and a function being generic over its number of parameters is distinct from it being generic over its input types. On the other hand you can more easily do stuff like named args or default values.
The parameter list forces the individual arguments to be visible at the call site. You cannot separate the packaging of the argument list from invoking the function (barring special syntactic or library support by the language). It also affects how singleton tuples behave in your language.
The article is about programmer ergonomics of a language. Two languages can have substantially different ergonomics even when there is a straightforward mapping between the two.
It's not that they are meaningfully different. It's just acknowledging if you really want currying, you can say 'why not just use a single parameter of tuple type'.
Then there's an implication of 'sure, but that doesn't actually help much if it's not standard', and then it's not addressed further.
all three are isomorphic. but in some languages if you define a function via something like `function myFun(x: Int, y: Bool) = ...` and also have some value `let a: (Int, Bool) = (1, true)` it doesn't mean you can call `myFun(a)`. because a parameter list is treated by the language as a different kind of construct than a tuple.
A language which truly treats an argument list as a tuple can support this:
args = (a, b, c)
f args
…and that will have the effect of binding a, b, and c as arguments in the called function.
In fact many “scripting” languages, like Javascript and Python, support something close to this using their array type. If you squint, you can see them as languages whose functions take a single argument that is equivalent to an array. At an internal implementation level this equivalence can be messy, though.
Lower level languages like C and Rust tend not to support this.
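In Python the packing/unpacking mentioned above is first-class; a sketch:

```python
# A fixed parameter list...
def f(a, b, c):
    return a + b * c

# ...can be fed a tuple by unpacking it at the call site.
args = (1, 2, 3)
print(f(*args))  # 7

# The reverse: collect all arguments into one tuple inside the function,
# making "three parameters" and "one triple" effectively interchangeable.
def g(*args):
    a, b, c = args
    return a + b * c

print(g(1, 2, 3))  # 7
```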
Right. Currying as the default means of passing arguments in functional languages is a gimmick, a hack in the derogatory sense. It's low-level and anti-declarative.
Prior to this article, I didn't think of currying as being something a person could be "for" or "against." It just is. The fact that a function of multiple inputs can be equivalently thought of as a function of a tuple can be equivalently thought of as a composite of single-input functions that return functions is about cognition, and understanding structure, not code syntax.
But it is about code syntax. Languages like Haskell make it part of the language by only supporting single-argument functions. So currying is the default behaviour for programmers.
I think you are focusing on the theoretical aspect of partial application and missing the article's actual argument, which is that having it be the default, implicit way of defining and calling functions isn't a good programming interface.
I'm a programmer, not a computer scientist. The equivalence is a computer science thing. They are logically equivalent in theoretical computer science. Fine.
They are not equally easy for me to use when I'm writing a program. So from a software engineering perspective, they are very much not the same.
I completely agree with the points in this article and have come to the same conclusion after using languages that default to unary curried functions.
> I'd also love to hear if you know any (dis)advantages of curried functions other than the ones mentioned.
I think it fundamentally boils down to the curried style being _implicit_ partial application, whereas a syntax for partial application is _explicit_. And as is often the case, being explicit is clearer. If you see something like

    let f = foobinade a b

in a curried language, then you don't immediately know if `f` is the result of foobinading `a` and `b` or if `f` is `foobinade` partially applied to some of its arguments. Without currying you'd either write

    let f = foobinade(a, b)

or

    let f = foobinade(a, b, $) // (using the syntax in the blog post)

and now it's immediately, explicitly clear which of the two cases we're in.

This clarity not only helps humans, it also helps compilers give better error messages. In a curried language, if a function is mistakenly applied to too few arguments, the compiler can't always immediately detect the error. For instance, if `foobinade` takes 3 arguments, then `let f = foobinade a b` doesn't give rise to any errors, whereas a compiler can immediately detect the error in `let f = foobinade(a, b)`.
A syntax for partial application offers the same practical benefits of currying without the downsides (albeit losing some of the theoretical simplicity).
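The contrast can be made concrete in Python, using `functools.partial` to play the role of the explicit `$`-style syntax; `foobinade` is the thread's made-up name:

```python
from functools import partial

def foobinade(a, b, c):
    return (a, b, c)

# Explicit: the call site shows that an argument slot is deliberately open.
f = partial(foobinade, 1, 2)   # plays the role of foobinade(a, b, $)
print(f(3))                    # (1, 2, 3)

# Implicit: a hand-curried version. Without knowing the arity, you cannot
# tell at the call site whether g is a finished result or a function.
curried = lambda a: lambda b: lambda c: (a, b, c)
g = curried(1)(2)
print(g(3))                    # (1, 2, 3)
```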
The functional programming take is that “the result of foobinade-ing a and b” IS “foobinade applied to two of its arguments”. The application is not some syntactic pun or homonym that can refer to two different meanings; those are the same meaning.
Let us postulate two functions. One is named foobinade, and it takes three arguments. The other is named foobinadd, and it only takes two arguments. (Yes, I know, shoot anybody who actually names things that way.)
When someone writes

    f = foobinade a b
    g = foobinadd c d

there is no confusion to the compiler. The problem is the reader. Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.

Whereas with explicit syntax, the parentheses say what the author thinks they're doing, and the compiler will yell at them if they get it wrong.
> Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.
Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results". Or rather, you never have a result that isn't a function; `0` and `lambda: 0` (in Python syntax) are the same thing.
It does, of course, turn out that for many people this isn't a natural way of thinking about things.
> Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results".
Everyone knows that. At least everyone who would click a post titled "A case against currying." The article's author clearly knows that too.
That's not the point. The point is that this distinction is very meaningful in practice, as many functions are only meant to be used in one way. It's extremely rare that you need to (printf "%d %d" foo). The extra freedom provided by currying is useful, but it should be opt-in.
Just because two things are fundamentally equivalent, it doesn't mean it's useless to distinguish them. Mathematics is the art of giving the same name to different things; and engineering is the art of giving different names to the same thing depending on the context.
> It's extremely rare that
Not when a language embraces currying fully and then you find that it’s used all the fucking time.
It’s really simple as that: a language makes the currying syntax easy, and programmers use it all the time; a language disallows currying or makes the currying syntax unwieldy, and programmers avoid it.
If 0 and a function that always returns 0 are the same thing, does that make `lambda: lambda: 0` also the same? I suppose it must do, otherwise `0` and `lambda: 0` were not truly the same.
Another way to make the point: when you write 0, which do you mean?
In a pure language like Haskell, 0-ary functions <==> constants
Fine, it's a regular type. It's still not the type I think it is. If it's an Int -> Int when I think it's an Int, that's still a problem, no matter how much Int -> Int is an "actual result".
Come on, just write
And the compiler immediately tells you that you are wrong: your type annotation does not unify with the compiler's inferred type.

And if you think this is verbose, well, many traditional imperative languages like C have no type deduction, and you need to provide a type for every variable anyway.
I spent the last three years on the receiving end of mass quantities of code written by people who knew what they were writing but didn't do an adequate job of communicating it to readers who didn't already know everything.
What you say is true. And it works, if you're the author and are having trouble keeping it all straight. It doesn't work if the author didn't do it and you are the reader, though.
And that's the more common case, for two reasons. First, code is read more often than it's written. Second, when you're the author, you probably already have it in your head how many parameters foobinade takes when you call it, but when you're the reader, you have to go consult the definition to find out.
But if I was willing to do it, I could go through and annotate the variables like that, and have the compiler tell me everything I got wrong. It would be tedious, but I could do it.
It’s not at all clear or the same to the new reader of the code.
Well, I totally disagree with this. One of the main benefits of currying is the ability to chain function calls together. For example, in F# this is typically done with the |> operator:
Or, if we really want to name our partial function before applying it, we can use the >> operator instead:

Requiring an explicit "hole" for this defeats the purpose:

Or, just as bad, you could give up on partial function application entirely and go with:

Either way, I hope that gives everyone the same "ick" it gives me.

You can still do this though:
Or if you prefer left-to-right:

Maybe what isn't clear is that this hole operator would bind to the innermost function call, not the whole statement.

Even better, this method lets you pipeline into a parameter which isn't the last one:
Yeah, especially in F#, a language meant to interoperate with .NET libraries (most of which were not written with a "data argument last" mindset), I'm quite surprised that F# doesn't have this feature.
This is essentially how Mathematica does it: the sugar `Foo[x,#,z]&` is semantically the same as `Function[{y}, Foo[x,y,z]]`. The `&` syntax essentially controls what hole belongs where.
Wow, this convinced me. It's so obviously the right approach when you put it this way.
For pipelines in any language, putting one function call per line often works well. Naming the variables can help readability. It also makes using a debugger easier:
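A sketch of this style in Python, with invented step names:

```python
def parse(line):
    return line.strip().split(",")

def pick_name(fields):
    return fields[0]

def shout(name):
    return name.upper() + "!"

# One call per line; each intermediate has a name and can be
# inspected at a breakpoint.
line = "  ada,lovelace "
fields = parse(line)
name = pick_name(fields)
result = shout(name)
print(result)  # ADA!
```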
Other languages have method call syntax, which allows some chaining in a way that works well with autocomplete.

> Naming the variables can help readability
It can, or it can't; depending on the situation. Sometimes it just adds weight to the mental model (because now there's another variable in scope).
A bunch of Scheme implementations define little-known syntax for partial application[0] that lets you put limits on how many arguments have to be provided at each application step. Using the article's add example:
It gets tedious with lots of single-argument cases like the above, but in cases where you know you're going to be calling a function a lot with, say, the first three arguments always the same and the fourth varying, it can be cleaner than a function of three arguments that returns an anonymous lambda of one argument:

vs

There's also a commonly supported placeholder syntax[1]:

And assorted ways to define or adapt functions to make fully curried ones when desired. I like the "make it easy to do something complicated or esoteric when needed, but don't make it the default to avoid confusion" approach.

[0]: https://srfi.schemers.org/srfi-219/srfi-219.html
[1]: https://srfi.schemers.org/srfi-26/srfi-26.html
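The SRFI-26 `cut` idea translates to Python as a small helper; `cut`, the `_` sentinel, and `add3` are all invented for this sketch:

```python
_ = object()  # placeholder marking a hole, like <> in SRFI-26

def cut(f, *spec):
    """Fix the non-placeholder arguments of f; holes are filled later."""
    def applied(*holes):
        it = iter(holes)
        return f(*(next(it) if s is _ else s for s in spec))
    return applied

def add3(a, b, c):
    return a + b + c

add_1_and_2_to = cut(add3, 1, 2, _)
print(add_1_and_2_to(10))  # 13

# Unlike currying, the hole need not be the last argument:
middle = cut(add3, 1, _, 100)
print(middle(5))  # 106
```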
Currying was recently removed from Coalton: https://coalton-lang.github.io/20260312-coalton0p2/#fixed-ar...
> 3. Better type errors. With currying, writing (f 1 2) instead of (f 1 2 3) silently produces a partial application. The compiler happily infers a function type like :s -> :t and moves on. The real error only surfaces later, when that unexpected function value finally clashes with an incompatible type, often far from the actual mistake. With fixed arity, a missing argument is caught right where it happens.
'Putting things' (multi-argument function calls, in this case) 'in-band doesn't make them go away, but it does successfully hide them from your tooling', part 422.
That's so cool. I already liked Coalton, and after this change I think it's definitely going to be even better. Can't wait to try it.
> Simplicity: Every function takes exactly one input and produces exactly one output. No exceptions. If you didn’t care about the input or output, you used Unit, and we made special syntax for that.
Seems like a disaster to use s-expressions for a language like that. I love s-expressions but they only make sense for variadic languages. The entire point of them is to quickly delimit how many arguments are passed.
In, say, Haskell, `f x y z` is the same thing as `(((f x) y) z)`. That is definitely not the case with s-expressions; parentheses don't just delimit, they denote function application. It's like saying that `f(x,y,z)` is the same as `f(x)(y)(z)`, which it really isn't. The point of s-expressions is that you often find yourself calling functions with many arguments that are themselves the result of a function application; at that point `foo(a)(g(a,b), h(x,y))` just becomes easier to parse as `((foo a) (g a b) (h x y))`.
Thanks for sharing, interesting to see that people writing functional languages also experience the same issues in practice. And they give some reasons I didn't think about.
One "feature of currying" in Haskell that isn't mentioned in the fine article is that parts of the function may not depend on the last argument(s) and only need to be evaluated once over many applications of the last argument(s), which can be very useful when partially applied functions are passed to higher-order functions.

Functions can be explicitly written to do this, or it can be achieved through compiler optimisation.
That's a very good point, I never thought really about how this relates to the execution model & graph reduction and such. Do you have an example of a function where this can make a difference? I might add something to the article about it.
It's also a question of whether this is exclusive to a curried definition or if such an optimization may also apply to partial application with a special operator like in the article. I think it could, but the compiler might need to do some extra work?
One slightly contrived example would be if you had a function that returned the point of a set closest to another given point.
getClosest :: Set Point -> Point -> Point
You could imagine getClosest building a quadtree internally, and that tree wouldn't depend on the second argument. I say slightly contrived because I would probably prefer to make the tree explicit if this was important.
Another example would be if you were wrapping a C-library but were exposing a pure interface. Say you had to create some object and lock a mutex for the first argument but the second was safe. If this was a function intended to be passed to higher-order functions then you might avoid a lot of unnecessary lock contention.
You may be able to achieve something like this with optimisations of your explicit syntax, but argument order is relevant for this. I don't immediately see how it would be achieved without compiling a function for every permutation of the arguments.
I think we need to see a few non-contrived examples, because i think in every case where you might take advantage of currying like this, you actually want to make it explicit, as you say.
The flip side of your example is that people see a function signature like getClosest, and think it's fine to call it many times with a set and a point, and now you're building a fresh quadtree on each call. Making the staging explicit steers them away from this.
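A Python sketch of that explicit staging, with a sorted list standing in for the quadtree; all names are invented:

```python
from bisect import bisect_left

def make_closest_finder(points):
    index = sorted(points)          # the expensive step, done exactly once
    def closest(p):
        i = bisect_left(index, p)
        # The nearest value is one of the at-most-two neighbours of p.
        candidates = index[max(0, i - 1):i + 1]
        return min(candidates, key=lambda q: abs(q - p))
    return closest

find = make_closest_finder([3, 10, 1, 7])  # index built here, once
print(find(6))   # 7
print(find(2))   # 1 (ties break toward the first candidate)
```

The signature now makes the staging unmistakable: callers build the finder once and query it many times, rather than accidentally rebuilding the index on every call.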
> and now you're building a fresh quadtree on each call [...] Making the staging explicit steers them away from this.
Irrespective of currying, this is a really interesting point - that the structure of an API should reflect its runtime resource requirements.
Consider a function like ‘match regex str’. While non-lazy languages may offer an alternate API for pre-compiling the regex to speed up matching, partial evaluation makes that unnecessary.
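In Python, for example, the staged and unstaged forms sit side by side in the `re` module:

```python
import re

# Staged: compile once, match many times.
word = re.compile(r"[a-z]+")
print(word.fullmatch("hello") is not None)   # True

# Unstaged convenience form; re recompiles on each call
# (softened in practice by re's internal pattern cache).
print(re.fullmatch(r"[a-z]+", "hello") is not None)  # True
```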
Those are nice examples, thanks.
I was imagining you might achieve this optimization by inlining the function. So if you have
And call it like

Then the compiler might unfold the definition of getClosest and give you

Where it then notices the first part does not depend on p, and rewrites this to

Again, pretty contrived example. But maybe it could work.

I didn't consider inlining, but I believe you're correct: you could regain the optimisation for this example, since the function is non-recursive and the application is shallow. The GHC optimisation I had in mind is like the opposite of inlining; it factors a common part out of a lambda expression that doesn't depend on the variable.
I don't believe inlining can take you to the exact same place though. Thinking about explicit INLINE pragmas, I envision that if you were to implement your partial function application sugar you would have to decide whether the output of your sugar is marked INLINE and either way you choose would be a compromise, right? The compromise with Haskell and curried functions today is that the programmer has to consider the order of arguments, it only works in one direction but on the other hand the optimisation is very dependable.
An example where this is useful is to help inline otherwise recursive functions, by writing the function to take some useful parameters first, then return a recursive function which takes the remaining parameters. This allows the function to be partially in-lined, resulting in better performance due to the specialization on the first parameters. For example, foldr:
    foldr f z = go
      where
        go []     = z
        go (y:ys) = f y (go ys)

when called with (+) and 0 can be inlined to

    go xs = case xs of
      []   -> 0
      y:ys -> y + go ys

which doesn't have to create a closure to pass around the function and zero value, and can subsequently inline (+), etc.

> explicitly written to do this
In that case I want the signature of "this function pre-computes, then returns another function" and "this function takes two arguments" to be different, to show intent.
> achieved through compiler optimisation
Haskell is different in that its evaluation ordering allows this. But in strict evaluation languages, this is much harder, or even forbidden by language semantics.
Here's what Yaron Minsky (an OCaml guy) has to say:
> starting from scratch, I’d avoid partial application as the default way of building multi-argument functions.
https://discuss.ocaml.org/t/reason-general-function-syntax-d...
A benefit to using the currying style is that you can do work in the intermediate steps and use that later. It is not simply a 'cool' way to define functions. Imagine a logging framework:
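A hypothetical sketch of such a staged logger in Python, using closures; all names are invented:

```python
import sys

def make_logger(sink):
    write = sink.write                   # stage 1: resolve the sink once
    def for_level(level):
        prefix = f"[{level.upper()}] "   # stage 2: compute the prefix once
        def log(msg):                    # stage 3: the cheap per-message step
            write(prefix + msg + "\n")
        return log
    return for_level

warn = make_logger(sys.stderr)("warn")
warn("disk almost full")   # writes "[WARN] disk almost full" to stderr
```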
After each partial application step you can do more and more work, narrowing the scope of what you return from subsequent functions. In many codebases I've seen, a large amount of code exists literally just to emulate this process with multiple classes, where you're performing work and then caching it somewhere. In simpler cases you can consolidate all of that in a function call and use partial application. Without some heroic work by the compiler you simply cannot do that in an imperative style.

1. Such bad examples :( Tuples are data types you have to destructure, in every language. Somebody please show me a language where this doesn't require a tuple-to-function-argument translation:
In Python you have `*people` to destructure the tuple into separate arguments, or pattern matching. In C-style languages you have structs you have to destructure.

2. And performance: you'd think a slow-down affecting every single function call would be high up on the optimization wish list, right? That's why it's implemented in basically every compiler, including non-FP compilers. Here's the GHC authors in 2004 declaring that obviously the optimization is in "any decent compiler": https://simonmar.github.io/bib/papers/eval-apply.pdf
3. Type errors, the only place where currying is actually bad, aren't even mentioned directly. Accidentally passing a different number of arguments than you expected will result in a compiler error.
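The difference in when the mistake surfaces is easy to demonstrate in Python, with `foobinade` again standing in for any three-argument function:

```python
# Fixed arity: a missing argument fails at the call site, immediately.
def foobinade(a, b, c):
    return a + b + c

try:
    foobinade(1, 2)
except TypeError as e:
    print("caught at the call site:", e)

# Hand-curried: the same mistake silently yields a function value,
# and the error only surfaces later, far from the actual bug.
curried = lambda a: lambda b: lambda c: a + b + c
f = curried(1)(2)
print(callable(f))  # True -- no error yet
```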
Some very powerful and generic languages will happily support lots of weird code you throw at them instead of erroring out. Others will errors out on things you'd expect them to handle just fine.
Here's Haskell supporting something most people would never want to use, giving it a proper type, and causing a confusing type error in any surrounding code when you leave out the parentheses around `+`:
Is it bad that it has figured out that you (apparently) wanted to add things of type `(b -> a2 -> b) -> b -> t a2 -> b` as if they were numbers, and done what you told it to do? Drop it into any GPT of choice and it'll find the mistake for you right away.

In SML, I believe. I never used SML, but from how I understand it, in ML all functions technically take one argument, which may be a tuple. In Haskell and OCaml, all functions technically take one argument and just return a function that takes one argument again.
I never understood why the latter was so popular. Just for automatic implicit partial application, which honestly should just have explicit syntax. In Scheme one simply uses the `(cut f x y)` operator, which does a partial application and returns a function that consumes the remaining arguments, which is far more explicit. Since Scheme is dynamically typed, implicit partial application would be a disaster there; not that the error messages in OCaml and Haskell can't be confusing at times either.
I don't get simulating it with tuples either, to be honest. Nothing wrong with just letting functions take multiple arguments and that's it. In Rust they oddly take multiple arguments as expected, but they can return tuples to simulate returning multiple values, whereas in Scheme functions just return multiple values. There's a difference between returning one value which is a tuple of multiple values, and actually returning multiple values.
I think automatic implicit partial application, like almost anything “implicit”, is bad. But in Haskell or OCaml or even Rust it would have to be a syntactic macro; it can't just be a normal function, because there are no easy variadic functions, which to be fair are incredibly difficult without dynamic typing, and in practice just passing some kind of sequence is what you really want.
I couldn't agree more. Having spent a lot of time with a language with currying like this recently, it seems very obviously a misfeature.
1. Looking at a function call, you can't tell if it's returning data, or a function from some unknown number of arguments to data, without carefully examining both its declaration and its call site
2. Writing a function call, you can accidentally get a function rather than data if you leave off an argument; coupled with pervasive type inference, this can lead to some really tiresome compiler errors
3. Functions which return functions look just like functions which take more arguments and return data (card-carrying functional programmers might argue these are really the same thing, but semantically, they aren't at all - in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?)
3a. Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function (so make_string_comparator_for_locale has type like Locale -> Function<string -> string -> order>), so now if you actually want to return a function, there's boilerplate at the return and call sites that wouldn't be there in a less 'concise' language!
I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase. I think academic and hobby languages, and so functional languages, are particularly prone to this. I think implicit currying is one of these features.
> in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?
In the sense that "make_string_comparator" is not a useful concept. Being able to make a "string comparator" is inherently a function of being able to compare strings, and carving out a bespoke concept for some variation of this universal idea adds complexity that is neither necessary nor particularly useful. At the extreme, that's how you end up with Enterprise-style OO codebases full of useless nouns like "FooAdapter" and "BarFactory".
The alternative is to have a consistent, systematic way to turn verbs into nouns. In English we have gerunds. I don't have to say "the sport where you ski" and "the activity where you write", I can just say "skiing" and "writing". In functional programming we have lambdas. On top of that, curried functions are just a sort of convenient contraction to make the common case smoother. And hey, maybe the contraction isn't worth the learning curve or usability edge-cases, but the function it's serving is still important!
> Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function
That seems either completely self-inflicted, or a limitation of whatever language you're using. I've worked on a number of codebases in Haskell, OCaml and a couple of Lisps, and I have never seen or wanted anything remotely like this.
> I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase.
That's not the case with Haskell.
Haskell has a tendency to pick up features that have deep theoretical reasoning and "mathematical beauty". Of course, that doesn't always correlate with codebase health very well either, and there's a segment of the community that is very vocal about dropping features because of that.
Anyway, the case here is that a superficial kind of mathematical beauty seems to conflict with a deeper case of it.
I always felt monads were an utterly disgusting hack that was nonetheless quite practical. It didn't feel like mathematical beauty at all to me, but like a hack to keep the optimizer from sequencing events out of order.
One language that uses the tuple argument convention described in the article is Standard ML. In Standard ML, like OCaml and Haskell, all functions take exactly one argument. However, while OCaml and Haskell prefer to curry the arguments, Standard ML does not.
There is one situation, however, where Standard ML prefers currying: higher-order functions. To take one example, the type signature of `map` (for mapping over lists) is `val map : ('a -> 'b) -> 'a list -> 'b list`. Because the signature is given in this way, one can "stage" the higher-order function argument and represent the function "increment all elements in the list" as `map (fn n => n + 1)`.
That being said, because of the value restriction [0], currying is less powerful because variables defined using partial application cannot be used polymorphically.
[0] http://mlton.org/ValueRestriction
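The staged `map` usage above reads almost the same in Python:

```python
from functools import partial

# Stage the higher-order argument, like `map (fn n => n + 1)` in Standard ML.
increment_all = partial(map, lambda n: n + 1)
print(list(increment_all([1, 2, 3])))  # [2, 3, 4]
```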
I didn't know Standard ML, that's interesting.
And yeah I think this is the way to go. For higher-order functions like map it feels too elegant not to write it in a curried style.
I'm biased here, since easy currying is by far my favourite feature in Haskell (it always bothers me that I have to explicitly create a lambda in Lisps), but the arguments in the article don't convince me, what with the syntactic overhead of the "tuple style".
I'd go a step further and say that in business software, named parameters are preferable for all but the smallest functions.
Using curried OR tuple arg lists requires identifying an argument by its position rather than by its name. This saves room on the screen but is mental overhead.
The fact is that arguments do always have names anyway and you always have to know what they are.
I want to agree, but there is the tension that in business code, what you pass as arguments is very often already named like the parameter, so having to indicate the parameter name in the call leads to a lot of redundancy. And if you’re using domain types judiciously, the types are typically also different, hence (in a statically-typed language) there is already a reduced risk of passing the wrong parameter.
Maybe there could be a rule that parameters have to be named only if their type doesn’t already disambiguate them and if there isn’t some concordance between the naming in the argument expression and the parameter, or something along those lines. But the ergonomics of that might be annoying as well.
This is an issue in Python but less so in languages like JavaScript that support "field name punning", where you pass named arguments via lightweight record construction syntax, and you don't need to duplicate a field name if it's the same as the local variable name you're using for that field's value.
That forces you to name the variable identically to the parameter. For example, you may want to call your variable `loggedInUser` when the fact that the user is logged in is important for the code’s logic, but then you can’t pass it as-is for a field that is only called `user`. Having to name the parameter leads to routinely having to write `foo: blaFoo` because just `blaFoo` wouldn’t match, or else to drop the informative `bla`. That’s part of the tension I was referring to.
OCaml has a neat little feature where it elides the parameter and variable name if they're the same:
The elision doesn't always kick in, because sometimes you want the variable to have a different name, but in practice it kicks in a lot, and makes a real difference. In a way, cases when it doesn't kick in are also telling you something, because you're crossing some sort of context boundary where some value is called different things on either side.

I agree with this article. Tuples nicely unified multiple return values and multiple parameters. FWIW, Scala and Virgil both support the _ syntax for the placeholder in a partial application.
Or more simply, reusing some built-in functions:

As noted in the article:
> This feature does have some limitations, for instance when we have multiple nested function calls, but in those cases an explicit lambda expression is always still possible.
I've also complained about that a while ago https://news.ycombinator.com/item?id=35707689
---
The solution is to delimit the level of expression the underscore (or dollar sign suggested in the article) belongs to. In Kotlin they use braces and `it`.
Then modifying the "hole in the expression" is easy. Suppose we want to subtract the first argument by 2 before passing that to `add`:

I think I like the explicit lambda better; I prefer to be judicious with syntactic sugar and special variable names.
Coming from Scala to Kotlin, this is what I thought as well. Seeing `it` felt very wrong, then I got used to it.
There are good ideas in functional languages that other languages have borrowed, but there are bad ideas too: currying, function call syntax without parentheses, Hindley-Milner type inference, and laziness by default (Haskell) are experiments that new languages shouldn’t copy.
I believe one of the main reasons that F# has never really taken off is that Microsoft isn't afraid to borrow the good parts of F# into C#. (They really should've ported discriminated unions though)
Currently DUs are slated for the next version of C#, releasing at the end of this year. However, last I knew they only come boxed, which at least to me partly defeats the point of having them (being able to have multiple types inline, since they share memory and have a single size based on compiler optimizations).
Okay, but if you combine the curried and tuple styles, and add a dash of runtime function pointers, you can solve the expression problem. [1]
[1]: https://gavinhoward.com/2025/04/how-i-solved-the-expression-...
I like currying because it's fun and cool, but found myself nodding along throughout the whole article. I've taken for granted that declaring and using curried functions with nice associativity (i.e., avoiding lots of parentheses) is as ergonomic as partial application syntax gets, but I'm glad to have that assumption challenged.
The "hole" syntax for partial application with dollar signs is a really creative alternative that seems much nicer. Does anyone know of any languages that actually do it that way? I'd love to try it out and see if it's actually nicer in practice.
Glad to hear the article did what I meant for it to do :)
And yes, another comment mentioned that Scala supports this syntax!
Clojure and CL also have macros that let you thread results from call to call, but you could argue that's cheating because of how flexible Lisp syntax is.
Clojure also has the anonymous function syntax with #(foo a b %) where you essentially get exactly this hole functionality (but with % instead of $). Additionally there’s partial that does partial application, so you could also do (partial foo a b).
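For comparison, Python offers the same pair of tools: `functools.partial` for positional partial application and an explicit lambda for the hole (sketch with a hypothetical `foo`):

```python
from functools import partial

def foo(a, b, c):
    # Hypothetical three-argument function.
    return (a, b, c)

# Like Clojure's (partial foo 1 2): fix the leading arguments.
foo_12 = partial(foo, 1, 2)

# Like #(foo 1 2 %): an explicit lambda plays the role of the hole.
foo_hole = lambda x: foo(1, 2, x)

assert foo_12(3) == foo_hole(3) == (1, 2, 3)
```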
Someone else in the comments mentioned that Scala does this with _ as the placeholder.
I completely agree. Giving the first parameter of a function special treatment only makes sense in a limited subset of cases, while forcing an artificial asymmetry in the general case that I find unergonomic.
> curried functions often don't compose nicely
Same for imperative languages with "parameter list" style. In Python, with
def f(a, b): return c, d
def g(k, l): return m, n
you can't do
f(g(1,2))
but have to use
f(*g(1,2))
which is analogous to uncurry, but operates on values rather than functions.
TBH I can't name a language where such f(g(1,2)) would work.
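A runnable version of the sketch above, with illustrative bodies filled in:

```python
def g(k, l):
    # Returns a pair (m, n).
    return k + l, k * l

def f(a, b):
    return a - b

# f(g(1, 2)) raises a TypeError: g returns ONE tuple, but f wants TWO arguments.
# The tuple must be unpacked explicitly with * -- the value-level uncurry:
result = f(*g(1, 2))  # f(3, 2) == 1
```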
Perl, though that uses lists rather than multiple values or fixed-size tuples:
prints out:

With a language like Forth, you know that you can use a stack for data and apply functions on that data. With currying you put functions on a stack instead. This makes it weird. But you also obscure the dataflow.
With the most successful functional programming language, Excel, the dataflow is fully exposed. Which makes it easy.
Certain functional programming languages prefer passing just one data item from one function to the next. One parameter in and one parameter out. And for this to work with more values, they need to use functions as output. It is an unnecessary cognitive burden. And APL programmers would love it.
Let's make an apple pie as an example. You give the apple and butter and flour to the cook. The cursed curry version would be "use knife for cutting, add cutting board, add apple, stand near table, use hand. Bowl, add table, put, flour, mix, cut, knife butter, mixer, put, press, shape, cut_apple." etc..
Here’s an article I wrote a while ago about a hypothetical language feature I call “folded application”, that makes parameter-list style and folded style equivalent.
https://jonathanwarden.com/implicit-currying-and-folded-appl...
The Roc devs came to a similar conclusion: https://www.roc-lang.org/faq#curried-functions
(Side note: if you're reading this Roc devs, could you add a table of contents?)
I've long been thinking the same thing. In many fields of mathematics the placeholder $ from the OP is often written •, i.e. partial function application is written as f(a, b, •). I've always found it weird that most functional languages, particularly heavily math-inspired ones like Haskell, deviate from that. Yes, there are isomorphisms left and right but at the end of the day you have to settle on one category and one syntax. A function f: A × B -> C is simply not the same thing as a function f: A -> B -> C. Stop treating it like it is.
What's the steelman argument though? Why do languages like Haskell have currying? I feel like that is not set out clearly in the argument.
Mathematically it's quite pretty, and it gives you elegant partial application for free (at least if you want to partially apply the first N arguments).
I feel like not having currying means your language becomes semantically more complicated, because where do lambdas come from?
I've never ever run into this. I haven't seen currying or partial application since college. Am I the imperative Blub programmer, lol?
if you don't find currying essential you haven't done pointfree enough. If you haven't done pointfree enough you haven't picked up equational reasoning yet, and it's the thing that holds you back in your ability to read abstractions easily, which in turn guides your arguments on clarity.
What benefit does drawing a distinction between parameter list and single-parameter tuple style bring?
I'm failing to see how they're not isomorphic.
They are isomorphic in the strong sense that their logical interpretations are identical. Applying Curry-Howard, a function type is an implication, so a curried function with type A -> B -> C is equivalent to an implication that says "If A, then if B, then C." Likewise, a tuple is a conjunction, so a non-curried function with type (A, B) -> C is equivalent to the logic statement (A /\ B) -> C, i.e., "If A and B then C." Both logical statements are equivalent, i.e., have the same truth tables.
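The isomorphism can also be witnessed directly with a curry/uncurry pair; here is a two-argument sketch in Python:

```python
def curry(f):
    # (A, B) -> C   becomes   A -> (B -> C)
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    # A -> (B -> C)   becomes   (A, B) -> C
    return lambda a, b: g(a)(b)

def add(a, b):
    return a + b

assert curry(add)(1)(2) == 3           # curried application
assert uncurry(curry(add))(1, 2) == 3  # the round trip is the identity
```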
However, as the article outlines, there are differences (both positive and negative) to using functions with these types. Curried functions allow for partial application, leading to elegant definitions, e.g., in Haskell, we can define a function that sums over lists as sum = foldl (+) 0 where we leave out foldl's final list argument, giving us a function expecting a list that performs the behavior we expect. However, this style of programming can lead to weird games and unwieldy code because of the positional nature of curried functions, e.g., having to use function combinators such as Haskell's flip function (with type (A -> B -> C) -> B -> A -> C) to juggle arguments you do not want to fill to the end of the parameter list.
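The same pattern, and the same pain, show up in Python: `functools.partial` only fills arguments from the left, so fixing a later argument needs the moral equivalent of `flip` (a sketch using `reduce` as the foldl analogue):

```python
from functools import partial, reduce
import operator

# Analogue of Haskell's `sum = foldl (+) 0`: partially apply the leading argument.
sum_ = partial(reduce, operator.add)
assert sum_([1, 2, 3]) == 6

# Fixing a *later* argument (the list) while leaving the function open
# can't be done with partial alone; a lambda does the argument juggling:
fold_over = lambda f: reduce(f, [1, 2, 3])
assert fold_over(operator.mul) == 6
```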
Please see my other comment below, and maybe re-read the article. I'm not asking what the difference is between curried and non-curried. The article draws a three way distinction, while I'm asking why two of them should be considered distinct, and not the pair you're referring to.
Apologies, I was focused on the usual pairing in this space and not the more subtle one you're talking about. As others have pointed out, there isn't really a semantic difference between the two. Both approaches to function parameters produce the same effect. The differences are purely in "implementation," either theoretically or in terms of systems-building.
From a theoretical perspective, a tuple expresses the idea of "many things" and a multi-argument parameter list expresses the idea of both "many things" and "function arguments." Thus, from a cleanliness perspective for your definitions, you may want to separate the two, i.e., require functions to have exactly one argument and then pass a tuple when multiple arguments are required. This theoretical cleanliness does result in concrete gains: writing down a formalism for single-argument functions is decidedly cleaner (in my opinion) than for multi-argument functions, and implementing a basic interpreter off of this formalism is, subsequently, easier.
From a systems perspective, there is a clear downside in this space. If tuples exist on the heap (as they do for most functional languages), you induce a heap allocation when you want to pass multiple arguments! This pitfall is evident with the semi-common beginner's mistake with OCaml algebraic datatype definitions where the programmer inadvertently wraps the constructor type with parentheses, thereby specifying a constructor of one-argument that is a tuple instead of a multi-argument constructor (see https://stackoverflow.com/questions/67079629/is-a-multiple-a... for more details).
That's a fair point, they are all isomorphic.
The distinction is mostly semantic so you could say they are the same. But I thought it makes sense to emphasize that the former is a feature of function types, and the latter is still technically single-parameter.
I suppose one real difference is that you cannot feed a tuple into a parameter list function. Like:
fn do_something(name: &str, age: u32) { ... }
let person = ("Alice", 40);
do_something(person); // doesn't compile
Probably just that having parameter-lists as a specific special feature makes them distinct from tuple types. So you may end up with packing/unpacking features to convert between them, and a function being generic over its number of parameters is distinct from it being generic over its input types. On the other hand you can more easily do stuff like named args or default values.
The parameter list forces the individual arguments to be visible at the call site. You cannot separate the packaging of the argument list from invoking the function (barring special syntactic or library support by the language). It also affects how singleton tuples behave in your language.
The article is about programmer ergonomics of a language. Two languages can have substantially different ergonomics even when there is a straightforward mapping between the two.
It's not that they are meaningfully different. It's just acknowledging if you really want currying, you can say 'why not just use a single parameter of tuple type'.
Then there's an implication of 'sure, but that doesn't actually help much if it's not standard', and then it's not addressed further.
The tuple style can't be curried (in Haskell).
That's not what I'm talking about.
The article draws a three way distinction between curried style (à la Haskell), tuples and parameter list.
I'm talking about the distinction it claims exists between the latter two.
all three are isomorphic. but in some languages if you define a function via something like `function myFun(x: Int, y: Bool) = ...` and also have some value `let a: (Int, Bool) = (1, true)` it doesn't mean you can call `myFun(a)`. because a parameter list is treated by the language as a different kind of construct than a tuple.
A language which truly treats an argument list as a tuple can support this:
…and that will have the effect of binding a, b, and c as arguments in the called function.

In fact many “scripting” languages, like Javascript and Python, support something close to this using their array type. If you squint, you can see them as languages whose functions take a single argument that is equivalent to an array. At an internal implementation level this equivalence can be messy, though.
Lower level languages like C and Rust tend not to support this.
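Python's splat operators are the closest mainstream version of this single-tuple view (a sketch):

```python
def move(x, y, z):
    return (x, y, z)

args = (1, 2, 3)

# Unpack one tuple into a parameter list at the call site...
assert move(*args) == (1, 2, 3)

# ...and collect a parameter list back into one tuple at the definition site.
def collect(*params):
    return params

assert collect(1, 2, 3) == (1, 2, 3)
```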
Rust definitely should. C++'s std::initializer_list is a great tool, and you wouldn't need macros for variadic functions anymore.
Presumably creating a different class for parameter lists allows you to extend it with operations that aren't natural to tuples, like named arguments.
Right. Currying as the default means of passing arguments in functional languages is a gimmick, a hack in the derogatory sense. It's low-level and anti-declarative.
The article lists two arguments against currying:

2 is followed by a single example of how it doesn't work the way the author would expect it to in Haskell. It's not a strong case in my opinion. Dismissed.
Prior to this article, I didn't think of currying as being something a person could be "for" or "against." It just is. The fact that a function of multiple inputs can be equivalently thought of as a function of a tuple can be equivalently thought of as a composite of single-input functions that return functions is about cognition, and understanding structure, not code syntax.
But it is about code syntax. Languages like Haskell make it part of the language by only supporting single-argument functions. So currying is the default behaviour for programmers.
I think you are focusing on the theoretical aspect of partial application and missing the actual argument of the article, which is that having it be the default, implicit way of defining and calling functions isn't a good programming interface.
Similar to how lambda calculus "just is" (and it's very elegant and useful for math proofs), but nobody writes non-trivial programs in it...
Make that almost nobody.
I wrote a non-trivial lambda program [1] which enumerates proofs in the Calculus of Constructions to demonstrate [2] that BBλ(1850) > Loader's Number.
[1] https://github.com/tromp/AIT/blob/master/fast_growing_and_co...
[2] https://codegolf.stackexchange.com/questions/176966/golf-a-n...
I'm a programmer, not a computer scientist. The equivalence is a computer science thing. They are logically equivalent in theoretical computer science. Fine.
They are not equally easy for me to use when I'm writing a program. So from a software engineering perspective, they are very much not the same.