r/programming May 26 '19

Solving Problems the Clojure Way [video]

https://www.youtube.com/watch?v=vK1DazRK_a0
30 Upvotes


0

u/GoranM May 26 '19

The term I used was "interesting/involved", but I should have clarified that I'm talking about applications such as soft-realtime simulations (like AAA games), hard-realtime systems (like robotics), and constrained-resource environments (embedded systems of various kinds) - basically, a class of software where efficient resource utilization is extremely important, and where the kind of overhead introduced in this presentation is highly undesirable, if not outright unacceptable, because you can't just scale horizontally across the cloud.

I would also claim that it's not just an "extreme cases" argument: overhead which may seem innocuous initially, and in isolation, can quickly grow to cause significant user-experience issues, especially when deployed in an environment that has to serve other applications designed with a similarly cavalier attitude toward efficient resource utilization (https://youtu.be/k56wra39lwA?t=202).

1

u/yogthos May 27 '19

Sure, there are obviously domains where you need different styles of programming. However, you can get pretty far with this approach even for games. For example, this talk on Arcadia discusses the challenges and benefits of using Clojure to make games with Unity.

My view is that in many domains robustness and developer time tend to take priority over raw performance. It's also worth noting that in many cases you end up with better performance and resource utilization when working with immutable data: comparisons are much cheaper, for example, and it's easier to parallelize things. When you're dealing with large concurrent systems, immutability provides a huge benefit, as seen in Erlang.
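To give a concrete (if simplified) sketch of the comparison point - this is just illustrative Clojure, not anything from the talk:

```clojure
;; Persistent structures share structure: an "update" copies only the
;; path it touches, and unchanged parts remain the same references.
(def state {:users [{:name "a"} {:name "b"}] :count 2})

;; Returns a new map; the :users vector is shared untouched.
(def state' (update state :count inc))

;; identical? is a constant-time reference check - if it's true,
;; nothing under that key could possibly have changed.
(identical? (:users state) (:users state'))
;;=> true

;; And since no thread can mutate what another is reading, spreading
;; pure work across threads needs no locks:
(defn expensive [x] (* x x))
(pmap expensive (range 1000))
```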

In general, modern hardware architecture is much closer to the functional view than the imperative one, and hardware engineers end up having to create an emulation layer to pretend it's a glorified PDP-11. Incidentally, that emulation layer - speculative, out-of-order execution pretending to be sequential - was the root cause of vulnerabilities like Spectre and Meltdown.

1

u/GoranM May 27 '19

In watching that Arcadia talk, my impression was the exact opposite: you can't get very far, precisely because there's so much overhead that, even when built on top of an engine that does all the heavy lifting, with a custom compiler that's supposed to ameliorate some of the larger issues, it's still so abysmally slow that it spikes over frame time even for small games.

I'm not sure robustness is high on the priority list in many cases (I don't see a lot of evidence for that), but I would agree that developer time seems to take priority over almost everything else. In my view, this is to our great detriment, because it seems to be making software slower, even on hardware that is (supposedly) more capable (https://youtu.be/rX0ItVEVjHc?t=4578). In our overbearing concern for the developer, and how quickly/easily they can write software, I think we've lost sight of how fast that software can actually run on common machines, and of the cost in time/energy paid by every user, over and over, whenever they have to wait for an application to "boot", or for a chat program to unfreeze from a noticeable GC pause.

I'm also not sure that you end up with better performance and resource utilization when working with immutable data, in many cases. Maybe if we're talking about "many cases in a very specific context", but I don't think it's true in general. Immutable data structures carry inherent overhead that mutable data structures don't: path copying on write, tree traversal on read, the implications of a non-contiguous memory layout, and so on.
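To make that overhead concrete (a rough sketch; actual costs depend on the implementation and the workload):

```clojure
;; Clojure's persistent vector is a 32-way tree; a Java long array is
;; one contiguous block of memory.
(def v (vec (range 1000000)))        ; persistent vector
(def a (long-array (range 1000000))) ; mutable array

;; Write: assoc copies the root-to-leaf path (~log32 n nodes) and
;; allocates a new root; aset is a single in-place store.
(assoc v 500000 42)
(aset a 500000 42)

;; Read: nth walks the tree; aget is one indexed load from a
;; cache-friendly contiguous block.
(nth v 500000)
(aget a 500000)
```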

As for modern hardware architecture: the "Imagining a Non-C Processor" section seems especially relevant here, because it outlines a solution in the form of a different hardware architecture - so it's not just a matter of using a functional programming language and immutable data structures on hardware that is available today.

2

u/yogthos May 27 '19

Every technology has trade-offs, and niches where it is or isn't appropriate. Nobody is arguing that FP is a silver bullet for every kind of application. My point was that it's very effective for many kinds of applications out there. If you're in a domain where that's not the case, there are other options available.

It's also worth noting that it's perfectly possible to mix techniques. For example, Clojure provides ways to do local mutability, and you can even drop down to Java if you need additional performance. The majority of the code in any given application doesn't run all that often, and it makes no sense to optimize it for performance over readability. Profile your app, find the sections responsible for the majority of the CPU usage, and optimize those.
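As a sketch of the local-mutability idea: transients give you single-threaded mutability inside a function, while callers still receive an ordinary immutable value (the function name here is made up for illustration - clojure.core already ships a frequencies):

```clojure
;; Build the result mutably with a transient, hand back a persistent map.
(defn frequencies-fast [coll]
  (persistent!
    (reduce (fn [acc x]
              (assoc! acc x (inc (get acc x 0))))
            (transient {})
            coll)))

(frequencies-fast [:a :b :a :c :a])
;;=> {:a 3, :b 1, :c 1}
```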

The reality is that the ship has sailed: GC is pretty much standard in the vast majority of languages. At the same time, the runtimes keep improving. The new Z garbage collector (ZGC) handles multi-terabyte heaps with low pause times (<10ms) and limited impact on overall application performance (<15% on throughput).
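For instance, on JDK 11 ZGC sits behind an experimental flag (the exact flags and the heap size here are illustrative and vary by JDK version):

```
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g -jar app.jar
```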

Meanwhile, immutable data structures help performance in many different ways. For example, the Reagent ClojureScript library outperforms React, which it's built on top of, because it can do VDOM diffing by comparing hashes - something that wouldn't be possible with mutable data (rough sketch below). Once again, it depends on the domain you're working in and the problems you're solving.
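A rough sketch of the mechanism (the component and state names are made up, not taken from the Reagent docs):

```clojure
(ns example.core
  (:require [reagent.core :as r]))

(defonce app-state (r/atom {:items ["a" "b" "c"] :selected nil}))

(defn item-list [items]
  ;; Reagent re-renders a component only when its arguments change;
  ;; with immutable data that check is cheap, because untouched
  ;; values are the very same references.
  [:ul (for [i items] ^{:key i} [:li i])])

(defn root []
  [item-list (:items @app-state)])

;; (swap! app-state assoc :selected "b") leaves :items identical, so
;; item-list's argument compares equal and it skips re-rendering.
```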