dilawar a day ago

Someone has historical insights into why async/await seems to have taken over the world?

I often write Rust and I don't find it very attractive, but so many good projects seem to advertise it as a "killer feature". Diesel.rs doesn't have async, and they claim that perf improvement may not be worth it (https://users.rust-lang.org/t/why-use-diesel-when-its-not-as...).

For a single threaded JS program, async makes a lot of sense. I can't imagine any alternative pattern to get concurrency so cleanly.

  • robmccoll a day ago

    In single-threaded scripting languages, it arose as a way to overlap computation with communication without introducing multithreading and dealing with the fact that memory management and existing code in the language aren't thread-safe. In other languages it seems to be used as a way to achieve green threading with an opt-in runtime written as a library within the language, rather than doing something like Go, where the language and built-in runtime manage scheduling goroutines onto OS threads. Personally I like Go's approach. Async/await seems like achieving a similar thing with way more complexity. Most of the time I want an emulation of synchronous behavior. I'd rather be explicit about when I want something to go run on its own.

    • fmajid 20 hours ago

      Agreed. Async I/O is something where letting the runtime keep track of it for you doesn't incur any extra overhead, unlike garbage collection, and that makes for a much more natural pseudo-synchronous programming model.

  • devjab 21 hours ago

    Microsoft did some research on it 15-20 years ago for .NET which showed that sync doesn't scale for I/O workloads. The rest of the world sort of "knew" this at that point, and all the callback and state-machine hell which came before was also leading the world toward async/await, but the Microsoft research kind of formed the foundation for "universal" acceptance. It's not just for single-threaded JS programs: you almost never want to tie up your threads even when you can have several of them, because it's expensive in memory. As you'll likely see in this thread, some lower-level programmers will mention that they prefer to build stackful coroutines themselves. Obviously that is not something Microsoft wanted people to have to do with C#, but it's a thing people do in C/C++ and similar (probably not with C#), and if you're lucky, you can even work in a place that doesn't turn it into the "hell" part.

    I can't say why Diesel.rs doesn't need async, and I would like to point out that I know very little about Diesel.rs beyond the fact that it has to do with databases. It would seem strange, though, that anything working with databases, which is an I/O-heavy workload, would not massively benefit from async.

  • lukaslalinsky a day ago

    https://en.wikipedia.org/wiki/C10k_problem

    Because when you require 1 thread per 1 connection, you have trouble getting to thousands of active connections and people want to scale way beyond that. System threads have overhead that makes them impractical for this use case. The alternatives are callbacks, which everybody hates and for a good reason. Then you have callbacks wrapped by Futures/Promises. And then you have some form of coroutines.

    Keep in mind that what Zig is introducing is not what other languages call async/await. It's more like the I/O abstraction in Java, where you can use the same APIs with platform threads and virtual threads; but in Zig you will need to pass the io parameter around, while in Java it's done in the background.

    • matheusmoreira a day ago

      > The alternatives are callbacks, which everybody hates and for a good reason. Then you have callbacks wrapped by Futures/Promises. And then you have some form of coroutines.

      The event loop model is arguably equivalent to coroutines. Just replace yield with return and have the underlying runtime decide which functions to call next by looping through them in a list. You can even stall the event loop and increase latency if you take too long to return. It's cooperative multitasking by another name.
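
      The equivalence can be sketched in a few lines of Python (a toy illustration, not any particular runtime's implementation): generators play the role of coroutines, `yield` hands control back, and the loop round-robins through whatever is still runnable.

```python
# Toy cooperative scheduler: coroutines are generators, `yield` is the
# suspension point, and the loop decides who runs next.
def task(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield                      # give the event loop a turn

def event_loop(tasks):
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)          # resume until the next yield
            tasks.append(current)  # still alive: requeue it
        except StopIteration:
            pass                   # finished, drop it

log = []
event_loop([task("a", 2, log), task("b", 2, log)])
print(log)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

      Replace `yield` with `return` plus re-registration of the callback and you get the event-loop-by-another-name described above.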

      • zozbot234 21 hours ago

        Coroutines/resumable functions are not restricted to yielding to a single runtime or event loop, they can simply "resume" each other directly. There are also extensions of coroutines that are more than one-shot (a resumable function where the current state can be copied and invoked more than once) and/or are allowed to provide values when "resuming" other code, which also goes beyond the common "event loop" model.

      • lukaslalinsky 21 hours ago

        It's all the same concept; it's just a matter of who/what is managing the state while you are waiting for I/O. When you yield, the compiler/runtime makes sure the context is saved. When you return, it's your responsibility.

    • troupo a day ago

      > The alternatives are callbacks

      No. The alternative is lightweight/green threads and actors.

      The thing with await is that it can be retrofitted onto existing languages and runtimes with relatively little effort. That is, it's significantly less effort than retrofitting an actual honest-to-god proper actor system a la Erlang.

      • matheusmoreira a day ago

        > The alternative is lightweight/green threads and actors.

        How lightweight should threads be to support high scale multitasking?

        Writing my own language, capturing stack frames in continuations resulted in figures like 200-500 bytes. Grows with deeply nested code, of course, but surely this could be optimized...

        https://www.erlang.org/docs/21/efficiency_guide/processes.ht...

        This document says Erlang processes use 309 words which is in the same ballpark.

        • troupo 21 hours ago

          I didn't have to answer :) Thank you for looking it up.

          Erlang also enjoys quite a lot of optimizations at the VM level. E.g. a task is parked/hibernated if there's no work for it to perform (e.g. it's waiting for a message), the switch between tasks is extremely lightweight, VM internals are re-entrant and employ CPU-cache-friendly data structures, garbage collection is both lightweight and per-thread/task, etc.

      • antihero a day ago

        Isn’t await often just sugar around the underlying implementation, be it green threads, epoll, picoev, etc.?

        • troupo 21 hours ago

          I think it depends on the language?

          Javascript's async/await probably started as a sugar for callbacks (since JS is single-threaded). Many others definitely have that as sugar for whatever threading implementation they have. In C# it's sugar on top of the whole mechanism of structured concurrency.
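
          For what it's worth, the "sugar over callbacks" view can be made concrete with Python's asyncio (an analogy only, not a claim about JS or C# internals): awaiting a future and attaching a done-callback to it observe the exact same underlying event.

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    via_callback = []
    # callback style: run this when the future resolves
    fut.add_done_callback(lambda f: via_callback.append(f.result()))
    loop.call_soon(fut.set_result, "done")
    via_await = await fut   # sugar: suspend here, resume on the same event
    return via_await, via_callback

result = asyncio.run(main())
print(result)  # ('done', ['done'])
```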

          But I'm mostly talking out of my ass here, since I don't know much about this topic, so everything above is hardly a step above speculation.

      • lukaslalinsky 21 hours ago

        > The alternative is lightweight/green threads and actors.

        Those are all some form of coroutines.

  • jandrewrogers 19 hours ago

    The classic use case for async was applications with extreme I/O intensity, like high-end database engines. If designed correctly it is qualitatively higher performance than classic multithreading. This is the origin of async style.

    Those large performance gains do not actually come from async style per se, which is where people become confused.

    What proper async style allows that multithreading does not is that you can design and implement sophisticated bespoke I/O and execution schedulers for your application. Almost all the performance gains are derived from the quality of the custom scheduling.

    If you delegate scheduling to a runtime, it almost completely defeats the point of writing code in async style.

    • ajross 19 hours ago

      > The classic use case for async was applications with extreme I/O intensity, like high-end database engines. If designed correctly it is qualitatively higher performance than classic multithreading.

      FWIW, I'm not aware of any high end database engines that make significant use of async code on their performance paths. They manage concurrent state with event loops, state machines, and callbacks. Those techniques, while crufty and too old to be cool, are themselves significantly faster than async.

      Async code (which is isomorphic to process-managed green threads) really isn't fast. It's just that OS thread switching is slow.

  • felipellrocha 21 hours ago

    Being fully multithreaded comes with significant overhead, while browsers have essentially proved how much unreasonable performance you can get out of a single CPU thanks to JavaScript's async model.

    It is hard to describe just how much more can be done on a single thread with just async.

  • api a day ago

    I think it’s a terrible complexity multiplying workaround for the fact that we can’t fix our ancient 1970s OS APIs. Threads should be incredibly cheap. I should be able to launch them by the tens of millions, kill them at will, and this should be no more costly than goroutines.

    (All modern OSes in common use are 1970s vintage under the hood. All Unix is Bell Labs Unix with some modernization and veneer, and NT is VMS with POSIX bolted on later.)

    Go does this by shipping a mini VM in every binary that implements M:N thread pooling fibers in user space. The fact that Go has to do this is also a workaround for OS APIs that date back to before disco was king, but at least the programmer doesn’t have to constantly wrestle with it.

    Our whole field suffers greatly from the fact that we cannot alter the foundation.

    BTW I use Rust async right now pretty heavily. It strikes me as about as good as you can do to realize this nightmare in a systems language that does not ship a fat runtime like Go, but having to actually see the word “async” still makes me sad.

    • nananana9 21 hours ago

      You don't need a fat runtime to do fibers/stackful coroutines. You don't need any language support for that matter, just 50 lines of assembly to save registers on the stack and switch stack pointers. Minicoro [1] is a C library that implements fibers in a single header (just the creation/destruction/context switching, you have to bring your own scheduler).

      Our game engine has an in-house implementation - creating a fiber, scheduling it, and waiting for it to complete takes ~300ns on my box. Creating an OS thread and join()ing it is about 1000× slower, ~300us.

      [1] https://github.com/edubart/minicoro

    • zozbot234 21 hours ago

      "Threads" are expensive because they are OS-managed "virtual" cores as seen by the current process. You can run coroutines as "user-level" tasks on top of kernel threads, and both Go and Rust essentially allow this, though in slightly different ways.

    • sapiogram 21 hours ago

      Kill threads at will?

      • api 19 hours ago

        That requires some explanation. Basically I think runtimes should be abort safe and have some defined thing that happens when a thread is aborted. Antiquated 70s blocking APIs do not, or do not consistently.

        It’s a minor gripe compared to the heaviness of threads and making every programmer hand roll fibers by way of async.

  • rr808 a day ago

    I think it's that as JavaScript has taken over the world, people use those paradigms in other languages. It makes absolutely no sense to me as someone who doesn't touch JS or Python.

kbd 21 hours ago

I wrote my shell prompt in Zig years ago in part because I was interested to use its async/await to run all the git calls in parallel for the git status. My prompt is still fast despite never having parallelized things -- slightly slower now after adding Jujutsu status -- but I'm looking forward to getting to do the thing I originally wanted and have my super fast shell prompt.

To speak to the Zig feature: as a junior I kept bugging the seniors about unit testing and how you were supposed to test things that did IO. An explanation of "functional core imperative shell" would have been helpful, but their answer was: "wrap everything in your own classes, pass them everywhere, and provide mocks for testing". This is effectively what Zig is doing at a language level.

It always seemed wrong to me to have to wrap your language's system libraries so that you could use them the "right way" that is testable. It actually turns out that all languages until Zig have simply done it wrong, and IO should be a parameter you pass to any code that needs it to interact with the outside world.
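
The "pass IO as a parameter" idea is plain dependency injection. A minimal Python sketch (the names `RealIO`/`FakeIO`/`greet` are made up for illustration, and this only loosely mirrors what Zig's Io interface does):

```python
class RealIO:
    def read(self, path):
        with open(path) as f:          # actual filesystem access
            return f.read()

class FakeIO:
    def __init__(self, files):
        self.files = files             # an in-memory "filesystem" for tests
    def read(self, path):
        return self.files[path]

def greet(io, path):
    # all I/O flows through the injected parameter, so tests never touch disk
    return "hello " + io.read(path).strip()

print(greet(FakeIO({"name.txt": "world\n"}), "name.txt"))  # hello world
```

Production code gets `RealIO`; tests get `FakeIO`. The difference with Zig is that the language's own standard library takes the parameter, so you don't wrap anything yourself.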

devnull3 21 hours ago

All the stunts with async/await, or goroutines in Go, stem from the fact that there is no kernel support for something lighter than POSIX threads.

Shouldn't the OS kernel innovate in this area instead of different languages in userland attempting to solve it?

  • barddoo 21 hours ago

    Fair. The languages have to come up with something based on APIs that were not meant for that, like io_uring, etc.

nananana9 21 hours ago

Async/await feels very misguided to me. It's an extremely complex language feature for something that can be done way better, completely in userspace.

You can implement stackful coroutines yourself in C/C++, you need like 30 lines of assembly (as you can't switch stack pointers and save registers onto the stack from most languages). This is WAY better than what you could do for example with the way more convoluted C++ co_async/co_await for two reasons:

1. Your coroutine has an actual stack - you don't have to allocate a new "stack frame" on the heap every time you call a function and await it.

2. You don't need special syntax for awaiting - any function can just call your Yield() function, which just saves the registers onto the stack and jumps out of the coroutine.

Minicoro [1] is a single-file library that implements this in C. I have yet to dig into the Zig implementation - maybe it's better than the C++/Rust ones, but the fact they call it "async/await" doesn't bring me much hope.

  • messe 3 hours ago

    Zig's implementation is in userspace.

barddoo a day ago

What changed, why it matters, and how to use the new API.

pyrolistical 21 hours ago

> Despite the completion order varying, each task correctly writes to its designated position in the results array, showing proper concurrent data handling.

Huh? It’s not like the entire array was passed into each task. Each task just received a pointer to a usize to write to.

Where is concurrent data writing in the example?

  • barddoo 21 hours ago

    The requests were made concurrently. I don't understand your question. Passing in an array or a pointer does not matter.

ajross a day ago

Is it time now to say that async was a mistake, a la C++ exceptions? The recent futurelock discussion[1] more or less solidified for me that this is all just a mess. Not just that one bug, but the coloring issue mentioned in the blog post (basically async "infects" project code, requiring that you end up porting or duplicating almost everything -- this is especially true in Python). The general cognitive load of debugging inside-out code is likewise really high, even if the top-level expression of the loop generator or whatever is clean.

And it's all for, what? A little memory for thread stacks (most of which ends up being a wash because of all the async contexts being tossed around anyway -- those are still stacks and still big!)? Some top-end performance for people chasing C10k numbers in a world that has scaled into datacenters for a decade anyway?

Not worth it. IMHO it's time to put this to bed.

[1] No one in that thread or post has a good summary, but it's "Rust futures consume wakeup events from fair locks that only emit one event, so can deadlock if they aren't currently being selected and will end up waiting for some other event before doing so."

  • jayd16 a day ago

    I really wish people would get over the coloring meme.

    Knowing if a function will yield the thread is actually extremely relevant knowledge you want available.

    • NeutralForest a day ago

      What bothers me, for example in Python, with the function coloring is that it creeps everywhere and you need to rewrite your functions to accommodate async. I think being able to take and return futures or promises and handle them how you wish is better ergonomics.

      • maleldil 21 hours ago

        > I think being able to take and return futures or promises and handle them how you wish is better ergonomics.

        You can do that. If you don't await an async call, you have a future object that you can handle however you want.
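
          In Python terms (asyncio here as a stand-in; `work` is an invented example function), that looks like:

```python
import asyncio

async def work(n):
    await asyncio.sleep(0)   # stand-in for real I/O
    return n * n

async def main():
    # Not awaiting right away: we hold plain task/future objects...
    tasks = [asyncio.create_task(work(n)) for n in range(3)]
    # ...can do unrelated work here while they run...
    return await asyncio.gather(*tasks)   # ...and settle them when we choose

results = asyncio.run(main())
print(results)  # [0, 1, 4]
```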

        • jayd16 21 hours ago

          Yeah but to be fair, that can have adverse effects if you, say, busy wait.

          The sync code might be running in an async context. Your async context might only have one thread. The task you're waiting for can never start because the thread is waiting for it to finish. Boom, you're deadlocked.

          Async/await runtimes will handle this because awaiting frees the thread. So, the obvious thing to do is to await but then it gets blamed for being viral.

          Obviously busy waiting in a single threaded sync context will also explode tho...
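
          A minimal Python/asyncio sketch of that single-thread deadlock (the names `set_flag`/`busy_wait` are invented, and a timeout stands in for the hang so the example actually terminates):

```python
import asyncio, time

async def set_flag(flag):
    flag["done"] = True

def busy_wait(flag, timeout=0.2):
    # Sync busy-wait on the only thread: the setter task can never be
    # scheduled, so this is the deadlock in miniature (we bail out).
    deadline = time.monotonic() + timeout
    while not flag["done"]:
        if time.monotonic() > deadline:
            return False               # gave up waiting
        time.sleep(0.01)
    return True

async def main():
    flag = {"done": False}
    asyncio.get_running_loop().create_task(set_flag(flag))
    stuck = not busy_wait(flag)        # blocks the loop: the task starves
    await asyncio.sleep(0)             # awaiting yields: now the task runs
    return stuck, flag["done"]

result = asyncio.run(main())
print(result)  # (True, True)
```

The busy-wait never sees the flag; a single `await` later, the starved task runs immediately.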

    • bcrosby95 21 hours ago

      This is like saying knowing if you're dealing with NEAR pointers or FAR pointers is extremely relevant. I reject the premise - a model that forces me to think about these things is a degenerate model.

      • jayd16 21 hours ago

        That's fine but the alternatives are insufficient.

        • ajross 15 hours ago

          Obviously "insufficient" is always going to be subjective. But some technologies really do end up bad by consensus, and I'm getting that smell from async. There really aren't any world class software efforts that rely heavily on async code. Big projects that do end up complaining about maintenance and cognitive hassle, and (c.f. the futurelock thing) are starting to show the strains we saw with C++ exceptions back in the day.

          Async looks great in a blog post full of clean examples. It... kinda doesn't in four year old code written by people who've left the project.

    • bmacho a day ago

      Funny you mention this.

      Zig's colorless async was purely solving the unergonomic calling convention, at the cost of knowing if a function is async or not (compiler decides, does not give any hints and if you get it wrong then that's UB).

      Arguably the main problem with async is that it is unergonomic. You always have to act like there were 2 types of functions, while, in practice, these 2 types are almost always self-evident and you can treat sync and async functions the same.

      • jayd16 a day ago

        I don't really know Zig. How does it handle the common GUI thread pattern where you get lock free concurrency by funneling the async GUI code through the GUI thread?

        When you know what functions and blocks are synchronous, you know the thread will not be yielded. If you direct async tasks to run on a single thread, you know they will never run concurrently. These together mean you can use that pattern to get lock free critical sections. You don't need to write thread-safe data structures.

        If a function can yield implicitly, how do you have the control you need to pull this off?

        It's a really common pattern in GUI dev so how does Zig handle that?

    • Calavar a day ago

      Of course it's useful, that's why function modifiers like 'const' or 'virtual' (thinking from a C++ perspective) are widely seen as useful, but making one function virtual doesn't force you to propagate that all the way up the call tree.

      • jayd16 a day ago

        Const is similar, now that you mention it.

        • Calavar a day ago

          Const is the reverse.

          Constness is infectious down the stack (the callee of a const function must be const) while asyncness is infectious up the stack (the caller of an async function must be async). So you can gradually add constness to subsections of a codebase while refactoring, touching only those local parts of the codebase, as opposed to async, where adding a single call to an async function requires you to touch all functions back up to main.
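
          The upward direction is easy to see in Python (a toy asyncio example; `fetch`/`handler` are invented names):

```python
import asyncio

async def fetch():           # the leaf goes async...
    await asyncio.sleep(0)   # stand-in for real I/O
    return 42

async def handler():         # ...so its caller must become async...
    return await fetch()

async def main():            # ...and so on, all the way up to the entry point
    return await handler()

value = asyncio.run(main())
print(value)  # 42
```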

    • valcron1000 21 hours ago

      > Knowing if a function will yield the thread is actually extremely relevant knowledge you want available.

      When is this relevant beyond pleasing the compiler/runtime? I work in C# and JS and I could not care less. Give me proper green threads and don't bother with async.

      • jayd16 21 hours ago

        Knowing when execution will yield is useful when you want to hold onto a thread. If you run your GUI related async tasks on the GUI thread you don't have to worry about locks or multi threaded data structures. Only a single GUI operation will happen at a time.

        If yields are implicit, you don't have enough control to really pull that off.

        Maybe it's possible, but I haven't seen a popular green-threaded UI framework that lets you run tasks in background threads implicitly. If I need to call a bunch of code to explicitly parcel out background work, that just ends up being async/await with less sugar.
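
        The lock-free pattern being described looks like this in Python's single-threaded asyncio (an analogy for the GUI-thread case; `bump` is an invented name):

```python
import asyncio

counter = {"n": 0}

async def bump(times):
    for _ in range(times):
        # Read-modify-write with no lock: safe, because only one task runs
        # at a time on the loop and there is no await inside this section.
        counter["n"] += 1
        await asyncio.sleep(0)   # explicit, visible yield point

async def main():
    await asyncio.gather(bump(100), bump(100))

asyncio.run(main())
print(counter["n"])  # 200
```

With implicit yields, the "no await inside" guarantee disappears, which is the control being argued for.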

    • kibwen 21 hours ago

      Same. Colored functions are just effect systems, which are extremely useful.

      Javascript's async as of ten years ago just happened to be an especially annoying implementation of a specific effect.

    • pton_xd a day ago

      Look at the node.js APIs: readFile, readFileSync, writeFile, writeFileSync ... and on and on. If that's not a meme then I don't know what is.

      • rafaelmn a day ago

        And the alternative without async-await is ? blocking the event loop or the callback pyramid.

        Node is one place where async-await has zero counter arguments and every alternative is strictly worse.

        • ojosilva 21 hours ago

          The problem with Node is that the async decision is in the hands of the leaf node, and it bubbles up to the parent where my code sits. Async/await is nice and a goal in most modern Node, but there are codebases (old and new) where async/await is just not an option, for many reasons.

          Node dictates that when faced with an async function the result is that I must either implement async myself so I can do await or go into callback rabbit holes by doing .then(). If the function author is nice, they will give me both async and sync versions: readFile() and readFileSync(). But that sucks.

          The alternative would be that 1) the decision to go async were mine; 2) the language supports my decision with syntax/semantics.

          Ie. if I call the one and only fs.readFile() and want to block I would then do

                 sync fs.readFile()
          
          Node would take care of performing a nice synchronous call that fits its event-loop logic and callback pyramid. End of story. And not via some JS implementation such as deasync [1], but in core Node.

          1. https://www.npmjs.com/package/deasync
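
          Python's asyncio happens to work roughly the way proposed above, which makes for a concrete sketch (hedged: `read_file` is an invented stand-in, and `asyncio.run` plays the role of the hypothetical `sync` keyword):

```python
import asyncio, os, tempfile

async def read_file(path):
    # the one and only API, and it is async
    await asyncio.sleep(0)             # stand-in for real async I/O
    with open(path) as f:
        return f.read()

def read_file_blocking(path):
    # The *caller* decides to block; no separate readFileSync is needed.
    # asyncio.run drives the coroutine to completion synchronously.
    return asyncio.run(read_file(path))

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
content = read_file_blocking(path)
os.remove(path)
print(content)  # hello
```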

        • luke5441 a day ago

          They could have added threads to Node as well? Granted, it would have been a lot of difficult work.

          • dns_snek a day ago

            Losing threads and moving to the async I/O model was the motivation behind Node in the first place.

            https://nodejs.org/en/about

            • luke5441 21 hours ago

              If you use async I/O you can just use the Chrome JavaScript runtime as-is. I would claim it was the only low-effort model available to them and therefore not motivation.

              The motivation for node was that users wanted to use JavaScript on the server.

              • dns_snek 21 hours ago

                > If you use async I/O you can just use the Chrome JavaScript runtime as-is.

                What do you mean? A JS runtime can't do anything useful on its own, it can't read files, it can't load dependencies because it doesn't know anything about "node_modules", it can't open sockets or talk to the world in any other way - that's what Node.js provides.

                > I would claim it was the only low-effort model available to them and therefore not motivation.

                It was a headline feature when it released.

                https://web.archive.org/web/20100901081015/https://nodejs.or...

                • luke5441 19 hours ago

                  Obviously you can add modules calling to C/C++ functionality to a scripting language runtime easily (and the interface to do that is already available for the browser implementation).

                  In the above link Node could be described as a Chrome V8 distribution with modules enabling building a web server.

                  Adding threading to a non-threaded scripting runtime is another ball game.

                  The point is that Node was forced into this model by V8 limitations, then sold it as an advantage. However, it is only one way to solve the problem, with its own trade-offs, and you have to look at the specific use case to see if it is really the best solution.

                  • dns_snek 18 hours ago

                    > Obviously you can add modules calling to C/C++ functionality to a scripting language runtime easily

                    Yes, obviously, that's what NodeJS does. But you can't "just use the V8 runtime as-is if you're doing async IO", it doesn't have those facilities at all.

                    Async IO wasn't just "sold as an advantage", it is an advantage. Websockets were gaining popularity around that time and async IO is a natural fit for that.

                    You would have to change the language and boil the ocean to make the runtime support multiple threads (properly).

                    But why? Just to end up with the inferior thread-per-request runtime (which by the way, still needs to support async because it's part of the language), that requires developers to write JS which is incompatible with browser JS, which would've eliminated most of the synergy between the two?

                    I really don't understand what you're going for here. I don't see a single advantage here.

                    • luke5441 14 hours ago

                      I think green threads (Java Virtual Threads, Go to an extent) are strictly superior to async/await.

                      If you don't have many threads, OS threads are okay as well. It is all about memory and scheduling overhead.

                      But that is just my opinion. You are welcome to have a different opinion.

          • jayd16 a day ago

            You mean like with web workers or something?

            • luke5441 21 hours ago

              With a shared interpreter/process state, like Python, Java, C, C++, ...

              Node is not a web page, so no reason to limit it to the same patterns.

              Then, the next issue would be thread safety. But that could be treated as a separate problem.

        • ajross 21 hours ago

          > And the alternative without async-await is ? blocking the event loop or the callback pyramid.

          No, just callbacks and event handlers (and an interface like select/poll/epoll/kqueue for the OS primitives on which you need to wait). People were writing threadless non-blocking code back in the 80's, and while no one loved the paradigm it was IMHO less bad than the mess we've created trying to avoid it.

          One of the problems I'm trying to point out is that we're so far down the rabbit hole in this madness that we've forgotten the problems we're actually trying to solve. And in particular we've forgotten that they weren't that hard to begin with.
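
          The callbacks-plus-select style still fits in a screenful. A Python sketch using the stdlib selectors module (epoll/kqueue under the hood), with a socketpair standing in for a real client:

```python
import selectors, socket

sel = selectors.DefaultSelector()
received = []

def on_readable(conn):
    # plain callback: the loop calls us when the OS says data is ready
    received.append(conn.recv(1024))
    sel.unregister(conn)

a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ, on_readable)
b.sendall(b"ping")

# the whole "event loop": block in select(), dispatch callbacks, repeat
while sel.get_map():
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)

a.close(); b.close()
print(received)  # [b'ping']
```

No threads, no await, no futures: just registered handlers and an OS wait primitive.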

    • iroddis a day ago

      Except function colouring is a symptom of two languages masquerading as one. You have to choose async or sync. Mixing them is dangerous. It’s not possible to call an async function from sync. Calling sync functions from async code runs the risk of holding the run lock for extended periods of time and losing the benefit of async in the first place.

      I don’t have anything against async, I see the value of event-oriented “concurrency”, but the complaint that async is a poison pill is valid, because the use of async fundamentally changes the execution model to co-operative multitasking, with possible runtime issues.

      If a language chooses async, I wish they’d just bite the bullet and make it obvious that it’s a different language / execution model than the sync version.

      • jayd16 21 hours ago

        I think this analogy is too extreme. That said, modern languages should probably consider the main function/threading context default to async.

        Calling sync code from async is fine in and of itself, but once you're in a problem space where you care about async, you probably also care about task starvation. So naively, you might try to throw yields around the code base.

        And your conclusion is you want the language to be explicit when you're async....so function coloring, then?

  • rr808 a day ago

    With Java 25 virtual threads, async definitely is no longer required and I hope it dies a slow and painful death. We have projects at work that have never more than 3 concurrent users that use rxjava and are a nightmare to work on.

  • the__alchemist a day ago

    Concur. I build my own tools in rust when I have to just to avoid it. It is splitting rust into 2 ecosystems, and I wish it didn't exist because it's a big compatibility barrier. We should be moving towards fewer of these; not more. Make code and applications easier to interop; Async makes it more difficult.

    • echelon a day ago

      I can't stand this aversion to async/await. It's not a big deal.

      I don't understand why async code is being treated as dangerous or as rocket science. You still maintain complete control, and it's straightforward.

      Now that we know about the "futurelock" issue, it will be addressed.

      I'm sure Rust and the cargo/crates ecosystem will even grow the ability to mark crates as using async so if you really care to avoid them in your search or blow up at compile time upon import, you can. I've been asking for that feature for unsafe code and transitive dependency depth limits.

      • jeltz 20 hours ago

        Because async Rust is a lot harder to reason about than sync code. And I want my code to be as easy to reason about as possible.

  • amelius a day ago

    What is wrong about C++ exceptions?

    • jandrewrogers a day ago

      There are cases in systems-y code where it is not safe to unwind the stack in the ordinary way and it is difficult to contain the side-effects. These can be non-obvious and subtle edge cases that are often difficult to see and tricky to handle correctly. C++ today is primarily used in code contexts where these kinds of issues can occur. This is why it is a standard practice to disable exceptions at build time i.e. -fno-exceptions.

      With the benefit of hindsight, explicit handling and unwinding has proven to be safer and more reliable.

      • amelius 21 hours ago

        But you can implement exceptions by using the same IF statement approach you would use for manual error handling. No need for unwinding tables and such if that optimization is a bridge too far for your specific target platform.

    • KerrAvon a day ago

      For one thing, they’re expensive and viral. “Zero overhead” implementations don’t take into account the need for unwind tables for every function/method that might be thrown across. They’re disabled in a lot of production environments for this reason.

      • neonz80 21 hours ago

        There was an interesting talk about C++ exceptions in smaller firmware at CppCon last year: https://youtu.be/bY2FlayomlE

        Basically, the overhead of exceptions is probably less than handling the same errors manually in any non-trivial program.

        Also, it's not like these tables don't exist in other languages. Both Rust and Go have to unwind.

      • amelius 21 hours ago

        But if you explicitly handle exceptions using IF statements then that's overhead too, right?

        • arbitrandomuser 21 hours ago

          Yes, but I think branch prediction essentially makes them zero-overhead.

          • neonz80 21 hours ago

            That's a different type of overhead than having unwind tables. With exceptions you wouldn't need a branch after each function call at all.

            • amelius 20 hours ago

              But a branch that is (almost) never taken has an overhead close to the overhead of a NOP instruction, which may be negligible on modern architectures.

              • neonz80 19 hours ago

                The CPU cannot remember an infinite number of branches. Also, many branches will increase code size. With exceptions, the unwind tables and unwind code can be placed elsewhere and not take up valuable L1 cache.

                • amelius 19 hours ago

                  > The CPU can not remember an infinite number of branches.

                  I suspect a modern CPU has a branch instruction saying "This branch will never be taken except in exceptions, so assume this branch is not taken". But I must admit I haven't seriously looked at assembly language for some time.

                  (EDIT: yes, compilers targeting modern CPUs, including x86 and ARM, let the programmer hint whether a branch is expected to be taken.)

                  > Also, many branches will increase code size.

                  I'd like to see some data on that. Of course branches take code size, but how much is that percentage-wise? I suspect not much.
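
                  For what it's worth, the portable way to express this today is a compiler hint rather than a dedicated CPU instruction: GCC/Clang's __builtin_expect, or the C++20 [[likely]]/[[unlikely]] attributes, which mainly steer code layout so the cold error path is moved out of the hot instruction stream. A small sketch (the function is illustrative):

                  ```cpp
                  #include <cassert>

                  // [[unlikely]] (C++20) marks the error branch as cold; compilers
                  // typically lay its code out of line so the hot path stays compact
                  // and falls through without a taken branch.
                  int checked_div(int a, int b, bool& ok) {
                      if (b == 0) [[unlikely]] {
                          ok = false;               // cold path: almost never taken
                          return 0;
                      }
                      ok = true;
                      return a / b;                 // hot path: falls through
                  }
                  ```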

                  • neonz80 18 hours ago

                    You should take a look at the presentation I mentioned elsewhere in this thread. You also have to keep in mind that it's not only the branches that use space, but also the error handling code, which must be duplicated for every single call to a particular function.

                    • amelius 18 hours ago

                      Ok, thanks. But that code needs to be loaded into memory only if the branch takes place. Which, for exceptions, will be not often. The main assumption is: optimize for the common case, where exceptions are not the common case.

    • ajross a day ago

      "Nothing", in principle. But they're bug factories in practice. It's really easy to throw "past" cleanup code in a language where manual resource management remains the norm.

      It's not that they can't be used productively. It's that they probably do more harm than good on balance. And I think async mania is getting there. It was a revelation when node showed it to us 15 years ago. But it's evolved in bad directions, IMHO.

      • baq a day ago

        Yeah, node showed that a native async single-threaded runtime can be performant. Seems like this knowledge was lost to the world somewhere around Windows Vista; everyone who ever had to develop in the cooperative-multitasking world of the early WinAPI could tell you that easily.

      • efdee a day ago

        C# had async/await long before Javascript/node. Not that big a revelation ;-)

        • ajross 21 hours ago

          .NET wasn't the first either. Lisps were doing continuations in the 70's.

          But "invented" and "revealed" are different verbs for a reason. The release of node.js and its pervasively async architecture changed the way a lot of people thought about how to write code. For the better, in a few ways. But the resulting attempt to shoehorn the paradigm into legacy and emerging environments that demanded it live in a shared ecosystem with traditional "blocking" primitives and imperative paradigms has been a mess.

          • mrsmrtss 21 hours ago

            I think you're underestimating the role of .NET in this. It was .NET that popularized this concept for the masses, and from there it spread to other languages including JavaScript, which also borrowed the exact same async/await keywords from C#.

  • csande17 a day ago

    Your comment is downvoted as I write this, but I kind of think Zig's new design agrees with you. It uses the terms "async" and "await", but the API design looks more similar to traditional threading (like Rust's thread::spawn and join() APIs). With the fun distinction that you can choose whether your program uses actual threads, or coroutines, or just runs everything synchronously without changing any of your code.