(comments)

Original link: https://news.ycombinator.com/item?id=39502276

In computer science, a coroutine (also described as a resumable function, a generalization of the subroutine, a user-defined generator in Lua, continuation-passing style in Scheme, or a suspendable function) is a composition mechanism or programming-language construct that lets a yield statement (in effect, returning without discarding the rest of the enclosing function body, which is executed later) be used in place of a conventional function call or continuation. This mechanism allows control flow that alternates between caller and callee, similar to cooperative multitasking or coroutine-style communication, with the memory-allocation behavior described below. Most mainstream programming languages (including popular platforms such as Java or .NET, as well as some scripting languages) provide no dedicated syntax or language feature for defining coroutines; instead, developers usually emulate coroutine behavior using existing primitives (functions, subroutines, routines, and so on) together with techniques based on control flow (GOTO, labels, and branches) or continuation-passing style (as in Scheme). Nevertheless, the concept remains theoretically significant in computational mathematics and computer science, partly because coroutines can reduce reliance on global state and improve system-wide properties such as susceptibility to race conditions. In some practical scenarios, alternative models (for example, fibers) may exhibit lower overhead than coroutines. While coroutines can offer a number of benefits (for example, being more concise than the alternatives and supporting simpler codebases or designs), those benefits come with additional cognitive load, adding complexity to maintenance work. One example cited in the source material explains how "coroutines in C" can help eliminate the "global variable anti-pattern", as demonstrated with the nested loops of a larger application framework. Another example concerns implementing generators with coroutines in languages such as C and driving them asynchronously across long idle periods, improving responsiveness under demanding usage patterns. In addition, certain operating-system kernel-level drivers and networking utilities (for example, Telnet or serial devices handled through a command-line terminal) use similar design patterns in practice. This stands in contrast to recursion, which can cause function calls to pile up until a maximum threshold is reached (leading to deep stack frames and potential problems with excessive memory allocation or exhaustion of available resources), and to explicit message-queue schemes (which can

Related articles

Original text
Coroutines in C (2000) (greenend.org.uk)
274 points by ColinWright 18 hours ago | hide | past | favorite | 83 comments










I've found myself at this webpage multiple times while trying to minimize the complexity of APIs in my C projects. I think it does a lovely job explaining control flow and it has helped me to think more explicitly about storage of state on and off the stack as well as the readability consequences of different approaches.

My conclusion for now is that the choice to use C coroutines is best left to the library user. For example: Mongoose (https://github.com/cesanta/mongoose) uses event callbacks to deal with asynchrony. It is much more pleasant to wrap a library like this in whatever thread/task primitives your system has rather than try to port the mythical cross-platform C coroutine or worse, std::thread.



It’s Simon Tatham’s website. He’s well known for being the author of PuTTY [1] and his puzzle collection [2]!

[1] https://www.chiark.greenend.org.uk/~sgtatham/putty/

[2] https://www.chiark.greenend.org.uk/~sgtatham/puzzles/



I've known about the two projects for literally 20+ years, but wow I never knew it was the same person behind them....


Oh wow... I have had the Android port of his puzzles (your second reference links to it) on my phone for a while. Had no idea the developer of Putty had anything to do with it!


Coroutines. What a lovely concept! It's a joy to watch all the CppCon videos about C++ coroutines, primarily by Microsoft folks. "Negative-cost abstraction" is such a nice hook phrase.

Friends at Meta mentioned to me a couple years ago that they started using c++ coroutines, which ended up being a big mistake because they had to face compiler implementation bugs, which must have been nasty to track down. At Google, we are eagerly waiting for the brilliant folks that are working on properly integrating them in google3/ to tell us when the time has come to use them.

This article uses Duff's device [1] to motivate structured gotos via macros as an implementation strategy for C coroutines. Duff wanted to loop-unroll this:

    do {
        *to = *from++;
    } while (--count > 0);
which he did in this way (shortened for brevity):

    int n = (count + 3) / 4;
    switch (count % 4) {
    case 0: do { *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    } 
That is to say, he realized that he could use `case` statements (almost) anywhere in a `switch` block. The connection with coroutines is simple: One can wrap the whole function body with a switch statement, use a static variable for holding the location of the latest coroutine return, and label all co-returns with a `case` statement:

  #define coBegin static int state = 0; switch (state) { case 0:
  #define coReturn(x) do { state = __LINE__; return x; case __LINE__:; } while (0)
  #define coFinish }

  int function(void) {
      static int i;  // function state can't be local anymore.
      coBegin;
      for (i = 0; i < 10; i++)
          coReturn(i);
      coFinish;
      return -1;     // exhausted: past the final coReturn
  }
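A caller then just invokes the coroutine repeatedly; each call resumes right after the previous coReturn. For illustration (a sketch assuming the macro definitions and function() above):

    #include <stdio.h>

    int main(void) {
        for (int j = 0; j < 10; j++)
            printf("%d\n", function());  /* prints 0 through 9, one value per call */
        return 0;
    }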
Sustrik's take on C coroutines might also be an interesting read [2].

[1] https://en.wikipedia.org/wiki/Duff%27s_device

[2] https://250bpm.com/blog:48/index.html



As someone who moved from google3 -> fbcode in the last few years, I think there are weird upsides AND downsides to having async code littered through your C++ (aka co_yield, co_return, co_await, etc).

The advantage, compared to the internal stuff google3 was using, was that as you read code, the async nature of various parts was obvious. Some programmers at G would spend entire quarters+ not knowing what the threading model was, and cause serious bugs in retrospect.

The disadvantage is actually much dumber - a lot of code "could" be async, and over time becomes entirely async because that's the mode the programmer is in when writing the program.

The choice to use a spinlock vs. a mutex w/yields should be one based on the size of the critical section and the threading going on at the time. Unfortunately to make code more readable/uniform/etc you end up with entire projects doing one or the other.

I'd love to learn more about language implementations of threading that do not default either way, but instead could take a profile of the previous run, and make the next run more optimal, without having to change the code or causing bugs.



The alternative is to use the „labels as values“ feature of GCC. You can take the address of a label and later jump to it. I contributed the code that’s now in lc-addrlabels.h back in 2005 :-)

I also used the GCC local labels feature to completely avoid using __LINE__ anywhere, so you could have multiple coReturns in a single code line:

#define LC_SET(s) do { ({ __label__ resume; resume: (s) = &&resume; }); }while(0)
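For illustration, the surrounding machinery looks roughly like this (a sketch in the spirit of protothreads' lc-addrlabels.h, not the exact source; it relies on the GCC extensions __label__, &&label and goto *):

    typedef void *lc_t;

    #define LC_INIT(s)   ((s) = NULL)
    #define LC_RESUME(s) do { if ((s) != NULL) goto *(s); } while (0)
    #define LC_SET(s)    do { ({ __label__ resume; resume: (s) = &&resume; }); } while (0)

    /* A generator yielding 0, 1, 2, ... one value per call; all of its
       state lives in caller-provided storage, so it is re-entrant. */
    static int next_value(lc_t *lc, int *i) {
        LC_RESUME(*lc);
        *i = 0;
        for (;;) {
            LC_SET(*lc);       /* the next call resumes here */
            return (*i)++;     /* yield the current value and advance */
        }
    }

    #include <stdio.h>

    int main(void) {
        lc_t lc; int i;
        LC_INIT(lc);
        for (int k = 0; k < 3; k++)
            printf("%d\n", next_value(&lc, &i));  /* prints 0 1 2 */
        return 0;
    }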



Definitely, "labels as values" (aka "computed gotos", https://gcc.gnu.org/onlinedocs/gcc/Labels-as-Values.html) is so much better than Duff's device.

Unfortunately, computed gotos are not part of the C language standard. I don't understand why. I think FORTRAN had it in the 60s. It is so useful in some situations, like a coroutine, or a byte-code interpreter. Is it because some obscure DSP chip with a sizeof(char)==32 using 1's complement arithmetic can't support it? Then maybe make it implementation-defined and allow the rest of the world to get nice things.
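For illustration, the byte-code interpreter case looks roughly like this with the GCC extension (a toy sketch; the opcodes and encoding are made up):

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        /* one label per opcode, jumped to via a computed goto */
        static void *dispatch[] = { &&op_push, &&op_add, &&op_print, &&op_halt };
        int stack[64], *sp = stack;
    #define NEXT() goto *dispatch[*code++]
        NEXT();
    op_push:  *sp++ = *code++;        NEXT();
    op_add:   sp--; sp[-1] += sp[0];  NEXT();
    op_print: printf("%d\n", *--sp);  NEXT();
    op_halt:  return;
    #undef NEXT
    }

    int main(void) {
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);  /* prints 5 */
        return 0;
    }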



Steve Wozniak's Integer BASIC (aka Apple BASIC) had computed-gotos too. Not sure about Microsoft BASIC, but Commodore BASIC had definitely lost this feature.


(For ease of reference—Fortran calls this an assigned GOTO: jump to label stored in a variable—as an integer number, as is Fortran’s way, not an address. A computed GOTO in Fortran is more like a switch statement in C: jump to the first label listed in the statement if the specified variable is one, to the second if it is two, ..., fall through to the next statement otherwise.)


> [Duff] realized that he could use `case` statements (almost) anywhere in a `switch` block.

That’s likely true, in that it probably was a moment for realization for Duff (and many others reading him, including me); yet it’s almost certainly a completely intentional feature.

(As mentioned at the bottom of TFA, Duff also realized you could build coroutines on top of it but thought the idea “revolting”.)

There’s a temptation to think of C’s `switch` as a very inexpressive pattern match, and then the “fallthrough” seems like a bug and so on. It’s not. It’s a computed GOTO, in the vein of the one in Fortran but more convenient in that the values don’t have to be sequential, and also in that you don’t have to list all the labels at the top. (In fact, now that I’m writing this out, it’s more of a computed COMEFROM, then, isn’t it? However insane that sounds.)



Oh come on, just rewrite it all in Go! It should only be a few billion line CR. Your SREs will thank you (eventually).


Ah the C pre-processor, the gift that keeps on giving after all these years :-(


I don't mind macro heavy C code, but this one made me freeze


> no commonly used high level language supports the coroutine

This might have been the case back in 2000, but these days many languages do support it, including C++20, Lua, Python, Ruby, etc.



Python was created in 1991; I imagine the "yield" keyword appeared either right then or not much later!

Also, the refinement at the end of the article: "We arrange an extra function parameter, which is a pointer to a context structure; we declare all our local state, and our coroutine state variable, as elements of that structure." sounds like implementing a closure to me. You make the callee a lambda which would use an outside var/context/state to determine what to do or with what value. Am I understanding this correctly?
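Concretely, I picture the refinement looking something like this (an illustrative sketch with made-up names, not the article's actual macros); the context structure plays the role of the closure's captured environment:

    #include <stdio.h>

    /* Caller-owned "environment": the resume point plus all former locals. */
    struct gen_ctx {
        int state;  /* which yield point to resume after; 0 = not started */
        int i;      /* loop variable, no longer a static */
    };

    /* Yields 0..9, one per call; -1 once exhausted. Re-entrant. */
    static int gen_next(struct gen_ctx *ctx) {
        switch (ctx->state) {
        case 0:
            for (ctx->i = 0; ctx->i < 10; ctx->i++) {
                ctx->state = 1;
                return ctx->i;
        case 1:;  /* the next call resumes here, inside the loop */
            }
        }
        return -1;
    }

    int main(void) {
        struct gen_ctx a = {0}, b = {0};  /* two independent instances */
        for (int k = 0; k < 3; k++) {
            printf("a=%d ", gen_next(&a));
            printf("b=%d\n", gen_next(&b));
        }
        return 0;
    }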



your note about closures is correct, yes

as lmm pointed out, python didn't have generators and yield until 2.2. icon, which tim peters adapted the idea from, had them quite a bit earlier than that, but i think it's reasonable to describe icon as not being a commonly used language, then or now

(python's generators are closer syntactically to icon's generators than they are semantically)



> Python was created in 1991; I imagine the "yield" keyword appeared either right then or not much later!

Nope. It was introduced 10 years later, as part of PEP 255, released in Python 2.2.



fwiw, Simula67 had coroutines. Not the first to do so, but IIRC it was the first major language to do so.


The "switch" method isn't too uncommon, but usually people have an init function and "state" pointer that's passed into the coroutine function. I've used this method a lot in embedded projects, where one coroutine was handling motor acceleration/deceleration while the other would simply tell it what direction to go, but I've also used it for networked libraries[1]. Even the standard library has a coroutine function like this in "strtok()"[2]

You don't really need to introduce macro hell for it to be manageable, though I've never found reading switch/case flow to be very enjoyable.

[1]: https://github.com/REONTeam/libmobile/blob/master/relay.c#L3...

[2]: https://manpages.debian.org/bookworm/manpages-dev/strtok.3.e...
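For a feel of the strtok() shape: the first call supplies the buffer, later calls pass NULL, and the function resumes from state it stashed between calls (which is also why it isn't re-entrant; strtok_r() moves that state into a caller-supplied pointer, much like the explicit state pointer described above):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[] = "GET /index.html HTTP/1.1";
        /* first call passes the buffer, subsequent calls pass NULL */
        for (char *tok = strtok(line, " "); tok != NULL; tok = strtok(NULL, " "))
            printf("token: %s\n", tok);
        return 0;
    }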



If you think this is some C black magic, try reading this by the same author on creating arbitrary control structures with macros: https://www.chiark.greenend.org.uk/~sgtatham/mp/


Note that the underscore prefix thing often is still prone to shadowing. You need pretty ugly mangled names to avoid that, and for external-block macros (unlike expression-ish/statement-ish macros) it can't be avoided with GNU's/C23's hygienic macro hack.


Wait, was there any recent change to C23 that enabled a different solution than `__COUNTER__`? I noticed you've mentioned `CLEANSE_MACRO_VARS` in recent comments but didn't fully define it; is there any other pointer?


I wouldn't recommend doing any of this stuff at all, personally. It's just always amazed me how much you can do with just basic string substitution and no homoiconicity/no AST access.


From the same author: Simon Tatham's Portable Puzzle Collection

https://www.chiark.greenend.org.uk/~sgtatham/puzzles/



Coroutines are fun, but in real code please consider using actual threads. Modern processors have many cores, but coroutines will (often) only use a single core.

Edit to add: This is a real world problem too. Until recently qemu, which extensively uses coroutines, would put a lot of its block device I/O through a single thread. This caused some performance issues. Kevin Wolf and others have spent years of effort fixing this so modern qemu will use many threads for I/O (this work will appear in RHEL 9.4).



The only connection between threads and coroutines is that some single-threaded language runtimes only have coroutines, so you might occasionally use them where threads would be a better choice.

Coroutines are a way of structuring single-threaded execution, and a useful one. The example in the Fine Article of a producer-consumer pattern is a good one, attaching a stream to a parser isn't a parallel algorithm so threads are useless for writing it.

Naturally, using a single-threaded paradigm for work which could be performed in parallel is inefficient, but coroutines aren't a poor man's parallelism, they're a control structure which functions on its own terms. They can be combined productively with threads, such as using an event loop in a web server to thread (as in needle) coroutines through various blocking events with a dispatcher, and the runtime can spin up a thread per core to parallelize this, which reduces per-thread coordination to checking the depth of each thread's work queue and farming the request to the least congested one.



Bob Nystrom makes this argument best, I think, in his two-parter on loops and iteration[1,2]. Looping over data structures is of course only one example of how one can apply coroutines, but a very established one. The canonical problem requiring coroutines[3] is also essentially about doing that.

Or for those who want something different there’s the elevator (and elevator-userbase) simulation from TAoCP volume 1, also an essentially concurrent problem with little to no parallelism or I/O to it.

[1] https://journal.stuffwithstuff.com/2013/01/13/iteration-insi...

[2] https://journal.stuffwithstuff.com/2013/02/24/iteration-insi...

[3] https://wiki.c2.com/?SameFringeProblem



> attaching a stream to a parser isn't a parallel algorithm so threads are useless for writing it.

Couldn't it be done in 2 threads? The output of the decompressor thread feeds to the input of the parser thread.



It could be, but given the sometimes astonishing costs of the—effectively—network protocol we know as cache coherency (thousands of cycles if you’re not careful), it’d be a giant waste in many of the cases where stackless coroutines would be perfectly appropriate.


I agree for typical async IO code which needs to wait for external events (like an IO operation to finish), but sometimes in other situations threads are not an option because there would be too much synchronization required.

For instance in my emulators, the CPU emulation is a switch-case state machine which is very similar to the coroutine approach described in the article, trying to move this idea to threading would require a synchronization between multiple threads on each emulator clock cycle which is somewhere between a few dozen and a few hundred host CPU clock cycles. That's not realistic, at least for emulating typical 8- and 16-bit home computers. For emulating 'modern systems' where the hardware components are not as tightly coupled as in old-school 8- and 16-bit machines, threading makes more sense though.

See here to get an idea how that CPU emulation works (only the first few sections are needed to understand the concept): https://floooh.github.io/2021/12/17/cycle-stepped-z80.html



> please consider using actual threads.

Bad advice in general.

Why would you run a separate thread if all you want is to iterate over the nodes of a tree (as an example of a non-flat collection)?



It’s never bad advice to consider something.


To the contrary, consideration takes time, and rules of thumb are valuable to mitigate overthinking.


No, it's frequently bad advice to consider something. See eg https://www.xkcd.com/1445/.


Real world and toy examples are very different. The example isn't like what people are using coroutines for in the real world. I'd urge you to look at how coroutines are used for inversion of control (quite correctly) in qemu.


> Coroutines are fun, but in real code please consider using actual threads.

Coroutines are lightweight and trivial to synchronize. They are perfect for small bits of incremental computation, like iterators and tokenizers. Maybe you're thinking of green threads?



Threads with implicitly shared memory are more or less impossible to use safely, particularly in a language like C. Coroutines for concurrency, and multiprocessing with explicitly shared memory for parallelism, is a better approach.


> Coroutines are fun, but in real code please consider using actual threads. Modern processors have many cores, but coroutines will (often) only use a single core.

Threads and coroutines have different purposes. Coroutines are more about logical structure.



Coroutines are good for modelling concurrency which is different from parallelism. Concurrency is useful for abstraction and expressiveness. Parallelism is useful for making your code run faster by running parts of it in parallel on multiple cores. You could make concurrent programs run faster on multiple cores by distributing the coroutines which don't share state on multiple working threads in a thread pool, thus mixing concurrency and parallelism...but they are still two different things with different purposes.


That seems like an orthogonal concern to structuring control flow, though it is much more difficult if you intend to use coroutines across multiple threads. There's nothing stopping you from using both threading and coroutines.


This is out of the question for real time apps. Co-routines are an elegant solution to implement cooperative multitasking in such cases.


> coroutines will (often) only use a single core

That's generally the desired behavior. If you have decoupled, parallel workloads they're going to naturally be working on disjoint data. The idea behind coroutines is that you have some kind of local workload with synchronous data that, for whatever reason, is easiest to express "inside out" with a function that gets to loop over something and "push" the results to its abstracted consumer whose code lives somewhere else, vs. the natural functional paradigm where the inner loop is a conceptual "pull" controlled by the caller.



Thank you for eloquently expressing an observation I probably should have learned years ago.


There's often a sweet spot to be had in mixing threads and coroutines, where you have a coroutine scheduler instance per thread, and a thread created per core.

Then rarely, if ever, migrate coroutines across schedulers, and rarely, if ever, share data between coroutines on different schedulers.

Coroutines can enable an ergonomic concurrent programming style while avoiding the need for any locking at all via cooperative scheduling. You generally end up with higher scheduling latencies, but potentially quite high throughput by removing any need for atomics/locking overheads, and no timer constantly interrupting execution for preemptive scheduling.



Right, that's what qemu has ended up with.


Not just that, but the scaling problems with threads are usually massively overstated. It's true that thread switching has quite a bit more overhead, but it's been optimized a lot since the bad old days of 15+ years ago. (Plus, unless you're using a massive number of threads it's very unlikely that thread switching is going to be your bottleneck.)


Unless you're on QNX 7 of course...


C++ version of the approach: https://www.codeproject.com/Tips/29524/Generators-in-C

I am using this in my Sciter, just in case. Works quite well and is convenient.



> Of course, this trick violates every coding standard in the book. […] I would claim that the coding standards are at fault here.

Thoroughly disagree here. The coding standards are not at fault for rejecting this code; rather, the code is merely a cute trick. Software engineering in the large is all about removing surprises and making code readable even to the sleep-deprived on-call engineer waking up at 3am to debug it. You can't rely on programmers remembering the ground rules all the time (and there are four of them!)

> Coding standards aim for clarity. By hiding vital things like switch, return and case statements inside "obfuscating" macros, the coding standards would claim you have obscured the syntactic structure of the program, and violated the requirement for clarity. But you have done so in the cause of revealing the algorithmic structure of the program, which is far more likely to be what the reader wants to know!

It takes skill to write programs that are clear in both their syntactic structure and their algorithmic structure. This isn't it. (I am a fan of Rust creating implicit state machines from async functions, and I think that should be the model here.)



Endlessly "dumbing down" lowest-common-denominator crap is what's responsible for the quality, or lack thereof, of most software today. Shunning knowledge and education will come back to bite you.


Seconded. Not everything is about the sleep-deprived on-call guy, not everything is about reducing surprises, not everything is about operations.

I had thought before that these "what about the guy at 3am" arguments push in the direction of mediocrity - happy to see I'm not the only one having these thoughts.



Feel free to write your artisanal code in your personal projects. But they don't belong in most companies' code base. You are a cog in the machine at these companies. Your role is to produce code that's easily understood by the next programmer. That's why coding guidelines exist.


that's why all those companies are using putty instead of their own ssh client; they're organizationally incapable of writing software of putty's quality


A lot of these companies don’t ship software at all, they write it only for internal use. It needs to be easily fixable by junior interns, not dependent on Bob the 100X programmer who decided to retire last week.


yeah, and of course you normally want as much as possible of your software to be easily fixable by junior interns in any case; that's always better when there's no compensating drawback

the implicit premise of your comment, however, seems to be that no such compensating drawback is possible, presumably because internal-use-only software isn't a competitive advantage. there are a lot of companies that think that way, but i think it's shortsighted; see https://news.ycombinator.com/item?id=39402299 for some examples of companies that discovered that it mattered a lot how good their internal-use-only software was



> quality, or lack thereof, of most software today.

Also the incredible quantity of software today, and how dang cheap it is.



I’ve used libaco in the past for coroutines in C. I found zlib a pain to use when using curl scheduled with libuv to fetch data. zlib expects a read loop to extract data, but libuv provides an evented push model. Saving all of the zlib state and building a state machine seemed tedious, but a coroutine made the zlib code look like the standard, blocking loop.

This was just code for my own amusement, and maybe used by a few people, for non-production work. I’d do it again, however, if I needed to.



Setjmp/longjmp are the built-in coroutines in C, no?


Some longjmp implementations unwind the stack, so they can't be used for coroutine switching. Even if it works (it's technically undefined), you need to get a suitable stack from somewhere.

The next issue is that applications usually want to resume a coroutine on a thread different from the one on which it was suspended. That runs into trouble because on some systems, compilers cache the address of thread-local variables in the local stack frame, assuming that the thread does not switch mid-execution of a function.



The only platform I’ve seen stack unwind was VAX/VMS :)

But yes, you do need to allocate the stack which could take up a lot of ram.

It’s odd not to mention it in the article though.



Current glibc unwinds the shadow stack if it is active: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86...

It makes longjmp useless for coroutine switching, although it does not result in other effects of stack unwinding (such as invoking C++ destructors).

On Windows, longjmp really unwinds the stack (and maybe this is something influenced by VMS): https://learn.microsoft.com/en-us/cpp/c-runtime-library/refe... “In Microsoft C++ code on Windows, longjmp uses the same stack-unwinding semantics as exception-handling code. It's safe to use in the same places that C++ exceptions can be raised.”



Well, things have changed since I looked last. Thanks for explaining.

FWIW, back in the nineties we just wrote our own setjmp/longjmp for VMS to avoid stack unwind - save registers / restore registers. We used it to implement coroutines in Modula 2, iirc.



No. The C standard says this about longjmp: "if the function containing the invocation of the setjmp macro has terminated execution in the interim [...] the behavior is undefined". So while you can longjmp out of functions, you can't longjmp back into them.


These are stackless coroutines; if you use longjmp you have to create a stack for the coroutine.

There are pros and cons for each style.



In theory (but only possible in assembly right now), there could be coroutines that shared the stack of their caller. As long as the caller (who's calling from a normal function) finishes calling the coroutine and doesn't expect to be able to call it after they return, then you could use it to implement iterators, e.g. over a binary tree or a hash table, like generators in Python. It could work as long as the caller used the stack frame base pointer to refer to their saved local variables, since the stack pointer could be changed between yields to the coroutine. I'm genuinely surprised there hasn't been a compiled programming language to do that other than Sather and CLU[0] (both of which are long dead by now). Graydon Hoare originally wanted them in Rust [1], but LLVM didn't support it, so it was scrapped.

[0]: https://dl.acm.org/doi/pdf/10.1145/800127.804079 (the third PDF page, page 125)

[1]: https://graydon2.dreamwidth.org/307291.html (search "non-escaping coroutine")



You can absolutely build coroutines out of a generalized context switch. So yes, in some sense. But note that the linked article doesn't use setjmp/longjmp, which is what makes it so clever.

FWIW: would I personally actually use this trick? Almost certainly not. C APIs aren't well suited to that level of abstraction IMHO, if you have an app that needs it leave the C stuff to the stuff C is good at and wrap a C++ or Rust or whatever layer on top for the subtleties.



I was so sure this was about protothreads til I remembered its name. https://dunkels.com/adam/pt/


Also related, the C++ lambda fuckup: https://news.ycombinator.com/item?id=33084431


Ah, this page again! Has it been more than two decades since I saw this page last? It was fun to learn about coroutines from the author of PuTTY, the ssh client of choice on Windows in those days.


Related:

Coroutines in C (2000) - https://news.ycombinator.com/item?id=37357673 - Sept 2023 (1 comment)

Coroutines in C (2000) - https://news.ycombinator.com/item?id=36639879 - July 2023 (2 comments)

Coroutines in C - https://news.ycombinator.com/item?id=23293835 - May 2020 (1 comment)

Coroutines in C (2000) - https://news.ycombinator.com/item?id=19106796 - Feb 2019 (59 comments)

Coroutines in C, revisited - https://news.ycombinator.com/item?id=13199245 - Dec 2016 (36 comments)

Coroutines in C - https://news.ycombinator.com/item?id=13138673 - Dec 2016 (1 comment)

Coroutines in C (2000) - https://news.ycombinator.com/item?id=11051004 - Feb 2016 (11 comments)

Show HN: Libconcurrent – Coroutines in C - https://news.ycombinator.com/item?id=10887071 - Jan 2016 (24 comments)

Coroutines in C with Arbitrary Arguments - https://news.ycombinator.com/item?id=9402314 - April 2015 (22 comments)

Coroutines in C (2000) - https://news.ycombinator.com/item?id=8615501 - Nov 2014 (27 comments)

Coroutines in C (2000) - https://news.ycombinator.com/item?id=6244994 - Aug 2013 (1 comment)

Coroutines in one page of C - https://news.ycombinator.com/item?id=6243946 - Aug 2013 (60 comments)

Coroutines in C (Simon Tatham, 2000) - https://news.ycombinator.com/item?id=1380044 - May 2010 (16 comments)

Coroutines in C - https://news.ycombinator.com/item?id=835849 - Sept 2009 (16 comments)

Co-routines in C - https://news.ycombinator.com/item?id=794157 - Aug 2009 (1 comment)





(2000)


I've used this for some embedded/IoT projects before. They work really well.


I honestly like stackful coroutines if you don’t mind allocating memory for a stack.

https://github.com/Keith-Cancel/Bunki



UNIX pipes solve this problem. Both reader and writer are driving their respective process.


The article says: "In many modern operating systems, you could do this using pipes between two processes or two threads. emit() in the decompressor writes to a pipe, and getchar() in the parser reads from the other end of the same pipe. Simple and robust, but also heavyweight and not portable. Typically you don't want to have to divide your program into threads for a task this simple."
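For reference, the heavyweight version is only a few lines on a POSIX system (a sketch, not the article's code: the child stands in for the decompressor writing into the pipe, the parent for the parser reading from it):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) != 0) return 1;

        if (fork() == 0) {                       /* child: the "decompressor" */
            close(fd[0]);
            const char out[] = "add 2 3\n";      /* pretend this was just decompressed */
            (void)write(fd[1], out, sizeof out - 1);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                            /* parent: the "parser" */
        char c;
        while (read(fd[0], &c, 1) == 1)          /* reads as if calling getchar() */
            putchar(c);
        close(fd[0]);
        wait(NULL);
        return 0;
    }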


Thanks. I have read this article a few times and somehow missed that this was acknowledged.

“Heavyweight” is where I disagree. It’s exactly what’s needed to be able to write sequential code on each side.



how does this compare to using Go goroutines?


I assume I'm missing that this a joke, it's honestly hard for me to tell.

But in the conclusion, the author talks about actually making this work by providing a context object to hold all of the intermediate state and providing this context object to the callee.

Once this is required, how does this approach compare to simply using an external iterator?

Seems to me like an iterator solves the lion's share of the problem here. It moves the state into the caller's stack (or above them), it's easy to understand, simple to implement and doesn't involve unenclosed and context-dependent macros.



Why would you assume that this is a joke?

C (particularly back when this was written) was a low level language. You could not simply use an external iterator - they didn't exist. And if you try to roll your own, you'll wind up dealing with a lot of complications around resource management in a language which lacks basic memory management.

But the proof is in the pudding. Back then it was common to want to telnet into a Unix machine from Windows. And the only two solutions that worked well enough to consider were installing Cygwin, or installing PuTTY. Cygwin was better if you needed a Unix environment on your Windows machine. Otherwise PuTTY was your answer. As the article comments, PuTTY was written with this technique.

When you've solved a problem that a lot of people had, and your solution is widely acknowledged as the best one out there, people get interested in how you think it should be solved. Which is why this article interested me when I first saw in many years ago on Slashdot.

So absolutely not a joke.



Proto-Activities have this context to store the state in the caller.

https://github.com/frameworklabs/proto_activities



> Seems to me like an iterator solves the lion's share of the problem here.

Iterator APIs are indeed aimed at the same kind of problem, but they're not the same solution. And often they're harder to write. If you have a component with a big list of stuff, it's generally easier to write and understand the idea of "iterate over my big list of stuff and emit one item at a time to my consumer" than it is "what state do I need to remember such that when I get called again I can emit the correct next item given the one I just emitted?".

Coroutines are a way of expressing the former. Iterators are the latter. If all you do is write the outer loop, iterators are absolutely just as good. If you need to write the iterator itself, it's more of a discussion.
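A toy comparison of the two shapes, walking a linked list (illustrative code, not from the article): the push version owns the loop and emits to a callback, while the pull version exposes its loop state as an explicit cursor. For a flat list the cursor is trivial; for a tree it becomes an explicit stack, which is where the coroutine formulation starts to pay off.

    #include <stdio.h>

    struct node { int value; struct node *next; };

    /* Push style: the producer drives the loop and emits each item. */
    static void walk_push(const struct node *n, void (*emit)(int)) {
        for (; n != NULL; n = n->next)
            emit(n->value);
    }

    /* Pull style: the consumer drives; the producer's loop state is an
       explicit cursor that the caller has to carry between calls. */
    static int walk_next(const struct node **cursor, int *out) {
        if (*cursor == NULL) return 0;
        *out = (*cursor)->value;
        *cursor = (*cursor)->next;
        return 1;
    }

    static void print_value(int v) { printf("%d\n", v); }

    int main(void) {
        struct node c = {3, NULL}, b = {2, &c}, a = {1, &b};

        walk_push(&a, print_value);         /* producer-driven: prints 1 2 3 */

        const struct node *cur = &a;        /* consumer-driven: also 1 2 3 */
        for (int v; walk_next(&cur, &v); )
            printf("%d\n", v);
        return 0;
    }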









