If you're already familiar with channels in Go and want to try them in JavaScript, check out my port; otherwise, continue reading for background explanation.

Channels make it easier to write programs that interact with other programs or processes. Concurrent code written with channels is more elegant, concise, and easier to reason about. It's also fun to write! I'm going to jump into simple examples first and then address theory at the end of this article.

Async Functions

With ES2017 we have a new kind of function called async. It allows us to call a function and then continue with other work without waiting for it to return. What do you think the following code prints to the console?

(async () => {
  for (let index = 0; index < 3; index++) {
    await console.log(`foo`);
  }
})();

(async () => {
  for (let index = 0; index < 3; index++) {
    await console.log(`bar`);
  }
})();

Async functions are similar to Web Workers but with some key differences. The first is that workers are relatively heavy-weight (think threads):

Workers ... are relatively heavy-weight, and are not intended to be used in large numbers. For example, it would be inappropriate to launch one worker for each pixel of a four megapixel image. ... Generally, workers are expected to be long-lived, have a high start-up performance cost, and a high per-instance memory cost.

In contrast you can launch thousands of async functions at once without a problem!

A second difference is that async functions are an example of cooperative multitasking (think coroutines). Unlike web workers, control is switched from one async function to another explicitly at expressions preceded by the keyword await.
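To make that explicit switching concrete, here's a small sketch (the function names are mine, for illustration only): the synchronous statements between awaits run without interruption, and control passes to other code only at an await.

```javascript
const log = [];

const a = async () => {
  log.push("a1");
  log.push("a2"); // no await between these: nothing else can run here
  await Promise.resolve(); // explicit switch point
  log.push("a3");
};

const b = async () => {
  log.push("b1");
};

a();
b();

// a runs up to its first await, then b runs to completion,
// then a resumes: a1 a2 b1 a3
setTimeout(() => console.log(log.join(" ")), 0);
```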


This is great but how can we get async functions to coordinate with each other? There is much research in this area and I'm impressed with Go's solution of channels. Channels are used both to convey information between two async functions as well as to synchronize them. You can think of channels as being like queues in which the values are spread across time rather than space.
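You can get a feel for this with a toy unbuffered channel, sketched below (my illustration, not the library's actual implementation): each push waits for a matching shift and vice versa, so values rendezvous across time.

```javascript
// A toy unbuffered channel: push resolves only when a matching shift
// arrives, and shift resolves only when a value is pushed.
// A sketch for illustration, not the real library's implementation.
const Channel = () => {
  const pushers = []; // pushes waiting for a shift: { value, resolve }
  const shifters = []; // shifts waiting for a push: resolve functions

  return {
    push: (value) =>
      new Promise((resolve) => {
        if (shifters.length > 0) {
          shifters.shift()(value); // hand the value to a waiting shift
          resolve();
        } else {
          pushers.push({ value, resolve });
        }
      }),

    shift: () =>
      new Promise((resolve) => {
        if (pushers.length > 0) {
          const pusher = pushers.shift();
          pusher.resolve(); // unblock the waiting push
          resolve(pusher.value);
        } else {
          shifters.push(resolve);
        }
      }),
  };
};

// One async function pushes, another shifts; neither proceeds alone.
const channel = Channel();

(async () => {
  await channel.push(42); // waits here until someone shifts
})();

(async () => {
  console.log(await channel.shift()); // 42
})();
```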

Coordination with Push & Shift

In JavaScript we use arrays as queues:

const queue = [];

// Enqueue a value into the back of the queue.
queue.push(value);

// Dequeue a value from the front of the queue.
const value = queue.shift();

My implementation of channels uses a similar interface:

const channel = Channel();

// Enqueue a value into the back of the channel.
await channel.push(value);

// Dequeue a value from the front of the channel.
const value = await channel.shift();

Let's look at an example. Imagine you want to create thumbnails of images and videos. A service with the following API is available to help you; each function returns a promise:

identifyImageOrVideo(mediaUrl) -> async `image` | `video`
createThumbnailOfImage(imageUrl) -> async thumbnailUrl
createThumbnailOfVideo(videoUrl) -> async thumbnailUrl

The API is throttled to three calls of the same function at once, i.e., you can use it to create the thumbnails of up to three images simultaneously. How would you use these functions to process a queue of media?

Here's a version using channels:

const worker = async (input, output) => {
  // loop forever
  for (;;) {
    const url = await input.shift();
    const isImage = (await identifyImageOrVideo(url)) === "image";

    await output.push(
      isImage
        ? await createThumbnailOfImage(url)
        : await createThumbnailOfVideo(url)
    );
  }
};
const media = Channel();
const thumbnails = Channel();

worker(media, thumbnails);
worker(media, thumbnails);
worker(media, thumbnails);

// client code below

await media.push(``);

console.log(await thumbnails.shift());

Three worker functions execute simultaneously making sure we make the most of the service without triggering its quota throttle. As each worker function becomes ready to do more work it pulls the next url off of the media channel. When it completes its work it pushes the finished product into the thumbnails channel. The client code doesn't need to know anything about these worker functions (including what they're named or how many are running). It needs only to push new values onto media and shift completed work off of thumbnails.

A push into a channel blocks until another async function performs the corresponding shift. This gives us back pressure for free: it's not possible to push more urls onto the media channel than the workers can handle.

Closing Channels

As with files and sockets, you close a channel to signal that no more information will be available from it.

const range = async (output, start, end = Infinity) => {
  for (let index = start; index < end; index++) {
    await output.push(index);
  }

  await output.close();
};

You can tell when a channel is closed because shift always returns undefined immediately (just like with an empty array).

const count = async numbers => {
  let total = 0;

  for (;;) {
    const number = await numbers.shift();

    if (number !== undefined) {
      total += number;
    } else {
      return total;
    }
  }
};
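To see this convention in isolation, here's a self-contained sketch of mine: a stand-in "channel" backed by a plain array behaves the same way, because shift on an empty array also returns undefined.

```javascript
// Stand-in for a closed channel (a sketch, not the real library):
// once the array is drained, shift() keeps returning undefined,
// just like shift on a closed channel.
const numbers = {
  values: [1, 2, 3, 4],
  shift: async function () {
    return this.values.shift();
  },
};

const count = async (numbers) => {
  let total = 0;

  for (;;) {
    const number = await numbers.shift();

    if (number !== undefined) {
      total += number;
    } else {
      return total;
    }
  }
};

count(numbers).then((total) => console.log(total)); // 10
```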


You can also use the familiar Array methods to manipulate channels. For example, to take the first three numbers from a channel, double them, and add them together:

const sum = await channel
  .slice(0, 3)
  .map((number) => 2 * number)
  .reduce((previous, current) => previous + current);

Other Concurrency Models

One of the main reasons concurrency is important is to support parallelization; if you can write code to run concurrently then it's relatively easy to make it run in parallel. This is significant now that processor speeds are no longer increasing as quickly and Amdahl's Law has become more relevant for performance than Moore's Law. It's also important because with the rise of reliable Internet connectivity and the move to microservices our applications are now composed of multiple independent processes interacting with each other.

We've had several attempts at modelling concurrency over the years with varying degrees of success. I did research to help myself understand how channels compare to other popular models and have included some of my notes below.

Events and Callbacks

This is the current model for concurrency in JavaScript. Any time you call addEventListener you're setting up a callback to respond to an event. It's worked well for small problems but as our applications get more complex it's getting us into trouble.

Rob von Behren, Jeremy Condit and Eric Brewer at the Computer Science Division, University of California at Berkeley (my alma mater, go Bears!) wrote:

Event-based programming has been highly touted in recent years as the best way to write highly concurrent applications. Having worked on several of these systems, we now believe this approach to be a mistake. Specifically, we believe that threads can achieve all of the strengths of events, including support for high concurrency, low overhead, and a simple concurrency model. Moreover, we argue that threads allow a simpler and more natural programming style.

Rich Hickey also gives a good explanation about the problems with events and callbacks in his talk about core.async (Clojure's port of channels).

Shared-State Multithreading

Glyph Lefkowitz wrote a great essay about why shared-state multithreading is bad.

As we know, threads are a bad idea, (for most purposes). Threads make local reasoning difficult, and local reasoning is perhaps the most important thing in software development.


With the phrase “local reasoning”, I’m referring to the ability to understand the behavior (and thereby, the correctness) of a routine by examining the routine itself rather than examining the entire system.


Ron of the Parallel Universe blog has this to say about monads:

...I now think monads (or at least exposing them to the user) are the wrong abstraction for effects even in pure FP languages. They are hard to understand and their composition is even harder. This opinion seems to be shared by FP experts. Martin Odersky, Scala's designer, has said outright that he dislikes monads for effects, and he's working on alternatives. Kiselyov's work on effect handlers basically tries to hide monads from users, and provide them with a more convenient abstraction that looks like scoped continuations.

Actor Model

The actor model in computer science is a mathematical model of concurrent computation that treats "actors" as the universal primitives of concurrent computation. In response to a message that it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify private state, but can only affect each other through messages (avoiding the need for any locks).

The actor model was used in Erlang by Ericsson to make sure telephone calls were routed reliably via software without crashing. It was a fantastic success and is basically the model for Web Workers. In short it's like having a channel tightly coupled to every async function.


So here we are today at async functions: coroutines implemented with lightweight user space (green) threads. You don't get the full power of coroutines in standard JavaScript—they're shallow in order to protect us from ourselves (see Why coroutines won’t work on the web)—but if you're using Node.js and you're adult enough to hold the knife by the correct end then you can make use of deep coroutines with asyncawait.

Channels are powerful building blocks. I surveyed existing implementations in JavaScript and decided to create my own because I wanted an implementation that's simple, bullet-proof, and idiomatic. If you know how to use an Array then you already know most of how to use a Channel. Because it's modelled after Go's channels we can learn from that community's documentation and examples.