[Complete] Sub-RFC 2: Signal APIs · angular/angular · Discussion #49683
Changelog
April 10, 2023
- Added asReadonly() to the WritableSignal API
- Changed effect() to schedule cleanup via an onCleanup argument instead of returning a cleanup function (see fix(core): allow async functions in effects #49783)
Introduction
This discussion covers the API surface and some of the implementation details for Angular’s signal library.
Signals
Fundamentals
A signal is a value with explicit change semantics. In Angular a signal is represented by a zero argument getter function returning the current signal value:
```ts
interface Signal<T> {
  (): T;
  [SIGNAL]: unknown;
}
```
The getter function is marked with the SIGNAL symbol so the framework can recognize signals and apply internal optimizations.
Signals are fundamentally read-only: we can ask for the current value and observe change notification.
The getter function is used to access the current value and record the signal read in a reactive context - this is an essential operation that builds the reactive dependency graph.
Signal reads outside of the reactive context are permitted. This means that non-reactive code (ex.: existing, 3rd party libraries) can always read the signal's value, without being aware of its reactive nature.
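To make the getter-function representation concrete, here is a minimal, self-contained sketch of a branded signal getter. The names makeSignal and isSignal are hypothetical illustrations, not part of the proposed API; only the Signal interface shape comes from the RFC:

```typescript
// Hypothetical sketch: a SIGNAL brand lets the framework recognize signals.
const SIGNAL = Symbol('SIGNAL');

interface Signal<T> {
  (): T;
  [SIGNAL]: unknown;
}

// Wrap a plain getter into a branded signal (illustrative helper).
function makeSignal<T>(read: () => T): Signal<T> {
  return Object.assign(() => read(), {[SIGNAL]: true as unknown});
}

// The brand allows a runtime check without calling the function:
function isSignal(value: unknown): value is Signal<unknown> {
  return typeof value === 'function' && SIGNAL in value;
}

let current = 42;
const answer = makeSignal(() => current);

// Reading is a plain function call - it works the same way in reactive
// and non-reactive code:
console.log(answer());         // 42
console.log(isSignal(answer)); // true
```

Because the signal is just a zero-argument function, any 3rd party code can call it without knowing about reactivity, while reactive contexts can additionally record the read.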
Writable signals
The Angular signals library will provide a default implementation of the writable signal that can be changed through the built-in modification methods (set, update, mutate):
```ts
interface WritableSignal<T> extends Signal<T> {
  /**
   * Directly set the signal to a new value, and notify any dependents.
   *
   * Useful for changing primitive values or replacing data structures when
   * the new value is independent of the old one.
   */
  set(value: T): void;

  /**
   * Update the value of the signal based on its current value, and
   * notify any dependents.
   *
   * Useful for setting a new value that depends on the old value, such as
   * updating an immutable data structure.
   */
  update(updateFn: (value: T) => T): void;

  /**
   * Update the current value by mutating it in-place and notifying any
   * dependents.
   *
   * Useful for making internal changes to the signal's value without changing
   * its identity, such as pushing to an array stored in the signal.
   */
  mutate(mutatorFn: (value: T) => void): void;

  /**
   * Return a non-writable `Signal` which accesses this `WritableSignal` but
   * does not allow mutation.
   */
  asReadonly(): Signal<T>;
}
```
An instance of a settable signal can be created using the signal creation function:
```ts
function signal<T>(
  initialValue: T,
  options?: {equal?: (a: T, b: T) => boolean}
): WritableSignal<T>;
```
Usage example:
```ts
// create a writable signal
const counter = signal(0);

// set a new signal value, completely replacing the current one
counter.set(5);

// update signal's value based on the current one
counter.update(currentValue => currentValue + 1);
```
Signal and WritableSignal interfaces naming
In the current proposal the primary interface is named Signal. This represents a read-only value changing over time. We’ve chosen this name as it is short, discoverable and we expect it to be the most commonly imported and used interface. WritableSignal is somewhat specialized, and adding “writable” to the name indicates that additional operations are permitted on those types of signals.
An alternative naming that we’ve considered is a pair of the ReadonlySignal (primary interface) and Signal (writable flavor). This aligns nicely with the TypeScript naming schema (ex. ReadonlyArray and Array). We were hesitant to use this naming as ReadonlySignal is far less discoverable and API authors might reach for the Signal interface when their intention was to use ReadonlySignal, ex.:
```ts
import {Signal} from '@angular/core';

// The API author requires a writable signal here, but only a read-only
// version is needed - this might force API users to use type casts and / or
// convert computed to writable signals (!)
function readFromSignalAndDoSth(signal: Signal) { … }
```
Discussion point 2a: given the trade-offs outlined here, would you prefer the Signal / WritableSignal naming pair or the ReadonlySignal / Signal one?
Equality
It is possible to, optionally, specify an equality comparator function. If the equality function determines that two values are equal, the writable signal implementation will:
- block update of signal’s value
- skip change propagation.
The default equality function compares primitive values (numbers, strings, etc) using === semantics but treats objects and arrays as “always unequal”. This allows signals to hold non-primitive values (objects, arrays) and still propagate change notification, example:
```ts
const todos = signal<Todo[]>([{todo: 'Open RFC', done: true}]);

// we can update the list and still trigger change notification
// even without using immutable data
todos.update(todosList => {
  todosList.push({todo: 'Respond to RFC comments', done: false});
  return todosList;
});
```
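The effect of a custom equal comparator can be sketched with a minimal, self-contained stand-in for the proposed signal() API. createSignal and its notification counter below are illustrative only, not Angular's implementation:

```typescript
// Minimal sketch: an `equal` comparator gates both the value update and
// the change propagation (notifications stands in for notified dependents).
type Equal<T> = (a: T, b: T) => boolean;

function createSignal<T>(initial: T, equal: Equal<T> = (a, b) => a === b) {
  let value = initial;
  let notifications = 0;
  const read = () => value;
  return Object.assign(read, {
    set(next: T) {
      // equal values: block the update and skip change propagation
      if (equal(value, next)) return;
      value = next;
      notifications++;
    },
    notifications: () => notifications,
  });
}

// Compare coordinates structurally instead of by reference:
const position = createSignal({x: 0, y: 0}, (a, b) => a.x === b.x && a.y === b.y);

position.set({x: 0, y: 0}); // structurally equal: no notification
position.set({x: 1, y: 0}); // different: dependents are notified

console.log(position.notifications()); // 1
```

With the default referential comparison, the first set() call would have propagated a change; the custom comparator suppresses it.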
Other implementations of the signal concept are possible. Both Angular and 3rd party libraries can create customized versions - as long as the underlying contract is maintained.
.set is the fundamental operation, .update is a convenience method
While the API surface has 3 different methods (set, update, mutate) of changing a signal's value, .set(newValue) is the only fundamental operation that we need in the library. The other 2 methods are just syntactic sugar - convenience methods that could be expressed in terms of .set.
Example of .update expressed with .set:
```ts
// create a writable signal
const counter = signal(0);

// update signal's value based on the current one
counter.update(c => c + 1);

// same code written without .update, using .set
counter.set(counter() + 1);
```
While everything could be expressed using .set only, .update is often more convenient in certain use cases and hence was introduced in the public API surface.
Discussion point 2b: is the convenience of the .update worth introducing, given the larger public API surface?
.mutate is for changing values in-place
The .mutate method can be used to change a signal's value by mutating it. It is only useful for signals that hold non-primitive JavaScript values: arrays or objects. Example:
```ts
const todos = signal<Todo[]>([{todo: 'Open RFC', done: true}]);

// we can update the list and still trigger change notification
// even without using immutable data
todos.mutate(todosList => {
  todosList.push({todo: 'Respond to RFC comments', done: false});
});
```
The .mutate method will always send change notifications, bypassing the custom equality checks on the signal level.
The combination of the .mutate method and the default equality function makes it possible to work with both mutable and immutable data in signals. We specifically didn’t want to “pick sides” in the mutable / immutable data discussion and designed the signal library (and other Angular APIs) so it works with both.
Separation of read/write
In our signal library, we've made a design choice that the main reactive primitive (Signal<T>) is read-only. This means that it's possible to propagate reactive values to consumers without giving those consumers the ability to modify the value themselves.
The separation of read/write capabilities will encourage good architectural patterns for data flow in signal-based applications. This is because mutation of state must be centralized and happen through the owner of that state (the component or service which has the WritableSignal) instead of happening anywhere within the application.
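The centralized-mutation pattern can be sketched with a minimal stand-in for the proposed API. createSignal and CounterService below are hypothetical illustrations; only the Signal / WritableSignal split and asReadonly() come from the RFC:

```typescript
// Sketch: a service keeps the WritableSignal private and exposes only
// the read-only view, so all mutation goes through the service's methods.
interface Signal<T> { (): T; }
interface WritableSignal<T> extends Signal<T> {
  set(value: T): void;
  asReadonly(): Signal<T>;
}

// Hypothetical minimal implementation, not Angular's.
function createSignal<T>(initial: T): WritableSignal<T> {
  let value = initial;
  const read = () => value;
  return Object.assign(read, {
    set: (next: T) => { value = next; },
    asReadonly: (): Signal<T> => () => value,
  });
}

class CounterService {
  private readonly _count = createSignal(0);

  // Consumers get a Signal<number> with no set/update/mutate on it.
  readonly count: Signal<number> = this._count.asReadonly();

  increment(): void {
    this._count.set(this._count() + 1);
  }
}

const service = new CounterService();
service.increment();
console.log(service.count()); // 1
// service.count.set(2); // compile error: 'set' does not exist on Signal<number>
```

Consumers can react to the value but cannot bypass the service, which keeps state changes traceable to one owner.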
Discussion point 2c: in some systems (e.g. Vue) reactive state is inherently mutable throughout the application. In other frameworks (e.g. SolidJS) this separation is enforced even more strongly. What do you think about our choice to separate readers and writers, and the architectural benefits or drawbacks of this approach?
Getter functions
In Angular's chosen implementation, a signal is represented by a getter function. Here are some of the advantages of using this API:
- It's a built-in JavaScript construct, which makes signal reads consistent between TypeScript code and template expressions
- It clearly indicates that the primary operation on a signal is read
- It clearly indicates that something more than a plain property access is occurring
- It is syntactically very lightweight, which we feel is important because reading signals is an extremely common operation.
Drawbacks of getter functions
Getter functions do have some downsides, covered below.
Function calls in templates
Angular developers have learned over the years to be wary of calling functions from templates. This advice arose because of the way change detection runs frequently for components, and the potential for functions to easily hide computationally expensive logic.
These concerns don't apply to signal getter functions, which are efficient accessors that do minimal computational work. Calling signal getters repeatedly and frequently is not an issue.
However, using function calls for signal reads might initially confuse developers who are used to avoiding function calls in templates.
Interaction with type narrowing
TypeScript can narrow the type of expressions within conditionals. The following code will type-check even if user.name is nullable, because TypeScript knows that within the if body it can't be null:
```ts
if (user.name) {
  console.log(user.name.first);
}
```
However, TypeScript doesn't narrow function call return types, because it can't know that the function will return the same value every time it's called (like signal functions do). So the above example does not work with signals:
```ts
if (user.name()) {
  console.log(user.name().first); // type error
}
```
For this simple example, it's straightforward to extract user.name() to a constant outside of the if:
```ts
const name = user.name();
if (name) {
  console.log(name.first);
}
```
But this doesn't work in templates, as there is no way to declare an intermediate variable. There are some workarounds (we could create such variables automatically, for example).
Alternative syntaxes
We did consider different approaches and discarded them for the reasons listed below.
.value
```ts
const count = signal(0);

console.log('current value', count.value); // read
count.value = count.value + 1;             // write
```
.value is a potentially viable API but wasn't chosen for the following reasons:
- it looks writable, even though for many signals it may not be;
- it looks like a plain property access;
- it is more verbose when chaining multiple signals together, ex.:
user.value.name.value.first vs user().name().first
However, there are some advantages as well. As .value is a plain property access, it does not suffer from the same type narrowing limitations that getter functions do.
Discussion point 2d: do the potential advantages of .value outweigh the disadvantages? Would you prefer that API?
Decorators
```ts
class MyComponent {
  @signal count = 0;

  increment() {
    console.log('current value', this.count); // read
    this.count = this.count + 1;              // write
  }
}
```
Decorators are great at providing metadata and / or syntactic sugar, and several people suggested using decorators. We've explored this option and discarded it for the following reasons:
- we can only decorate classes and their members, while we want to have signals usable in all places where JavaScript expressions are allowed; also, it would limit our ability to further expand the way components can be authored;
- it would be impossible to pass signal instances around (with getters there is a clear distinction between a signal instance and signal's value);
- the decorators specification is changing, so it wasn't clear whether we could build on the legacy or the new decorators specification;
- we would have to generate code for the decorated class members that would make the whole library more "magical" and Angular-specific.
Getter / setter tuple
```ts
const [count, setCount] = signal(0);
```
This approach has a desired property of segregating read and write operations. Unfortunately, we can't use destructuring assignment when defining properties in JavaScript classes:
```ts
class MyComponent {
  // this is not legal in JavaScript
  [count, setCount] = signal(0);
}
```
which made this API a non-starter.
Proxy
The initial steps of our reactivity story are focused on providing basic building blocks, the smallest primitives that we (and the Angular community) can build upon. Signals are such a building block that model reactivity for both primitive JavaScript values and complex objects. We can't proxy access to primitive values so we needed some other mechanism that could work for both primitive JavaScript values and objects / arrays.
Having said this, we do see potential usage of proxies in store-like constructs that encapsulate "bigger" JavaScript objects and / or collections. We might explore Proxy usage there and expect that community-driven, Proxy-based state management solutions will be available in the future.
Compile-time reactivity
Some UI frameworks take a compiler-based approach to reactivity: most notably Marko and Svelte. We did look into those methods and see many benefits, but at the end of the day we've decided to continue with a runtime-based solution.
Svelte-based reactivity results in an excellent developer experience, as the framework comes with built-in reactive language constructs. This greatly reduces "syntactical noise" and makes component code easier to write and read. Unfortunately this approach works only in components - as soon as we want to move reactive code outside of component boundaries (ex. to share it between components) we need to change the reactive paradigm and syntax by moving to Svelte stores. In Angular we wanted to work with the same reactive primitive across the entire application code base. Signals are usable in components, services and anywhere in the application, really.
Marko makes the reactive primitive available across the application but at the cost of "global analysis" in a dedicated compiler. In the past Angular was leaning heavily towards the "full knowledge" / "global analysis" compiler pass but it proved to be relatively slow and made Angular's compilation pipeline hard to integrate with the other tools in the JavaScript ecosystem. We want to shift Angular to local, faster compilation. Global analysis of a reactive graph would go against this goal.
Computed signals
Computed signals create derived values, based on one or more dependency signal values. The derived value is updated in response to changes in the dependency signal values. Computed values are not recomputed if none of their dependency signals have changed.
Computed signals may be based on the values of other computed signals, allowing for multiple layers of transitive dynamic computation.
Example:
```ts
const counter = signal(0);

// creating a computed signal
const isEven = computed(() => counter() % 2 === 0);

// computed properties are signals themselves
const color = computed(() => isEven() ? 'red' : 'blue');
```
The signature of the computed is:
```ts
function computed<T>(
  computation: () => T,
  options?: {equal?: (a: T, b: T) => boolean}
): Signal<T>;
```
The computation function is expected to be side-effect free: it should only access values of the dependent signals (and / or other values being part of the computation) and avoid any mutation operations. In particular, the computation function should not write to other signals (the library's implementation will detect attempts of writing to signals from computed and raise an error).
Similarly to the writable signals, computed signals can (optionally) specify the equality function. When provided, the equality function can stop recomputation of the deeper dependency chain if two values are determined to be equal. Example (with the default equality):
```ts
const counter = signal(0);

// creating a computed signal
const isEven = computed(() => counter() % 2 === 0);

// computed properties are signals themselves
const color = computed(() => isEven() ? 'red' : 'blue');

// providing a different, even value to the counter signal means that:
// - isEven must be recomputed (its dependency changed)
// - color doesn't need to be recomputed (isEven() value stays the same)
counter.set(2);
```
The algorithm chosen to implement the computed functionality makes strong guarantees about the timing and correctness of computations:
- Computations are lazy: the computation function is not invoked unless someone is interested in (reads) its value.
- Computations are disposed of automatically: as soon as the computed signal reference is out of scope it is automatically eligible for garbage collection. No explicit cleanup boundaries and / or operations are exposed by the library.
- Computations are glitch-free: it is guaranteed that a given computation is executed a minimal number of times in response to dependencies change. The computation never executes with stale / intermediate dependency values and is immune to the famous “diamond dependency problem”. The glitch-free execution doesn’t require any explicit “transaction” or “batching” operations.
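The diamond dependency problem and its glitch-free resolution can be demonstrated with a minimal lazy, pull-based sketch. This is illustrative only: dependencies are wired explicitly here (the real library auto-tracks them), and Angular's actual algorithm is more sophisticated:

```typescript
// Writes push "dirtiness" through the graph; values are only pulled
// (recomputed) on read, so the bottom of a diamond never sees a mix of
// fresh and stale inputs and runs at most once per read.
interface Dep {
  addDependent(markDirty: () => void): void;
}

function createSignal<T>(initial: T) {
  let value = initial;
  const dependents: Array<() => void> = [];
  const read = () => value;
  return Object.assign(read, {
    set(next: T) {
      value = next;
      dependents.forEach(markDirty => markDirty()); // push dirtiness, not values
    },
    addDependent(markDirty: () => void) { dependents.push(markDirty); },
  });
}

function createComputed<T>(compute: () => T, deps: Dep[]) {
  let dirty = true;
  let cached!: T;
  const dependents: Array<() => void> = [];
  const markDirty = () => {
    if (!dirty) {
      dirty = true;
      dependents.forEach(d => d()); // propagate dirtiness at most once
    }
  };
  deps.forEach(dep => dep.addDependent(markDirty));
  const read = () => {
    if (dirty) { cached = compute(); dirty = false; } // lazy: compute on read
    return cached;
  };
  return Object.assign(read, {
    addDependent(fn: () => void) { dependents.push(fn); },
  });
}

let runs = 0; // how many times the bottom of the diamond executes

const a = createSignal(1);
const b = createComputed(() => a() + 1, [a]);
const c = createComputed(() => a() * 10, [a]);
const d = createComputed(() => { runs++; return b() + c(); }, [b, c]);

console.log(d()); // 12, runs === 1
a.set(2);         // b and c both become dirty, but d is only marked dirty once
console.log(d()); // 23, runs === 2: one recomputation, no stale intermediate values
```

Note how no explicit batching is needed: because d is only marked dirty (not re-executed) when b and c change, the single read after a.set(2) sees both fresh values.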
Branching in Computations
Computed signals keep track of which signals were read in their computations, in order to know when recomputation is necessary. This dependency set is dynamic, and self-adjusts with each computation. So in the conditional computation:
```ts
const greeting = computed(() => showName() ? `Hello, ${name()}!` : 'Hello!');
```
The greeting will always be recomputed if the showName signal changes, but if showName is false, the name signal is not a dependency of the greeting and will not cause it to recompute.
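The self-adjusting dependency set can be sketched with a simple "active computation" slot. Everything below (createSignal, createComputed, dependsOn) is a hypothetical illustration of the mechanism, not Angular's implementation:

```typescript
// While a computation runs, every signal read registers itself in the
// active dependency set; the set is rebuilt from scratch on each run.
let activeDeps: Set<object> | null = null;

function createSignal<T>(initial: T) {
  const id = {}; // identity token used to record this signal as a dependency
  let value = initial;
  const read = () => {
    if (activeDeps) activeDeps.add(id); // record the read, if anyone is tracking
    return value;
  };
  return Object.assign(read, {
    set(next: T) { value = next; },
    id,
  });
}

function createComputed<T>(compute: () => T) {
  let deps = new Set<object>();
  const run = (): T => {
    deps = new Set(); // discard the previous dependency set
    const prev = activeDeps;
    activeDeps = deps;
    try {
      return compute();
    } finally {
      activeDeps = prev;
    }
  };
  return {run, dependsOn: (s: {id: object}) => deps.has(s.id)};
}

const showName = createSignal(true);
const userName = createSignal('Alice');
const greeting = createComputed(() => showName() ? `Hello, ${userName()}!` : 'Hello!');

greeting.run();
console.log(greeting.dependsOn(userName)); // true: the true branch read userName()

showName.set(false);
greeting.run();
console.log(greeting.dependsOn(userName)); // false: changing userName can no
                                           // longer cause a recomputation
```

Rebuilding the set on every run is what makes the tracking dynamic: a branch that was not taken leaves no dependency behind.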
Effects
An effect is a side-effectful operation which reads the value of zero or more signals, and is automatically scheduled to be re-run whenever any of those signals changes.
The basic API for an effect has the following signature:
```ts
function effect(
  effectFn: (onCleanup: (fn: () => void) => void) => void,
  options?: CreateEffectOptions
): EffectRef;
```
Usage example:
```ts
const firstName = signal('John');
const lastName = signal('Doe');

// This effect logs the first and last names, and will log them again
// when either (or both) changes.
effect(() => console.log(firstName(), lastName()));
```
Effects have a variety of use cases, including:
- synchronizing data between multiple independent models
- triggering network requests
- performing rendering actions
Effect functions can, optionally, register a cleanup function. If registered, cleanup functions will be executed before the next effect run. The cleanup function makes it possible to "cancel" any work that the previous effect run might have started. Example:
```ts
effect((onCleanup) => {
  const countValue = this.count();
  let secsFromChange = 0;

  const id = setInterval(() => {
    console.log(
      `${countValue} had its value unchanged for ${++secsFromChange} seconds`
    );
  }, 1000);

  onCleanup(() => {
    console.log('Clearing and re-scheduling effect');
    clearInterval(id);
  });
});
```
Scheduling and timing of effects
Effects in Angular Signals must always be executed after the operation of changing a signal has completed.
Given the variety of effect use-cases, there is a wide spectrum of possible execution timings. This is why the actual effect execution timing is not guaranteed and Angular might choose different strategies. Application developers should not depend on any observed execution timing. The only thing that can be guaranteed is that:
- effects will execute at least once;
- effects will execute in response to their dependencies changes at some point in the future;
- effects will execute a minimal number of times: if an effect depends on multiple signals and several of them change at once, only one effect execution will be scheduled.
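The coalescing guarantee can be sketched with a tiny scheduler. This is illustrative only: flush() stands in for Angular's internal scheduling, whose actual timing is deliberately unspecified, and createSignal / subscribe are hypothetical helpers:

```typescript
// Multiple synchronous writes schedule the same effect into a Set,
// so a later flush runs it once for all of them.
const pending = new Set<() => void>();

function flush(): void {
  const toRun = [...pending];
  pending.clear();
  for (const run of toRun) run();
}

function createSignal<T>(initial: T) {
  let value = initial;
  const effects = new Set<() => void>();
  const read = () => value;
  return Object.assign(read, {
    set(next: T) {
      value = next;
      // schedule dependent effects instead of running them synchronously
      effects.forEach(e => pending.add(e));
    },
    subscribe(e: () => void) { effects.add(e); },
  });
}

const firstName = createSignal('John');
const lastName = createSignal('Doe');

let effectRuns = 0;
const logNames = () => { effectRuns++; console.log(firstName(), lastName()); };
firstName.subscribe(logNames);
lastName.subscribe(logNames);

logNames();            // effects run at least once
firstName.set('Jane'); // schedules logNames
lastName.set('Smith'); // schedules logNames again - the Set coalesces it
flush();               // logNames runs exactly once for both changes

console.log(effectRuns); // 2
```

Because scheduling happens strictly after the write completes, the coalesced run always observes the final values of both signals.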
Stopping effects
An effect will be scheduled to run every time one of its dependencies changes. In this sense an effect is “always alive” and ready to respond to changes in the reactive graph. Such an “infinite” lifespan is obviously undesirable, as effects should be shut down when an application stops (or some other life-scope ends).
By default, an Angular effect's lifespan is linked to the underlying DestroyRef in the framework. In other words: effects will try to inject the current DestroyRef instance and register their stop function with it.
For situations where more control over the lifespan is required, one can optionally pass the manualCleanup option to the effect creation function:
```ts
effect(() => {...}, {manualCleanup: true});
```
If this option is set, the effect won't be automatically destroyed even if the component/directive which created it is destroyed.
Effects can be explicitly stopped / destroyed by using the EffectRef instance returned from the effect creation function:
```ts
// create an effect and capture its EffectRef
const effectRef = effect(() => {...});

// later on, explicitly destroy / stop this effect
effectRef.destroy();
```
Effects writing to signals
We generally consider that writing to signals from effects can lead to unexpected behavior (infinite loops) and hard-to-follow data flow. As such, any attempt to write to a signal from an effect will be reported as an error and blocked.
This default behavior can be overridden by passing the allowSignalWrites option to the effect creation function, ex.:
```ts
const counter = signal(0);
const isBig = signal(false);

effect(() => {
  if (counter() > 5) {
    isBig.set(true);
  } else {
    isBig.set(false);
  }
}, {allowSignalWrites: true});
```
Please note that computed is often a more declarative, straightforward and predictable solution to synchronizing data:
```ts
const counter = signal(0);
const isBig = computed(() => counter() > 5);
```
Frequently asked questions
Can I create signals outside of components / stores / services?
Yes! You can create and read signals in components, services, regular functions, top-level JS module code - anywhere you might need a reactive primitive.
We see this as a huge benefit of signals - reactivity is not exclusively contained within components. Signals empower you to model data flow without being constrained by the visual hierarchy of a page.
Any guidelines when it comes to granularity of signals?
This is a common question! Given a non-trivial object, it is not obvious how many signals should be created: one signal for the entire object? Or maybe one signal for each individual property?
Currently we can't provide hard-and-fast rules here but would suggest starting with more coarse-grained objects (one signal for the entire object) and splitting up if necessary. While it is tempting to go with many fine-grained signals, it is often not practical (creating all those signals can get verbose!) and - counterintuitively - not that performant (creating and maintaining signals in memory has an associated cost).
Should I use mutable or immutable data in my signals?
Signals work great with both, we don't want to "pick sides" but rather let developers choose the approach that works best for their teams and use-cases.
Signals library
Why a new library instead of using an existing one?
Most of the existing implementations are tightly integrated with the underlying framework needs. From the Angular perspective we want to pick and choose semantics and an API surface that matches our needs. Some examples where we do have clear preferences:
- lazy evaluation of computed properties fits well into the per-component change detection model
- we need an API where a signal’s getter and setter are co-located on the same object reference so it can be used as a field / property on the existing class-based Angular components
- we want to control the timing of effect execution (we will probably have different types of effects executed at different points in the component lifecycle)
- we can tie effect lifecycles to components, and clean up effects when components are destroyed
Finally, having a direct dependency on a 3rd-party library comes with non-trivial constraints: one needs to be aligned on concepts, implementation details and release scheduling.
On the other hand, reactive signal libraries tend to be fairly small, both in terms of the conceptual / API surface and implementation (~500 LoC).
How is it different from MobX, SolidJS, Vue reactivity?
Angular signals belong to the same family of approaches and share core characteristics, philosophy and architecture:
- declarative, push-based, synchronous reactivity
- there is a dynamic graph of dependencies that gets built and re-wired as the application runs (dependencies are auto-tracked)
- overlapping conceptual surface with the 3 main building blocks (signals, computed, effect)
Core implementation ideas are also the same:
- computations register themselves on a (global) stack and accessed (read) signals inspect this stack to register themselves as dependencies (this is why signals are expressed as some form of a getter function)
- great care is taken to make sure that computations are executed once and only once in response to dependencies change (“glitch-free execution”)
Despite the large number of similarities, there are substantial differences between the various implementations, on the conceptual, API and algorithmic levels:
- timing and order of recomputation of derived values - most notably, there is a clear distinction between eager vs. lazy computed values
- scheduling and timing of effects executions varies widely
- cleanup / destroy logic is different
- internal algorithms and data structures are different (especially when it comes to assuring glitch-free execution and cleanup), resulting in different performance characteristics
- API surface is different:
- main distinction comes from using proxies vs. getter functions
- many other subtle naming and API signature differences.
Will you publish the library as a separate npm package?
We did discuss the possibility of publishing an independent signal library but didn't do so initially for the following reasons:
- it is still in the early stages and we don't want to publish before going through the RFC feedback and settling on the exact API shape
- there are parts of the library that deeply integrate with the Angular internals (effects scheduling and cleanup is the most notable example)
- there is some practical friction in publishing a new package
We will definitely consider publishing a separate NPM package if there is value in it - please leave feedback in the RFC if you would like to see the Angular signals library available as a separate NPM package.