Stardust is a modern, general-purpose colour palette, optimised for screen use, that follows design principles similar to those of Ethan Schoonover's Solarized, and is intended to be usable as a drop-in replacement for some applications.
The palette consists of eight balanced hues, each in five shades (Midnight, Dark, Medium, Vivid, and Pastel), along with five grey scales (Black, Dark grey, Medium grey, Light grey, and White), and lightly-adapted versions of Solarized's `base` tones.
The eight hues are evenly spaced around the colour wheel, and because there is an even number of them, four of the hues are the direct complements of the other four: Pumpkin, Gold, Lime, and Spearmint complement Sky, Bilberry, Lilac, and Raspberry.
This preview version of Stardust was developed using the Jzazbz colour appearance model, with a number of manual adjustments applied to specific shades to ensure balance in practice.
Colour model conversions are performed using colorspacious and ColorAide, and plots are generated with matplotlib.
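By way of illustration (this is not the project's actual build code), a conversion of the kind described might look like the following with ColorAide. Note that ColorAide's default `lab` space is the D50-based CSS one, so the D65 variant is requested explicitly; depending on your ColorAide version, the Jzazbz space may need to be registered separately.

```python
from coloraide import Color

pumpkin = Color("#c13e28")          # Stardust "Pumpkin"
lab = pumpkin.convert("lab-d65")    # CIELAB with a D65 white point
print(lab.coords())                 # [L*, a*, b*]
```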
Stardust isn't based directly upon Solarized, but because it follows similar design principles, the Stardust hues sit with slightly-modified versions of the Solarized `base` tones quite nicely.
The `base` shades as they appear in Stardust do not match the Solarized values; this is because they've been re-specified in J'a'b' and matched with the rest of the palette. Meanwhile, Stardust's hues are quite different to Solarized's: if you're especially attached to Solarized's accent palette, Stardust may not be for you.
In the distributed theme files, "Stardust Dark" and "Stardust Light" themes use the Solarized `base` shades as default foreground and background colours, but with the Stardust hues as accents. Where "Stardust Solar Day" and "Stardust Solar Night" theme files are present, these seek to reproduce faithfully the Solarized palette layouts, but using the Stardust colour values, and so these are the nearest to a direct drop-in replacement for Solarized themes.
Stardust uses an `x.y.z` versioning scheme, similar to but not entirely matching Semantic Versioning:

- Major versions (`x`) will be incremented when there is a change to the source colour model or to one or more Jzazbz values, or when colours are added or removed altogether
- Minor versions (`y`) will be incremented when there is a change to derived colour values
- Revisions (`z`) will be incremented when there's a change to anything else (e.g., adding or adjusting a theme)
Where this diverges from Semantic Versioning is that the addition of new colours is considered a "backwards-incompatible" change (so that you only need to look at the major version number to know which colours are available), but adjustments to sRGB values (which are non-canonical) are not.
All values are subject to change.
- 2024-06-16 - Renamed Stardust High Contrast to Stardust Ultra
- 2024-06-06 - Renamed Mint to Spearmint; adjusted Raspberry, Pumpkin, and Gold mid tones
- 2024-06-04 - Corrected/re-balanced all brightness levels; adjustments to vivid and pastel shades (Lilac, Raspberry, Pumpkin, Bilberry)
- 2024-06-03 - Major change: Switch from J'a'b' to Jzazbz as the source model; all shades have been re-specified and re-adjusted
- 2024-06-02 - Palettes: added Apple Color Picker `.clr` file
- 2024-06-02 - Values: added colour values as a tab-delimited text file
- 2024-06-02 - iTerm2: adjusted Stardust Solar themes to more closely match Solarized; added Stardust Ultra theme.
- 2024-06-01 - Initial preview (all values subject to change)
Stardust is a carefully-chosen, balanced subset of a colour palette developed throughout 2023/24 that forms part of a much more comprehensive design system. Shades were selected initially in sRGB, subsequently re-modelled in CIELAB, then J'a'b', and finally Jzazbz.
If your application isn't listed here, you're welcome to use the colour values; pull requests containing samples are gratefully received, and credit will naturally be given for any contributions. Please do include a link to any documentation that explains the file format, so that theme files can be generated rather than needing to be manually updated when values change.
- Apple Color Picker (copy to `~/Library/Colors`)
There are several versions of Stardust provided for iTerm 2:
- Stardust uses the `base` tones (like Solarized), but maps the eight Stardust hues to the 16 standard- and high-intensity ANSI colours as most applications expect
- Stardust Solar is a direct port of the Solarized Dark & Light palettes: as with Solarized, these remap all but two of the high-intensity ANSI colours to `base` tones
- Stardust Pro is a grey-on-black or grey-on-white theme with the eight Stardust hues mapped to the 16 ANSI colours; these are closest to "traditional" colour terminal themes
- Stardust Ultra is similar to Stardust Pro, but uses darker shades on a white background, and lighter shades on a dark background, in order to increase contrast. You may wish to experiment with iTerm 2's "Minimum contrast" profile setting, but 40% offers a good balance.
Note that the Stardust Solar themes seek to reproduce faithfully the Solarized palette arrangements (notwithstanding the accent colour hue differences between Solarized and Stardust), which is inconsistent with iTerm 2's bundled Solarized theme: the bundled theme uses the same 16 ANSI colour assignments for both light and dark modes. Stardust Solar follows the light/dark logic as specified in the Solarized Xresources file, which inverts all of the "base" shades between modes (i.e., all `base0N` shades become `baseN` shades and vice versa; `base03` is swapped for `base3`, and so on).
The Stardust palette is available as a set of CSS Variables.
Stardust has not been designed with specific support for non-sRGB gamuts, nor have its tones been matched to any dye/pigment combinations. However, that doesn't mean that Stardust is entirely unsuitable for print use, particularly with respect to the medium tones, whose shades ought to lie within the gamut of most colour processes, and the CIELAB values should aid in achieving consistent results.
A future version of Stardust may include specific support for non-screen use of the palette, but there are no current plans to do so. Contributions are of course welcome.
To say that colour is complicated would be a monstrous understatement; colour is complicated in ways that we are still learning about today. Digital colour, and in particular digital colour synthesis (i.e., reproducing colours using combinations of red, green, and blue light emitters driven by digital signals) is itself a complex, constantly-evolving field of scientific and industrial research. New display technologies are being continually developed, often vastly altering our ability to reproduce colours on screens.
This short introduction is unlikely to do the subject justice.
Light, and so colour, is conveyed through wave packets of electromagnetic radiation that travel in a straight line through curved spacetime until they collide with some matter, such as that making up your retinal cells, or the paint on the wall nearest to you.
The amount of energy in one of these wave packets is proportional to its frequency, which is inversely proportional to its wavelength. Or to put it another way, the radiation frequency, wavelength, and energy of a single photon (wave packet) are different ways of expressing the same thing. Perhaps counterintuitively, this means that brighter light doesn't mean its individual photons have any more energy—there are just more of them—whereas if the light were bluer, they would.
(Obviously, it's possible for the total energy in a lot of redder photons to add up to more than the total energy in a few bluer photons, but it's not especially relevant to this discussion).
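In symbols, for a single photon of frequency $\nu$, wavelength $\lambda$, and energy $E$:

$$E = h\nu = \frac{hc}{\lambda}, \qquad c = \lambda\nu$$

where $h$ is Planck's constant and $c$ is the speed of light. Brightness corresponds to the number of photons arriving per second, not to the energy of each one.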
For practical purposes, we split the electromagnetic spectrum into different frequency bands which we tend to refer to with names like "shortwave radio", "X-rays" and "gamma rays", based upon the common characteristics and applications of radiation whose frequency lies within that band.
The 420-750 terahertz frequency band (wavelengths of between roughly 700 and 400 nanometres respectively) is termed the visible light band, or visible spectrum, so-called for the probably-obvious reason that it's the range within which typical human photoreceptor cells have good frequency response.
This range therefore corresponds broadly with the colours you can see looking at a rainbow or at warm light through another type of prism: red is towards the 420 THz end of the band, having a wavelength of up to about 700 nm, and violet towards 750 THz (with a much shorter wavelength of about 400 nm).
Radiation outside of this range is all around us, but humans can't see it directly: for example infrared lies just below the visible light band, and ultraviolet just above it, although all electromagnetic radiation follows the same rules (even if we are still figuring out what some of them are).
If you're speaking about radiation of a single wavelength, then its colour could perhaps be said to be defined by its wavelength: reds are one band of frequencies, oranges a little further along, then yellows, greens, blues, indigos, and finally violets. Indeed, the colour of laser light is usually specified in terms of its wavelength.
But lasers are quite special in that they're designed (to the extent possible) to emit only one wavelength of light: that's not a property shared by most of the light sources humanity has experienced throughout the majority of its existence. Moreover, there are colours that don't appear in a rainbow: you may have heard for example that magenta "doesn't exist".
In a (very real) sense, you can think of each of the colours of the rainbow as being like a pure sine-wave tone at a particular pitch, and the names of the colours are sort of like the names of musical notes, a subject which rivals this for complexity. And just as most sounds we hear don't consist of single, pure sine-wave tones, most light we see doesn't either.
Radiation from the Sun is emitted across quite a broad spectrum of wavelengths, which is to say it produces waves of electromagnetic radiation at many different frequencies simultaneously: you probably knew that "white" sunlight is not any one colour in the rainbow, but all of them mixed together (indeed, it has to be all of them mixed together, because that's what a rainbow shows you). A significant proportion of the radiation is scattered and deflected by the Earth's atmosphere before reaching us on the surface, although not all of the harmful radiation is filtered before it gets here.
It's generally unwise and counterproductive to look more than fleetingly at the Sun, and given that light travels in straight lines, most of the daylight we see has not come directly from our star to our eyes, but instead has reached us after it's interacted with some matter—such as that floating around in the atmosphere, or the environment around you—and each of those interactions will alter the light in some way.
You probably remember learning at school that the colours of things are dictated by which wavelengths of light they absorb and which they reflect, and this—depending upon your point of view—you may find to be a perfectly good model, or one that leads to more questions than answers (such as "what does it mean for a photon to be reflected by an atom? Are atoms… shiny?")
A complete picture of photon-matter interactions is probably beyond the scope of this document, but one way to look at it is that all of the wave packets that manage to collide with matter (rather than zipping through it unimpeded) are absorbed. The difference is what happens to that energy: one option is that it's turned into heat. It's probably fair to say that most radiation that collides with matter is absorbed by it and converted to heat.
(Matter which absorbs all light within the visible spectrum in this way would appear completely black to human vision no matter the light shone upon it; it's highly sought-after for some applications, and remains (appropriately) elusive).
But converting the photon's energy into kinetic energy is just one option. If the photon has just the right energy (and so wavelength) that an electron absorbing it can jump up to another energy level, then that jump and subsequent fall back will result in the emission of more electromagnetic radiation: i.e., a new photon. Reflection, refraction, and photoluminescence can all be thought of as variations on this same process, where the direction, the wavelength, and, in the case of phosphorescence, the number of emitted photons and the duration over which they're emitted, differ.
The colour of something could therefore be defined as being the combined intensities (number of photons) emitted by this process at all visible wavelengths over some specified short period of time—and if you pointed a spectrometer at it, you'd get just that data which could be plotted on a chart that would tell you how much light was emitted by it across the visible wavelengths (and depending on the equipment, quite far beyond)—exactly like an audio spectrum analyser shows you how much bass, mid-range, or treble is in a sound signal.
But if you tried to measure the same thing again a few hours later, you'd likely find you get very different data: this is because the relative amounts of light at different wavelengths being emitted still depends upon the relative amounts being absorbed—what we might call "environmental factors" generally, but in the case of light and colour, we can be more specific: lighting conditions—which we'll come back to later.
You could try to arrange for consistent lighting conditions before trying to take your spectrogram, but for the most part humans are able to tell what colour something is without doing that. You could say that colour seems to be relative.
Before you can even look at the spectrogram of your colour, we have to deal with what happens when these photons actually reach your eyes. Humans, like many primates, typically have red-green-blue trichromatic vision, which means we have red-sensitive, green-sensitive, and blue-sensitive cones in our retinas, all of which generate signals in our optic nerves simultaneously, the intensities of which depend upon the amount of light in their respective ranges that they receive.
There are two interesting cases here: one is magenta, mentioned above. It's the colour we see when approximately equal amounts of red and blue, but not green, hit our retinas. The existence of fuchsias in nature is enough to tell us that this colour is very much real, but it's not a single "note" in the same way as a hue that appears in the rainbow.
With our sense of hearing, humans have the ability to focus in on one part of the audible spectrum, allowing us to pick out individual sounds or voices in a rich soundscape, but with very limited spatial awareness. If you were able to turn your spectrogram's output into a continuous wave, you would have a signal that you could play as audio, and it would produce a distinct sound for each colour of light that you pointed it at, but you would only be able to point it at one colour at a time—your brain might be able to perform clever tricks with sound, but the information density of colour vision is huge in comparison—we can't see magenta as "red and blue", but we can see a whole field of vision's worth of colours simultaneously and distinguish them.
Being able to mentally decompose what we see into reds, greens, and blues so that you could tell that magenta was "lots of red and lots of blue but not green" would be a neat trick, but probably wouldn't have conferred any evolutionary benefit as compared to what happens now: we see magenta.
Vision varies hugely across the animal kingdom. Many species can distinguish fewer colours than we typically can, because they have dichromatic or even monochromatic vision; others have a broader or quite different range to humans. Jumping spiders have been studied quite extensively in part because different sub-species have evolved trichromatic and tetrachromatic vision multiple times, sometimes employing the same mechanisms, sometimes novel ones.
The cones in our retinas do not respond only to light at one specific frequency and no other, but instead have a response curve, which peaks at a particular frequency, but then falls off as you move away from it in either direction. As with most things to do with our eyes, the precise response curves for the photoreceptors in your retinas are dependent on genetics and age.
These response curves have tradeoffs: if the response curve for one of the sets of photoreceptors is very broad, then you might be able to see a wider range of colour, but have a harder time distinguishing hues, because the broader range means there are more combinations of light wavelengths that result in the same intensity of response being sent down your optic nerve.
But these response curves bring us to the other interesting case: yellow. Yellow does appear in the rainbow, right where you'd expect it to be at roughly 580 nm. But we don't have yellow photoreceptors: instead, we infer yellow when the green cones and the red cones in the same part of our retinas are strongly stimulated at the same time and the blue cones aren't—in exactly the same way that we infer the existence of magenta.
Does this mean there are two different kinds of yellow? The yellow in the rainbow—"pure yellow", if you will—and the yellow that we perceive when we see a combination of red and green? Yes—and we have no way of telling them apart with our eyes. Fortunately, we can use this to our advantage, which we'll come back to in a moment.
You may have been told once that the primary colours are "red, blue, and yellow", and you may also have been told that it's wrong and that they're not. We know that we have red, green, and blue photoreceptors, so where on earth does this red-blue-yellow thing come from?
This is one of those things that is simultaneously correct and incorrect.
Imagine that you were to draw a circle, and then found the largest equilateral triangle that could fit within it, and put bright red at one corner of it, bright green at another, and bright blue at the third. Then, halfway between red and blue, you put bright magenta. Between red and green you put bright yellow. Between green and blue, bright cyan. This arrangement probably looks quite familiar: it's a standard colour wheel, and the relative positions of the hues are ultimately defined by how our eyes work, although colour wheels predate that understanding.
If you look at the spots on the colour wheel halfway between red, green, and blue, you'll see they're cyan, magenta, and yellow: these three primaries, together with black, are used for CMYK printing, the standard modern colour printing process.
So yellow is a primary colour? Or isn't it?
It's probably worth clarifying what we mean by "primary colour". And also probably "red" and "blue" in this context.
At its simplest, a primary colour is one that you mix with other primary colours to make different colours: its definition then depends a great deal on context. If you're working with paints, you can mix varying quantities of red, blue, and yellow to get a reasonable range of colours.
You could, then, define a Red-Blue-Yellow colour model, where each colour was defined as the percentage of red, blue, and yellow paints that you need to mix to arrive at a desired shade.
If you look at magenta and red on the colour wheel, you don't need to move them a great deal closer together (adjusting their hues as you do) before they start to look like roughly the same sort of pinkish hue, and the same happens with cyan and blue as well.
We talk about reds and blues as whole families of colours, but you'd very rarely talk about magentas or cyans, and so part of the story is that "red-yellow-blue" and "cyan-magenta-yellow" are really talking about the exact same thing—it's just that when you're talking about modern printing processes, we can be more precise.
But why yellow? And why are these primaries rotated?
What we're looking at here is not really rotation at all—showing the two sets of primaries on the same colour wheel is a little bit of a red herring—but inversion: CMY(K) and RGB are not just different colour models with different primaries, but different kinds of colour model: subtractive and additive respectively. In other words, we use cyan, magenta, and yellow because they're the inverses of red, green, and blue respectively.
Much of our understanding of colour was derived not necessarily from understanding how light or our optical cells behave, but from how pigments do: and in particular, how to mix pigments, paints, and dyes to arrive at one which reflects a desired colour. As a consequence, these colour models are subtractive: the more pigment you add, the darker the result gets.
When we talk about emitting light of a particular colour, rather than dyes, paints, and pigments reflecting it, we're working with additive colour models, because the more light you add, the lighter the shade (until it's completely white).
(The "K" or "key" in CMYK printing refers to the use of a specific blank ink or toner alongside the other three: partly because this is more much more efficient when much of what is printed is black text, but also because it produces "blacker blacks" than simply mixing lots of cyan, magenta, and yellow together).
The biggest factor in how many primaries you use is typically cost: because we have three sets of photoreceptors, three is the minimum you can use to achieve a broad gamut, but you don't have to stop there.
For the longest time, incandescence was the hottest thing in light production: either by setting things on fire, or by trying very hard to not set them on fire. This was variously inefficient, extremely imprecise from a colour perspective (despite valiant efforts involving translucent gels), and quite perilous when it came to the whole naked flame thing; yelling "fire!" in a crowded theatre was something that unfortunately had to happen quite a lot.
As well as great leaps in theoretical physics, the 20th century also heralded significant developments in photographic, movie production, and television technology, all of which help form the foundation of our understanding of light and colour today. Perhaps knowing what you do now about photons and electrons, the idea of making pictures appear on a screen by firing an electron beam at phosphors to excite them into giving off photons doesn't seem all that wild.
Thankfully (given the picture size of contemporary televisions), interesting states of matter and semiconductors have rendered the vacuum tube-based display technologies obsolete for all but the most niche of applications. And nowadays, the biggest difference between televisions and computer monitors is that the former come with a computer built-in and weird default settings: a TV, a laptop, a desktop monitor, a phone, and a tablet can all use the same display technologies, and what follows is generic enough to apply to all those in common use.
Colour is reproduced on screens somewhat analogously to the way that moving pictures are: by doing just enough to provide the right sorts of signals that your eyes and your brain will interpret them to get the desired result, whether that's by displaying a sequence of still images quickly enough that it looks like smooth motion, or by emitting combinations of red, green, and blue light that fool your retinas into seeing a huge range of colours.
And it's not that a display can produce different shades of red, green, or blue in each of its pixels: only the intensity of each can be varied; the wavelengths are fixed. Just by varying intensity levels, light can be produced that looks to us like hot pink, or deep teal, or bright white, or yellow.
When you see yellow on screen, you're seeing red and green sub-pixels really close together, closely enough that you can't distinguish them. It's not even that your brain is being tricked, so much as the cones in your retinas: they simply can't tell the difference between "red and green photons arriving together in the same part of your field of vision" and "yellow photons arriving".
Most of us know from first-hand experience that this approach of tricking us into seeing motion and colour works pretty well in general, but it's also not without challenges.
First, there is the gamut of a display: that is, the range of colours it is able to reproduce. There are many colours that displays cannot reproduce. If you want to specify colours, it's therefore important that you work within the gamut of the device that will display (or print) them.
Second, there's consistency: every display is different, most are not professionally calibrated, and the rigour of factory calibration can vary wildly. Consistency is even more challenging if you need to reproduce colours across media: when devising a palette that needs to be suitable for both web and print use, for example.
Third, there's subjectivity: every single human's eyesight and colour vision are different. We have common reference points, but that doesn't mean we see the same colours: we might look at the same patch of grass and agree that it's a rich green, but we have no way of knowing that we see it exactly the same way, especially not in relation to other colours.
If you were to take a photo of that same patch of grass, you'd immediately notice that it doesn't look quite the same on your phone's screen as it does right in front of you. With careful adjustment of filters and white balance, you might manage to get it close, but there's still something not quite right. Why?
Partly this is down to your phone's display and the range of colours its pixels are capable of reproducing, but your phone's camera also has its own gamut (which may not entirely line up with the display's), and our old friend lighting conditions plays a big role both in the capture of the photo and in how colour appears on a display.
There are also limits to how much all of this can be compensated for in post-processing (either through manual adjustments to the photo, or as a result of intelligent display adjustment technology, the sort increasingly found in higher-end phones and laptops).
Fortunately, improvements in camera, display, and print technology in recent decades have come hand in hand with improvements to the ways that we represent and manage colour in digital systems. It wasn't that long ago that colour values in web pages or image files were interpreted as being, in effect, the raw RGB values to send to your computer monitor, regardless of its characteristics. Nowadays, the values are interpreted as existing within a reference space, specifically sRGB, and then transformed into a colour space specific to the target device, using a colour profile—virtually every device capable of reproducing colour sold in the last two decades has a colour profile available for it.
The combination of colour profiles and reference spaces has greatly improved our ability to achieve consistency across a range of devices. Meanwhile, contemporary image and video formats don't typically store red, green, and blue values at all, but instead use a differently-arranged colour model to suit their needs (such as Y'CbCr), which will often have a much wider theoretical gamut than any output device.
But plenty of challenges remain: colour profiles, like factory calibration, vary significantly in quality; and more importantly, they are only ever the "average". How much this matters depends upon the application: if you care that the photo you just took will look pretty much like you see it on your screen when you send it to a friend, colour profiles have basically solved this problem. If you're a professional designer, or a filmmaker, or a TV graphics editor, or somebody designing safety-critical systems, then they help, but only so much.
How do you find complementary colours in RGB? How do you increase lightness and decrease intensity? How do you guarantee minimum contrast levels in a range of lighting conditions? How do you guarantee those contrast levels when the colours are printed onto a reflective metal roadsign?
Colour systems such as those from PANTONE and RAL have been the traditional answer to these questions, and will doubtless continue to be for some time to come. They work by defining what is now a huge number of named colours, and for each have specified not only the red, green, and blue values for on-screen use, but also the specific combinations of different kinds of inks needed to achieve consistent results. They also provide physical samples in a variety of forms, and support many different kinds of inks—many with a gamut far outside of sRGB. In these cases the RGB values are considered to be just a rough approximation, like a low-resolution preview (but where the resolution is colour space instead of physical dimensions), and professionals will rely on the physical samples and published specifications, which include information about contrast levels under different lighting conditions.
Systems of this scale are not produced without significant human effort, but they rely on colour models to do a lot of the heavy lifting. Whilst RGB is one family of models, there are others—as mentioned above—and the choice of model is driven by the mathematical properties it exhibits: for example, whether "lightness" is defined as a single axis, or computed from a combination of axes as it is in RGB.
And whilst you can straightforwardly compute hue, saturation and lightness values from sRGB primaries and vice versa, there's no requirement that any colour model have a linear relationship with sRGB, and indeed most don't.
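As a quick illustration of that straightforward relationship, Python's standard library can do the round trip (note that `colorsys` orders its result as hue, lightness, saturation):

```python
import colorsys

# Hue/saturation/lightness from sRGB components scaled to 0..1,
# using Stardust's "Pumpkin" (#c13e28) as the input.
h, l, s = colorsys.rgb_to_hls(193 / 255, 62 / 255, 40 / 255)
print(f"hue {h * 360:.0f}°, saturation {s:.0%}, lightness {l:.0%}")

# ...and back again:
r, g, b = colorsys.hls_to_rgb(h, l, s)
```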
A standard like sRGB means that we have a pretty good idea of what wavelengths of light will be emitted by a display for a given set of RGB colour values (on average, with error bars, and so on). But how well do those wavelengths actually match up with human photoreceptors? How much does that vary? How much does that matter?
As noted in the introduction, the current version of Stardust was developed using a colour appearance model (or CAM), and specifically the Jzazbz model, which is one of the "Lab" family of models (which also includes CIELAB, OKLab, and so on).
The easiest way to consider Lab models is:
- The first axis represents "lightness", typically on a 0-100 scale (although values can fall outside of this range)
- Then, for a given lightness level, the other two values represent coordinates on that plane — together, these represent both the hue and the intensity (or "chroma")
- If you plot a circle on the plane for a given lightness, with (0, 0) at the centre, then the radius of the circle will be the chroma, and the angle of any point around that circle will represent the hue (see the sketch below)
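A minimal sketch of that rectangular-to-polar relationship; the same arithmetic applies to CIELAB, Jzazbz, and the other Lab-family spaces:

```python
import math

def lab_to_lch(lightness, a, b):
    """Rectangular Lab-style coordinates -> polar LCh form."""
    chroma = math.hypot(a, b)                    # distance from the neutral axis
    hue = math.degrees(math.atan2(b, a)) % 360   # angle around it, in degrees
    return lightness, chroma, hue

def lch_to_lab(lightness, chroma, hue):
    """Polar LCh form -> rectangular Lab-style coordinates."""
    return (lightness,
            chroma * math.cos(math.radians(hue)),
            chroma * math.sin(math.radians(hue)))
```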
In principle, the colour space represented by such models is usually unbounded, but we are generally only interested in the subset of it which can actually map to values in the sRGB space. Colour values that don't map neatly to sRGB are termed "out of gamut", which essentially means that one or more of the RGB values would be less than 0 or greater than 1.0 (or 255, depending upon how you represent your RGB values) and gets "clipped" at that level. High Dynamic Range displays (and their accompanying colour models) have a broader gamut, and so an sRGB "1.0" is not the same as an HDR "1.0".
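A gamut check is therefore nothing more exotic than a range check. A naive sketch (real gamut-mapping algorithms are considerably more careful about preserving hue and lightness than per-channel clipping is):

```python
def in_srgb_gamut(rgb):
    """True if every component of a 0..1-scaled RGB triple is in range."""
    return all(0.0 <= v <= 1.0 for v in rgb)

def clip_to_srgb(rgb):
    """Naive gamut mapping: clamp each component independently."""
    return tuple(min(1.0, max(0.0, v)) for v in rgb)
```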
In appearance models, the aim is that the distance between points within the colour space is proportional to the perceived difference between those two shades. In practice, it is impossible to arrive at a single model that is straightforward to work with, produces meaningful values at a wide range of chroma and brightness levels, and accurately represents the appearance differences between shades on real-world devices under various different realistic viewing conditions.
These limitations, coupled with the limited and uneven gamut of sRGB, mean that it's not sufficient to simply select some defined brightness and intensity levels and then split the colour wheel up evenly: at higher intensity levels, there's a higher chance of straying outside of sRGB's gamut—and it's more pronounced with some hues than others. Where this happens, you have multiple distinct colours (in your source colour space) that end up with the same sRGB values. Meanwhile, some shades need to have their intensity increased in order to appear evenly-matched with others at the same notional intensity and brightness level.
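To see the effect for yourself, you can sweep eight evenly-spaced hues at a fixed lightness and chroma and ask which survive the trip to sRGB. A sketch using ColorAide's CIE LCh space as a stand-in (the lightness and chroma figures here are illustrative, not Stardust's parameters):

```python
from coloraide import Color

LIGHTNESS, CHROMA = 50, 60   # illustrative values only

for i in range(8):           # eight evenly-spaced hues
    hue = i * 360 / 8
    colour = Color("lch", [LIGHTNESS, CHROMA, hue])
    print(f"{hue:5.1f}°  in sRGB gamut: {colour.in_gamut('srgb')}")
```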
Rather than complicate the palette definition by using different models for the different colour variants, the current version of Stardust is defined using a single general-purpose model, and then a series of adjustments are applied to specific shades. Without a significant amount of data gathering and analysis, this will remain the most subjective aspect of the process. Fortunately, colour appearance models are an active area of research, and so hopefully future versions of Stardust can be more data-driven.
Stardust's hue names are chosen to be evocative, rather than attempt to match a well-known or standard colour name: there are no "reds", "blues", or "greens", for example, because the problem quickly becomes "which red?", particularly in the larger palette from which Stardust is subsetted.
Using evocative names can also aid in translating the names of shades: "Dark sky" (English) and „dunkler Himmel“ (German) are both evocative of the same concepts to speakers of their respective languages, for example. Of course, care still needs to be taken that colour names don't end up being too similar to one another (this is why "Bilberry" is not "Blueberry"): for example, "Raspberry" translated to German is „Himbeere“, which could be seen as too similar to „Himmel“ (Sky). This particular quandary will not be resolved in this release of Stardust.
All of the files in this repository that are influenced in some way by the individual colour values in the Stardust palette were generated by a fairly extensive (if not necessarily high-quality, from a coding-standards perspective) Python tool, whose role could best be characterised as a "colour publication system", and which was written specifically for this purpose.
For multiple reasons, it's unlikely that this code will be released as open source, and on a practical basis, describing what it does and how it works is arguably much more useful.
The tool, called Chromatik, processes JSON files as input and produces various kinds of output — SVG renderings of the palette, an Objective-C program that will create and save an NSColorList, the CSS variables, and so on.
Each JSON file, which specifies a Colourset, may contain (a sketch of such a file follows this list):
- Global options (the title of the palette, which formats to generate plots in, etc.)
- Specific colours, with their values specified in any supported colour space
- Variants of those colours (e.g., "Pastel gold"), whose values are specified in the same way
- Colour generation parameters
- Bulk adjustment parameters to create variants that weren't explicitly specified
- Imports of other Coloursets; the import may be restricted to particular hues, variants, or even individual colours
- Individual adjustments to make to specific colours
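Chromatik's schema isn't published, so purely as an illustration of the structure just described, a Colourset file might look something along these lines (every key name here is hypothetical):

```json
{
    "title": "Example Colourset",
    "colours": {
        "spearmint": { "jzazbz": [0.111, -0.12, 0.0] }
    },
    "variants": {
        "pastel": { "space": "jzczhz", "chroma": 0.036, "lightness": 0.211 }
    },
    "imports": [
        { "from": "greys.json", "variants": ["medium"] }
    ]
}
```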
A colour's values can be specified in more than one space simultaneously; Chromatik will only convert values from one space to another if they're not already specified. Unfortunately, this also means that no checks are performed to confirm that the supplied values are all within a reasonable range of each other—supplied values are simply assumed to be canonical.
In more extensive palettes such as Stardust's, a colour as specified at the top level of a Colourset is taken to be the base shade of that hue: "Spearmint" or "Lime" for example. Each hue can then have any number of variant shades. Different hues don't all need to have the same variants, but if bulk-adjustment parameters were specified for a particular variant, then all hues will have a colour generated in that variant if one isn't already specified.
Those "bulk-adjustment parameters" are simply absolute or relative values in one or more of a colour space's axes. For example, a pastel variant might be specified as 5% chroma and 95% brightness in the J'C'h' space, which is the polar form of J'a'b' — Chromatik simply takes the base shade, converts it into whichever colour space the adjustment is to be performed in, applies the adjustments to those values as specified in the JSON, and creates a new colour object from those values, and attaches it to the base shade's object as a variant.
Individual adjustments work the same way, but target a specific colour once all generation has happened. Individual adjustments can also rename a colour if the scheme doesn't fit in an individual case: for example, we always refer to white as "white" and not "pastel grey" or something.
Generation happens in a similar way to adjustment, albeit prior to it: the JSON specifies the colour space, the number of divisions, and which axes have a fixed or limited range. The JSON also specifies a list of name mappings, either as an array or a dictionary; if a dictionary, one of the axes (e.g., the hue angle) is used as the lookup key. This means that if the range or number of divisions is altered, the names won't get out of sync with the colours themselves (as long as their hues are still in the generated set). Manually-specified colours will always override generated colours.
One Colourset can import another, either in whole or part. Imports can be limited to specific individual colours, particular hues as a whole, and particular variants. Stardust works in this way: its JSON file doesn't contain any colour values, nor adjustments, nor generation parameters, just imports and information about the various outputs that should be produced. Indeed, splitting the palette up into multiple source files in this way is what makes it feasible to derive Stardust as a subset in the first place. Imports happen before adjustments are applied to the Colourset doing the importing, which means it can perform additional adjustments on top of those already applied.
Once a Colourset has been loaded, including all of its imports, it's passed through various generators, which are simply Python classes that turn Coloursets into useful output. A simple generator is the one that writes the tab-delimited text file in this repository, for example. In the Stardust build process, several generators are run in sequence and then the results (including this README with its big table of colour values) are copied to the Stardust public repository and the results committed.
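As a sketch of the simplest kind of generator described here, with hypothetical class and attribute names (this is not Chromatik's actual code):

```python
class TabDelimitedGenerator:
    """Writes one row per colour: name, hex value, and sRGB components."""

    def generate(self, colourset, path):
        with open(path, "w", encoding="utf-8") as out:
            out.write("Name\tHex\tR\tG\tB\n")
            for colour in colourset:        # assumes Coloursets are iterable
                r, g, b = colour.srgb       # ...and expose 0-255 sRGB values
                out.write(f"{colour.name}\t{colour.hex}\t{r}\t{g}\t{b}\n")
```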
This does have the slightly unfortunate side-effect (besides an unpleasant commit log) of re-generating the SVG plots, which contain both embedded timestamps and random values (internal object identifiers), even if the colour values haven't changed. Perhaps there's a way to prevent matplotlib from doing that, or to only commit the SVGs if their PNG equivalents differ.
The palette from which Stardust is drawn consists of 72 hues in a number of different chroma/lightness variations, which is far too large to achieve good results through manual adjustment alone.
One approach might be that each individual axis of each adjustment is treated as a kind of "adjustment layer" that applies to the whole colour space, but with an intensity which drops off the further away (in the other two dimensions) you get from the target colour, for example using the inverse-square law multiplied by a spread factor (a spread of zero meaning it only applies to that colour and none of its neighbours).
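A sketch of what that weighting might look like; this is entirely hypothetical, a design idea rather than anything implemented:

```python
def adjustment_weight(distance, spread):
    """Hypothetical falloff for the "adjustment layer" idea above: full
    strength at the anchor colour, inverse-square decay scaled by a spread
    factor elsewhere; spread == 0 affects only the anchor itself."""
    if distance == 0:
        return 1.0
    if spread == 0:
        return 0.0
    return min(1.0, spread / distance ** 2)
```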
This approach would mean that rather than adjusting colours, the colour space itself is adjusted, using the named colours as reference or anchor points. This starts to sound a great deal like a colour profile, and so perhaps it would be best to just learn how to specify and apply those instead.
The Jzazbz values are canonical. D65 is used as the reference white point where relevant. RGB values are sRGB unless otherwise noted.
The following table is also available in tab-delimited text format.
Name | Hex | R | G | B | L* | a* | b* | Jz | az | bz |
---|---|---|---|---|---|---|---|---|---|---|
Pumpkin | #c13e28 | 193 | 62 | 40 | 46 | 51 | 42 | 0.111 | 0.0806 | 0.0806 |
Midnight pumpkin | #3f0600 | 63 | 6 | 0 | 10 | 26 | 17 | 0.0333 | 0.0424 | 0.0424 |
Dark pumpkin | #670a00 | 103 | 10 | 0 | 20 | 39 | 34 | 0.0555 | 0.0636 | 0.0636 |
Vivid pumpkin | #ff5034 | 255 | 80 | 52 | 71 | 85 | 68 | 0.1749 | 0.1167 | 0.1167 |
Pastel pumpkin | #ffe0d2 | 255 | 224 | 210 | 94 | 17 | 14 | 0.211 | 0.0255 | 0.0255 |
Gold | #9c7200 | 156 | 114 | 0 | 50 | 6 | 112 | 0.111 | -0.0 | 0.138 |
Midnight gold | #2d2000 | 45 | 32 | 0 | 13 | 1 | 26 | 0.0333 | -0.0 | 0.06 |
Dark gold | #4d3700 | 77 | 55 | 0 | 24 | 3 | 51 | 0.0555 | 0.0 | 0.09 |
Vivid gold | #ffc600 | 255 | 198 | 0 | 83 | 7 | 154 | 0.1833 | 0.0 | 0.165 |
Pastel gold | #fef3c7 | 254 | 243 | 199 | 96 | -3 | 23 | 0.211 | -0.0 | 0.036 |
Lime | #009c16 | 0 | 156 | 22 | 56 | -61 | 53 | 0.111 | -0.0849 | 0.0849 |
Midnight lime | #002f00 | 0 | 47 | 0 | 15 | -30 | 24 | 0.0333 | -0.0424 | 0.0424 |
Dark lime | #005000 | 0 | 80 | 0 | 28 | -43 | 41 | 0.0555 | -0.0636 | 0.0636 |
Vivid lime | #00f02e | 0 | 240 | 46 | 83 | -83 | 72 | 0.1666 | -0.1061 | 0.1061 |
Pastel lime | #d6ffd4 | 214 | 255 | 212 | 97 | -23 | 18 | 0.211 | -0.0255 | 0.0255 |
Spearmint | #00a87c | 0 | 168 | 124 | 57 | -85 | 7 | 0.111 | -0.12 | -0.0 |
Midnight spearmint | #003423 | 0 | 52 | 35 | 16 | -42 | 3 | 0.0333 | -0.06 | -0.0 |
Dark spearmint | #00573c | 0 | 87 | 60 | 29 | -62 | 5 | 0.0555 | -0.09 | 0.0 |
Vivid spearmint | #00ffc0 | 0 | 255 | 192 | 85 | -118 | 9 | 0.1666 | -0.15 | 0.0 |
Pastel spearmint | #b9fff2 | 185 | 255 | 242 | 98 | -28 | 3 | 0.211 | -0.036 | 0.0 |
Sky | #008fc3 | 0 | 143 | 195 | 53 | -32 | -41 | 0.111 | -0.0849 | -0.0849 |
Midnight sky | #002b3d | 0 | 43 | 61 | 14 | -17 | -18 | 0.0333 | -0.0424 | -0.0424 |
Dark sky | #004867 | 0 | 72 | 103 | 26 | -22 | -28 | 0.0555 | -0.0636 | -0.0636 |
Vivid sky | #00dcff | 0 | 220 | 255 | 78 | -41 | -58 | 0.1666 | -0.1061 | -0.1061 |
Pastel sky | #c1feff | 193 | 254 | 255 | 96 | -15 | -14 | 0.211 | -0.0255 | -0.0255 |
Bilberry | #6356db | 99 | 86 | 219 | 45 | 41 | -67 | 0.111 | -0.0 | -0.12 |
Midnight bilberry | #181546 | 24 | 21 | 70 | 10 | 18 | -30 | 0.0333 | 0.0 | -0.06 |
Dark bilberry | #2c2475 | 44 | 36 | 117 | 20 | 29 | -46 | 0.0555 | -0.0 | -0.09 |
Vivid bilberry | #816bff | 129 | 107 | 255 | 70 | 97 | -143 | 0.1833 | -0.0296 | -0.2079 |
Pastel bilberry | #cce1ff | 204 | 225 | 255 | 90 | 5 | -29 | 0.2005 | -0.0067 | -0.0474 |
Lilac | #aa12b7 | 170 | 18 | 183 | 42 | 72 | -50 | 0.111 | 0.0827 | -0.0827 |
Midnight lilac | #35003a | 53 | 0 | 58 | 8 | 36 | -24 | 0.0333 | 0.0424 | -0.0424 |
Dark lilac | #590061 | 89 | 0 | 97 | 18 | 53 | -36 | 0.0555 | 0.0636 | -0.0636 |
Vivid lilac | #e43af4 | 228 | 58 | 244 | 58 | 84 | -59 | 0.1499 | 0.0902 | -0.0902 |
Pastel lilac | #ffdbff | 255 | 219 | 255 | 93 | 24 | -18 | 0.211 | 0.0255 | -0.0255 |
Raspberry | #bb1f71 | 187 | 31 | 113 | 42 | 64 | -7 | 0.1082 | 0.102 | 0.0 |
Midnight raspberry | #40001f | 64 | 0 | 31 | 9 | 37 | -4 | 0.0333 | 0.06 | -0.0 |
Dark raspberry | #6a0037 | 106 | 0 | 55 | 18 | 54 | -6 | 0.0555 | 0.09 | 0.0 |
Vivid raspberry | #ff21a0 | 255 | 33 | 160 | 59 | 86 | -9 | 0.1499 | 0.1275 | -0.0 |
Pastel raspberry | #ffd6ef | 255 | 214 | 239 | 92 | 28 | -3 | 0.211 | 0.036 | -0.0 |
base00 | #627d89 | 98 | 125 | 137 | 51 | -7 | -9 | 0.111 | -0.0154 | -0.0197 |
base03 | #002633 | 0 | 38 | 51 | 13 | -8 | -12 | 0.0333 | -0.0215 | -0.0276 |
base02 | #0b3340 | 11 | 51 | 64 | 19 | -8 | -12 | 0.0444 | -0.0215 | -0.0276 |
base01 | #56707b | 86 | 112 | 123 | 46 | -7 | -9 | 0.0999 | -0.0154 | -0.0197 |
base0 | #899499 | 137 | 148 | 153 | 61 | -3 | -4 | 0.1332 | -0.0062 | -0.0079 |
base1 | #96a1a6 | 150 | 161 | 166 | 66 | -3 | -4 | 0.1443 | -0.0062 | -0.0079 |
base2 | #f9f1e0 | 249 | 241 | 224 | 95 | 0 | 9 | 0.211 | 0.0021 | 0.0149 |
base3 | #fff9e8 | 255 | 249 | 232 | 98 | 0 | 9 | 0.2176 | 0.0021 | 0.0149 |
Medium grey | #787878 | 120 | 120 | 120 | 50 | 0 | 0 | 0.111 | -0.0 | 0.0 |
Black | #000000 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0.0 | 0.0 |
Dark grey | #3a3939 | 58 | 57 | 57 | 24 | 0 | 0 | 0.0555 | 0.0 | 0.0 |
Light grey | #bab9b9 | 186 | 185 | 185 | 75 | 0 | 0 | 0.1666 | -0.0 | 0.0 |
White | #ffffff | 255 | 255 | 255 | 100 | 0 | 0 | 0.2221 | -0.0 | 0.0 |