As in any large body of work, no single technique was sufficient to create an image for every icon. Throughout the family, a number of different approaches were used, and the results range across a spectrum from literal depiction to conceptual abstraction.
Principles for the imagery:
Some of the new icons were closely based on the older version of the same icon, translated into the new visual style. This provided continuity to users when upgrading.
More importantly, where the original already used the most recognizable symbol of the operator’s function, the imagery did not need to be reinvented.
The Box Plot operator visualizes the input data’s statistical attributes as a box plot. This is a standardized way of displaying the distribution of a dataset, and the distinctive whisker is an immediately recognizable figure to the audience. On the icon, the whisker is used to represent the whole (synecdoche).
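The five figures a box plot draws can be computed directly. A minimal sketch, using Tukey's common 1.5 × IQR whisker convention (an illustration of the standard chart, not necessarily the application's exact rule):

```python
import numpy as np

def box_plot_stats(values):
    """Compute the five figures a box plot draws: quartiles and whisker ends.

    Whiskers follow the common Tukey convention: each extends to the most
    extreme data point within 1.5 * IQR of the box.
    """
    v = np.sort(np.asarray(values, dtype=float))
    q1, median, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo = v[v >= q1 - 1.5 * iqr].min()   # lower whisker end
    hi = v[v <= q3 + 1.5 * iqr].max()   # upper whisker end
    return lo, q1, median, q3, hi

# The value 100 falls outside the upper whisker, so it would be drawn
# as an outlier point rather than extending the whisker.
print(box_plot_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))
```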
The commitment to recognizability is perhaps the most dominant of the guidelines.
Because of the specialized domain and audience for the application, “commonly known” means commonly known within the specialized world of data science. The user profile for this environment takes familiarity with the underlying functions as a given, whether deep knowledge (bona fide data scientists) or a basic understanding (analytics consumers).
The linear regression modeling icon takes advantage of this: it displays an idealized representation of the algorithm’s result, which is itself a graphic visualization.
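That “idealized representation” is simply the least-squares line through a scatter of points. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical scatter: a linear trend (slope 2, intercept 1) plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# The least-squares fit: the single clean line the icon depicts.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"y = {slope:.2f}x + {intercept:.2f}")
```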
Pivot and Unpivot demonstrate a different way of communicating the concept.
Most of the transformation category operators use some variant of this approach, where the process or action is indicated in some manner. In both pivot and unpivot, the arrow communicates change.
While there may not be an innate sense of pivoting a data set, the action arrow creates a visual connection to the transformation involved and an image that is learnable.
Some icons need to represent a process that has no intuitive visual equivalent: there is no generally accepted visual identification, and the function has no representation outside of the underlying math.
Many of the more esoteric machine learning algorithms fall into this group.
For these cases, conceptual abstraction is used.
The AdaBoost algorithm, for example, has no visual equivalent, no typical image result, no simple explanation. For the AdaBoost operator, the design concept represents the algorithm’s process of classification using multiple other classifiers, and uses the progression of bands (vertical and horizontal) to conceptualize the movement towards the visually weighted center.
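For readers unfamiliar with the algorithm, the boosting process the icon alludes to can be sketched as a toy loop: weak “stump” classifiers are fit one after another, each concentrating on the examples the previous ones got wrong, and the ensemble combines them by weighted vote. This is an illustration of the general technique on 1-D data, not the product’s implementation:

```python
import numpy as np

def adaboost(x, y, rounds=5):
    """Toy AdaBoost on 1-D data with threshold "stumps" as weak learners."""
    n = len(x)
    w = np.full(n, 1.0 / n)        # example weights, refocused every round
    stumps = []                    # (threshold, sign, alpha) per round
    for _ in range(rounds):
        best = None
        for t in np.unique(x):     # exhaustively pick the best stump
            for sign in (1, -1):
                err = w[np.where(x >= t, sign, -sign) != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # this stump's vote weight
        w *= np.exp(-alpha * y * np.where(x >= t, sign, -sign))
        w /= w.sum()               # up-weights the examples it got wrong
        stumps.append((t, sign, alpha))
    return stumps

def predict(stumps, x):
    """Ensemble prediction: the weighted vote of all the stumps."""
    votes = sum(a * np.where(x >= t, s, -s) for t, s, a in stumps)
    return np.sign(votes)

x = np.arange(10)
y = np.array([1, 1, -1, -1, -1, -1, 1, 1, 1, 1])  # no single stump fits this
stumps = adaboost(x, y)
print((predict(stumps, x) == y).mean())           # ensemble training accuracy
```

No single threshold separates this labeling, but the weighted combination of several stumps does, which is the essence the icon’s layered bands try to capture.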
Abstract imagery is the most difficult type for users, since a person has no instinctive understanding of the icon; but the requirement to be visually distinctive helps form a connection through usage, making the icon recognizable over time.
When gathering ideas, there were innumerable sources that fertilized the ground. Just a few of them:
The visual language gives form to the graphic ideas.
Continuing the visual direction and transformation started in the platform redesign project, the operator icon style took inspiration and influence from pictograms and ideograms. Added to this were a set of guidelines for the visual attributes:
Taken as a whole, there are certain basic elements that are common to many operators, regardless of the function or algorithm, related to their shared domain of machine learning.
To take advantage of this, a collection of visual building blocks was used throughout the entire family, contributing to the shared visual language of the operators. Data, for example, is represented by a small square throughout the collection. The data element appears in over thirty percent of the icons, from an entire table (the sampling actions) to an individual mote (null replacement).
The operators are organized into seven primary categories:
Each category has a unique hue, and each icon has a solid color background according to its category.
In addition to the basic aesthetic appeal, the color becomes a learned quality that helps the user work with the application.
Color provides mnemonic identification by type (i.e. category of function), and the ability to organize the operations. At a high level, the color coding allows a person to follow the general process of an analytic workflow and to parse the process, from the initial data input through modeling and prediction.
An interesting consequence of this is that images of the visual workflow are often used by data scientists in presentations to document and explain their work to other people.
For the categorical colors, saturation and brightness were controlled for uniformity across the entire family. This balances distinction against usage habits, optimizing the experience for extended periods of use (a typical data scientist session runs several hours). Toning down saturation and brightness creates a more neutral overall display and minimizes visual fatigue.
This also reserves higher saturation for specific cases: the selected state of an icon is subtly more saturated and brighter, befitting its role as the focus of attention.
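The scheme amounts to defining each category color by hue alone, with saturation and brightness held uniform. A minimal sketch; the specific S/V values and the hue are hypothetical, not the product’s palette:

```python
import colorsys

def category_color(hue, selected=False):
    """Build a category background color from its hue alone.

    Saturation and brightness are shared across the whole family
    (hypothetical values below); the selected state gets a subtle
    bump in both to draw focus, rather than a different color.
    """
    s, v = 0.45, 0.80            # toned-down defaults for long sessions
    if selected:
        s, v = 0.55, 0.90        # subtle emphasis for the selected state
    r, g, b = colorsys.hsv_to_rgb(hue, s, v)
    return "#{:02x}{:02x}{:02x}".format(
        round(r * 255), round(g * 255), round(b * 255))

print(category_color(0.58))        # a blue-ish category, normal state
print(category_color(0.58, True))  # same category, selected
```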
On top of the category color, the foreground imagery uses only black and white, applied in specific transparency steps to render the intended idea.
In addition to the main hue-category connection, each icon category has a distinct base shape which reinforces the unique identification. In some cases, shape is varied within a group to add an additional level of differentiation. Both data input and flow control are in the “tools” category, but their shape is different. In the transformation category, four trapezoid variations are used.
Each operator icon also has four states that change contextually in response to user actions while editing the workflow. The states convey the relationships or possible relationships between the operators when an operator is selected.
States are applied as visual treatments of the base icon.
Icon design is a design world unto itself, where the macro-design rules are sometimes insufficient or need to be bent, and empirical perception is the final law.
Constructing each icon with the defined palette and drawing the imagery within the style guidelines produced an icon that matched the others; it did not necessarily produce an icon that was finished. Each image needed to be assessed individually. Changes below the threshold of “noticeable” were made to every icon, so that the final result was still the same image, but one that worked at 60 pixels (the display size in the workflow editor). This was the final visual optimization step.
A 65% opacity setting for a black element (representing a piece of data) might work well visually when the background is medium blue, but not well enough when the background is amber, where the opacity needed to be changed to 68%. In another case, the imagery for the PCA (Principal Component Analysis) algorithm is the same for both PCA Modeling and PCA Predictor, but the two are not the same icon: the larger size of the predictor operator caused the elements to look awkward. To resolve this, the proportions were changed very slightly, and some elements were shifted a pixel or two to maintain equilibrium.
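Why the same opacity reads differently over different hues comes down to alpha compositing. A sketch of the arithmetic, using a simplified per-channel linear mix and hypothetical background colors:

```python
def composite_over(alpha, fg, bg):
    """Alpha-composite a foreground color over an opaque background.

    Simplified linear per-channel mix in 8-bit sRGB (real editors may
    do this gamma-corrected, but the principle is the same).
    """
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

BLACK = (0, 0, 0)
medium_blue = (70, 110, 180)   # hypothetical category backgrounds
amber = (255, 190, 60)

print(composite_over(0.65, BLACK, medium_blue))  # → (24, 38, 63)
print(composite_over(0.65, BLACK, amber))        # → (89, 66, 21)
print(composite_over(0.68, BLACK, amber))        # → (82, 61, 19)
```

The amber background is much brighter, so 65% black leaves a lighter, lower-contrast mark than it does over blue; nudging the opacity to 68% pulls the composited result back down.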
In the cases where new visual imagery needed to be created, pen and paper sketching was usually the first step for visual thinking, accompanied by abundant research. A whiteboard is invaluable, as well, for collaborative sessions and thinking in a larger format.
Eventually, the production process moved into Adobe Illustrator for high fidelity work and the final icon development in a vector format. Practices for working with vector icons include two important rules:
Exporting from Illustrator as a vector image gave the default normal state icon file.
Because Photoshop, in my experience, offers more control and consistency over image effects, and makes it easier to construct a batch process, each normal-state icon was then run through a custom Photoshop action to create the icons for the other three states.
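The actual pipeline used a Photoshop action for this step. Purely as an illustrative stand-in, the same batch idea can be sketched in plain Python: run every normal-state icon through one treatment function per state. The state names and treatments below are hypothetical, not the product’s actual effects:

```python
def dim(pixel, factor=0.5):
    """Hypothetical "disabled"-style treatment: scale the alpha channel down."""
    r, g, b, a = pixel
    return (r, g, b, round(a * factor))

def brighten(pixel, amount=25):
    """Hypothetical "selected"-style treatment: push each channel toward white."""
    r, g, b, a = pixel
    return tuple(min(255, c + amount) for c in (r, g, b)) + (a,)

STATE_TREATMENTS = {"disabled": dim, "selected": brighten}

def build_states(icon):
    """icon: a flat list of RGBA tuples. Returns one icon variant per state."""
    return {state: [fn(p) for p in icon]
            for state, fn in STATE_TREATMENTS.items()}

icon = [(70, 110, 180, 255), (0, 0, 0, 166)]   # two sample pixels
for state, pixels in build_states(icon).items():
    print(state, pixels)
```

Keeping each state as a pure function of the normal-state image is what makes the batch reproducible: regenerate one icon and all of its states stay consistent.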
Finally, image optimization (variously: ImageOptim, pngquant, TinyPNG, PiedPiper compression) whittled the files down to be as svelte and speedy as possible.