The “Drawing Gesture” interface was a large part of what made Comp unique and fast. Objects could be added to the canvas with the stroke of a pen. Styles could be copied and pasted. Multiple objects could be selected. Items could be cloned. All in all the app contained around 20 different gestures, and even more were prototyped behind the scenes as we worked to get the interface right.
We worked hard to make the drawing gesture input as freeform and unconstrained as possible. Shapes could be made up of a single stroke or multiple lines. Shapes could be drawn one at a time and tweaked, or multiple shapes could be added at once. Undo and redo of drawn but not-yet-added shapes was integrated seamlessly with the app’s history system.
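One way to picture that seamless integration is a single history stack shared by committed document edits and the strokes of a shape still being drawn. The sketch below illustrates the idea; the class and method names are hypothetical, not Comp’s actual API:

```python
# Minimal sketch: each pen stroke of a not-yet-placed shape is pushed as an
# undoable action onto the same stack as document edits, so undo while
# drawing removes the last stroke rather than the last committed change.

class HistoryStack:
    def __init__(self):
        self._undo, self._redo = [], []

    def push(self, action):
        """Record an already-applied action; new input clears redo."""
        self._undo.append(action)
        self._redo.clear()

    def undo(self):
        if self._undo:
            action = self._undo.pop()
            action.revert()
            self._redo.append(action)

    def redo(self):
        if self._redo:
            action = self._redo.pop()
            action.apply()
            self._undo.append(action)


class AddStroke:
    """One pen stroke of an in-progress, not-yet-added shape."""
    def __init__(self, shape, stroke):
        self.shape, self.stroke = shape, stroke

    def apply(self):
        self.shape.append(self.stroke)

    def revert(self):
        self.shape.remove(self.stroke)


history, shape = HistoryStack(), []
for stroke in ["top edge", "right edge", "bottom edge"]:
    action = AddStroke(shape, stroke)
    action.apply()
    history.push(action)

history.undo()   # removes the last stroke, not a committed document edit
history.redo()   # restores it
```

Because drawing actions live on the same stack as everything else, the user never has to think about which “kind” of undo they are invoking.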
To delete one or more objects, you could scribble over them. This gesture was one of the most complex: it could take a much more varied shape; it was recognized, and began deleting the objects underneath, while still being drawn; it also deleted drawn shapes that had not yet been placed; and shapes in the foreground were deleted before shapes in the background. The pressure of the Apple Pencil, the speed of the gesture, and the exposed area of the underlying objects were all taken into account to make an experience that felt simple and intuitive.
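The kind of heuristic described above can be sketched as a per-object confidence score. Everything here, the weights, the threshold, and the function names, is illustrative rather than Comp’s actual implementation:

```python
# Hedged sketch: while a scribble is in progress, each object under it gets
# a score from pencil pressure, stroke speed, and how much of the object's
# exposed (unoccluded) area the scribble covers. Objects crossing a
# threshold are deleted foreground-first as the gesture continues.

def scribble_score(pressure, speed, covered_fraction):
    """Combine the three signals into one confidence value in [0, 1]."""
    # A harder, faster scribble over more of an object reads as more
    # deliberate; clamp each input before weighting.
    clamp = lambda x: max(0.0, min(1.0, x))
    return (0.3 * clamp(pressure)
            + 0.2 * clamp(speed)
            + 0.5 * clamp(covered_fraction))

def objects_to_delete(objects, threshold=0.5):
    """objects: list of (z_order, pressure, speed, covered_fraction).
    Returns z-orders to delete, foreground (highest z) first."""
    scored = [(z, scribble_score(p, s, c)) for z, p, s, c in objects]
    doomed = [z for z, score in scored if score >= threshold]
    return sorted(doomed, reverse=True)   # foreground before background

sample = [(2, 0.9, 0.8, 0.7),   # foreground: firm, fast, well covered
          (1, 0.9, 0.8, 0.6),   # midground: also covered enough
          (0, 0.2, 0.3, 0.1)]   # background: barely grazed
print(objects_to_delete(sample))   # foreground and midground, not background
```

A light stroke that merely grazes a background shape never crosses the threshold, which is what lets the gesture feel forgiving rather than destructive.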
Since drawing with your finger is fairly imprecise, the resulting object often needed to be resized or nudged into place. I created a set of rules to automatically size and place the objects drawn, in relation to other drawn objects being placed at the same time, the document bounds, any grids and guides in the document, and existing objects.
These rules dramatically reduced the amount of adjustment needed after drawing shapes, and they let the precision of the input inform the result: a user drawing with an Apple Pencil, for example, could closely control positioning while drawing, whereas a hastily drawn object created with a finger would be adjusted more to fit with existing content.
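The placement rules above can be sketched as edge snapping with an input-dependent tolerance. The thresholds and names below are hypothetical, chosen only to illustrate the idea:

```python
# Sketch: each edge of a drawn rectangle snaps to the nearest grid line,
# guide, or document bound within a tolerance, and the tolerance is wider
# for finger input than for Apple Pencil input, so a hasty finger-drawn
# shape is corrected more aggressively.

def snap_value(value, candidates, tolerance):
    """Snap a coordinate to the closest candidate within tolerance."""
    best = min(candidates, key=lambda c: abs(c - value), default=None)
    if best is not None and abs(best - value) <= tolerance:
        return best
    return value

def place_rect(left, top, right, bottom, snap_x, snap_y, pencil):
    # Pencil input is precise, so snap only when very close; finger
    # input gets a much wider net (values are illustrative).
    tol = 4.0 if pencil else 16.0
    return (snap_value(left, snap_x, tol),
            snap_value(top, snap_y, tol),
            snap_value(right, snap_x, tol),
            snap_value(bottom, snap_y, tol))

# A 50 pt grid plus the document bounds at 0 and 400.
lines = [0, 50, 100, 150, 200, 250, 300, 350, 400]
print(place_rect(47, 12, 212, 188, lines, lines, pencil=False))
# finger tolerance pulls every edge onto the grid: (50, 0, 200, 200)
```

With `pencil=True` the same rectangle keeps its hand-placed edges (only the 3 pt miss at `left` snaps), which is exactly the “precise input is trusted more” behavior described above.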
In the first version of Comp, a prominently-placed button would open and close a separate mode for drawing gestures. But we quickly realized drawing was too important a part of the experience to sit one step removed. For version 2.0, we eliminated the dedicated mode, using a system of gesture recognizers and intelligent rejection of accidental input to unify the app’s editing experience.
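A mode-less design like this ultimately comes down to classifying each touch as it arrives. The toy classifier below is an assumption about how such arbitration might work, not Comp’s actual recognizer logic, and every threshold is invented:

```python
# Hypothetical sketch: every touch is offered to the drawing system first,
# which claims it only if it looks like a deliberate stroke; otherwise the
# touch falls through to selection or is ignored as accidental contact,
# so no explicit drawing mode is needed.

def classify_touch(touch_type, contact_radius_pt, travel_pt):
    """Return 'draw', 'select', or 'ignore' for one touch sequence."""
    if touch_type == "pencil":
        return "draw"          # Pencil contact is always deliberate input
    if contact_radius_pt > 20.0:
        return "ignore"        # a broad contact patch: likely a resting palm
    if travel_pt < 5.0:
        return "select"        # a stationary tap selects rather than draws
    return "draw"              # a moving fingertip is treated as a stroke

print(classify_touch("pencil", 1.0, 80.0))   # draw
print(classify_touch("finger", 30.0, 0.0))   # ignore (palm)
print(classify_touch("finger", 8.0, 2.0))    # select (tap)
print(classify_touch("finger", 8.0, 120.0))  # draw (finger stroke)
```

Routing every touch through one decision point like this is what lets drawing, selecting, and scrolling coexist on the same canvas without a mode switch.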