Pen-based Interaction

Beat Signer

Web & Information Systems Engineering (WISE) Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium, e-mail: [email protected]

This is the author's version and the final article is available at: Beat Signer. Pen-based Interaction. Jean Vanderdonckt, Philippe Palanque, Marco Winckler (Eds.). Handbook of Human Computer Interaction, Springer, Cham, 2025, Springer Reference, https://doi.org/10.1007/978-3-319-27648-9_102-1

Abstract The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook on potential future challenges and directions for pen-based human-computer interaction.

1 Introduction

The predecessors of today's pens have been used for more than five thousand years in the form of reed pens, ink brush pens or quill pens. Pens together with their corresponding writing surfaces—including papyrus, parchment or paper—have therefore been optimised for the task of writing over thousands of years. This implies that nowadays, everybody knows how to use pen and paper but also has certain expectations based on a mental model when using these artefacts, a fact that must be taken into account when designing new forms of pen-based human-computer interaction.

The telautograph by Elisha Gray was the first electronic device to capture pen-based input in the form of handwriting or drawings (Gray, 1888). A pen was connected to some potentiometers on a transmitting device, transforming any pen writing into electrical impulses that could then be transmitted to a remote receiver as illustrated in Fig. 1(a). On the receiving device, the electrical impulses would be transformed back into the corresponding movements of a pen attached to a servomechanism for remotely reproducing the pen movements. A modified version of the telautograph called the telewriter was later operated over regular phone lines and, for instance, used for remote signatures. While the telautograph and telewriter were reproducing pen-based input at a remote site, a next step was an electromechanical device for the real-time detection of handwritten numerals patented by Goldberg (1914).

Pen-based interaction was also described in Vannevar Bush's seminal paper 'As We May Think', introducing the memex (memory extender) as a natural way for human-machine interaction where document pages could be annotated by using a pen, in addition to the definition of associative trails between individual pages captured on microfilm:

"Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, 'memex' will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. [. . .
] He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him." (Bush, 1945)

As we will see in the next section, nowadays pen-based interaction (or pen-based computing) can not only be used for direct object manipulation and handwriting recognition on computer screens, tablet computers or smartphones, but existing pen and paper-based interactions and workflows have also been augmented with digital services and information as proposed by Mark Weiser in his visionary article on ubiquitous computing, where computers would "disappear" and become embedded into everyday objects and environments:

"At breakfast Sal reads the news. She still prefers the paper form, as do most people. She spots an interesting quote from a columnist in the business section. She wipes her pen over the newspaper's name, date, section, and page number and then circles the quote. The pen sends a message to the paper, which transmits the quote to her office." (Weiser, 1991)

When designing new forms of pen-based human-computer interaction, it is essential to respect the affordances of pen and paper as well as existing practices and workflows that have co-evolved and been optimised over thousands of years. While certain pen and paper-based tasks might be digitised and new forms of multimodal pen-based user interfaces are evolving, it can sometimes be challenging to support specific pen and paper-based activities—such as reading and annotating across multiple spatially arranged documents—in digital space, as discussed by Sellen and Harper (2001) in 'The Myth of the Paperless Office'.
Therefore, we should carefully investigate the diversity of inking behaviours afforded by pen and ink, as done by Riche et al (2017), who discussed nine pen affordances and related activities for "analogue pens" and compared them to the affordances of digital pens to inform the design of new forms of pen-based user interfaces and eventually improve digital pen and ink experiences. The study of existing solutions for pen-based interaction with tablet computers and the analysis of factors such as the perceived latency, accuracy or unintended touch interactions with a tablet further help in understanding differences between physical and digital inking experiences (Annett, 2014).

This article on pen-based interaction is organised as follows. We start with the history of the most relevant pen-based interfaces for human-computer interaction that have been developed over the last 60 years. We then discuss different technologies that can be used for tracking and developing pen-based interactions and outline different pen-based interaction styles. After presenting various pen-based applications for different application domains, we discuss some challenges and future directions for pen-based interaction.

2 Pen-based Interaction

While we have already introduced some of the early mechanical or dry photography-based pen interfaces, including the telautograph and the memex, we now present the major developments in pen-based human-computer interaction from the 1950s on.

2.1 History

The Stylator (stylus translator) by Dimond (1957) that is schematically shown in Fig. 1(b) (based on Dimond (1957)) was the first pen-based input device for real-time single-character handwriting recognition to control a computer. Its plastic writing surface consisted of several well-defined areas separated by seven embedded conductors.
While writing a character with the pen connected to a power source, individual conductors were energised when crossing the boundaries between areas, which was then translated into a specific character via several flip-flops. While the Stylator could only capture single characters, the RAND tablet released in 1963 by Davis and Ellis (1964) was one of the first data tablets supporting pen-based freehand drawings and can be seen as a predecessor of today's graphics tablets. The 10 × 10 inch tablet surface offered a resolution of 100 lines per inch and could thus detect about one million unique discrete positions of a wired pen with a pressure-sensitive tip. In combination with the Graphical Input Language (GRAIL), 53 handwritten letters, numbers and other symbols or shapes could be recognised and used as input to control a computer.

Fig. 1 Early pen-based devices and technologies: (a) Telautograph, (b) Stylator, (c) Sketchpad, (d) DigitalDesk

Instead of writing on a separate tablet, Ivan Sutherland's Sketchpad graphical communication system (Sutherland, 1963) shown in Fig. 1(c) (CC BY-SA 3.0 by Kerry Rodden) was the first solution enabling the direct manipulation of graphical objects (e.g. touch, select and drag) by interaction with a light pen on a cathode ray tube (CRT) computer monitor. Note that the light pen as an input device for human-computer interaction was developed in the mid-1950s at the Massachusetts Institute of Technology (MIT), where Sutherland was also conducting his research, a few years before the computer mouse was invented by Douglas Engelbart and Bill English in 1963. The seminal work on pen-based direct manipulation of graphical objects in Sketchpad formed the basis for many new interface ideas and graphical user interfaces (GUIs).
Inspired by the RAND tablet as well as the pen-based interactions on Sketchpad, Alan Kay came up with the vision and concept of a personal dynamic medium the size of a notebook called the Dynabook, which would not only allow for keyboard input but also direct pen-based interactions with the Dynabook's screen (Kay and Goldberg, 1977).

In terms of commercial pen-based solutions, the Wang Freestyle system (Levine and Ehrlich, 1991) in 1988 was an early attempt by Wang Laboratories to establish pen-based input. The system consisted of a touch-sensitive tablet and a stylus that could be used to annotate documents displayed on a computer screen. Further, in a multimodal interface, the pen input could be combined and synchronised with recorded voice comments—made through a phone handset—containing supplemental information about the parts of the digital document selected via the pen input as well as with pen-written notes. Unfortunately, the very high pricing and focus on senior executives prevented the adoption of this promising multimodal input technique by a broader audience. Just one year after the launch of the Wang Freestyle, GRiD Systems Corporation introduced GRiDPad, the first commercial pen-controlled tablet computer.

In the early 1990s, multiple operating systems with pen support were introduced as so-called pen computing platforms. The PenPoint OS (Carr and Shafer, 1991), which would later run on tablet computers such as the EO Personal Communicator, was introduced by GO Corporation in 1991. In addition to a large set of gestures that could be used consistently across the operating system and individual applications, PenPoint also introduced the notebook interface metaphor as an integral part of the operating system.
Only a year later, in 1992, Microsoft released its Windows for Pen Computing software suite as an add-on for their Windows 3.1 operating system, providing various tools such as a notebook for pen-based sketches and handwriting recognition. Pen-based computing on Personal Digital Assistants (PDAs) addressed the mass market for the first time in the form of the Newton MessagePad, which was running on the Newton operating system and was introduced by Apple in 1993. However, these early tablet and PDA solutions for pen-based computing were not very successful, mainly due to the limitations of existing hardware at that time, offering limited screen readability, poor battery life as well as a lack of memory and processing power. Despite the introduction of dedicated alphabets with single-stroke characters only—such as Graffiti or Unistrokes—these pen-based text entry techniques often resulted in poor handwriting recognition due to the mentioned hardware limitations.

While these pen-based tablet and PDA solutions often try to "simulate" paper and offer pen and paper functionality such as sketching, notetaking or annotating on digital devices, an alternative approach is to augment existing pen and paper workflows with digital information and services. Pierre Wellner introduced a pioneering approach for such an augmented pen and paper workflow at Xerox EuroPARC in the form of the DigitalDesk shown in Fig. 1(d) (courtesy of Pierre Wellner). Wellner's idea was to realise the "opposite of the desktop metaphor", where not a desktop environment is simulated on a computer, but where the computer is brought to the real desktop environment:

"Instead of making the workstation more like a desk, we can make the desk more like a workstation. This is the aim of the DigitalDesk. On this desk, papers gain electronic properties, and electronic objects gain physical properties.
Rather than shifting more functions from the desk to the workstation, it shifts them from the workstation back onto the desk." (Wellner, 1993)

By placing a camera above the desk surface (over-desk video), a user's interaction with documents lying on the desk, including pen-based annotations, could be tracked. Supplemental digital information could then either be shown on a separate screen or be directly projected onto the table and on top of individual documents via a projector mounted above the desk. The camera could capture the parts of a document's text that a user pointed to with their finger, as well as a user's pen-based writing on different paper documents, and perform some optical character recognition (OCR). The basic idea of the DigitalDesk has been further extended and refined in later projects such as the EnhancedDesk (Koike et al, 2001). However, a drawback of these so-called augmented desk solutions is that we lose a main feature of paper documents, namely their mobility, and thereby affect the affordances of paper documents. In order to access any superimposed digital functionality or information, a paper document can only be used at dedicated places equipped with such an augmented desk system.

An alternative solution for capturing pen and paper interactions was introduced by the Pen Computing Group at A.T. Cross in the form of the CrossPad electronic notepad. Any paper document could be placed on top of the portable CrossPad graphics tablet and a special inking pen tracked by the graphics tablet would be used to write on the paper document. A limitation of this approach was that only the interactions on a single page could be tracked at once, and it would therefore not be possible to work across multiple documents. While the pen-operated PDAs of the early 1990s did not result in long-term commercial success, research continued on how to provide pen and paper-based interaction and workflows in digital space.
An example is the XLibris "active reading machine" developed by FX Palo Alto Laboratory (Schilit et al, 1998). XLibris offered a pen-based interface that was running on a tablet display tethered to a PC. The active reading process was enabled by supporting free-form digital ink annotations as well as the lookup of supplemental information.

After the introduction of the Microsoft Tablet PC aiming to support various digital notetaking activities in 2000, it took another decade before advances in hardware, a reduction in weight and dedicated applications led to the broader acceptance of tablet computers in the consumer market, starting with Apple's introduction of the first iPad in 2010. Nowadays, pen-based input on tablet computers, smartphones, desktop solutions such as the Microsoft Surface Studio shown in Fig. 2(a) or Wacom's Cintiq display as well as interactive tabletop and wall surfaces is well accepted for specific pen-based tasks. Combining electronic ink (e-ink) tablets with pen-based input further resulted in dedicated notetaking solutions such as the reMarkable 2 tablet, striving to offer the most natural notetaking experience on digital devices.

On the other hand, a significant milestone in integrating existing pen and paper workflows with digital services and information was the introduction of Anoto's digital pen and paper technology (Forster, 2001), enabling high-resolution paper-based position tracking based on a special printed dot pattern (nearly invisible to the human eye) covering each page of a paper document that is detected by a camera integrated into a so-called digital pen, which also offers regular inking. Over the last two decades, there has been extensive academic work on interactive paper solutions based on the digital pen and paper technology (Signer, 2005; Yeh, 2007; Weibel, 2009; Liao, 2009; Ispas, 2011; Steimle, 2012; Heinrichs, 2015) and we present some applications in Sect. 2.4.
Further, there are various commercial applications for bridging the paper-digital divide based on Anoto's technology, ranging from notetaking solutions by Livescribe to form-filling applications and learning platforms for children such as Ravensburger's tiptoi audio-digital learning solution or LeapFrog's LeapReader reading and writing system.

2.2 Technologies

In the following, we discuss different technologies that can be used for pen tracking to enable pen-based interaction. Thereby, we will distinguish between solutions supporting the tracking of a pen on digital devices—we are going to call them pen-and-device (PaD) solutions—and technologies that have been developed to capture and digitally augment existing pen and paper interactions that we will call pen-and-paper (PaP) solutions. Note that some of the presented pen tracking techniques might be used for both PaD as well as PaP solutions.

2.2.1 Pen-and-Device (PaD) Technologies

The light pen was the first approach for supporting pen-based input on digital devices, and while it has been in use for multiple decades, it is nowadays not used much anymore due to changes in display technologies. The light pen was initially developed as part of the Whirlwind project at MIT in 1955 and has, for instance, been applied in Ivan Sutherland's Sketchpad graphical communication system introduced earlier. A light pen is a relatively low-cost solution making use of the method of operation of cathode ray tube (CRT) monitors, where an electron beam scans the screen line by line. When the light pen is positioned on a CRT screen, it will detect the change in brightness of nearby pixels caused by the passing electron beam. By comparing the timing of these changes with the information from the beam control unit, we can derive the pen's position on the screen.
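This timing comparison can be sketched in a few lines of Python. All timing constants below are illustrative assumptions rather than the parameters of any real CRT; the point is only that the elapsed time since the start of a frame determines the scan line (row) and the position within that line (column):

```python
# Sketch: deriving a light pen's screen position from CRT beam timing.
# The timing constants are illustrative assumptions, not real monitor values.

LINE_TIME_US = 63.5    # assumed time for the beam to scan one line (µs)
PIXEL_TIME_US = 0.1    # assumed time per visible pixel within a line (µs)
HBLANK_US = 10.0       # assumed horizontal blanking before visible pixels (µs)

def pen_position(us_since_frame_start: float) -> tuple[int, int]:
    """Map the time at which the light pen senses the brightness pulse
    (relative to the start of the frame) to a (row, column) position."""
    row = int(us_since_frame_start // LINE_TIME_US)
    time_in_line = us_since_frame_start % LINE_TIME_US
    col = round((time_in_line - HBLANK_US) / PIXEL_TIME_US)
    return row, max(col, 0)
```

Since one pixel corresponds to a fraction of a microsecond, small timing jitter directly translates into positional noise, which is one reason for the limited accuracy of light pens.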
While light pens have been used in various application domains, including computer-aided design (CAD) solutions, the accuracy of the detected position is limited due to rounding errors and noise. Further, prolonged use of the light pen on vertical screen surfaces might result in some arm fatigue—something we should take into account when designing pen-based interfaces for vertical surfaces. As a result of the replacement of CRT monitors with LCD screens, light pens have lost importance, since the discussed electron beam-based tracking technique no longer works on LCD screens.

When talking about pens to be used on smartphones, LCD desktop screens or interactive tabletop and wall surfaces, we must distinguish between passive and active pens. A passive pen has no built-in electronics and is often used for simple pen-based interactions and input on capacitive multi-touch screens. It provides the same functionality as finger-based input, based on a pen tip made of conductive foam or rubber. Given the smaller point of contact, these passive capacitive pens typically offer a higher precision than finger-based input. A user can also track the pen's position better than a position covered by their finger. There has further been research to add additional functionality to simple conductive pens by augmenting them with magnets to support pen identification as well as pen interactions around mobile devices (Hwang et al, 2013).

Fig. 2 Modern pen-based devices and technologies: (a) Microsoft Surface Studio 2, (b) Wacom Intuos Pro graphics tablet, (c) Massless Pen, (d) Anoto Digital Pen (Magicomm G303)

In contrast, active pens such as Samsung's S Pen, Microsoft's Surface Pen or the Apple Pencil contain some electronics and offer additional features like input buttons or touch sensitivity via a pressure sensor in the pen's tip. These active pens work in combination with a digitiser embedded in the screen surface.
When an active pen gets in proximity to the screen's surface, the digitiser's magnetic field induces a current in the pen. The pen creates its own electromagnetic field that—together with some optionally superimposed information about the pen pressure or pressed buttons—is detected and processed by the digitiser. While some active pens are exclusively powered by the digitiser's electromagnetic field, most pens also contain a battery. An advantage of active pens is that they provide palm rejection to avoid unintended touch input during pen interactions and offer higher precision than passive pens. Most pens can further be configured and their buttons can replace specific mouse or even keyboard input, and sometimes also some eraser functionality is provided. Given that active pens work based on an electromagnetic field, their position can also be tracked when hovering over but not touching the screen surface, supporting various above-screen interactions.

Originally, the digitiser-based approach was not integrated into screen surfaces but only used in dedicated graphics tablets such as the Wacom Intuos Pro shown in Fig. 2(b), similar to the idea of early pen-based input devices like the RAND tablet mentioned in Sect. 2.1.

We can distinguish between passive and active graphics tablets. A passive tablet switches between two modes, first using some electromagnetic induction to power the pen and then receiving the pen's signal. While this has the advantage that the pen does not require any battery since it is powered by the tablet, the switching between the sending and receiving modes results in some jitter. In contrast, in an active graphics tablet solution, the pen has its own battery and the tablet no longer has to switch modes to power the pen but constantly listens to the signal emitted by the pen, resulting in less jitter. However, the pens used on active graphics tablets are often slightly bulkier due to their battery.
Note that many major graphics tablet vendors pay special attention to the tactile writing experience, which seems to be essential for professionals such as designers, illustrators or artists who often do not like the smooth and almost frictionless interaction of pens on the glass surface of tablet computers. Therefore, different replaceable pen tips can be used to control the level of friction and tactile feeling on many of these graphics tablet solutions. While graphics tablets are mainly used in combination with a computer screen, nowadays some fully integrated solutions exist, such as Wacom’s Cintiq product line. In addition to active and passive graphics tablets, there are also capacitive graphics tablets that work similarly to capacitive multi-touch screens and can be operated with a passive pen as well as a finger. It is further possible to optically track pens in 3D space by, for instance, attaching some markers to the pen and using a tracking solution such as OptiTrack. The information can either be used when a pen is used on a 2D surface or when a user is pointing or writing in 3D space while using some virtual reality (VR) or augmented reality (AR) applications as shown for the Massless Pen in Fig. 2(c) (courtesy of Massless Corp.). A recent study on using such a 3D pen in VR/AR environments revealed that it clearly outperforms modern VR controllers for pointing tasks and it was also the preferred input device, opening new opportunities for pen-based interactions in VR and AR environments (Pham and Stuerzlinger, 2019). Further, Drey et al (2020) introduced a design space for 2D and 3D sketching interaction metaphors in virtual reality, where 2D surface-based metaphors were used for sketching straight lines and 3D mid-air sketching was applied to shape volumes. In the future, we might not only want to track the pen on a writing surface but also the immediate surroundings, including a user’s hand to enable richer forms of interactions. 
A prototype of such an enhanced pen with an integrated downward-facing camera has been proposed by Matulic et al (2020).

2.2.2 Pen-and-Paper (PaP) Technologies

Similar to pen-and-device technologies, we can distinguish between pen-and-paper technologies with active pens containing some electronics and those with passive pens that are tracked via external technologies such as computer vision-based solutions. In external computer vision-based pen tracking solutions, one or multiple cameras are used to track paper documents on a horizontal or vertical surface and projections or computer screens might be used to present supplemental information as seen in augmented desk solutions (Wellner, 1993) or interactive wall solutions where paper notes are tracked and captured by a camera system and augmented with digital information (Everitt et al, 2003). As mentioned earlier, a drawback of these vision-based solutions with fixed cameras and projections is that paper documents can only be augmented with digital services and information at dedicated places, resulting in limited mobility. Another limitation of these vision-based solutions for passive pens is that they usually perform some offline recognition of pen strokes and are thus less suitable for real-time pen interactions.

We can further distinguish between solutions that can track a pen's relative position and technologies for absolute positioning. Relative positioning solutions track a pen's relative positions on a paper document without knowing the pen's absolute position on the paper document. While these solutions are sufficient for capturing handwriting without any additional calibration, they do not support pen interactions with specific printed content on a paper document.
A first possibility for tracking pen and paper interactions is to place the paper document on a graphics tablet and slightly modify the active pen into an inking pen as, for instance, realised in the Audio Notebook (Stifelman et al, 2001) or as recently supported by different Wacom Smartpads. While these solutions work for relative positioning, absolute positioning would require careful calibration every time a new document is placed on the graphics tablet. Further, it is impossible to automatically distinguish between different documents or individual pages within a document.

Instead of using a complete graphics tablet, there exist some clip-on solutions such as the Pegasus Mobile NoteTaker or the Seiko InkLink Handwriting System that can be attached to a paper document. These clip-on devices have two or more reference points which continuously measure the distance to a special sound-emitting pen based on high-resolution ultrasonic position detection. Depending on the pen's distances from these reference points (measured based on time-of-flight), the paper clip-on device can calculate the pen's position on the paper document via triangulation. An advantage of this clip-on approach is that the position detection process is independent of the concrete medium (e.g. paper or transparencies) on which the pen is used. There are also PaD solutions based on the same ultrasonic (and infrared) positioning technique for pen-based interaction and handwriting capture on large whiteboards or projector screen surfaces, such as the Boxlight MimioTeach for classroom settings as well as more recent IR-based solutions for tracking pens on planar surfaces (Maierhöfer et al, 2024).

Another approach is to directly encode positional information on the paper documents, which can then be decoded by special hardware in the pen to detect its absolute position.
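Before turning to printed-pattern approaches, the time-of-flight triangulation performed by the clip-on devices described above can be illustrated with a small sketch. The receiver placement at (0, 0) and (d, 0) and the speed of sound are illustrative assumptions, not the specification of any particular product:

```python
import math

def tof_to_distance(seconds: float, speed: float = 343.0) -> float:
    """Convert an ultrasonic time-of-flight measurement to a distance,
    assuming sound travels at roughly 343 m/s in air."""
    return seconds * speed

def pen_xy(r1: float, r2: float, d: float) -> tuple[float, float]:
    """Triangulate the pen position from its distances r1 and r2 to two
    receivers assumed to sit at (0, 0) and (d, 0) on the clip-on bar;
    the paper is assumed to lie in the half-plane y >= 0."""
    # Intersect the two circles around the receivers with radii r1 and r2.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = math.sqrt(max(r1**2 - x**2, 0.0))
    return x, y
```

With only two reference points the mirror position y < 0 would also satisfy the distance constraints, which is why the paper is assumed to lie on one side of the clip-on bar.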
In the Paper++ research project, a grid of almost invisible barcodes (encoding positional information) was printed on paper documents with conductive silver ink (Luff et al, 2007). A specially developed pen then detected the positional information encoded in the barcodes by measuring the inductivity. Due to the relatively large size of the barcodes, the resulting pen tracking resolution was not high enough to capture handwriting and could mainly be used for the pen-based selection of elements in paper documents to support enhanced active reading.

A vision-based approach for high-resolution paper-based tracking of a digital pen has been developed by the Swedish company Anoto in the form of the so-called digital pen and paper technology (Forster, 2001). Again, positional information is directly encoded on each piece of paper, but in this case by using a special pattern of tiny visual dots. One can imagine that there is a virtual grid over a page and the dots are printed with a small displacement relative to the intersections of the horizontal and vertical lines of the grid. Each dot thus encodes a two-bit sequence defined by its horizontal or vertical displacement. Multiple dots together (a 6 × 6 matrix of 36 dots) form a unique sequence of 72 zeros and ones defining a position in a large virtual document space formed by the 2^72 possible different combinations. A special digital pen for the Anoto dot pattern, as illustrated in Fig. 2(d), contains an infrared camera embedded in the pen nib to track the pen's movement on the paper surface based on the detected dot pattern. The pen further captures additional information such as the pressure on the pen tip, its orientation and tilt. Over the years, various Anoto pens have been manufactured by Livescribe, Maxell, LeapFrog, Logitech or Anoto themselves.
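The capacity of such a dot pattern can be illustrated with a deliberately simplified sketch: each dot's displacement in one of four directions contributes two bits, so a 6 × 6 window of dots yields a 72-bit value. The direction-to-bit mapping below is an illustrative assumption; the actual Anoto coding scheme is considerably more elaborate:

```python
# Simplified sketch of the dot-pattern idea: each dot in a 6 x 6 matrix is
# displaced from its virtual grid intersection in one of four directions,
# encoding two bits; 36 dots together yield one of 2**72 possible positions.
# (The bit assignment per direction is an assumption for illustration only.)

DIRECTION_BITS = {"up": 0b00, "right": 0b01, "down": 0b10, "left": 0b11}

def decode_position(displacements: list[str]) -> int:
    """Turn the 36 per-dot displacement directions of a 6 x 6 window
    into a single 72-bit position value."""
    assert len(displacements) == 36
    value = 0
    for direction in displacements:
        value = (value << 2) | DIRECTION_BITS[direction]
    return value
```

Even at the small physical size of a 6 × 6 dot window, the 2^72 possible values are sufficient to give every such window on every page of a huge virtual document space a unique absolute position.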
Some of these digital pens store information on the pen, while others can also stream the positional information to a third-party device in real time. Further, some digital pens, such as the Livescribe WiFi Smartpen, can also record and replay hours of audio and cross-index the recorded audio with the captured handwriting. Compared to other solutions, the digital pen and paper approach is highly portable and always returns an absolute pen position, further allowing the identification of different documents and individual pages within these documents. Various models and frameworks have been investigated for mapping paper documents to their corresponding digital representation (Weibel et al, 2007). Note that the Anoto pattern can also be printed on large projection surfaces for high-precision digital pen tracking as realised by we-inspire (later acquired by Anoto). There has also been some research to integrate the pattern into existing computer screens to enable the tracking of a single pen on paper as well as on screen surfaces (Hofer and Kunz, 2010). The Neo Smartpen uses a similar approach to Anoto's digital pen and paper technology, where each page is encoded with the Ncode pattern and a camera in the pen reads the pattern to detect its absolute position within a page.

2.3 Interface and Interaction Styles

The original pen-based interaction on computer screens via Sutherland's Sketchpad graphical user interface introduced pen-based direct object manipulation, taking full advantage of hand-eye coordination to offer familiar forms of interaction. As we have seen earlier, over the last six decades many new hardware solutions for the tracking of pen input on digital devices as well as on paper documents have been developed, offering new opportunities for pen-based interaction. In the following, we highlight some of the past and ongoing research for new forms of pen-based interfaces and interactions.
The pen as a graphical input device has been characterised and compared to other input devices such as the mouse or touch screens in the three-state model of graphical input by Buxton (1990). A pen can be used for various tasks, including the input of textual data via some handwriting recognition service, the creation of sketches and drawings, the annotation of existing content, the execution of gesture-based commands or just as a pointing device to replace regular mouse input. Therefore, a major challenge in the design of pen-based interactions and user interfaces is mode switching, where the pen is either used as a mouse replacement (pointing device), as a gesture input device to execute some commands, as a general inking device for drawings and sketches or as a text input device via handwriting recognition. In many cases, we want to support some explicit mode switching (e.g. between text input and gesture-based commands), which can for instance be realised via some dedicated buttons on the pen or by making use of some of a pen's input features such as pen pressure, tilt or pen rolling (rotation). Rather than controlling the mode via the pen, it might also be defined by specific parts of the interaction surface. For example, for a form shown on a tablet computer, we might implicitly switch to text input mode as soon as the pen is used in one of the form input fields. Another possibility is that a pen might switch modes after some timeout, such as switching back to a default mode if the pen has not been used for a specific period of time. However, such timeout-based mode switches should be used carefully since users might get confused by not being aware of these implicit pen mode switches. Some pen input features that could also be used for mode switches, including pressure, tilt or pen rolling, have been further investigated for different types of pen interactions.
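A minimal sketch of such timeout-based mode switching might look as follows; the mode names, the default mode and the injectable clock are illustrative assumptions, not part of any existing pen SDK:

```python
import time
from enum import Enum, auto

class PenMode(Enum):
    POINTING = auto()   # assumed default: pen acts as a mouse replacement
    INKING = auto()
    GESTURE = auto()
    TEXT = auto()

class ModeManager:
    """Sketch of timeout-based mode switching: after a period of pen
    inactivity, the pen silently falls back to a default mode, which is
    exactly the kind of implicit switch that may confuse users."""

    def __init__(self, timeout_s: float = 5.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock          # injectable for testing
        self.mode = PenMode.POINTING
        self.last_used = self.clock()

    def set_mode(self, mode: PenMode) -> None:
        """Explicit mode switch, e.g. triggered by a pen button."""
        self.mode = mode
        self.last_used = self.clock()

    def pen_event(self) -> PenMode:
        """Called for each pen event; applies the implicit fallback."""
        if self.clock() - self.last_used > self.timeout_s:
            self.mode = PenMode.POINTING   # implicit timeout fallback
        self.last_used = self.clock()
        return self.mode
```

Surfacing the current mode in the user interface (e.g. via the cursor shape) is one way to mitigate the confusion caused by such implicit fallbacks.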
While the pressure on a digital pen's tip can of course be used to vary the thickness of a generated digital ink trail, it might for instance also be used to simultaneously indicate both the selection of and action on an object in the form of the pressure marks described by Ramos and Balakrishnan (2007). The tilt of a pen can also be used, for example, to choose between different menu items on a screen when designing pen-based interactions (Xin et al, 2011). Finally, pen rolling might serve as an additional input channel for rotating selected objects or as a pen-based alternative to a mouse scroll wheel (Bi et al, 2008). In addition to these more natural pen input features, we can also experiment with new input features for pen interaction that are not offered by traditional pens, such as the bending of a pen, which has been investigated for stroke width control or the use of radial menus in the FlexStylus prototype (Fellion et al, 2017). When entering text with a pen-based interface, we often apply some handwriting recognition. In contrast to traditional offline optical character recognition (OCR), pen-based interfaces provide additional temporal stroke information, which can be used in so-called online handwriting recognition to achieve better recognition rates. In addition to the recognition of handwritten text, pen-based interfaces often also make use of specific pen-based gestures, such as the Microsoft Application Gestures (Microsoft, 2021), for specific commands (e.g. drawing a curlicue gesture to delete the selected text in a document). In order to support customised pen gestures, there exist several frameworks and toolkits, including the general iGesture gesture recognition framework (Signer et al, 2007), as well as different gesture recognition algorithms (Magrofuoco et al, 2021).
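Many of the stroke gesture recognition algorithms surveyed by Magrofuoco et al (2021) are template matchers. The following Python sketch shows the core idea in a deliberately simplified form: it resamples, translates and scales strokes before comparing them point-by-point against stored templates, but, unlike full recognisers such as $1, it performs no rotation normalisation. All function names and parameter values are illustrative assumptions.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points along its path."""
    pts = [tuple(p) for p in points]
    path = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    step, acc, out = path / (n - 1), 0.0, [pts[0]]
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalise(points, n=32):
    """Resample, then translate the stroke to its centroid and scale it
    into a unit box so that position and size do not matter."""
    pts = resample(points, n)
    cx, cy = sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n
    pts = [(x - cx, y - cy) for x, y in pts]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def recognise(stroke, templates):
    """Return the name of the template whose normalised points have the
    smallest average distance to the normalised input stroke."""
    s = normalise(stroke)
    def score(t):
        return sum(math.dist(a, b) for a, b in zip(s, normalise(t))) / len(s)
    return min(templates, key=lambda name: score(templates[name]))
```

A production recogniser would additionally handle multi-stroke gestures, rotation invariance and rejection thresholds for non-gesture input.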
Apart from recognising handwriting or gesture input, we might also want to recognise domain-specific sketches as, for example, supported by the LADDER sketching language for user interface developers (Hammond and Davis, 2005). An important factor in pen-based user interfaces is the real-time feedback provided based on a user's pen input. While it is easy to provide visual feedback when the pen is used on a screen surface or in combination with a separate screen (e.g. when using a graphics tablet), the delivery of feedback becomes more challenging if the user interface does not consist of a screen, as often seen in pen-and-paper (PaP) solutions. In this case, a first possibility is to use output channels that might be offered by digital pens, including auditory feedback or vibration, as outlined by Liao (2009) when discussing so-called pen-top feedback. Some digital pens might also have integrated LEDs or even a small LCD display, which can be used for basic forms of feedback. With the steady miniaturisation of laser projectors, in the future it might even be possible to integrate a projector into the pen and enable a direct projection of digital information and augmentation of physical documents, as proposed in the PenLight prototype by Song et al (2009). Another possibility is to directly integrate feedback capabilities into paper documents (e.g. the illumination of areas) based on emerging paper-based electronics and novel thin-film segment-based display technologies, as demonstrated in the IllumiPaper research prototype (Klamka and Dachselt, 2017). However, note that this fusion of regular paper documents and printed electronics might affect some of the original affordances of paper documents. While in early pen-based user interfaces the pen was used to navigate standard menus, new graphical widgets were introduced to simplify the pen-based control of menus and other interface elements.
Marking menus are one of these graphical widget elements, enabling users to access items from a pop-up radial menu or by just drawing a mark in the direction of a menu item's recalled location, which users might learn over time by using the radial menu (Kurtenbach and Buxton, 1994). These pen-based interactions with widgets can not only happen when the pen is tracked on the surface but also when moving the pen above the display surface as, for instance, realised in Hover Widgets for replacing interface elements that might not offer convenient pen interaction (Grossman et al, 2006). Over the years, various pen pointing techniques for larger surfaces such as tabletops and wall displays have been developed, including Pick-and-Drop, Push-and-Pop or Bubble Radar (Aliakseyeu et al, 2006). Multi-user interaction with large wall displays or digital whiteboards introduces new challenges in terms of the shared user interface and has led to innovative ideas for multi-device interaction, where users can select items on a handheld computer and transfer them to a shared digital whiteboard via a pick-and-drop operation (Rekimoto, 1998). Some of the pen-based input techniques for pen-and-device interaction can also be applied to pen-and-paper solutions as, for example, seen with the pigtail command type delimiter used in the PapierCraft digital pen and paper prototype, combining pigtail menus with real-time pen-top feedback (Liao, 2009). A general model for pen and paper interaction taking into account the characteristics of pen and paper user interfaces (PPUIs) has been introduced by Steimle (2012). The model separates the semantic (what) and syntactic (how) level of an interaction and defines some core interactions of the CoScribe platform for collaborative paper-based knowledge work.
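The core of mark-based selection in such radial menus is mapping the direction of a pen mark to an angular sector. The following Python sketch illustrates this idea; the sector layout (item 0 centred on the positive x-axis, subsequent items counter-clockwise in mathematical y-up coordinates) and the function name are illustrative assumptions, not taken from the original marking menu implementation.

```python
import math

def marking_menu_item(start, end, items):
    """Select an item from a radial menu based on the direction of a
    pen mark from `start` to `end`. Each item owns an equal angular
    sector; item 0 is centred on the positive x-axis."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)   # mark direction, 0..2π
    sector = 2 * math.pi / len(items)            # angular width per item
    index = int((angle + sector / 2) // sector) % len(items)
    return items[index]
```

On a screen with y-down coordinates, the rotation direction would be mirrored; a real marking menu would also fall back to displaying the radial menu when the pen dwells instead of marking.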
The design of pen and paper interfaces offers less control over a user's interactions, since there is no transactional operation concept as manifested in graphical user interfaces (GUIs) in the form of modal dialogues (Signer and Norrie, 2010). Note that pen-and-paper as well as pen-and-device interactions are often combined with other modalities, and extensive research has been conducted in the domain of multimodal speech and pen interfaces (Cohen and Oviatt, 2017). While introducing various challenges in terms of multimodal fusion, these pen-based multimodal user interfaces are promising for application domains like learning, since the expressively powerful interfaces can stimulate cognition and learning. Another modality that has been thoroughly investigated is the use of touch, which is usually present in traditional pen and paper interaction, with the non-dominant hand holding or fixating the paper document. The Focus+Context environment has been introduced by Hahne et al (2009) for the combined use of pen and touch in sketching activities. In this setting, a high-resolution pen-enabled display is placed and tracked on a multi-touch tabletop. While a user is sketching on the focus area shown on the screen, the non-dominant hand can be used for touch gestures on the tabletop surface as part of the bimanual interaction. Similar bimanual interaction can also be directly offered on interactive tabletop surfaces, with the non-dominant hand's posture being used to select different pen modes and the pen-holding hand articulating different document editing transactions, thereby offering experts an alternative to widget-based document editing (Matulic and Norrie, 2013). Pfeuffer et al (2017) investigated new techniques for bimanual thumb and pen interaction on modern tablets and discussed some new interface elements. By applying additional sensing techniques (e.g.
for detecting different pen grips) for pen and touch interactions on a tablet, it is further possible to support context-sensitive tools such as a magnifier tool for detailed stroke work (Hinckley et al, 2014). An issue that often has to be addressed in pen and touch-based interactions is preventing accidental touch interaction (e.g. via palm rejection) while still recognising intentional touch gestures. Several other factors in the design of pen-based interfaces might significantly influence the user experience. For natural pen-based interactions, it is essential that there is minimal latency between a user's pen input and the system's response, for instance when rendering the strokes. The accuracy and precision of pen input are further influenced by the chosen PaD or PaP technology and might affect the quality of potential subsequent digital ink processing, including stroke beautification or handwriting recognition. Finally, the design of digital pens, including factors such as the pen weight, battery life, grip comfort, button placement and overall aesthetics, can also enhance the user experience in pen-based interaction.

2.4 Application Domains

One of the first applications of pen-based input was the verification of remote signatures, as already performed with the telautograph in 1888. However, while signatures produced by regular pens result in a static image that might be copied with some training, the use of online pen strokes in combination with additional input modalities offered by some digital pens, such as pressure, tilt and pen rolling, makes it almost impossible to reproduce a signature that has been captured with this additional metadata. Digital pens are therefore used for signature verification solutions. A main application domain of PaD and PaP technologies is notetaking.
In a study by Mueller and Oppenheimer (2014), it has been shown that in educational settings, pen-based notetaking has some advantages over taking notes on a laptop. Students who took their notes with pen and paper performed better on conceptual questions than students who used a laptop computer for notetaking. It seems that laptop notetakers more often transcribe content rather than process information and reframe it in their own words, which would be beneficial for learning. Further, a recent study by Van der Weel and Van der Meer (2024) revealed that handwriting (but not typewriting) leads to widespread brain connectivity. Dynomite by FX Palo Alto Laboratory (FXPAL) is a pen-based electronic notebook that has been designed for capturing and retrieving handwritten notes and audio recordings. Any notes written on the digital screen can be linked and augmented with audio recordings and further be annotated with special properties such as names or URLs to further classify the handwritten information for later retrieval. Later, special tablet computer applications such as InkSeine were developed for active notetaking, where the pen-based notetaking interface is combined with in-situ search functionality, enabling fast and flexible workflows in which users can freely interleave inking, searching and the gathering of content (Hinckley et al, 2007). PaD and PaP technologies might also enable the design of more natural user interfaces for specific domains, such as writing mathematical expressions, drawing chemical formulas, composing musical notation or the annotation of maps by cartographers and geographers. More recently, there are also pen-based e-ink tablet solutions such as the reMarkable 2, aiming to provide long battery life, a writing surface with a paper-like tactile writing and sketching experience and no other applications that might disturb and interrupt the notetaking or sketching experience.
Further, professional software for pen-based sketching, such as Autodesk SketchBook—a paint application specifically designed for sketching and capturing hand-drawn ideas with all the sensitivity of drawing with pencils, pens and markers on paper—has been introduced. Of course, various pen and paper-based notetaking solutions have also been developed over the last two decades. The Audio Notebook by Stifelman et al (2001) is a multimodal pen and paper-based notetaking solution. A paper document is placed on top of a digitising tablet and a user's handwritten notes are automatically synchronised with an optional audio recording, with the notes later serving as a fast index to retrieve parts of the recorded audio. A similar idea was commercialised by Livescribe with the digital pen and paper technology-based Livescribe WiFi Smartpen, which can store up to 200 hours of audio recordings on the pen, cross-index the captured handwriting with the audio recordings and transfer the information to a computer. Other digital pen and paper notetaking solutions offer enhanced natural notetaking and advanced forms of intelligent ink data processing (Ispas, 2011). Further, notetaking solutions for specific domains have been developed, such as the ButterflyNet application for field research, enabling the integration and synchronisation with pictures and samples collected while conducting fieldwork (Yeh, 2007), or the Prism hybrid laboratory notebook (Tabard, 2009). Pen and paper-based collaborative remote sketching with real-time communication has been investigated in the PaperSketch prototype by Weibel et al (2011). Different applications have also been realised for the DigitalDesk and its successors, and a mobile version based on similar ideas has been presented with a-book, a solution based on a graphics tablet with a paper overlay for the writing capture and a PDA acting as a digital interaction lens between digital and paper documents (Mackay et al, 2002).
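The timestamp-based cross-indexing of handwritten notes with audio, as used by the Audio Notebook and the Livescribe smartpens, can be sketched as follows. The class and method names are illustrative assumptions and do not reflect any product's actual API.

```python
import bisect

class InkAudioIndex:
    """A minimal sketch of ink-to-audio cross-indexing: each captured
    stroke stores the audio timestamp at which it was written, so
    tapping a stroke later can replay the audio from that moment, and
    a playback position can be mapped back to the stroke written then."""

    def __init__(self):
        self._timestamps = []   # audio time (seconds) per stroke, sorted
        self._strokes = []      # stroke identifiers, parallel list

    def add_stroke(self, stroke_id, audio_time):
        # strokes are captured in temporal order, so timestamps stay sorted
        self._timestamps.append(audio_time)
        self._strokes.append(stroke_id)

    def audio_position(self, stroke_id):
        """Return the audio playback position for a given stroke."""
        return self._timestamps[self._strokes.index(stroke_id)]

    def stroke_at(self, audio_time):
        """Return the stroke written most recently before audio_time."""
        i = bisect.bisect_right(self._timestamps, audio_time) - 1
        return self._strokes[i] if i >= 0 else None
```

The same principle generalises to other media: any event stream recorded alongside the pen input can be indexed by the shared capture timeline.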
The next class of applications does not focus on notetaking but enables active reading and annotations. PaperProof supports the proofreading and pen-based correction of paper documents (Weibel, 2009). These pen-based corrections are automatically translated into the corresponding digital document edits, and the concurrent editing of the digital and physical document versions by multiple users is supported. Paper Augmented Digital Documents (PADD), and PapierCraft in particular, offer new interface and interaction elements as well as different forms of pen-based feedback to manipulate and edit documents either digitally or based on digital pen and paper technology (Liao, 2009). The PLink prototype enables the creation of links from paper to digital resources (e.g. websites) via pen interactions, whereas the CoScribe framework supports pen and paper-based cross-media knowledge work across documents (Steimle, 2012). ActiveInk allows for natural pen use in active reading behaviour while supporting analytic actions by activating any of these ink strokes, thereby enabling users to seamlessly transition between data exploration and the externalisation of their thoughts by using pen and touch (Romat et al, 2019). The texSketch prototype combines pen-and-device interactions with natural language processing to reduce the costs of active diagramming for knowledge externalisation while reading a text and maintaining the cognitive effort necessary for comprehension (Subramonyam et al, 2020). As mentioned earlier, active reading is further supported by learning platforms for children, including tiptoi or the LeapReader reading and writing system. Pen-based solutions are also used for various interactive whiteboard and tabletop solutions, such as Livenotes, where cooperative notetaking as well as the sharing of notes and the annotation of slides in classroom settings are supported via a shared whiteboard in combination with pen-based input on tablet computers (Kam et al, 2005).
The sharing of information on a large whiteboard and the annotation of slides on individual tablet computers has also been studied to enhance several Computer Science courses (Berque et al, 2004). Further, pen and touch interactions for whiteboards have been investigated to offer more natural forms of information visualisation by Walny et al (2012). In their Tivoli application, Moran et al (1997) proposed pen-based interaction techniques enabling groups of users to easily organise and rearrange material on the LiveBoard whiteboard in informal meetings. The collaboration on pen-enabled whiteboard solutions, such as in brainstorming sessions or collaborative sketching, can be enhanced by supporting multiple pens (possibly with different colours). A collaborative tabletop environment has been realised in the Shared Design Space project, where users can perform pen-based annotations on digital documents and paper printouts (Haller et al, 2006). Thereby, virtual and paper-based drawings are overlaid with digital information in a single information space to support the design process. A similar form of mixed reality workspace is offered in HoloDoc by augmenting physical documents with digital information based on interaction with a Neo smartpen and a Microsoft HoloLens for visualisation in the resulting augmented reality environment (Li et al, 2019). Pen-and-paper-based solutions can not only be used for notetaking and the paper-based capturing of information, but also represent user interfaces for digital applications and services. For instance, PaperPoint is a digital pen and paper-based user interface for controlling and annotating PowerPoint presentations based on printed handouts (Signer and Norrie, 2007), as shown earlier in Fig. 2(d).
Various other interactive paper solutions, including the interactive and multimodal EdFest festival guide (Norrie et al, 2007) for the Edinburgh Fringe Festival, the Lost Cosmonaut storytelling application (Vogelsang and Signer, 2005) or the Generosa Enterprise art installation, have been presented by Signer (2005).

3 Challenges and Future Directions

We have presented different technologies for pen-and-device (PaD) as well as pen-and-paper (PaP) solutions together with some of their advantages and limitations. We have further outlined various interface and interaction styles and discussed different pen-based applications and application domains. While pen-based solutions have been developed for the last six decades, it is only over the last decade that they have found broader acceptance in the consumer market due to major advances in computing, display and pen tracking performance. In the following, we present some challenges and future directions for pen-based solutions, partly based on Signer and Norrie (2010).

Device Independence The interaction with pen-based applications should be decoupled from the underlying device-specific details whenever possible. This decoupling enables the easy migration of applications when new pen tracking technologies become available, and the same PaD or PaP technology can be used across many different applications. It might further help to address the problem that users have to carry multiple pens and remember which pen works with which application or device, a problem that is currently encountered with digital pen and paper solutions where individual pen brands are limited to working with specific applications only. Ideally, a single pen should work across different applications and might even be used for cross-device interactions.
Cross-Platform Consistency Consistent pen behaviour across different platforms such as tablets, laptops and interactive whiteboards can improve the user experience due to familiar interactions. The standardisation of pen input APIs, gesture recognition and ink rendering across platforms might help to address the issue of cross-platform consistency.

Digital Pen Design The future adoption of pen-and-device as well as pen-and-paper solutions also depends on the pen design, including aesthetics and ergonomics, the pen's weight and weight balance, its battery life, grip comfort and the tactile writing sensation when used on paper or device surfaces.

Digital Ink Abstraction and Processing While there exist standards for digital ink representation, such as the Ink Markup Language (InkML) (Chee et al, 2011), many existing pen-and-device as well as pen-and-paper solutions still rely on proprietary formats for captured pen input data. Open and standardised data formats can not only help to exchange information across applications but also support the integration of captured pen data with other types of media to enable general cross-media workflows. Recent advancements in deep neural networks are further expected to have an impact on digital ink processing, including gesture and handwriting recognition, the reduction of latency in stroke rendering and new forms of palm rejection.

Interaction Design Pen-based interaction differs significantly from traditional graphical user interfaces. For instance, most pen-and-paper interfaces have the previously mentioned limitations for visual feedback and lack a transactional operation concept as manifested in GUIs in the form of modal dialogues. Similar to gesture-based interfaces, there is the risk that each pen-based application defines its own interface and interaction styles, making it challenging for users to work with different pen-based applications.
Some standardisation of these interfaces and interaction styles for pen-and-device as well as pen-and-paper solutions could enhance the adoption of pen-based applications and devices.

Collaborative Pen-based Workflows The support of collaborative work where pens are used across multiple devices and surfaces is challenging. Future research might further investigate the real-time synchronisation and seamless sharing of handwritten content in collaborative environments where multiple users can co-create, annotate and edit content based on PaD and PaP technologies.

Given the recent developments in new display and pen tracking technologies for pen-and-device as well as pen-and-paper solutions, in combination with emerging solutions for augmented reality environments such as the Microsoft HoloLens 2, there are new opportunities for pen-based document interaction as well as the digital augmentation of pen-based workflows, as demonstrated in the HoloDoc prototype (Li et al, 2019). These augmented reality-based solutions might enable the realisation of future cross-media information spaces (Signer, 2019) where users can seamlessly move between digital and physical information and have the flexibility to easily switch between input devices (e.g. keyboard, pen or touch) or combine different modalities depending on whatever best fits their current task and given context, paving the way for more seamless and versatile pen-based interaction experiences.

References

Aliakseyeu D, Nacenta MA, Subramanian S, Gutwin C (2006) Bubble Radar: Efficient Pen-based Interaction. In: Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2006), Venezia, Italy, pp 19–26, DOI 10.1145/1133265.1133271
Annett MK (2014) The Fundamental Issues of Pen-based Interaction with Tablet Devices. PhD thesis, University of Alberta
Berque D, Bonebright T, Whitesell M (2004) Using Pen-based Computers Across the Computer Science Curriculum.
In: Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (SIGCSE 2004), Norfolk, USA, pp 61–65, DOI 10.1145/1028174.971324
Bi X, Moscovich T, Ramos G, Balakrishnan R, Hinckley K (2008) An Exploration of Pen Rolling for Pen-based Interaction. In: Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology (UIST 2008), pp 191–200, DOI 10.1145/1449715.1449745
Bush V (1945) As We May Think. Atlantic Monthly 176(1):101–108, DOI 10.1145/227181.227186
Buxton W (1990) A Three-State Model of Graphical Input. In: Proceedings of the 3rd International Conference on Human-Computer Interaction (INTERACT 1990), Cambridge, UK, pp 449–456
Carr R, Shafer D (1991) The Power of PenPoint. Addison Wesley
Chee YM, et al (2011) Ink Markup Language (InkML). W3C Recommendation, URL https://www.w3.org/TR/InkML
Cohen PR, Oviatt S (2017) Multimodal Speech and Pen Interfaces, Association for Computing Machinery and Morgan & Claypool, pp 403–447, DOI 10.1145/3015783.3015795
Davis MR, Ellis TO (1964) The RAND Tablet: A Man-Machine Graphical Communication Device. In: Proceedings of the Fall Joint Computer Conference (AFIPS 1964), San Francisco, USA, pp 325–331, DOI 10.1145/1464052.1464080
Dimond TL (1957) Devices for Reading Handwritten Characters. In: Proceedings of the Eastern Joint Computer Conference: Computers with Deadlines to Meet (IRE-ACM-AIEE 1957), Washington D.C., USA, pp 232–237, DOI 10.1145/1457720.1457765
Drey T, Gugenheimer J, Karlbauer J, Milo M, Rukzio E (2020) VRSketchIn: Exploring the Design Space of Pen and Tablet Interaction for 3D Sketching in Virtual Reality. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2020), DOI 10.1145/3313831.3376628
Everitt KM, Klemmer SR, Lee R, Landay JA (2003) Two Worlds Apart: Bridging the Gap Between Physical and Virtual Media for Distributed Design Collaboration.
In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2003), Fort Lauderdale, USA, pp 553–560, DOI 10.1145/642611.642707
Fellion N, Pietrzak T, Girouard A (2017) FlexStylus: Leveraging Bend Input for Pen Interaction. In: Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2017), Quebec City, Canada, pp 375–385, DOI 10.1145/3126594.3126597
Forster B (2001) Writing to the Future. Computerworld
Goldberg HE (1914) Controller. US Patent 1,117,184
Gray E (1888) Telautograph. US Patent 386,815
Grossman T, Hinckley K, Baudisch P, Agrawala M, Balakrishnan R (2006) Hover Widgets: Using the Tracking State to Extend the Capabilities of Pen-operated Devices. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2006), Montréal, Canada, pp 861–870, DOI 10.1145/1124772.1124898
Hahne U, Schild J, Elstner S, Alexa M (2009) Multi-touch Focus+Context Sketch-based Interaction. In: Proceedings of the 6th Eurographics Symposium on Sketch-based Interfaces and Modeling (SBIM 2009), New Orleans, USA, pp 77–83, DOI 10.1145/1572741.1572755
Haller M, Brandl P, Leithinger D, Leitner J, Seifried T, Billinghurst M (2006) Shared Design Space: Sketching Ideas Using Digital Pens and a Large Augmented Tabletop Setup. In: Proceedings of the 16th International Conference on Artificial Reality and Telexistence (ICAT 2006), Hangzhou, China, pp 185–196, DOI 10.1007/11941354_20
Hammond T, Davis R (2005) LADDER, a Sketching Language for User Interface Developers. Computers and Graphics 29(4):518–532, DOI 10.1145/1281500.1281546
Heinrichs FHF (2015) Mobile Pen-and-Paper Interaction: Infrastructure Design, Conceptual Frameworks of Interaction and Interaction Theory. PhD thesis, Technical University of Darmstadt
Hinckley K, Zhao S, Sarin R, Baudisch P, Cutrell E, Shilman M, Tan D (2007) InkSeine: In Situ Search for Active Note Taking.
In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2007), San Jose, USA, pp 251–260, DOI 10.1145/1240624.1240666
Hinckley K, Pahud M, Benko H, Irani P, Guimbretière F, Gavriliu M, Chen XA, Matulic F, Buxton W, Wilson A (2014) Sensing Techniques for Tablet+Stylus Interaction. In: Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, USA, pp 605–614, DOI 10.1145/2642918.2647379
Hofer R, Kunz A (2010) Digisketch: Taming Anoto Technology on LCDs. In: Proceedings of the ACM Symposium on Engineering Interactive Computing Systems (EICS 2010), Berlin, Germany, pp 103–108, DOI 10.1145/1822018.1822034
Hwang S, Bianchi A, Ahn M, Wohn K (2013) MagPen: Magnetically Driven Pen Interactions on and Around Conventional Smartphones. In: Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), Munich, Germany, pp 412–415, DOI 10.1145/2493190.2493194
Ispas A (2011) Beyond the Digital Capture of Paper Notes: Investigations of Enhanced Natural Notetaking based on Digital Pen and Paper Technology. PhD thesis, ETH Zurich, DOI 10.3929/ethz-a-007139493, Dissertation ETH No. 20157
Kam M, Wang J, Iles A, Tse E, Chiu J, Glaser D, Tarshish O, Canny J (2005) Livenotes: A System for Cooperative and Augmented Note-taking in Lectures. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2005), Portland, USA, pp 531–540, DOI 10.1145/1054972.1055046
Kay A, Goldberg A (1977) Personal Dynamic Media. Computer 10(3):31–41
Klamka K, Dachselt R (2017) IllumiPaper: Illuminated Interactive Paper. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), Denver, USA, pp 5605–5618, DOI 10.1145/3025453.3025525
Koike H, Sato Y, Kobayashi Y (2001) Integrating Paper and Digital Information on EnhancedDesk: A Method for Realtime Finger Tracking on an Augmented Desk System.
ACM Transactions on Computer-Human Interaction 8(4):307–322, DOI 10.1145/504704.504706
Kurtenbach G, Buxton W (1994) User Learning and Performance with Marking Menus. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 1994), Boston, USA, pp 258–264, DOI 10.1145/191666.191759
Levine SR, Ehrlich SF (1991) The Freestyle System, Springer, Boston, USA, pp 3–21, DOI 10.1007/978-1-4684-5883-1_1
Li Z, Annett M, Hinckley K, Singh K, Wigdor D (2019) HoloDoc: Enabling Mixed Reality Workspaces That Harness Physical and Digital Content. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2019), Glasgow, Scotland UK, pp 1–14, DOI 10.1145/3290605.3300917
Liao C (2009) PapierCraft: A Paper-based Interface to Support Interaction With Digital Documents. PhD thesis, University of Maryland
Luff P, Adams G, Bock W, Drazin A, Frohlich D, Heath C, Herdman P, King H, Linketscher N, Murphy R, Norrie MC, Sellen A, Signer B, Tallyn E, Zeller E (2007) Augmented Paper: Developing Relationships Between Digital Content and Paper. In: The Disappearing Computer: Interaction Design, System Infrastructures and Applications for Smart Environments, LNCS 4500, pp 275–297, DOI 10.1007/978-3-540-72727-9
Mackay WE, Pothier G, Letondal C, Bøegh K, Sørensen HE (2002) The Missing Link: Augmenting Biology Laboratory Notebooks. In: Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (UIST 2002), Paris, France, pp 41–50, DOI 10.1145/571985.571992
Magrofuoco N, Roselli P, Vanderdonckt J (2021) Two-dimensional Stroke Gesture Recognition: A Survey. ACM Computing Surveys 54(7), DOI 10.1145/3465400
Maierhöfer V, Schmid A, Wimmer R (2024) TipTrack: Precise, Low-Latency, Robust Optical Pen Tracking on Arbitrary Surfaces Using an IR-Emitting Pen Tip.
In: Proceedings of the 18th International Conference on Tangible, Embedded, and Embodied Interaction (TEI 2024), Cork, Ireland, DOI 10.1145/3623509.3633366
Matulic F, Norrie MC (2013) Pen and Touch Gestural Environment for Document Editing on Interactive Tabletops. In: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS 2013), St. Andrews, UK, pp 41–50, DOI 10.1145/2512349.2512802
Matulic F, Arakawa R, Vogel B, Vogel D (2020) PenSight: Enhanced Interaction with a Pen-Top Camera. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2020), pp 1–14, DOI 10.1145/3313831.3376147
Microsoft (2021) Application Gestures and Semantic Behavior. URL https://learn.microsoft.com/en-us/windows/win32/tablet/application-gestures-and-semantic-behavior
Moran TP, Chiu P, van Melle W (1997) Pen-based Interaction Techniques for Organizing Material on an Electronic Whiteboard. In: Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (UIST 1997), Banff, Canada, pp 45–54, DOI 10.1145/263407.263508
Mueller PA, Oppenheimer DM (2014) The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science 25(6):1159–1168, DOI 10.1177/0956797614524581
Norrie MC, Signer B, Grossniklaus M, Belotti R, Decurtins C, Weibel N (2007) Context-aware Platform for Mobile Data Management. Wireless Networks (WINET) 13(6):855–870, DOI 10.1007/s11276-006-9858-y
Pfeuffer K, Hinckley K, Pahud M, Buxton B (2017) Thumb + Pen Interaction on Tablets. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), Denver, USA, pp 3254–3266, DOI 10.1145/3025453.3025567
Pham DM, Stuerzlinger W (2019) Is the Pen Mightier than the Controller? A Comparison of Input Devices for Selection in Virtual and Augmented Reality.
In: Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), Parramatta, Australia, pp 1–11, DOI 10.1145/3359996.3364264
Ramos GA, Balakrishnan R (2007) Pressure Marks. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2007), San Jose, USA, pp 1375–1384, DOI 10.1145/1240624.1240834
Rekimoto J (1998) A Multiple Device Approach for Supporting Whiteboard-based Interactions. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 1998), Los Angeles, USA, pp 344–351, DOI 10.1145/274644.274692
Riche Y, Henry Riche N, Hinckley K, Panabaker S, Fuelling S, Williams S (2017) As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), Denver, USA, pp 3241–3253, DOI 10.1145/3025453.3025716
Romat H, Riche NH, Hinckley K, Lee B, Appert C, Pietriga E, Collins C (2019) ActiveInk: (Th)Inking with Data. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2019), Glasgow, UK, pp 1–13, DOI 10.1145/3290605.3300272
Schilit BN, Golovchinsky G, Price MN (1998) Beyond Paper: Supporting Active Reading with Free Form Digital Ink Annotations. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 1998), Los Angeles, USA, pp 249–256, DOI 10.1145/274644.274680
Sellen AJ, Harper R (2001) The Myth of the Paperless Office. MIT Press
Signer B (2005) Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces. PhD thesis, ETH Zurich, DOI 10.3929/ethz-a-005174378, dissertation ETH No. 16218
Signer B (2019) Towards Cross-Media Information Spaces and Architectures. In: Proceedings of the 13th International Conference on Research Challenges in Information Science (RCIS 2019), Brussels, Belgium, pp 1–7, DOI 10.1109/RCIS.2019.8877105
Signer B, Norrie MC (2007) PaperPoint: A Paper-based Presentation and Interactive Paper Prototyping Tool. In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction (TEI 2007), Baton Rouge, USA, pp 57–64, DOI 10.1145/1226969.1226981
Signer B, Norrie MC (2010) Interactive Paper: Past, Present and Future. In: Proceedings of the 1st International Workshop on Paper Computing (PaperComp 2010), Copenhagen, Denmark
Signer B, Kurmann U, Norrie MC (2007) iGesture: A General Gesture Recognition Framework. In: Proceedings of the 9th International Conference on Document Analysis and Recognition (ICDAR 2007), Curitiba, Brazil, pp 954–958, DOI 10.1109/ICDAR.2007.4377056
Song H, Grossman T, Fitzmaurice G, Guimbretière F, Khan A, Attar R, Kurtenbach G (2009) PenLight: Combining a Mobile Projector and a Digital Pen for Dynamic Visual Overlay. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2009), Boston, USA, pp 143–152, DOI 10.1145/1518701.1518726
Steimle J (2012) Pen-and-Paper User Interfaces: Integrating Printed and Digital Documents. Springer
Stifelman LJ, Arons B, Schmandt C (2001) The Audio Notebook: Paper and Pen Interaction with Structured Speech. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2001), Seattle, USA, pp 182–189, DOI 10.1145/365024.365096
Subramonyam H, Seifert C, Shah P, Adar E (2020) texSketch: Active Diagramming Through Pen-and-Ink Annotations. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2020), Honolulu, USA, pp 1–13, DOI 10.1145/3313831.3376155
Sutherland IE (1963) Sketchpad: A Man-Machine Graphical Communication System. PhD thesis, Massachusetts Institute of Technology
Tabard A (2009) Supporting Lightweight Reflection on Familiar Information. PhD thesis, Université Paris-Sud
Vogelsang A, Signer B (2005) The Lost Cosmonaut: An Interactive Narrative Environment on Basis of Digitally Enhanced Paper. In: Proceedings of the International Conference on Virtual Storytelling 2005, Strasbourg, France, pp 270–279, DOI 10.1007/11590361_31
Walny J, Lee B, Johns P, Riche N, Carpendale S (2012) Understanding Pen and Touch Interaction for Data Exploration on Interactive Whiteboards. IEEE Transactions on Visualization and Computer Graphics 18(12):2779–2788, DOI 10.1145/2642918.2647379
Van der Weel F, Van der Meer A (2024) Handwriting But Not Typewriting Leads to Widespread Brain Connectivity: A High-Density EEG Study With Implications for the Classroom. Frontiers in Psychology 14, DOI 10.3389/fpsyg.2023.1219945
Weibel N (2009) A Publishing Infrastructure for Interactive Paper Documents: Supporting Interactions Across the Paper-Digital Divide. PhD thesis, ETH Zurich, DOI 10.3929/ethz-a-005886877, dissertation ETH No. 18514
Weibel N, Norrie MC, Signer B (2007) A Model for Mapping Between Printed and Digital Document Instances. In: Proceedings of the ACM Symposium on Document Engineering (DocEng 2007), Winnipeg, Canada, pp 19–28, DOI 10.1145/1284420.1284428
Weibel N, Signer B, Norrie MC, Hofstetter H, Jetter HC, Reiterer H (2011) PaperSketch: A Paper-Digital Collaborative Remote Sketching Tool. In: Proceedings of the International Conference on Intelligent User Interfaces (IUI 2011), Palo Alto, USA, pp 155–164, DOI 10.1145/1943403.1943428
Weiser M (1991) The Computer for the 21st Century. Scientific American 265(3)
Wellner P (1993) Interacting with Paper on the DigitalDesk. Communications of the ACM 36(7):87–96, DOI 10.1145/159544.159630
Xin Y, Bi X, Ren X (2011) Acquiring and Pointing: An Empirical Study of Pen-Tilt-based Interaction. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2011), Vancouver, Canada, pp 849–858, DOI 10.1145/1978942.1979066
Yeh RB (2007) Designing Interactions That Combine Pen, Paper, and Computer. PhD thesis, Stanford University