Computer Science, Industrial Engineering | Issue III, Volume XXV

Modern Note-Taking: It’s All Virtually Handwritten!

About the Author: Ellen Landrum

Ellen is a third-year Computer Science and Mathematics student. She works in aerospace software, but particularly enjoys studying computing theory and algorithm design.

The growing prevalence of digital writing in personal, academic, and professional settings is due to recent advancements in stylus-tablet technology and handwriting recognition software. Drawing on research spanning the user documentation of industry tools and methods for improving digital writing and dynamic digital graphics, this article presents the current state of virtual note-taking with descriptive examples. Telemetry-driven systems for digitizing touch in stylus-tablet pairings are examined through a historical engineering lens. Beyond raw data, the article surveys modern note-taking applications and the object-based representations of strokes, which enable real-time repositioning, live handwriting recognition, spellcheck, and mathematical inference, while also addressing common functionality lapses and performance trade-offs. Finally, emerging directions in collaborative annotation, adaptive gestures, and semantic augmentation are considered for their potential to reshape educational and professional practices. This article provides a comprehensive roadmap for anyone seeking to better understand and harness the full potential of digital handwriting.

Modern Note-Taking: It’s All Virtually Handwritten! 

Like many undergrads, I had spent my entire academic career lugging around notebooks, writing utensils, textbooks, folders, and loose paper in a backpack often described as ‘ridiculously heavy’. After 15 years of sore shoulders, forgetting problem sets and textbooks, and awkwardly splitting mixed-media notes between paper and my laptop, I finally broke down and bought a tablet and accompanying pencil stylus this past January, joining millions who have incorporated tablets into their education or profession by personal choice, peer pressure, or institutional initiatives. My tablet has increasingly overtaken the analog methods of note-taking and annotation in my repertoire, largely due to its versatility. I used to preserve physical notes for annotation, drawing, and math, and delegated visual- and programming-heavy topics to my laptop. Now that I handwrite digitally, I can easily represent information for which typing is ill-suited while remaining ‘online’, with textbooks, assignments, and lecture presentations available at the touch of a finger. 

This shift in my preference raises questions about the digitization of informational media and how these media cater to user experience. It has also introduced me to the opportunities and shortcomings of digital image storage, note-taking apps, and physical devices that hint at what’s happening under the hood. So how does swiping a stick across a glass plate display your strokes nearly as seamlessly as writing on paper, and how can software make the experience more flexible and useful? This article offers an in-depth look at the engineering behind our present digital writing revolution.

Hardware Making it Happen: A Background on Touchscreens and Styluses 

Like paper and pencil in a long line of steadily improved writing media, their digital counterparts stand on the shoulders of fairly dated pre-existing technologies. E.A. Johnson patented the idea of the modern touchscreen in 1965, but the invention wasn’t used widely until the release of the original 2007 iPhone [1]. Styluses were also present in some of the earliest tablet designs, like the 1963 ‘Graphics tablet’ developed by the Rand Corporation, but took a backseat during the explosion of handheld devices [2]. The hardware currently used in popular tablets and laptops has changed in only a few key ways that allow it to compete with traditional writing media. Today, personal devices like tablets and phones use a specific type of touchscreen named after the capacitor, an electric component that these devices employ. ‘Capacitive’ touchscreens work by forming a tiny electric field between a charged metal surface painted under the protective glass layer of the screen and your point of contact [3]. When your electrically conductive finger or stylus touches the screen, you close the circuit of the capacitor, and the difference in charge between you and the plate indicates where you’ve tapped [3]. Passive styluses, like the ones you use with a Nintendo DS or while giving a signature on a store checkout screen, don’t use any fancy technology and need only a conductive tip, typically made of rubber, plastic, or foam, to perform their core function of disrupting the screen’s electrical field [4]. However, to enable a level of accuracy and responsiveness rivaling pen and paper, touchscreen software had to be developed further alongside the use of dynamic styluses.
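To make the idea of charge-based location sensing concrete, the sketch below estimates a contact point from a grid of capacitance changes using a weighted centroid. This is a simplified illustration under assumed sensor values, not any manufacturer’s actual firmware; the function name and grid are hypothetical.

```python
# Minimal sketch: estimating a touch location from a grid of capacitance
# readings via a weighted centroid. Real touch controllers use far more
# sophisticated filtering; the grid values here are made up.

def touch_centroid(delta_grid):
    """delta_grid[row][col] holds the drop in capacitance at each sensor node."""
    total = x_acc = y_acc = 0.0
    for y, row in enumerate(delta_grid):
        for x, delta in enumerate(row):
            total += delta
            x_acc += x * delta
            y_acc += y * delta
    if total == 0:
        return None  # no contact detected
    return (x_acc / total, y_acc / total)

# A finger pressing near the middle-right of a 4x4 sensor patch:
readings = [
    [0, 0, 0, 0],
    [0, 1, 3, 1],
    [0, 2, 9, 4],
    [0, 1, 3, 1],
]
print(touch_centroid(readings))  # approx (2.08, 2.0) in sensor-grid units
```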

Figure 1: A product image for a classic capacitive stylus sold by STYLUSLINK. This stylus works by emulating the touch of a finger on a screen. [5]

Giving Styluses a Voice through Telemetry 

Samsung’s 2011 S Pen was one of the first widely successful software-incorporating “active” styluses, marking a shift towards digital writing as a comparable, and sometimes even better, alternative to analog methods [6]. The added complexity forced large touchscreen devices and styluses into proprietary couplings, requiring makers of off-brand styluses to reverse-engineer compatibility. For example, active pens hoping to work on Apple tablets comply with the PencilKit API, a software library that makes the pencil’s position and stroke information interpretable to the tablet being written on, as detailed further in the following section [7]. The growing specificity of interaction between screens and active styluses motivates an investigation into how these elements talk to one another.

Figure 2: The Microsoft Surface Pen and Surface Go, which are sold together for their active digitizing partnership. [8]

Active styluses are nothing without active digitizers. The digitizer is any touch-sensitive screen that “digitizes” touch input by recording the location and characteristics of screen contact [9]. An active digitizer is set apart by its ability to receive signals from two of the stylus’s components, an inertial measurement unit (IMU) and a Bluetooth chip; these electrical components explain active styluses’ higher prices and need for charging [9]. An IMU is used in many navigation devices to detect the location and orientation of electronic tools ranging from drones to Nintendo Wii game controllers. The IMU has two essential sensors: an accelerometer, which detects acceleration along three axes, and a gyroscope, which tracks rotational speed [10]. By coupling the data from these two inputs with respect to the time it’s received, the IMU can calculate the position, tilt, and force applied to an object with extremely high precision, enough to detect a single pixel being selected [10], [11].
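As a rough illustration of how time-stamped acceleration data becomes position data, the sketch below performs simple dead reckoning: integrating acceleration into velocity, and velocity into position. Real IMUs fuse accelerometer and gyroscope readings and correct for drift; the sampling rate and acceleration values here are assumptions for illustration only.

```python
# Minimal sketch of dead reckoning from accelerometer samples: integrate
# acceleration once for velocity and again for position. Real stylus IMUs
# fuse this with gyroscope and touch data to correct drift.

def integrate_motion(samples, dt):
    """samples: list of (ax, ay) accelerations in screen units/s^2,
    spaced dt seconds apart. Returns the traced positions."""
    vx = vy = 0.0
    x = y = 0.0
    path = [(x, y)]
    for ax, ay in samples:
        vx += ax * dt          # velocity from acceleration
        vy += ay * dt
        x += vx * dt           # position from velocity
        y += vy * dt
        path.append((x, y))
    return path

# A brief rightward push followed by braking, sampled at an assumed 240 Hz:
dt = 1 / 240
accel = [(200.0, 0.0)] * 120 + [(-200.0, 0.0)] * 120
print(integrate_motion(accel, dt)[-1])  # pen ends up displaced along x, back at rest
```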

The Bluetooth chip in the pen enables telemetry, a deceptively magical term for the constant barrage of data the sensors in a stylus transmit to their receiving digitizer [11]. For example, Apple’s iPadOS represents each touch with 64-bit values for its location, force, angle to the screen, and timestamp [27]. Its digitizer reports a 120 Hz sampling rate: it scans for updates on your stylus 120 times per second, also a common frame rate for gaming and video production [27]. It picks up the few hundred bits just mentioned on every scan, meaning that in a single second of you drawing a straight line, the digitizer receives tens of thousands of bits. With so much information to store, it’s unsurprising that latency, the time between making a mark and seeing it on screen, is a major obstacle to digital writing. You’ve probably experienced latency drawing on all kinds of touchscreens, though it’s most noticeable in “buggy” devices where something written with a pen or finger pad shows up in stilted segments as you trace along the screen. Increasing information throughput and speed is the primary method of improving performance, demonstrated by active digitizers like the Apple iPad, which boast “doubled” touchscreen scanning rates for their styluses, essentially using old capacitive detection but faster [12].
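The back-of-the-envelope calculation below turns those figures into a data rate. The field list and the assumption of six 64-bit values per sample are illustrative; actual stylus packet formats are proprietary and typically more compact.

```python
# Back-of-the-envelope data-rate estimate for stylus telemetry, using the
# figures cited above (64-bit values, 120 Hz digitizer scans).

BITS_PER_FIELD = 64
FIELDS = ["x", "y", "force", "altitude_angle", "azimuth_angle", "timestamp"]
SCAN_RATE_HZ = 120

bits_per_sample = BITS_PER_FIELD * len(FIELDS)      # 384 bits per scan
bits_per_second = bits_per_sample * SCAN_RATE_HZ    # 46,080 bits per second

print(f"{bits_per_sample} bits per sample, "
      f"{bits_per_second:,} bits (~{bits_per_second // 8 // 1024} KiB) per second")
```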

So what does the digitizer do with this enormous stream of data it receives when you drag a rubber pencil tip in a small counterclockwise loop and append a short vertical tail to its right side, for example? Programs like the Apple PencilKit API reconstruct these time-stamped data points into smooth curve segments, stitching them together into your cohesive letter “a” by the time you lift the pen tip (PKStrokePoint objects in the API). Pressure sensitivity allows you to push harder on your downstrokes, leaving the underside of the “a” thin for a calligraphic effect and simulating natural variations in line weight and opacity [12]. Tilt detection is necessary for shading and directional strokes, allowing active styluses to emulate real-world implements, ranging from paintbrushes to fountain pens, in realizing this simple letter [13]. Palm rejection—made possible by distinguishing stylus signals from capacitive disruption of a hand or wrist—provides the comfort of resting and lifting your hand while writing without muddling the marks you produce [14]. Coupled with haptic feedback (experienced by the user as clicks or vibration simulating friction), these features can make digital writing, sketching, and annotating comparable to the expressiveness, ergonomics, and responsiveness of pen-on-paper input [15]. 
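The sketch below is a heavily simplified stand-in for that reconstruction step: it smooths raw samples with a small moving average and maps pen force to line width. It is not PencilKit’s actual curve-fitting algorithm, and the sample values are invented.

```python
# Simplified sketch of stroke reconstruction: smooth the raw samples with a
# small moving average and map pen force to line width. Real curve fitting
# (e.g., Bezier paths) is more elaborate; this only illustrates the idea.

from dataclasses import dataclass

@dataclass
class Sample:
    x: float
    y: float
    force: float      # 0.0 (feather touch) to 1.0 (hard press)
    t: float          # seconds since the stroke began

def reconstruct(samples, window=3, base_width=1.0, max_extra=3.0):
    """Return (x, y, width) triples describing the rendered stroke."""
    points = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        chunk = samples[lo:hi]
        x = sum(s.x for s in chunk) / len(chunk)   # jitter smoothing
        y = sum(s.y for s in chunk) / len(chunk)
        width = base_width + samples[i].force * max_extra
        points.append((x, y, width))
    return points

# A few samples from a downstroke, pressed harder toward the end:
raw = [Sample(10.0, 40, 0.2, 0.000), Sample(10.4, 34, 0.4, 0.008),
       Sample(9.8, 28, 0.7, 0.017), Sample(10.1, 22, 0.9, 0.025)]
for pt in reconstruct(raw):
    print(pt)
```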

Note-taking Apps 

During COVID, simple tasks like pointing out a visual detail, sketching an idea, writing a formula, or sharing any other tactile information became incredibly difficult. Virtual meeting applications like Zoom, WebEx, and Google Meet were poorly suited to the demands of live handwritten collaboration. Many may recall jerky displays of on-screen drawings or text, the inability to reshape a box or move words across the screen, and the tedious process of erasing. These examples represent just a few of the digital annotation challenges that note-taking apps have solved. 

The first inklings of this functionality arose with dynamic graphic editing, pioneered in the 80s by original Macintosh applications like MacPaint, which used only a desktop mouse for input [16]. These early applications were hard on the hand and forefinger and did little to simulate real-world drawing, shortcomings exacerbated by high latency and unpredictable performance. Following the widespread availability of touchscreens, digital art apps like Procreate (at first still lacking an active stylus) inherited from these programs decades later, and in the modern day, note-taking apps are cropping up to cherry-pick functionality from their predecessors. Microsoft OneNote, Apple Notes, Notability, GoodNotes, and dozens of other personal document apps leverage active styluses and complex software pipelines to optimize the digital note-taking experience for users. At their core, these apps use the telemetry described above to receive every pen stroke as a vector object—an ordered data structure that encodes position, pressure, speed, and timestamp—rather than as static pixels [17]. To begin exploring this field, the discussion will focus on three features that are technically interesting and reveal insightful bugs: real-time text manipulation, handwriting recognition, and AI-assisted input corrections and suggestions [18]. 

Object Manipulation: “Circle to Lasso, Scribble to Erase” 

Legacy drawing apps use bitmap images, which store an image by labeling the color code and tone of every pixel [16]. This static storage method fails when a user wants to change the existing image: any resizing, moving, or even erasing of elements (core features of modern digital art) requires transformation functions that are too computationally heavy to run smoothly. As a solution to this inflexibility, vector structures representing strokes as objects were developed, enabling strokes to be organized by various metrics. 
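The contrast can be made concrete with a small sketch: moving or resizing a vector stroke only rewrites its coordinates, whereas the bitmap equivalent would mean repainting every affected pixel. The data structures and values below are illustrative, not taken from any particular app.

```python
# Sketch of why vector strokes are cheap to edit: moving or resizing a stroke
# only rewrites its coordinates, while a bitmap would need every affected
# pixel repainted.

def translate(stroke, dx, dy):
    """stroke is a list of (x, y) points; returns a moved copy."""
    return [(x + dx, y + dy) for x, y in stroke]

def scale(stroke, factor, origin=(0, 0)):
    """Resize a stroke about an origin point."""
    ox, oy = origin
    return [(ox + (x - ox) * factor, oy + (y - oy) * factor) for x, y in stroke]

letter_a = [(0, 0), (2, 6), (4, 0), (1, 2), (3, 2)]   # a crude five-point "A"
print(translate(letter_a, 100, 50))                    # drag it across the page
print(scale(letter_a, 2.0))                            # make it twice as large
```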

Figure 3: A comparison of static and dynamic graphic storage between pixels and vectors. [19]

While storing every discrete pen stroke as its own object requires greater storage and rendering time, it allows users to rapidly select, move, duplicate, and resize elements [19]. The ‘gestures’ that serve as shortcuts for this functionality, like selection by enclosing a section in a circle, or erasing by scribbling over a block of writing, rely on algorithms comparing your input strokes to one another and inferring when irregularity indicates a gesture [20]. When you trace a squiggly blob around the notes section you just decided to move to the other side of the page, the app intelligently decides that the scale and shape of your ‘writing’ (jagged, cutting around the paragraph instead of outlining it in a clear circle or box) signal selection, not emphasis. Again, this decision-making relies on the grouping schemas of stroke objects, which vary between apps. However, when strokes cross page boundaries or extend off the visible canvas, the vector grouping can break down. This is often why a dragged paragraph might leave behind stray letters or refuse to cross pages entirely. The bugs that arise from object manipulation expose the internal logic of these systems; for instance, they can show whether an app groups strokes by bounding box (the general rectangle around the shape), by time proximity, or by a fixed block of input time [17]. 
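A toy version of that inference is sketched below: a stroke that closes back on itself is treated as a lasso, and strokes whose bounding boxes fall inside it are selected. Real apps use far richer recognizers and grouping schemas; the tolerance value and helper functions here are assumptions for illustration.

```python
# Sketch of a lasso-selection heuristic: a stroke whose end returns near its
# start is treated as a selection gesture, and any stroke whose bounding box
# falls inside the lasso's bounding box is selected.

import math

def is_closed_loop(stroke, tolerance=15.0):
    (x0, y0), (x1, y1) = stroke[0], stroke[-1]
    return math.hypot(x1 - x0, y1 - y0) < tolerance

def bounding_box(stroke):
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    return min(xs), min(ys), max(xs), max(ys)

def select_with_lasso(lasso, strokes):
    if not is_closed_loop(lasso):
        return []                       # looks like writing, not a gesture
    lx0, ly0, lx1, ly1 = bounding_box(lasso)
    selected = []
    for stroke in strokes:
        x0, y0, x1, y1 = bounding_box(stroke)
        if lx0 <= x0 and ly0 <= y0 and x1 <= lx1 and y1 <= ly1:
            selected.append(stroke)
    return selected

word = [(20, 20), (25, 28), (30, 20)]                      # a small mark on the page
lasso = [(10, 10), (40, 10), (40, 40), (10, 40), (11, 12)]  # loop drawn around it
print(select_with_lasso(lasso, [word]))                     # the mark is selected
```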

Handwriting Recognition (OCR/ICR) 

Handwriting recognition originated in document digitization but has expanded to live conversion of finger and stylus writing to print font in text messaging apps, smart watches, and even Google searches. Handwriting-to-text features rely on Optical Character Recognition (OCR) and its specialized subfield, Intelligent Character Recognition (ICR). OCR is designed to recognize mostly printed text: think of the document scanning used to virtually cash checks, reading each printed data field like “Pay to the Order of” and “for Deposit Only at”. ICR is much more flexible (and expensive) and uses machine learning neural networks that enable it to constantly improve, which is particularly relevant when it’s detecting your handwriting: in the check-cashing example, ICR reads the handwritten contributions to those fields, like your name and the intended amount [21]. Both depend on massive datasets to learn to parse shapes into character codes by recognizing patterns corresponding to certain letters and symbols [22]. These systems convert vector strokes into encoded text objects, allowing the words to be searched, copied, or reformatted [21]. For example, when you write the word “apple” to convert to print, the ICR layer is activated by the high probability of script. It passes the smoothed-curve shapes of the individual letters through several layers of its neural architecture to recognize them as letters, sometimes in the context of the other letters they appear in sequence with. It then strings them together, using its knowledge of the English dictionary and the spacing between strokes, to produce the most likely text representation of the shape, which any functional note application will hopefully deem to be “apple”. The transition from visual data (vector graphics) to semantic data (character encoding) introduces dual layers of complexity. In PDFs, this can result in mishaps like annotations disappearing when you export a file, because the vector annotations are stored in the overlay layer, which is not always embedded into the base text layer of the file [23]. This mismatch is an illuminating consequence of the divergent encoding pathways for images and text that note-taking apps make use of simultaneously. 
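The last step of that pipeline, choosing the most plausible dictionary word from per-letter guesses, can be sketched in a few lines. The “recognizer” output below is hard-coded to stand in for a neural network, and the tiny dictionary and scores are invented; real ICR systems use far larger language models.

```python
# Sketch of the final stage of handwriting recognition described above: given
# per-position letter probabilities (hard-coded here, standing in for a neural
# recognizer), pick the dictionary word the stroke sequence most likely spells.

DICTIONARY = {"apple", "apply", "ample", "maple"}

# Mock recognizer output: for each written letter, candidate characters and scores.
letter_guesses = [
    {"a": 0.8, "o": 0.2},
    {"p": 0.9, "n": 0.1},
    {"p": 0.7, "n": 0.3},
    {"l": 0.6, "i": 0.4},
    {"e": 0.9, "c": 0.1},
]

def word_score(word, guesses):
    """Multiply per-letter probabilities; unseen letters get a small floor."""
    if len(word) != len(guesses):
        return 0.0
    score = 1.0
    for ch, options in zip(word, guesses):
        score *= options.get(ch, 0.01)
    return score

best = max(DICTIONARY, key=lambda w: word_score(w, letter_guesses))
print(best)  # "apple"
```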

AI Optimizations: Spellcheck and Math Suggestions 

AI-driven optimizations like handwriting spellcheck and auto-suggested mathematical solutions build on top of the ICR layer, taking its data as input to machine learning algorithms and outputting more semantic data that travels from the ICR layer back down to vector graphics as suggested text. These systems require constant analysis of handwritten input to offer their corrections or enhancements [24]. That real-time processing introduces latency, the delay between stylus movement and visual feedback discussed in prior sections, which in ‘acceptable’ systems is usually limited to 20-40 ms, or an average of about 33 updates per second [18]. Because the software is continuously interpreting your input rather than just storing it, the longer you write, the more the note file becomes internally layered with metadata, semantic guesses, and correction attempts; stability decreases, sometimes leading to frozen note pages, app crashes, long load times, and unresponsiveness [24]. Latency grows with the computation and memory operations required (the software has more and more data to sift through as it contextualizes new input), which is why these features are often disabled by users (including me) in favor of an efficient writing experience. As processing speed and data storage improve, these operations could become less computationally expensive and disruptive, and thereby more useful.
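The sketch below illustrates why that slowdown accumulates: if each new stroke is re-contextualized against everything already on the page, per-update work grows with the note until it blows past the latency budget. The per-stroke cost figure is entirely made up; only the scaling behavior is the point.

```python
# Illustrative sketch of why long notes get sluggish: if each update must
# reconsider every stored stroke, per-update latency grows with the note.

COST_PER_STROKE_MS = 0.01   # hypothetical processing cost per stored stroke
BUDGET_MS = 40              # upper end of the 'acceptable' latency range above

for strokes_on_page in (500, 2000, 5000, 10000):
    latency = strokes_on_page * COST_PER_STROKE_MS
    status = "ok" if latency <= BUDGET_MS else "over budget"
    print(f"{strokes_on_page:>6} strokes -> {latency:5.1f} ms per update ({status})")
```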

Figure 4: Apple iPadOS 18 ‘Math Notes’ interprets character and math symbol data to read an equation and calculate its result, which it autofills in your own handwriting. [25]

Looking Forwards (and Backwards) 

The current capabilities of styluses and note-taking apps shed considerable light on what’s to come in efficiency, convenience, and collaboration. Trying to organize your writing on a physical page as you receive information from a lecture or a textbook is riddled with potential mishaps: missing what’s said about a graphic as you frantically try to copy it, thinking you could fit an equation into the rest of a line of notes and realizing you’ll have to split it across two lines, or leaving space for a topic the lecture might return to, only to find it’s never mentioned again or that you wildly underestimated how much room you would need and have to start a separate addendum. One of the most immediate consequences of digitizing notes is the ability to constantly restructure them with drag, drop, and resize features, allowing flexible redefinition of scope and organization across a page [23]. Pliable notes enable more perfectionism but also require less planning, and the impact on cognition merits further inquiry. In a related sphere of document administration, the use of ‘gestures’ for manipulating objects, changing a writing implement, and adding new pages allows you to make a virtual document whose real-life equivalent would require applying and removing different colors and stroke widths with a large set of implements. As handwriting recognition advances and these apps grow better at discerning writing from housekeeping markings, we can expect more shortcut gestures to emerge and become commonplace, creating a writing experience increasingly distinct from paper [20]. Online distribution is yet another avenue for distinguishing virtual notebooks. We rarely share real-life notes in the way we share used textbooks, whose annotations can provide emphasis and interpretations, augmenting educational value [26]. Tools empowering real-time collaborative annotation grow increasingly useful as the responsiveness of digital drawing continues to improve [23], and with the progress of machine learning, our note-taking apps could soon detect the texts and subjects we’re working on and make suggestions based on other users’ commentary. This small sample of the evolving nature of the tactile digital medium gives us something to look forward to in a largely tech-averse climate. 

When large areas of annotation moved from handwriting to typing, the creativity and forced re-processing of manual formatting, margin notes, and personal code systems gave way to text-dominant regurgitation as users battled rigid and frustrating interfaces [26]. Handwriting on tablets may restore and augment that fluidity, blending the expressiveness of analog methods with the shareability and searchability of digital media. It offers a way to reconnect with how we’ve historically engaged with texts and presented our ideas, while extending those practices into the digital future. While I don’t expect or hope for the extinction of paper, graphite, and ink, understanding the technical solutions enabling this revolution helps us find new and exciting ways to represent, and even optimize, our cognition and share it with one another.

References

[1] M. Woggon, “A Brief History Of Touchscreen Technology: From The iPhone To Multi-User Videowalls,” Forbes. Accessed: Apr. 01, 2025. [Online]. Available: https://www.forbes.com/councils/forbestechcouncil/2022/07/20/a-brief-history-of-touchscreen-technology-from-the-iphone-to-multi-user-videowalls/

[2] B. Buxton, “Some Milestones in Computer Input Devices: An Informal Timeline.” Accessed: Apr. 01, 2025. [Online]. Available:  https://www.billbuxton.com/inputTimeline.html 

[3] K. Glinpu, “Touch Screens: How our devices moved beyond buttons,” Illumin Magazine. Accessed: Mar. 23, 2025. [Online]. Available: https://illumin.usc.edu/touch-screens-how-our-devices-moved-beyond-buttons/

[4] M. Kazmeyer, “How Does a Stylus Pen Work? | Techwalla.” Accessed: Mar. 23, 2025. [Online]. Available: https://www.techwalla.com/articles/how-does-a-stylus-pen-work

[5] “Stylus Pen for iPad iPhone, Styluslink 3 in 1 Universal Capacitive Stylus Pen for Touchscreen,” Amazon. Accessed: Apr. 03, 2025. [Online]. Available: https://www.amazon.com/Styluslink-Capacitive-touchscreens-Compatible-Touchscreen/dp/B0CJBV9CQY?ref_=ast_sto_dp&th=1

[6] J. Hindy and A. Walker, “Samsung S Pen: The ultimate guide,” Android Authority. Accessed: Mar. 30, 2025. [Online]. Available: https://www.androidauthority.com/samsung-s-pen-the-ultimate-guide-925944/

[7]  Introducing PencilKit. Accessed: Apr. 02, 2025. [Online Video]. Available: https://developer.apple.com/videos/play/wwdc2019/221/ 

[8] “Surface Go 3 and Surface Pen Bundle,” Microsoft Store. Accessed: Apr. 03, 2025. [Online]. Available:  https://www.microsoft.com/en-us/d/surface-go-3-and-surface-pen-bundle/8v55vsg5vd51

[9] J. R. Ward and M. J. Phillips, “Digitizer Technology: Performance Characteristics and the Effects on the User Interface,” IEEE Computer Graphics and Applications, vol. 7, no. 4, pp. 31–44, Apr. 1987, doi: 10.1109/MCG.1987.276869

[10] B. Or and M. Urwin, “Inertial Measurement Unit (IMU) Explained,” Built In. Accessed: Apr. 01, 2025. [Online]. Available: https://builtin.com/articles/inertial-measurement-unit

[11] J. A. Diffley, “Apple Pencil: More Than a Stylus? – News,” All About Circuits. Accessed: Apr. 02, 2025. [Online]. Available:  https://www.allaboutcircuits.com/news/apple-pencil-more-than-a-stylus/

[12] IGN, Introducing Apple Pencil – Official Trailer, (Sep. 09, 2015). Accessed: Mar. 26, 2025. [Online Video]. Available: https://www.youtube.com/watch?v=8sJQKbbSn9g

[13] K. Hinckley et al., “Sensing techniques for tablet+stylus interaction,” in Proceedings of the 27th annual ACM symposium on User interface software and technology, in UIST ’14. New York, NY, USA: Association for Computing Machinery, Oct. 2014, pp. 605–614. doi: 10.1145/2642918.2647379.

[14] “What is Palm Rejection? A Beginner’s Guide to Palm Rejection,” Enticio. Accessed: Mar. 31, 2025. [Online]. Available: https://enticio.com/blogs/studio-and-office/a-beginner-s-guide-to-palm-rejection

[15] J. Sim, Y. Yim, and K. Kim, “A review of the stylus system to enhance usability through sensory feedback,” South African Journal of Industrial Engineering, vol. 32, no. 1, May 2021, doi: https://doi.org/10.7166/32-1-2300.

[16] V. Marier, “MacPaint (v1.5),” archives.design. Accessed: Mar. 26, 2025. [Online]. Available: https://archives.design

[17] C. J. Sutherland, A. Luxton-Reilly, and B. Plimmer, “Freeform digital ink annotations in electronic documents: A systematic mapping study,” Computers & graphics, vol. 55, pp. 1–20, 2016, doi: 10.1016/j.cag.2015.10.014.

[18]  Y.-H. Hsu and C.-H. Chen, “Usability Study on the User Interface Design of Tablet Note-Taking Applications,” in HCI International 2021 – Posters, C. Stephanidis, M. Antona, and S. Ntoa, Eds., Cham: Springer International Publishing, 2021, pp. 423–430. doi: 10.1007/978-3-030-78635-9_55. 

[19] F. Faurby, “Vector vs. Bitmap Images Explained,” Filecamp. Accessed: Apr. 02, 2025. [Online]. Available: https://filecamp.com/blog/vector-vs-bitmap-images-explained/

[20] K. I. Gero, L. Chilton, C. Melancon, and M. Cleron, “Eliciting Gestures for Novel Note-taking Interactions,” in Proceedings of the 2022 ACM Designing Interactive Systems Conference, in DIS ’22. New York, NY, USA: Association for Computing Machinery, Jun. 2022, pp. 966–975. doi: 10.1145/3532106.3533480.

[21] R. John, “Intelligent Character Recognition: Benefits and Use Cases.” Accessed: Apr. 03, 2025. [Online]. Available: https://www.docsumo.com/blogs/ocr/intelligent-character-recognition 

[22]  “What’s the difference between ICR vs. OCR? | Adobe Acrobat.” Accessed: Apr. 02, 2025. [Online]. Available:  https://www.adobe.com/acrobat/hub/difference-between-icr-vs-ocr.html 

[23] G. Rathnavibushana and K. Gunasekera, “Cross-platform annotation development for real-time collaborative learning,” in 2021 International Conference on Advanced Learning Technologies (ICALT), Jul. 2021, pp. 9–13. doi:  10.1109/ICALT52272.2021.00010. 

[24] H. Zhao and H. Li, “Handwriting identification and verification using artificial intelligence-assisted textural features,” Sci Rep, vol. 13, no. 1, p. 21739, Dec. 2023, doi: 10.1038/s41598-023-48789-9.

[25] K. Gedeon, “iPadOS 18 ‘Math Notes’ can solve your handwritten equations. Here’s how to use it.,” Mashable. Accessed: Apr. 03, 2025. [Online]. Available:  https://mashable.com/article/ipados-18-math-notes 

[26] C. C. Marshall, “Annotation: from paper books to the digital library,” in Proceedings of the second ACM international conference on Digital libraries – DL ’97, Philadelphia, Pennsylvania, United States: ACM Press, 1997, pp. 131–140. doi:  10.1145/263690.263806. 

[27] “UITouch,” Apple Developer Documentation. [Online]. Available: https://developer.apple.com/documentation/uikit/uitouch
