Touchscreens are a human-computer interface that allows users to interact with data directly. When designed correctly, they are simple to use, fast, and have low error rates compared with other input devices such as a mouse or joystick.
We discuss here several ways of creating touchscreen surfaces: the resistive touchscreen, the capacitive touchscreen, and the optical-imaging touchscreen. We also discuss how to resolve recognition issues when writing on a touchscreen.
Directness: Touchscreen users merely point to the desired object to select it; they do not need to map hand motions to cursor motions.
Speed: Touchscreens allow faster selection than other devices such as a computer mouse, since users do not need to reach for the mouse but can simply point at the screen.
Ease of use: Since users merely touch the screen to interact with data, and do not need to practice hand-eye coordination as computer mouse users do, they can master the touchscreen more quickly.
No moving parts: Since only the screen is used and no other mechanical parts are involved, a touchscreen is less likely to fail even after heavy use over an extended period, unlike a device with mechanical components such as a light pen.
No additional desk space: A keyboard requires space beyond the screen itself, but a touchscreen has no such requirement. Having no additional components also reduces the number of items to keep track of when transporting the device. A computer mouse, for instance, is easily lost in transit; a touchscreen user only needs to keep track of the device itself.
Touchscreen input keys take up limited screen space. Studies have shown that a minimum key size of 20 mm is needed for user satisfaction with a numeric keypad displayed on a capacitive touchscreen.
However, a resistive touchscreen cannot be made large, because the upper substrate bends whenever it is touched, resulting in sagging. To solve this problem, Park et al. proposed a modified resistive touchscreen in which relatively tall, dense dot spacers prevent sagging. Individual touch panels are then connected to form a larger touch panel.
Capacitive screen sensors do not have the problem of sagging. They usually consist of a single surface, such as glass, with a uniform resistive material coated on one face and electrodes attached to the resistive coating. A finger or stylus applied to the surface perturbs the local electric field, allowing the screen sensor to detect the position of the touch: the electric field of the electrode nearest the stylus is distorted the most, so that electrode is taken as the touch position. One limitation is that the screen surface is thereby divided into discrete areas at least as large as the electrodes, which limits resolution.
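The nearest-electrode localization described above can be sketched in a few lines. This is an illustrative model only, not a real driver: the electrode grid positions and perturbation readings are assumed inputs.

```python
# Hypothetical sketch of nearest-electrode touch localization on a
# capacitive screen: the electrode whose field is perturbed the most
# is reported as the touch position. Readings are illustrative values,
# not output of any real sensor API.

def locate_touch(readings):
    """readings: dict mapping (x, y) electrode position -> measured
    field perturbation (arbitrary units). Returns the position of the
    most-perturbed electrode, i.e. the estimated touch point, or None
    if there are no readings."""
    if not readings:
        return None
    return max(readings, key=readings.get)

# Example: a touch near electrode (1, 2) perturbs it the most.
sample = {(0, 0): 0.1, (1, 2): 3.7, (2, 1): 0.8}
print(locate_touch(sample))  # -> (1, 2)
```

Note how the model also shows the resolution limit mentioned above: the answer is always an electrode position, so touches between electrodes cannot be distinguished.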
An array of source light-emitting diodes (LEDs) is placed along two adjacent (X and Y) sides of an input display, with a reciprocal array of corresponding photodiodes along the opposite two sides. Each LED generates a light beam directed at its reciprocal photodiode. When the user touches the display, interruptions in the light beams are detected by the corresponding X and Y photodiodes on the opposite sides of the display. Data input is determined by calculating the coordinates of the interruptions detected by the X and Y photodiodes.
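The coordinate calculation from interrupted beams can be sketched as follows. This is a simplified model under the assumption that a touch blocks a contiguous run of beams on each axis; beam indices are illustrative.

```python
# Illustrative sketch (not a real driver) of optical-imaging touch
# detection: one LED/photodiode pair per X column and per Y row.
# A touch blocks some beams; the touch coordinate is taken as the
# center of the interrupted beams on each axis.

def touch_coordinates(x_blocked, y_blocked):
    """x_blocked, y_blocked: lists of indices of photodiodes whose
    beams were interrupted. Returns the estimated (x, y) touch point,
    or None if no beam on one of the axes was blocked."""
    if not x_blocked or not y_blocked:
        return None
    x = sum(x_blocked) / len(x_blocked)  # center of blocked X beams
    y = sum(y_blocked) / len(y_blocked)  # center of blocked Y beams
    return (x, y)

# A fingertip blocking X beams 4-6 and Y beams 9-10:
print(touch_coordinates([4, 5, 6], [9, 10]))  # -> (5.0, 9.5)
```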
One disadvantage is that a large number of LEDs and photodiodes are required for a typical data-input display. Precise alignment of the LEDs with their corresponding photodiodes is also needed, driving up the cost of such a touchscreen.
An alternative is to use polymer waveguides to carry beams of light from a single light source to a single detector. Such a system is complicated and also requires precise alignment between the transmitting and receiving waveguides, and between the optical elements and the waveguides. The waveguides are made with an expensive lithographic process. The waveguide is also flat, which makes the bezel around the display wide. To alleviate this problem, Smits (2006) proposed a folded optical element waveguide that allows a minimum-width bezel around the perimeter of a touchscreen display.
Handwriting recognition requires careful algorithmic analysis of characters: even though the relative position of the stylus can be captured accurately, individual characters may still be misrecognized. Lexicon-driven methods are useful for resolving text when characters are mistakenly captured, using the same approach as spellcheckers in text editors. Once a word is written, it is segmented into individual characters and compared against a lexicon. The context in which a word or phrase is written is also considered. For example, a handwritten zip code is compared against a database of valid zip codes, and flagged to the user if no match is found. If the user is writing a letter in Spanish, for instance, the characters forming each word are compared against a lexicon of all legal Spanish words to ensure accuracy.
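A minimal sketch of this lexicon-driven correction step follows, assuming the recognizer has already produced a character string. It picks the lexicon entry with the smallest edit (Levenshtein) distance; the tiny Spanish lexicon here is illustrative, standing in for a full dictionary or a context-specific list such as valid zip codes.

```python
# Sketch of lexicon-driven correction: the recognized character string
# is compared against a lexicon and the entry with the smallest edit
# (Levenshtein) distance is chosen. The lexicon is illustrative.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance, one row at a time."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def resolve(word, lexicon):
    """Return the lexicon entry closest to the recognized word."""
    return min(lexicon, key=lambda w: edit_distance(word, w))

lexicon = ["hola", "carta", "gracias"]
print(resolve("cartc", lexicon))  # -> carta
```

A real system would also weight candidates by context, as in the zip-code example above, rather than relying on edit distance alone.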
© Lay Kuan Loh. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
 A. Sears, C. Plaisant, and B. Shneiderman, "A New Era for Touchscreen Applications: High Precision, Dragging Icons, and Refined Feedback," in Advances in Human-Computer Interaction, Vol. 3, ed. by R. Hartson and D. Hix, (Ablex Publishing, 1992), p. 1.
 H. Colle and K. Hiszem, "Standing at a Kiosk: Effects of Key Size and Spacing on Touch Screen Numeric Keypad Performance and User Preference," Ergonomics 47, 1406 (2004).
 B. Evans, "Method of and Apparatus for Sensing the Location, Such as Coordinates, of Designated Points on an Electrically Sensitive Touch-Screen Surface," U. S. Patent 4806709, 21 Feb 89.
 G. D. Smits, "Apparatus and Method For a Folded Optical Element Waveguide For Use With Light Based Touch Screens," U. S. Patent 8184108, 22 May 12.
 G. Kim and V. Govindaraju, "A Lexicon Driven Approach to Handwritten Word Recognition For Real-Time Applications," IEEE Trans. Pattern Analysis and Machine Intelligence 19, 366 (1997).