About this Article
Written by: Austin Bowie
Written on: April 12th, 2016
Tags: computer science, security & defense, communication
About the Author
Austin Bowie is a student at the University of Southern California.

Are You a Human? Exploring What Web Security Means to You

CAPTCHA

One security issue is that robots, or automated programs, can interact with the Internet as if they were human users. For example, a program could create email accounts en masse and spam a set of users. Based on the numbers seen earlier, it is not unthinkable for a computer to quickly fill an inbox with over one million emails from one million different accounts. Couple that with a robot that crawls the Internet collecting email addresses, and the situation could quickly get out of hand.
To prevent this, there is a common countermeasure called a CAPTCHA, an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart” [3]. An example is portrayed in Fig. 1. The “Turing test” component of this acronym is the important part. A Turing test, named after the pioneering computer scientist Alan Turing, is a test given to an entity to determine whether it is a robot or a human. While this may sound complicated, these tests are quite brief and easy to take. Anyone who has signed up for an email account in the last five years has likely had to solve one to complete the process, and it probably looked something like the example in Fig. 1 [3].
Figure 1: A typical CAPTCHA security module.
While usually easy for humans to solve, this can be an unexpectedly difficult challenge to model for computer scientists developing programs. Humans can see the image as a whole, use their natural ability to recognize any relevant patterns or text, and give the answer. A computer, on the other hand, can only look at one pixel at a time, forcing it to go back and forth as it attempts to decipher any patterns that may exist. Interestingly, with the advent of artificial intelligence, computers have begun to mimic human brains in this regard, and these image-based CAPTCHA programs are not working as well as they used to.

A New CAPTCHA System

To combat this, in December of 2014, a new type of CAPTCHA was developed, called the “no CAPTCHA reCAPTCHA.” Shown in Fig. 2, this new technology uses software that tracks what a user does before, during, and after engaging with the CAPTCHA to decide whether the user is human [4]. As a result, instead of having to decipher distorted text, the user only has to check a box confirming that they are, in fact, human [4].
Figure 2: A more recent development, a reCAPTCHA module.
How is this done? While the code is still under wraps, one can imagine that a robot, unless told otherwise, would likely fill out a form in thousandths of a second, while a human would take a minute or two. Using this information, it would be fairly easy to decide whether the user filling out a form is human. While undoubtedly much more robust than this simple example, the “no CAPTCHA reCAPTCHA” likely operates along similar lines. This is only one defense, however, in the multifaceted world of web security.
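Since Google has not published how the “no CAPTCHA reCAPTCHA” actually decides, the following is only a toy sketch in Python of the timing idea described above; the names and the two-second threshold are hypothetical.

    import time

    MIN_HUMAN_SECONDS = 2.0   # assume no human completes the form in under 2 s

    def looks_human(opened_at, submitted_at):
        """Flag a submission as likely automated if it arrived implausibly fast."""
        return (submitted_at - opened_at) >= MIN_HUMAN_SECONDS

    # Example: a bot submits 30 milliseconds after the page loads.
    opened = time.time()
    print(looks_human(opened, opened + 0.03))   # False -> show a harder challenge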

Brute Force Hacking and Preventive Measures

Spamming accounts and forms, however, is not the only security risk on the Internet. Guessing passwords is a common and considerable threat to online security as well. While a human at a keyboard could take years to try every word in the dictionary before hitting on someone else's password, a computer can run through millions of words in a minute. Suddenly, trying every word in the dictionary is a trivial task. If a password is an actual word, odds are the computer will guess it very quickly. This method is called “brute forcing” a password (or, when the guesses come from a wordlist, a “dictionary attack”), and there are a couple of interesting ways to combat it.
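As a minimal sketch of why dictionary words are so weak, imagine an attacker who has obtained a password hash and simply hashes candidate words until one matches. The tiny wordlist and the choice of SHA-256 below are illustrative assumptions, not a description of any particular system.

    import hashlib

    def sha256(text):
        return hashlib.sha256(text.encode()).hexdigest()

    stolen_hash = sha256("sunshine")   # stand-in for a leaked password hash

    # A real attacker would load a wordlist with millions of entries from disk;
    # a handful of words is enough to show the loop.
    wordlist = ["password", "dragon", "sunshine", "letmein"]

    for word in wordlist:
        if sha256(word) == stolen_hash:
            print("Guessed the password:", word)
            break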
The first counteractive measure is one that often annoys users: password requirements. Length requirements, along with required capital letters and numbers, make guessing a password much more difficult. For example, doubling a five-character, all-lowercase password to ten characters raises the number of possible guesses from about 11.8 million to about 141 trillion, which is 11.8 million times more difficult. If numbers and uppercase letters are included as well, there are about 839 quadrillion possibilities, equivalent to 839,000 trillion different combinations of possible passwords. All of a sudden, what previously took a minute would now take roughly 134,000 years if the computer kept guessing at the same rate.
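The arithmetic behind these figures can be checked with a few lines of Python; the rate of 11.8 million guesses per minute is the one implied by the example above.

    # Keyspace sizes for the password examples above.
    lowercase = 26                 # a-z
    mixed = 26 + 26 + 10           # a-z, A-Z, 0-9

    print(lowercase ** 5)          # 11,881,376             (~11.8 million)
    print(lowercase ** 10)         # 141,167,095,653,376    (~141 trillion)
    print(mixed ** 10)             # ~8.39e17                (~839 quadrillion)

    # At ~11.8 million guesses per minute, exhausting the ten-character space:
    minutes = mixed ** 10 / 11881376
    print(minutes / (60 * 24 * 365))   # roughly 134,000 years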
Even then, however, some users still are not protected, and some attackers are very persistent. To deal with this, a limit can be placed on how quickly passwords can be tried. For example, a site might allow only one password attempt every five seconds, which would make no difference at all to a human but suddenly makes the possible seem impossible for a computer. Using the earlier example of a five-character, all-lowercase password, what may have taken a minute would now take nearly two years. Extend this five-second rule to ten characters drawn from upper and lower case letters as well as numbers, and the password may not be cracked until the universe is ten times as old as it is now. Not a bad solution, but there is still one more common tactic that can be implemented to deter intrusions.
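The same kind of quick check works for the rate-limited estimates, assuming one guess every five seconds.

    # Time to exhaust each keyspace at one guess every five seconds.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    five_lowercase = 26 ** 5       # five lowercase letters
    ten_mixed = 62 ** 10           # ten mixed-case letters and digits

    print(five_lowercase * 5 / SECONDS_PER_YEAR)   # about 1.9 years
    print(ten_mixed * 5 / SECONDS_PER_YEAR)        # about 1.3e11 years --
                                                   # roughly ten times the age
                                                   # of the universe (~1.38e10)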
An easy and somewhat effective way to prevent intrusions is to set a maximum number of attempts a user can make before no more attempts are allowed. A variation on this, which Apple has implemented on the iPhone, is the timeout. The first three attempts can be made instantly; for every wrong attempt after that, the user has to wait a longer and longer period of time before another attempt can be made [7]. This is a relatively effective mechanism: valid users will almost always guess their password within a few attempts, while a computer has a very low chance of doing the same unless the password is known beforehand. After a few wrong guesses, the robot has no way to continue except to wait. The only caveat is that some humans are not, in reality, valid users. For example, friends trying to guess the password on a user's phone have been known to lock the phone, sometimes for days at a time. At that point, even the true owner cannot regain access to the phone without special intervention by the manufacturer. In the case of a website, users are often forced to reset their password instead. This may be an innocuous request at first, but eventually the user has to do it so often that they cannot remember their password at all, likely leading to other security flaws.
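A lockout policy with escalating timeouts can be sketched in a few lines; the delay schedule below is hypothetical, not Apple's actual one.

    FREE_ATTEMPTS = 3
    DELAYS = [60, 300, 900, 3600]   # seconds to wait after each further failure

    def required_delay(failed_attempts):
        """Seconds the user must wait before the next login attempt."""
        if failed_attempts < FREE_ATTEMPTS:
            return 0
        extra = failed_attempts - FREE_ATTEMPTS
        return DELAYS[min(extra, len(DELAYS) - 1)]

    # The first three wrong guesses cost nothing; the fourth attempt forces a
    # one-minute wait, the fifth a five-minute wait, and so on.
    for failures in range(7):
        print(failures, "failures ->", required_delay(failures), "second wait")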