Improving predictive text

How can designers improve predictive text entry systems?
15 October 2019

Interview with Per Ola Kristensson, University of Cambridge

[Image: Typing on an iPhone]

Let’s take a look at a technology that most of us are very familiar with - typing. But not everyone can type, and some people can’t move anything other than their eyes. Per Ola Kristensson is from the University of Cambridge, and works on text entry systems to help in these situations. Chris Smith spoke to him...

Per Ola - What people use today is called eye-typing, and the way it works is that you have a display in front of the user, and the system is tracking your eye gaze and showing an indication of where it thinks you’re looking on the screen. You move that eye pointer by looking at the desired key, and then you have to look at it for a very long time. The reason you have to look at it for a very long time, called the dwell timeout, is because the system has to be sure that you actually want to type that key, as opposed to just looking at that key. And that’s just very, very slow.
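The dwell-timeout mechanism Per Ola describes can be sketched in a few lines. This is only an illustration, not his actual software: it assumes a stream of timestamped gaze samples, and the one-second threshold and function names are invented.

```python
# Minimal sketch of dwell-timeout selection. A key is "typed" only when
# the gaze has rested on it continuously for DWELL_TIMEOUT seconds;
# a brief glance at a key types nothing.
DWELL_TIMEOUT = 1.0  # seconds (illustrative value)

def dwell_select(gaze_samples):
    """Yield a key each time the gaze dwells on it long enough.

    gaze_samples is an iterable of (timestamp, key) pairs.
    """
    current_key, start = None, None
    for t, key in gaze_samples:
        if key != current_key:
            current_key, start = key, t      # gaze moved: restart the timer
        elif t - start >= DWELL_TIMEOUT:
            yield key                        # held long enough: type it
            current_key, start = None, None  # require a fresh fixation

# A user holds their gaze on H, then glances at E too briefly:
samples = [(0.0, 'H'), (0.5, 'H'), (1.1, 'H'), (1.3, 'E'), (1.5, 'E')]
print(list(dwell_select(samples)))  # ['H'] - E was never held long enough
```

The timer restart on every gaze movement is exactly why this is so slow: each intended letter costs a full dwell timeout, and each stray glance costs a restart.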

Chris - It's very laborious isn't it, because we sent Mariana to come and have a go with a system you mocked up to show how this actually works. We've got a recording of her having a go with it.

Mariana - Tried to type H but it got U. That's fine. Go over E, I went for I apparently. How do I delete a bit? Okay, let's start again. Go over the H, that picked Y, let’s see if prediction works. E... L... Oh, I can see “Hello” as the first prediction! It clicked it twice. Do the N. A. K. E. “Naked” appeared as the first prediction! S. C. We have “science”. Ah, that picked E. C. I picked V again, not very accurate at this last word. I. E. N. “Scientists”. “Hello Naked Scientists”!

Chris - Now that’s someone who actually has a PhD in computer science, clearly a bright person, equipped with the technology that we would give someone with a disability to try and type words, and it actually took her six minutes; we cut that down a bit. Clearly people are not going to end the day feeling anything other than intense frustration with that. So what can you do, Per Ola, to make this a more intuitive and more efficient experience for people who need to use gaze tracking to type?

Per Ola - The fundamental problem is that it’s just not natural to have to stare at one key at a time, because the eye is a sensory organ, not a control organ, and when we want to write, we think in terms of phrases and sentences and words, not in terms of inputting individual letters. So the way we changed the system is we basically got rid of this dwell timeout altogether. To write, for instance, “the cat”, you look in the vicinity of the T key - you don’t even have to look at the T key itself - and then you look at the H key, the E key, the C key, the A key and the T key. So you spell out “the cat” without going to the space bar, and then you just look at the text area and the system will automatically translate that sensor data into “the cat”.

Chris - So you’re using statistics: you’re basically monitoring where the person looks and working out the possibilities of what they could want to say, then refining that statistically, and that means you give them a range of options to choose from, but it’s much quicker. It’s like predictive text on steroids, I suppose, and it doesn’t mentally fatigue them; I think Mariana probably would’ve gone nuts if we’d tried to make her do that for much longer.

Per Ola - You can focus more on the actual process of writing and what you want to communicate, rather than fiddling with the interface.

Chris - And how many words per minute can you achieve with your system, compared to, say, the industry standard that people were using?

Per Ola - In a controlled lab setting, you can type at about 46 words 

Chris - With your system?

Per Ola - Yeah.

Chris - That’s very fast actually.

Per Ola - Which is more than twice the record ever measured for standard eye-typing. But that was captured under very unrealistic circumstances. In reality it depends on the nature of the user and how good their eye control is, because some of these users often have additional issues that make even eye-typing very difficult. But my experience from talking to users who have used this system is that you can typically double the entry rate.

Chris - Even so, I mean, that’s pretty impressive, isn’t it. It is commercially available now, the system you’ve just described, isn’t it?

Per Ola - I worked with Tobii Dynavox for about six years, and it’s actually part of a free software update for their communicator system, for users who use this type of eye-tracking system.

Chris - But in your view, then, what are the big gains that we can make next? Where are we going to go with this technology next in order to make life better?

Per Ola - What we need is more sources of information. The reason this kind of system works at all is because we are very predictable as humans in terms of what we want to communicate, and that predictability can be captured in a statistical language model. But it can only go so far. The next step is to take into account other context: your GPS location, who you’re trying to communicate with, the time of day, and so on.
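One simple way to picture folding context into a language model is to re-weight each word’s base probability by context-specific boosts. Everything below - the words, the probabilities, the boost values - is invented purely to illustrate the idea of context-aware prediction, not any real system.

```python
# Toy context-aware prediction: rank words by base probability times
# every boost that applies to the current context features.
BASE = {'coffee': 0.01, 'meeting': 0.01, 'goodnight': 0.001}

# Hypothetical boosts per (context feature, word) pair, e.g. learned
# from when and to whom a user typically types each word.
CONTEXT_BOOST = {
    ('morning', 'coffee'): 5.0,
    ('evening', 'goodnight'): 20.0,
}

def predict(context, top=1):
    """Return the top predicted words given a list of context features."""
    def score(word):
        s = BASE[word]
        for feature in context:
            s *= CONTEXT_BOOST.get((feature, word), 1.0)
        return s
    return sorted(BASE, key=score, reverse=True)[:top]

# In the evening, 'goodnight' outranks words that are likelier overall:
print(predict(['evening']))  # ['goodnight']
print(predict(['morning']))  # ['coffee']
```

A rarely-typed word can jump to the top of the predictions when the context strongly favours it - which is the kind of gain Per Ola is pointing at beyond a text-only language model.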

Chris - So just integrate more information together, which will help you make better predictions about what that person is trying to do, and that then makes it a more fluid, more rapid experience for them.

Per Ola - Absolutely. Because what you can do then is predict whole sentences. And if you can go to the sentence level, that’s when you get a massive boost in performance.
