Recently, we wrote an article about how we could soon send scents with our online messages, courtesy of a Chinese study. And now it’s time to welcome the eye-tracking Internet of Things (IoT).
The first, second, and third screens have successfully exercised the potential this magnificent organ of sight holds for digesting output information, and now the fourth screen (wearables) presents a very promising first step for sight as an input.
While head-worn, wrist-based, and other types of wearable devices have an advantage over their predecessors in building on the familiarity of talk, tap, and touch-based interactions, their biggest challenge is that smaller screens massively compromise the capacity to communicate and consume data. Further, until recently, the world had very few commercially available alternatives trying to close this very visible gap.
Even though the human eye is considered the fastest-moving organ in the human body, eye-tracking technology still represents a nascent market worldwide. Though new and comparatively small, the market has a number of participants entering the space with a range of new, innovative solutions that promise the more intuitive, immersive, and glanceable experiences users expect fourth screens to deliver.
Currently, a number of companies are trying their hand at capitalising on this notion by bringing eye-tracking technologies to wearable augmented reality and virtual reality devices. The list includes California-based startup Eyefluence, which was acquired by Google last year, and Copenhagen-based The Eye Tribe, which was recently acquired by Oculus (Facebook).
When it comes to eye tracking, the main idea is to use vision efficiently as a tool for measuring intent. The longer-term vision for this functionality is to use it as a complement, rather than a competitor, to the various components that will make more spatially aware, contextual computing solutions an Internet of Things reality.
For every player competing in this segment, it is crucial to understand how best these new technologies can exploit the Optimal Recognition Point (ORP): the point within a word to which the human eye naturally gravitates before the brain begins processing its meaning.
In traditional line-by-line reading, the eye jumps from one word to the next, identifying each word’s ORP along the way until a punctuation mark signals the brain to pause and make sense of the whole. This is the very reason most people find it difficult to recite a song or the alphabet backwards: the components that make up the whole are learned in a sequence.
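The exact ORP mapping used by speed-reading tools isn’t standardised or public; as a rough illustration only, a heuristic that places the ORP slightly left of centre (about a third of the way into longer words) might look like this — the thresholds below are assumptions for the sketch, not a published formula:

```python
def orp_index(word: str) -> int:
    """Return a rough character index for a word's Optimal
    Recognition Point (ORP).

    Heuristic only: very short words anchor on the first or
    second character; longer words anchor about a third of
    the way in. The cutoffs are illustrative assumptions.
    """
    n = len(word)
    if n <= 1:
        return 0          # single character: the character itself
    if n <= 5:
        return 1          # short words: second character
    return round(n * 0.3) # longer words: ~30% into the word


# Example: highlight the ORP character in a sentence word by word
for word in "The eye gravitates toward a recognition point".split():
    i = orp_index(word)
    print(f"{word[:i]}[{word[i]}]{word[i + 1:]}")
```

A display built on this idea would keep the ORP character at a fixed screen position, letting the eye stay still while words stream past — useful on the tiny screens of fourth-screen devices.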
Positional memory plays an important role in working with sequential data, which is most often the kind of data the Internet of Things produces.
Regular neural nets are built around fixed-size inputs and outputs, with data flowing in one direction through the hidden layer. Recurrent neural nets are the ones that incorporate the concept of memory. To do this, the hidden-layer state and the input data at each timestep are combined and carried forward as input to the next timestep, repeatedly. This hidden recurrence adds the context and the back-end framework that machine learning and advanced analytics bank on, for everything from speech and natural language processing to image and handwriting recognition.
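The recurrence described above can be sketched in a few lines of NumPy. This is a minimal vanilla RNN forward pass with arbitrary illustrative sizes, not any particular library’s implementation: at each timestep the new hidden state combines the current input with the hidden state carried over from the previous timestep.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 4, 3, 5

# Weight matrices: input-to-hidden, and hidden-to-hidden (the recurrence)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)                  # initial hidden state
inputs = rng.normal(size=(seq_len, input_size))

for x_t in inputs:
    # h at time t depends on the input x_t AND on h at time t-1,
    # so earlier timesteps influence later ones: this is the "memory"
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print(h)  # final hidden state, shaped (hidden_size,)
```

Because `W_hh @ h` feeds each step’s state into the next, the final `h` is a function of the entire sequence — exactly the positional, order-sensitive behaviour that sequential IoT data calls for.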
This same methodology is now attracting the attention of tech giants like Google, which acquired DeepMind in 2014. Its artificial-intelligence-inspired Garage projects are now entering industrial, medical, and retail operations, among others. The methodology will also prove important in putting emerging eye-tracking solutions on the map.
Meanwhile, we will see gesture control serving as an entry point for eye-tracking technologies, especially as they spread through the IoT landscape. The fundamental difference between the two is simple: gesture technology works on existing hardware but requires the user to learn, initiate, and engage with the device for the interaction to take place, whereas eye tracking achieves the same results by building value from something users already do… see.