By Eduardo Binotto, Software Engineer/IoT Developer at PackIOT
Much is discussed about artificial intelligence, different forms of communication, languages, and ways to enable interaction between machines and human beings. But if we think about it, there is one area that has not progressed that far: the way we type. Patented in 1878, the QWERTY layout has been the main input method for computers since the birth of computing, and it has even been carried over to modern devices such as our mobile phones. So, chances are that we will be typing, one way or another, for a long time to come.
But, as technology evolves, new ways to interact with systems arise, and it’s important to stay up to date and see what the future may offer in terms of Human-Machine Interactions.
One interface that has been gaining traction in recent years is voice control. Many computing devices today can understand speech well enough that things can be controlled with a simple voice command. Despite the hype, a survey showed that 46% of respondents had never used voice assistants and another 19% rarely use them, revealing that there is room for improvement before voice becomes a truly mainstream interface.
A prominent example of a voice-command application is Google Assistant. GA is a personal virtual assistant developed by Google that can perform day-to-day tasks such as calling people, sending messages, searching on Google, and even chatting with the user. At PackIOT, we ran real tests of our software integrated with Google’s system, and the results were amazing. In the model we proposed, a plant manager could ask how production was going that day (among other questions) and Google Assistant answered using the data collected by PackIOT.
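To make the idea concrete, here is a minimal sketch of the core of such an integration: mapping a transcribed voice command to an answer built from production data. The intent phrasings, the `handle_utterance` function, and the fake metrics are all illustrative assumptions, not PackIOT’s actual API or Google Assistant’s.

```python
import re

# Hypothetical intent patterns a production-monitoring assistant might handle.
# Phrasings and the fake data below are illustrative, not a real API.
INTENTS = {
    "production_today": re.compile(
        r"\bhow (is|was) (my |the )?production( going)?( today)?\b", re.I),
    "machine_status": re.compile(r"\bstatus of machine (\w+)\b", re.I),
}

# Stand-in for data a real system would fetch from the shop floor.
FAKE_METRICS = {"units_today": 12_480, "target_today": 15_000}

def handle_utterance(text: str) -> str:
    """Map a transcribed voice command to a spoken-style answer."""
    if INTENTS["production_today"].search(text):
        pct = 100 * FAKE_METRICS["units_today"] / FAKE_METRICS["target_today"]
        return (f"You have produced {FAKE_METRICS['units_today']} units today, "
                f"{pct:.0f}% of the daily target.")
    m = INTENTS["machine_status"].search(text)
    if m:
        return f"Machine {m.group(1)} is running normally."
    return "Sorry, I didn't understand that."

print(handle_utterance("Hey, how is my production going today?"))
# → You have produced 12480 units today, 83% of the daily target.
```

A real assistant integration would add speech-to-text in front of this and a proper NLU model instead of regular expressions, but the shape of the problem is the same: recognize the intent, fetch the data, phrase the answer.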
Another promising interface that has drawn a lot of attention is gesture control: in general terms, the ability to recognize and interpret movements of the human body in order to interact with and control a computer system without direct physical contact.
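At its simplest, gesture recognition turns a stream of tracked body positions into a discrete command. The toy sketch below classifies a tracked hand path as a swipe; the function name, the `(x, y)` point format, and the distance threshold are illustrative assumptions (a real system would get these points from a camera or depth-sensor tracker).

```python
def classify_swipe(points, min_distance=50.0):
    """Classify a tracked hand path as a swipe gesture.

    `points` is a list of (x, y) pixel positions over time, e.g. from a
    camera-based hand tracker. Returns "left", "right", "up", "down",
    or None if the motion is too small to count as a gesture.
    """
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if max(abs(dx), abs(dy)) < min_distance:
        return None  # too small to be a deliberate gesture
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"  # screen y grows downward

print(classify_swipe([(0, 0), (40, 5), (120, 12)]))  # → right
```

Devices like the Kinect add skeletal tracking and machine-learned pose models on top, but the end product is the same kind of mapping: continuous motion in, discrete command out.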
The Kinect, the most mainstream device in this category, won the Guinness World Record for “Fastest-Selling Consumer Electronics Device” in 2011, but it faced many challenges moving out of the video-game world into more “sober” environments. The truth is that Microsoft stopped manufacturing the Kinect a few years after that record, showing how difficult it is to design such interfaces.
There is a lot of room for improvement here, and good options are likely to emerge in the near future. And this will make our lives easier, no matter what.
Remembering, once again, the QWERTY keyboard and its stagnation, I should say there is hope yet.
The last interface technology we will discuss is a device that is already making science fiction a reality. Brain-machine interfaces, available to consumers thanks to EEG technology, carry the bold promise of controlling systems, or even anything, with just our thoughts. A brain-computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain-machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. A BCI differs from neuromodulation in that it allows bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
Although they can’t read the words inside our heads, some devices are capable of detecting patterns and acting on them, as well as handling somewhat more sophisticated intentions, like moving objects in 3D space.
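The “pattern detection” these headsets do can be pictured in miniature: watch a signal and flag moments when it deviates sharply from its recent baseline, the way an eye blink or a trained mental command stands out in an EEG trace. The sketch below is a toy version under that assumption; the function name, window size, and z-score threshold are illustrative, and real devices use far more sophisticated signal processing and classifiers.

```python
from statistics import mean, stdev

def detect_events(samples, window=8, z_threshold=3.0):
    """Flag indices where an EEG-like signal deviates strongly from its
    recent baseline -- a toy stand-in for the pattern detection consumer
    headsets perform (parameters are illustrative assumptions)."""
    events = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]          # the last `window` samples
        mu, sigma = mean(baseline), stdev(baseline)
        # A sample more than `z_threshold` standard deviations from the
        # baseline mean counts as an "event" (e.g. a blink artifact).
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            events.append(i)
    return events

signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 6.0, 1.0]
print(detect_events(signal))  # → [8], the spike stands out from the baseline
```

Mapping such detected events to actions — a blink to click, a rehearsed mental pattern to move a cursor — is what turns raw EEG into an interface.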
Of course, a lot of research is still needed to turn this into a reliable industrial tool, but once the challenges are overcome, it may pose a serious threat to the dominance of our old and simple QWERTY keyboard.
Eduardo Binotto is a tech enthusiast interested in software and innovation, and super excited about futuristic topics related to technology.