Tool: Using machine learning to create personalized conversational user interfaces.
We live in a time of big data and an even bigger need for personalization. Companies now build new technology on top of giant databases of consumer trends, collected from sensors we use every day: sensors like facial recognition (which scans a face using a camera) or touch sensors (embedded in the surface of a touchscreen).
This creates a pain point for users who don't fall into the majority of the data pool: the technology is no longer designed for them, making it difficult to use. To help design for these outliers, settings, hidden deep within menus, were created to tailor the experience to a particular user. The goal of this project is to create a system that uses data, sensors, and machine learning to generate personalized interfaces.
What is Machine Learning?
I like to think about machine learning in terms of food. Machine learning is built on something called an algorithm, a set of rules to perform. Let's call this algorithm an apple pie recipe. To make the algorithm work you need data, just as a recipe needs ingredients. Once you have the recipe and ingredients, there is a finite number of combinations for baking the apple pie. Some avenues make the pie too salty, or too runny.
These attempts feed something called a neural network: a series of avenues with positive or negative outcomes. The failed attempts let you know that you should bake longer or use less salt. Ultimately the recipe gets whittled down to the best combination of ingredients, and it can be passed down for years to come.
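The recipe analogy can be sketched in a few lines of code: try every combination of "ingredients," score the result, and keep the best one. The `bake` scoring function, the candidate amounts, and the ideal values below are all made up purely for illustration.

```python
def bake(salt_tsp, minutes):
    """Score a hypothetical pie; the (invented) ideal is 1 tsp salt, 50 minutes."""
    return -abs(salt_tsp - 1.0) - abs(minutes - 50) / 10

best = None
for salt in (0.5, 1.0, 2.0):        # candidate ingredient amounts
    for minutes in (40, 50, 60):    # candidate bake times
        score = bake(salt, minutes)
        if best is None or score > best[0]:
            best = (score, salt, minutes)

print(best)  # the highest-scoring combination survives
```

Real machine learning explores this space far more cleverly than brute force, but the idea is the same: failed attempts tell the system which direction to adjust.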
How it Works
This solution depends on four technologies that help create the unique experiences seen above. These technologies are listed below with their uses:
Facial Recognition: collects age, sex, ethnicity, mood, and clothing patterns
Touch Sensors: collects usage patterns of certain features i.e. phone applications
Machine Learning: compiles all the data collected and makes visual decisions based on a neural network
Neural Network: conducts trial and error to refine visual decisions made from the machine learning algorithm
With these four technologies, the phone's software would alter how the phone looks and functions, in real time, to give the user a personalized experience that enhances ease of use and eliminates pain points.
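As a rough sketch of how those pieces could fit together, the hypothetical function below maps facial-recognition output and touch-sensor usage data to UI settings. The field names, thresholds, and adjustments are assumptions for illustration, not the project's actual implementation.

```python
def personalize(face, usage):
    """Map facial-recognition data and app-usage counts to UI settings."""
    ui = {"text_size": "medium", "home_apps": []}
    if face.get("estimated_age", 0) >= 65:
        ui["text_size"] = "large"  # favor readability for older users
    # surface the three most-used apps first
    ui["home_apps"] = sorted(usage, key=usage.get, reverse=True)[:3]
    return ui

settings = personalize(
    face={"estimated_age": 70, "mood": "neutral"},
    usage={"camera": 12, "mail": 40, "maps": 8, "music": 25},
)
print(settings)  # {'text_size': 'large', 'home_apps': ['mail', 'music', 'camera']}
```

In the full system, a learned model would replace these hand-written rules, refining its decisions through the neural network's trial and error.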
During preliminary research and iteration, this tool was applied to a variety of situations, including accessibility needs. The primary impairments focused on were vision, hearing, and kinesthetic (motor) impairments.
With facial recognition, the program could determine that the user has a difficult time reading the screen when they bring the phone closer to their face; the text would then automatically get bigger. If a user asked the CUI (conversational user interface) to repeat itself multiple times, the CUI would automatically increase its volume or decrease its talking speed. Lastly, for a kinesthetic (touch) impairment, the phone's components would get bigger if the user continuously hit the wrong buttons.
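The three accessibility adaptations above can be expressed as simple rules. This is a hedged sketch only: the signal names, thresholds, and step sizes are assumptions, standing in for what the real system would learn from data.

```python
def adapt(signals, ui):
    """Adjust UI settings based on observed user behavior (illustrative rules)."""
    if signals.get("face_distance_cm", 40) < 20:   # phone held close: hard to read
        ui["text_scale"] = min(ui["text_scale"] + 0.25, 2.0)
    if signals.get("repeat_requests", 0) >= 2:     # CUI asked to repeat itself
        ui["volume"] = min(ui["volume"] + 10, 100)
        ui["speech_rate"] = max(ui["speech_rate"] - 0.1, 0.5)
    if signals.get("mistap_rate", 0.0) > 0.3:      # frequent wrong-button taps
        ui["button_scale"] = min(ui["button_scale"] + 0.25, 2.0)
    return ui

ui = adapt(
    {"face_distance_cm": 15, "repeat_requests": 3, "mistap_rate": 0.4},
    {"text_scale": 1.0, "volume": 50, "speech_rate": 1.0, "button_scale": 1.0},
)
```

Capping each adjustment (the `min`/`max` calls) keeps the interface from drifting to extremes if a signal fires repeatedly.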
CUI Relevance Mode
In the primary scenario, the two users, Eva and Georgina, find themselves both talking to the CUI sitting on a nearby table. This enables CUI relevance mode, which uses data from both users to determine whose CUI has the most information about a given subject. This mode can accommodate up to 11 people contending for the primary CUI role.
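At its core, relevance mode is a selection problem: each participant's CUI reports how much it knows about the topic, and the highest-scoring one takes the primary role. The sketch below assumes a hypothetical per-user relevance score; the names come from the scenario, the scores are invented.

```python
MAX_PARTICIPANTS = 11  # the cap described in the scenario

def pick_primary_cui(scores):
    """scores: {user: relevance_score}. Return the user whose CUI leads."""
    if len(scores) > MAX_PARTICIPANTS:
        raise ValueError("relevance mode supports at most 11 participants")
    return max(scores, key=scores.get)

primary = pick_primary_cui({"Eva": 0.82, "Georgina": 0.67})
print(primary)  # Eva's CUI has the most information on this subject
```

How the relevance score itself is computed (from each user's data) is the interesting open question the machine learning layer would have to answer.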
You can see the entire scenario played out by clicking the "view" button below, which will forward you to a Google Slides presentation.
design is always evolving
Working on this project for only a few weeks has raised many interesting questions about this technology and its uses for creating personalized experiences within a digital or physical space. My hope for this project is to use it as inspiration for my thesis and to develop parts of it using Python and facial recognition hardware on an Arduino.
Much more to come!!