After adding thousands of new eye-tracking images, the latest Neurons AI model is now live. The update delivers a substantial gain in attention-prediction accuracy. Here, we walk through the main improvements, focusing on object recognition, better discrimination, and metaphorical elements.
Neurons has just launched the latest major update to its AI model. The update brings a substantial increase in accuracy and follows our continuous development process.
Most notably, the accuracy improvements in the new model fall into three categories: object recognition, better discrimination, and metaphorical elements. Let's take them in turn.
Object recognition
If you follow how the brain's visual system works, there is a divide after the primary visual cortex. One stream of processing runs upward, the so-called dorsal route, which is related to object position and use.
The other stream runs through the brain's lower end, the so-called ventral route, along the underside of the temporal lobe. Processing here is related to object category and recognition: objects, faces, cars, brands, and so on are all handled in this stream.
With the growing amount of data and new advances in our AI models, we can now see that the model recognizes faces better across a variety of conditions and angles. It is interesting to see the AI models gain a capability that in the human brain we call object permanence: we perceive an object as the same regardless of the angle from which we view it.
Better discrimination
The second type of attention-prediction precision comes from reducing errors. Making a better model is not only about boosting its true positives. You also need to show that the other parts of the model are becoming more accurate, including:
- higher true negatives -- when the model says "this part of the picture is not seen" and the eye-tracking data confirms it
- lower false positives -- reducing the times the model says there is something when there is nothing in the eye-tracking data
- lower false negatives -- reducing the times the model says there is nothing when there actually is something in the eye-tracking data
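To make these error types concrete, here is a toy sketch (our own illustration, not Neurons' actual evaluation code) that counts the four outcomes by comparing a model's binary "attended" map against a binary fixation map derived from eye-tracking data:

```python
# Hypothetical illustration, not Neurons' evaluation code: count true/false
# positives and negatives between a predicted attention map and an observed
# fixation map, both as 2D grids of 0/1 values.

def confusion_counts(predicted, observed):
    """Return (tp, tn, fp, fn) for two same-shaped binary attention maps."""
    tp = tn = fp = fn = 0
    for pred_row, obs_row in zip(predicted, observed):
        for p, o in zip(pred_row, obs_row):
            if p and o:            # model says "seen", eye tracking agrees
                tp += 1
            elif not p and not o:  # model says "not seen", eye tracking agrees
                tn += 1
            elif p and not o:      # model predicts attention where there is none
                fp += 1
            else:                  # model misses attention that is really there
                fn += 1
    return tp, tn, fp, fn

# Toy 2x3 maps: 1 = attended, 0 = not attended
pred = [[1, 1, 0], [0, 0, 1]]
obs = [[1, 0, 0], [0, 0, 1]]
tp, tn, fp, fn = confusion_counts(pred, obs)
print(tp, tn, fp, fn)  # -> 2 3 1 0
```

A better model pushes the true positives and true negatives up while driving the false positives and false negatives down, which is exactly what the three bullet points above describe.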
Metaphorical elements
Many ads play on metaphorical elements that resonate with human viewers. However, models are not particularly good at understanding these features, and they are even more difficult to hard-code into a model.
Still, the latest Neurons AI model has improved substantially in its processing of metaphorical ads. Here, precision has reached an astounding new level, matching that of non-metaphorical content.
Better attention prediction
Taken together, the new Neurons AI model update leads to a substantial increase in the precision of attention prediction. These gains have exceeded even our expectations for what machine learning models can do. They also clearly demonstrate that combining high-quality eye-tracking data with world-class machine learning models is the way forward!
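The post does not say which metric Neurons uses to score attention prediction, but in saliency research one common choice is the Pearson correlation coefficient (CC) between the predicted attention map and the eye-tracking heatmap. A minimal, dependency-free sketch under that assumption:

```python
import math

def pearson_cc(pred_map, gt_map):
    """Pearson correlation between a predicted attention map and an
    eye-tracking heatmap, both given as 2D lists of floats."""
    p = [v for row in pred_map for v in row]
    g = [v for row in gt_map for v in row]
    mean_p = sum(p) / len(p)
    mean_g = sum(g) / len(g)
    cov = sum((a - mean_p) * (b - mean_g) for a, b in zip(p, g))
    std_p = math.sqrt(sum((a - mean_p) ** 2 for a in p))
    std_g = math.sqrt(sum((b - mean_g) ** 2 for b in g))
    return cov / (std_p * std_g)

# A prediction that is a linear rescaling of the ground truth scores ~1.0,
# i.e. the model's attention map matches where viewers actually looked.
print(pearson_cc([[0.0, 1.0], [2.0, 3.0]], [[0.0, 2.0], [4.0, 6.0]]))
```

A score near 1.0 means the predicted map rises and falls with the real gaze data; a score near 0 means the prediction carries no information about where people looked.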
Check out the updated Neurons today. Book a demo with us to see the tool in action.