Can convolutional neural networks replicate human perceptual logic? This project focuses on analyzing how image representations differ between the human visual perception system and brain-like convolutional neural networks. It also explores implementing a brain-like computational model that imitates human visual performance on illusory (pareidolia) faces, along with its statistical properties.
- First, we designed a behavioural task in which participants responded to images from 3 categories: Faces, Pareidolia Faces, and Object Matches (of the pareidolia images), with 86 images per category. The aim of this task was to compare participants' responses across these categories. You can see the details of the task and the relevant results here.
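The core measure of such a task is how often participants report seeing a face in each category. A minimal sketch of that per-category analysis, using made-up responses (the actual task data and response format in this project may differ):

```python
from collections import defaultdict

# Hypothetical behavioural records: (category, saw_face) pairs, where
# saw_face is 1 if the participant reported seeing a face in the image.
responses = [
    ("face", 1), ("face", 1), ("face", 1),
    ("pareidolia", 1), ("pareidolia", 0), ("pareidolia", 1),
    ("object", 0), ("object", 0), ("object", 1),
]

def face_rate_per_category(records):
    """Fraction of 'saw a face' answers within each image category."""
    counts = defaultdict(lambda: [0, 0])  # category -> [face answers, total]
    for category, saw_face in records:
        counts[category][0] += saw_face
        counts[category][1] += 1
    return {c: hits / total for c, (hits, total) in counts.items()}

rates = face_rate_per_category(responses)
print(rates)
```

With real data, these rates would be computed per participant and per image before comparing categories statistically.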
- Then, we built a deep neural network computational model intended to imitate human performance on the behavioural task. We took the most brain-like deep neural networks and retrained them, manipulating only the dataset while keeping the model settings unchanged (as published in the original papers) in all scenarios. Different statistical, visual, and control tests were then applied to the newly trained models. You can see and check the details of working with the customized deep neural networks here.
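One common way to quantify how well a retrained model imitates human behaviour (and a basis for the correlation results) is the Pearson correlation between per-image response rates of humans and the model. A minimal sketch with made-up numbers; the project's actual analysis and data will differ:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-image 'face' response rates: humans vs. a retrained model.
human = [0.95, 0.80, 0.10, 0.70, 0.20]
model = [0.90, 0.75, 0.15, 0.60, 0.30]
print(pearson_r(human, model))
```

A correlation near 1 would indicate that the model ranks images by "faceness" much like human observers do.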
The procedure involves many training runs, statistical tests, and careful reasoning behind every step. Plots, notebooks, training code, model results, and weights will be uploaded.
- Finally, remaining items (TODO):
  - add codes
  - add correlation results
  - add conclusion 1
  - add CORnet descriptions
  - add CORnet results and codes
  - add final conclusion
  - add mutual CORnet
If you have any feedback or questions, please reach out to me at mh.nikimaleki@gmail.com