ALEXANDRA STEINHILBER
Postdoctoral Researcher (Université Grenoble Alpes)
Language

Contact details
Building: Michel Dubois
Office: E 115
Teaching
2019-2022 Language Theory and Automata (30 hours each year)
Bachelor of Science and Technology Department (DLST)
2020-2021 Information Theory (18 hours)
National School of Computer Science and Applied Mathematics of Grenoble (ENSIMAG)
Curriculum vitae
Research Experience
2019-2023 PhD: Bayesian modeling of reading acquisition
supervised by Sylviane Valdois and Julien Diard, LPNC, Grenoble
2019 Master thesis: Speech, memory and movement coordination
supervised by Amélie Rochet-Capellan and Marion Dohen (6 months), Gipsa-Lab, Grenoble
2018 Master thesis: Rat spine modeling
supervised by Lionel Reveret (6 months), LJK, Grenoble.
2017 Speech polarity detection
supervised by Félix Burkhardt (2 months), Deutsche Telekom, Berlin, Germany.
2017 Inner speech fMRI data analysis
supervised by Hélène Lœvenbrück (1 month), LPNC, Grenoble.
2016 Speech analysis tool development
supervised by Pascal Perrier and Amélie Rochet-Capellan (6 weeks), Gipsa-Lab, Grenoble.
Education
2019-2023 PhD: Laboratory of Psychology and NeuroCognition (LPNC), Grenoble.
2018-2019 Master's Degree: School of Engineering in Physics, Electronics and Materials Science (Phelma), Grenoble
Major in Natural and Artificial Cognition.
- Bayesian Cognition: Models for Perception, Learning, and Action.
- Models of Memory and Learning in Natural and Artificial Systems.
- Cognitive Psychology.
- Language Sciences and Linguistics.
- Early Childhood Skills Development.
2015-2018 Engineering School: National School of Computer Science and Applied Mathematics of Grenoble (ENSIMAG), Grenoble
Major in Mathematical Modeling, Image, Simulation.
- Probabilistic Models for Learning.
- Surface Modeling.
- 3D Scientific Visualization.
- Introduction to Laboratory Research.
2013-2015 Preparatory Classes for Engineering Schools: Lycée Champollion, Grenoble.
- 2013-2014: Mathematics, Physics and Engineering Sciences (MPSI)
- 2014-2015: Mathematics, Physics, elite class (MP*)
2012-2013 Bachelor's Degree (Licence): Grenoble Alpes University (UGA), Applied Mathematics and Social Sciences (MASS), Grenoble
Publications
Steinhilber A. (2023). Bayesian modeling of reading acquisition (PhD thesis).
- Link to the manuscript: http://www.theses.fr/2023GRALS019
- Link to the BRAID-Acq model’s code: TODO
- Link to the simulations from my manuscript: TODO
Steinhilber A., Valdois S. & Diard J. (2022). Bayesian comparators: a probabilistic modeling tool for similarity evaluation between predicted and perceived patterns. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 44, No. 44).
- Link to the code: https://gricad-gitlab.univ-grenoble-alpes.fr/steinhia/bayesian-comparat…
Burkhardt F., Steinhilber A. & Weiss B. (2018). Ironic speech-Evaluating acoustic correlates by means of speech synthesis. In ESSV (pp. 342-350).
- Link to the Emofilt graphic interface used to generate the data: https://github.com/felixbur/Emofilt
Additional information
Thesis summary: Bayesian modeling of reading acquisition
In my thesis, I investigated reading acquisition through the self-teaching theory, the predominant framework in this field. This theory posits that learning to read relies on the incidental acquisition of new orthographic forms through successful phonological decoding, with context playing a role in enhancing partially correct phonological decodings. Existing computational models of self-teaching are grounded in the dual-route architecture, implementing two distinct procedures to read known words and novel words. These models assume direct lexicon access for known words and graphemic segmentation followed by grapheme-phoneme conversion (GPC) for novel words. Moreover, they necessarily rely on context and prior phonological knowledge to learn a novel word. However, behavioral studies challenge these models. Several studies question the relevance of the grapheme as the primary psycholinguistic unit of decoding and suggest that reading, even of novel words, is performed by analogy to lexical knowledge. Furthermore, behavioral studies show that incidental orthographic learning is possible without prior phonological knowledge or context.
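For illustration only, the minimal Python sketch below shows the dual-route logic described above: direct lexical access for known words, and graphemic segmentation followed by grapheme-phoneme conversion for novel words. The mini lexicon and GPC table are invented for this example and do not come from any published model.

```python
# Toy illustration of a dual-route decoding scheme (not code from any
# published model): known words are read by direct lexical access,
# novel words by graphemic segmentation + grapheme-phoneme conversion.

# Hypothetical mini lexicon: orthographic form -> phonological form
LEXICON = {"chat": "/ʃa/", "pain": "/pɛ̃/"}

# Hypothetical grapheme-phoneme conversion (GPC) table for French-like input
GPC = {"ch": "ʃ", "ai": "ɛ", "a": "a", "p": "p", "t": "t", "n": "n"}

def decode(word: str) -> str:
    """Return a phonological form via the lexical or sub-lexical route."""
    if word in LEXICON:                 # lexical route: direct access
        return LEXICON[word]
    # sub-lexical route: greedy graphemic segmentation, longest grapheme first
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):             # try two-letter graphemes before single letters
            grapheme = word[i:i + size]
            if grapheme in GPC:
                phonemes.append(GPC[grapheme])
                i += size
                break
        else:
            i += 1                      # skip letters with no known grapheme
    return "/" + "".join(phonemes) + "/"

print(decode("chat"))   # known word  -> lexical route, handles the silent "t"
print(decode("chai"))   # novel word  -> GPC route, /ʃɛ/
```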
In response to these challenges, my thesis introduces a new probabilistic computational model named BRAID-Acq, which departs from the dual-route approach by reading both known words and novel words using lexical knowledge. This model adopts a single-route architecture with visual attention mechanisms that can process information without rigid alignment to predefined psycholinguistic units. A phonological attentional submodel is coupled with the visual component, linking orthographic and phonological segments to simulate attention dynamics during processing.
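As a rough, purely illustrative sketch of the kind of attention-weighted probabilistic processing described above (not the BRAID-Acq implementation itself), the code below accumulates noisy evidence about letter identities, with a Gaussian visual-attention profile determining how much evidence each letter position receives per time step. The alphabet, noise level, and attention parameters are arbitrary choices made for this example.

```python
import numpy as np

# Toy sketch of attention-weighted evidence accumulation over letter
# identities. Purely illustrative: this is NOT the BRAID-Acq model,
# and all parameters below are arbitrary choices for the example.

ALPHABET = list("abcdefghijklmnopqrstuvwxyz")
rng = np.random.default_rng(0)

def attention_profile(n_positions: int, focus: float, width: float = 1.0) -> np.ndarray:
    """Gaussian visual-attention weights over letter positions (sum to 1)."""
    pos = np.arange(n_positions)
    weights = np.exp(-0.5 * ((pos - focus) / width) ** 2)
    return weights / weights.sum()

def accumulate(word: str, focus: float, steps: int = 50, noise: float = 0.5) -> np.ndarray:
    """Accumulate noisy evidence for each letter; return per-position posteriors."""
    attn = attention_profile(len(word), focus)
    log_post = np.zeros((len(word), len(ALPHABET)))   # uniform prior (log scale)
    for _ in range(steps):
        for i, letter in enumerate(word):
            # signal favoring the true letter, scaled by attention at position i,
            # plus sensory noise that does not depend on attention
            signal = np.zeros(len(ALPHABET))
            signal[ALPHABET.index(letter)] = attn[i]
            log_post[i] += signal + rng.normal(0.0, noise, len(ALPHABET))
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    return post / post.sum(axis=1, keepdims=True)

# Letters near the attention focus are usually identified correctly;
# peripheral positions receive little signal relative to noise and are
# often misread, so shifting the focus changes what is recognized.
posteriors = accumulate("chaise", focus=2.0)
print([ALPHABET[k] for k in posteriors.argmax(axis=1)])
```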
The BRAID-Acq model demonstrated its capability to simulate oculomotor behaviors during repeated exposure to novel words, showing sensitivity to visual attention capacities, word length, and lexicality. Moreover, it successfully reads novel words using flexible sub-lexical processing, without relying on graphemic segmentation. The model effectively captures various self-teaching scenarios, including those with or without context and prior phonological form. Although context and phonological knowledge enhance learning, they are not prerequisites. Context, for instance, helps disambiguate irregular words, and is especially useful in opaque orthographies or when orthographic knowledge is poor. In conclusion, the BRAID-Acq model successfully simulates self-teaching, which supports our theoretical hypotheses.