I am a PhD student at Stanford working at the intersection of AI, neuroscience, and physics, advised by Scott Linderman. I develop theory to explain how AI models learn and represent information, and apply it to questions about biological systems.
Contact: jsmekal at stanford dot edu
Research
Jakub Smékal, Jimmy T.H. Smith, Michael Kleinman, Dan Biderman, Scott Linderman (2024). Towards a theory of learning dynamics in deep state space models. Next Generation of Sequence Modeling Architectures Workshop at ICML 2024. Selected for spotlight presentation (top 10% of accepted papers).
Daniel Ari Friedman, Jakub Smékal (2023). Generative Research Teams: Active Inference Compositions For Research and Meta-Science. Preprint.
Jakub Smékal, Daniel Ari Friedman (2023). Generalized Notation Notation for Active Inference Models. Preprint.
Richard Blythman, Mohamed Arshath, Jakub Smékal, Hithesh Shaji, Salvatore Vivona, Tyrone Dunmore (2022). Libraries, Integrations and Hubs for Decentralized AI using IPFS. Preprint.
Richard Blythman, Mohamed Arshath, Salvatore Vivona, Jakub Smékal, Hithesh Shaji (2022). Decentralized Technologies for AI Hubs. Workshop on Decentralization and Machine Learning in Web3 at NeurIPS 2022.
Shady El Damaty, Jakub Smékal (2022). Simulations for Open Science Token Communities: Designing the Knowledge Commons. Workshop on Decentralization and Machine Learning in Web3 at NeurIPS 2022.
Jakub Smékal, Arhan Choudhury, Amit Kumar Singh, Shady El Damaty, Daniel Friedman (2022). Active Blockference: cadCAD with Active Inference for cognitive systems modeling. Third International Workshop on Active Inference at ECML PKDD 2022.
Daniel Friedman, Shaun Applegate-Swanson, Arhan Choudhury, RJ Cordes, Shady El Damaty, Avel Guénin—Carlut, V. Bleu Knight, Ivan Metelkin, Siddhant Shrivastava, Amit Kumar Singh, Jakub Smékal, Caleb Tuttle, Alexander Vyatkin (2022). An Active Inference Ontology for Decentralized Science: from Situated Sensemaking to the Epistemic Commons. Preprint.