Welcome to my Causality Home

My broader field of interest is causality [1]. This fundamental subject applies across application domains, from Archaeology to Zoology and from Cyber Defence to Paleontology, wherever we are confronted with questions of explainability, truth, belief and justification. Consequently, my goal is to contribute to the international research community's work towards explainable AI, for which DARPA recently launched a program [2].

ORCID: 0000-0002-0238-8657

Example: Today, Artificial Intelligence is experiencing a renaissance. The problem of “unknown unknowns”, however, is often underestimated: governments, industry and businesses cannot afford to deploy highly intelligent AI systems that make unexplainable decisions. Especially when such systems provide safety-critical decision support, the main question is: “Can we trust machine learning results?” [3]. Consequently, I am interested in the causality of learned representations, i.e. in the explanation of why a (machine) decision has been made. A strong motivation for this approach comes from rising legal and privacy requirements, e.g. the new European General Data Protection Regulation (GDPR), applicable from May 25, 2018, and standards such as ISO/IEC 27001. Although there is debate on whether and to what extent explainability is mandatory [4], explainable models will certainly help to foster trust in future AI [5].

[1] Judea Pearl (2009). Causality: Models, Reasoning, and Inference (2nd Edition). Cambridge: Cambridge University Press. http://bayes.cs.ucla.edu/BOOK-2K

[2] David Gunning (2016). Explainable Artificial Intelligence (XAI). Technical Report DARPA-BAA-16-53. Arlington, USA: Defense Advanced Research Projects Agency (DARPA).

[3] Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger (2018). Can we trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support. ERCIM News, 112(1), 42-43.

[4] Sandra Wachter, Brent Mittelstadt & Luciano Floridi (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99. doi:10.1093/idpl/ipx005

[5] Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea & Vasile Palade (2017). A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

Short Bio:

Born 1996 in Graz, Austria, I attended the English Kindergarten Morellenfeld from 1999-2002, the Volksschule Ursulinen from 2002-2005, and then the Unterstufe (2005-2008) and Oberstufenrealgymnasium (2008-2011) of the Ursulinen Graz. Along the way I skipped three classes and passed my Matura (Reifeprüfung) with distinction in June 2011. I completed my first Bachelor's degree (BA) in Archaeology at the Karl-Franzens University Graz in April 2014, with a thesis on principal correspondence analysis, combinatorial statistics and seriation methods. From April 2014 to April 2015 I took part in a one-year internship at IBM Vienna. I earned the first diploma of the Master's study of Law at the Karl-Franzens University Graz in December 2014 and completed my second Bachelor's degree (BA) in Classical History in November 2016, with a thesis on “Tiberius Gracchus – a Plebeian Tribune on his Way to the Monarchy”. Currently, I am finalizing my MSc in Earth Sciences with a thesis on an unknown vertebrate from the Geistthal Formation in the Kainacher Gosau. Since October 2017 I have been employed as a Junior Researcher at SBA Research in Vienna. I held my first talk at an international conference in Athens in 2010, followed by a talk in Sevilla in 2011, an invited lecture at RWTH Aachen University in September 2011, a talk in Warsaw in 2014, and a talk at the Banff International Research Station for Mathematical Discovery, Alberta, Canada, in 2015. In 2016 I visited the University of Edmonton, Canada, for six weeks.

Thank you for reading to the end.
