© 2025 by Applied AI Epistemics

Prof Prince Sarpong

Founder

Prof. Prince Sarpong is Associate Professor of Finance at the School of Financial Planning Law, University of the Free State, and a post‑disciplinary systems architect whose work spans corporate finance, financial psychology, AI epistemics, and institutional behaviour. He has developed several pioneering frameworks, including Brittle Financial Health, Antifragile Financial Therapy, Institutional Financial Therapy, and the Cognitive Growth Index (CGI), a generative AI model for mapping intellectual recursion and adaptive cognition.


His broader intellectual ecosystem, Sarpong Strategia, anchors a set of transdisciplinary ventures: the Centre for Financial Psychology and Systems (CFPS) in Bloemfontein, South Africa; Financial Therapy for Men Inc. and Fortis Strategia LLC in Sacramento, California; and Applied AI Epistemics Inc., also based in Sacramento, California, focused on epistemic research and reform in the age of artificial intelligence.


He currently serves on the advisory board of AI 2030, a Washington, DC–based global nonprofit for responsible AI; is a member of the Equity and Inclusion Committee of the Financial Therapy Association in the US; and is a member of the Financial Planning Institute of Southern Africa.


Prof Sarpong’s current work confronts the limits of traditional disciplinary boundaries by designing systems that metabolize pressure into structure across money, identity, capital, and corporate leadership.

At Applied AI Epistemics, we design systems that think about thinking. Our mission is to build reflective AI frameworks that don’t just process data but interrogate it, contextualize it, and return it with epistemic integrity. We operate at the intersection of cognition, computation, and consequence.

We are not developers. We are epistemic engineers. Our work spans the design of metrics like the Cognitive Growth Index (CGI) and frameworks like Reflective Intelligence, aimed at embedding feedback, friction, and coherence into human-AI loops. We believe the future of AI is not just autonomous but accountable.

From higher education to institutional governance, our systems challenge the passivity of automation. We create tools that enhance human judgment, confront cognitive drift, and expose symbolic dissonance. Every model we deploy is guided by a single principle: AI must elevate reflection, not replace it.

Our work is not product development; rather, we focus on the systemic redefinition of how knowledge is shaped, validated, and applied.
