Navigating the Risks of Artificial Intelligence Foundation Models in Healthcare: How Health Systems Can Respond


  • Warren Poquiz, TWU Student, MHA Program


Foundation Models, Healthcare AI, Healthcare Management, AI Governance


Foundation Models (FMs) have ushered in a new phase of the Artificial Intelligence (AI) era, characterized by significantly larger datasets and massive computational power. This analysis examines the applicability of FMs in the healthcare sector and how their advanced functionalities, such as in-context learning, can enhance overall organizational performance by increasing efficiency, accuracy, and predictability. However, the rapid advancement of AI models, combined with insufficient regulatory oversight, poses significant risks to patients and Healthcare Organizations (HCOs), including privacy breaches, adversarial attacks, model opacity, and algorithmic biases. To address these risks, this paper proposes a three-layer governance structure for HCOs based on the hourglass model for AI governance.



Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V. I., & Precise4Q Consortium. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC medical informatics and decision making, 20, 1-9.

Bak, M., Madai, V. I., Fritzsche, M. C., Mayrhofer, M. T., & McLennan, S. (2022). You can’t have AI both ways: balancing health data privacy and access fairly. Frontiers in Genetics, 13, 1490.

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

Broadhead, G. (2023). A Brief Guide to LLM Numbers: Parameter Count vs. Training Size. Medium.

Carlini, N., Tramer, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., et al. (2021). Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21) (pp. 2633-2650).

Center for Research on Foundation Models. (n.d.). Home. Stanford University.

Cheng, Y., Wang, F., Zhang, P., & Hu, J. (2016, June). Risk prediction with electronic health records: A deep learning approach. In Proceedings of the 2016 SIAM international conference on data mining (pp. 432-440). Society for Industrial and Applied Mathematics.

Chu, L. C., Anandkumar, A., Shin, H. C., & Fishman, E. K. (2020). The potential dangers of artificial intelligence for radiology and radiologists. Journal of the American College of Radiology, 17(10), 1309-1311.

Cohen, I. G., & Mello, M. M. (2019). Big data, big tech, and protecting patient privacy. JAMA, 322(12), 1141-1142.

Cox, J. (2023). Facebook's powerful large language model leaks online. Vice.

Dinerstein v. Google and The University of Chicago Medical Center, No. 1:19-cv-04311 (N.D. Ill. 2019).

Dong, Q., Li, L., Dai, D., Zheng, C., Wu, Z., Chang, B., ... & Sui, Z. (2022). A survey for in-context learning. arXiv preprint arXiv:2301.00234.

Elsayed, G. F., Goodfellow, I., & Sohl-Dickstein, J. (2018). Adversarial reprogramming of neural networks. arXiv preprint arXiv:1806.11146.

European Union. (2022). Artificial Intelligence Act. EUR-Lex.

European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

Fakoor, R., Ladhak, F., Nazi, A., & Huber, M. (2013, June). Using deep learning to enhance cancer diagnosis and classification. In Proceedings of the international conference on machine learning (Vol. 28, pp. 3937-3949). New York, NY, USA: ACM.

Floridi, L., & Cowls, J. (2022). A unified framework of five principles for AI in society. Machine learning and the city: Applications in architecture and urban design, 535-545.

Future of Life Institute. (2023). An open letter calling for a pause on all giant AI experiments. Future of Life Institute.

Futurescan. (2023). Consumer trends. Futurescan: Healthcare Trends and Implications.

Gaffney, T. (2023). Synthetic data generation: Building trust by ensuring privacy and quality. IBM.

Gymrek, M., McGuire, A. L., Golan, D., Halperin, E., & Erlich, Y. (2013). Identifying personal genomes by surname inference. Science, 339(6117), 321-324.

Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8(10), 281.

Kiser, S., & Maniam, B. (2021). Ransomware: Healthcare industry at risk. Journal of Business and Accounting, 14(1), 64-81.

Kleinman, Z., & Vallance, C. (2023). AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google. BBC News.

Liddell, K., Simon, D. A., & Lucassen, A. (2021). Patient data ownership: who owns your health?. Journal of Law and the Biosciences, 8(2), lsab023.

Liu, C. F., Chen, Z. C., Kuo, S. C., & Lin, T. C. (2022). Does AI explainability affect physicians’ intention to use AI?. International Journal of Medical Informatics, 168, 104884.

Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60.

Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Putting AI ethics into practice: the hourglass model of organizational AI Governance. arXiv preprint arXiv:2206.00335.

Miotto, R., Wang, F., Wang, S., Jiang, X., & Dudley, J. T. (2018). Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics, 19(6), 1236-1246.

Na, L., Yang, C., Lo, C. C., Zhao, F., Fukuoka, Y., & Aswani, A. (2018). Feasibility of reidentifying individuals in large national physical activity data sets from which protected health information has been removed with use of machine learning. JAMA network open, 1(8), e186040-e186040.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

OpenAI. (2018). AI and Compute. OpenAI.

Rani, V., Nabi, S. T., Kumar, M., Mittal, A., & Kumar, K. (2023). Self-supervised learning: A succinct review. Archives of Computational Methods in Engineering, 30(4), 2761-2775.

Rudin, C. (2018). Please stop explaining black box models for high stakes decisions. Stat, 1050, 26.

Song, C., & Shmatikov, V. (2019). Overlearning reveals sensitive attributes. arXiv preprint arXiv:1905.11742.

Standards for Privacy of Individually Identifiable Health Information, 45 C.F.R. Part 164. (2000).

The Alan Turing Institute. (2023). Exploring foundation models - Session 1 [Video]. YouTube.

Tuffaha, M. (2023). The Impact of Artificial Intelligence Bias on Human Resource Management Functions: Systematic Literature Review and Future Research Directions. European Journal of Business and Innovation Research, 11(4), 35-58.

Van Dijck, G. (2022). Predicting recidivism risk meets AI Act. European Journal on Criminal Policy and Research, 28(3), 407-423.

White House. (2022). Notice and explanation. Blueprint for an AI Bill of Rights.

World Economic Forum. (2016). WEF Values and the Fourth Industrial Revolution White Paper.



Published 2024-05-17; updated 2024-05-28


How to Cite

Navigating the Risks of Artificial Intelligence Foundation Models in Healthcare: How Health Systems Can Respond. (2024). TWU Student Journal, 3(1). (Original work published 2024)