Critical Phenomenology of Prompting in Artificial Intelligence

Main Article Content

Jorge González Arocha

Abstract

This paper analyzes the philosophy of prompting as a tool within the context of the rise of Artificial Intelligence (AI), particularly large language models (LLMs). The topic is justified by the need to understand the prompt as a mediating space between human intentionality, language, and the sociopolitical structures that shape interactions with these technologies. The central objective is to examine how prompting reflects the ethical, ontological, and epistemological tensions that arise in the construction of meaning within AI systems. Methodologically, the study adopts a critical-phenomenological approach, combining first-person (user) experience with practical experimentation with prompts in different scenarios. The results demonstrate that the prompt is not merely a technical instruction but a discursive practice in which human decisions, such as the configuration of “parameters” (e.g., temperature and Top P), directly influence the outputs generated by AI systems. While these decisions appear technical, they carry significant ethical and epistemological implications that demand critical examination. The study concludes that it is essential to adopt an interdisciplinary approach that integrates technical development with philosophical reflection, fostering an ethical, conscious, and responsible use of AI while recognizing the central role of humans in interactions with these emerging technologies.
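The decoding parameters the abstract names, temperature and Top P (nucleus sampling), can be made concrete with a small sketch of how they reshape a model's next-token distribution. This is a minimal illustration over a hypothetical four-token vocabulary, not an actual LLM API; the logit values and function names are assumptions made for the example.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature before softmax.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied outputs)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Nucleus sampling: keep the smallest set of tokens whose
    cumulative probability reaches top_p, zero out the rest,
    and renormalize the kept mass to sum to 1."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return [probs[i] / mass if i in kept else 0.0 for i in range(len(probs))]

# Hypothetical logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]

cold = softmax_with_temperature(logits, temperature=0.2)  # near-deterministic
hot = softmax_with_temperature(logits, temperature=2.0)   # closer to uniform
nucleus = top_p_filter(softmax_with_temperature(logits), top_p=0.9)
```

The point of the sketch is that the "same" prompt yields different distributions over possible continuations depending on these settings: at low temperature the top token dominates, while Top P excludes the tail of unlikely tokens entirely, decisions a user makes, knowingly or not, each time a prompt is issued.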

Article Details

Section
Monographic articles
Author Biography

Jorge González Arocha, Dialektika: Global Forum for Critical Thinking, Humanities, and Social Sciences

General Editor of Revista Publicando, Ecuador. Doctor of Philosophical Sciences (2017). He has been a professor in the Department of Philosophy at the Universidad de La Habana and Academic Director of the Centro Cultural Calazans (Cuba) in partnership with the Universidad Cristóbal Colón (Mexico). He is the author of the book «Una pasión inútil. Muerte y libertad en la Obra Filosófica de Jean-Paul Sartre» and co-author of anthologies and texts for teaching philosophy and the social sciences. Essayist and columnist in popular media and the digital press. At the regional level, he has received First Prize in the social sciences essay category from the academic journal TEMAS, and First Prize in the international essay competition "Enséñame a pensar", organized by UNESCO, Revista Utopía, and institutions from Mexico and Colombia, in 2013, among other awards and distinctions. He has served as an academic reviewer for Daimon, Tópicos, Areas, and Oxímora, among others. He is currently also Director of the philosophical platform Dialektika, Visiting Professor of Cultural Studies at Singidunum University (Serbia), and Director of the non-governmental organization Dialektika: Global Forum for Critical Thinking, Humanities, and Social Sciences.

References

6, P. (2001). Ethics, Regulation and the New Artificial Intelligence, Part I: Accountability and Power. Information, Communication & Society, 4(2), 199-229. https://doi.org/10.1080/713768525

Bory, P., Natale, S., & Katzenbach, C. (2024). Strong and weak AI narratives: An analytical framework. AI & SOCIETY. https://doi.org/10.1007/s00146-024-02087-8

Bringsjord, S., & Govindarajulu, N. S. (2024). Artificial Intelligence. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2024). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2024/entries/artificial-intelligence/

Bucknall, B. S., & Dori-Hacohen, S. (2022). Current and Near-Term AI as a Potential Existential Risk Factor. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 119-129. https://doi.org/10.1145/3514094.3534146

Chen, B., Wu, Z., & Zhao, R. (2023). From fiction to fact: The growing role of generative AI in business and finance. Journal of Chinese Economic and Business Studies, 21(4), 471-496. https://doi.org/10.1080/14765284.2023.2245279

Chiarella, S. G., Torromino, G., Gagliardi, D. M., Rossi, D., Babiloni, F., & Cartocci, G. (2022). Investigating the negative bias towards artificial intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Computers in Human Behavior, 137, 107406. https://doi.org/10.1016/j.chb.2022.107406

Deleuze, G. (1994). Difference and repetition. Columbia University Press.

Deshpande, A., Rajpurohit, T., Narasimhan, K., & Kalyan, A. (2023). Anthropomorphization of AI: Opportunities and Risks. https://doi.org/10.48550/ARXIV.2305.14784

Döderlein, J.-B., Acher, M., Khelladi, D. E., & Combemale, B. (2023). Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic? (No. arXiv:2210.14699). arXiv. https://doi.org/10.48550/arXiv.2210.14699

Floridi, L. (2023). The Ethics of Artificial Intelligence. Oxford University Press (OUP). https://doi.org/10.1093/oso/9780198883098.001.0001

Foucault, M. (1994a). Dits et Écrits 1970-1975 (Vol. 2). Gallimard.

Foucault, M. (1994b). Dits et Écrits 1976-1979 (Vol. 3). Gallimard.

Goertzel, B. (2015). Superintelligence: Fears, Promises and Potentials: Reflections on Bostrom’s Superintelligence, Yudkowsky’s From AI to Zombies, and Weaver and Veitas’s “Open-Ended Intelligence”. Journal of Ethics and Emerging Technologies, 25(2), Article 2. https://doi.org/10.55613/jeet.v25i2.48

González Arocha, J. (2024). Artificial Intelligence and the New Dynamics of Social Death: A Critical Phenomenological Inquiry. SSRN. https://doi.org/10.2139/ssrn.4955995

Gu, J., Han, Z., Chen, S., Beirami, A., He, B., Zhang, G., Liao, R., Qin, Y., Tresp, V., & Torr, P. (2023). A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models (No. arXiv:2307.12980). arXiv. https://doi.org/10.48550/arXiv.2307.12980

Hartmann, J., Schwenzow, J., & Witte, M. (2023). The political ideology of conversational AI: Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation (No. arXiv:2301.01768). arXiv. https://doi.org/10.48550/arXiv.2301.01768

Heidegger, M. (1996). Being and time (J. Stambaugh, Trans.). State University of New York Press.

Homolak, J. (2023). Opportunities and risks of ChatGPT in medicine, science, and academic publishing: A modern Promethean dilemma. Croatian Medical Journal, 64(1), 1-3. https://doi.org/10.3325/cmj.2023.64.1

Kumar, S., & Choudhury, S. (2023). Humans, super humans, and super humanoids: Debating Stephen Hawking’s doomsday AI forecast. AI and Ethics, 3(3), 975-984. https://doi.org/10.1007/s43681-022-00213-0

Lee, J. H., Shin, J., & Realff, M. J. (2018). Machine learning: Overview of the recent progresses and implications for the process systems engineering field. Computers & Chemical Engineering, 114, 111-121. https://doi.org/10.1016/j.compchemeng.2017.10.008

Liu, H. (2014). Philosophical Reflections on Data. Procedia Computer Science, 30, 60-65. https://doi.org/10.1016/j.procs.2014.05.381

Merleau-Ponty, M. (1993). Fenomenología de la percepción. Editorial Planeta Argentina.

Meskó, B. (2023). Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial. Journal of Medical Internet Research, 25, e50638. https://doi.org/10.2196/50638

Morreale, F., Bahmanteymouri, E., Burmester, B., Chen, A., & Thorp, M. (2024). The unwitting labourer: Extracting humanness in AI training. AI & SOCIETY, 39(5), 2389-2399. https://doi.org/10.1007/s00146-023-01692-3

Harari, Y. N. (2023). Yuval Noah Harari argues that AI has hacked the operating system of human civilisation. The Economist. https://bit.ly/3DippkO

Peeperkorn, M., Kouwenhoven, T., Brown, D., & Jordanous, A. (2024). Is Temperature the Creativity Parameter of Large Language Models? (No. arXiv:2405.00492). arXiv. https://doi.org/10.48550/arXiv.2405.00492

Placani, A. (2024). Anthropomorphism in AI: Hype and fallacy. AI and Ethics, 4(3), 691-698. https://doi.org/10.1007/s43681-024-00419-4

Prompting Techniques – Nextra. (2024, September 9). https://bit.ly/4gfQaFc

Ricœur, P. (2010). Memory, history, forgetting (K. Blamey & D. Pellauer, Trans.). University of Chicago Press.

Roser, M. (2024). The brief history of artificial intelligence: The world has changed fast — what might be next? Our World in Data. https://bit.ly/4iE1sVq

Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

Sartre, J.-P. (1984). Being and Nothingness. Washington Square Press.

Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417-457.

Silva, A. de O., & Janes, D. dos S. (2022). The emergence of ChatGPT and its implications for education and academic research in the 21st century. Review of Artificial Intelligence in Education, 3, e6-e6. https://doi.org/10.37497/rev.artif.intell.educ.v3i00.6

Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780197759066.001.0001

Wang, S., Cooper, N., & Eby, M. (2024). From human-centered to social-centered artificial intelligence: Assessing ChatGPT’s impact through disruptive events. Big Data & Society, 11(4), 20539517241290220. https://doi.org/10.1177/20539517241290220

Weiss, G., Murphy, A. V., & Salamon, G. (Eds.). (2019). 50 Concepts for a Critical Phenomenology. Northwestern University Press. https://doi.org/10.2307/j.ctvmx3j22

Zarifhonarvar, A. (2023). Economics of ChatGPT: A Labor Market View on the Occupational Impact of Artificial Intelligence [Working Paper]. Kiel, Hamburg: ZBW - Leibniz Information Centre for Economics. https://bit.ly/4gBn4Qa