<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ncorwiki.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Phismith</id>
	<title>NCOR Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://ncorwiki.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Phismith"/>
	<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php/Special:Contributions/Phismith"/>
	<updated>2026-05-07T09:46:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75591</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75591"/>
		<updated>2026-05-07T05:41:43Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.zoom.us/j/92039377770?pwd=pL6MALdTb5mB1QnqlwYPwAoNaYeEp4.1 Zoom link]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/USI-2026-Lecture-3-Video Video]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/USI-2026-Lecture-3-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
Provides an introduction to the theory of complex and dynamic systems. Examples include: population growth&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) 1. Capabilities and Skills; 2. AI and History and Geography==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday (May 12, 9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75590</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75590"/>
		<updated>2026-05-07T05:41:03Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.zoom.us/j/92039377770?pwd=pL6MALdTb5mB1QnqlwYPwAoNaYeEp4.1 Zoom link]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/USI-2026-Lecture-3-Video Video]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/USI-2026-Lecture-3-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
Provides an introduction to the theory of complex and dynamic systems. Examples include: population growth&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday (May 12, 9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75589</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75589"/>
		<updated>2026-05-07T05:39:25Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.zoom.us/j/92039377770?pwd=pL6MALdTb5mB1QnqlwYPwAoNaYeEp4.1 Zoom link]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems that arise when attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/USI-2026-Lecture-3-Video Video]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/USI-2026-Lecture-3-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Tacit knowledge and science&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube, 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;?&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75588</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75588"/>
		<updated>2026-05-06T14:14:59Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.zoom.us/j/92039377770?pwd=pL6MALdTb5mB1QnqlwYPwAoNaYeEp4.1 Zoom link]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems that arise when attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Tacit knowledge and science&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube, 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;?&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75587</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75587"/>
		<updated>2026-05-06T14:11:56Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
Zoom link: https://buffalo.zoom.us/launch/chat?src=direct_chat_link&amp;amp;email=phismith@buffalo.edu&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
:Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Can a machine be conscious? The machine will&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, &#039;&#039;Fortune&#039;&#039;, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification.&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75586</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75586"/>
		<updated>2026-05-06T06:36:33Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
:Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Can a machine be conscious? The machine will&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, &#039;&#039;Fortune&#039;&#039;, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification.&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75585</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75585"/>
		<updated>2026-05-06T06:35:54Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/USI-2026-Lecture-2-Video Video]&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
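The &#039;How regression works&#039; item above is the simplest case of stochastic AI: estimating parameters from data by minimising error. A minimal sketch in plain Python (illustrative only; the toy data and variable names are not from the course materials), fitting a line y = ax + b by least squares:&lt;br /&gt;

```python
# Toy data: y depends exactly linearly on x (y = 2x + 1), so the fit is exact.
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]

# Closed-form least-squares estimates:
#   slope a = cov(x, y) / var(x),  intercept b = mean(y) - a * mean(x)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
var_x = sum((x - mean_x) ** 2 for x in xs) / n
a = cov_xy / var_x
b = mean_y - a * mean_x

print(a, b)  # 2.0 1.0
```

Advanced stochastic AI (neural networks, deep learning, LLMs) generalises this same idea: far more parameters, non-linear functions, and iterative optimisation in place of a closed-form solution.&lt;br /&gt;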
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday (May 12, 9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75584</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75584"/>
		<updated>2026-05-05T15:26:33Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday, May 12, 9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday (May 12, 9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75583</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75583"/>
		<updated>2026-05-05T10:19:13Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old-fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday, May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75582</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75582"/>
		<updated>2026-05-05T10:18:46Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Wednesday May 6 (09:30-12:15) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old-fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday, May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75581</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75581"/>
		<updated>2026-05-05T10:18:09Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday May 5 (09:30-12:15) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specializing in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity, since an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” (Fortune, June 15, 2023). See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
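To make the &#039;How regression works&#039; item above concrete, here is a minimal sketch of ordinary least squares in Python, assuming NumPy is available (the toy data and variable names are illustrative, not course material):&lt;br /&gt;

```python
import numpy as np

# Toy data generated by y = 2x + 1 (illustrative values only).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Ordinary least squares: fit y ~ w*x + b by minimizing squared error.
# The design matrix gets a column of ones for the intercept term.
A = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

print(w, b)  # recovers slope 2.0 and intercept 1.0
```

Even this simplest case of stochastic AI shows the pattern shared by the more advanced approaches listed above: parameters are fitted to data by minimizing an error measure.&lt;br /&gt;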
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification.&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75580</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75580"/>
		<updated>2026-05-05T10:17:50Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations by students of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specializing in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity, since an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” (Fortune, June 15, 2023). See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
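To make the &#039;How regression works&#039; item above concrete, here is a minimal sketch of ordinary least squares in Python, assuming NumPy is available (the toy data and variable names are illustrative, not course material):&lt;br /&gt;

```python
import numpy as np

# Toy data generated by y = 2x + 1 (illustrative values only).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Ordinary least squares: fit y ~ w*x + b by minimizing squared error.
# The design matrix gets a column of ones for the intercept term.
A = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

print(w, b)  # recovers slope 2.0 and intercept 1.0
```

Even this simplest case of stochastic AI shows the pattern shared by the more advanced approaches listed above: parameters are fitted to data by minimizing an error measure.&lt;br /&gt;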
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification.&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75579</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75579"/>
		<updated>2026-05-05T07:35:35Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations by students of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specializing in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
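The &#039;How regression works&#039; topic above can be illustrated with the simplest case of a stochastic, data-driven model: fitting a line by ordinary least squares. The sketch below is purely illustrative and not drawn from the course materials.

```python
# Minimal ordinary-least-squares fit of y = a*x + b, the simplest
# example of "basic stochastic AI": parameters are inferred from
# sampled data rather than written down as explicit rules.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Noise-free example: points lying on y = 2x + 1
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

The point of the example is methodological: even this two-parameter model is "learned" from data in exactly the sense that the far larger neural-network models discussed later are.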
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75578</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75578"/>
		<updated>2026-05-05T06:02:40Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne, specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75577</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75577"/>
		<updated>2026-05-05T05:55:52Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne, specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1-Video.mp4 Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75576</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75576"/>
		<updated>2026-05-05T05:54:20Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture-1.mp4 Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75575</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75575"/>
		<updated>2026-05-05T05:53:21Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1.mp4 Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75574</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75574"/>
		<updated>2026-05-05T05:52:33Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1 Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/USI-2026-Lecture1-Slides Slides]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75573</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75573"/>
		<updated>2026-05-04T07:43:21Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next 2 weeks we will show why AI cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
Part 4: An introduction to ontology, focusing on capabilities, skills, talents, and know-how. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Animal, Human and Machine Intelligence: A Territorial Perspective==&lt;br /&gt;
&lt;br /&gt;
Part 1: On territory &lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Part 2:&lt;br /&gt;
&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75572</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75572"/>
		<updated>2026-05-04T07:36:37Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here]. &lt;br /&gt;
&lt;br /&gt;
:In the course of the next two weeks we will show why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
:Note the difference between &#039;complex&#039; and &#039;complicated&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
Part 4: An introduction to ontology, focusing on capabilities, skills, talents, and know-how. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75571</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75571"/>
		<updated>2026-05-04T07:34:01Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: Booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: An introduction to ontology, focusing on capabilities, skills, talents, and know-how. &lt;br /&gt;
&lt;br /&gt;
Part 4: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 5: Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Tacit knowledge and science&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
:Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;?&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75570</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75570"/>
		<updated>2026-05-04T07:33:27Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so, will AI ever reach the point where it can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with student presentations of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course&lt;br /&gt;
&lt;br /&gt;
Part 2: The booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: An introduction to ontology, focusing on capabilities, skills, talents, and know-how. &lt;br /&gt;
&lt;br /&gt;
Part 4: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 5: Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Tacit knowledge and science&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
:Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;?&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75569</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75569"/>
		<updated>2026-05-03T14:32:46Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so, will AI ever reach the point where it can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with student presentations of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Part 2: The booming interest in ontology unleashed by the idea of &#039;neurosymbolic AI&#039;&lt;br /&gt;
&lt;br /&gt;
Part 3: An introduction to ontology, focusing on capabilities, skills, talents, and know-how. &lt;br /&gt;
&lt;br /&gt;
Part 4: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 5: Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75568</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75568"/>
		<updated>2026-05-01T15:12:35Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 3: Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
Part 4: Skills, talents, know-how and other human capabilities&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:[Capabilities: An ontology]&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&amp;lt;!-- This text is hidden --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75567</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75567"/>
		<updated>2026-05-01T12:50:59Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 3: Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
Part 4: Skills, talents, know-how and other human capabilities&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
:-&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
-:&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75566</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75566"/>
		<updated>2026-05-01T12:41:40Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 3: Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
Part 4: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75565</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75565"/>
		<updated>2026-05-01T12:40:39Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Part 2: Outline of the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 3: Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings&lt;br /&gt;
&lt;br /&gt;
Part 4: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
:Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75564</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75564"/>
		<updated>2026-05-01T12:34:59Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Part 2: Outlines the theory of complex systems documented in our book: &#039;&#039;[https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032941405/ Why machines will never rule the world]&#039;&#039;. Summary [https://philpapers.org/rec/LANWMW-3 here].&lt;br /&gt;
&lt;br /&gt;
Part 3: Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Part 4: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down and say &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
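The &#039;how regression works&#039; item above can be sketched in a few lines: an ordinary least-squares fit of y = a*x + b computed from the closed-form covariance/variance formulas (the data points here are made up purely for illustration).&lt;br /&gt;

```python
# Ordinary least squares for y = a*x + b, the simplest case of
# 'basic stochastic AI': choose parameters minimizing squared error.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]  # hypothetical inputs
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # hypothetical noisy outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); intercept from the means
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
a = sxy / sxx
b = mean_y - a * mean_x
print(f"y is approximately {a:.2f}*x + {b:.2f}")
```

Everything beyond this one-dimensional case (multiple regression, logistic regression, and ultimately neural networks) elaborates the same idea: fit free parameters to data by minimizing a loss.&lt;br /&gt;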
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75563</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75563"/>
		<updated>2026-04-30T13:45:37Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems= */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
:Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
:Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down and say &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage.&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Historical development of the types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75562</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75562"/>
		<updated>2026-04-30T13:44:26Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely [https://scholar.google.com/citations?user=icGNWj4AAAAJ cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
:Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
:Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” (Fortune, June 15, 2023). See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (13:30 - 16:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday, May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube, 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75561</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75561"/>
		<updated>2026-04-30T12:53:47Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
:Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
:How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
:Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
:Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
:Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
:Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
:Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
:1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
:2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
:3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” (Fortune, June 15, 2023). See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized Slides&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday, May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube, 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75560</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75560"/>
		<updated>2026-04-27T22:36:47Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; (guest editor: Janna Hastings, University of Zurich),&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI:&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification.&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception (Slides)&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Education&amp;diff=75559</id>
		<title>Education</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Education&amp;diff=75559"/>
		<updated>2026-04-27T19:12:14Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;2026&#039;&#039;&#039;&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Philosophy_and_Artificial_Intelligence_2026 Philosophy and Artificial Intelligence], Università della Svizzera italiana, Lugano, Switzerland, Spring 2026&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/BFO-Intro Introduction to Basic Formal Ontology], Department of Philosophy, University at Buffalo, Spring 2026&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Introduction_to_Philosophy_from_an_Ontological_Perspective Introduction to Philosophy from an Ontological Perspective], University at Buffalo, Spring 2026&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Ontology_and_AI Ontology and Artificial Intelligence], University at Buffalo, Spring 2026&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Ontology_of_Economics Ontology of Economics], University at Buffalo, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2025&#039;&#039;&#039;&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Philosophy_and_Artificial_Intelligence_2025 Philosophy and Artificial Intelligence], Università della Svizzera italiana, Lugano, Switzerland, Spring 2025&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Introduction_to_Philosophy_from_an_Ontological_Perspective Introduction to Philosophy from an Ontological Perspective], Department of Philosophy, University at Buffalo, Fall 2025&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Ontology_and_Artificial_Intelligence_-_Fall_2025 Ontology and Artificial Intelligence], Department of Philosophy, University at Buffalo, Fall 2025&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Ontology_of_Economics Ontology of Economics], Department of Philosophy, University at Buffalo, Fall 2025&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2024&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Philosophy_and_Artificial_Intelligence_2024 Philosophy and Artificial Intelligence], Università della Svizzera italiana, Lugano, Switzerland, Spring 2024&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Ontology_of_Economics_2024 Ontology of Economics 2024], Department of Philosophy, University at Buffalo, Fall 2024&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Introduction_to_Philosophy_from_an_Ontological_Perspective Introduction to Philosophy from an Ontological Perspective], Department of Philosophy, University at Buffalo, Fall 2024&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2023&#039;&#039;&#039;&lt;br /&gt;
*[http://ncorwiki.buffalo.edu/index.php/Philosophy_and_Artificial_Intelligence_2023 Philosophy and Artificial Intelligence], Università della Svizzera italiana, Lugano, Switzerland, Spring 2023&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Nature_and_Culture Nature and Culture], Department of Philosophy, University at Buffalo, Fall 2023&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2022&#039;&#039;&#039;&lt;br /&gt;
*[http://ncorwiki.buffalo.edu/index.php/Philosophy_and_Artificial_Intelligence_2022 Philosophy and Artificial Intelligence], Università della Svizzera italiana, Lugano, Switzerland, Spring 2022&lt;br /&gt;
&lt;br /&gt;
*[http://ncorwiki.buffalo.edu/index.php/Applied_Ontology,_Spring_2022 Applied Ontology 2022], Department of Philosophy, University at Buffalo, Spring 2022&lt;br /&gt;
&lt;br /&gt;
*[http://ncorwiki.buffalo.edu/index.php/Philosophy_of_Science Philosophy of Science], University at Buffalo, Department of Philosophy, Fall Semester&lt;br /&gt;
&lt;br /&gt;
==Archived Web Content==&lt;br /&gt;
[[Archived Web Content]]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75558</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75558"/>
		<updated>2026-04-20T14:25:53Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems= */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; (guest editor: Janna Hastings, University of Zurich),&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI:&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
Surveys the range of existing AI systems.&lt;br /&gt;
&lt;br /&gt;
Establishes the limits of AI and addresses problems such as hallucinations and enshittification.&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception (Slides)&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75557</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75557"/>
		<updated>2026-04-20T14:24:52Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) AI and the Theory of Complex and Dynamic Systems==&lt;br /&gt;
&lt;br /&gt;
==Wednesday May 6 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75556</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75556"/>
		<updated>2026-04-20T14:23:41Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75555</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75555"/>
		<updated>2026-04-20T13:57:58Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can thus gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75554</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75554"/>
		<updated>2026-04-17T19:08:32Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday May 15 (09:30-12:15) On AI, Jobs, and Economics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can thus gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics, followed by Student Presentations ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75553</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75553"/>
		<updated>2026-04-17T19:08:16Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings -- as when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) On AI, Jobs, and Economics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75552</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75552"/>
		<updated>2026-04-17T19:07:18Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday, May 12, 9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings -- as when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
From Speech Acts to Document Acts: An Ontology of Institutions [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
&lt;br /&gt;
Massively Planned Social Agency [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75551</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75551"/>
		<updated>2026-04-17T19:06:44Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge, 2025)&lt;br /&gt;
&lt;br /&gt;
and from the companion volume&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039;, guest editor Janna Hastings, University of Zurich, which appeared as a special issue of the open access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
&lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson, Mainstream Science on Intelligence, &#039;&#039;Intelligence&#039;&#039; 24 (1997), pp. 13–23&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs knowing that&lt;br /&gt;
:Tacit knowledge and science&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
:Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;From Speech Acts to Document Acts: An Ontology of Institutions&#039;&#039;&#039; [https://buffalo.box.com/s/85h3u1nvjtbnm5krs0tr7rfkys2p48qc Slides]&lt;br /&gt;
:16:30 &#039;&#039;&#039;Massively Planned Social Agency&#039;&#039;&#039; [https://buffalo.box.com/s/v6huywh7gs09jsosfxo7ox62ap7wiccd Slides]&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75550</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75550"/>
		<updated>2026-04-17T19:04:12Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday, May 12, 9:30 - 12:15) AI and Creativity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge, 2025)&lt;br /&gt;
&lt;br /&gt;
and from the companion volume&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039;, guest editor Janna Hastings, University of Zurich, which appeared as a special issue of the open access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
&lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson, Mainstream Science on Intelligence, &#039;&#039;Intelligence&#039;&#039; 24 (1997), pp. 13–23&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs knowing that&lt;br /&gt;
:Tacit knowledge and science&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
:Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (9:30 - 12:15) Planning, Creativity, and Entrepreneurial Perception==&lt;br /&gt;
&lt;br /&gt;
Money Slides&lt;br /&gt;
&lt;br /&gt;
The Ontology of Document Acts Slides&lt;br /&gt;
&lt;br /&gt;
AI Creativity and Entrepreneurial Perception Slides&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75549</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75549"/>
		<updated>2026-04-17T19:01:11Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75548</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75548"/>
		<updated>2026-04-17T18:59:55Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) The Limits of AI and the Limits of Physics==&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75547</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75547"/>
		<updated>2026-04-17T18:58:28Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Tuesday, May 12, 9:30 - 12:15) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is at least as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
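One of the topics above, &#039;how regression works&#039;, can be illustrated with a short sketch. This is not part of the course materials; the toy data and the function name are invented for illustration, and it shows only the simplest instance of stochastic AI: fitting a line by ordinary least squares.

```python
# Minimal illustration of "basic stochastic AI": ordinary least-squares
# linear regression on invented toy data (not from the course materials).
import numpy as np

def fit_line(x, y):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    X = np.column_stack([x, np.ones_like(x)])   # design matrix with a bias column
    (slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)
    return slope, intercept

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                    # noiseless data generated from a known line
slope, intercept = fit_line(x, y)    # recovers slope ~2.0, intercept ~1.0
```

Advanced stochastic AI (neural networks, deep learning) generalizes the same idea: parameters are adjusted so as to minimize a loss over data.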
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, then jump up and down and say &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage.&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;?&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75546</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75546"/>
		<updated>2026-04-17T18:58:03Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; (guest editor: Janna Hastings, University of Zurich),&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, then jump up and down and say &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage.&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) Skills, Capabilities and Tacit Knowledge==&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Tacit knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;?&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75545</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75545"/>
		<updated>2026-04-17T18:39:48Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) The Limits of AI and the Limits of Physics&lt;br /&gt;
:Friday, May 15 (09:30-12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; (guest editor: Janna Hastings, University of Zurich),&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is on the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
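The Part 1 outline above lists &amp;quot;How regression works&amp;quot; as the entry point to basic stochastic AI. As a minimal sketch (an illustration added here, not course material), ordinary least-squares linear regression estimates a model&#039;s parameters from data rather than hand-coding them; the sample points below are invented:&lt;br /&gt;

```python
# Ordinary least-squares linear regression, fit via the closed-form
# normal equations: a toy example of a model whose parameters are
# estimated from data rather than specified by the programmer.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Unnormalized covariance of x and y, and variance of x.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented sample points lying roughly on y = 2x.
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

The fitted parameters (here a slope of about 1.94 and an intercept of about 0.15) then predict new values as slope * x + intercept; more elaborate stochastic methods generalize this pattern to many parameters.&lt;br /&gt;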
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==	Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy. &lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube, 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75544</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75544"/>
		<updated>2026-04-08T10:44:11Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called General Artificial Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who for the first time can have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==	Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy. &lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube, 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Interviews_and_podcasts_on_%27%27Why_Machines_Will_Never_Rule_the_World%27%27&amp;diff=75543</id>
		<title>Interviews and podcasts on &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Interviews_and_podcasts_on_%27%27Why_Machines_Will_Never_Rule_the_World%27%27&amp;diff=75543"/>
		<updated>2026-03-24T12:14:00Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Interviews and Podcasts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://www.futurity.org/artificial-intelligence-ai-2789642-2/ AI is cool, but will never reach human capability], Futurity podcast (August 12, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://blog.apaonline.org/2022/09/23/why-machines-will-never-rule-the-world-artificial-intelligence-without-fear/ Blog of the American Philosophical Association: Interview with Charlie Taben] [https://www.youtube.com/watch?v=Zle7pJIIfFc Youtube], (August 30, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=T4HJi7dQzvg Systems Conversation], with Dr Oliver Gao, Director, Systems Engineering, Cornell University, Ithaca, NY (September 2, 2022) &lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=f7I6mtFkrOM &#039;&#039;&#039;AI is here, but will it rule us?&#039;&#039;&#039;], Wirkman Comments podcast with David Ramsey Steele, September 27, 2022.  &lt;br /&gt;
&lt;br /&gt;
[https://youtu.be/XeQHey8WFjY &#039;&#039;&#039;Lecture to Philosophy and AI Research Group&#039;&#039;&#039;], University of Zurich, 15 October, 2022&lt;br /&gt;
&lt;br /&gt;
[https://www.nas.org/blogs/media/video-will-machines-rule-the-world? Will Machines Rule the World?] NAS Podcast with Scott Turner,  [https://www.youtube.com/watch?v=3QtrVQ6hmdo Youtube] (October 4, 2022) &lt;br /&gt;
&lt;br /&gt;
[https://www.digitaltrends.com/computing/why-ai-will-never-rule-the-world/ Why AI will never rule the world], Interview by Luke Dormehl on Digital Trends [https://philpapers.org/archive/DORWAW-2.pdf Philpapers] (August 8, 2022) &lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=IMnWAuoucjo Walid Saba on Why Machines Will Never Rule the World], Machine Learning Street Talk, December 15, 2022 (review starts halfway through)&lt;br /&gt;
&lt;br /&gt;
[https://www.cspicenter.com/p/why-the-singularity-might-never-come Why Machines Will Never Rule the World – On AI and Faith], Conversation between Jobst Landgrebe, Barry Smith and Rev. Jamie Franklin, Irreverend,  [https://youtu.be/43mM35X7x-c Youtube] (November 30, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://www.cspicenter.com/p/why-the-singularity-might-never-come Why the Singularity Might Never Come]. Interview with Richard Hanania, Center for the Study of Partisanship and Ideology (January 30, 2023)[https://www.youtube.com/watch?v=wwVQQHoORg4 Youtube]&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=bJ8tcposTek&amp;amp;list=PL-PSlrVaK5Iwe5CK06KCp1ZiCHNJcmJtN&amp;amp;index=1&amp;amp;pp=iAQB &amp;quot;Allmacht Künstliche Intelligenz?&amp;quot;], Politicum, TV Berlin, February 2023&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=ze3J3yxVR5w&amp;amp;list=PL-PSlrVaK5Iwe5CK06KCp1ZiCHNJcmJtN&amp;amp;index=2&amp;amp;pp=iAQB &amp;quot;Bestimmte Ingenieure haben keine Ahnung in Mathematik&amp;quot;], Politicum, TV Berlin, February 2023&lt;br /&gt;
&lt;br /&gt;
[https://www.oval.media/narrative-132-jobst-landgrebe/ Elon Musks Irrweg], Interview with Robert Cibis (February 16, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Y-yovYmd1_c Where there’s no will there’s no way], Interview with Alex Thomson, UKCommons (March 21, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=vO_JDTsrdiA Conversation with Jobst Landgrebe and Barry Smith: Why AI won’t rule the world], The Pangburn Hangout (May 5, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=3Ni3NiA29Pw AI and ChatGPT: Should we be worried?] Steve Peterson, Jobst Landgrebe and Barry Smith, National Association of Scholars (May 19, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://dataskeptic.com/blog/episodes/2023/why-machines-will-never-rule-the-world Why Machines Will Never Rule the World], Jobst Landgrebe and Barry Smith, Interview with Kyle Polich, Data Skeptic [https://www.youtube.com/watch?v=mPJaRrJJ_zI Youtube] (May 29, 2023)&lt;br /&gt;
 &lt;br /&gt;
[https://www.youtube.com/watch?v=uHqvQrHQSk8 Why AI Will Never Rule the World], Fidias Podcast (July 21, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://philpapers.org/rec/SOLLAN L’intelligenza artificiale non dominerà il mondo] (&amp;quot;Artificial intelligence will not dominate the world&amp;quot;), interview with Barry Smith, &#039;&#039;Il Sole 24 Ore&#039;&#039; (April 27, 2024)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=SJbXt02ZC-c Will Machines Rule the World?], Brain in a Vat podcast (November 3, 2024)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=GbHTLfTrjAs Jobst Landgrebe Doesn&#039;t Believe In AGI | Liron Reacts], Doom Debates (October 2024)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=0qNlp5Hf5dU Jobst Landgrebe -- Can AI TAKE OVER The World?], Two Stewards Podcast (January 17, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://creators.spotify.com/pod/profile/ukcolumn/episodes/Jobst-Landgrebe-and-Barry-Smith-Why-Machines-Will-Never-Rule-the-World-e32d5l1 Interview with Jeremy Nell], UK Column (April 29, 2025) &lt;br /&gt;
&lt;br /&gt;
[https://rcr.media/episodes/tech-tuesday-jobst-landgrebe-the-real-limits-of-machine-intelligence-unveiled/ Jobst Landgrebe, The Real Limits Of Machine Intelligence Unveiled], Interview with Paul Brennan, RCR Podcast (June 3, 2025)&lt;br /&gt;
&lt;br /&gt;
[http://rcr.media/episodes/jobst-landgrebe-ai-reality-check-when-large-language-models-break-physics-laws/ Jobst Landgrebe, AI Reality Check: When Large Language Models Break Physics Laws], Interview with Paul Brennan, RCR Podcast (June 24, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=tAo8kO2CJNI&amp;amp;list=PLCobN2DevAuVMAKWcbBZewJLGuYIYQvW5 Jobst Landgrebe and Barry Smith, Why Machines Will Never Rule the World], UK Column (May 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.aporiamagazine.com/p/debate-can-intelligence-be-engineered Debate with Jobst Landgrebe and Barry Smith: Can Intelligence Be Engineered?], Aporia Podcast (November 10, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Sln9HLNpUZc Why A.I. Will Never Rule The World], Haman Nature Podcast (November 25, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Rw92EyPBpCY A.I. Won&#039;t Take Over The World...Or Will It?], Haman Nature Podcast (December 5, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://rcr.media/episodes/jobst-landgrebe-on-why-machines-will-never-rule-the-world-artificial-intelligence-without-fear/ Jobst Landgrebe On &#039;Why Machines Will Never Rule The World&#039;], RCR Podcast (December 5, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=2fMPRNTOjW8 Professor warnt: Die KI-Revolution wird scheitern!] (&amp;quot;Professor warns: The AI revolution will fail!&amp;quot;), Real Unit Schweiz, March 24, 2026&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75542</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75542"/>
		<updated>2026-02-24T13:34:26Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so, will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, 8 years of them as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The types of AI&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
:Slides&lt;br /&gt;
:Video&lt;br /&gt;
:Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==	Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy. &lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, Youtube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
</feed>