Philosophy Eats AI

Generating sustainable business value with AI demands critical thinking about the disparate philosophies determining AI development, training, deployment, and use.


In 2011, coder-turned-venture-investor Marc Andreessen famously declared, “Software is eating the world” in the analog pages of The Wall Street Journal. His manifesto described a technology voraciously transforming every global industry it consumed. He wasn’t wrong; software remains globally ravenous.

Not six years later, Nvidia cofounder and CEO Jensen Huang boldly updated Andreessen, asserting, “Software is eating the world … but AI is eating software.” The accelerating algorithmic shift from human coding to machine learning led Huang to also remark, “Deep learning is a strategic imperative for every major tech company. It increasingly permeates every aspect of work, from infrastructure to tools, to how products are made.” Nvidia’s multitrillion-dollar market capitalization affirms Huang’s prescient 2017 prediction.

But even as software eats the world and AI gobbles up software, what disrupter appears ready to make a meal of AI? The answer is hiding in plain sight. It challenges business and technology leaders alike to rethink their investment in and relationship with artificial intelligence. There is no escaping this disrupter; it infiltrates the training sets and neural nets of every large language model (LLM) worldwide.

Philosophy is eating AI: As a discipline, data set, and sensibility, philosophy increasingly determines how digital technologies reason, predict, create, generate, and innovate. The critical enterprise challenge is whether leaders will possess the self-awareness and rigor to use philosophy as a resource for creating value with AI or default to tacit, unarticulated philosophical principles for their AI deployments. Either way — for better and worse — philosophy eats AI. For strategy-conscious executives, that metaphor needs to be top of mind.

While ethics and responsible AI currently dominate philosophy’s perceived role in developing and deploying AI solutions, those themes represent a small part of the philosophical perspectives informing and guiding AI’s production, utility, and use. Privileging ethical guidelines and guardrails undervalues philosophy’s true impact and influence. Philosophical perspectives on what AI models should achieve (teleology), what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation. Without thoughtful and rigorous cultivation of philosophical insight, organizations will fail to reap superior returns and competitive advantage from their generative and predictive AI investments.

This argument increasingly enjoys both empirical and technical support.



Comments (3)
Ahmed El Wakil
Great read, and a lot to consider. However, some points may need further research, especially when it comes to philosophy, one of the fundamental capabilities that make humans who they are. What are the limits of AI in philosophical training and rigour? Can an LLM one day produce a book like Wittgenstein's Philosophical Investigations? Or is there a limit to AI that it will not surpass, which doesn't mean, however, that it cannot be useful for philosophical contemplation? A recent book by philosopher Barry Smith and Jobst Landgrebe called "Why Machines Will Never Rule the World" interestingly touches upon some of these questions.
Michael Schrage
In case you wonder what ChatGPT 'thinks' of our piece:

**Five Bold Predictions on the “Philosophy Eats AI” Paradigm**

1. **Philosophy Will Become the New Optimization Frontier for LLMs & SLMs**  
   - Just as reinforcement learning (RLHF) and Direct Preference Optimization (DPO) revolutionized LLM fine-tuning, philosophical frameworks will emerge as the next major axis of model optimization. Enterprises will prioritize “philosophy-aligned AI” as a critical design feature, ensuring that AI systems reason, justify, and make recommendations in ways that align with their institutional values, regulatory landscapes, and strategic imperatives.  
   - **Why it matters**: This will redefine **alignment** from an engineering problem into a **philosophical and epistemological one**, meaning that AI training sets, parameter weighting, and tuning will explicitly incorporate philosophical rigor alongside empirical data.  

2. **Philosophy-Trained AI Will Create Competitive Advantages & Economic Differentiators**  
   - Companies that explicitly train AI models on philosophical constructs—like Confucian relational ethics for supply chain optimization or Kantian deontological ethics for legal advisory LLMs—will achieve **higher interpretability, transparency, and trust** in AI-driven decision-making.  
   - **Why it matters**: Philosophy-trained AI will **become a strategic differentiator in high-stakes industries** (finance, healthcare, defense, policy), where organizations must demonstrate **not just accuracy, but explainable reasoning**. Leaders will not just ask, “What does the AI predict?” but “On what philosophical basis does it justify this?”  

3. **Philosophy as a Core AI Capability Will Reshape Global Regulation & Governance**  
   - As AI systems increasingly influence legal rulings, hiring decisions, military strategy, and public policy, regulators will **mandate philosophical transparency** in AI reasoning. AI audits will no longer focus solely on **bias mitigation** but will scrutinize the **philosophical underpinnings** of AI-generated recommendations.  
   - **Why it matters**: Expect to see **jurisprudence-driven LLMs** trained on competing legal philosophies, **corporate AI governance frameworks** based on utilitarian vs. deontological principles, and international disputes over **whose philosophy should dominate global AI regulations**.  

4. **The Next Wave of AI Research Will Focus on “Epistemic AI”**  
   - The LLM/SLM landscape will shift from improving **token prediction accuracy** to **modeling epistemic humility and reasoning structures**. The most advanced models won’t just generate outputs; they will **self-assess the certainty, validity, and philosophical assumptions embedded in their responses**.  
   - **Why it matters**: Expect a **new wave of epistemology-aware models** that rank their own responses based on degrees of certainty, falsifiability, and even competing philosophical interpretations. AI research will pivot towards **designing AI that knows what it doesn’t know** and **models its own epistemic limitations**.  

5. **AI Will Become a Full Participant in Philosophical Inquiry**  
   - Just as AlphaFold solved protein structures beyond human capability, **AI will start producing novel philosophical insights**, challenging long-held human assumptions about ethics, free will, and epistemology. AI will not just apply existing philosophies—it will **synthesize new ones**, accelerating intellectual revolutions.  
   - **Why it matters**: We could see the emergence of **AI-driven philosophical schools**, where AI-generated thought experiments reshape debates on justice, consciousness, and moral responsibility. This will raise **unprecedented questions about AI as a source of original thought and wisdom** rather than merely a tool for human philosophy.  

---

### The Boldest Take: “Agentic AI Will Be Defined by Its Philosophical Commitments” 

The current debate about **agentic AI**—whether AI can develop **goals, intentions, and autonomous reasoning**—misses a crucial point: **What an AI ‘wants’ is inseparable from how it has been philosophically trained to reason about wants, goals, and agency itself.**  

The “Philosophy Eats AI” paradigm reframes **agentic AI** as not just a technological or alignment challenge, but a **philosophical design choice**. **Agency in AI will not be a question of engineering alone but of philosophical orientation.**  

- If an AI is trained on utilitarianism, it will **rationalize trade-offs in cost-benefit terms**.  
- If trained on virtue ethics, it may **prioritize character-building recommendations over mere efficiency**.  
- If trained on Taoist philosophy, it might **advocate for non-intervention over intervention**.  

**The future of agentic AI will be defined by its philosophical architecture.** This means that discussions of AI autonomy, safety, and alignment **will ultimately hinge on choosing, refining, and negotiating the philosophies that undergird AI’s decision-making processes**.  

🔮 **Final Prediction**: By 2030, the most sophisticated AI systems will no longer just respond to human queries—they will **actively debate with humans about what should be done, why, and on what philosophical basis.**
Anonymous
Awesome! Integrating the ultimate thinking discipline with technology, science, business, and even a little General Systems Theory.