Summary
Anastassia Lauterbach and Joseph Miller delve into the complexities of artificial intelligence, focusing on the limitations of large language models (LLMs) and the importance of embedding causality and reasoning into AI systems. Joseph critiques the current transformer architecture, explaining that it lacks a true understanding of causality, which is essential for meaningful interactions. He emphasizes that while LLMs can generate convincing language, they lack a world model that would enable them to reason about or understand the implications of their outputs. This leads to a discussion of the necessity of ontologies and knowledge graphs, which provide a structured understanding of the world and enable AI to operate more effectively in real-world contexts.
The conversation also touches on the future of AI in the workplace, with Joseph expressing a somewhat pessimistic view of the labor disruption AI advancements will bring. He believes that while AI can enhance productivity, it may also lead to significant job losses as many roles are automated. Still, he remains hopeful about the potential for humans and AI to work together, emphasizing the need for accountability and responsibility in AI applications. The discussion concludes with reflections on the importance of AI literacy and the prospect of a future in which humans and AI coexist harmoniously, leveraging each other's strengths.
Joseph (Joe) Miller, PhD, is a physicist and serial entrepreneur who serves as Co‑Founder and Chief AI Officer at Vivun, where he builds AI sales agents that embed expert domain knowledge into real‑world workflows. Before Vivun, he worked at Bridgewater Associates on expert systems for systematic decision‑making, later founded Battery CI, a quantitative FX hedge fund, and co‑founded other tech ventures at the intersection of AI, finance, and digital identity. Across his roles, Miller focuses on causal inference, world models, and knowledge‑centric AI, translating deep technical ideas into practical systems for high‑stakes enterprise environments like sales, trading, and strategic decision‑making.
Takeaways
Judea Pearl’s “The Book of Why” is a must-read for understanding the foundations of causality and what current AI systems lack.
LLMs lack a true understanding of causality.
Embedding ontologies can enhance AI's reasoning capabilities.
AI's productivity gains may lead to significant job disruption.
Humans must remain accountable for AI's decisions, and AI makers will be liable for product defects in AI services and applications.
AI literacy is crucial for navigating future challenges.
Chapters
00:00 Introduction to the episode: Looking into AI and reasoning LLMs
03:11 Discussing two books: “Nexus” and “The Book of Why”
07:36 Limitations of Large Language Models today
14:50 Embedding Context into LLMs with Ontologies and Knowledge Graphs
18:31 The Convergence of AI Approaches as a possible path to a reasoning AI
20:52 Defining Ontologies and Knowledge Graphs
25:45 Innovation Through Interdisciplinary Knowledge in AI as a Necessity
30:04 Dynamic Learning in LLMs
34:15 ‘World Models’ and Their Impact on AI
35:14 The Future of AI, Accountability, and AI Ethics
40:03 Human-AI Collaboration in the Workplace
47:06 The Importance of AI Literacy
Hyperlinks
Joe Miller and Vivun / AI in sales:
Anastassia:
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
