Artificial General Intelligence: A Philosopher’s Manifesto | Københavns Universitet

Public talk by Philosophy Professor Anandi Hattiangadi (Stockholm University)

Experts in Artificial Intelligence (AI) and tech industry insiders predict the imminent emergence of Artificial General Intelligence (AGI): machines that are at least as intelligent as humans. Optimists expect AGIs to solve humanity's problems, while doomsayers worry they would render us extinct. The problem is that most people are operating with a misguided conception of AGI. This skews AGI timelines and our assessments of risks and opportunities. To correct this misconception, we need to return to fundamental philosophical questions about what makes human cognition distinctive and what is required for an artificial system to think like a human. In this talk, I put forward a novel account of distinctively human cognition and apply it to the interpretation of AI. The upshot is surprising. It turns out that we should be less afraid of machines that think like us than of machines that fool us into thinking that they do.

Contact

HUSET, Xenon

Rådhusstræde 13

1466 København