Director, Rensselaer AI & Reasoning Lab
Rensselaer Polytechnic Institute
Dr. Selmer Bringsjord is Professor of Cognitive Science, Computer Science, Logic & Philosophy, and Management & Technology at Rensselaer Polytechnic Institute (RPI), where he is Director of the Rensselaer AI & Reasoning Lab. He obtained his PhD in Philosophy from Brown University and his BA in Philosophy from the University of Pennsylvania. He specializes in the logico-mathematical and philosophical foundations of artificial intelligence (AI) and cognitive science, and in collaboratively engineering AI systems with human-level intelligence, where the engineering is based on computational formal logic. One such past system is a self-aware robot able to leverage its self-understanding to solve tough problems; another such system, currently under development but already in prototype form, is an "ethically correct" robot with the power to compute solutions even to knotty moral dilemmas, and to justify its actions with cogent arguments and proofs. Bringsjord is the author of What Robots Can & Can't Be; Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, A Storytelling Machine (with David Ferrucci); Superminds: People Harness Hypercomputation, and More; and other works. Bringsjord has received many honors, including the Covey Award from the International Association for Computing and Philosophy, for innovative contributions at the intersection of computing and philosophy.
【Day 2-2】Artificial General Intelligence
Without Artificial General Moral Intelligence, We’re Dead
Particular challenges in particular domains can make otherwise dim machines seem, to some, brilliant. AlphaGo may seem super-smart — until we see that its prowess covers but one measly game in one measly class of simple, solvable games of perfect information. Descartes, who explicitly declared that no machine could ever have AGI (artificial general intelligence), would, if appearing here today, triumphantly point out that if we sprang even just tic-tac-toe on an unsuspecting AlphaGo, it would fail. Edgar Allan Poe, whose views on AI (and specifically on games and the mind) are unfortunately little known, would make a parallel point. Now, into this already-sobering context strikes like a bolt of lightning this further fact: if powerful, autonomous machines of the future cannot divine what they ought to do (or not do) in novel situations, they might well kill us all. The only solution is to engineer machines with AGMI (artificial general moral intelligence): machines that are morally correct in any new circumstances, and provably, incorruptibly so. With crucial help from others, I offer this solution.