Cyc Google TechTalk
Google Video has a recording of a talk given by Douglas Lenat, the President and CEO of Cycorp. It's more than 70 minutes long, but worth the time of anyone interested in AI. I want to highlight two parts that I found particularly interesting:
It has been my belief for a while that general-purpose reasoners and theorem provers are only good for very few tasks (such as proving the correctness of a program) and that most real-world tasks instead need faster, task-specific reasoners or heuristics. For me this thought was always motivated by ideas from cognitive psychology (see for example the research into "Fast and Frugal Heuristics" by the ABC Research Group in Berlin). However, I always lacked good computer science arguments to back up this point - now at least I can say that Cycorp sees it the same way:
There is a single correct monolithic reasoning mechanism, namely theorem proving; but, in fact, it's so deadly slow that, really, if we ever fall back on our theorem prover, we're doing something wrong. By now, we have over 1,000 specialized reasoning modules, and almost all of the time, when Cyc is doing reasoning, it's running one or another of these particular specialized modules. (~32:20)
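To make that idea concrete, here is a minimal sketch of such a dispatch architecture - my own toy construction, not Cyc's actual code, module names, or query language: a list of specialized modules is tried first, and the slow general theorem prover is only used when none of them matches the query.

```python
# A minimal sketch, not Cyc's actual architecture: try specialized reasoning
# modules first, fall back to a general-purpose theorem prover only as a last resort.
from typing import Callable

class SpecializedModule:
    """A fast reasoner that only handles queries matching a specific pattern."""
    def __init__(self, name: str,
                 matches: Callable[[str], bool],
                 answer: Callable[[str], bool]):
        self.name = name
        self.matches = matches
        self.answer = answer

def general_theorem_prover(query: str) -> bool:
    """Placeholder for the slow but general fallback reasoner."""
    print(f"(slow) theorem proving for: {query}")
    return False

def ask(modules: list[SpecializedModule], query: str) -> bool:
    for module in modules:
        if module.matches(query):
            return module.answer(query)       # fast, specialized path
    return general_theorem_prover(query)      # rarely-taken fallback

# Hypothetical example: a module that decides 'partOf' queries by transitive closure.
part_of = {("wheel", "car"), ("car", "fleet")}

def part_of_holds(query: str) -> bool:
    _, x, y = query.split()
    closure = set(part_of)
    changed = True
    while changed:                            # naive transitive closure
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        changed = not new <= closure
        closure |= new
    return (x, y) in closure

transitivity = SpecializedModule("partOf-transitivity",
                                 matches=lambda q: q.startswith("partOf"),
                                 answer=part_of_holds)

print(ask([transitivity], "partOf wheel fleet"))  # True via the specialized module
print(ask([transitivity], "likes alice bob"))     # falls back to the theorem prover
```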
I also think that humans are almost constantly reorganizing the knowledge structures in their heads - most of the time becoming more effective at reasoning and quicker at learning. An example of this process is the forming of "thought entities". There seems to be a limit on the number of thought entities that humans can manipulate in their short-term memory. This limit seems to be fixed for life and lies somewhere between 5 and 8. What does change with experience is the structure and complexity of these thought entities. A famous example of the effect of experience on thought entities is the ability of expert chess players and amateurs to recall chess positions. If you show the positions of chess pieces from a normal game to expert chess players and amateurs, the expert players will be much better at recalling the exact positions. But when you place the pieces randomly, both will perform equally badly. The common explanation for this phenomenon is that the expert has more complex thought entities at her disposal. In normal chess positions she can find large familiar patterns - like "white has played opening A in variant B". These large and complex thought entities allow the expert to fit the positions of up to 32 chess pieces into the available 8 slots. When the chess pieces are placed randomly, these familiar structures no longer appear and the expert loses her advantage.
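A toy way to make this chunking idea concrete (the board contents and pattern names below are made up purely for illustration): the same set of pieces needs far fewer memory "slots" once familiar patterns can be stored as single units.

```python
# Toy illustration of chunking: a position that takes 12 slots piece-by-piece
# fits into 4 slots when familiar patterns are recognized as single units.
pieces = ["Ke1", "Ra1", "Rh1", "Pa2", "Pb2", "Pc2", "Pf2", "Pg2", "Ph2",
          "Nf3", "Bc4", "Pe4"]  # 12 individual items for the amateur

# Hypothetical patterns an expert might recognize as single thought entities.
known_patterns = {
    "kingside-pawn-shield": {"Ke1", "Rh1", "Pf2", "Pg2", "Ph2"},
    "queenside-pawn-chain": {"Pa2", "Pb2", "Pc2"},
    "italian-game-setup": {"Nf3", "Bc4", "Pe4"},
}

def chunk(pieces: list[str], patterns: dict[str, set[str]]) -> list[str]:
    """Greedily replace groups of pieces by the pattern that covers them."""
    remaining = set(pieces)
    chunks = []
    for name, members in patterns.items():
        if members <= remaining:          # the whole pattern is on the board
            chunks.append(name)
            remaining -= members
    return chunks + sorted(remaining)     # leftover pieces stay individual items

print(len(pieces))                         # 12 slots needed without chunking
print(len(chunk(pieces, known_patterns)))  # 4 slots: 3 patterns + Ra1
```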
I have always wondered what the equivalent of this knowledge reorganization process could be in logic-based systems. Cyc has one interesting answer:
Often what we do in a case like this, if we see the same kind of form occurring again and again and again, is we introduce a new predicate, in this case a relational exists, so that what used to be a complicated looking rule is now a ground atomic formula, in this case a simple ternary assertion in our language. (~21:15)
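Here is a small sketch of what I understand that to mean - the rule shape, predicate name, and example facts are my own stand-ins, not actual CycL: a whole family of quantified rules of the same shape collapses into flat ternary assertions over a newly introduced predicate, which a specialized module can then query directly.

```python
# Sketch of predicate introduction: rules of the recurring form
# "(forall x) isa(x, DomainType) => (exists y) isa(y, RangeType) & relation(x, y)"
# are collapsed into single ground assertions over a new ternary predicate.
from dataclasses import dataclass

@dataclass(frozen=True)
class UniversalExistsRule:
    relation: str
    domain_type: str
    range_type: str

def to_ground_assertion(rule: UniversalExistsRule) -> tuple[str, str, str, str]:
    """Collapse the quantified rule into one ternary ground assertion."""
    # 'relationAllExists' stands in for the new predicate Lenat mentions;
    # the actual Cyc predicate name is not confirmed here.
    return ("relationAllExists", rule.relation, rule.domain_type, rule.range_type)

rules = [
    UniversalExistsRule("anatomicalParts", "Dog", "Head"),
    UniversalExistsRule("anatomicalParts", "Dog", "Tail"),
    UniversalExistsRule("biologicalMother", "Person", "FemalePerson"),
]

# The knowledge base now stores three flat facts instead of three quantified
# rules, so no general theorem proving is needed to answer queries over them.
for fact in map(to_ground_assertion, rules):
    print(fact)
```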