by Chief Research Officer Sergey Nikolenko
Preface
This is an introduction to modern AI and specifically to neural networks. I attempt to explain to non-professionals what neural networks are all about, where these ideas grew from, why they developed in the order they did, how we are shaping these ideas now, and how they, in turn, are shaping our present and our future. The dialogue is a venerable genre of popular science, falling in and out of fashion over the last couple of millennia; Galileo’s dialogue about the Copernican system, for example, was so wildly successful that it stayed on the Index of Forbidden Books for two centuries. In our dialogues, you will hear many different voices (all of them are in my head, and the famous people mentioned here never actually said what I put in their mouths). The main character is the narrator, who will be doing most of the talking; following a computer science tradition, we call her Alice. She engages in conversation with her intelligent but not very educated listeners, Bob and Charlie. Alice’s university has standard subscription deals with Springer, Elsevier, and the netherworld, so sometimes we will also meet the ghosts of people long dead.
Enjoy!
Dialogue I: From Language to Logic
Alice. Hey guys! We are here with quite a task: we want to create an artificial intelligence, no less. A walking, talking, thinking robot that could do everything a human could. I have to warn you: lots of people have tried, most of them have vastly overestimated themselves, and all of them have fallen short so far. We probably also won’t get there exactly, but we sure want to give it a shot. Where do you suppose we should begin?
Bob. Well… gosh, that sounds hard. To be intelligent, a person has to know a lot of things — why don’t we try to write them all down first and let the robot read?
Charlie. We have encyclopaedias, you know. Why don’t we let the computer read Wikipedia? That way it can figure out all sorts of things.
Alice. Riiight… and how would we teach the computer to read Wikipedia?
Bob. Well, you know, reading. Language. It’s a sequence of discrete well-defined characters that combine into discrete well-defined words. We can already make computers understand programming languages or query languages like SQL, and those look just like English, only a bit more structured. How hard can it be to teach a computer to read English?
Alice. Very hard, unfortunately. Natural language is indeed easy to encode and process, but it is very hard to understand — you see, it was not designed for a computer. Even now there is no program that truly understands English, and the best artificial intelligence models still struggle with reading — we’ll talk more about this later. But I can give you a quick example from one particularly problematic field called pragmatics. “The laptop did not fit in the bag because it was too big”. What was too big, the bag or the laptop?
Bob. The laptop, obviously.
Alice. Okay. Try another one. “The laptop did not fit in the bag because it was too small”. What was too small, the bag or the laptop?
Bob. Obviously… oh, I see. We understand it because we know the world. But the computer does not know anything about what a laptop is or what a bag is! And the sentence looks very simple, not too contrived at all. But it does look a bit like a handmade counterexample — does this kind of stuff happen often?
Alice. Very often. Our whole system of communication is made for us, wet biological beings who have eyes, ears, and skin, who understand three dimensions, who have human urges and drives. There is a lot left unsaid in every human language.
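A quick aside for the programmers in the audience: the two sentences above differ in exactly one word, so any system that looks only at the form of the text has nothing to distinguish them by. Here is a toy illustration in Python (just string processing, of course, not a real language model):

```python
# The two sentences are identical except for one adjective, so any
# purely form-based analysis assigns them the same structure -- yet
# the referent of "it" flips. Resolving the pronoun requires facts
# about laptops and bags that live outside the text.
s1 = "The laptop did not fit in the bag because it was too big"
s2 = "The laptop did not fit in the bag because it was too small"

diff = [(w1, w2) for w1, w2 in zip(s1.split(), s2.split()) if w1 != w2]
print(diff)  # [('big', 'small')] -- one word changes what "it" refers to
```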
Bob. So the computer can’t just pick up English as it goes along, like children learn to speak, no?
Alice. Afraid not. That is, if it could, it would be wonderful and it would be exactly the kind of artificial intelligence we want to build. But so far it can’t.
Charlie. Well then, we’ll have to help it. You’re saying we can’t just go ahead and write a program that reads English. Okay. So what if we invent our own language that would be more… machine-readable?
Bob. Yeah! It can’t be an existing programming language, you can’t describe the world in C++, but we simply have to make natural languages more formal, clear out the exceptions, all that stuff. Make it self-explanatory, in a way, so that it could start from simple stuff and build upon it. It’ll be a big project to rewrite Wikipedia in this language, but you only have to do it once, and then all kinds of robots will be able to learn to read it and understand the world!
Alice. Cool! You guys just invented what might well be the first serious approach — purely theoretical, of course — to artificial intelligence as we understand it now. Back in the 1660s, Gottfried Leibniz, the German co-inventor of calculus and bitter rival of Isaac Newton, started talking about what he called the Characteristica universalis, the universal “alphabet of human thought” that would unite all languages and express concepts and ideas from science, art, and mathematics in a unified and coherent way. Some people say he was under the heavy influence of the Chinese language, which had reached Europe not long before. Europeans believed that all those beautiful Chinese symbols had a strict system behind them — and they did, but the system was perhaps also a bit messier than the Europeans thought.
Anyway, Leibniz thought that this universal language would be graphical in nature. He believed that a universal system could be worked out based on diagrams and pictures, and that this system would be so clear, logical, and straightforward that machines could be made to perform reasoning in the universal language. Leibniz actually constructed a prototype of a machine for mathematical calculations that could do all four arithmetic operations; he hoped to extend it into a machine for his universal language. It is, of course, unclear how he planned to make a mechanical device understand pictures. But his proposal for the universal language undoubtedly did have a graphical component. Look at a sample diagram by Leibniz — it almost looks like you could use it to summon a demon or two. Speaking of which…
Leibniz [appearing in a puff of smoke]. Ja! You see, God could not wish to make the world too complicated for His beloved children. We see that in the calculus: it is really quite simple, no need for those ghastly fluxions Sir Isaac was always talking about. As if anybody could understand those! But when you find the right language, as I did, calculus becomes a beautiful and simple thing, almost mechanical. You only need to find the right language for everything: for science, for the world. And I would build a machine for this language, first the calculus ratiocinator, and then, ultimately, the machina ratiocinatrix, a reasoning machine! That would show that snobbish mystic! That would show all of them! Alas, I did not really think this through… [Leibniz shakes his head sadly and disappears]
Alice. Indeed. Gottfried Leibniz was the first in a very long line of very smart people who vastly underestimated the complexity of artificial intelligence. In 1669, he envisioned that the universal language could be designed in five years if “selected men” could be put on the job (later we will see how eerily similar this sounds to the first steps of AI in our time). In 1706, he confessed that “mankind is still not mature enough to lay claim to the advantages which this method could provide”. And it really was not.
Charlie. Okay, so Leibniz could not do this; that doesn’t surprise me too much. But can’t we do it now? We have computers, and lots of new math, and we even have a few of those nice artificial languages like Esperanto already, don’t we?
Alice. Yes and no. But mostly no. First of all, most attempts to create a universal language had nothing to do with artificial intelligence. They were designed to be simple for people, not for machines. Esperanto was designed to have a simple grammar, no exceptions, to sound good — exactly the things that don’t matter all that much for artificial intelligence: it’s not hard for a computer to memorize irregular verbs. Second, even if you try, it is very hard to construct a machine-readable general-purpose language. My favourite example is Iţkuîl, designed in the 2000s by John Quijada specifically to remove as much ambiguity and vagueness from human languages as possible. Iţkuîl is one of the most concise languages in the world, able to express whole sentences’ worth of meaning in a couple of words. It is excruciatingly hard for humans… but it does not seem to be much easier for computers. Laptops still don’t fit into bags, in any language. There is not a single fluent Iţkuîl speaker in the world, and it has not brought any success in artificial intelligence either.
Charlie. All right, I suppose it’s hard to teach human languages to computers. That’s only natural: an artificial intelligence lives in the world of ones and zeros, and it’s hard to understand or even imagine the outside world from inside a computer. But what about cold, hard logic? Mathematics? Let’s first formalize the things that are designed to be formal, and if our artificial intelligence can do math it already feels pretty smart to me.
Alice. Yes, that was exactly the next step people considered. But we have to step back a bit first.
It is a little surprising how late logic came into mathematics. Aristotle used logic to formalize commonsense reasoning with syllogisms like “All men are mortal, Socrates is a man, hence Socrates is mortal”. You could say he invented propositional logic, rules for handling quantifiers like “for all” and “there exists”, and so on, but that would really be a stretch. Mathematics used logic, of course, but for most of its history, mathematicians did not feel that there were any problems with basing mathematics on common sense. Like, what is a number? Until, in the XIX century, strange counterexamples started to appear left and right. In the 1870s, Georg Cantor invented set theory, and researchers quickly realized that there were serious problems with formal definitions of fundamental objects like a set or a number. Only then did it become clear that logic was very important for the foundations of mathematics.
The golden years of mathematical logic were the first half of the XX century. At first, there was optimism about the general program of constructing mathematics from logic, in a fully formal way, as self-contained as possible. This optimism is best summarized in Principia Mathematica, a huge work in which Bertrand Russell and Alfred North Whitehead aimed to construct mathematics from first principles, from basic logical axioms, in a completely formal way. It took several hundred pages to get to 1+1=2, but they did manage to get there.
Kurt Gödel was the first to throw water on the fire of this optimism. His incompleteness theorems showed that this bottom-up construction could never be fully successful: to simplify a bit, there will always be true statements that you cannot prove. At first, mathematicians took it to heart, but it soon became evident that Gödel’s incompleteness theorems are not really a huge deal in practice: it is very unlikely that we will ever come across an unprovable statement that actually matters. Maybe the P=NP question is one, but that’s the only reasonable candidate so far, and even that is not really likely. And it would still be exceedingly useful to have a program able to prove the provable theorems. So by the 1940s and 1950s, people were very excited about logic, and many thought that the way to artificial intelligence was to implement some sort of theorem-proving machine.
Bob. That makes perfect sense: logical thinking is what separates us from the animals! An AI must be able to do inference, to think clearly and rationally about things. Logic does sound like a natural way to AI.
Alice. Well, ultimately it turned out that it was a bit too early to talk about what separates us from the animals — even now, let alone in the 1950s, it appears to be very hard to reach the level of animals, and surpassing them in general reasoning and understanding of the world is still far out of reach. On the other hand, it turned out that we are excellent at pattern matching but rather terrible at formal logic: if you have ever taken a course in mathematical logic, you remember how hard it can be to formally write down the proofs of even the simplest statements.
Charlie. Oh yes, I remember! In my class, our first problem in first-order logic was to prove A → A from Hilbert’s axioms… man, that was far from obvious.
Alice. Yes. There are other proof systems and plenty of tricks that automatic theorem provers use. Still, so far it has not really worked as expected. There are some important theorems where computers were used for case-by-case enumeration (one of the first and most famous examples was the four color theorem), but to this day, there is no automated prover that can prove important and relevant theorems by itself.
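An aside for the curious: Charlie’s problem is a classic. In a Hilbert-style system for propositional logic — axiomatizations vary, so this is just one standard presentation — one usually starts from the axiom schemas often called K: A → (B → A) and S: (A → (B → C)) → ((A → B) → (A → C)), with modus ponens as the only inference rule. Deriving A → A then takes five steps:

```latex
\begin{aligned}
&1.\ A \to ((A \to A) \to A)
  &&\text{axiom K with } B := A \to A\\
&2.\ \bigl(A \to ((A \to A) \to A)\bigr) \to \bigl((A \to (A \to A)) \to (A \to A)\bigr)
  &&\text{axiom S with } B := A \to A,\ C := A\\
&3.\ (A \to (A \to A)) \to (A \to A)
  &&\text{modus ponens from 1 and 2}\\
&4.\ A \to (A \to A)
  &&\text{axiom K with } B := A\\
&5.\ A \to A
  &&\text{modus ponens from 4 and 3}
\end{aligned}
```

Five steps and two carefully chosen instantiations just to show that A implies itself — which is exactly Alice’s point about how unnatural formal proofs feel to humans.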
Charlie. So far all you’re saying is that not only is it hard for computers to understand the world, it is even hard for them to work with perfectly well-defined mathematical objects!
Alice. Yes. Often, the formalization itself is hard. But even when it is possible to formalize everything, as in mathematical logic, there is usually still a long way to go before we can automatically obtain useful new results.
Bob. So what do we do? Maybe for some problems we don’t need to formalize at all?
Charlie. What do you mean?
Bob. I mean, like, suppose you want to learn to fly. Our human way to fly is to study aerodynamics and develop wing-like constructions that convert horizontal speed into lift and take off that way. But birds can fly too, maybe less efficiently, but they can. A hundred years ago we could not simulate the birds, so we found other ways through our cunning in physics and mathematics — but what if for intelligence it is easier the other way around? An eagle does not know aerodynamics; it just runs off a cliff and soars.
Alice. And with this, pardon the pun, cliffhanger we take a break. When we reconvene, we will pick up from here and run with Bob’s idea. In artificial intelligence, it proved surprisingly fruitful.