A long time ago (1992), in a land far, far away (America), an author wrote a book called The Decline and Fall of the American Programmer. Four years later (1996) he realized he had gotten it wrong, and he penned the Rise and Resurrection of the American Programmer. So much for being a prophet. Why did Ed Yourdon pen the first tale? And why did he recant it a short four years later? And why the biblical references? Strike that last one. Some will say this all happened a long time ago, so it really doesn’t matter. But does it? What is the future of programming? Will human programmers be out of a job at some point? This author would argue both yes and no. If that’s confusing, read on for clarification.
Part Art, Part Science
Programming has always been part art and part science. Take it from someone who has practiced it for longer than you would care to remember. There were those who were good at it, those who were lousy at it, and then there were the Rembrandts of the world, who were particularly irritating because they did it effortlessly. But why is programming so difficult? Why is it not just pure science? The answer is elusive for many, but ultimately obvious in the end. Programming is a creative task, similar in some respects to writing an article or a book. Although the overall concept or theme of the content might be clear, it is not so clear how to express it. Some writers might be inept, while others write prose worthy of the gods.

Around the time Ed Yourdon was making, and revising, his predictions, many in the computer science field were pursuing Computer Aided Software Engineering (CASE). Look up CASE on Wikipedia – you’ll see the word “was” a lot. Many others of the same generation were pursuing Artificial Intelligence (AI), which was a pretty thankless task at the time (look up AI winter on Wikipedia). However, let’s flash forward twenty years. Nope, no CASE tools, but we do have some pretty nifty Integrated Development Environments (IDEs). And we have AI. Perhaps not the AI we dreamt about all those years ago, but a form of AI nonetheless. We’re not talking Deep Blue here, a program designed to perform one specific task at much faster speeds than a human. We’re talking about neural networks, speech recognition, data mining, robotics, medical diagnosis, deep learning, and IBM Watson, which defeated the two top human champions on Jeopardy! in 2011 for a prize of $1 million.
So where does this leave us with respect to the title of this article, and my answer of both yes and no regarding the future of human programming? Can we use AI techniques to write working code? Failing that, can we use AI techniques to help humans write working code? Let’s examine what we do know:
- We can invent Domain Specific Languages (DSLs) that can express, in computable form, pretty much any programmable subject matter.
- Human beings can write the backing code for DSLs, some of which may be written in other DSLs. It’s also possible that some DSL code can be machine generated.
- There exist today many Natural Language Processing (NLP) programs capable of translating written human language into DSL(s). NLP can be augmented by technology like IBM Watson, which can find answers to just about any question you’d like to ask, or Google Neural Machine Translation (GNMT), which can translate between language pairs it has never explicitly studied.
- There exist today many Speech Recognition (SR) programs capable of transcribing any spoken human language into written text.
Connecting the Dots
Human speaks ⇒ SR translates ⇒ NLP translates (with help) ⇒ DSL executes ⇒ Voila!
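The pipeline above can be sketched in code. This is a toy illustration only: the mini arithmetic DSL, the `translate_to_dsl` lookup, and the interpreter are all hypothetical stand-ins for real SR and NLP components, which are far more sophisticated.

```python
# A toy illustration of the SR => NLP => DSL pipeline. All names here
# (the mini DSL, translate_to_dsl) are hypothetical examples.

# Step 1 (SR) is simulated: we start from already-transcribed speech.
transcript = "add five and seven then double it"

# Step 2 (NLP translates): a stand-in for a real NLP layer that maps
# recognized phrases onto DSL statements.
def translate_to_dsl(text):
    words = text.split()
    numbers = {"five": 5, "seven": 7}
    program = []
    if "add" in words:
        operands = [numbers[w] for w in words if w in numbers]
        program.append(("add", operands))
    if "double" in words:
        program.append(("mul", [2]))
    return program

# Step 3 (DSL executes): a minimal interpreter for the toy DSL.
def run_dsl(program):
    acc = 0
    for op, args in program:
        if op == "add":
            acc += sum(args)
        elif op == "mul":
            for a in args:
                acc *= a
    return acc

print(run_dsl(translate_to_dsl(transcript)))  # 24
```

The point is not the arithmetic; it is that each stage hands a more structured representation to the next, until something executable drops out the end.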
Why did this seem so easy? Because we factored out most of the creative part and left it to a Human Intelligence Task (HIT), a term coined by Amazon for their Mechanical Turk program. The benefit of this approach is twofold: 1) we create a reusable library of software components in the form of DSLs, and 2) we now have programs on demand, literally available at the spoken word.
Of course, it’s not quite that simple. We did say we factored out most of the creative part. There is a part remaining (the “with help” part) that still requires some attention, which is the weaving together of program elements expressed by various DSL components to create a complete functioning program. This can be trickier than it sounds, and is one of the areas that has been criticized in the past in connection with programming techniques such as Aspect Oriented Programming (AOP), which also performs the weaving function, albeit at a lesser level.

IBM Watson, which is based on the DeepQA architecture, is probably the closest thing we have today to real AI. Watson uses more than 100 different techniques for analyzing natural language, identifying sources, finding and generating hypotheses, finding and scoring evidence, and merging and ranking hypotheses. Another promising avenue is Google’s November 2016 announcement of true multilingual zero-shot translation, enabling the technology used in Google Translate to create new translations on its own between language combinations it didn’t know before. For example, Google Translate was taught Portuguese ⇒ English and English ⇒ Spanish. It was then able to make reasonable translations directly from Portuguese ⇒ Spanish. It is possible that in order to solve the weaving problem entirely we might have to add significant intelligence on the order of Watson and Google Translate to the task.
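To make the weaving problem concrete, here is a minimal sketch. It assumes (hypothetically) that each DSL fragment declares what data it needs and what it produces; the weaver then orders the fragments so every input is satisfied before it is consumed. The fragment format and `weave()` function are illustrations invented for this article, not an existing tool.

```python
# Toy "weaving": order DSL fragments so each one's inputs are
# produced by an earlier fragment. All names here are hypothetical.

fragments = [
    {"name": "report", "needs": {"summary"}, "makes": {"report"},
     "run": lambda env: {"report": "REPORT: " + env["summary"]}},
    {"name": "summarize", "needs": {"records"}, "makes": {"summary"},
     "run": lambda env: {"summary": f"{len(env['records'])} records"}},
    {"name": "load", "needs": set(), "makes": {"records"},
     "run": lambda env: {"records": [1, 2, 3]}},
]

def weave(fragments):
    """Order fragments so each one's inputs are produced first."""
    ordered, available = [], set()
    pending = list(fragments)
    while pending:
        ready = [f for f in pending if f["needs"] <= available]
        if not ready:
            raise ValueError("cannot satisfy inputs for: " +
                             ", ".join(f["name"] for f in pending))
        for f in ready:
            ordered.append(f)
            available |= f["makes"]
            pending.remove(f)
    return ordered

env = {}
for frag in weave(fragments):
    env.update(frag["run"](env))
print(env["report"])  # REPORT: 3 records
```

Real weaving is harder than this dependency sort suggests: fragments can conflict, overlap, or interact in ways no declared interface captures, which is exactly where Watson-scale intelligence might be needed.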
We also have to agree on the design of DSLs, which are essentially the specification that human programmers will write to. Humans are pretty good at coming up with standards when they decide to work together. And of course, some humans will be writing code to test implementations of DSLs to ensure that they perform to that specification. DSLs are as expandable as they need to be. They can also be redesigned when necessary, and the AI components of the above system can be retrained to work with the new variants. Human programming communities today collaborate in creating mountains of free open source software available to everyone. They just need to be re-oriented towards writing code to DSL specifications that serve the above practical purpose. With the exception of the weaving techniques, nothing new needs to be invented; it’s simply a matter of combining several things we already know how to do. Will we be able to write every piece of code in this fashion? Probably not for some time, but the sooner we get started the better, in my opinion.
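The idea of testing a DSL implementation against its specification can be sketched too. Here the spec is (hypothetically) expressed as input/output cases, and a conformance check runs an implementation against them; the `evaluate` function and spec format are invented for illustration.

```python
# A sketch of testing a DSL implementation against its spec,
# where the spec is a set of input/output cases. Hypothetical names.

SPEC_CASES = [
    ("add 2 3", 5),
    ("mul 4 6", 24),
]

def evaluate(statement):
    """One possible implementation of the toy DSL under test."""
    op, *args = statement.split()
    nums = [int(a) for a in args]
    if op == "add":
        return sum(nums)
    if op == "mul":
        result = 1
        for n in nums:
            result *= n
        return result
    raise ValueError(f"unknown operation: {op}")

def check_conformance(impl, cases):
    """Return the cases where the implementation disagrees with the spec."""
    return [(s, expected, impl(s)) for s, expected in cases
            if impl(s) != expected]

print(check_conformance(evaluate, SPEC_CASES))  # []
```

An empty result means the implementation conforms; any competing implementation of the same DSL can be checked against the identical cases, which is what makes the spec, rather than the code, the contract.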
Where Does That Leave Us?
In 1968, Arthur C. Clarke imagined that by the year 2001 we would have an intelligent computer named HAL 9000 that could exceed its human counterparts in intelligence. In 2001, AI founder Marvin Minsky asked, “So the question is why didn’t we get HAL in 2001?” There are many answers to that question, but AI continues to grow in application, with revenue predicted to reach $5.05 billion by 2020. It seems that applications for AI spring up faster than we can keep up with. Prof. Stephen Hawking, one of the world’s leading scientists, warns that AI could spell the end of the human race. Maybe he’s right, but on the other hand, AI could be just the tool we’re looking for to make infinite programming available to the human race. And all without putting human programmers out of work, at least for the foreseeable future. Sounds pretty neat, eh?
Glenn Reid is the CEO of RJB Technology Inc., a Canadian firm with Branch Offices in Makati, Philippines. During his career he has written numerous code generators, as well as an implementation of the Smalltalk programming language for the IBM mainframe. Contact us today for more information about our company, or to discuss your custom software development needs.