Future Vision

The technology world is a cult of innovation. We see that innovation drives success in business, and in the larger economy. It is the building block of civilization.

Software is an almost-pure form of innovation. We call it "soft" because we can change it and reshape it easily. We are converting more and more of almost every product and service into software, because the softer something is, the faster we can improve it.

I can make one easy prediction: software will be a bigger and bigger piece of all of the goods and services that we produce. This has been a trend for 50 years, and it probably will continue for the next 50 years.

I will make some predictions about the ways that the next ten years will take us beyond Continuous Agile to increase speed, scale, and automation. Each step in the process is a straightforward challenge to engineers and organizers of our generation. Each step will provide a powerful competitive advantage for the businesses that can harness it.

The move to SaaS and MAXOS

Businesses will not be able to keep their old-style IT operations. If they keep the old ways, competitors will squeeze them out with faster innovation and aggressive, flexible pricing. We already see this happening in today's deflationary technology environment. All industries will eventually be affected. They can solve the problem by outsourcing their applications and data to SaaS providers. SaaS providers bring every customer up to a consistent level of innovation, scale, and cost management. SaaS providers run continuous delivery on Web-scale IT infrastructure, and they must compete globally on price.

For customized services, enterprises will run MAXOS - a service architecture with continuous delivery. Where MAXOS competes against old-style centralized planning, provisioning, and iterative development, it wins. Enterprise customers will integrate their SaaS providers into the MAXOS matrix as Web service providers. Some of them will build industry-dominating platforms of interconnected services. This will give startups new places to sell their services.

Clear division between Core IT and Fast IT

Enterprise IT users will draw a clear line between their old Core IT and the new Fast IT. They will systematically add APIs to the Core IT, and bury it under new Fast IT running on mobile devices and commodity server farms.
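As a minimal sketch of the pattern - assuming the Flask Web framework and a hypothetical legacy lookup function, not any particular enterprise's stack - an API facade can expose a slice of Core IT to Fast IT applications without touching the core system:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def legacy_inventory_lookup(item_id):
        # Stand-in for a call into the old core system (hypothetical).
        return {"item": item_id, "on_hand": 42}

    @app.route("/api/inventory/<item_id>")
    def inventory(item_id):
        # Fast IT sees a clean JSON API; Core IT stays unchanged behind it.
        return jsonify(legacy_inventory_lookup(item_id))

    if __name__ == "__main__":
        app.run(port=8080)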

The return of Core IT

If my products and services are increasingly based on software, I will want to own some software that gives me a competitive advantage. It will be increasingly clear where I need to invest in "Core IT" to run my business and build my products. I will be rewarded for big, multi-year investments in proprietary software and expertise.

The Empire of Code

Whenever you read this, people are coding. If they were to rise up in unison, they would form an empire on which the sun never sets. And who are the citizens of this empire? They are geeks. They think in similar ways. They may have been raised in a ranch house in Silicon Valley, a concrete apartment block in Kiev, a skyscraper in Shanghai, an igloo, or a mud hut, but when they sit down to work, they form one cultural unit. They live in physical countries where they study, carouse, date, raise families, and pay taxes just like their neighbors. However, they don't actually work in any particular location. They work "on the Web."

This poses some questions for governments. Geeks are a desirable demographic: industrious people entering their best years for creativity, work, and paying taxes. Where should they pay those taxes? Should they pay where they work, where their customers are, or where their employers are? If nations don't all have the same rules, there will be gaps to exploit. People will work in countries that tax based on the location of the employer, and employers will locate in countries that tax based on where people work. It's not a coincidence that the companies that most successfully avoid taxes are global software companies like Google.

Geeks will seek out locations that respect their work environment. The world is currently not a friendly place for global teams that want to meet physically. To bring a multinational team together requires a lot of work to get visas for the meeting location. These visas typically state that visitors are not supposed to work without an additional work permit. There is currently no place in the world where a team can go and work together legally for more than a few weeks. I have started a "Software Sanctuary" project to solve this problem. We will make deals with host governments for a package of on-demand visas, work permits, and simple local taxation.

During the last 30 years, the old system of sovereign nations has spread out to include customs unions and currency unions and federations and autonomous regions and special economic zones. The empire of code is an interesting addition to this network.

Beyond Git and code

Code is king, and it has its servants. Git is a tool that makes it easy to move code around the Internet and to run the various code contribution and deployment workflows that we discuss in this book. It has become very popular, and even essential, since Linus Torvalds created it in 2005.

It will soon be replaced, although we don't yet know what will replace it.

Git uses a lot of manual commands to move and merge code between connected and disconnected users. However, most users are now connected. Outside of the coding world, we use systems like Dropbox and Google Drive that automatically detect and replicate changes between locations. We will want the same automatic replication for code. We can imagine a system that grabs code changes as they are written, tests the changes on all possible merge paths, and continually tells us whether a change passes all tests and is releasable. This system could also see that users are working on the same piece of code, and notify them in real time if they were making conflicting changes.
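As a thought experiment, here is a minimal sketch of the first piece of that system: a watcher that polls a working directory and, whenever a file changes, runs the test suite and reports whether the tree is releasable. The directory name and the pytest command are assumptions for illustration, not a prescription.

    import hashlib
    import subprocess
    import time
    from pathlib import Path

    WATCH_DIR = Path("src")      # hypothetical source directory
    TEST_CMD = ["pytest", "-q"]  # hypothetical test command

    def snapshot(root):
        # Map every source file to a hash of its contents.
        return {p: hashlib.sha256(p.read_bytes()).hexdigest()
                for p in root.rglob("*.py")}

    def watch():
        seen = snapshot(WATCH_DIR)
        while True:
            time.sleep(2)
            now = snapshot(WATCH_DIR)
            if now != seen:
                seen = now
                # Run the tests and report releasability in near real time.
                result = subprocess.run(TEST_CMD)
                status = "releasable" if result.returncode == 0 else "not releasable"
                print("change detected ->", status)

    if __name__ == "__main__":
        watch()

A real system would hook editor or filesystem events instead of polling, and would test every merge path rather than a single working tree.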

If we had a repository that could store and update complete running systems, we would not have to do quite so much work moving code around. "A Web service is the new executable," according to my friend Aaron O'Mullen. For a desktop or single-server architecture, we compile an application into one executable file. In a service architecture, we do not compile one program and run it as a stand-alone executable. We build a whole "stack" or virtual machine, and run it as a Web service.

When we do this, we send a lot of code and configurations back and forth from a code repository. If we want to build a new staging server, we first pull out the code that describes the server and run it. Then we pull out the code for our application and build the app. Then we pull out the latest configurations. This is why HubSpot has 500 Git repositories for 200 services. This machinery works, but it takes a lot of time to set up and maintain. Every one of those three processes (server build, application build, configuration) needs to be scripted and maintained by humans.

The system will be a lot simpler and more efficient when we can version complete images or "containers." The repository will contain complete executable stacks in addition to code. Then we can apply our review, merge, and promote workflows to make changes directly on the runnable systems, without the extra steps to build and configure.
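Here is a minimal sketch of the difference, assuming Docker-style containers and a hypothetical image name. The whole stack is baked into one versioned image, and "promote" is just re-tagging the same runnable artifact:

    import subprocess

    IMAGE = "example/app"  # hypothetical image name

    def build(version):
        # Bake server, application, and configuration into one image.
        subprocess.run(["docker", "build", "-t", IMAGE + ":" + version, "."], check=True)

    def promote(version, stage):
        # Promotion is just re-tagging the same runnable artifact.
        subprocess.run(["docker", "tag", IMAGE + ":" + version, IMAGE + ":" + stage], check=True)

    def run(stage):
        # A new staging server is one command, not three scripted processes.
        subprocess.run(["docker", "run", "-d", IMAGE + ":" + stage], check=True)

    build("1.42")
    promote("1.42", "staging")
    run("staging")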

Productivity comes from machines

We humans like to think we are getting smarter, but we are not. We are figuring a few things out, but we aren't getting smarter. So how are we able to produce more code, faster, every year? We're using more machines, and bigger machinery. Farmers, coal miners, and automobile manufacturers increase their productivity dramatically as they employ bigger and better machines. Programmers get the same boost by using machines to search, build, test, scan, correct, and deploy their software. We can boost productivity by looking for places where we can use more machines, and do more different things with machines.

Automated programming

There aren't enough programmers in the world to deliver all of the software we will need in the future. Only a small subset of the population enjoys programming and is good at it. Globalization has temporarily solved the problem by vastly increasing the number of educated, connected candidates. However, we will soon reach the limits of the new and expanded labor pool. We will solve the problem by replacing programmers with machines. Computers will program themselves.

I started working on automated programming and applying evolution to generate software in 1992. We didn't know in those days how to build big, useful systems, or how to merge the work of man and machine. Now the way is clear.

Machines can contribute code the same way that people contribute. In a continuous delivery process, each new bit of code gets written by a contributor, and then run through a process where it gets tested, reviewed, and accepted into the mainline version. Ten years ago, contributors were almost always people who worked on a professional team. Now, contributors often come from outside the company or from an amorphous open source community. With enough automated testing, and a simple review process, we can qualify their changes and feel confident accepting them. A machine can feed code changes into this same process. We will test and review each change without knowing whether the contributor is a person or a machine.
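A minimal sketch of such a gate, with the data shapes and hooks invented for illustration: it runs the same tests and the same review on every change, and never looks at who, or what, contributed it.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Change:
        contributor: str  # "alice" or "bugfix-bot"; the gate never checks
        diff: str

    def gate(change: Change,
             run_tests: Callable[[str], bool],
             review: Callable[[str], bool]) -> bool:
        # Accept a change into the mainline only if tests and review pass.
        return run_tests(change.diff) and review(change.diff)

    # Both contributions flow through the identical process.
    always_pass = lambda diff: True
    print(gate(Change("alice", "fix typo in parser"), always_pass, always_pass))
    print(gate(Change("bugfix-bot", "add null check"), always_pass, always_pass))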

The next step is automated bug fixing. I have recently spoken with several startup teams that are designing systems to fix bugs. Their machines will find automated tests that fail, and then find code changes that pass the tests. These machines use a variety of approaches. They can look for bugs that fit a pattern, such as an incorrect variable name. They can search huge databases of similar code to find patterns that work. They can make random changes (mutations, in the language of evolution) until they get a good result.
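To make the mutation approach concrete, here is a toy sketch. The buggy function and the single mutation rule (swap one identifier occurrence for another) are assumptions for illustration; a real system would draw on pattern databases and large code corpora.

    import re

    # Toy buggy function: the second 'lo' should be 'x'.
    BUGGY = "def clamp(x, lo, hi):\n    return max(lo, min(hi, lo))\n"

    def passes_tests(source):
        env = {}
        try:
            exec(source, env)
            clamp = env["clamp"]
            return (clamp(5, 0, 10) == 5 and
                    clamp(-1, 0, 10) == 0 and
                    clamp(99, 0, 10) == 10)
        except Exception:
            return False

    def mutations(source, names):
        # Yield variants that swap one identifier occurrence for another.
        pattern = re.compile(r"\b(" + "|".join(names) + r")\b")
        for m in pattern.finditer(source):
            for repl in names:
                if repl != m.group(1):
                    yield source[:m.start()] + repl + source[m.end():]

    for variant in mutations(BUGGY, ["x", "lo", "hi"]):
        if passes_tests(variant):
            print("found a fix:")
            print(variant)
            break

On this toy input, the search replaces the misplaced 'lo' with 'x' and the tests pass.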

In the bug fix process above, a human reviewer can accept or reject a change suggested by a computer. This is the building block of directed evolution, in which computers propose new versions, and humans pick the versions they like. Back in 1993, Karl Sims used directed evolution to create spectacular works of art. According to Sims, "'Genetic Images' is a media installation in which visitors can interactively 'evolve' abstract still images. A supercomputer generates and displays 16 images on an arc of screens. Visitors stand on sensors in front of the most aesthetically pleasing images to select which ones will survive and reproduce [through random mutation] to make the next generation." A reviewer from Wired wrote that "successive rounds...generate images of unbelievable beauty." Now this technique is used to design many types of products and packaging.
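The mechanics are simple enough to sketch. In this toy version, the "genome" is just a list of numbers standing in for an image or a design (an assumption for illustration); the computer proposes random variants, and a human picks the survivor for each generation:

    import random

    def mutate(genome):
        # Copy the parent and randomly perturb one gene.
        child = list(genome)
        child[random.randrange(len(child))] += random.uniform(-1, 1)
        return child

    def evolve(generations=5, population=4):
        parent = [0.0] * 8  # stand-in for an image or a design
        for gen in range(generations):
            variants = [mutate(parent) for _ in range(population)]
            for i, v in enumerate(variants):
                print(gen, i, [round(g, 2) for g in v])
            # The human is the fitness function.
            pick = int(input("which variant survives? "))
            parent = variants[pick]
        return parent

    if __name__ == "__main__":
        evolve()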

Directed evolution is used in online marketing, where computers make changes to Web pages or advertisements to find layouts that entice more users to click "buy." The autocomplete feature in a programmer's editor (like autocomplete in a search app) is another example of this approach. The editor shows an option for completing a line of code, and the programmer can accept the option or keep typing. We can imagine a smarter version of this feature which pulls from a big database of prior code and templates, presents multiple options, and learns the programmer's style.

That brings us to less-directed evolutionary programming - an imitation of biological evolution. In evolutionary programming or genetic programming, computers make random changes to code and then run automated "fitness tests" to decide which versions to keep. Then they use a "genetic algorithm" to make new variations from the winners. This process continues until the code passes all fitness tests, or until the computers go rogue and take over the world. The automated tests and test layers in a Continuous Delivery system can serve as a fitness test environment for genetic programming. However, there is one problem. Genetic algorithms tend to produce code that is unintelligible to humans. They don't think about code readability. They just generate garbage code and test it until they happen to find a piece of garbage that works better than your finely crafted code. So it's not a good idea to mix evolved code into human-written software. We can safely incorporate this ugly-but-effective computer-generated code if we isolate it in a separate service. We can let the computers build and maintain complete services, which can run in our matrix of services beside services maintained by human teams.
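Here is a toy sketch of the whole loop, with the target behavior (match x*x + 1) and the operator set invented for illustration. The computer breeds random expression trees, scores them with an automated fitness test, and makes new variations from the winners; the result usually works, and usually reads like garbage.

    import random

    OPS = ["+", "-", "*"]

    def random_tree(depth=3):
        # Leaves are the variable x or a small constant.
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", random.randint(1, 5)])
        return [random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1)]

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, int):
            return tree
        op, a, b = tree
        a, b = evaluate(a, x), evaluate(b, x)
        return a + b if op == "+" else a - b if op == "-" else a * b

    def mutate(tree):
        # Make a variation of a winner by replacing a random subtree.
        if not isinstance(tree, list) or random.random() < 0.3:
            return random_tree(2)
        new = list(tree)
        i = random.randrange(1, 3)
        new[i] = mutate(new[i])
        return new

    def fitness(tree):
        # Automated fitness test: squared error against the target x*x + 1.
        return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

    def evolve(pop=100, gens=50):
        population = [random_tree() for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness)
            if fitness(population[0]) == 0:
                break
            winners = population[:20]
            population = winners + [mutate(random.choice(winners)) for _ in range(pop - 20)]
        return population[0]

    print(evolve())  # the winning tree is correct more often than it is readable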

I expect to see the computerized services start by taking on pattern recognition tasks. Pattern recognition tasks have the characteristic that it's easy to tell the computer what to do and to specify the tests - "just find all of the pictures of cats" - but very hard for a human to write the code. The same tasks may be accomplished more efficiently by neural nets and neuromorphic processors. Neural nets are another machine-learning technology that has been incubating since the 1980s and is now ready for packaging into services. We can even use evolutionary techniques to wire neural computers together with sequential computers.

Taken together, these techniques are the first steps toward a technology of innovation. By treating innovation as a reproducible technology based on evolution, we can incrementally improve it and increase its speed and scope.

Ray Kurzweil imagines a world where computers get faster and faster, and start learning faster and faster, until the process accelerates into an instant that goes beyond the human perception of time. He calls this the "singularity," and he expects increased computing power to drive it. I doubt that you can achieve this effect just by increasing computing power. However, I believe that it is possible to solve a lot of problems by understanding and deploying the technology of innovation. We will start to understand the engine of creation: the evolutionary machinery that created life, that created computers, and that will create what comes next.