The first enterprise computers were so expensive that, much like executive salaries, any one organisation could only afford a few, and they had to be centralised. As the cost of computing decreased, using a remote terminal to run programmes on a central server started to make less sense, and companies began using desktop machines. Modems and the early consumer networks of CompuServe, Prodigy, and AOL segued into the age of the Internet and the World Wide Web, making it possible for information to flow more freely than ever before. But the way data flowed – relying on user commands and poorly configured networks – made it unreliable for exchanging important information.

As the Web evolved, social networks and cloud-based products challenged the effectiveness of using desktop operating system (OS) email and attachments to send and receive information. In the IT world, the emergence of cloud computing represented a synthesis of the old models of centralised computing and the new models of innovation and independence, fostered by an empowered workforce and connected customers. But the data we’d structured, and the security, stability, and speed of the networks it travelled on, had a lot of catching up to do. As clouds become the norm, the way we conceive of innovation and risk is changing. Almost any application or device can be connected to the Internet, and vast pools and streams of data have become key assets for both individuals and companies.

At this point, creating a fixed technology infrastructure or strategy just doesn’t make sense. Applications and infrastructures are now built with an agile mindset, so they can be adapted quickly to purposes their coders never originally conceived of. The cloud promises enormous potential for collaboration and exponential increases in value, speed, and efficiency, but it also brings with it an explosion of potential risks and vulnerabilities.

We’re beginning to look towards, and plan for, the next evolution of data and computing – the cyborg age – one that brings science fiction concepts like biological augmentation and artificial intelligence into focus. The cyborg, short for cybernetic organism, is a way of thinking about a near future in which the distinction between our human selves and our digital selves becomes less clear. We’re already moving into this age. For example, think of how much more capable we’ve become during international travel by using a smartphone with mapping and translation tools rather than relying on paper maps and street signs. We’re even experimenting with previously unthinkable concepts like implanting chips into our bodies and augmenting our senses of sight, touch, and hearing with digital input.

Driverless cars and biological, digital implants are just the beginning of an era in which digital technology could become crucial to our survival and quality of life. As a culture, we’re largely unprepared for the legal, regulatory, and social implications of this evolution. Technologically, there are still many complex problems we must solve to make technologies of convenience reliable and safe enough to become technologies of survival. If we’re going to move into the future responsibly, and manage the risks that come with these new opportunities, we’re going to have to shift our ways of thinking about data, privacy, the nature of business, and security.