Running code and infinite scaling through the Supercloud

John Graham-Cumming, CTO, Cloudflare.

The Internet was not built for what it has become. The cloud was not designed for what it must become.

That sentence expresses the idea that the Internet, which started as an experiment, has blossomed into something we all rely on for our daily lives and work. The Internet as originally designed was not enough; it needed security, performance and privacy.

By its nature the cloud was a virtualisation of the older real-world infrastructure and not a radical rethink of what computing should look like to meet the demands of Internet-scale businesses. It’s as if steam locomotives were replaced with efficient electric engines but still required a chimney on top and stopped to take on water every two hundred miles.

The cloud replaced the rituals of buying servers and installing operating systems with new, now familiar rituals: choosing regions, provisioning virtual machines, and keeping code artificially warm.

But along the way, glimpses of light are seen through the cloud in the form of lambdas, or edges, or functions, or serverless. All are trying to give a name to a model of cloud computing that promises to make developers highly productive at scaling from one to Internet-scale. It is a model that, rather than virtualising machines or disks or wrapping things in containers, says: write code, we will run it, and do not sweat details like scaling or location.

We are calling that the Supercloud.

The foundations of the Supercloud are compute and data services that make running any size application efficient and infinitely scalable without the baggage of the cloud as it exists today.
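
To make that concrete, here is a minimal sketch of the model using Cloudflare Workers' module syntax in TypeScript. The handler itself is hypothetical; the point is what is absent: no region selection, no VM provisioning, no autoscaling configuration.

    // A complete Worker: one exported fetch handler, deployed everywhere at once.
    // There is nothing here about regions, machines or scaling.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        return new Response(`Hello from ${url.pathname}`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };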

Just as programmers did not always want to think in database-sized chunks, they should not have to think about VM- or container-sized chunks. It is inefficient and has nothing to do with the actual job of writing code to create a service. It is unnecessary work that distracts from the real value of programming something into existence.

The theory of computing points away from dedicated machines, virtual or real, and towards code and data that run on the Supercloud, which handles the details of code execution and data locality automatically and efficiently.

So, whether you write your code by breaking it up into functions, or ship large pieces of functionality or entire programs, the foundations of the Supercloud mean that your code benefits from its efficiency.

The Supercloud makes scaling easy because no one has to think about how many VMs to provision, and no one has to keep hot standby VMs in case there’s a flood of visitors. Just as MapReduce, which traces its heritage to the lambda calculus, scales up and down, so should general-purpose computing.

And it’s not just about scaling. In the Supercloud both code and data are mobile and move around the network. Attach data to the code (such as with Durable Objects; hello, Actor model) and you have a foundation for applications that can scale to any size and move close to users as needed to provide the best performance.
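
As a sketch of what attaching data to code can look like, here is a hypothetical per-user counter written as a Durable Object in TypeScript. The COUNTER binding name and the counter logic are assumptions for illustration; the idFromName/get addressing pattern is the standard Durable Objects one, and the object's storage travels with the object.

    export interface Env {
      // Hypothetical binding name; configured alongside the Worker.
      COUNTER: DurableObjectNamespace;
    }

    // The Durable Object: its code and its data live (and move) together.
    export class Counter {
      constructor(private state: DurableObjectState, private env: Env) {}

      async fetch(request: Request): Promise<Response> {
        let value = (await this.state.storage.get<number>("value")) ?? 0;
        value += 1;
        await this.state.storage.put("value", value);
        return new Response(String(value));
      }
    }

    // The Worker routes each user to their own object instance.
    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        const user = new URL(request.url).searchParams.get("user") ?? "anonymous";
        const id = env.COUNTER.idFromName(user); // one object per user
        return env.COUNTER.get(id).fetch(request); // runs wherever that object lives
      },
    };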

Alternatively, if your data is immovable, we move your code closer to it, no matter how many times you need to access it.

Not only that, but working at this level of flexibility means that code enforcing a data privacy or data residency law about where data can be processed or stored can operate at the level of individual users or objects. The same code can behave differently, and even be executed in a completely different country, based on where its associated data is stored.
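
For instance, Durable Objects can be restricted to a jurisdiction, so an individual user's object, and therefore the code that touches it, stays inside a legal boundary. A minimal sketch, reusing the hypothetical COUNTER binding from above and assuming a simple per-user EU flag:

    // Hypothetical: EU users' objects are created in, and never leave, the EU.
    function counterStubFor(env: Env, userId: string, isEuUser: boolean): DurableObjectStub {
      const ns = isEuUser ? env.COUNTER.jurisdiction("eu") : env.COUNTER;
      const id = ns.idFromName(userId); // same code path, different legal region
      return ns.get(id);
    }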

A Supercloud has two interesting effects on the cost of running a program. Firstly, it makes it more economical because you only run what you need. There’s never any need for committed VMs waiting for work, or idle machines you are paying for just in case. Code either runs or it does not; it scales up and down as needed. You only pay for precisely what you need.

Secondly, it creates a more efficient compute platform, which is better for everyone. It forces the compute platform (that is, us) to be as efficient as possible. We have to be able to start code quickly for performance and scale-up reasons. We need to use CPUs efficiently, because no customer is paying us to keep idle CPUs around.

And it is better for the environment, because Supercloud machines run at very high levels of utilisation. This level of efficiency is what allows our platform to scale to 10 million requests.

And this compute platform scales well beyond a machine, or a datacentre, or a country, it scales to the size of the Internet. Software allocates resources automatically across the globe, moving connections, data and processing around for high efficiency and optimal end user experience.

The Supercloud is performant, scalable, available, private, and cost-efficient. Choosing a region for your application, or provisioning virtual machines, or working out how to auto-scale containers, or worrying about cold starts seems ridiculous, hard, anachronistic, a waste of time, rigid and expensive.

There are over a million developers building on the Supercloud.

Each of those developers wants to get code running on one machine and perfect it. It is so much easier to work that way. We just happen to have one machine that scales to the size of the Internet: a global, distributed supercomputer.
