When I was around 6, I used to play a lot on my dad's phone. It was a Nokia 3310, and I was addicted to playing Snake. That game was extremely entertaining, and I spent literal hours trying to finish it (spoiler alert: I never did).
With no phone or computer of my own, this was the closest I got to what chips & computing could do. It seems silly compared to what we have today, but at the time, it was peak technology.
And that’s how far we’ve come!
I later got the chance to get my hands on better technology and became interested in software development very early on. Every year that passed came with new technology: faster, thinner, better. And much of this was due to how much computing power you could fit into a small form factor. That’s where microprocessors come in.
A computing device (a computer, a phone, a tablet) is made up of many parts, but some of the main ones are the processor, the battery, and the screen.
To build a thin and light device that still stays powerful, you have to keep the processor as small and as power-efficient as possible; otherwise you’ll need a huge battery or a lot of space for the processor itself.
Roughly speaking, a processor is mostly just a bunch of transistors packed together very tightly.
The more transistors you’re able to put on a chip, the “faster” it will be.
Moore’s Law is named after Intel cofounder Gordon Moore. In 1965, he observed that transistor technology was advancing so fast that the number of transistors you could fit on a chip was doubling roughly every year, and in 1975 he revised that pace to a doubling every two years (the often-quoted 18 months is a later popularization). Roughly speaking, that meant you could expect chips twice as “fast” every two years.
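To make that rule of thumb concrete, here is a minimal Python sketch of the projection (the function name and the example numbers are illustrative, not anything official):

```python
def moore_projection(initial_transistors, years, doubling_period=2.0):
    """Transistor count predicted if it doubles every `doubling_period` years."""
    return initial_transistors * 2 ** (years / doubling_period)

# Starting from 2,300 transistors, a doubling every two years predicts
# four times as many transistors after four years:
print(moore_projection(2_300, 4))  # 9200.0
```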
As an example, Intel’s very first microprocessor was the Intel 4004, released in 1971, which held 2,300 transistors on a 10 μm technology node (micrometer, 0.000 01 m, 10^-5 m).
In 1999, Intel announced the Intel Pentium III, a processor counting 9.5 million transistors on a 250 nm technology node (nanometer, 0.000 000 25 m, 2.5x10^-7 m).
In just 28 years, that’s an increase of about 4,100 times in transistor count. If Moore’s law held exactly, with the count doubling every two years, we’d expect an increase of 2^(28/2) = 2^14 ≈ 16,000 times.
Moore’s law being more of a rule of thumb than an exact law, we can consider that it held at the time, even though the expected and actual figures differ by a factor of about 4.
But let’s look at a more recent example: a few days ago, Intel announced that it will launch the Intel Core i9-12900K, a chip counting 2.95B transistors on a 7 nm technology node (nanometer, 0.000 000 007 m, 7x10^-9 m).
We’re talking about features roughly 1,400 times smaller and about 1.3 million times more transistors on a single chip.
While this is incredibly impressive, there are 50 years between the release of those two chips. Following Moore’s law naively, we should have gone from 2,300 transistors to 2,300 x 2^(50/2) ≈ 77 billion transistors, about 26 times more than what we actually got.
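As a quick sanity check, the same little projection function can compare Moore’s law with the transistor counts quoted above (a back-of-the-envelope sketch using only the figures from this post, not precise industry data):

```python
def moore_projection(initial_transistors, years, doubling_period=2.0):
    """Transistor count predicted if it doubles every `doubling_period` years."""
    return initial_transistors * 2 ** (years / doubling_period)

intel_4004 = 2_300              # 1971
pentium_iii = 9_500_000         # 1999
core_i9_12900k = 2_950_000_000  # 2021

# 1971 -> 1999: the prediction overshoots reality by roughly 4x.
predicted_1999 = moore_projection(intel_4004, 1999 - 1971)
print(f"1999: predicted {predicted_1999:,.0f} vs actual {pentium_iii:,} "
      f"({predicted_1999 / pentium_iii:.1f}x too high)")

# 1971 -> 2021: the prediction overshoots reality by roughly 26x.
predicted_2021 = moore_projection(intel_4004, 2021 - 1971)
print(f"2021: predicted {predicted_2021:,.0f} vs actual {core_i9_12900k:,} "
      f"({predicted_2021 / core_i9_12900k:.1f}x too high)")
```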
If this isn’t evidence enough that Moore’s law no longer holds for transistor counts, I don’t know what is.
In 2015, Nvidia’s CEO, Jen-Hsun Huang, declared that Moore’s law no longer applied in terms of computing power. At the time especially, this was a bold move: Moore’s law had been fueling a lot of interest in technology and was seen as the pinnacle of computing development. Even for the CEO of Nvidia, such a claim was bound to meet a lot of pushback. But the facts and numbers don’t lie.
There's actually a very good explanation for this slowdown. When you try to miniaturize transistors and pack as many as possible onto a little piece of silicon, you start running into fundamental limits of physics (for example, the speed of light as a hard limit on signal propagation, and quantum effects such as tunneling that kick in at very small scales).
With our current chip architectures, those limits cannot really be overcome. And even though we talk a lot about new ways of rethinking computing, the fundamentals have basically stayed the same for years. Quantum computing and other technologies get a lot of attention, but as of now, none of them can really handle our current computing needs.
One of the reasons we invested so much in miniaturizing computing was to get mobile devices of a small size. Everyone wants a MacBook that is slim and light but still performs extremely well, and that’s where those chips come into play: the enormous computers of the past are simply not an option nowadays.
Everyone has an extremely powerful machine in their pocket, but software continues to grow in computing demands. Examples include the transition to 4K from 1080p, or the support of 120 fps compared to the historical 24 fps.
But what if, instead of trying to cram as much power as possible into a small device that you carry with you, you could find a way to access that power from a distance and only have an “empty shell” of a device, with a screen and input methods?
The idea that you can run a lot of the computation on a distant machine is not new. It’s the whole concept behind websites communicating with remote servers that do the heavy lifting for you.
Most machines connected to the internet today can stream video; Netflix made sure of that. And with a good enough internet connection, your whole OS can run on a distant server and stream the pixels back to you!
That’s actually the whole concept behind Flaneer: incredibly powerful machines, not limited by size, accessible from any device, be it a laptop, a tablet, or even a phone!
You can start testing our solution, completely free of charge.
Let's meet here to revolutionize the way you use your computer.