Information Technology has played a key role in the growth of science and technology over the last half century. The trigger for this growth was the invention of the transistor, an electronic on/off switch. Packing many small transistors together made it possible to manufacture Integrated Circuits, or ICs, which are sets of electronic circuits on a single piece of semiconductor material.
Uses of Integrated Circuits
An IC can implement memory blocks, logic gates, clocks, A/D and D/A converters, power management units etc., and is often referred to as a chip. By far the most important of all chips is the microprocessor, which executes the instructions inside a computer.
Microprocessors come in two main flavors. The first type, called microcontrollers, are self-contained systems with a processor, memory and peripherals on a single chip. They may be less visible in the media but are part and parcel of automobiles, office equipment, domestic appliances, industrial controls, computer peripherals etc. They are customized for a particular use and require much less programming. The second type are general purpose microprocessors, which power personal computers, servers, mobile phones etc.
Speed
The attribute of microprocessors that has captured public attention the most is speed. Speed refers to the rate at which a microprocessor executes instructions, often measured in MIPS (million instructions per second). It is a function of clock rate, memory and architecture. Current microprocessors can execute hundreds of thousands of MIPS, while the Intel 4004 could do only 0.092 MIPS.
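The relationship between clock rate and MIPS can be sketched as follows. This is an illustrative calculation only: the instructions-per-cycle figures below are assumptions chosen to mirror the scale of the chips mentioned in the text, not measured values.

```python
def mips(clock_hz: float, ipc: float) -> float:
    """Million instructions per second for a given clock rate and
    average instructions per cycle (IPC)."""
    return clock_hz * ipc / 1e6

# Intel 4004: 740 kHz clock; assuming an instruction took roughly
# 8 cycles (IPC of 0.125), this lands near the 0.092 MIPS figure.
print(mips(740_000, 1 / 8))   # 0.0925

# A hypothetical modern core: 4 GHz clock, assumed IPC of 4.
print(mips(4e9, 4))           # 16000.0 MIPS per core
```

Multiplying the per-core figure by a few tens of cores gives the "hundreds of thousands of MIPS" scale cited above.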
For the layperson, the speed of a microprocessor is synonymous with the clock rate, which is the speed at which a single core of the microprocessor runs. Clock rates can be used to compare microprocessors of the same family, but by themselves they are not a good measure of a microprocessor's performance. Current clock rates are a few GHz; the Intel 4004's 740 kHz clock is many orders of magnitude behind.
Another important factor that decides a microprocessor's performance is on-chip memory, which the CPU uses to reduce the time taken to retrieve data from the main memory. This is called the CPU cache and may have several levels, which hold data based on the frequency of retrieval. Current microprocessors may have CPU caches several MBs in size; the earliest microprocessors had none.
The architecture of a microprocessor plays a key role in its performance. Current microprocessors use pipelining to overlap the execution of multiple instructions, have multiple cores which work as independent processors, keep separate caches for data and instructions, and can execute multiple instructions per cycle.
While the speed and performance of all chips face technical and commercial challenges, the issues are more pronounced for microprocessors. In general, the speed of microcontrollers and other chips is lower than that of microprocessors.
Moore's law
Fabrication of ICs is one of the most technologically complex processes known to humans and requires knowledge of various branches of Physics and Chemistry, including Material Science, Thermodynamics, Optics, Electronics, Magnetism etc. Since their invention, aided by advances in these fields, Integrated Circuits have become smaller, faster, cheaper and more tightly packed.
In 1965, one of the pioneers of the industry, Gordon Moore, predicted that the number of transistors per square inch on integrated circuits would double every year. In 1975 he revised this to doubling every two years. This prediction is referred to as Moore's law. It is not a law of Physics or Chemistry, but it has turned out to be a good predictor of the industry's growth.
In 1971, Intel released the Intel 4004, the first commercial microprocessor, with 2,300 transistors built using a 10,000 nm process. Today's microprocessors have transistor counts in the billions, and process sizes are in the tens of nm.
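The two figures above can be checked against each other. The sketch below projects the 4004's transistor count forward under Moore's two-year doubling; the resulting order of magnitude is consistent with the billions of transistors on modern chips.

```python
def moores_law_count(start_count: int, start_year: int, year: int) -> float:
    """Projected transistor count, doubling every two years."""
    return start_count * 2 ** ((year - start_year) / 2)

# From 2,300 transistors in 1971, fifty years of doubling every
# two years (25 doublings) projects to the tens of billions.
projected = moores_law_count(2_300, 1971, 2021)
print(f"{projected:.2e}")  # on the order of 10^10
```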
The speed at which a microprocessor can operate is inversely proportional to the size of its transistors. Smaller transistors can be packed more closely, so electrical signals take less time to travel between them. Besides, smaller transistors mean that more transistors fit on the chip, which can be used to add functionality that executes in parallel. That also allows bigger CPU caches, which store frequently used data and instructions, deeper pipelines, wider buses etc. Moreover, smaller transistors consume less power and switch faster.
It may seem that performance can be increased simply by putting more transistors on a bigger microprocessor. But there is a restriction on increasing the size of the microprocessor itself. A bigger chip is more likely to contain fabrication defects and results in more wastage of the wafer from which it is made. The former is a technical challenge, while the latter has economic implications.
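The defect argument can be made concrete with a standard first-order yield model (introduced here for illustration; the text does not name one): if defects land randomly on a wafer with density D per cm^2, the fraction of defect-free dies of area A falls exponentially with A.

```python
import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a simple
    Poisson defect model: yield = exp(-D * A)."""
    return math.exp(-defect_density * die_area_cm2)

# Assumed defect density of 0.1 defects/cm^2 (a placeholder figure):
print(f"{poisson_yield(0.1, 1.0):.2f}")  # ~0.90 for a 1 cm^2 die
print(f"{poisson_yield(0.1, 6.0):.2f}")  # ~0.55 for a 6 cm^2 die
```

So a sixfold increase in die area here roughly cuts usable dies per wafer in half, before even counting the extra wafer edge wastage.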
Current Challenges
Commercially available microprocessors have transistor sizes of tens of nm, though research is focused on transistors a few nm in size. At the latter size quantum effects cannot be ignored, and tunneling can happen, whereby an electron passes through the barriers in a transistor. This can make transistors nonfunctional. From the classical viewpoint, a small transistor has thin insulating layers and is prone to current leakage. Another aspect is the cost of manufacturing, which is eroding the benefit of scaling. As transistor size falls, the process to manufacture them gets more complex. It could be that the transition from the currently available 14 nm node used in Xeon-Broadwell to, say, a 7 nm node may actually increase the per unit cost.
Another important problem is the generation of heat. This increases with the frequency of the clock, which nowadays is in GHz. Besides, though heat dissipation per transistor has fallen, overall heat dissipation has risen due to the increase in the number of transistors on a microprocessor. Separately, a higher frequency means transistors get less time to switch, and as this process is not really digital, the chances of error reduce if the voltage is increased. But this implies a cubic dependency of power consumption on clock speed. This has ensured that clock speeds change rather slowly nowadays, and in fact they no longer drive decisions to buy new hardware.
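The cubic dependency follows from the standard dynamic power relation P = C * V^2 * f: power is linear in frequency, but if voltage must also rise roughly in proportion to frequency to keep switching reliable, the combined effect is f^3. The constants below are arbitrary placeholders, not real chip parameters.

```python
def dynamic_power(capacitance: float, voltage: float, freq: float) -> float:
    """Dynamic switching power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq

base = dynamic_power(1.0, 1.0, 1.0)
# Double the frequency, and with it the voltage needed to switch
# reliably in the shorter cycle:
doubled = dynamic_power(1.0, 2.0, 2.0)
print(doubled / base)  # 8.0 -> a 2x clock costs ~8x the power
```

This is why a modest clock bump can blow a chip's thermal budget, and why vendors stopped chasing frequency.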
This implies that sustaining Moore's law is getting cost prohibitive, and other ways to improve the performance of microprocessors need to be developed.
Mitigations

Current implementations
The architecture of microprocessors continues to evolve and has contributed to improvements in their performance. The factor that has done the most to offset the current challenges in recent years is the introduction of multiple cores. Cores implement threading at the hardware level and thus introduce parallelism. The actual benefit depends on the fraction of the software that can run in parallel simultaneously on multiple cores. But the need for increased computing power comes from multimedia applications, Artificial Intelligence, mathematical and biological modelling etc., where calculations are indeed repetitive. The number of cores is generally even, and current microprocessors have up to a few tens of cores. Multiple cores also help reduce power consumption, as they can be operated at reduced speed and the degradation in performance is compensated by the number of cores working simultaneously.
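The dependence on the parallel fraction described above is captured by Amdahl's law (named here for clarity; the text describes the idea without naming it): for parallel fraction p on n cores, speedup = 1 / ((1 - p) + p / n).

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup for a program whose
    parallel_fraction runs concurrently on the given cores."""
    p = parallel_fraction
    return 1 / ((1 - p) + p / cores)

# Even with 32 cores, a program that is only 50% parallel barely
# doubles in speed, while a 95% parallel one gains far more.
print(round(amdahl_speedup(0.50, 32), 2))  # 1.94
print(round(amdahl_speedup(0.95, 32), 2))  # 12.55
```

This is why the repetitive workloads listed above benefit the most: their parallel fraction is close to 1.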
It is also possible to manufacture transistors that are three dimensional. These transistors use less power and suffer less current leakage. Intel's Ivy Bridge series of processors uses this type of transistor.
Potential implementations
It is also possible to create three dimensional ICs. They consume less power, while the additional connectivity between layers improves signal propagation times. Memory chips using this technology are already in the market, e.g. High Bandwidth Memory from AMD.
Cooling systems are also getting overhauled. One approach is to pipe liquid coolant directly into the chip rather than relying on air circulation.
Another option, still at the research stage, is to work with materials other than silicon, which is the material of choice for manufacturing ICs. Silicon has its own set of problems, e.g. serious performance degradation with heat, poor light emitting properties (and hence an inability to work with light based devices), poor hole mobility etc. The alternative materials could be compounds of silicon, germanium or arsenic, or allotropes of carbon, e.g. graphene or nanotubes. Besides, metals like Molybdenum, Hafnium etc. and various elements from Groups III to V of the periodic table are being tried. Transistors smaller than 5 nm have been created at different universities, including at Berkeley (USA), Chungbuk (South Korea) etc.
Another research area focuses on the use of light in ICs. Light is the default medium of choice for data transfer, but it gives way to copper based wires and silicon based chips when it comes to processing data. Microprocessors which directly use light for I/O are an important area of research.
Customized Machines
There are certain types of computers, in use or at the research stage, that can outperform conventional computers for certain types of tasks, though they are not as cost effective as general purpose computers. They have other issues, but they are less affected by the current set of challenges facing microprocessor performance.
Current implementations
Supercomputers provide computing power far in excess of conventional computers. Their computing power is measured in petaflops (10^15 floating point operations per second). Supercomputers with speeds in the tens of petaflops are in use today. They consume massive amounts of resources. For example, the fastest one, Sunway TaihuLight, uses more than 10.6 million cores, consumes 15 MW of electricity and is housed in its own building. They also heat up, but their large size allows various unconventional methods of cooling, including the use of water. They work best for tasks that need repetitive calculations and thus are used in weather forecasting, cryptography, atomic bomb simulations, molecular modelling etc.
Potential implementations
Another promising area is quantum computing, which uses concepts of quantum mechanics, e.g. superposition and entanglement. A quantum computer carries out a number of operations in parallel, each with a varying degree of probability, and so reduces computation time. As of now, research is focused on using quantum computers to solve certain types of problems, e.g. finding the prime factors of a number. The challenge lies in getting qubits (the equivalent of bits in classical computers) to work together, i.e. remain coherent. Only a few tens of qubits have been shown to work that way so far. One company, D-Wave from Canada, is commercializing the technology.
Yet another area of research is DNA computing, which uses DNA and molecular biology. Such computers work using DNA strands, which are slow individually, but millions can work in parallel. However, reading the correct answer from them is still a challenge. Initially they too will be developed to work on specialized problems, e.g. efficient routing between cities or cryptography.
Conclusion
The chip in general, and the microprocessor in particular, has powered the Information Technology revolution of the last half century. Sustaining the same pace of performance improvement in microprocessors faces multiple technological and commercial challenges. This has necessitated various innovations, e.g. changes of materials and architecture, and altogether different ways to do computation. While these can ensure that performance continues to improve in the near term, implications for the distant future cannot be spelt out with the same certainty.