Blog on Supercomputers

Supercomputers

We are often curious about supercomputers, and hearing about them raises many questions. This blog gives an introduction to supercomputers and their uses. Let us move on to our main topic without further delay.

What are supercomputers?

Supercomputers are a class of computers that deliver far higher speed and performance than regular computers. They can process large amounts of data very quickly. Due to their vast size and high expense, supercomputers are rarely used by ordinary people. Large commercial and academic organisations utilise these computers to perform complex computations and simulations.


A supercomputer's performance is measured in floating-point operations per second, or FLOPS. Floating-point operations can be performed natively only on computers with built-in floating-point hardware; machines without it must emulate them in software.

A regular computer's clock speed is measured in megahertz (MHz). Supercomputers' performance is measured on a much broader scale because of their significantly greater computing capacity: they have thousands of processors and can perform billions or even trillions of calculations per second.
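To make the FLOPS figure concrete, here is a minimal Python sketch that estimates a single machine's floating-point rate by timing a matrix multiplication. The matrix size and the use of NumPy are our own illustrative choices, not a standard benchmark (real supercomputer rankings use the LINPACK benchmark):

```python
# A minimal sketch (not a real benchmark) of how FLOPS can be estimated:
# time a dense matrix multiplication and divide the known operation count
# by the elapsed time.
import time
import numpy as np

n = 1024
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# An n x n matrix multiply performs roughly 2 * n^3 floating-point operations.
flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.2f} GFLOPS on this machine")
```

A typical laptop lands in the tens of GFLOPS on this test; the supercomputers discussed below are measured in petaflops, millions of times faster.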

The Evolution of Supercomputers

The concept of the supercomputer originated in the 1960s, when Seymour Cray, an electrical engineer, set out to build the world's fastest computer. Cray is widely known as the "Father of Supercomputers".

At the time, the IBM 7030 Stretch was the fastest computer in the world. In 1964, Cray's CDC 6600 was released; running at 40 MHz and performing about three million floating-point operations per second, it became the fastest computer in the world. The CDC 6600 outperformed most contemporary computers by a factor of ten and the IBM 7030 Stretch by a factor of three. The title then passed to its successor, the CDC 7600, in 1969.


Uses of Supercomputers

As previously stated, the supercomputer is out of reach for the average person due to its high cost and complexity. Supercomputers are used to evaluate mathematical models of complicated physical processes, to design and simulate sophisticated systems, and to support research studies. Applications include climate and weather modelling, the evolution of the cosmos, nuclear weapons and power plants, and the discovery of new chemical compounds.

The military uses supercomputers to test new planes, tanks, and other equipment. They are also utilised to understand the effects of war on troops and nations, and to devise effective strategies against adversaries.

In recent years, as supercomputers have become more affordable, more businesses have begun to use them for market research and other business-related tasks. In the film industry, these computers are used to create animations and special effects. Supercomputers also power online services today because they can handle a large number of users; gamers, for instance, rely on such services to play online games. Numerous academic and scientific research institutions, engineering organisations, and large corporations that demand tremendous processing capacity use supercomputers.

Supercomputers in India

Dr. Vijay Bhatkar, known as the "Father of Indian Supercomputers", introduced the Indian PARAM supercomputer. In November 1987, the Centre for Development of Advanced Computing (C-DAC) obtained a three-year budget of Rs 375 million to create supercomputers capable of 1000 Mflops (1 Gflop). Over three development missions, C-DAC produced the "PARAM" (Parallel Machine) supercomputer family.

PARAM 8000

In August 1991, the PARAM 8000, a 64-node system designed from the ground up, was introduced as C-DAC's first machine. Scaled to 256 nodes, the system had a theoretical peak performance of 1 Gflops, though in practice it delivered roughly 100–200 Mflops. The PARAM 8000 was built on a distributed-memory MIMD architecture with a reconfigurable interconnection network.
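As a rough illustration of the distributed-memory MIMD idea, the sketch below runs independent worker processes that share no memory and cooperate only by passing messages. Python's multiprocessing module here is purely a stand-in for real compute nodes and their interconnect, not how the PARAM 8000 was programmed:

```python
# A toy sketch of distributed-memory MIMD: each worker is an independent
# process with its own memory and its own instruction stream, exchanging
# data with the rest of the system only via explicit messages.
from multiprocessing import Process, Queue

def worker(rank, chunk, results):
    # Each "node" computes on its own slice of the data, then sends its
    # partial result back as a message.
    results.put((rank, sum(x * x for x in chunk)))

if __name__ == "__main__":
    data = list(range(1_000))
    n_nodes = 4
    chunks = [data[i::n_nodes] for i in range(n_nodes)]

    results = Queue()
    procs = [Process(target=worker, args=(r, chunks[r], results))
             for r in range(n_nodes)]
    for p in procs:
        p.start()
    total = sum(results.get()[1] for _ in procs)
    for p in procs:
        p.join()
    print("sum of squares:", total)
```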

PARAM 8600

The PARAM 8600 was released in 1992 as an upgrade to the PARAM 8000. C-DAC chose the Intel i860 CPU to increase processing power; each 8600 cluster matched four PARAM 8000 clusters in processing power.

PARAM 9000

The PARAM 9000, first shown in 1994, was created to combine cluster computing and massively parallel processing workloads. Employing a Clos network design, the base system of 32–40 processors could be expanded to 200 CPUs. The PARAM 9000/SS used the SuperSPARC II processor, the PARAM 9000/US the UltraSPARC, and the PARAM 9000/AA the DEC Alpha.

PARAM 10000

The PARAM 10000 was revealed in 1998. This supercomputer featured many independent nodes, each based on a Sun Enterprise 250 server with two 400 MHz UltraSPARC II processors. The base system's peak speed was 6.4 Gflops; a full configuration with 160 CPUs would deliver about 100 Gflops, readily scalable to the Tflop range.

Later on, PARAM Padma, PARAM Yuva, PARAM Brahma, PARAM Ishan and PARAM Siddhi-AI were introduced, each a technologically advanced successor with higher computing power. The PARAM Siddhi-AI supercomputer is India's fastest, with a theoretical peak (Rpeak) of 5.267 Pflops and a sustained maximum (Rmax) of 4.6 Pflops. Its AI capability assists research in innovative materials, combinatorial chemistry, and astrophysics, as well as universal healthcare, disaster prevention, and Covid-19 applications, by enabling faster simulations, diagnostic imaging, and genome sequencing. In November 2020, PARAM Siddhi-AI was ranked 63rd among the world's best supercomputers. It uses NVIDIA DGX SuperPOD networking, along with C-DAC's own HPC-AI engine, software frameworks, and cloud platform.

Supercomputing Challenges

Most of the challenges in supercomputing stem from the interrelationships between hardware, software, and their algorithms. However, there are other challenges as well, such as data movement, fault tolerance, power consumption, and extreme parallelism. In this section we discuss each of these briefly.

Speed of Data Movement

This challenge refers to the lack of technology that can fetch and store data at sufficiently high rates. Fast computation demands not only high execution speed but also a matching data transfer rate, and both must fit within a given power budget, which leads to the next challenge discussed below. In fact, in most cases the time and energy required to move data between main memory and the processing unit exceed the time and energy required for the actual computation.
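One simple way to see this imbalance on an ordinary machine is to compare an operation dominated by data movement with one dominated by arithmetic. In the hedged sketch below (NumPy and the matrix size are our own choices), the element-wise add streams every operand from memory and achieves far fewer FLOPS than the matrix multiply, which reuses data in cache:

```python
# A rough sketch contrasting a memory-bound operation (element-wise add,
# dominated by data movement) with a compute-bound one (matrix multiply,
# which reuses data heavily in cache).
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a + b                      # ~n^2 flops, every operand streamed from memory
add_time = time.perf_counter() - t0

t0 = time.perf_counter()
d = a @ b                      # ~2*n^3 flops, heavy data reuse in cache
mul_time = time.perf_counter() - t0

print(f"add: {n * n / add_time / 1e9:.2f} GFLOPS (memory-bound)")
print(f"mul: {2 * n**3 / mul_time / 1e9:.2f} GFLOPS (compute-bound)")
```

The large gap between the two rates reflects the cost of moving data rather than any difference in the arithmetic itself.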

Fault Tolerance

Fault tolerance is the ability of a system to continue a given computational task despite faults occurring in its hardware or software. In supercomputing, where a huge number of components and peripherals are interfaced into larger systems, and advanced technologies operate at specific, limited voltage levels, providing this feature is yet another challenge.
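One common way long-running jobs tolerate faults is checkpoint/restart: the computation periodically saves its state so that, after a crash, work resumes from the last checkpoint rather than from scratch. The sketch below is a minimal single-process illustration of the idea; the file name and checkpoint interval are arbitrary choices:

```python
# A minimal checkpoint/restart sketch: save progress periodically so a
# crashed run can resume from the last saved state instead of restarting.
import json
import os

CHECKPOINT = "checkpoint.json"   # illustrative file name

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "total": 0}   # fresh start if no checkpoint exists

def save_state(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

state = load_state()              # resume where we left off, if anywhere
for i in range(state["i"], 10_000_000):
    state["total"] += i
    if (i + 1) % 1_000_000 == 0:  # checkpoint every million iterations
        state["i"] = i + 1        # next iteration to run after a restart
        save_state(state)

print("result:", state["total"])
os.remove(CHECKPOINT)             # clean up once the job finishes
```

Real supercomputing systems apply the same principle across thousands of nodes, where the checkpointing itself becomes a significant engineering problem.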

Power Consumption

As discussed earlier, power consumption, together with the speed of data transfer, is among the most fundamental and challenging issues in supercomputing. According to estimates, the average power consumption of an exaflop-scale system built with existing technologies would exceed 600 megawatts. This would not only require a more powerful source of energy but also increase the cost of computing.
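To put 600 megawatts in perspective, here is a quick back-of-envelope calculation of the energy bill alone; the electricity price is an assumed illustrative figure, not a quoted rate:

```python
# Back-of-envelope energy cost of a 600 MW system running for one year.
power_mw = 600
hours_per_year = 24 * 365
price_per_kwh = 0.10  # USD per kWh -- an assumed illustrative price

energy_kwh = power_mw * 1_000 * hours_per_year   # MW -> kW, then kWh
cost = energy_kwh * price_per_kwh
print(f"{energy_kwh:.3g} kWh/year  ->  ${cost:,.0f}/year")
```

Even at this assumed price, the result is on the order of half a billion dollars per year for electricity alone, which is why power efficiency dominates exascale design.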

Level of Parallelism

Faster execution requires a greater number of threads along with multilevel parallelism. In fact, reaching a computation rate of 1 exaFLOP requires about one billion floating-point units, each performing one billion calculations per second. On the parallelism side, this means a billion threads running in parallel, each solving a small segment of a single problem, as the small-scale sketch below illustrates.
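On a vastly smaller scale, the same principle of splitting one problem into segments for many workers looks like this; the worker count and the toy problem are our own illustrative choices:

```python
# A small-scale sketch of extreme parallelism's core idea: divide one
# problem into segments and let several workers process them in parallel.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    segments = [(i * step, (i + 1) * step) for i in range(workers)]
    segments[-1] = (segments[-1][0], n)   # last segment absorbs any remainder

    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, segments))
    print("sum of squares:", total)
```

An exascale machine applies the same decomposition with around a billion concurrent workers instead of eight, which is what makes the software side of extreme parallelism so hard.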


Supercomputers in the future

Today's fastest supercomputer delivers about 442 petaflops of processing performance. Before the end of this decade, supercomputers with capabilities of tens and hundreds of exaflops will be available. This leap in processing speed will open up a whole new world of possibilities.

A future supercomputer may map the human brain, and the creation of super AI with a supercomputer is an even more fascinating prospect. Computer simulations will transform medicine and research: doctors could simulate the outcome of a particular therapy, and exceedingly expensive scientific projects could first be modelled on a computer, saving both money and time.

By 2030, supercomputers will be able to model the Earth's climate system. Thanks to supercomputers, weather forecasts will become far more accurate. Agriculture, like other industries that depend heavily on the weather, would benefit tremendously.

Conclusion

Supercomputers have opened new doors of opportunity. Research, the military, agriculture, and industry already use them extensively. As computing power improves, humans can tackle increasingly complicated problems. Supercomputers push past the limits of what the human brain can calculate, and we may apply them to generate new ideas and gain a better understanding of the cosmos.

The supercomputer has been a critical part of humanity's progress, and humans should handle this technology carefully. The power of a supercomputer can change the trajectory of human history; if that power is misused, the consequences for the entire globe could be disastrous.

Blog by — Suraj Upadhye, Vaibhav Shahabade, Vaibhavi Jorvekar, Manasi Yadav, Yash Wavare.

