
33) 71st Republic Day 2020 highlights | Beating Retreat ceremony on Attari-Wagah border on Republic Day

India Republic Day -- India celebrates its 71st Republic Day today. On this day in 1950 the Constitution of India came into force. The Republic Day parade, considered the main attraction of the day's celebrations, was held along Rajpath and lasted about 90 minutes. Brazilian President Jair Bolsonaro was the chief guest at the parade. Before the parade began, Prime Minister Narendra Modi paid tribute at the National War Memorial, and President Ram Nath Kovind unfurled the national flag, accompanied by General Manoj Mukund Naravane, Chief of the Army Staff; Admiral Karambir Singh, Chief of the Naval Staff; and Air Chief Marshal Rakesh Kumar Singh Bhadauria, Chief of the Air Staff.

5:41 pm IST: PM Narendra Modi arrives at Rashtrapati Bhawan for the At Home reception hosted by President Ram Nath Kovind.

5:12 pm IST: Beating Retreat ceremony on the Attari-Wagah border on Republic Day.

4:36 pm IST: Air India distributes 30,000 national flags to passengers on Republic Day.

Supercomputer

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, supercomputers have existed that can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, and molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals).
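As a quick sanity check on those units, 100 petaFLOPS is 100 x 10^15 = 10^17 floating-point operations per second. Below is a minimal Python sketch of the conversion; the prefix table and the printed figure are purely illustrative, not measurements.

    # Illustrative FLOPS unit prefixes and a conversion helper.
    PREFIXES = {
        "teraFLOPS": 1e12,
        "petaFLOPS": 1e15,
        "exaFLOPS": 1e18,
    }

    def to_flops(value: float, unit: str) -> float:
        """Convert a rate given in a prefixed unit to plain FLOPS."""
        return value * PREFIXES[unit]

    # 100 petaFLOPS is the 10^17 FLOPS threshold cited above.
    print(f"{to_flops(100, 'petaFLOPS'):.0e} FLOPS")  # 1e+17 FLOPS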

History

In 1960 UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology. Also among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis. The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester.

Special purpose supercomputers

A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, achieving better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess, Gravity Pipe for astrophysics, MDGRAPE-3 for protein structure prediction and molecular dynamics, and Deep Crack for breaking the DES cipher.

Energy usage and heat management

Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers. The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system, to hybrid liquid-air cooling systems, to air cooling with normal air-conditioning temperatures. A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity. The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year. Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies.
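The running-cost figure quoted above can be reproduced directly. Here is a small Python sketch of the arithmetic, using the 4 MW draw and $0.10/kWh price mentioned in the text; any other values are illustrative.

    # Back-of-the-envelope power-cost calculation from the figures above.
    power_mw = 4.0          # system power draw in megawatts (roughly Tianhe-1A)
    price_per_kwh = 0.10    # electricity price in dollars per kilowatt-hour

    power_kw = power_mw * 1000                  # 4 MW = 4000 kW
    cost_per_hour = power_kw * price_per_kwh    # 4000 kWh each hour -> $400
    cost_per_year = cost_per_hour * 24 * 365    # roughly $3.5 million

    print(f"${cost_per_hour:,.0f} per hour, ${cost_per_year:,.0f} per year")
    # $400 per hour, $3,504,000 per year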

Software and system management

Operating systems

Since the end of the 20th century, supercomputer operating systems have undergone major transformations, driven by changes in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems toward the adaptation of generic software such as Linux. Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux derivative on server and I/O nodes. While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources.
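As a rough illustration of that allocation problem, the following is a deliberately simplified Python sketch of a job manager that must reserve compute nodes and I/O nodes together. It is a toy model, not how CNK/CNL-based systems or production batch schedulers actually work, and all names in it are hypothetical.

    # Toy model: a job needs both compute nodes and I/O nodes, and the
    # job manager reserves them together or not at all.
    from dataclasses import dataclass, field

    @dataclass
    class Cluster:
        free_compute: int          # nodes running a lightweight kernel
        free_io: int               # nodes running a full Linux derivative
        running: list = field(default_factory=list)

        def submit(self, job_name: str, compute_nodes: int, io_nodes: int) -> bool:
            """Start a job only if both resource types are available."""
            if compute_nodes <= self.free_compute and io_nodes <= self.free_io:
                self.free_compute -= compute_nodes
                self.free_io -= io_nodes
                self.running.append(job_name)
                return True
            return False   # job stays queued until resources free up

    cluster = Cluster(free_compute=4096, free_io=64)
    print(cluster.submit("cfd_run", compute_nodes=2048, io_nodes=16))   # True
    print(cluster.submit("climate", compute_nodes=3000, io_nodes=16))   # False: not enough compute nodes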

Distributed supercomputing

Opportunistic approaches

Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations. The fastest grid computing system is the distributed computing project Folding@home (F@h). As of April 2020, F@h reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems. The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of distributed computing projects. As of February 2017, BOINC recorded aggregate processing power contributed by hundreds of thousands of active computers (hosts) on the network.
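To see why loosely coupled volunteer machines suit embarrassingly parallel workloads, here is a toy Python sketch in which independent work units are farmed out and the results merged at the end. It is only an assumption-laden stand-in for how projects such as F@h or BOINC split work, not their actual client code; tightly coupled problems like fluid dynamics need constant communication between units, which this model cannot provide.

    # Each work unit is independent, so it could run on any volunteer machine
    # and be merged later; here a local process pool stands in for the grid.
    from concurrent.futures import ProcessPoolExecutor

    def score_work_unit(unit_id: int) -> float:
        """Stand-in for one independent work unit (e.g. a single folding trajectory)."""
        total = 0.0
        for i in range(1, 100_000):
            total += (unit_id * i) % 7
        return total

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(score_work_unit, range(8)))
        print(sum(results))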