How Chernobyl could have been prevented with IoT and Blockchain

In The Chernobyl Podcast, Mazin said: “The RBMK [the reactor type used at Chernobyl] was designed to produce an enormous amount of power very cheaply, and in order to do that, certain things were implemented.”

He mentioned that the Soviets knew about the design flaw in the reactor but had not changed it.

He also said: “People knew – there had been essentially a mini-Chernobyl – in fact, there had been a couple of mini-Chernobyls and by mini-Chernobyl I mean the phenomenon that leads to Chernobyl exploding had happened in a couple of other reactors years earlier.”

He argues that the workers at the reactor did not stop the test when it became dangerous because of the culture in the plant.

To begin with, automation cannot correct design errors, but it can protect a plant from their consequences. The basic design error at Chernobyl was that scramming (emergency shutdown) at low loads caused a temporary, self-accelerating power surge. This happened because the reactor had a positive void coefficient, whereas properly designed reactors have a negative one.

In the Chernobyl reactors, at loads under 700 MWt the void coefficient (VC) was positive (+VC), and the operators either had not been told about this counter-intuitive characteristic or did not understand it. The surge on scram was made worse because the control rods had graphite tips and were 1.3 meters shorter than necessary.
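
To make the “self-accelerating” part concrete, here is a toy illustration, not a reactor model, with all numbers invented for the sketch: the sign of the void coefficient decides whether a small power disturbance dies out or keeps growing.

```python
# Toy illustration (not a reactor model): how the sign of the void
# coefficient turns a small power disturbance into either a damped
# or a self-accelerating response. All numbers are made up.

def simulate(void_coefficient: float, steps: int = 10) -> list[float]:
    power = 1.0          # relative power, 1.0 = nominal
    power += 0.05        # small initial power bump
    history = [power]
    for _ in range(steps):
        # More power -> more steam voids; the void coefficient decides
        # whether those voids add reactivity (+) or remove it (-).
        power += void_coefficient * (power - 1.0)
        history.append(power)
    return history

print("negative VC:", [round(p, 3) for p in simulate(-0.5)])  # disturbance dies out
print("positive VC:", [round(p, 3) for p in simulate(+0.5)])  # disturbance grows
```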

The Test That Caused the Accident

At Chernobyl, the accident was triggered by a “safety test.” Its purpose was to determine whether, if the external power grid failed, the residual rotational energy (inertia) of the turbines would be enough to supply the plant with electricity for roughly a minute, until the backup diesel generators (DG) started up.

The test should have been performed with thermal power above 700 MWt, where the void coefficient is negative (-VC). But the operators, in a hurry at 1 A.M. and unaware of the consequences, started the test below this minimum power and therefore under +VC conditions. They also ran the test “in manual” and disabled the turbine generator’s safety systems, so the main process computer could neither shut down the reactor nor even reduce its power.
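
These preconditions are exactly the kind of rules an automated control layer can enforce. Below is a minimal sketch, with hypothetical function and parameter names, of an interlock that simply refuses to start the rundown test when the documented conditions are not met:

```python
# Hypothetical interlock sketch: refuse the turbine rundown test unless
# every documented precondition holds.

MIN_TEST_POWER_MWT = 700  # minimum thermal power required by the test programme

def may_start_rundown_test(thermal_power_mwt: float,
                           safety_systems_enabled: bool,
                           scram_available: bool) -> bool:
    """Return True only if all preconditions for the test are satisfied."""
    if thermal_power_mwt < MIN_TEST_POWER_MWT:
        print(f"Blocked: power {thermal_power_mwt} MWt is below {MIN_TEST_POWER_MWT} MWt.")
        return False
    if not safety_systems_enabled:
        print("Blocked: turbine/reactor safety systems are disabled.")
        return False
    if not scram_available:
        print("Blocked: automatic shutdown (scram) is not available.")
        return False
    return True

# Roughly the conditions on the night of the accident (illustrative values):
# low power and bypassed safety systems, so the interlock refuses the test.
print(may_start_rundown_test(thermal_power_mwt=200,
                             safety_systems_enabled=False,
                             scram_available=True))
```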

Based on the series, even after the accident the operators were making decisions based on what they believed rather than on real data about what was happening; even after taking measurements, they blamed the measurement devices. From there they began distorting the picture and lying about the situation, and as a result the committee decided to seal off the city instead of evacuating it. In other words, there was no transparency at all: the goal was to make the problem look like a normal situation with everything under control. Worse still, some tried to use the situation to catch the eye of the Soviet Union with “great crisis management.”

In the case of an automated IoT system, process control and automation of operations would not have allowed the test to start at all. Even if the accident had still happened because of the design errors, the data would have been stored on a blockchain that all institutions and parties could access, giving them transparent, secure, and tamper-proof data from the site. That would have allowed better control of the situation, far less damage, and thousands of lives saved.
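
To show what “tamper-proof data” can mean in practice, here is a minimal sketch of an append-only, hash-chained log of sensor readings. It is a toy stand-in for a real blockchain, and the sensor names and values are made up for the example:

```python
import hashlib
import json
import time

# Toy append-only log: each entry stores the hash of the previous entry,
# so editing any past reading invalidates every hash that follows.

class HashChainLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self.last_hash = "0" * 64  # placeholder "genesis" hash

    def append(self, reading: dict) -> dict:
        entry = {
            "timestamp": time.time(),
            "reading": reading,
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "reading", "prev_hash")}
            payload = json.dumps(body, sort_keys=True).encode()
            if entry["prev_hash"] != prev or entry["hash"] != hashlib.sha256(payload).hexdigest():
                return False
            prev = entry["hash"]
        return True

log = HashChainLog()
log.append({"sensor": "thermal_power_mwt", "value": 200})   # made-up reading
log.append({"sensor": "thermal_power_mwt", "value": 1600})  # made-up reading
print(log.verify())                                         # True

log.entries[0]["reading"]["value"] = 700                    # try to rewrite history
print(log.verify())                                         # False: the chain is broken
```

Because every entry includes the hash of the one before it, rewriting an old reading breaks verification for everything that follows. That is the property that would have made it hard to hide the real state of the plant from the other parties reading the log.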

This is why it is so important that the future be built on fundamental core values such as transparency, and that all processes, starting with industrial ones, become data-centric. That is Cortex’s vision: to build a data-centric future with transparency at its core.