Technology
Computer, algorithm, AI: the power technologies climbing the sequence
As renewable generation pushes technological development, algorithms are starting to learn that their actions have consequences. Matthew Farmer considers some of the potential impacts on the power sector.
Imagine a world where industrial artificial intelligence (AI) does not revolutionise the world. After years of eager promises and enthused headlines, any artificial intelligence technology that does not completely overhaul workflows seems like an anti-climax.
Since the power industry first heard the promise of AI, that promise has changed. Those in the energy industry – and the population at large – now better understand the technologies surrounding AI, which have started to materialise.
Today, industrial sensors connect operators directly to the systems powering society. These create massive amounts of information, known as “big data”, as they work. The world understands the word “algorithm” better than ever before, and the potential benefits and harms of computers executing one programmer’s will.
At the same time, the definition of AI has narrowed. Any computer can take an action, and an algorithm can decide that action based on a snapshot of its programmer’s values. For these to become AI, the computer must be able to discard this snapshot, and develop its own values. An AI learns from its actions and mistakes to prevent those mistakes in future. And, while many still wait on the dawn of industrial AIs, this future has started to arrive.
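As a loose illustration of that sequence (not any vendor's actual code), the Python sketch below contrasts a fixed algorithm, which applies its programmer's threshold forever, with a minimal learning monitor that adjusts its threshold after each mistake. The numbers and the one-step update rule are invented for illustration.

# Toy illustration of the computer-algorithm-AI sequence. Thresholds
# and the one-step update rule are invented for illustration only.

THRESHOLD = 80.0  # the programmer's "snapshot" of what counts as too hot

def fixed_algorithm(temperature: float) -> bool:
    """Algorithm: applies a rule frozen at the moment of programming."""
    return temperature > THRESHOLD

class LearningMonitor:
    """'AI' in the article's sense: discards the fixed snapshot and
    adjusts its rule in response to its own mistakes."""

    def __init__(self, threshold: float = 80.0, step: float = 1.0):
        self.threshold = threshold
        self.step = step

    def predict(self, temperature: float) -> bool:
        return temperature > self.threshold

    def learn(self, temperature: float, fault_occurred: bool) -> None:
        predicted = self.predict(temperature)
        if predicted and not fault_occurred:
            self.threshold += self.step  # false alarm: relax the rule
        elif fault_occurred and not predicted:
            self.threshold -= self.step  # missed fault: tighten the rule

monitor = LearningMonitor()
for temp, fault in [(82.0, False), (79.0, True), (85.0, True)]:
    alarm = monitor.predict(temp)
    monitor.learn(temp, fault)
    print(f"temp={temp}: alarm={alarm}, threshold now {monitor.threshold}")

The industrial systems described below are far more sophisticated, but the shape is the same: the rule stops being a snapshot and starts being a product of the machine's own history.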
Behind all predictive maintenance stands a good AI
Danish turbine manufacturer and operator Vestas offers five standard maintenance packages, four of which provide different tiers of preventative maintenance. Preventative maintenance has become the standard across most power generation and distribution, rarely involving automation beyond setting an alarm.
In nuclear, preventative maintenance is the strict industry standard due to the potential ramifications of faults. At the other end of the scale, some hydroelectric stations need so little maintenance they could continue operations for weeks without human input.
Renewables often benefit from remote placement in areas with more powerful wind or sunshine, such as the North Sea or the Australian outback. Placing these generators far from high land costs and potential planning complaints makes projects more lucrative for developers.
However, placing renewables in remote locations raises maintenance costs, unless predictive maintenance can help. As such, renewables have encouraged better and cheaper automation technologies, which now allow maintenance to ascend the sequence: computer, algorithm, AI.
Since machines cannot yet reliably fix other machines, the most a computer alone can do is notify operators of their next scheduled maintenance. With the right inputs, however, an algorithm can predict faults before they become problems.
Predictive maintenance tracks the “vital signs” of a system and alerts operators to irregularities. In the case of a wind turbine, these can include vibration, noise, or temperature anomalies outside regular operational thresholds. Catching faults automatically allows operators to dispatch maintenance crews only when needed, saving time and money.
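As a minimal sketch of that idea, assuming invented signal names and operational limits (a real turbine monitors far more than three signals, as the Vestas figures below show), a threshold check might look like this in Python:

# Minimal predictive-maintenance check: compare a turbine's "vital signs"
# against operational thresholds and flag anything outside them.
# The signal names and limits here are invented for illustration.

OPERATIONAL_LIMITS = {
    "vibration_mm_s": (0.0, 7.1),      # velocity, mm/s
    "noise_db": (0.0, 105.0),          # sound level, dB
    "bearing_temp_c": (-20.0, 95.0),   # temperature, Celsius
}

def check_vital_signs(readings: dict[str, float]) -> list[str]:
    """Return a human-readable warning for each out-of-range reading."""
    warnings = []
    for signal, (low, high) in OPERATIONAL_LIMITS.items():
        value = readings.get(signal)
        if value is not None and not (low <= value <= high):
            warnings.append(f"{signal}={value} outside [{low}, {high}]")
    return warnings

print(check_vital_signs(
    {"vibration_mm_s": 9.3, "noise_db": 98.0, "bearing_temp_c": 101.5}
))
# -> ['vibration_mm_s=9.3 outside [0.0, 7.1]',
#     'bearing_temp_c=101.5 outside [-20.0, 95.0]']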
Algorithms can process sensor data more easily than humans can, and they can also filter out misleading readings before they reach an operator. A minor fault could trigger several warnings, distracting operators from more pressing issues that generate fewer alarms.
Using algorithms to simplify human decision making, and the step beyond
A presentation by Vestas states that reliable fault detection requires monitoring of more than 1,000 alarm limits per turbine. The company uses software to condense alarm data into one human notification per physical fault, graded by severity.
Each fault can then generate up to four notices, each with an estimated time until repair crews must take action. The system itself may also develop faults, which would require action as soon as possible.
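Vestas has not published the internals of its alarm software, but the condensation idea itself is simple to sketch. The Python below groups raw alarms by a hypothetical physical fault identifier and reports one notification per fault at the worst severity seen; the fault names, severity grades, and time-to-action figures are all invented.

# Generic sketch of alarm condensation: group raw alarms by the physical
# fault they point to, and emit one notification per fault, graded by
# the worst severity seen. Fault IDs, grades, and deadlines are invented.
from collections import defaultdict

# Hours until crews must act, indexed by severity grade (1 = most urgent).
TIME_TO_ACTION_H = {1: 4, 2: 24, 3: 72, 4: 336}

raw_alarms = [
    {"fault_id": "gearbox-17", "severity": 3},
    {"fault_id": "gearbox-17", "severity": 2},   # same fault, worse grade
    {"fault_id": "pitch-motor-2", "severity": 4},
    {"fault_id": "gearbox-17", "severity": 3},
]

worst = defaultdict(lambda: 5)  # grades above 4 mean "no notification yet"
for alarm in raw_alarms:
    worst[alarm["fault_id"]] = min(worst[alarm["fault_id"]], alarm["severity"])

for fault_id, grade in sorted(worst.items()):
    print(f"{fault_id}: severity {grade}, act within {TIME_TO_ACTION_H[grade]} h")
# gearbox-17: severity 2, act within 24 h
# pitch-motor-2: severity 4, act within 336 h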
Furthermore, some systems can use sensor readings to specify where potential faults may lie. When properly logged, every fault and fix adds more data to the knowledge base, creating big data to draw from. When software uses this new data to improve future maintenance, it crosses the line into becoming AI.
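A toy sketch of that crossing point, assuming a hypothetical log of inspection results rather than any vendor's actual data format: instead of being hand-set, the alarm threshold is re-derived from the accumulated history of faults and fixes, so every new log entry can shift it.

# Sketch of the step from algorithm to AI: derive the alarm threshold
# from the logged history of faults and fixes rather than fixing it by
# hand. The log format and the statistic used are invented.

fault_log = [
    # (vibration reading at time of inspection, fault confirmed?)
    (4.2, False), (5.1, False), (6.0, False),
    (7.8, True), (8.5, True), (9.1, True),
]

healthy = [v for v, faulty in fault_log if not faulty]
faulty = [v for v, faulty in fault_log if faulty]

# Re-fit the threshold as the midpoint between the two populations;
# every newly logged fault or fix nudges it.
threshold = (max(healthy) + min(faulty)) / 2
print(f"learned vibration threshold: {threshold:.2f} mm/s")  # 6.90 mm/s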
According to GlobalData, Siemens is the largest employer of AI roles in the power industry. The company offers predictive maintenance consultations, broken down into three parts mirroring the computer, algorithm, AI sequence.
The first of these assesses how viable predictive maintenance would be on a given site, since the technology is still far from standardised. The second step installs sensors to gather data, and the third uses proprietary AI to predict faults.
This raises another obstacle to standardisation: coding transparency. An artificial intelligence is only as good as its inputs, its programmers, and its application. Technology giants currently wrangle with this in the algorithms driving content on social media. After giving an AI goals and allowing it to learn, sites such as YouTube and Facebook let their AIs loose. The companies themselves no longer know what lies inside their “black-box” algorithms.
Code is proprietary, and there is no legal mechanism for breaking open the black box. In the power industry, this may lead to a lack of cross-compatibility and standardisation, keeping prices unnecessarily high.
Predictive maintenance risks isolating projects within their own silos, where one maintenance system cannot interface with another. Each system may only work with other proprietary products. This works fine for committed patrons of a technology company, but discourages the open-source alternatives that could make predictive maintenance less expensive.
Digital twins
This problem rears its head in the expanding realm of digital twins. In 2021, transmission operators in Tasmania and the UK announced plans to create digital twins of their entire grids. These rely on detailed sensor networks to produce a digital map of the transmission grid, mirroring real network conditions virtually.
Digital twins bring AI back to its roots, turning computerised data back into a form humans can interpret. These models then allow humans to change variables and predict how the system may react.
Systems like these would be impossible without AI, and the massive number of sensors they require lets the underlying models learn like never before.
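Neither TasNetworks nor National Grid publishes its model internals, so the Python below is only a toy illustration of the what-if pattern behind a digital twin: mirror the real system's state from sensor readings, then let a human vary a parameter and predict the reaction. Every figure is invented.

# Toy "digital twin" of a single transmission line: mirror its state from
# sensor readings, then let a human change a variable and predict the
# result. All capacities and loads here are invented for illustration.

class LineTwin:
    def __init__(self, capacity_mw: float):
        self.capacity_mw = capacity_mw
        self.load_mw = 0.0

    def ingest(self, sensed_load_mw: float) -> None:
        """Mirror the real network condition reported by field sensors."""
        self.load_mw = sensed_load_mw

    def what_if(self, extra_demand_mw: float) -> str:
        """Predict how the line reacts if demand changes."""
        projected = self.load_mw + extra_demand_mw
        utilisation = projected / self.capacity_mw
        status = "OVERLOAD" if utilisation > 1.0 else "ok"
        return f"projected {projected:.0f} MW ({utilisation:.0%}): {status}"

twin = LineTwin(capacity_mw=400.0)
twin.ingest(sensed_load_mw=310.0)         # live reading from the real grid
print(twin.what_if(extra_demand_mw=50))   # projected 360 MW (90%): ok
print(twin.what_if(extra_demand_mw=130))  # projected 440 MW (110%): OVERLOAD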
In announcing the digital twin of Tasmania’s power grid, mapping company Fugro said that the project would help grid operator TasNetworks to perform its own preventative maintenance by trimming vegetation away from power lines before it poses a fire risk. The National Grid model of Great Britain’s infrastructure will instead examine the island’s balance of power as decarbonisation changes traditional operations.
While these systems push the limits of energy modelling, AI has enabled even larger projects. On 12 November, computer hardware manufacturer Nvidia announced its plans to create a digital twin of the entire Earth to model climate change. This could in turn allow power grid modelling to better predict future demand, and allow areas threatened by climate change to build the right infrastructure in the right places.
If, of course, the company decides to share the code.