Enterprise Architecture Blueprint for Utilities – Part 2:

“Bad Choices make Good Stories.”

There is a famous saying, “Bad choices make good stories.” What might be true in life seems wrong for a utility’s IT or enterprise architecture.

Part 1 of our Enterprise Architecture Blueprint for Utilities – “The Good, the Bad and the Ugly” – discussed how bad decisions lead to huge technical debt, accidental IT, or bad architecture. In part 2 of our EA series, I will focus on the good choices that make good stories and identify KPIs for a good architecture.


Good Architecture

Whenever and wherever I discuss architecture with peers, clients, or partners in the sector, it only takes a few seconds before we philosophize about loose coupling, ease of integration, flexibility, adaptability, interoperability, elastic scalability, or cost efficiency. It always reminds me of the good old times when almost every tender had the must-have requirement “The solution shall have a good user interface.” I am sure you remember those endless discussions about what “good” means. So, let us try to be more concrete by defining what we mean by good architecture.

If you were the pilot or captain of an “enterprise architecture” airplane or vessel – instead of flying blind or navigating in the dark – what numbers, figures, instruments, or KPIs would you need to monitor in your cockpit?

Cost to Serve: Instead of Total Cost of Ownership (TCO), I’d propose Cost to Serve as KPI number one. Marc Andreessen famously said that “software is eating the world.” As utilities move deeper into analytics and a data-driven future, and as a new generation of consumers expects a fully digitized, on-demand experience, IT and data science have transitioned from a supporting function to core business for energy companies. Instead of reducing the IT budget, utilities will likely even increase their TCO. Therefore, the KPI to monitor should be the Cost to Serve for all grid and business operations.

Speed of Delivery: Speed is the new black. The future utility will have to adapt quickly to changing market regulations, changes in the energy system, and new requirements and opportunities. Some of our clients have told us stories where changing a few fields in an interface or adding a new service to an API took six months or even more. Speed of delivery would be my second KPI. I would even go as far as to monitor the average cost per change, too.

Media / Technology Breaks in the Value Chain: Everybody talks about Digital Transformation or the Digital Utility. Everybody talks about the importance of automating and digitizing business processes and the user experience. Yet many utilities struggle with broken processes that require multiple manual steps in the value chain. The number of these manual steps or “media breaks” (commonly known in German as Medienbrüche) – where data must be entered several times or manually transferred between different systems or technologies – would be another one of my KPIs.

Dirty Data Cleansing Jobs: Bad or dirty data is already a huge challenge for many utilities today. Moving fast into the new digital, analytics-driven future and leveraging the power of AI and machine learning requires a high degree of data quality. As an initial KPI, I’d count the number of business or case-management issues caused by dirty data that the organization has to handle and fix.

End of Life Applications or Technologies: Many utilities have built up huge technical debt over the last decades. Many applications are running on aging technologies or antique platforms. Several applications have already reached their “End of Life,” and vendors continue to announce that they will discontinue support for those applications within the next two to three years. Monitoring those “End of Life” and “Soon End of Life” systems makes total sense to me.

“It’s not Possible”: If I were an executive or manager at a utility, I’d count the number of “it’s not possible” responses I receive from the IT department on my requests, ideas, or requirements. For example: it’s not possible due to missing integrations or interfaces; it’s not possible because it’s hardcoded somewhere, deep in the code; it’s not possible because the data is not correct; we cannot handle the data volumes anyway; it’s not possible because the person who built the system has retired and is traveling around the world. Monitoring the number of “it’s not possible” instances might be a good indicator to track your architecture’s evolution.

I could continue my list of KPIs with:

  • Number of security issues / breaches
  • Capacity of resources with the required competency / skill set to operate and manage the IT landscape
  • Amount of overlapping functionality in different applications
  • Copies or replicas of data

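To make the cockpit metaphor a bit more tangible, the KPIs above could be modeled as a small dashboard structure. The following is a minimal sketch only – the KPI names are taken from this post, but all values, units, and targets are hypothetical placeholders, not benchmarks from any utility:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float
    unit: str
    target: float
    lower_is_better: bool = True  # most EA KPIs here are "the fewer, the better"

    def on_track(self) -> bool:
        # Compare the current value against the target in the right direction.
        if self.lower_is_better:
            return self.value <= self.target
        return self.value >= self.target

# Hypothetical snapshot of an EA cockpit; all figures are illustrative only.
cockpit = [
    KPI("Cost to serve", 4.2, "EUR per customer per month", 5.0),
    KPI("Speed of delivery", 14, "days per change", 10),
    KPI("Media breaks", 7, "manual steps in the value chain", 3),
    KPI("Dirty-data incidents", 12, "cases per month", 5),
    KPI("End-of-life systems", 3, "applications", 0),
    KPI("'It's not possible' responses", 9, "per quarter", 2),
]

for kpi in cockpit:
    status = "OK" if kpi.on_track() else "ATTENTION"
    print(f"{kpi.name}: {kpi.value} {kpi.unit} (target {kpi.target}) -> {status}")
```

The point of the sketch is not the code itself but the discipline it implies: each KPI needs a defined unit, an owner-agreed target, and a direction of improvement before it can go on a cockpit display.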
It would be great to start a discussion on an EA cockpit for utilities, and possibly create a set of standard KPIs and benchmarks for the industry.

Ready to read part 3? In the next post, the focus will shift to modern architecture principles that drive the way forward to the Utility 4.0 future.

At Greenbird, we work to simplify the complexity of big data integration for utilities to kickstart their digital transformation. We have put together an e-zine with articles that speak to these questions. Download the digital magazine for more information on enterprise architecture, the build-vs-buy debate, understanding the digital integration journey, and how to simplify the IT/OT relationship.
