AGL's Business Intelligence Journey, applied to the production of AGL Platform

Business Intelligence Journey

The business intelligence journey for any organization has four core stages:

(Figure: agl-bi-journey.png — the stages of the business intelligence journey)
  1. Data Foundation: collect and transform raw data from different data sources into consumable data, accessible across the organization.

  2. Understanding the Present - From What to Why: provide an end-to-end picture of the software production system, revealing trends (descriptive analytics) and uncovering the root causes of performance inefficiencies (diagnostic analytics).

  3. Anticipating the Future - Forecast: move beyond reactive analysis to predictive modeling (predictive analytics) to anticipate delivery risks before they materialize.

  4. Driving Optimized Outcomes: provide data-driven recommendations to optimize decision-making (prescriptive analytics) to improve both delivery and organizational performance.

This page summarizes the process and key findings of applying business intelligence to the Automotive Grade Linux (AGL) software production system and environment. This initial iteration is restricted to the first two stages of the BI journey:

  • Data Foundation

  • Descriptive Analytics

This first iteration of the study is largely focused on AGL’s main value stream, the Unified Code Base. The analysed data provided insights into:

  • Basic aspects of the AGL community

  • AGL’s activity around code and builds

  • Two of AGL’s core processes:

    • Code review process

    • Delivery process

This effort was carried out by the Delivery Performance Analytics (DPA) team at Bitergia and by Agustín Benito Bethencourt (Toscalix Consulting), in collaboration with some of AGL’s core contributors. See the team and references section for more details about the study’s authors and how to contact them.

Analysis Goals

The authors’ key motivations for this report are to:

  1. Provide the AGL community and ecosystem with a data-driven, end-to-end view of AGL’s software production system and environment (descriptive analytics), and identify points for improvement.

  2. Identify areas that require further investigation to uncover the root causes of the system and process behaviours that lead to inefficiencies, in preparation for the diagnostic analytics stage.

  3. Showcase the benefits of applying business intelligence to the production of software-defined products, supporting improvements in delivery and organizational performance while increasing contributors’ well-being.

Why is AGL a good project to analyze?

The following characteristics were key for the proponents of the study to select AGL as a target project:

  • AGL’s data is open.

  • AGL uses open source tools.

  • AGL’s key processes are documented.

  • AGL contributors and ecosystem members are reachable and willing to support this effort.

    • This is a luxury for the team behind this study.

  • AGL contributors have implemented a centralised platform, shared across the project, that provides unattended testing on distributed hardware to deliver the different software products.

  • AGL ships, as the main deliverable, a fully functional, usable, complex automotive in-vehicle platform that supports a variety of hardware and different architectures.

    • AGL also develops automotive software components.

  • The open source project is part of a complex supply chain.

We hope that this study can trigger conversations around this topic that impact not just AGL and other open source projects, but also commercial environments in the automotive industry and beyond.

The study’s structure: table of contents

The study is documented across several pages, as follows:

  1. Measurements and plots: definitions related to measurements on visualizations, as well as implementation details.

    1. Activity: explains the data types and metrics used to study AGL’s activity related to code and builds.

    2. Code review process: explains the metrics used to assess the efficiency of this process.

    3. Delivery process: describes how the model is created to study the performance of this process.

  2. Analysis: presents the findings of the descriptive analysis performed on the measurements and plots above.

  3. Report: discusses 10 conclusions that summarise the key findings of the study.

  4. Description of the team and core references

Study’s full ToC

This is the full table of contents of the entire study.

Download the full study

Every once in a while, we export every page of the study and merge them into a single document so the study can be consumed offline.

The report page will be formatted and transformed into a collateral piece for broad consumption. As soon as it is ready, it will be published here. In the meantime, you can find here the export of the report wiki page for offline consumption.