
Pull off Architecture Reviews at Light-Speed with LASR!

4.4.2025 | 13 minutes reading time


Foreword:

This blog post is loosely based on a recent project experience. All persons, companies and names are fictitious, so as to remain NDA-compliant. Any resemblance to a real person, existing company or brand is purely coincidental and unintentional.

For most people, pulling off a software architecture review in a few days sounds like nonsense. It did for us too, to a certain extent, until we had a chance to apply LASR (Lightweight Approach for Software Review) in a recent project. We were extremely restricted in terms of time, the system to be reviewed was not simple, and we had a lot of information to process. Yet this methodology made it possible for us to deliver meaningful insights in a short time, which our client could then use for informed decision-making.

LASR truly made the entire architecture review process much more flexible and lean, and we believe it is a worthwhile tool to learn about and be able to use. In this article, we’ll tell you how to use it and how our experience with it played out.

Challenge

Let’s start by painting the situation at the start of the project assignment. It could be described as a nightmare scenario:

  • A customer - let’s call them RiskInvest Ltd. - hires us for an architecture review of a software solution - let’s call it MoneyMaker - written by a young company called GarageCompany.
  • We’re added to multiple meetings over the course of two days, in which we learn about the architecture, infrastructure and processes of the system under review.
  • After these two days, RiskInvest Ltd. tells us they need a justified go or no-go statement for the acquisition of MoneyMaker and/or the whole GarageCompany by the end of the week.

By this point, we had collected enough information to fuel weeks of work, had we approached it with traditional review methods such as ATAM, CBAM, etc.

Not an optimal situation. Let us sum up the challenge so far in a few points:

  • We had very little time.
  • We didn’t know what was important to RiskInvest Ltd.: different organisations have different needs and tolerances regarding software, which can influence the review to a certain degree.
  • There was a lot to look at - the architecture of the software we saw was of non-trivial complexity.
  • We were to do the whole process fully remotely.

What now?

Thankfully, we found a fitting approach for the problem relatively fast - LASR!

It stands for Lightweight Approach for Software Review and was devised by the company embarc. LASR is a relatively new process that we were already somewhat familiar with, but had never had a chance to apply in a project in the wild before.

LASR focuses on iterative, quality-focused analysis with less set-up time and fewer formalities, which in theory would enable us to arrive at the much-needed statement more quickly. LASR is also very well documented and was developed from the experience of professionals who had done many, many software reviews before.

We figured it’d be worth a shot, so we went for it.

About LASR

There are two “versions” of LASR: LASR and LASR+ (depicted below). Due to our time constraints, we focused on applying the simpler version, LASR, which consists of the first four steps: Lean Mission Statement (1), Evaluation Standard (2), Base Review (3) and Goal-oriented Analysis (4):

  • The first step focuses on understanding what makes the software under review special, in relation to the client, their context and their mission.
  • Step two defines the evaluation standard: the top 5 key aspects for the success of the system, based on quality attributes (ISO/IEC 25010), along with an expectation level for each key aspect.
  • In the third step, the base review, success-critical risks are identified for the top 5 key aspects in a “pre-mortem” analysis, in order to uncover possible gaps and set them in relation to the expectation levels.
  • In the fourth step, goal-oriented analyses are carried out, as we know them from ATAM. However, these only target key aspects where expectation and actual state clearly diverge.
[Figure: LASR overview]

We will describe how we went through the process together with the client and what output came out of applying this methodology. Reader beware: specifics of this process are currently under an NDA and will therefore be omitted where needed. The pictures shown always depict a fictitious scenario.

Wednesday Morning (1): Capture the Lean Mission Statement for MoneyMaker

The preparations were short and sweet: open a Miro board with a frame containing a website icon and stacks of PostIts. Once done, we invited three members of RiskInvest Ltd. to our new Miro board.

The most important thing in the first step is to limit yourself to essential statements or "claims". This is why LASR suggests the “landing page” metaphor: Capture what it's about as well as possible, at a glance:

  • 10 minutes - RiskInvest Ltd. writes statements on PostIts. The employees are guided by the following questions:
    • What are the key features or properties of MoneyMaker?
    • What makes MoneyMaker special? What is particularly good?
    • What requirements do important stakeholders have of MoneyMaker (“claims”)?
  • 20-30 minutes - similar statements are grouped into headlines by RiskInvest Ltd., following short discussions.
  • 5 minutes - RiskInvest Ltd. votes on the 7 ± 2 statements that absolutely have to be on the landing page.

Et voilà - after less than an hour the lean mission statement was done!

[Figure: LASR mission statement]

Wednesday Morning (2): Defining an Evaluation Benchmark

With a mission statement at hand, we moved on to the next step of the process: finding out exactly how we should look at the software product, so that we could set an expectation level, i.e. what MoneyMaker should be like in the optimal case.

To facilitate the discussion, the makers of LASR provide a game, which we promptly incorporated into the workshop. The base concept of this game is the quality attribute, as defined in the ISO/IEC 25010 specification for software quality.

We prepared a card game on our Miro board with the quality objective cards (PNG images) from the LASR community supporting material.

The first step was to determine the 5 most important “quality objectives” of MoneyMaker. In the second step, an expectation level was then determined for each of the most important quality objectives.

For the first step, we prepared a frame on which 5 randomly selected quality objectives were arranged. To the right, the remaining quality attributes were lined up in a backlog. Below this, an area was left free for quality objectives that had been discussed and sorted out:

[Figure: LASR top 5 - starting layout]

The game was played in the following way:

  • A RiskInvest Ltd. employee selects a quality objective from the backlog that has not yet been discussed and places it among the 5 selected quality objectives as a “challenger”.
  • RiskInvest Ltd. then considers and discusses whether and which of the selected quality objectives would be displaced by the “challenger”.
  • At the end of each round, one quality objective is put in the "Declined" area, regardless of whether it was a “challenger” or an ousted quality objective.
  • The most important pro and con arguments are collected on PostIts next to the respective quality objectives, as documentation of the discussions.

As an optional step, RiskInvest Ltd. then compared the selected and eliminated quality objectives once again in the overview, to check whether the selection was coherent for RiskInvest Ltd. overall and possibly reduce the number of selected objectives to fewer than 5.

Our challenge was to give RiskInvest Ltd. a good and clear understanding of quality attributes, to keep them focused on attributes important or relevant to the system and the mission statement, instead of on “pain points” or “goodies”.

[Figure: LASR top 5 - final selection]

LASR considers 14 quality objectives, meaning there are 9 “challenger” rounds of roughly 5-10 minutes each, plus a final discussion of 5-10 minutes. You should therefore allow between 50 and 100 minutes to select the quality objectives - i.e. between 1 and 2 hours including a break.

In the second step, we wanted to quantify the expectations for the quality objectives and present them clearly in a graphic.

We asked the employees of RiskInvest Ltd. to evaluate these quality objectives within 5 minutes according to the following scheme: each employee attaches a PostIt with their evaluation (a value between 25 and 100) to the respective quality objective.

[Figure: LASR evaluation metric]

A 5-minute time-box is short, but the time pressure was intended to promote intuitive evaluation, so that later discussions could focus on evaluations that diverge strongly.

Finally, only attributes that were rated significantly differently among team members were discussed (a good guideline: a divergence of more than 20% from the average). For all others, the mean value was used. We also limited these discussions to 20 minutes overall.
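
For those who like to see such a rule as code, here is a minimal sketch in Python - with made-up ratings - of the guideline just described: flag an objective for discussion if any rating diverges from the average by more than 20%, otherwise simply take the mean.

```python
from statistics import mean

def needs_discussion(ratings: list[float], threshold: float = 0.2) -> bool:
    """Flag a quality objective for discussion if any individual rating
    diverges from the mean by more than `threshold` (20%) of that mean."""
    avg = mean(ratings)
    return any(abs(r - avg) > threshold * avg for r in ratings)

# Hypothetical PostIt values from three RiskInvest Ltd. employees:
print(needs_discussion([90, 95, 100]))  # False -> just use the mean (95)
print(needs_discussion([50, 95, 100]))  # True  -> discuss this objective
```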

The LASR material also includes an Excel sheet that displays a spider map for up to 5 quality objectives. We entered the results of the assessment and got the following expectation horizon:

[Figure: LASR spider map - expectation horizon]
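
If you don’t have the Excel sheet at hand, such a spider map is easy to reproduce yourself. Here is a minimal sketch using Python and matplotlib, with made-up objectives and values (the LASR material itself is Excel-based; this is just our stand-in):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical top 5 quality objectives with averaged expectation levels
objectives = ["Usability", "Reliability", "Security", "Maintainability", "Performance"]
expectations = [95, 80, 90, 70, 75]

# Spread the objectives evenly around the circle and close the polygon
angles = np.linspace(0, 2 * np.pi, len(objectives), endpoint=False).tolist()
values = expectations + expectations[:1]
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(objectives)
ax.set_ylim(0, 100)
ax.set_title("Expectation horizon")
plt.show()
```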

By noon we had a lean mission statement for the MoneyMaker decision, the most important quality objectives and an evaluation of how important they are from RiskInvest Ltd.’s point of view.

Wednesday Afternoon: Base Review

The missing building block for a first rough but reliable statement towards RiskInvest Ltd.’s decision on MoneyMaker was an assessment of how well MoneyMaker actually fulfills the expectations attached to the quality objectives. This required deeper insight into MoneyMaker (processes, code, deployment, operations, etc.), which we had gained in the previous workshops with GarageCompany.

LASR proposes a “pre-mortem” analysis for this purpose: risks to the successful use of the software are identified and assessed according to their probability of occurrence and the amount of damage they would cause. To do this, we again expanded our Miro board with the risk cards from the LASR community material:

  • In the first step, potential risks were identified and assigned to the relevant quality objectives - 10 minutes per risk group:
[Figure: LASR pre-mortem - risk identification]
  • The next step was to assess these risks: how likely would they be to materialize? And how high would the damage be? - 10 minutes per risk (see the sketch after this list).
[Figure: LASR pre-mortem - risk evaluation]
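
To make the bookkeeping concrete, here is a minimal sketch under our own assumptions - the LASR material may score risks differently - in which each risk card carries a probability and a damage rating, and the per-risk score is simply their product:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # 0.0 .. 1.0 - how likely is it to materialize?
    damage: float       # 0 .. 100  - how high would the damage be?

    @property
    def score(self) -> float:
        return self.probability * self.damage

# Hypothetical risks assigned to the "Usability" quality objective:
usability_risks = [
    Risk("Convoluted UI flows", probability=0.8, damage=30),
    Risk("No accessibility support", probability=0.4, damage=40),
]
summed_risk_score = sum(r.score for r in usability_risks)  # 24 + 16 = 40
```

The summed-up score of 40 is exactly the value that feeds into the calculation below.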

Finally, the risk scores for each quality objective were summed up and offset against the expectation level. Here is an example where we calculate the actual quality level for the Usability attribute:

  • Initial Expectation Level: 95
  • Summed up risk score: 40
  • Formula for the risk value:

<Expectation Level> - (<Expectation Level>/100 * <Summed up risk score>)

95 - (95/100 * 40) = 57
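
As code, the same calculation is a one-liner (a small Python helper; the formula is taken directly from above):

```python
def risk_adjusted_level(expectation: float, summed_risk_score: float) -> float:
    """<Expectation Level> - (<Expectation Level>/100 * <Summed up risk score>)"""
    return expectation - expectation / 100 * summed_risk_score

print(risk_adjusted_level(95, 40))  # 57.0 - the Usability example above
```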

For each quality objective, we calculated the risk value and entered it into the spider map:

[Figure: LASR spider map - risk values vs. expectations]


In the case of MoneyMaker’s usability, major differences between expectation and reality were immediately apparent, and a later, more in-depth analysis would focus on this.

The challenge was to keep the discussion of risks in terms of probability of occurrence and impact within the time frame and at a proportionate and pragmatic level.

Thursday Morning - Dig Deeper

During the “pre-mortem” discussions, we had already gained an initial overview of MoneyMaker as a software product, and we decided to invest some limited time in a deeper analysis of the expectation-vs-reality gap - the next (and last) step in the LASR base method: the goal-oriented analysis. Some risks needed a sharper look in order to raise the team’s confidence in the conclusions we had reached the day before. This took the form of an actual code dive (bottom-up), at times together with the people who had worked on the system.

In the frontend, we found relatively well-maintained, clean code with good test coverage through fully automated testing, and no above-average amount of technical debt. The CI/CD process was state of the art (QA gates such as static code analysis, security scans, etc.) and automated throughout.

In the backend, however, we encountered code of varying quality (test coverage, technical debt, etc.). The structuring of the modules and components was inconsistent, and we found significant differences between architecture and implementation. In random samples, neither the architecture nor the implementations appeared functionally appropriate, and the implementations we looked at more closely were unnecessarily complex. As an example, we attempted a very simple, quick change to the behavior of a single function in a backend module and failed, because it would have required modifications in several other code locations (configuration, tests). This is tight coupling, which means more effort for any change to the code.
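
To illustrate what we mean, here is a purely hypothetical, simplified example of the pattern we ran into - not GarageCompany’s actual code: changing the behavior of one function ripples into configuration and test fixtures that hard-code its output.

```python
# backend/pricing.py (hypothetical)
FEE_RATE = 0.02  # duplicated in config/defaults.yaml - change both or break startup checks

def calculate_fee(amount: float) -> float:
    # Rounding to two decimals is also assumed verbatim by report templates
    # and by the test below - a behavioral change here touches all of them.
    return round(amount * FEE_RATE, 2)

# tests/test_pricing.py (hypothetical)
def test_calculate_fee():
    # Brittle: asserts the exact literal instead of deriving it from the rate,
    # so even an intended change to calculate_fee() fails here as well.
    assert calculate_fee(1000.0) == 20.0
```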

Statements from the MoneyMaker developers confirmed that UI flows could not always be implemented optimally due to the increased complexity in the backend and its processes.

After having taken the time to look at tangible aspects of the application, we came out of this step with a refined picture of how the actual status really compared to the expected status.

Thursday Afternoon - Summary

We made a quick, rough and conservative estimate of the effort required to eliminate the technical debt in the backend of MoneyMaker in order to provide RiskInvest Ltd. with a basic orientation.

Together with RiskInvest Ltd., we finally discussed the following scenarios:

  1. Acquire only the rights to MoneyMaker (including code, DevOps, etc.) and develop its own MoneyMaker+ 2.0 from scratch, or
  2. take over only the rights to MoneyMaker (including code, DevOps, etc.), organize the further development itself and fix the technical debt on its own, or
  3. take over GarageCompany completely, including MoneyMaker, or
  4. take over the rights to MoneyMaker (including code, DevOps, etc.) and commission GarageCompany with the further development and remediation of the technical debt.

RiskInvest Ltd. was convinced by the functionality of MoneyMaker, but decided against developing its own solution due to the professional and technical risks, and did not want to become GarageCompany’s legal successor. Ultimately, the rough, conservative estimate of the cost of eliminating the technical debt became an important decision-making factor in the negotiations between RiskInvest Ltd. and GarageCompany.

Summary and Conclusion

Going through this project challenged our notion that architecture reviews have to be time-intensive, and it made us much more agile and flexible when looking critically at software. However, it should always be borne in mind that LASR aims less at a complete architecture review than at focusing the review on the relevant aspects of the system.

In two days, we:

  • Broke down what the actual software solution was about;
  • Found out what characteristics of the software were the most important to look at;
  • Identified very real risks and managed to estimate them in terms of probability and impact;
  • Compiled a report that management at the client could directly use to make an informed decision.

Both we as consultants and our client RiskInvest Ltd. were satisfied with the results.

These results were not extremely specific or in-depth, but they provided just enough depth to create clarity, which is what we ultimately needed. It is also a lot of fun to run these workshops, which definitely doesn’t hurt!

We were very pleased with this framework and highly recommend getting acquainted with it. We can also recommend buying the book; it comes with access to a lot of supporting material, such as the card decks and fill-in graphs, which truly helps facilitate the entire review process.
