Latest Trends in Software Architecture
With the pace at which technology advances today, it is essential to analyze how well the changes being made to a piece of software can be absorbed. Customers request changes from time to time, and those changes have to be incorporated into the software that has been developed. The cycle of how software is built and how changes are incorporated along the way is as follows.
• First, the customer requirements are taken down to get a basic idea of what they want.
• After the preliminary requirements are acquired, the software design and architecture are put in place.
• With this design and architecture as a base, version one, the very first version, is developed. The version name is solely for ease of reference between the developer and the customer.
• Once the software is developed, it is delivered to the customer, or in other terms, deployed.
• The customer's feedback is then incorporated into the software, and subsequent versions are developed according to the feedback received each time a delivery happens.
• The cycle then continues: delivery, customer feedback, incorporating that feedback, developing the next version, and so on.
Usually, executing this cycle takes some time. For example, it can take about three or six months to deploy software when using a development methodology such as waterfall. Advances in technology, however, have made it easier to deliver new versions, sometimes even 100 times per day.
One of the key aspects of modern software delivery is 'high velocity': delivering quickly without breaking the existing functionality of the software. The delivery of software passes through three stages before it reaches the hands of the customers.
1. Experiments: before any developed software is delivered, experimentation is always needed. This is where most of the time goes, sometimes weeks or months.
2. Stories: within the experiments there are stories, each of which runs for several days.
3. Tasks: within the stories there are tasks that the developers or engineers take a few hours to complete.
To speed up the delivery process, there are a few things that can be done by the developers.
• Code analysis can be done using tools such as SonarQube, which automates source code analysis in terms of both security and quality. This step can run automatically as part of the CI/CD pipeline.
• Unit tests can be written to ensure that the existing functionality, the developed code, and the expected outcomes do not break; a minimal sketch follows this list.
• Build automation is used within the CI/CD pipeline, along with containerization mechanisms such as Docker and test automation.
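As a minimal sketch of the unit-test idea above, the following uses pytest; the `apply_discount` function and its behaviour are hypothetical and only serve to show how a test guards existing functionality.

```python
# test_pricing.py - a minimal unit-test sketch (pytest assumed).
# `apply_discount` is a hypothetical function used purely for illustration.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_reduces_price():
    # Guards existing behaviour: a 10% discount on 200.00 must yield 180.00.
    assert apply_discount(200.00, 10) == 180.00


def test_invalid_discount_rejected():
    # A change that silently accepts bad input would break this test.
    with pytest.raises(ValueError):
        apply_discount(100.00, 150)
```

Running these tests in the CI/CD pipeline fails the build the moment a change breaks the expected outcome.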
Manual testing or quality assurance is rarely used at present due to its high time consumption, since it involves human effort. As a result, test automation is required: automated scripts are used to make sure that the required functionality is there. Moreover, container management/orchestration tools such as Kubernetes, Rancher, Mesos, etc. can be used.
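As a rough sketch of such an automated script, the following checks a service's health endpoint after deployment; the URL and the expected response fields are hypothetical placeholders, not taken from the original text.

```python
# smoke_test.py - a sketch of an automated functional check run after deployment.
# The service URL and the expected response fields are hypothetical placeholders.
import json
import sys
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # placeholder endpoint


def check_service_health(url: str) -> bool:
    """Call the health endpoint and verify the service reports itself as up."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            body = json.loads(response.read())
            return response.status == 200 and body.get("status") == "UP"
    except (OSError, ValueError):
        return False


if __name__ == "__main__":
    if not check_service_health(SERVICE_URL):
        sys.exit("Smoke test failed: the service is not healthy")
    print("Smoke test passed")
```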
Everything up to now leads to microservices. They are artifacts that are independently deployable and scalable, and they add value during development when teams are geographically separated. Furthermore, microservices improve fault isolation, are technology agnostic and have increased resiliency, and the list does not end there. As for the question of whether microservices can overcome all our problems, the answer is no. Like any other service or tool, they have their own issues.
• Complexity: microservices form an inherently complex architectural pattern; even when done right, the result is a distributed architecture, hence the complexity.
• Monitoring and discovery: proper monitoring and discovery tools are needed to avoid chaos.
• Low performance: when microservices are not used properly, with the correct architecture patterns and other relevant aspects, the result is low performance.
• Not suitable for small organizations: with the number of microservices that have to run for a particular task, large-scale human involvement is needed to manage them.
There are several approaches to overcoming these problems. Most architectures started from the 'big ball of mud': ever-growing, complex monoliths that tangle everything together. After migrating to microservices, many of them ended up with something different, a so-called 'death star'. Larger teams were needed to manage this kind of system, because tracing and monitoring it otherwise is much harder. Whether these systems are good or bad depends on the requirements of the company.
Architecture Patterns
Domain-Oriented Microservice Architecture (DOMA)
As discussed earlier, the microservices architecture pattern is inherently suited to fairly large organizations. If a smaller organization still needs to use this pattern, the services should be segregated into domains, using techniques such as domain-driven design. Each domain groups a number of microservices with independent responsibilities and is handled by one team, which in return resolves the problem of needing more people. For communication between the domains, a generic API gateway or a message queue hub is used, which enables domain separation.
With the use of DOMA, the advantages of microservices can be obtained for small organizations.
The next question arises around communication between the domains. Is it possible to use other methods such as API calls or other mechanisms? Yes, it is possible, but with a proper message queue such as Kafka, an Event Driven Architecture can be enabled.
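As a rough sketch of inter-domain communication over such a message queue, the following assumes the kafka-python client and a broker on localhost; the topic name, the domains and the payload shape are purely illustrative.

```python
# events.py - a sketch of one domain publishing an event that another domain consumes.
# Assumes the kafka-python package and a broker at localhost:9092; the topic
# name and payload are hypothetical.
import json
from kafka import KafkaConsumer, KafkaProducer

# Order domain: publish an event instead of calling the billing domain directly.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-events", {"type": "OrderPlaced", "order_id": "1001", "amount": 49.90})
producer.flush()

# Billing domain: reacts to order events without knowing who produced them.
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    if record.value["type"] == "OrderPlaced":
        print(f"Billing order {record.value['order_id']} for {record.value['amount']}")
```

Because the domains share only the topic and the event format, either side can be deployed and scaled independently.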
Event Driven Architecture (EDA)
When the messaging protocols support events, an Event Driven Architecture can be implemented and communication between the domains can be initiated. EDA can also be implemented within a domain, but there are more advantages when it is implemented between domains.
• Reduced coupling between services: the basic principle is to reduce coupling and improve cohesion.
• Event storming: a workshop-style methodology used in the design stage to discover the events in a domain.
• Event sourcing: persisting the events in chronological order, which enables replay when required; see the sketch after this list.
• Ability to handle heavy data loads: scaling can be done horizontally between the microservices, or the domains, based on the data load carried by the events.
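As a minimal sketch of the event-sourcing point above, the following rebuilds state purely by replaying stored events; the account example and event names are hypothetical.

```python
# event_sourcing.py - a minimal event-sourcing sketch: state is rebuilt by
# replaying events in chronological order. The account example is illustrative.
from dataclasses import dataclass


@dataclass
class Event:
    type: str      # e.g. "Deposited" or "Withdrawn"
    amount: float


def replay(events: list[Event]) -> float:
    """Rebuild the current balance purely from the stored event history."""
    balance = 0.0
    for event in events:
        if event.type == "Deposited":
            balance += event.amount
        elif event.type == "Withdrawn":
            balance -= event.amount
    return balance


# The event log is the source of truth; the balance is derived, never stored.
log = [Event("Deposited", 100.0), Event("Withdrawn", 30.0), Event("Deposited", 5.0)]
print(replay(log))  # 75.0
```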
For a data-driven or data-intensive platform, an EDA is known to be the most suitable option. In a distributed architecture pattern, managing the services is important, and that can be done either by choreography or by orchestration. Otherwise, the services in a microservices architecture can behave catastrophically, which is something most organizations cannot afford. There therefore needs to be a mechanism for listing all the exceptions, or the execution branches of all the failure scenarios, between the domains. Where that is possible, choreography can be used; where it is not, orchestration is to be used.
Orchestration tools such as Netflix Conductor, Camunda BPMN, Cadence, Temporal, etc. are used to manage microservices. Patterns such as the saga and two-phase commits are used to make sure that atomicity across the microservices is maintained.
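As a rough sketch of an orchestrated saga, independent of any of the tools above: each step has a compensating action, and when a later step fails the orchestrator undoes the completed steps. The step names and the simulated failure are hypothetical.

```python
# saga.py - a sketch of the saga pattern: each step has a compensating action,
# and the orchestrator undoes completed steps when a later step fails.
# The step names and the simulated failure are hypothetical.

class SagaStep:
    def __init__(self, name, action, compensation):
        self.name = name
        self.action = action              # performs the local transaction
        self.compensation = compensation  # undoes it if a later step fails


def fail_shipping():
    raise RuntimeError("carrier unavailable")


def run_saga(steps):
    completed = []
    try:
        for step in steps:
            step.action()
            completed.append(step)
        return True
    except Exception as error:
        # Compensate in reverse order, most recently completed step first.
        for step in reversed(completed):
            step.compensation()
        print(f"Saga aborted and compensated: {error}")
        return False


steps = [
    SagaStep("reserve-stock", lambda: print("stock reserved"), lambda: print("stock released")),
    SagaStep("charge-card", lambda: print("card charged"), lambda: print("charge refunded")),
    SagaStep("ship-order", fail_shipping, lambda: print("shipment cancelled")),
]
run_saga(steps)  # the card charge and the stock reservation are rolled back
```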
Serverless Architecture
Serverless architecture is a well-used option when it comes to modern architecture. Its main advantage shows with heavy data loads or data-intensive applications: when instances are spun up based on the number of requests or the data load, serverless is the better choice. Since this is based on horizontal scaling, you are required to pay only for what you use.
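As a minimal sketch of a serverless function, the following uses an AWS Lambda style handler signature; the event fields are illustrative.

```python
# handler.py - a sketch of a serverless function. The platform spins up as many
# instances as the incoming requests require, and none when the system is idle.
# An AWS Lambda style signature is assumed; the event fields are hypothetical.
import json


def handler(event, context):
    """Process a single request; all state lives outside the function."""
    order_id = event.get("order_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"order {order_id} processed"}),
    }
```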
Command Publish Consume Query (CPC)
As opposed to Command Query Responsibility Segregation (CQRS), where data is written to and read from two separate clusters, Command Publish Consume Query publishes the commands it receives to a database or a message broker, from which the events are published into an event hub. Through the event hub, communication between two microservices or domains can be arranged. Once the query is received at the receiving end, that end executes the query.
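As a rough sketch of this command/publish/consume/query flow, the following uses in-memory stand-ins instead of a real broker or database; all names are illustrative, not a reference implementation of the pattern.

```python
# cpc.py - a sketch of the command / publish / consume / query flow using
# in-memory stand-ins for the event hub and the read store.
# All names here are illustrative.

event_hub = []   # stand-in for the message broker / event hub
read_store = {}  # stand-in for the query side's own database


def handle_command(command):
    """Command side: accept the command and publish it as an event."""
    event_hub.append({"type": "ProductPriced", "sku": command["sku"], "price": command["price"]})


def consume_events():
    """Consume side: project published events into the read store."""
    while event_hub:
        event = event_hub.pop(0)
        read_store[event["sku"]] = event["price"]


def handle_query(sku):
    """Query side: answer reads from the projection only."""
    return read_store.get(sku)


handle_command({"sku": "A-42", "price": 19.99})
consume_events()
print(handle_query("A-42"))  # 19.99
```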
Micro Front End
Based on the domains, people started segregating the front ends as well, roughly two years ago. Each of these front ends has its own build and test pipeline and can be pushed into the production environment independently. Frameworks such as single-spa can be utilized to build micro front ends.
All of these features exist to deploy changes rapidly into production. The architectures discussed so far lead to 'The Evolutionary Architecture'. With the rapid changes in the industry and in the software, an architectural fitness function is used to make sure they stay intact and do not fall apart. Some parameters are defined during the design phase, and none of these parameters are allowed to change as the functionality changes. The fitness function is measured using metrics, monitors, unit tests, chaos engineering, etc.
Fitness functions are used in automated pipelines as follows. There can be different fitness functions according to the requirements; once the pipeline is created and the fitness functions are integrated into it, the running scripts ensure that none of the fitness functions fail, which means the product is good to go into production. The following can be checked using fitness functions (a sketch of such a pipeline check appears after this list).
• Code style quality check.
• Unit tests.
• Static code analysis.
• Automated API and load/performance tests.
• Dynamic analysis.
• Container compliance and security scans.
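As a minimal sketch of fitness functions run as a pipeline gate: the parameters are fixed at design time and the build fails when a measurement drifts past them. The thresholds and the sample measurements are hypothetical.

```python
# fitness_check.py - a sketch of architectural fitness functions run as a
# pipeline gate: the build fails when a design-time parameter is violated.
# The thresholds and the sample measurements below are hypothetical.
import sys

# Parameters defined during the design phase; they do not change with features.
COVERAGE_THRESHOLD = 80.0
MAX_SERVICE_DEPENDENCIES = 5


def check_coverage(coverage_percent):
    return coverage_percent >= COVERAGE_THRESHOLD, f"coverage {coverage_percent}%"


def check_coupling(dependency_counts):
    worst = max(dependency_counts.values())
    return worst <= MAX_SERVICE_DEPENDENCIES, f"max dependencies {worst}"


if __name__ == "__main__":
    # In a real pipeline these values would come from the coverage report and
    # a static-analysis step; here they are illustrative numbers.
    checks = [
        check_coverage(84.2),
        check_coupling({"orders": 3, "billing": 2, "shipping": 4}),
    ]
    failed = [detail for ok, detail in checks if not ok]
    if failed:
        sys.exit("Fitness function failed: " + "; ".join(failed))
    print("All fitness functions passed; the build can be promoted to production")
```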
Apart from the above-mentioned trends, the other latest trends that need to be considered when designing new software are:
• AI/ML
• Edge computing
• Blockchain
• Data-intensive computing
• Rapid Application Development with low-code/no-code (LCNC) platforms
• Hybrid cloud
• Multiple data sources
• Real-time streaming
Finally, the most important development in the near future is quantum computing, which will change most of the paradigms we are used to at the moment, from security and cryptography to many other aspects. In another six to seven years, quantum computing will play a huge role in our day-to-day lives.