SmartBear heading to Kafka Summit Americas to discuss maintaining quality of Event-Driven APIs
  September 01, 2021

As a developer and technology architect involved in the world of APIs, I have a keen interest in the renewed emergence of event-driven architectures. The Kafka Summit always provides a great way to keep in touch with the latest developments and expert opinions related to Apache Kafka. The upcoming Kafka Summit Americas happening on September 14th and 15th is no different and is packed full of great sessions covering broad areas of interest within the event-driven technology space, while also providing insights on how to address complexity challenges in the Apache Kafka ecosystem.

I am delighted to be speaking at the upcoming summit, where we’ll investigate the challenges of testing and maintaining quality in the complex world of event‑driven architectures. Complexity exists mainly due to the distributed nature of event‑driven systems, combined with the fact that many modern-day integration landscapes have different flavors of integration architectures, patterns, and protocols in coexistence.

Event‑driven architectures are complementing existing integration patterns

At SmartBear, we have seen our customers and the market moving away from accumulating data in centralized data lakes to focusing on data in-motion to serve real-time digital immediacy needs. Information technology systems, applications, and devices are producing, consuming, and processing more data than ever before with integrations often spanning natural organizational domain or capability boundaries. Organizations are leveraging event-driven architectures (EDAs) often in conjunction with other integration patterns, like microservices, to meet common non-functional requirements such as performance, availability, scalability, resiliency, and decentralized autonomy for development teams striving to deliver quality software.

Modern application development is more distributed, and the use of event-driven architecture aids the decoupling of system components and data by domain. This separation of concerns promotes the logical separation of event production and event consumption. As a result, the non-blocking nature of event-driven systems means it’s never been easier to transmit information to a variety of consumers. Consumers can subscribe to events without affecting producers, and similarly producers do not need to concern themselves with how the events they emit will be consumed or which parts of the event messages are being used.
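To make the decoupling concrete, here is a minimal in-process sketch of the publish/subscribe idea (illustrative only; a real system would use a broker such as Apache Kafka, and the topic and field names here are invented):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy event bus: producers and consumers only share a topic name."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        # Adding a subscriber requires no change on the producer side.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The producer's call is identical whether there are zero,
        # one, or many subscribers on the topic.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("orders.created", lambda e: received.append(e["order_id"]))
bus.publish("orders.created", {"order_id": 42, "total": 9.99})
```

The key property this sketch shows is that the `publish` call site never changes as consumers come and go; in a broker-based system the broker provides the same indirection between producers and consumers.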

Figure 1 - Consumers can easily subscribe to topics without impacting producers

The loose coupling enables teams to implement services in languages and stacks most suited to them or to the task at hand. Development teams are embracing the flexibility, which has seen these multifaceted integration and architectural styles become more popular. However, it must be noted they are still maturing and can easily introduce challenges and concerns.

Event-driven architecture challenges

Most organizations do not have the benefit of starting afresh, thus the introduction of event-driven architectures is predominantly complementing other forms of integrations like REST, GraphQL and SOAP services.

As teams move away from heavily orchestrated integration patterns into more choreographed patterns, they still need to have a grounding in the overall business value outcome being delivered. As far as a customer is concerned, the delivered business value and behavior is still of primary importance.

Delivering independent and fully decoupled services is difficult and brings many challenges for organizations. For example:

  • Design and delivery complexity – traditional code-first approaches can slow delivery and prevent efficient handoffs, resulting in excessive collaboration between teams.
  • Standardization – with the distributed nature of teams it becomes harder to achieve consistency in implementation, and conformity with design standards is harder to assert.
  • Implementation complexity – asynchronous event processing is non-trivial and more difficult compared to synchronous processing. This introduces more points of failure and increases complexity in testing and deployment. Testing and deployment strategies need to evolve with EDA.
  • Operations – debugging and exception handling are more involved, with challenges in simulating the change triggered or mediated by events. The importance of being able to retain event information increases the need for highly available, scalable, and fault-tolerant systems.
  • Managing change – consumers are required to be aware of the data format and data validation when sending or receiving data from brokers. If changes are not communicated in the appropriate manner, consumer implementations can break.
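The last point can be illustrated with a hedged sketch of a consumer-side check: the consumer declares the fields and types it depends on and rejects messages that do not conform (the field names and the validation helper are hypothetical, not part of any SmartBear or Kafka API):

```python
# Hypothetical consumer-side validation: if a producer renames or retypes
# a field without coordinating the change, the mismatch is caught
# explicitly at the boundary rather than failing deep inside processing.
EXPECTED_FIELDS = {"order_id": int, "status": str}

def validate(message: dict) -> list[str]:
    """Return a list of problems; an empty list means the message conforms."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

assert validate({"order_id": 1, "status": "shipped"}) == []
# A producer-side breaking change (status became a numeric code):
assert validate({"order_id": 1, "status": 3}) == ["wrong type for status"]
```

In practice this kind of check is better expressed through a shared, versioned schema (for example via a schema registry) rather than code duplicated in each consumer, which is exactly the motivation for the contract-first approach discussed next.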

Shifting-left with event-driven APIs

True independence is difficult to maintain and sharing information between producers and consumers should not involve excessive collaboration. While it can be tempting to share information relating to data formats and validation rules through a glass-box approach or through informal channels, one must be aware such approaches can never scale! It is impossible to maintain such practices once external consumers (including those within the same organization but from a different business domain) or third parties begin to subscribe.

Problems such as these are familiar to us at SmartBear given our experience with REST, Swagger tools and the OpenAPI specification. A solution has always been focused on contracts and clear documentation. Having a simple format enabling the sharing of contracts, which represent your services, among all its users is powerful. The contract-first process creates the contractual representation of the proposed behavior for the service before anything is built. In this way, producers and consumers know upfront what behavior to expect from the service enabling them to work in parallel. Additionally, the contract provides the means to capture the impact of future change prior to it being implemented or released allowing adequate room for scrutiny to determine if the change constitutes a “breaking change”.

For some time, there was no equivalent to the OpenAPI specification for event-driven APIs. Fortunately, that has changed: specifications such as AsyncAPI, which describes events and documents the full interaction model for event‑driven APIs across multiple protocols, are gaining popularity in the industry. Organizations are seeing the benefits of documenting and cataloging their event APIs, just as they have done for REST APIs, which is contributing to an API design-first approach for event-driven systems.
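By way of illustration, a minimal AsyncAPI 2.x document describing a single Kafka topic might look like the following sketch (the service name, channel, and payload fields are invented for this example):

```yaml
asyncapi: '2.4.0'
info:
  title: Orders Service
  version: '1.0.0'
servers:
  production:
    url: broker.example.com:9092
    protocol: kafka
channels:
  orders.created:
    subscribe:
      message:
        name: OrderCreated
        payload:
          type: object
          properties:
            orderId:
              type: integer
            status:
              type: string
```

A document like this gives producers and consumers a shared, machine-readable contract for the channel and its message payload before any implementation exists.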

The ‘shifting-left’ approach enables teams to obtain faster feedback loops and to iterate quickly towards an approved version of the service before investing in writing the implementation. Additionally, the existence of specifications improves the delivery cycles by enabling better interoperability with other tooling. As an example, specifications can be easily imported into testing platforms to setup rich and representative testing flows to assert the expected behavior.

Figure 2 - Changing message schemas can break consumers unless managed appropriately

Schemas and definitions leveraging a common specification, governed by a thoughtful and empathic organizational standard, are more than just a contract between two event streaming microservices. They act as contracts between independent teams, and even between organizations and their customers. They form part of the collective Developer eXperience (DX) and contribute to the overall interaction an organization has with its customers.

Having the right mix of standardization and governance is a key aspect of managing any new architecture or technology. The use of specifications to document and describe the events, and the event channels available within a domain (or organization) forms a foundational element of any standardization approach. Better standardization increases reuse opportunities and accelerates the maturity of EDAs within organizations. All of which leads to events being elevated to the same level as APIs.

Conclusion

The evolution of APIs has seen modern applications evolve from traditional request/response messaging systems to fully enabled event-driven systems. Events themselves are arguably taking the primary focus and are recognized as the conduits of information from one service or one domain to another. Events mediate and trigger change within a system and are often choreographed into a larger pattern.

In order to continue to deliver quality at speed in the event-driven world, testing strategies need to appreciate the system under test holistically and recognize the importance of testing event-driven APIs as part of delivering on an overall business capability. Being able to test the individual bounded context and surface area of a microservice or event API is important, but individual services generally do not live in a vacuum. Determining the quality of the overall behavior of the system is of primary importance. The ability to have test automation across workflows and choreographed event-based systems, agnostic of the technology choices for the individual parts, is what provides assurance that the business value promised in the contracts will be realized in production.

At SmartBear, we want to help our customers adopt EDA to deliver business value in the most efficient and scalable manner. That’s why we build tools to help easily document or import Apache Kafka topics, validate messages and headers against serialized schemas, and link those events into broader representative flows. All of which enables functional and performance testing of the entire service architecture across multiple specifications, patterns, styles, and protocols: REST, SOAP, JMS, MQTT, AMQP – and now Apache Kafka.

Check out our ReadyAPI platform, which empowers teams to maintain controlled speed regardless of technology choices.

Join us virtually at the Kafka Summit

To explain more about testing event-driven architectures and Apache Kafka, SmartBear is sponsoring Kafka Summit Americas 2021 from September 14th to 15th, 2021.

Check out our following talks: