How Klaviyo launched its ecommerce public APIs with Postman
This is a guest post written by David Henriquez, lead developer advocate at Klaviyo.
Klaviyo is a marketing automation platform that primarily focuses on the ecommerce and retail industries. We help brands store, manage, and communicate using their customer data. As the recommended email solution for Shopify Plus, we handle incoming and outgoing data on the order of millions of events per second.
Along with Shopify, we support several native integrations to ecommerce platforms, including Adobe Commerce (Magento), WooCommerce, Salesforce Commerce Cloud, BigCommerce, and Wix, as well as integrations to many ecosystem technologies. The APIs from these platforms come in various flavors, including REST, SOAP, GraphQL, and gRPC, as well as others (including non-conforming APIs).
On October 20, 2022, Klaviyo launched our new APIs. It was the public release of a project that had been under development for well over a year, and it was a huge step toward our future growth as an API-first platform. We’ll be continually releasing new endpoints and improvements to our API surface area, all to power a new generation of data-driven experiences and applications.
In this blog post, we’ll highlight some of Klaviyo’s project goals, planning process, lessons learned, and insights to help others create better developer experiences.
Our original APIs
The earliest APIs at Klaviyo were built for very specific use cases and handled billions of requests. Our most used APIs were called “Track” and “Identify,” and they were implemented as GET requests that persisted data by accepting a single base64-encoded JSON data structure in a query param. Successful requests responded with a 200 status and a `1` body, while unsuccessful requests responded with a 200 status and a `0` body. When developers received a `0`, they were often not sure what went wrong. We also didn’t enforce strict rate limits.
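For illustration, a call following that legacy Track pattern looked roughly like the sketch below. This is a minimal sketch in Python, assuming an endpoint and field names shaped like the old Track API; the exact URL, query param, and payload properties are illustrative rather than a reference.

```python
import base64
import json

import requests

# Minimal sketch of the legacy "Track" pattern described above. The host,
# path, query param, and payload fields are illustrative assumptions.
payload = {
    "token": "PUBLIC_API_KEY",                      # hypothetical account token
    "event": "Placed Order",
    "customer_properties": {"$email": "jane@example.com"},
    "properties": {"order_id": "1234", "value": 99.95},
}

# The whole JSON payload is base64-encoded into a single query param on a
# GET request, even though the call persists data.
encoded = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
resp = requests.get("https://a.klaviyo.com/api/track", params={"data": encoded})

print(resp.status_code)  # always 200, even on failure
print(resp.text)         # "1" on success, "0" on failure, with no error details
```

Because every response came back as a 200, clients had to parse the body just to tell success from failure, and a `0` gave them nothing to debug with.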
Klaviyo’s new API project
With our goal of becoming an API-first company, and our extensive experience as API consumers, we embarked on a new API project. This project was sponsored by our CTO, Allen Chaves, who made it one of his major goals for 2022. He created a special team that drafted our API standards and specifications, and that team sought approval from the larger engineering organization as it made progress.
Early on in this project, we formed our API Review Council to serve as the resident API experts for cross-functional initiatives, API spec reviews, and planning, to provide guidance on edge cases, and to help solve challenges.
API principles
As part of this project, we created a few guiding principles:
- Strive to expose primitives: If a user can accomplish something via our UI, they should be able to do it via API as well. The building blocks that make up our application should be exposed to developers to enable them to build new applications and integrations with us.
- Prioritize use cases: Consider the real workflows of users and design APIs that are use-case driven, so that they support efficient and effective usage patterns. We looked into how developers were using our existing APIs and considered how we could build APIs to improve those workflows.
- Be wary of solutions without a problem: When designing APIs, it is important to be hyper-focused on what you are solving for to make intentional, grounded decisions. Solve for the problems your developers are currently facing, while building a strong foundation for future iteration.
- Develop standards (and stick to them): When making decisions we often ask ourselves, “What does the standard say about this?” Unless we have a truly compelling reason to do something differently, we follow the standard. This creates predictability for our developers, which increases their efficiency and meets their expectations.
API decisions
As the project progressed, we faced several decisions. The first big one was which protocol to use, which eventually came down to a choice between REST and GraphQL.
REST vs. GraphQL
GraphQL seemed to address many of the limitations of more traditional REST APIs and has great support for typing. However, it is also newer and presented a steeper learning curve for developers, and it requires careful consideration when it comes to HTTP caching. None of these are deal breakers, and we are big consumers of GraphQL and really like using it.
Related: GraphQL vs. REST
REST is a very familiar technology for our developer community. Our existing APIs were mostly based on REST, and many ecosystem APIs were RESTful. We researched and found a specification for REST that provided many of the benefits of GraphQL: JSON:API. This made us comfortable with choosing REST for our new APIs.
Improved performance
Our existing APIs suffered from several performance issues, including bloated response sizes and the need to chain several API calls together to get the necessary data. Many data structures from Klaviyo are most valuable in the context of related data. For example, `Events` are very often used in relation to the `Profiles` those `Events` belong to. We wanted to provide the ability to “include” multiple resources in one response and allow the developer to limit the response to only the desired fields for each resource. We also wanted to improve our filtering operations to provide better access to exactly the data required. The result was adopting JSON:API’s includes, along with sparse fieldsets and filtering.
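To make that concrete, here is a rough sketch of what those JSON:API conventions look like from a client’s perspective. The endpoint path, field names, filter expression, and auth header below are assumptions for illustration, based on JSON:API’s `include` and `fields[TYPE]` conventions rather than on our reference docs.

```python
import requests

# Hedged sketch of a JSON:API-style request: fetch events, include the related
# profiles in the same response, trim both resources with sparse fieldsets,
# and filter server-side. Paths, field names, filter grammar, and the auth
# header are illustrative assumptions.
BASE_URL = "https://a.klaviyo.com/api"
headers = {
    "Authorization": "Klaviyo-API-Key <PRIVATE_KEY>",  # assumed auth scheme
    "Accept": "application/json",
}
params = {
    "include": "profile",                          # compound document: pull related profiles
    "fields[event]": "datetime,event_properties",  # sparse fieldset for events
    "fields[profile]": "email,first_name",         # sparse fieldset for included profiles
    "filter": "greater-than(datetime,2022-10-20T00:00:00Z)",  # illustrative filter expression
}

resp = requests.get(f"{BASE_URL}/events", params=params, headers=headers)
body = resp.json()

# JSON:API returns primary resources under "data" and related resources under
# "included", so one request can replace a chain of follow-up calls.
events = body.get("data", [])
profiles = {item["id"]: item for item in body.get("included", [])}
```

The payoff is that one compound request replaces a chain of calls and returns only the fields the client actually needs.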
Rate limiting
Early on in our history, we considered the lack of rate limits a strength of our product: “Send us a virtually unlimited number of events per second.” As demand increased, we scaled out to handle it. We once had a client with a serious fleet of servers and a willingness and capability to distribute their workload across all of them to see if they could push the limits. They lit up a few charts, but it worked. As we matured as a company, we encountered clients who were much less impressed by our lack of rate limiting. Our security team and our engineers responsible for the systems underneath the APIs were even less impressed. So, for our new APIs, we implemented per-endpoint rate limits so engineering teams can maintain rate limits that make sense for their service.
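From the client’s side, per-endpoint limits generally mean being prepared for throttled responses. Below is a minimal retry-loop sketch, assuming the common HTTP 429 status and optional Retry-After header conventions; it is not Klaviyo-specific guidance.

```python
import time

import requests


def get_with_backoff(url, headers=None, max_retries=5):
    """Retry a GET request when the endpoint signals throttling.

    Assumes the API responds with HTTP 429 and an optional Retry-After
    header, which are common conventions rather than confirmed details
    of Klaviyo's per-endpoint limits.
    """
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if present; otherwise back off exponentially.
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Still rate limited after {max_retries} retries: {url}")
```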
Scopes
To prepare for future API improvements (such as OAuth), we implemented scopes as part of this API release. This allows our customers to provision read/write permissions along various logical boundaries of our product, achieving better granularity and restricting access to only what is necessary.
Versioning policy
As we release new endpoints and update existing ones in our new APIs, we wanted clear protocols for revisions. This allows developers to accept and test new API changes when they are ready, on an endpoint-by-endpoint basis. It also allows us to set clear expectations around when APIs will be sunset.
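As a hedged illustration, opting into a specific revision on a per-request basis might look like the sketch below; the `revision` header name and the date-based value are assumptions used only to show the endpoint-by-endpoint opt-in idea.

```python
import requests

# Hedged sketch: pinning a request to a specific API revision via a header.
# The "revision" header name and the date-based value are assumptions used
# only to illustrate opting into changes endpoint by endpoint.
headers = {
    "Authorization": "Klaviyo-API-Key <PRIVATE_KEY>",  # assumed auth scheme
    "revision": "2022-10-17",                          # illustrative revision identifier
}
resp = requests.get("https://a.klaviyo.com/api/profiles", headers=headers)
print(resp.status_code)
```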
Documentation, SDKs, and tooling
Keeping documentation and SDKs up to date can be challenging. We had previously relied on manual updates to each, which led to prioritization challenges. As part of this project, we prioritized automation wherever we could, and this was especially important for documentation and SDKs. We utilized the OpenAPI specification and OpenAPI Generator to automatically generate API documentation, SDKs, and a one-click Postman Collection import. Our Klaviyo Developers Postman workspace holds collections for our legacy, stable, and beta APIs and allows engineers to quickly get up and running with our APIs. Now, when a Klaviyo engineer introduces a new API endpoint, the docs, Postman workspace, and SDKs are all updated immediately and automatically.
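Here is a simplified sketch of one such automation step, regenerating an SDK from the OpenAPI spec with OpenAPI Generator; the spec path, generator choice, and output directory are placeholders rather than our actual pipeline.

```python
import subprocess

# Hedged sketch of an automation step: regenerate a client SDK whenever the
# OpenAPI spec changes. The spec path, generator choice, and output directory
# are placeholders, not Klaviyo's actual pipeline.
SPEC_PATH = "openapi/stable.json"
OUTPUT_DIR = "sdks/python"

subprocess.run(
    [
        "openapi-generator-cli", "generate",
        "-i", SPEC_PATH,   # input: the OpenAPI spec
        "-g", "python",    # generator: a Python client SDK
        "-o", OUTPUT_DIR,  # output directory for the generated code
    ],
    check=True,  # fail the pipeline if generation fails
)
```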
Alpha and beta testing
To support this project, we ran an extensive alpha and beta program. We first used logs to find prolific API users and invited them into our private alpha, then made it possible for users to request an invitation. One major lesson from this process was how little feedback we received. It was difficult at times to know whether developers were too busy to test the APIs or whether we had met their requirements and they were happy with what they saw. We will definitely be more strategic about alpha/beta testing APIs in the future to increase participation. In the end, internal API consumers and a few external champions were a major source of feedback during the alpha and beta phases.
Conclusion
We launched new APIs built on REST with JSON:API, and we utilized the OpenAPI specification to keep our documentation, SDKs, Postman workspace, and other tooling always up to date. This has allowed us to deliver on the core use cases for our developer customers while building a foundation for making our application API-first. Developers can now make compound requests to retrieve multiple resources at once, utilize sparse fieldsets to limit response sizes, and use improved filtering syntax to get exactly the data they need.