How Postman uses Postman: bridging the gap between technology and data science

We love hearing from our community about how you’re using Postman to improve your API development experience—it truly continues to inspire us. But have you ever wondered how we might be using Postman internally as we’re building the Postman API Platform and optimizing our own workflows? In this installment of our “How Postman uses Postman” blog series, we’re taking a look inside Postman Open Technologies to see how they’re using Postman at Postman.

Today, we’re chatting with Pascal Heus, who is the data lead in Postman Open Technologies. Pascal is based in Alberta, Canada.

Pascal, what kind of work do you do with data?
For over 20 years now, I’ve been working around data, with a strong focus on “high-value data,” which is data that’s used to measure things like the health of the planet and the state of our societies. This includes, for example, official statistics like population censuses, employment figures, and socio-economic data, or, in the scientific domain, things that have to do with climate change, health, agriculture, and so on. Such data is used by governments, international organizations, and the private sector when making decisions that have national or global impact, with the aim of making the world a better place. My mission has been to bring technology, standards, best practices, and machine intelligence into that space.

How does this data work relate to Postman and APIs?
When you think about things like books or cars, there are common practices that exist for describing them. I’ve basically been involved in establishing similar standards but for documenting various kinds of datasets in a very comprehensive way, and then developing tools and technologies based on those specifications. As for how this relates to Postman, the end game is to make this knowledge available in ways machines can understand for discovery and access over APIs, so it can effectively be used by applications or AIs.

My goal is to convert human-friendly information into knowledge that can be understood by machines, because this allows APIs to deliver both data and machine-actionable metadata (digital documentation).

If you just have data files or tables, there’s not much you can do with them without serious extra effort (a.k.a. data wrangling). APIs are a gateway to digital knowledge. At the end of the day, all the information is channeled through them, and that’s how I found a space at Postman.

Getting to know Pascal

What were you doing before you joined Postman?
I worked at the World Bank for many years, and that’s where my data career started. I was working with national statistical agencies in developing countries, strengthening capacity in terms of producing and disseminating statistical data. Later on, I joined forces with a few people working in this area, and we created a small company called Metadata Technology North America. We always had a vision of bringing data and metadata together through APIs, and we built platforms around that.

As part of our projects, we used Postman to document our APIs. When Postman Visualizer first came out, we were among the first to do some really cool things with it. We would take some metadata from the database, generate Postman Collections, use the Postman Visualizer, and generate data documentation. Now that I’m at Postman, I feel like I’ve found a really good place to keep building on that dream of delivering APIs that bring metadata and data together.

What are some challenges you’ve found with bringing technology into the field of data?
High-value datasets can be very complex and they typically aren’t just stored neatly in a database somewhere. They also tend to be managed by organizations that aren’t particularly technology-centric (government agencies, archives, research centers) and often lack what is called metadata. You’ll find data out there when it’s public, but with very little documentation. You may find CSV files and, if you’re lucky, some PDFs or Microsoft Word documents, but you commonly won’t find documentation in machine-friendly formats, which is what I call metadata.

Related: Open data APIs: standards, best practices, and implementation challenges

What does your day-to-day work look like?
It involves a lot of research, because it’s all about bringing new technologies and practices into this space. The data itself usually exists, and we know how to manage it: we can put files in a database, and that’s not a problem. The barriers are the lack of metadata and APIs, because we often don’t have tools to manage the knowledge around that data.

I mentioned earlier that there are standards for describing the data, but typically these standards don’t come with tools or APIs. Basically, I’ve been trying to build solutions that surround these standards and best practices. My work also involves talking to people about standards, educating them about both technologies and data science, and explaining how and why to do things this way.

What I really like, though, is the technical part: actually coding and developing tools. Part of my work is also spending time in meetings, talking to people, or writing articles to advocate standards and best practices. I try to spend the majority of my time building things rather than talking, because I think it’s important to deliver tools that people can actually use rather than just advocating for them.

How Pascal uses Postman

Exactly how do you use Postman while working on Postman?
I work with Postman quite a bit. I’ll often create collections because there’s some API out there that I think is important. These are what I call invisible APIs: they exist, but they’re not documented, and they don’t adhere to standards. Such APIs should always be documented in Postman, as this is how many people discover and use APIs. But my overall goal is to automate and facilitate this process as much as possible.

I therefore spend time implementing utilities that ultimately generate well-documented collections. With the right metadata surrounding the data, you can almost completely automate the process. So that’s where I’m trying to go: some of my work is done outside of Postman, but I always end up documenting APIs in Postman.
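To make that concrete, here’s a minimal sketch (in Node.js) of the kind of utility Pascal describes: a script that turns dataset metadata into an importable Postman Collection following the Collection Format v2.1 schema. The `dataset` object and its URLs are hypothetical placeholders, not any real metadata standard:

```javascript
// Minimal sketch: turn dataset metadata into a Postman Collection (Format v2.1)
// that can be imported into Postman. The `dataset` object below is a
// hypothetical metadata shape, not a real standard.
const fs = require("fs");

const dataset = {
  id: "population-census-2021",
  title: "Population Census 2021",
  description: "National population counts by region, age, and sex.",
  baseUrl: "https://api.example.org/datasets/population-census-2021", // placeholder
};

const collection = {
  info: {
    name: dataset.title,
    description: dataset.description,
    schema: "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
  },
  item: [
    {
      name: "Get dataset metadata",
      request: {
        method: "GET",
        url: `${dataset.baseUrl}/metadata`,
        description: `Machine-actionable documentation for ${dataset.title}.`,
      },
    },
    {
      name: "Query observations",
      request: {
        method: "GET",
        url: `${dataset.baseUrl}/data`,
        description: "Returns the dataset's observations as JSON.",
      },
    },
  ],
};

// Write the collection to disk so it can be imported via File > Import.
fs.writeFileSync(
  `${dataset.id}.postman_collection.json`,
  JSON.stringify(collection, null, 2)
);
```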

Another big part of what I try to do is not just document an API, but document the data. When something comes back from the API, visualizing it is extremely important. Most of the people who work with data are not developers; they don’t care about JSON, they want to see the results. So, I’ll code stuff in the Visualizer, or even use Postbot to quickly generate outputs. It’s not the majority of my work, but I will always end up spending time in Postman because it’s at the end of the chain for everything I do.
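As a small illustration of that idea, a script like the one below, added to a request’s post-response “Tests” script in Postman, uses `pm.visualizer.set()` to render a JSON response as an HTML table. It assumes, purely for the sake of the sketch, that the response body is an array of flat records with `region` and `population` fields:

```javascript
// Postman post-response ("Tests") script: render the JSON response as an
// HTML table so non-developers never have to read raw JSON.
// Assumption for this sketch: the response body is an array of flat records,
// e.g. [{ "region": "Alberta", "population": 4600000 }, ...].
const template = `
  <table border="1" cellpadding="4">
    <tr><th>Region</th><th>Population</th></tr>
    {{#each rows}}
      <tr><td>{{region}}</td><td>{{population}}</td></tr>
    {{/each}}
  </table>
`;

// pm.visualizer.set() takes a Handlebars template plus the data to bind to it;
// the result appears in the Visualize tab of the response pane.
pm.visualizer.set(template, { rows: pm.response.json() });
```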

Did using Postman change any of your workflows or processes?
I would say yes, particularly the Visualizer, as I think it’s a useful feature, and it’s an example of a tool that I believe is underutilized in Postman. I think there’s so much more we could do. It did change the way I deliver APIs, because it gave me a way to show the output of an API that isn’t JSON. When you talk to non-developers, it’s extremely useful.

The other one that I find really interesting is Postman Flows. My focus is on data, but naturally I have been paying attention to AI, and I believe Flows has amazing capabilities and possibilities there. I would love to see Flows and some of the AI components come together. I think that would be extremely powerful because, in the AI space, you see a lot of these pipelines with things like LangChain and processing workflows. I think it’s almost a natural fit for Flows.

What Pascal has learned from using Postman

What was it like to move from working with data in the public sector to the private sector?
It’s been a very, very different experience. I realized there’s a tremendous gap between the technology sector and the data world of the public and scientific communities. Previously, I was mainly working with academics, governments, or international organizations, and that space is not about technology but is instead about policymaking and making a difference in the world. Technology was all under the hood, and there was a lot of knowledge about data and metadata, how the world revolves around data, and how we make decisions, but not so much in the way of technology skills.

Now that I’m in the private sector, everything is about technology, and there are a lot of experts and gurus, but there doesn’t seem to be a very good understanding of what data is about, specifically high-value data. Of course, there is a good understanding of business databases, but the concept of data as knowledge that drives the world—or the understanding that this is the commodity used to measure how our planet or societies are doing—is missing. Nobody talks about that. So, I realized there’s a very strong need for these two universes to come together because they both need each other. I think there will be a tremendous benefit for the technologists to start collaborating more closely with data custodians and researchers.

What’s the difference between the way developers and data scientists use Postman?
Technically, every dataset in the world is a potential API. It’s important to differentiate between an API and a dataset, because I can have one API that exposes 100 datasets.

A developer will typically document the API as one collection, regardless of how many datasets are behind it. But when you look at it from a data perspective, I’m not going to make just that one collection. If I have 100 datasets, I will create 100 collections, as each one tells you something about a particular dataset. That’s a big difference between the developer view of Postman and the data scientist view of Postman.

People who use data are not particularly interested in APIs; they’re interested in datasets. So when they open Postman, they don’t want to see an API that gives them access to 100 datasets. They want to see an API that allows them to access the one dataset that truly interests them. Even if the same API is used for many datasets, every dataset becomes a potential collection. And this is why Postman is so interested in data and supporting researchers and scientists, because we have the potential to generate literally thousands—even hundreds of thousands—of collections that are dataset-centric versus API-centric.
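One way to picture that dataset-centric fan-out is a small script that loops over dataset descriptions and creates one collection per dataset via the Postman API’s collections endpoint. This is only a sketch: the dataset list and data URLs are hypothetical, and a real utility would carry much richer metadata into each collection’s documentation:

```javascript
// Sketch of the dataset-centric fan-out: one Postman Collection per dataset.
// The dataset list and data URLs are hypothetical; POSTMAN_API_KEY must hold
// a real Postman API key. Uses the built-in fetch available in Node.js 18+.
const datasets = [
  { id: "census-2021", title: "Population Census 2021" },
  { id: "labour-force", title: "Labour Force Survey" },
  // ...in Pascal's example this list would hold 100 entries.
];

async function createCollections() {
  for (const ds of datasets) {
    const collection = {
      info: {
        name: ds.title,
        schema: "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
      },
      item: [
        {
          name: "Get data",
          // Placeholder URL standing in for the shared API behind all datasets.
          request: { method: "GET", url: `https://api.example.org/data/${ds.id}` },
        },
      ],
    };

    // Create the collection in the workspace tied to the API key.
    const res = await fetch("https://api.getpostman.com/collections", {
      method: "POST",
      headers: {
        "X-Api-Key": process.env.POSTMAN_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ collection }),
    });
    console.log(`${ds.id}: HTTP ${res.status}`);
  }
}

createCollections();
```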

How does this relate to your ongoing project?
That’s kind of the endgame for the project. The goal is to promote the use of data and metadata with APIs, but for Postman, it’s also a way to create a lot of workspaces with hundreds or even thousands of collections. At the end of the day, the more data APIs exist, the more Postman can be a tool that is attractive not just to developers but also to data users and other communities that don’t currently think of Postman as a tool they can use.

If you talk to data scientists, policymakers, or economists, they might not even know the term API and don’t realize they use APIs all the time. And that relates to our vision: how do we grow to 100 million users? We know we have to grow beyond developers. So that’s another aspect of our data project—to reach out to new communities of Postman users. The vision is Postman for data, but the work is more about building tools that enable the creation of more data APIs.

How do you see the fields of data and technology evolving in the coming years?
If you look at the world today, there’s an interesting thing happening. If you talk to data scientists who are in their thirties or younger, they were born with technology. They use Python or R, and they’re pretty comfortable with APIs because they’ve been taught how to program at university. But if you look at data producers or researchers who are a little older, they’re used to statistical tools like SAS, Stata, SPSS, or Excel, and they often don’t know what an API is. These are the senior managers in charge today, and their unfamiliarity with technology and APIs is a barrier to modernization.

I think in the next 10 years, things will change significantly as the younger individuals who grew up with technology and understand the value of APIs eventually become the decision-makers. But in the meantime, convincing today’s decision-makers that they need APIs and metadata is a priority. We have to present Postman in a very different way to help them understand that our platform will help them on their mission to deliver data to key stakeholders and the public.

Without revealing any secrets, what’s something you’re excited about working on or exploring for the future of Postman?
For me, even before I worked for Postman, I was advocating for Postman as the right tool to deliver APIs to users, whether they’re developers or data scientists. Of course, the platform can get better as well, and hopefully the work we’re doing will impact the product in that sense. That’s kind of the role of Open Technologies: to sometimes push the platform into places it wasn’t designed for and say, maybe we could enhance a certain area of the product. We contribute to the platform in an indirect fashion. The beginning of this year has really been a reboot of the data project, and I’m really excited about building that.

More about Pascal

Do you have any side projects or hobbies that you want to share?
I use Postman outside of work for a few things. For example, I really like to play EVE Online. It’s an MMORPG space game, and it’s likely the most complicated of its kind. People who play that game spend hours analyzing the game data, and some of this information is available through APIs. So, sometimes I do use Postman to do some extraction or analysis of gaming data, which I think is a very interesting market as well. The gaming industry has a lot of data, and naturally, there are a lot of APIs out there. So sometimes I play a little bit with Postman, these types of APIs, and some geeky stuff around that.

Besides that, I play hockey and music, but I don’t use APIs on my drum set yet. The rest of my free time is with my kids. My son is going to university next year, and he’s trying to figure out what he’s going to do but has shown some interest in data and sports analytics. At some point, I hope to introduce him to the technology side of that.

I think for me, the main thought at the end of the day is that it’s not about technology. It’s about making a difference. And I think APIs and Postman provide some amazing mechanisms to make a difference in the world. For me, that’s the important message. I believe Postman is the best platform we have available today. I’ve been saying that since before I joined the company, and I’m very happy to be part of its growth. My work and my passion are around technology, but it’s what we can do with it that actually makes a difference in the world, and I think Postman is a pretty cool platform for that.

The bottom line

Pascal’s team focuses on the open source tools and standards around APIs, strengthening the global open source community that we all rely on. Postman Open Technologies uses, supports, and contributes to open source technologies that relate to APIs, and it draws on its experience and expertise from these communities to continue making the Postman platform better. In particular, Pascal is dedicated to bridging the gap between data science and technology so that policymakers and decision-makers can reliably access data about the world and its citizens through APIs. Pascal also leverages Postman Collections and the Postman Visualizer to make APIs accessible to non-developers.

Thanks for sharing your thoughts and experience, Pascal!

Tell us what you think in a comment below. Interested in becoming a Postmanaut by joining our team? Check out our Careers page.
