
· 17 min read

In this episode, Ry and Pavlo talk about pg_timetable, about the value and place of risk in the ecosystem, and about building for the Postgres core. If you haven’t seen or listened to it yet, you can watch below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Pavlo for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.


[00:00:12] - Ry

Hello, welcome to Hacking Postgres. I'm Ry Walker, founder of Tembo, a managed Postgres company. And today I have Pavlo Golub (did I say that right? Pavlo? Yeah, you can fix it) from Cybertec as my guest. So Pavlo, welcome to the show.

[00:00:30] - Pavlo

Yes, thank you for having me. Hi there.

[00:00:32] - Ry

Hi there. Yeah, so maybe you could start by giving us a quick background, like where you grew up and what were you doing before you started working on Postgres.

[00:00:41] - Pavlo

Okay, so I was born in the Soviet Union, so I'm a little bit old, but yeah, I lived in Ukraine. And the first time I saw Postgres was when I was in high school, it was like the fourth year of high school, and at that time it wasn't very popular. Everybody was talking about MySQL as the only open source database. And yeah, at that time I saw PostgreSQL, I tried to install it and run it. It was hard for me to do that, because at that time the version was like 7.4 something, and there was no installer for Windows, and we were mostly Windows guys at that time. So, yeah, I needed to install Linux and then install Postgres. Yeah, I managed to do that, but I wasn't happy about it. The whole process felt cumbersome or something like that. So I left it for a couple of years, and then in my first job in IT I was a support guy, so it was directly connected with Postgres. So I started my IT career with Postgres, and I'm happy that I had this chance.

[00:02:36] - Ry

Yeah, that's crazy. I don't personally remember when I first started using Postgres. I think it could have been when I started using Ruby on Rails, I don't know. It's funny, I don't remember when I first used Linux or a lot of things, but it's great that you have that distinct memory. You started in support, and you've built some stuff on top of Postgres since then. Tell me about the body of work you've done on Postgres or with Postgres. I'm curious to know what you've built.

[00:03:13] - Pavlo

So right now at Cybertec I'm a consultant, that's what my position is called, but I prefer to think of myself as a developer. I'm not an admin, I'm not a DBA, I'm a developer. So I'm mostly doing development things for Postgres. I had a couple of small patches to the sources, but I don't consider them major things. I did some client libraries for Delphi, if somebody still knows what it is, and later I switched to C, Go, Python, that kind of stuff. And right now I mostly develop in Go. At the moment we have a couple of utilities made at Cybertec, and I'm in charge of them. One of them is pg_timetable, which is a scheduler for PostgreSQL, and another is pgwatch, which is a monitoring solution for Postgres.

[00:04:30] - Ry

Yeah. Yeah. What caught my attention was pg_timetable, because my previous company was built around Apache Airflow. So the idea of doing DAGs, jobs that are chained off of other jobs: to me, there's no other way. Trying to schedule jobs and hope that they finish by the time the next, dependent one starts is something that people used to do, and they should never, ever do anymore. But I'm sure it still happens, right? And certainly if people are using pg_cron, there's no way to chain them, as far as I can see.
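For reference, this is the pattern being described: with pg_cron, each job runs on its own schedule, and nothing expresses a dependency between jobs. A minimal sketch (job names, times, and commands are hypothetical):

```sql
-- Each job fires on its own cron schedule; the second one simply hopes
-- the first has finished by 03:30. There is no dependency between them.
SELECT cron.schedule('load-raw',     '0 3 * * *',  $$CALL load_raw_data()$$);
SELECT cron.schedule('build-rollup', '30 3 * * *', $$REFRESH MATERIALIZED VIEW daily_rollup$$);
```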

[00:05:11] - Pavlo

We had a client with a process lasting like 14 or 15 hours, consisting of many steps which are dependent on each other in some complicated ways. So we used a Makefile to split them into jobs and to specify that this stage must be executed after that one. And that worked fine. But it's cool to have a special utility to run that kind of jobs, or chains, whatever.

[00:05:56] - Ry

Yeah. Well, one of the things I think about a lot is how the Postgres server is doing lots of jobs, right? I don't know how many processes it's dealing with, someone with better knowledge of the internals would, but I know it's certainly vacuuming, it's certainly processing queries. It's doing a lot of things, right? And then the question is: I'm a fan of SRP, the single responsibility principle, in programming in general, but I also like monoliths too. We all have conflicting ideas in our heads. And having a queue managed inside Postgres, you're kind of straddling, right? Because you have a container next to it, so some of the work is happening over there, but some of it is inside of Postgres too. Why not have it all be inside? What are you thinking about for future versions of pg_timetable? Why not just do all the work inside of the running Postgres cluster?

[00:07:00] - Pavlo

So, yeah, thank you for the question, because that was like the second thought when we released pg_timetable. The idea is that PostgreSQL uses a separate process for everything. If you have a new connection, you need a new process. So if you want a scheduler like cron inside Postgres, you will need a background worker: a separate process. It's possible, yeah. We can create extensions with Go, it's possible, yes. But right now, I'm not sure what benefits we would have.

[00:07:58] - Ry

It also feels kind of gross, right, to think of throwing it all together. But I have this conversation all the time with my team. Say you have a system that's dependent on Postgres, let's say there's an API, and Postgres is only invoked when the API is called. You could theoretically put all that compute into one bucket. Say it's a Rust API: should you serve that Rust API as a separate container and have the separation of concerns? Or should you create a custom Postgres extension and let the Rust run inside the Postgres cluster, where there's zero latency to the data? But it's a little bit of a Frankenapp at that point. It's a tricky problem.

[00:08:52] - Pavlo

I will tell you that I want to implement this scheduler as an extension for Postgres, not because I think we will get some benefits, but because it is a very interesting task for me as a developer, first of all.

[00:09:08] - Ry

Yeah. Is pg_timetable supported by any cloud providers right now?

[00:09:16] - Pavlo

Yeah, we can run pg_timetable against any cloud provider. If you want, you can run it in Docker and Kubernetes, or as a standalone binary, or you can even grab the source code and put it into that fancy AWS one-time run-and-forget thing, I don't remember what they call it.

[00:09:46] - Ry

Yeah, but because it has the sidecar, it probably can't be like a standard extension in RDS, for example.

[00:09:54] - Pavlo

Right, it might be, but I think to make this happen, we need to somehow communicate with AWS and make that happen. It's more like politics than anything.

[00:10:10] - Ry

I think it's also just, I don't know if any of the other extensions that they offer have a sidecar requirement, right? So that's one of the things we're thinking about at Tembo: let's allow sidecars for extensions in our managed service. Again, it's a lot more dangerous in the sense that there are more things that could break, but it's also more powerful. And if we had the rule that dangerous things are bad, you wouldn't have a combustion engine inside of a car, where little explosions are happening all the time to power it.

[00:10:50] - Pavlo

On the other hand, you can build your own image, right, and somehow limit the binary in it from doing dangerous things, et cetera.

[00:11:02] - Ry

Yeah. I joke sometimes and say we're building the dangerous Postgres. We support 150 extensions now, so yeah, our support footprint is bigger and scarier than if we had 75, but I still say it's worth it to the developer to have the capability. So I'd love to get pg_timetable as a standard option. I want to use it, basically. What's funny is I'm building a data warehouse right now for our company, and I started with pg_cron, and I have these four jobs that all start at the same time, and it's very early, so it's not a problem. But I would love to have them be chained. Like I said, I could fake that by spacing things out, but I really feel like I've stepped back 20 years into the world of early data processing. So I appreciate what you built, and I'll use it soon.

[00:12:08] - Pavlo

Okay. Let me know if you're happy with it.

[00:12:11] - Ry

Yeah, I think it's a good idea, and it could also help developers avoid having to go pick up some other tool for the next stage of their data processing, which to me is a big win. And we're trying to make the meme "Postgres for everything" happen. I know you are too, in a sense, because you built the extension. But yeah, I think there's so much it can do, and it can do a lot more than just writing and reading data.

[00:12:54] - Pavlo

One more thing about having pg_timetable as an external application is that you can use your own binaries inside jobs, like psql or pg_dump or something, to grab something, to convert something, et cetera. So we want developers to have the possibility to create their own Docker images with pg_timetable and only the binaries or tools they need. In this way it's like a Swiss Army knife, right? I need pg_timetable and a couple of utilities for grabbing things or manipulating files, et cetera. Then we pack it, and that's all. We don't need to think about how we should allow the application to install them, or have them in the system version, et cetera, et cetera.

[00:13:58] - Ry

Yeah, basically what you're saying is it gets even more dangerous quickly, right? The base use case is one thing, but then people start doing wilder things, including their own packages, right? Completely custom, of course. Nobody knows what's inside of it. It could be a bitcoin miner, it could be anything, right?

[00:14:23] - Pavlo


[00:14:26] - Ry

Are you working on the next version of it or is it pretty stable right now and doesn't require a lot?

[00:14:32] - Pavlo

So at the moment we are stable. We are open for ideas and requests, et cetera. But at the moment we are fine, I believe. It's like in maintenance mode, so we update versions, packages, et cetera. No new bugs. I'm happy with it.

[00:14:56] - Ry

Did you build it? Was it sort of a small team there or did you kind of build it solo?

[00:15:02] - Pavlo

So the initial idea was Hans's, our CEO. And yeah, that was my first project in Go. So I was trying to learn a new language, to see how I can do this, and here we are now. That's cool, I believe.

[00:15:36] - Ry

What would you say were the biggest challenges you faced while building it? Was it hard to, for example, understand the internals, or was that part easy? I'm kind of curious what was toughest.

[00:15:49] - Pavlo

For me, I would say it was the new tools that you need to use. If you are familiar with your favorite IDE, then for the Go language you need to switch to something like VS Code or GoLand or whatever. This is a completely new tool for you, and you don't have the muscle memory for shortcuts, et cetera, et cetera. It's kind of difficult. For like six months I was trying to get used to the new tooling, but after that I'm pretty happy.

[00:16:26] - Ry

Yeah. So when was it that you started working on timetable, pg_timetable?

[00:16:35] - Pavlo

Maybe three years ago, something like that.

[00:16:39] - Ry

Yeah. Okay, well, that's great. So have you guys implemented it many times now for customers or how has it been? I know it's not a commercial product, but I assume it should be helping the consulting business, right? Yeah.

[00:16:58] - Pavlo

So it usually goes in a package with something. Usually if we have a new client, we are talking about high availability, and we are talking about monitoring, and we are talking about this and that, and then we say, okay, we have a scheduler if you want it, and usually people want it. I don't remember us having a standalone sale of pg_timetable. It's usually packaged into something bigger, usually with high availability and monitoring.

[00:17:33] - Ry

Got it. It’s like a sweetener, right? Cool. Well, maybe I'll shift into some more general topics. I'd love to know what are some of the developments in Postgres ecosystem that you're excited about?

[00:17:50] - Pavlo

So, Postgres itself, the core. I constantly want to do something for the core, but...

[00:18:03] - Ry

It's like you're racking your brain around there.

[00:18:07] - Pavlo

The process itself is not very user friendly. First of all, you need to communicate through the email lists: no GitHub, no issues, no pull requests. And you need to constantly update your patches, because the development is very active, and you need to defend your patch against many people in the community: why you did it, for what purpose, why you did it that way and not another. So that might take some time.

[00:18:46] - Ry

Who are your favorite people in the Postgres ecosystem? I hate to make you play favorites, but people I should interview, get to know... anybody come to mind?

[00:18:56] - Pavlo

Yeah, a lot of them, frankly speaking. Well, I think that the community is the number one reason why I am still doing Postgres things. I was working for like five years without knowing anybody from the community. And then in 2011, I was at the PG conference in Europe for the first time, in Amsterdam, and that changed my mind absolutely. I saw what the community is, what the main things are, and why they work that way and not another. So if you want names: Bruce Momjian, for sure. Magnus Hagander, Ilya Kosmodemiansky, Vik Fearing, Andreas Scherbaum, Lætitia Avrot. A lot of them.

[00:19:59] - Ry

Well, I'll reach out to those people. It's obviously a very big community too; anyone who's touching it is very excited. I'm going to the New York City conference here in a couple of months, so I'm excited to meet a bunch of people there. And we went to an event in San Jose last spring, which was fun. All right, so here's another one. If you had a magic wand and you could add any feature to Postgres, what would you change about it, or what would you add?

[00:20:39] - Pavlo

I don't think that I can change something in a way that would be a breaking change, but I would like to see if we can implement Postgres using threads, not processes. I would like to see if that might be a better solution than what we have now. I'm not sure.

[00:21:04] - Ry

It's being debated, right?

[00:21:07] - Pavlo

Yeah, absolutely.

[00:21:08] - Ry

Yeah, we'll see.

[00:21:09] - Pavlo

It's a hot topic.

[00:21:10] - Ry

Hot topic, yeah. I did join the mailing list for a short period of time. I'm starting a company too, and it was just too much. I was like, oh my gosh, this is a fire hose, and someday I want to have the time to consume it. But today wasn't that day. Is there anything about Postgres that no one agrees with you about? Do you have any contrarian views about any part of it that you can think of?

[00:21:38] - Pavlo

Yeah, probably my views about Windows, because I was a Windows guy and I'm still a Windows guy, and I'm always saying that we need to support Windows as well as we support Linux. That means installers for Windows, and extensions and tools, et cetera, et cetera. And people usually say, no, Linux is fine, and that's enough.

[00:22:08] - Ry

Yeah, it's interesting, it's fun. I'm interested in a lot of different areas of programming. I was looking at, say, Unity, you know, game development, and it's C#, and I'm like "Ugh", and it seems very Windows-centric, because I'm a Mac user and whatever. For me it'd be Mac, Linux, Windows in terms of my priority, and certainly not C#, right? But anyway, yeah, the religious wars will never end there, I guess, in terms of platforms.

[00:22:47] - Pavlo

The bad thing about our approach right now is that I think we don't have enough good GUI programs for PostgreSQL. Every time, we are using psql to show something in a demo. And I think that for newbies it would be much easier to follow a demonstration if you used some fancy GUI application.

[00:23:15] - Ry

Well, it was great chatting. Where can listeners find you online? Are you on social media anywhere?

[00:23:23] - Pavlo

Yeah, I'm like on every kind of social media. Like GitHub, LinkedIn, Twitter, Instagram, Facebook, Blue Sky, Mastodon.

[00:23:36] - Ry

Great. All right, well, like I said, someday soon I'm going to try pg_timetable and I'll give you some feedback and maybe some ideas in the GitHub and we'll see you there, I suppose.

[00:23:47] - Pavlo

Sure. Thank you.

[00:23:48] - Ry

Great, great chatting with you. Thanks for joining.

[00:23:51] - Pavlo

Thank you. Thank you.

· 19 min read

In this episode, Ry and Eric talk about ZomboDB, the complicated road that led to the development of pgrx, and what it means for the future of extensions within Postgres. You can watch it below or listen to it on Apple/Spotify (or your podcast platform of choice). Special thanks to Eric for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.


[00:00:12] - Ry

Hey, Eric. What's going on, man?

[00:00:14] - Eric

Hi, Ry. How are you?

[00:00:15] - Ry

Long time no talk.

[00:00:16] - Eric

Yeah, it has been a while.

[00:00:18] - Ry

How are things going?

[00:00:19] - Eric

Well, I think.

[00:00:21] - Ry

Well, maybe let's give our listeners a quick intro on what you've done. I mean, Eric Ridge is pretty much a household name. I mean, in case there's someone listening who doesn't know what you've been working on, you want to give us a quick background on your involvement in Postgres over the years?

[00:00:36] - Eric

I think a household name is absolutely not true. First of all, I'm Eric. I am the reason that this Postgres extension named ZomboDB exists. And over the years of developing that, something that's now called Pgrx kind of fell out of all that work; it lets us develop extensions for Postgres using the Rust programming language. And these things have been where my interest has been, as it relates to Postgres, for, I guess, the past seven, eight years now. ZomboDB has been out since 2015, so we're right at eight years.

[00:01:14] - Ry

Do you know how many Rust extensions there are now? An uncountable number, probably. You probably at one point knew how many there were, but now you've lost track.

[00:01:24] - Eric

Yeah, that's a good question. It's hard to know. There's no real clearinghouse around extensions.

[00:01:30] - Ry

If only there was, that would be a great thing for someone to build.

[00:01:33] - Eric

Yeah, it would. I wish somebody would get on that. Right. Yeah, that's a good question. There are some bigger names using Pgrx: Timescale is one, Supabase is one. We recently released plrust, and that's found its way onto RDS, so that's exciting for us. What we see popping up are small extensions written in Pgrx. We see a lot of people writing, I don't want to call them one-off extensions, but one-off extensions that are specific to their business use case and solve some interesting data domain problem that only they have. But now they have the ability to build that business logic or data manipulation or what have you into the database itself.

[00:02:27] - Ry

Without having to go find a C programmer that's willing to do such work. Right?

[00:02:31] - Eric

Yeah. And that was one of the motivators for developing Pgrx. I guess I had been working on Postgres extensions in some form or fashion since Postgres 8.0, and I think that was even before the extension packaging and the CREATE EXTENSION command and all that came into existence. And yeah, over the years, I just got tired of working in C. When you're developing a Postgres extension in C, you're not only putting on your C programmer hat, but you're trying to put on the hat of a Postgres hacker, right? You need to understand the Postgres sources just as much as you do the language itself. So, Rust was becoming popular a number of years ago, and I was like, you know what, I'm going to try to do this.

[00:03:22] - Ry

Well, it's interesting. It's like how a lot of games, if they have mods, the language for the mods has to be easier than the language for the core game, right? So a game might be written in something like Unity, but then they give you, like, Lua as a tool, or JavaScript, or something simple to build with. But in Postgres, that wasn't true. Now, is Rust as simple as Lua or JavaScript? Maybe not, but it's still very beloved by people who try to learn it. In my experience so far, it's 100%: anyone who tries to learn Rust learns it and loves it. I haven't found someone yet who hasn't, and I have a small sample size, but yeah, it seems like a pretty good hit rate in terms of the perfect mix of control and speed with the ease of onboarding.

[00:04:10] - Eric

If you're not a C programmer, then writing a Postgres extension in C is going to be difficult, again, because of C, and also because you're really programming Postgres's version of C, which is great, but it's not JavaScript, it's not Ruby, it's not Java. If you're not a Rust programmer, first you've got to learn Rust, and Rust is different enough from all the other languages to make that a point. But what we're trying to do with Pgrx, and it's going to be a multi-year effort to do this, it may never get done, but we're trying to present the Postgres internals in the most idiomatic way possible in Rust. Right. So that if you are a Rust programmer, or you are beginning to learn Rust, it'll be pretty obvious what you need to do in order to make your Postgres extension useful.

[00:05:10] - Ry

So you said you were working on extensions as early as Postgres 8. Were you a Postgres user or developer even earlier than that, or was that sort of when you got involved?

[00:05:21] - Eric

Yeah, Postgres user, I think we started using Postgres at work in the year 2000 and that might have been I mean, is that 7.1, 7.2? It's quite a long time ago and we've been using it every day since. And, yeah, somewhere along the way we needed to do some work inside the database. So here we are today.

[00:05:46] - Ry

Yeah. For me, when I think about when I started using Postgres, it's like thinking about when I first started using a laptop: I don't remember exactly when, but I definitely transitioned to having one at some point. Yeah, it didn't seem monumental to start using Postgres back then, but obviously it's come a long way, and it's a pretty exciting ecosystem right now. So, yeah. What do you think the most powerful things about extensions are? Do you have any thoughts on what the most powerful things you can do with extensions are, out of curiosity?

[00:06:19] - Eric

It's an interesting question.

[00:06:21] - Ry

What's the coolest part of, I guess what API? Or maybe another good question is what hasn't been exposed yet that should be or needs to be in Pgrx? That would be powerful.

[00:06:32] - Eric

Yeah, I'll start there and then work backwards. There are a lot of internal APIs within Postgres, and a number of interesting ones. There's the index access method API, which is really easy to implement. We haven't added safe wrappers around it to Pgrx, but it's real easy to implement; it's really just a handful of functions. There's the Foreign Data Wrapper API: Supabase released a project that they call Wrappers that'll let you create foreign data wrappers using Rust, backed up by Pgrx. The next big API around that genre of internals would be the Table Access Method APIs, and we haven't exposed those at all yet. They're actually difficult to get at even through our bindings, just because of the way they're set up in the Postgres sources. So when we get around to doing that, and I hope that we will, there's going to be some development effort on our part to safely expose the Table Access Method APIs. But that's the API that would let you write your own heap storage engine in Rust.

[00:07:49] - Ry

Tell me about ZomboDB. I mean, probably not everybody knows what it is. Maybe quickly describe what it does and what it doesn't do, and what's coming next with it.

[00:07:59] - Eric

Yeah, it's the worst named software product.

[00:08:03] - Ry

First off, why'd you call it Zombo, everyone wants to know.

[00:08:06] - Eric

I think, Ry, you're old enough to remember Zombo.com. If you don't remember it, then that just means you weren't playing on the internet; you were probably doing real work. It still exists, so listeners can go and check it out, but the name just came from that. So ZomboDB is a Postgres extension, and it's been through numerous iterations over the past eight years, but it has always been a Postgres extension that lets you create an index on a table that's backed by Elasticsearch. Typically the usage pattern is you create the index indexing all of the columns in the table, and you point it to an Elasticsearch endpoint over HTTP or HTTPS, and ZomboDB keeps that index up to date. It keeps the index MVCC-correct: you only see search results that would otherwise be visible within the executing transaction. It has its own query language for doing full text queries, so users don't have to learn the Elasticsearch query DSL, which is pretty intense.
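The usage pattern Eric describes looks roughly like this (table, column, and endpoint names are hypothetical; the shape follows ZomboDB's documented `CREATE INDEX ... USING zombodb` form and its `==>` search operator):

```sql
-- Index all columns of the table, backed by an external Elasticsearch endpoint
CREATE INDEX idx_zdb_products ON products
       USING zombodb ((products.*))
        WITH (url='http://localhost:9200/');

-- Full text search using ZomboDB's own query language instead of raw query DSL
SELECT id, name
  FROM products
 WHERE products ==> 'long_description:"box of blocks" and keywords:wooden';
```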

[00:09:14] - Ry

Can you use the Elasticsearch DSL with it, though?

[00:09:17] - Eric

You can, and there are a number of ways to do that. The query language itself allows you to insert a JSON block representing a query DSL node as part of an expression. There's a full set of SQL builder functions, so you can build query DSL through SQL statements, and you can mix and match these things together. It's pretty sophisticated on that front. It also exposes nearly all of Elasticsearch's aggregate endpoints, to enable you to do really interesting aggregations with your full text. And since all the fields of a table are also stored in the Elasticsearch index, you can do some really interesting things that involve full text queries and aggregating on various metadata fields, and do all of this with a large distributed Elasticsearch cluster behind the scenes, in near real time. It's really slick. The downside to ZomboDB is that you now have an Elasticsearch cluster running alongside your Postgres cluster. And ES is a living, breathing system; it requires significant attention and maintenance in production. It gives you a lot of capabilities to scale out horizontally very easily, but it is its own living, breathing system.
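Those two styles of query can be mixed in a single expression. A hedged sketch (field names are hypothetical; `dsl.term` is one of the SQL builder functions ZomboDB ships in its `dsl` schema):

```sql
-- Raw Elasticsearch query DSL embedded as a JSON block inside the query language
SELECT * FROM products WHERE products ==> '{"term": {"keywords": "wooden"}}';

-- The same idea expressed through a SQL builder function
SELECT * FROM products WHERE products ==> dsl.term('keywords', 'wooden');
```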

[00:10:42] - Ry

Yeah, it'd be great if you can build a version of this without requiring elastic.

[00:10:46] - Eric

Yeah, I know a guy that's working on that. I know a company that's working on that.

[00:10:50] - Ry

Yeah, someday that'd be possible.

[00:10:53] - Eric

Yeah, well, you had asked where the name came from, and I said all that to say the name came from Zombo.com. Once we kind of got ZomboDB working, we were like, whoa, it's really amazing what you can do with this and how easy it is to use. And one of the mantras from Zombo.com is "the only limit is yourself." And ZomboDB really kind of makes it feel that way, because it makes querying your data, especially real dense text data, just super simple.

[00:11:26] - Ry

And so you use Zombo pretty aggressively within TCDI, which is where you work, right?

[00:11:33] - Eric

Yeah, TCDI is my employer. Yeah. We've had Zombo in production since 2013 and we didn't open source until 2015. Yeah, I don't want to give away numbers, but we have more ZomboDB installations and Elasticsearch instances than we probably want.

[00:11:53] - Ry

Is it sort of a competitive advantage for the business? The power of it, I mean. I know you guys do, like, legal documents; I might get it wrong, but searching legal documents to some degree, right? Would you say the fact that Zombo is inside is a competitive advantage for the business?

[00:12:08] - Eric

Yeah, absolutely. Our industry is e-discovery litigation support, and among many other areas within that domain, searching large volumes of text is a key thing. And we're the only player in our industry that has text search capabilities to the degree that we do. I'm not going to name our competitors or anything, but our ability to do complicated text search, involving fuzzy queries plus regular expressions plus boolean expressions plus long-tail proximity queries, with hit highlighting, just far exceeds our competitors', because of ZomboDB, because of Postgres, and because of Elasticsearch.

[00:12:55] - Ry

Do you use the extension in both transactional use cases as well as analytical? Or is it mostly just powering the search bar inside the product, or are there some back-end, I guess I'd call them analytical-type, use cases for it as well?

[00:13:12] - Eric

Yeah. So with Postgres, right, everything is transactional.

[00:13:15] - Ry


[00:13:16] - Eric

We expose search in a very complex manner, I guess, to our users. It's much more than just a search box: they can search by particular fields and design proximity queries, and they can do term lookups. They can create real time dashboards and whatnot through our software. That's all backed by ZomboDB. One interesting use case: in the world of litigation, the plaintiffs will provide the defendants with a list of keywords that need to be searched in order to find documents that are responsive to these topics. And these keywords are sometimes literally one word. Sometimes they're phrases, sometimes they're wildcard terms, sometimes they're phrases with embedded wildcards, sometimes they're patterns that need to be recognized. So the analytics side of what we use ZomboDB for is taking these ginormous lists of things, kicking a search off, and getting back a list of the 6 million documents out of the 200 million that might match the set of keywords that the plaintiffs have requested production for.

[00:14:25] - Ry

Yeah, that's intense. I can see the power of that. What are your other favorite extensions? Probably all the other ones you've made, but give me your top five Postgres extensions. Zombo's one. What are a couple of others that you like?

[00:14:39] - Eric

I don't use PostGIS, but I am super impressed with it. I think of it as kind of the gold standard of what a Postgres extension ought to look like. ZomboDB does integrate with it, but the data world that I live in doesn't involve the need to do geographical-type queries. But I've always been really impressed with it. I think the Postgres foreign data wrapper as an extension is really great, and that comes from the core community; pretty impressed with that. Citus has always been interesting to me. I guess they've long since been bought by Microsoft and are still going strong. I think that Citus really shows off the power of what a Postgres extension can do, and it also shows off the power of having committers on hackers who can help move the extension system in Postgres forward, so that you can then implement something like Citus on top of it.

[00:15:36] - Ry

When they were building that, did they have to add things to extensions in order to build that? I don't know the whole history of Citus, but was that kind of like.

[00:15:46] - Eric

Yeah, they've had a number of changes in postgres, I think, especially in the planner and executor to be able to support the ability to extend it better. And I think some of that came from the original Citus company people. That's great, right?

[00:16:03] - Ry

Yeah, that's awesome.

[00:16:05] - Eric

What I like to see are people using Pgrx to just solve a problem that they have with their application. I see things like somebody just needed to talk to an S3 bucket, and so they put together a little Pgrx extension to do that. Little things like that are what get me excited about Postgres extensions.

[00:16:26] - Ry

Yeah, it's pretty timely. I have a co-op that started literally today. Shout out to Jay. I told him to build a Clerk FDW using the Supabase Wrappers SDK. And he's like, what the hell did you just say to me? I said, just try reading the docs; we'll see where it goes. But I'll report back how that goes. But yeah, we're using Clerk for auth for our company, and I just need simple read access to three collections. So I figure he should be able to get it done by the end of the day, but I might be pushing too hard.

[00:17:00] - Eric

Well, it's possible. And that's a great example of where having a simple framework for making an extension is really powerful. Right. I mean, once you figure out the documentation and how to tie the two things together, it probably is just an afternoon's worth of work. And now you've got something that solves a problem and you can move on.

[00:17:21] - Ry

Yeah, I figure his second one maybe will take an afternoon, first one might take a few days, but we'll see.

[00:17:26] - Eric

Well, sure, yeah.

[00:17:28] - Ry

Well, that's great. Yeah, we're super excited, obviously, by all the possibilities that Pgrx brings to Postgres, and the great work you guys have done historically to get us to this point is much appreciated.

[00:17:42] - Eric

Well, thanks. That's one of the reasons why we, TCDI, donated Pgrx to the PG Central Foundation: to provide it a long-term home for the Postgres community. TCDI is a litigation support company. We're not a database company, so to speak. So we wanted to get this stuff in the hands of a foundation that can help shepherd it forward and make sure that it can survive.

[00:18:09] - Ry

I imagine you have some open issues there that you could use help with. They're probably pretty hard to do, right? Like building a framework is a lot harder than building a thing with a framework. But are you looking for additional help from people to build out Pgrx?

[00:18:23] - Eric

Yeah, for sure. I can't even tell you off the top of my head how many contributions we've had over the past four years that Pgrx has been alive. And it seems like every couple of weeks somebody new comes by with a new contribution. But yeah, we're focused on two things right now, kind of concurrently. One is improving Pgrx's testing fundamentals, both tying more interesting things into CI, like Valgrind, and trying different approaches to unit testing, which includes a big push on property testing right now within Pgrx. I think there's going to be some pull requests landing in the next week or so with that. And we're also focused on improving some of the memory safety and soundness impedance mismatches between Postgres internals and Rust's compiler. So it's a big concerted effort to take a look at these issues.

[00:19:25] - Ry

Well, it seems like contributing to Pgrx could be like a gateway drug towards getting closer to helping with postgres itself. Right. Because you need to understand the internals to do anything in it, you know, at least unless you're working on tests. Yeah, I mean in general, adding new features to Pgrx means understanding postgres well enough to create that interface, which is a big challenge.

[00:19:48] - Eric

It does. And myself and the others on the core development team there, we spend probably as much time in the postgres sources as we do actually writing code for it.

[00:20:03] - Ry

If you guys come up with a big new release of Pgrx or any other extensions that you can think of, I'd love to have you back. Eric, it was great catching up with you.

[00:20:12] - Eric

Yeah, sure. Likewise. Thank you so much for having me.

· 19 min read

In this episode, Ry and Adam talk about developing extensions for Postgres, being in the trenches of the modern data stack sprawl, and the future of what’s possible with Tembo Stacks. If you haven’t seen or listened to it yet, you can do so below, or on Apple/Spotify (or your podcast platform of choice). Special thanks to Adam for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.


[00:00:13] - Ry

Welcome to the show, Adam. Excited to chat with you today. How many postgres extensions have you authored in 2023?

[00:00:23] - Adam

Oh, it's at least two.

[00:00:28] - Ry

At least two?

[00:00:29] - Adam

Well, in early 2023, I was just like, getting started and trying to figure out how to write one. And I know I wrote like a handful of them that didn't really do anything. I think the first one was just put like a rest server inside an extension and see what happens. Kind of didn't work very well.

[00:00:57] - Ry

Yeah, I remember that.

[00:00:59] - Adam

I think I made like, a prometheus exporter, and then recently there were two ones that are kind of more useful.

[00:01:05] - Ry

Yeah, that's great. Well, so obviously we know each other. We both worked together at Tembo. I mean, I could ask you questions as if I don't already know the answers, but since I do, I'm not going to do that. I was going to say getting into postgres, though, joining Tembo wasn't your first touch of postgres. Do you remember starting with, like I can't really quite remember when I started. But can you remember when you did?

[00:01:33] - Adam

Well, the first time I heard of postgres, I'm pretty sure, was in undergrad, in a database management systems class. And it was like, there's Oracle, there's these companies around it, and then there's open source stuff, PostgreSQL, you know? And I remember thinking, like, oh, that's cool. And I probably Googled it and saw the elephant. And then I know I started using it in undergrad. I was doing some scientific computing stuff and I just needed somewhere to put the data. I didn't want to just keep writing to CSVs. I wanted to put it somewhere that was easier to search. And I just used postgres for it. Didn't really know what I was doing. It was just like, dump it into this database.

[00:02:21] - Ry

Nice, nice. And then as far as hacking postgres, that just started recently, or have you messed with the internals of postgres prior to this year?

[00:02:35] - Adam

Definitely not prior to this year. It's mostly been focused on the application layer, building stuff on top of postgres. This year I'm still getting deeper into the internals of postgres. I had installed some extensions before this year. Definitely didn't even think about how you'd build an extension, but it's.

[00:03:00] - Ry

Pretty easy to build them?

[00:03:01] - Adam

Yeah, that's very recent for me.

[00:03:02] - Ry

Pretty easy to build them. Right. Would you say?

[00:03:04] - Adam

Yeah. A shallow one is easy if you can learn rust. Yeah, it's a challenge. I think it's hard.

[00:03:17] - Ry

Yeah, I was kind of kidding.

[00:03:19] - Adam

And then there's like a learning curve, and then once you get over a curve, you can kind of get moving with it.

[00:03:27] - Ry

Maybe. Let's talk about the extensions that you're, I guess, most proud of or have the most traction so far. What are those?

[00:03:39] - Adam

Pgmq, I think, has the most traction. It's a message queue extension, a lot like SQS, Amazon's Simple Queue Service, or Redis Simple Message Queue, except on postgres. We wrote it for Tembo to help run our cloud. We needed a message queue between the Control Plane and Data Plane, so we wrote it for that. And then just in the last couple of months, we started kind of talking about it in the community. And after we wrote a blog about it, we've had a few people from the community that didn't know about it before who are now consistently contributing to the project. So Pgmq is definitely the one that I think has the most traction.

[00:04:36] - Ry

And I think you're also working on well, you kind of have a stack of them, right? Because your other extension relies on Pgmq. True?

[00:04:48] - Adam

PG Later.

[00:04:50] - Ry

Is that true?

[00:04:51] - Adam

Yeah. Okay. Yeah, PG Later, that's another extension that lets you submit a query to Postgres and then forget about it and come back later and see the results from that query. And that's built on top of Pgmq. So it's like a stack of extensions.

[00:05:20] - Ry

Nice. I guess you're in the position now to be the extension mentor for others in the company who may be building extensions.

[00:05:29] - Adam

I know.

[00:05:29] - Ry

I just hired a co-op who's working on one, and I imagine you're mentoring him to some degree.

[00:05:37] - Adam

Yeah, Jay, you're talking about? Yeah, Jay's great. Hey, Jay, if you're listening. Yeah, Jay's been working on another extension written in Rust, Clerk FDW. So it's a foreign data wrapper around Clerk, which is like our identity provider. We just want to be able to build this data warehouse that has our users and their organizations and have that data persisted in our data warehouse. So we built, well, Jay is building, that foreign data wrapper around Clerk, which is pretty cool.

[00:06:17] - Ry

Yeah, we probably should give a shout out to Supabase for the Wrappers project that we're using to build that with. Sounds like it's been a pretty nice experience. Jay's a co-op that just joined, like, last week, and he's already being very productive. So I'm excited. I think that there's the possibility of an explosion of new extensions for Postgres now that Pgrx allows us to use Rust. Do you agree with that?

[00:06:51] - Adam

Yeah, it's pretty powerful because, take this Clerk FDW, that extension that Jay's building. We can bootstrap the software together the same way as you might do outside of Postgres, but we don't have to spin up another resource somewhere, or manage a pod or another VM. We can package it up into an extension and use Supabase Wrappers. So that's software that's already built. It's tested. We don't have to reinvent the wheel on that. We write the extension in Rust, so we pull in Rust libraries that are already built, already tested. Then we write our own tests on top of these things, package it into our extension, and then deploy it as part of the database. And then we can kind of just monitor the data there and not have to manage another service that could crash.

[00:07:54] - Ry

Yeah, it's pretty nice. So I was going to change direction a little bit. You've also been using a lot of other people's extensions. Again, probably a lot more since joining Tembo than before, but what are some of your, I don't know, favorite extensions that you've been exposed to?

[00:08:15] - Adam

Yeah, pg_cron. I use it in a few different places. There's another extension that I'm working on. It's, like, really early, but it's kind of wrapping up some of the stuff that Langchain does in Python into an extension, pulling in pgvector, pg_cron, Pgmq, and PostgresML as well. So it's like a bootstrapped extension. All of those are pretty good: PostgresML, pg_cron, pgvector. And they all have pretty good docs, so you can get up and running with them quickly and kind of figure out what you need to do.

[00:09:09] - Ry

Yeah. So have you found that you've had to study the internals of postgres much as part of the process of building these extensions?

[00:09:23] - Adam

Not really too much. Only when I'm having trouble getting it up and running for the first time do I need to really look at postgres. But a lot of these extensions don't. They're not hooking into and replacing core functionality of postgres. They're extending things. So a lot of what PostgresML does is just give you functions that go off and make rest calls or download data from elsewhere on the Internet and put it in your database for you, give you access to some pre-trained machine learning models. And it's not changing fundamentally how postgres runs. It's still like the normal postgres. It's kind of like a layer on top. Yeah.

[00:10:19] - Ry

I think some extensions can do surgery and some are, you know, laying on top of and not messing with the core functionality.

[00:10:28] - Adam

Right? Yeah, it's an area I'm kind of new with that, so I haven't gotten super deep into replacing core functionality of postgres quite yet. But soon we'll probably start working in that space a little bit more.

[00:10:47] - Ry

What's, like the testing story? If you're a dev that likes to write TDD, test driven development, are you able to do that with extensions in particular, I guess, with Pgrx. What's the test driven story there look like?

[00:11:03] - Adam

Pgrx, I think, is pretty good. They have a way for you to write unit tests or integration tests for the extension that you're writing, so you can execute some queries and then make assertions on the outputs of those queries. So you kind of have normal assertions that you would do in whatever language that you're testing. Pgrx has some tooling around, like, oh, I want to spin up postgres 15.3: spin up a new environment for that, install your extension into it, and then run your test suite on it. Yeah, I guess it can get a little bit trickier, because depending on how complex the extension is, you could have system dependencies, like, oh, I need a specific version of GCC or something, or some OpenSSL version has to be installed. I haven't really quite found a good way to test all of those things to make it super portable, to say, yep, all these tests pass, and that means my extension is just good for everybody.

[00:12:28] - Ry

Are the tests running on your local machine? I know they're running on your local machine, but in a Docker container or just natively?

[00:12:41] - Adam

You could do both, I guess. Kind of like my workflow for it. I guess I would do, like, run the test locally and then have the extension in a GitHub repo and have a CI pipeline that runs the same tests, but in Docker or within a GitHub workflow to be like, hey, before you merge this, you need to pass all these tests, kind of gate yourself a little bit.

[00:13:10] - Ry

That's great. Well, cool. So I was going to ask, as far as your background prior to Tembo, you were doing a little ML engineering too. Did you use postgres much in that role? I guess the question is, did you use any of the special parts of Postgres or was it really just like a standard transactional store with little knowledge of the extension ecosystem?

[00:13:39] - Adam

Nothing too crazy. I've always tried to do a queue on postgres, like, build up this giant queue of things that a machine learning model needs to make predictions on, or make a bunch of different predictions, dump them into a queue, and then have another process look at all these predictions and pick the best one before giving it back to the user. But a lot of OLTP-like, high-transaction workloads in the machine learning space. Yeah.

[00:14:18] - Ry

So you said you've been trying to build queues. Historically, a lot of times in bigger companies, there are fewer tools for various tasks. For example, use Redis if you want to build a queue, or use Snowflake if you want to build a data warehouse. I don't even know that these things were officially declared by any sort of powers that be. It's more like the sales reps at those companies decided that this company is going to be a Salesforce company or a Snowflake company. Right. It's kind of a slow boil that you suddenly realize, oh, I guess we have Kafka now. I guess we're a Kafka shop. It starts with a POC and then ends up you have all these tools. We call it the modern data stack. But yeah, I'm curious about your take on that, in particular around the queue, since you've just implemented it in postgres and probably had to use other tools in the past.

[00:15:16] - Adam

Yeah, a couple of places that I've worked at, it's like, hey, we need to pass messages between two services. And any engineer will be like, all right, well, we're going to evaluate what we have here. We want to try to do things like, oh, let's keep things simple, not make things too complex. And then depending on the size of the organization, it'll be like, well, we use Kafka for messaging, so use Kafka. And a lot of times it's better to not fight that and just be like, we want to make progress, we just need our application to work. It's not like Kafka wouldn't work for this, but it's definitely overkill for a lot of situations. So then it's like, okay, we're going to build this in Kafka, and maybe the rest of the application is in postgres, but this message piece of it, we're going to put in Kafka. And from the application perspective, there's kind of no difference. Like I said, it's an overkill tool for a lot of use cases. But then when things start to go wrong and you need to troubleshoot it, it's like, okay, now we have to bring in the Kafka expert.

[00:16:35] - Adam

And if it's a big company, it's probably a handful of people who are on this Kafka team and they got a lot of other stuff going on. So it's like, what do you have to do? You have to learn Kafka. Yeah.

[00:16:47] - Ry

And you're a small use case, right? You're like a small, unimportant use case. And so, yeah, you can't get their attention. You kind of accepted the fact that, okay, I can learn it. It's kind of fun to learn new technologies and try new things out, right? That's the day-zero joy of learning something new. But then all of a sudden you've got to support it, right?

[00:17:09] - Adam


[00:17:10] - Ry

You support what you make a lot of. Yeah, you got judoed into that.

[00:17:16] - Adam

In my career, it was fun to learn Kafka and there's some things I really like about it, but then at the same time, it's a very complex tool and it does take a team to run and manage it. Same with RabbitMQ. If you're going to do those things, you need some people dedicated to making sure that they're functioning the way you expect it to.

[00:17:45] - Ry

Yeah, well, I think kind of leads into one of our core missions at the company, which you're leading at Tembo, which is our stacks. I guess it probably would make sense for you to say a few words on what we're trying to accomplish with stacks at the company.

[00:18:05] - Adam

Yeah. So stacks are workload optimized postgres clusters. So the message queue stack is one that we have and our goal with that is if you need to run a message queue, we want this to be the most optimized way to do a message queue on postgres. Of course, there'll be a point in time when it's like, hey, your use case has grown so big that maybe that stack's not going to fit you, but that stack will be to the very edge of what postgres can do for that workload. We're taking that same approach with our OLAP stack that we're building right now. We have an OLTP stack, there's a machine learning stack. So each one of these stacks is do it on postgres and we're going to make it be the best possible squeeze every last piece of juice out of postgres that we possibly can for that workload.

[00:19:08] - Ry

Yeah. And you're curating extensions for each stack. What else are you doing besides that.

[00:19:13] - Adam

Yeah, extensions are a big piece. A lot of that has to do with certain types of developer experience that we think people want around workloads. And then there's the postgres configuration itself. So, like, what should shared_buffers be, how many parallel worker processes, and how should autovacuum be tuned for that workload? That's a whole class of things that are unique to every stack. Of course, there's a user interface component of every stack as well. So if you want to come in and look at and observe the stack, there'll be a user interface. Metrics are really tightly related to the UI, so there are different metrics for every stack as well. Some of them are going to be the same across stacks. But for example, the current number of messages in a queue, that's a metric that you can monitor, you can write alerts around, and it's mostly unique to the message queue workload.

[00:20:32] - Ry

Yeah. And if the message queue stack is competing against a commercial queue product, they probably have some sort of visualization of the state of each queue.

[00:20:45] - Adam


[00:20:46] - Ry

And so a postgres backed queue ought to have that same UI, that same monitoring tailored monitoring system. It makes it, I don't know how many times harder versus just configuration and curating some extensions, but I think it's all worth it to the user to be able to really defend their choice to use postgres for this use case against one of the modern data stack alternatives.

[00:21:20] - Adam

Right? Yeah. I think something really useful that I like about having stacks and having postgres aligned to specific workloads is that the complexity of an overall application can really come down a lot by running everything on postgres. You could still have separate postgres clusters for workloads, like a certain set of CPU and memory dedicated to mission critical process A and a separate one for this other mission critical process. But still, when it comes to troubleshooting these things, you're troubleshooting postgres, and it's not like, hey, I have to switch and be like, okay, now I'm troubleshooting Kafka, or jumping to Redis or jumping to Mongo or Snowflake. The context switching, I think, for the developers is big time minimized when it's the same underlying data store between all the different workloads, same technology. Yeah.

[00:22:27] - Ry

And we have this vision of potentially having some built-in stack connectivity, right? These databases are all kind of sitting back to back to back, so if you have five different stacks, they could and should be able to communicate really well with each other. And you should be able to write queries across them in the same way that Citus allows you to write queries across an array of postgres clusters very efficiently. You should be able to do the same thing here and pull data from multiple stacks with a very nice user experience, again, without having to move data around. So that's one of the exciting things for me as a former Airflow company founder. All these data pipelines are very painful between modern data stack companies. And one of the things I'm excited about is the possibility that we can give developers the option to not have to create all those pipelines.

[00:23:36] - Adam

Yeah, I'm really excited to work on that problem when we start doing that, but that'll be I think, a really big differentiator is to say I have ten different machines running postgres and I have a single pane of view across all of them. I think it's definitely doable. It'll be challenging.

[00:24:02] - Ry

Yeah, it's a dream we're chasing. Yeah. Good. Well, I think it was great chatting with you, Adam. I'm sure we'll have you on the show again. I know that, again, you've got a lot more extensions coming and appreciate the work you've done for the community so far and yeah, looking forward to seeing your future work and talking about it.

[00:24:28] - Adam

Sounds good. Thanks a lot, Ry.

· 8 min read

Language models are like the wizards of the digital world, conjuring up text that sounds eerily human. These marvels of artificial intelligence, such as GPT-3.5, are sophisticated algorithms that have been trained on vast swathes of text from the internet. They can understand context, generate coherent paragraphs, translate languages, and even assist in tasks like writing, chatbots, and more. Think of them as your trusty digital scribe, ready to assist with their textual sorcery whenever you summon them.

If you have used ChatGPT in the past, you probably suspected that the previous paragraph was generated with it. And that's true. See the prompt here.

From the example above, you can witness the eloquence LLMs are capable of. Some people have been shocked so much that they became convinced that these models were sentient. However, in the end, they are nothing but a large, complex series of matrix and vector operations. These matrices and vectors have been trained to represent the semantic meaning of words.

In today's post, we will explore these meaning vectors and how they are related to Postgres. In particular, we are going to play with sentence transformers, vectors, and similarity search. All of that with the help of the pgvector Postgres extension.

Let’s go!

From words to vectors

Like we said, a vector can represent many things, for example, the position of a character in a 3D video game, the position of a pixel in your screen, the force applied to an object, a color in the RGB space, or even the meaning of a word…

Word embedding refers to the technique by which words can be represented as vectors. These days, the embeddings offered by OpenAI are very popular. However, other alternatives exist, like word2vec, GloVe, fastText, and ELMo.

Similarly, entire sentences can be represented as vectors using OpenAI embeddings or SentenceTransformers, for example.

These models can be accessed through libraries for different languages. The following Python snippet shows how to obtain the vector embeddings of three sentences using SentenceTransformer:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')
sentences = ['SentenceTransformers is a Python framework for state-of-the-art sentence, text and image embeddings.',
             'pgvector is a Postgres extension for vector similarity search.',
             'Tembo will help you say goodbye to database sprawl, and hello to Postgres.']

sentence_embeddings = model.encode(sentences)

for sentence, embedding in zip(sentences, sentence_embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding)

The code used in this blog post can be found in this gist.

The mind-blowing part is that words and sentences with a similar meaning will have similar vectors. This characteristic is the basis of a search technique called similarity search, where we simply find the nearest embedding vectors to find texts that are similar to our query.
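To make "nearest" concrete, here is a minimal, dependency-free sketch of cosine similarity, the measure most commonly used for this comparison. The three toy vectors below are made up for illustration; real sentence embeddings, such as those from all-MiniLM-L6-v2, have 384 dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|); it ranges from -1 to 1,
    # with values near 1 meaning the vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings": two related sentences and one unrelated.
v_queue = [0.9, 0.1, 0.0]
v_message = [0.8, 0.2, 0.1]
v_geo = [0.0, 0.1, 0.9]

print(cosine_similarity(v_queue, v_message))  # high: similar direction
print(cosine_similarity(v_queue, v_geo))      # low: unrelated direction
```

Vectors pointing in roughly the same direction score close to 1, while unrelated ones score close to 0.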

Postgres meets Language Models

Models are great at generating content that seems credible, as shown earlier. However, you may have experienced cases where ChatGPT hallucinates answers or delivers out-of-date information. That's because LLMs are pre-trained using general data. And, because of that, creating a chatbot based only on the pre-trained data wouldn't be helpful for your customers, for instance.

The concept of RAG (Retrieval-Augmented Generation) acknowledges this limitation.

One way of overcoming these problems is to store your company's knowledge base in a database.... preferably in a vector database. You could then query related content and feed that content to the LLM of your preference.
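As a rough sketch of that retrieve-then-generate flow (the function name and prompt wording here are illustrative assumptions, not any specific library's API), the retrieved chunks simply become context prepended to the user's question:

```python
def build_rag_prompt(question, retrieved_chunks):
    # Ground the LLM's answer in the chunks retrieved from the database.
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is new in Postgres 16?",
    ["Postgres 16 came out last week...", "Logical replication improvements..."],
)
print(prompt)
```

The assembled prompt is then sent to the LLM of your preference, which answers from your knowledge base instead of only its pre-trained data.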

RAG with pgvector

Specialized vector databases include Milvus, Qdrant, Weaviate, and Pinecone. However, you probably want to stick to your Postgres database.

Postgres is not in itself a vector database, but extensions can come to the rescue one more time... This time with pgvector.

Let's use it and explore how we would query related content from a Postgres database.

pgvector: Postgres as a vector database

pgvector is a Postgres extension that helps you work with vectors and store them in your Postgres database. It offers functions for calculating the distance between vectors and for similarity search.

For the following demo, I converted all of Tembo’s blogs into document vectors using the following Python script that uses the langchain framework.

from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector

import os

CONNECTION_STRING = "postgresql+psycopg2://postgres:password@localhost:5432/vector_db"
COLLECTION_NAME = 'my_collection'

embeddings = HuggingFaceEmbeddings(model_name='all-MiniLM-L6-v2')
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)

files = os.listdir('./corpus')

for file in files:
    file_path = f"./corpus/{file}"
    print(f"Loading: {file_path}")
    loader = TextLoader(file_path)
    document = loader.load()
    texts = text_splitter.split_documents(document)

    db = PGVector.from_documents(
        documents=texts,
        embedding=embeddings,
        collection_name=COLLECTION_NAME,
        connection_string=CONNECTION_STRING,
    )
The script loads each document, splits it into chunks, and then inserts them into Postgres using the PGVector class. As a result, in my Postgres database called vector_db, I got two tables:

Show tables

  • langchain_pg_collection: contains information about all collections.
  • langchain_pg_embedding: contains all the resulting vectors.

The following picture shows part of the contents of the langchain_pg_embedding table:

Show vectors

The resulting vectors have 384 dimensions.

Are these sentences similar?

Let’s now play with these vectors.

Using pgvector we can search content that is similar to a query. For example, we can find content related to postgres 16.

First, we can obtain a vector that represents a query:

from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name='all-MiniLM-L6-v2')
print(embeddings.embed_query("What is new in postgres 16"))

Then we can search vectors stored in the database that are similar to the query vector. The tool for that is cosine distance, which in pgvector is represented with the <=> operator:

SELECT document, 1 - (embedding <=> '[<your_vector_here>]') AS cosine_similarity
FROM langchain_pg_embedding
ORDER BY cosine_similarity DESC
LIMIT 2;

The above query retrieves vectors/chunks of text ordered by how close they are (in terms of cosine distance) to the query vector. In my case, the most similar chunk of text was:

In case you missed it, Postgres 16 came out last week - and this year it arrived earlier than the last few years. There are many features that I’ve been looking forward to for the last few months and I’m excited to see them get into the hands of users. Before we dive into the specific features of this release, let’s discuss what a Postgres major release actually means.


Postgres Releases

The PostgreSQL Global Development Group releases a new major version every year with new features.

In addition, Postgres releases minor versions of each major release every 3 months or so with bug fixes and security fixes. No new features are released in minor versions, and that’s what makes major version releases so exciting as it’s the culmination of about a year’s worth of development work on the project.

Which is an excerpt from Postgres 16: The exciting and the unnoticed.

Let us look at what Postgres is doing behind the scenes, using explain analyze:

Limit  (cost=28.07..28.08 rows=2 width=641) (actual time=1.069..1.071 rows=2 loops=1)
  ->  Sort  (cost=28.07..28.53 rows=181 width=641) (actual time=1.067..1.068 rows=2 loops=1)
        Sort Key: ((embedding <=> '[<your_vector>]'::vector))
        Sort Method: top-N heapsort  Memory: 28kB
        ->  Seq Scan on langchain_pg_embedding  (cost=0.00..26.26 rows=181 width=641) (actual time=0.036..0.953 rows=181 loops=1)
Planning Time: 0.079 ms
Execution Time: 1.093 ms
(7 rows)

We can observe that Postgres is sequentially scanning all rows. Then it computes the cosine distance for all those rows and sorts them. Finally, it takes the first two rows.
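The "top-N heapsort" in the plan is worth a note: rather than fully sorting all 181 rows, Postgres keeps only the N best candidates in a small bounded heap. The same idea is available in Python as `heapq.nsmallest`, shown here on hypothetical (distance, document) pairs like the ones the scan produces:

```python
import heapq

# Hypothetical (cosine_distance, document) pairs, as the seq scan would emit
rows = [(0.42, "doc-a"), (0.17, "doc-b"), (0.93, "doc-c"), (0.05, "doc-d")]

# Keep only the 2 closest rows; like Postgres's top-N heapsort, this avoids
# fully sorting every row when only the top few are needed
top2 = heapq.nsmallest(2, rows)
print(top2)  # → [(0.05, 'doc-d'), (0.17, 'doc-b')]
```

This is why the sort step stays cheap (28kB of memory in the plan above) even as the table grows; the sequential scan itself remains the expensive part.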

The sequential scan could be avoided if we had an index. Indeed, we can create one thanks to pgvector. Note that an ivfflat index requires the column to have a fixed number of dimensions, so we first set the column type to match our model's 384-dimensional output:

alter table langchain_pg_embedding alter column embedding type vector(384);

CREATE INDEX ON langchain_pg_embedding USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);

Running explain analyze on the same query again shows that Postgres now uses the index:

Limit  (cost=5.01..5.11 rows=2 width=641) (actual time=0.175..0.179 rows=1 loops=1)
  ->  Index Scan using langchain_pg_embedding_embedding_idx2 on langchain_pg_embedding  (cost=5.01..13.49 rows=181 width=641) (actual time=0.172..0.175 rows=1 loops=1)
        Order By: (embedding <=> '[<your_vector>]'::vector)
Planning Time: 0.154 ms
Execution Time: 0.224 ms
(5 rows)

One thing to keep in mind is that these indexes are used for approximate nearest neighbor search. We’ll explore what that means in a future blog post. Let us know if that would be interesting for you.
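As a quick teaser of why these indexes are approximate, here is a toy pure-Python sketch of the IVF idea (not pgvector's actual implementation): vectors are grouped into lists around centroids, and a query scans only the list whose centroid is closest, so a true nearest neighbor stored in another list can be missed.

```python
import math

def dist(a, b):
    # Plain Euclidean distance between two vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy "index": vectors pre-grouped into lists keyed by hand-picked centroids.
# pgvector's ivfflat chooses its centroids with k-means over the data instead.
index = {
    (0.0, 0.0): [(0.1, 0.2), (0.3, 0.1)],
    (10.0, 10.0): [(9.8, 10.1), (10.2, 9.9)],
}

def ivf_search(query):
    # Probe only the single nearest list; scanning fewer lists is faster
    # but can miss the exact nearest neighbor, hence "approximate".
    centroid = min(index, key=lambda c: dist(c, query))
    return min(index[centroid], key=lambda v: dist(v, query))

print(ivf_search((0.2, 0.2)))  # → (0.1, 0.2)
```

The `lists` parameter in the CREATE INDEX statement above controls how many such groups are created; more lists mean each probe scans fewer vectors.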


OK, at this point you should have a sense of what pgvector is and how to use it together with Python. However, wouldn't it be great if the vectorizing step could happen entirely within Postgres?

Pg_vectorize is an extension being developed by Tembo that intends to streamline the process of generating vectors from the data in your Postgres tables. It uses a background worker to generate and update the embeddings in batches every N seconds. Also, if you need to find similar vectors, the extension can do that. All within Postgres. Isn't that a cool idea?

I invite you to check out the repository and stay tuned.

To wrap up...

In this post, we briefly discussed the concept of embeddings, why they are important, and how they can be generated using one of the multiple available libraries. We also explored how to store and query the resulting vectors using Postgres and the pgvector extension.

These concepts are relevant to leveraging a knowledge base in conjunction with LLMs in an emerging technique called RAG. Of course, when implementing a real-life solution, more factors need to be considered, and this post was just an introduction.

I invite everyone to try out pgvector (e.g. using the scripts in this post), and the different operations that it offers. Also, can you think of other uses of pgvector? Let us know your thoughts on X at @tembo_io.


The first paragraph in this blog post was generated using ChatGPT. Here’s the prompt

· 26 min read

Episode Notes

In this episode, Ry and Marco talk about the early days of Citus and its development, creating pg_cron (on a plane!), and the new possibilities on the horizon for extensions in the Postgres landscape. If you haven’t seen or listened to it yet, you can play the video below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Marco for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.


[00:00:12] - Ry

Welcome to Hacking Postgres. Today we have Marco Slot joining us. I'm meeting Marco for the first time today. I'm super excited to be meeting you, Marco. Welcome to the show.

[00:00:25] - Marco

Hey Ry. Yeah, very nice to meet you. Yeah. So I'm Marco, I work at Microsoft and I've been working on Postgres extensions for about nine years now, starting with Citus, pgCron, and a bunch of other ones. Yeah.

[00:00:44] - Ry

That's awesome.

[00:00:45] - Ry

So this is not really supposed to be an interview. It's more just us geeking out about Postgres, extensions, what they mean to users and opinions about anything and everything related to our work is fair game. Yeah, sounds good. So what'd you do before you got into Postgres stuff, out of curiosity?

[00:01:09] - Marco

Yeah. My background is more in distributed systems, so I spent a few years at AWS. It was quite early days in AWS when I think I joined. There were three Web Services. S3, SQS and EC2.

[00:01:25] - Marco

And then we built number four, which was CloudFront. People had started using S3 for websites, which it wasn't really built for, but then we were asked to solve this and we built the CDN, and then we also built Route 53 for the DNS.

[00:01:41] - Marco

And then yeah, I did a PhD in sort of self driving cars, but specifically like, cooperation between self driving cars, like, what if they can communicate with each other and sort of coordinate? So it's sort of advanced distributed systems in a way as well. Yeah. So that led to me joining Citus, which is sort of founded by former Amazon folks.

[00:02:05] - Ry

Got it. So building Citus was probably easy compared to all other stuff, actually.

[00:02:12] - Marco

Yeah. If you do cooperation between self driving cars, which is this kind of life-critical distributed system where your nodes just move around and move away from each other, it does make things look relatively easy. But then, yeah, building databases is never actually very easy.

[00:02:31] - Ry

No, I know. How good do you want to make it right? Is how hard it becomes.

[00:02:38] - Marco

Yeah. I guess the challenge is always everything sort of in the word relational. Right. Like everything relates to everything else. There's not like a feature you can implement without considering all the other aspects of the database and all the things that can happen concurrently with the operation that you're working on and all the possible ways in which things can fail. Especially if you're in a kind of distributed setup.

[00:03:01] - Ry

Yeah, I think that's a really interesting perspective. You think of all of the configuration options, too. Adding a new configuration option adds complexity, adds flexibility, adds potential for problems, and it's tough.

[00:03:20] - Marco

Yeah, definitely. I think one thing we learned is there's a sort of certain danger in creating modes where every setting you add, which can be on/off now you have two modes in your system. You add another setting that can be on/off. Now you have four modes in your system. It's kind of this exponential explosion of possible ways in which your database might be running. And so that's hard learned lesson that's, like, don't introduce major modes. Sometimes you have to because of backwards compatibility reasons, maybe, but you want to get rid of them over time.

[00:03:57] - Ry

Were you one of the first team members on Citus? Actually, I don't really know the full history. Did it start as a fork and made its way as an extension, or was it always an extension?

[00:04:10] - Marco

Yeah, I was one of the first, though I think Samay was already there. I think he's joined Tembo. But yeah, we started out as a fork and then we called it CitusDB, and then we made it an extension and called it Citus. Though I think the name CitusDB kind of stuck around for...I still hear it sometimes. But yeah, it was pretty early days for extensions, like 2015. PostGIS obviously existed; it was one of the first major ones. But yeah, we were lucky at the time. We had Andres Freund working for us. Well, he's also now working for Microsoft, but he's one of the top Postgres committers, and we put him in a room for a while, like, can you turn this into an extension, please? And he came up with some terrible, terrible hacks. But over time, those also drove some of the extension APIs in Postgres itself, where things that used to require really terrible hacks are now kind of more structured and you can introduce your own sort of data structures into Postgres without doing lots of weird stuff.

[00:05:22] - Ry

Well, I was thinking probably not everybody knows what Citus is. Maybe if you could give a quick I'd love to hear a quick overview of what it is and then maybe a couple of things that maybe you can think of that are unusual, that people might not know about it, that are part of it. I don't know if that's a stupid question, but yeah, if you could try no.

[00:05:41] - Marco

Yeah, so it comes from basically this notion that Postgres is ultimately limited to a single machine. I mean, you can have read replicas, but you definitely cannot scale the writes. Usually if you have read replicas, they'll kind of end up having the same stuff in memory, so you cannot really scale the memory either. And so Citus is a solution that basically adds sharding to Postgres in a very transparent way, where you can have tables that are sort of transparently sharded across many nodes. You still connect to a Postgres server; there's just a table that looks, walks, and talks like a table. But if you insert into it, it actually gets routed to a particular node. And if you insert another value, it might get routed to another node, because it uses this hash distribution. And that way you can scale across infinitely many machines, and some users have petabytes of data in their Citus cluster. Part of the advantage is also that queries can get parallelized on these tables, because we can use all the nodes at the same time and multiple cores per node. But more and more it's also actually being used for OLTP workloads.

[00:06:55] - Marco

And particularly the most popular use case is actually multitenancy, where you have this kind of B2B app where you have lots of relatively independent users and you kind of want to just transparently spread those across different machines without having to actually manage that yourself and figure out how to move data around. Because that was all very kind of automated. You can just ask it to rebalance your data and it does that using logical replication. And so then you have the advantage of just as much memory as you need to make your application fast and as much disk IOPS as you need by just adding more machines with more disks and so that helps you scale. And then we have a sort of managed service in Azure around that. But yeah, we use every hook that's available in Postgres to achieve things like starting with the planner hook. That's one of the interesting things about Postgres. It's like you just replace the whole planner with something else. You get a parsed query tree and then you can do whatever you want with it. You can just do some like okay, look at this query and log something interesting and then just go to the regular planner.

[00:08:12] - Marco

Or you could just do something completely different. And so we use all these hooks to kind of create this facade around Postgres tables where the Postgres table is still there, you just cannot touch it. We just intercept everything that's going on. The most recent and most terrible hack we did was around logical decoding. So we have shards: the writes go into shards, but then if you do logical decoding, you see writes on shards; you don't see writes on these, what we call distributed tables. But the user doesn't even know about shards, probably. So how can we fix that? And there wasn't really any kind of hook we could use to change the way that the logical decoding process works, except that we realized that the client specifies a decoder name, usually just hard coded, like pgoutput or wal2json or test_decoding, and that actually refers to the name of a .so, like a shared library. And we realized we could create a .so with the same name and put it in a different directory and then change the dynamic library path to be prefixed with that directory, so it would load our library instead.

[00:09:34] - Marco

This is kind of the worst thing we did. But it works nicely, actually. I mean, that new .so file calls the original .so file, and it kind of makes some changes to hide the sharding from the decoding. But yeah, it is a hack. It's very much a hack. But it's nice that Postgres ultimately lets you do these kinds of things.

[00:09:59] - Ry

I think, like Citus says, there's like a requirement that Citus loads first as an extension. Is that true? I'm trying to remember if I'm making that up.

[00:10:08] - Marco

It is true. I mean, maybe it's a bit of laziness on our part, but it's also.

[00:10:16] - Marco

We generate very weird plans that wouldn't make any sense to other extensions. So if you have multiple layers of planner hooks, you kind of want Citus to run last. And by putting it first, it actually becomes the last, because everyone else overwrites Citus. And then hopefully, when you go through the chain of planner hooks, Citus is the last one remaining, and it can produce some quirky distributed query plan that doesn't make sense to anyone else and pick that up.

[00:10:48] - Ry

Are there known extensions that aren't compatible with Citus that you're aware of or as far as you know? It kind of works with most everything.

[00:10:56] - Marco

Yeah, I mean, Citus is a little bit of a paradigm shift, because now you have many servers, and some extensions just don't make sense if you're on many servers, because they keep their state on one of them and then they're not aware of the others. But most things just work. I kind of feel like there's this notion of a deep extension, if you will, like TimescaleDB or Citus: they go really deep into planner hooks and change behavioral characteristics. Whereas something like PostGIS, and a lot of extensions that just introduce new types and functions, they're sitting on top of this much cleaner interface, and usually the interoperability of those things is pretty good. But Timescale has this notion of a hypertable and Citus has a distributed table, and you cannot really make a distributed hypertable. But yeah, compatibility sometimes also comes down to the individual features. It's not like if you install one, then the other doesn't work anymore. I mean, that happens for some extensions, but for Citus, most other things just kind of work with it. Yeah.

[00:12:08] - Ry

So let's talk about pg_cron. How early were you involved in that project?

[00:12:13] - Marco

Well, I created it. I used to have this thing where I am much more productive on flights, and I flew to San Francisco a lot for a while, and I had a few of these projects that I was just working on on flights, and pg_cron was one of them. At the time, we had customers who did kind of these real-time analytics scenarios on Postgres, on Citus. And those involve a lot of materialization. So a bunch of raw data keeps coming in, like time series data, and at some point you want to take all the new data and kind of pre-aggregate it inside of the database, and that needs to happen periodically. I think the background worker concept was pretty new at the time. I don't know, this was many years ago. And so I realized you could do something like cron. It maybe also comes from the Amazon background, because Amazon, at least at the time, was all glued together with cron, Perl, and rsync. There were cron jobs that would rsync metadata files to other servers, and it was all pretty hacky. But anyway, I figured it would be pretty useful to have this kind of thing in Postgres, also for just vacuuming or calling certain stored procedures that, I don't know, delete your old data.

[00:13:42] - Marco

Like those kinds of maintenance tasks. It was just much easier if you run a command once and then it works forever, rather than needing to maintain this separate infrastructure that periodically connects to the database and does that. So that became quite popular. I think pretty much all major managed services have pg_cron now. Yeah, it was definitely my side project, so I also built it as a very low maintenance thing that I wouldn't have to constantly fix bugs in. I was also a little bit selective about adding new features. For a long time I resisted it; people want, like, can I run a job every few seconds? And normally cron is at the minute granularity. But recently I caved and added running it every few seconds, because I realized it's the kind of feature where, if there's some kind of issue, some kind of memory leak or whatever, if you run it every few seconds it's going to be way worse than if you run it once a minute. So you need to be more careful in that case. But you can now also do, like, every second, for example, which is kind of useful.

[00:14:56] - Ry

That's awesome. Yeah, I was looking at we're just building a new data warehouse, trying to use Postgres to do that. I'm exploring all the columnar extension as well, which sort of is part of Citus. I don't know if you did any work on that, but yeah, looking at pg_cron versus pg_timetable, have you had the question of what about have you looked at timetable and its features and yeah, I'm kind of curious to get your assessment of the two.

[00:15:28] - Marco

Yeah, I've looked at it. I mean, for a long time it wasn't so much an extension; I think nowadays it is more or less an autonomous extension. Yeah, I mean, it's somewhat more complex in a way. I don't feel competitive, like you should use pg_cron or use pg_timetable. I think pg_cron is just intentionally simple, just this very simple thing that just works and does what you'd expect it to. Almost, unless you're not in GMT; then it sort of doesn't quite do what you'd expect it to anyway. So yeah, I think pg_cron is just simpler. But I guess it comes down to: do you need the extra features that pg_timetable adds? I think for most people, pg_cron is good enough. I've also seen people do their own kind of job queues on top of pg_cron, where you just have this thing that runs every few seconds or every minute, and then looks: are there some jobs to do? And it actually executes the stuff that's in the job state.

[00:16:39] - Marco

So it's kind of composable as well. You can build your own things on top if you want. Yeah.

[00:16:45] - Ry

And especially if you can trigger it every second to check for work, that's just fine. Yeah, my previous company was doing Apache Airflow, and so it's kind of getting back to some of that stuff. I mean, obviously that's a very complicated system with a lot of capabilities: DAG-like chaining of tasks and fanning out and all that kind of stuff. But yeah, I think it's interesting to try to do some of that stuff inside of Postgres, without requiring... we're trying to go with the mantra of just use Postgres for everything.

[00:17:29] - Marco

It's not a bad mantra. You've got, I guess, different levels of commitment to that idea of how much of our backend can we shove into Postgres? And there's these GraphQL extensions where you pretty much put your entire backend in Postgres. Yeah.

[00:17:51] - Ry

Your API, is there?

[00:17:52] - Marco

Yeah, I guess Supabase is going pretty long way in that direction, but at some point you want to debug stuff or you want to have certain rollout procedures. And sometimes Postgres is not as mature a tool as just shipping Python files somewhere. There's just better CI tools for most normal programming languages than for Postgres. So you have to find the right balance, I think.

[00:18:24] - Ry

Yeah. Maybe this shouldn't have happened, but apparently yesterday we were working on a new FDW and it hit an exception that wasn't handled, and it appeared to shut down the Postgres server. This is kind of fun. Should that have happened? Should that be possible? Should an extension error take it down? We're still investigating that. But yeah, I think there's always risk with trying to do a lot inside, but with a mature extension that's well tested and battle tested, it should be fine. I mean, if you think about everything that's happening, there's no single responsibility principle in Postgres, right? It's not just doing one thing; it's got vacuum, there are lots of processes running. And so then the question is, is it a sin to add one more thing? Especially if it has the capability to have a background worker. It's just staring you in the face, like, use me.

[00:19:29] - Marco

The extension APIs are slowly evolving from just a complete hacky approach for people to try out their Postgres patches before merging them into core. I guess extensibility was there from the start, but that was more for custom types and custom functions than for planner hooks and background workers and those kinds of things. Those are a little bit more, what if we add this function pointer to see, maybe extensions can do something interesting with it. And it's not extremely well structured. But the whole Rust thing, like pgrx and building extensions in Rust, does create an opportunity to have a little bit more of a well defined method of developing extensions.

[00:20:25] - Ry

Yeah, I think what's interesting to me about Postgres is the history of forks, and it sucks to be on a true fork, as I'm sure you're aware. Creating the extension framework helps people out of the land of forks, but it does create the possibility for even more risky extensions in terms of all that. So it's a double edged sword. I think it's great, though; to me, that's the reason why Postgres is doing so well now: all that freedom that the core team has given the community in terms of how to use it. It's unprecedented, almost dangerous, but I also think empowering to developers who want to have some fun with their database.

[00:21:21] - Marco

Yeah, definitely. What we also often find is this difference between an extension versus an external tool. We're sort of in the business of running a Postgres managed service, but even if you just have your own Postgres servers, an extension has a fairly straightforward deployment model, right? You don't have to create a whole separate cluster with its own configuration that connects to your Postgres database and does things with it. You just shove it into your existing Postgres server, and it'll restart the background workers, and you don't have to worry about all those things. It actually has a lot of merit to develop software as a Postgres extension. I mean, it comes with its own protocol. I think the most interesting thing is just that you get all this synergy between all these different extensions, where the whole is greater than the sum of the parts. Things like PostGIS and Citus kind of layer: you can distribute geospatial joins, whatever. But then also maybe you have some interesting function and you can run it periodically using pg_cron.

[00:22:39] - Marco

And all these things kind of give you this little platform where, just by running SQL queries, you can do increasingly interesting things. Maybe you have an FDW that does an HTTP call and you run it using pg_cron, for example. And now your Postgres server is not just Postgres cron, it's actually anything cron. Like you can reach out to other systems. So that pluggability.

[00:23:10] - Ry

I literally have a co-op that started last week and his first task was, I said create an FDW for this API and use pg_cron and ingest that data. And he's like, well, how do I do it? Just brand new to data warehousing in general, but he's got the FDW built and now he's working on if he has any problems, I'll send him your way. Kidding. I won't. Kidding, kidding. I won't do that.

[00:23:41] - Marco

If there's bugs open issues like I'd like to know, of course.

[00:23:44] - Ry

But there's no bugs. No. So do you have any long flights coming up where you have some new extensions coming?

[00:23:51] - Marco

No, we had a baby last year. It's like my flight budget is very low at the moment. That's true.

[00:23:58] - Ry

Your latest extension is human.

[00:24:01] - Marco

Yeah, but I'd like to play more with Rust, because the problem with extensions in the past, like developing them in C, has always been that C doesn't have any good dependency framework. There are these old established libraries, like, I don't know, libz and libxml, something that was developed 20 years ago, and now, okay, every system has it, you can use it. But for anything newer than ten years, it's extremely annoying and hard to use a C library. And then even if you can figure out your configure and your make files, the memory allocation is probably not going to play nicely with Postgres. So that's where Rust is kind of interesting. Now there's this whole ecosystem of open source libraries that can potentially become extensions, and we're still at the very start of that. What's going to happen?

[00:25:05] - Ry

It's kind of scary because it could be very much of a Cambrian explosion. What happens if there are 300 new extensions that are worth adding? It's sort of a pain for the managed service providers.

[00:25:19] - Marco

Yeah, definitely. Yeah. And they can have funny incompatibilities. It helps that then Rust is a little bit more safe and a little bit more not managed, but the memory allocation is a bit more sort of safe as well. But then, yeah, you can have a lot of weird bugs if you combine all these unknown bits of code.

[00:25:46] - Ry

That's great. Are there any extensions that you've run across either recently or in the past that you love, that you would mention as exciting? I mean, you mentioned PostGIS and anything else that you can think of. It's no big deal if not.

[00:26:04] - Marco

Yeah, there are many. I'm sort of intrigued by PostgresML. It's this machine learning extension, and they seem to be doing a lot of interesting things. I don't have a good use case for it myself yet, but I really like what they're doing. It's also in Rust and, I think, MIT licensed, so it can be used in a lot of places. There's of course a lot of buzz around pgvector and, more recently, pg_embedding, which is sort of an interesting development, because now I think they both added these HNSW indexes, which are faster than the IVFFlat indexes that pgvector provided. But it also means they are now incompatible with the latest versions; you can create one or the other, not both, which is a little awkward. But that is, of course, one of the fastest moving areas, and I think some of these things will be figured out.

[00:27:06] - Ry

Have you gotten those extensions added to the Citus managed service?

[00:27:11] - Marco

Yeah, we definitely have pg_vector. Yeah, I mean, I think that took the landscape by storm. I think they're pretty much on every managed service now.

[00:27:19] - Ry

It's interesting, we're working on this thing called Trunk where we're trying to maybe I don't know if we can get you guys to participate, but the idea would be to send some metadata to some sort of central repository that we'd know, like which extensions are trending, waning, completely dead or not. I just think it's interesting as a new person coming into the Postgres ecosystem, there's just this library of potential extensions that it's pretty expansive and kind of trails off in a know because you can find them loosely here or there on GitHub. But I think having a better directory of them would be good for the community.

[00:28:04] - Marco

Yeah, I guess. One thing that's also tricky about extensions: in an ideal world, you write it once and then it works forever. In the real world, every Postgres version potentially breaks your extension, so someone has to actually go and fix it for new Postgres versions. Sometimes it's okay. With pg_cron, I think there was one Postgres version which didn't require any fixes, maybe PG 15, and I was already happy. I think Postgres 16 did require some fixes. But usually C is a bit of this wild west of programming languages. Sometimes just some function header changes, there's an extra argument, and then none of the extensions that use that function compile. So you have to have some level of maintenance behind each extension. Well, not every extension does.

[00:29:03] - Ry

I mean, it was great to chat with you. Happy to have you back on again. If you have a big new release of Citus, I don't know, do you guys have any big plans for it or has it reached a certain point of stability where it is what it is? Yeah, I'm kind of curious.

[00:29:19] - Marco

Well, our most recent major release added this kind of notion of Schema based Sharding, where so far it's always been like you have distributed tables and you have a distribution column and you need to pick which columns you use. But the Schema based Sharding is just like every Schema becomes its own group, own Shard, essentially, so it might be on a different node. So for apps that use Schema per tenant, that's a very nice model. And so we're investing more in that at the moment.

[00:29:49] - Ry

Well, great. Good to meet you. Looking forward to continuing to get to know you. And thanks for joining us on the show.

[00:29:56] - Marco

Yeah, it was great. Thanks for having me.

· 3 min read


At Tembo, we want to empower developers to build fast. That’s why we built the Terraform Provider for Tembo Cloud, allowing developers to manage Postgres resources using Infrastructure-as-Code (IaC). In this blog, we'll explore the Terraform Provider for Tembo and how it can help you streamline database management while keeping teams agile, accountable and enforcing organizational policies.

Simpler, faster, safer, and easier? That’s Tembo - but let’s back up and start at the beginning.

What is Tembo Cloud?

Tembo Cloud is a developer-first, fully-extensible, fully-managed, secure, and scalable Postgres service. It not only simplifies Postgres management but also provides specialized Postgres deployments through Tembo Stacks, catering to various use cases.

Database as Code using Terraform

Databases as code provides a safe, consistent, and repeatable way of managing databases and is a game-changer for developers. This approach allows you to version-control any changes, ensuring auditability, and facilitates efficient workflows such as Pull Request flows and GitOps.

Terraform is the most popular Infrastructure as Code tool today. It provides a way to define the desired state of your infrastructure using HashiCorp Configuration Language (HCL). Terraform handles all the complexity of making sure that when changes are applied, the actual state matches the desired state. Therefore, it is an efficient way of managing databases as code.

Using Terraform Provider for Tembo

With the Terraform Provider for Tembo, you can manage Postgres databases on Tembo Cloud with all the benefits mentioned in the previous section. You can find the Terraform Provider for Tembo on the Terraform Registry along with the documentation, and the Github repo is here. A new Postgres instance can be provisioned on Tembo Cloud in about 1 minute and destroying an instance takes just 10 seconds.

Example Terraform file

You can create a new Tembo instance on Tembo Cloud using the tembo_instance resource available with the provider. Below is an example configuration.

Note: Tembo Terraform Provider needs an access_token to authenticate with the API. Generate a long-lived API token by following steps here.

terraform {
  required_providers {
    tembo = {
      source  = "tembo-io/tembo"
      version = ">= 0.1.0"
    }
  }
}

provider "tembo" {
  access_token = var.access_token
}

variable "access_token" {
  type = string
}

resource "tembo_instance" "test_instance" {
  instance_name = "test-instance"
  org_id        = "org_test" # Replace this with your Tembo organization id
  cpu           = "1"
  stack_type    = "Standard"
  environment   = "dev"
  memory        = "4Gi"
  storage       = "10Gi"
  replicas      = 1
  extensions = [{
    name        = "plperl"
    description = "PL/Perl procedural language"
    locations = [{
      database = "app"
      schema   = "public"
      version  = "1.0"
      enabled  = false
      }, {
      database = "postgres"
      schema   = "public"
      version  = "1.0"
      enabled  = true
    }]
  }]
}

output "instance" {
  value = tembo_instance.test_instance
}
Applying the above Terraform will provision a Postgres instance on Tembo Cloud. For a demo explaining the steps, watch the video below.


The demo in the video explains how to use the Terraform Provider for Tembo. The Terraform code used in the video can be found here.

What’s next for the Provider

We are actively working on an Import feature that will let you bring existing Tembo instances under Terraform management. Stay updated by starring our GitHub repository and watching for changes on the Terraform Registry. Most importantly, please try it out, and file any issues and feature requests in our repository. You can also access the documentation for the Tembo provider here.

· 5 min read
Jay Kothari

The maxim of building a product is to know your user. However, any company, big or small, will often have user data spread across multiple systems. A data platform team will typically deploy pipelines that sync data from those sources into a data warehouse. As an alternative, Postgres supports the concept of a Foreign Data Wrapper. Let’s dive into what this is and how it can help us.

In this blog, we'll look at clerk_fdw—a tool that bridges the gap between Clerk, a leading user management solution, and your very own Postgres Database. By the end, you'll discover how this integration can empower you to make data-driven decisions, optimize your pricing strategy, and refine your market approach. Let's get started!

What’s a Foreign Data Wrapper?

A foreign data wrapper is an extension available in PostgreSQL that allows you to bring ‘foreign data’ (i.e. data in a different Postgres DB, a different database like DB2, or even a different kind of data source, like an API) and query it the same way you would query a normal Postgres table. Foreign data wrappers are particularly useful when your data is segregated into different databases but is still related in ways that let you gather useful information from it. To build our foreign data wrapper for Clerk, we used Supabase Wrappers, which make it easier to build Foreign Data Wrappers and interact with third-party data using SQL.
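The core idea is an adapter that exposes an external source as rows a query engine can scan like a table. The following is a minimal, language-agnostic sketch of that idea in Python; it is not the Supabase Wrappers API, and the stubbed records stand in for a real HTTP response.

```python
# Minimal sketch of the foreign-data-wrapper idea: an adapter exposes an
# external source (here, a stubbed API response) as iterable rows that a
# query engine could scan like a table. Not the actual Wrappers API.

API_RESPONSE = [  # stand-in for a paginated HTTP response from a service
    {"user_id": "u1", "email": "a@example.com"},
    {"user_id": "u2", "email": "b@example.com"},
]

class ForeignTable:
    def __init__(self, fetch, columns):
        self.fetch = fetch      # callable returning raw records
        self.columns = columns  # column order exposed to the "planner"

    def scan(self):
        # Yield tuples in column order, like a sequential scan.
        for record in self.fetch():
            yield tuple(record.get(col) for col in self.columns)

users = ForeignTable(lambda: API_RESPONSE, ["user_id", "email"])
print(list(users.scan()))
```

A real FDW adds a handler and validator on the Postgres side (as shown in the setup below), but the shape is the same: fetch remote records, project them into the declared columns.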

If you take one thing away from this blog, it's that Postgres’ Foreign Data Wrappers are a great tool for building an analytics platform based on Postgres. See examples of other FDWs in Trunk.

What’s Clerk?

Clerk is a user management tool. With Clerk, users experience a seamless sign-up and sign-in flow, whether they prefer using email, SMS, or even their favorite social media accounts. Its versatility and developer-friendly APIs make it an excellent choice for us at Tembo for both efficiency and a superior user experience.

The Power of Integration

Being able to access data from a User Management Tool like Clerk as part of your data platform is especially useful because it enables you to have a 360-degree view of the user experience on your product, without having to set up any complex data export pipelines from Clerk into other systems.

In fact, we built clerk_fdw at Tembo to address needs in our internal analytics pipeline. Here are some of the ways we are using it:

  • Run advanced analytics that combine internal data with user data from Clerk.
  • Understand user interaction patterns with our product.
  • Identify and engage with top users.


Setting up clerk_fdw

The first step would be installing the clerk_fdw extension. You can install this extension using trunk.

trunk install clerk_fdw

The next step is to enable the extension in your Postgres instance. You can do so using the following command:

create extension if not exists clerk_fdw;

Create the foreign data wrapper for clerk

create foreign data wrapper clerk_wrapper
handler clerk_fdw_handler
validator clerk_fdw_validator;

Connect to Clerk using your credentials

create server my_clerk_server
foreign data wrapper clerk_wrapper
options (
api_key '<clerk secret Key>');

Create Foreign Table:

User table

This table will store information about the users.

Note: The current limit is 500 users. We are working to raise this limit in future releases.

create foreign table clerk_users (
  user_id text,
  first_name text,
  last_name text,
  email text,
  gender text,
  created_at bigint,
  updated_at bigint,
  last_sign_in_at bigint,
  phone_numbers bigint,
  username text
)
server my_clerk_server
options (
  object 'users'
);

Organization Table

This table will store information about the organizations.

Note: The current limit is 500 organizations. We are working to raise this limit in future releases.

create foreign table clerk_organizations (
  organization_id text,
  name text,
  slug text,
  created_at bigint,
  updated_at bigint,
  created_by text
)
server my_clerk_server
options (
  object 'organizations'
);

Junction Table

This table connects clerk_users and clerk_organizations. It lists all users and their roles in each organization.

create foreign table clerk_organization_memberships (
  user_id text,
  organization_id text,
  role text
)
server my_clerk_server
options (
  object 'organization_memberships'
);

Dive into the Data

Now you can query your database for useful information like:

  • How many organizations have been created each week in the past
  • How many users have signed up in the past 30 days
  • What organizations is a user part of
  • All users and their roles in an organization
  • And more….

Here are some of the charts we were able to make from the clerk foreign data wrapper using some synthetic data.

chart1 chart2


In conclusion, we believe that Postgres’ concept of Foreign Data Wrappers is more than just a technical integration—it's a game-changer that allows Postgres users to build data warehouse platforms that reach across all data sources in the business. It paves the way for businesses to harness critical insights directly from their operational databases, making informed decisions easier than ever before. See examples of other FDWs in Trunk.

Give us a star and try out clerk_fdw by running the example in the README. If you hit any snags, please create an issue. We would greatly welcome contributions to the project as well.

· 18 min read
Steven Miller


A nice feature of AWS S3 is version history and lifecycle policies. When objects are updated or deleted, the old object version remains in the bucket, but it’s hidden. Old versions are deleted eventually by the lifecycle policy.

I would like something like that for my Postgres table data. We can use the temporal_tables extension for version history, and combine it with pg_partman to partition by time, automatically expiring old versions.

Data model

Let's say we have a table employees, and it looks like this:

       name       |  salary
------------------+----------
 Bernard Marx     | 10000.00
 Lenina Crowne    |  7000.00
 Helmholtz Watson | 18500.00

We will add one more column to this table, sys_period, which is a time range. This time range represents "since when" is this row the current version. This range is unbounded on the right side, because all the rows in the employees table are the present version.

       name       |  salary  |             sys_period
------------------+----------+------------------------------------
 Helmholtz Watson | 18500.00 | ["2023-09-28 13:30:19.24318+00",)
 Bernard Marx     | 11600.00 | ["2023-09-28 13:33:58.735932+00",)
 Lenina Crowne    | 11601.00 | ["2023-09-28 13:33:58.738827+00",)

We will make a new table employees_history to store previous versions. This will have the same columns as the employees table, but all the sys_period ranges are bounded on both the left and the right sides. These ranges represent when this row was the current version. We will configure temporal_tables to automatically create these rows when anything changes in the employees table.

      name      |  salary  |                            sys_period
----------------+----------+-------------------------------------------------------------------
 Bernard Marx   | 10000.00 | ["2023-09-28 13:30:19.18544+00","2023-09-28 13:33:58.683279+00")
 Bernard Marx   | 11200.00 | ["2023-09-28 13:33:58.683279+00","2023-09-28 13:33:58.731332+00")
 Bernard Marx   | 11400.00 | ["2023-09-28 13:33:58.731332+00","2023-09-28 13:33:58.735932+00")
 Lenina Crowne  |  7000.00 | ["2023-09-28 13:30:19.239152+00","2023-09-28 13:33:58.738827+00")

To automatically delete old versions, we'll add one more column to the employees_history table, created_at. We will use this information to expire old versions once they are older than our retention configuration, with the help of pg_partman.
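The bookkeeping described above can be modeled outside the database too. The sketch below is a simplified Python model of what the versioning trigger does on each update: close the current row's range at "now", move it to history, and open a new unbounded range. It is an illustration of the semantics, not the extension's implementation.

```python
# Sketch of the bookkeeping temporal_tables' trigger performs on UPDATE:
# close the current version's range and append it to history, then open
# a new range that is unbounded on the right. Simplified model.

from datetime import datetime, timedelta

current = {}   # name -> (salary, valid_from); right bound is open
history = []   # (name, salary, valid_from, valid_to) bounded ranges

def set_salary(name, salary, now):
    if name in current:
        old_salary, valid_from = current[name]
        history.append((name, old_salary, valid_from, now))  # bounded range
    current[name] = (salary, now)  # new current version, open on the right

t0 = datetime(2023, 9, 28, 13, 30)
set_salary("Bernard Marx", 10000, t0)
set_salary("Bernard Marx", 11200, t0 + timedelta(minutes=3))
print(current["Bernard Marx"][0])  # 11200 (current version)
print(history[0][1])               # 10000 (previous version, bounded range)
```

Note how the right bound of each history row equals the left bound of the next version, so the ranges tile time with no gaps, exactly like the sys_period values shown above.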

Getting set up

This guide covers how to quickly try out Postgres extensions locally. I've followed that guide to set up my environment with temporal_tables and pg_partman.

I have a Dockerfile, two SQL scripts, and a file with Postgres configurations.

├── Dockerfile
├── 0_startup.sql
├── 1_create_versioned_table.sql
└── custom.conf

Dockerfile: We use Trunk to install pg_partman and temporal_tables. Then, we copy the three other files into the image.


RUN trunk install pg_partman
RUN trunk install temporal_tables

COPY 0_startup.sql $PGDATA/startup-scripts

COPY 1_create_versioned_table.sql $PGDATA/startup-scripts

COPY custom.conf $PGDATA/extra-configs

0_startup.sql: Enables temporal_tables and pg_partman when Postgres starts.


1_create_versioned_table.sql: Creates a sample table, then enables version history on it.

-- Sample: an existing table we want to enable versioning on
CREATE TABLE employees (
    name text NOT NULL PRIMARY KEY,
    department text,
    salary numeric(20, 2)
);

-- Adding version history to the table:
-- first we need to add a time range to the existing table.
-- This represents "since when" has this row been current.
ALTER TABLE employees ADD COLUMN sys_period tstzrange NOT NULL;

-- Creating a time-partitioned version table:
-- each row has the range the data was valid for,
-- and also the time this version was created.
CREATE TABLE employees_history (
    name text NOT NULL,
    department text,
    salary numeric(20, 2),
    sys_period tstzrange NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now())
PARTITION BY RANGE (created_at);

-- Allow efficient querying of the partition key
CREATE INDEX ON employees_history (created_at);

-- Enable automatic partitioning with pg_partman, partitioning every 1 minute.
-- It's more realistic to partition daily or greater.
SELECT create_parent('public.employees_history', 'created_at', 'native', '1 minute');

-- This connects employees table to employees_history
CREATE TRIGGER versioning_trigger
BEFORE INSERT OR UPDATE OR DELETE ON employees
FOR EACH ROW EXECUTE PROCEDURE versioning('sys_period', 'employees_history', true);

-- Configure retention policy for employee history to keep old versions for 10 minutes.
-- It's more realistic to configure retention for 1 year.
UPDATE part_config
SET retention = '10 minutes',
    retention_keep_table = false,
    retention_keep_index = false,
    infinite_time_partitions = true
WHERE parent_table = 'public.employees_history';

custom.conf: our additions to the Postgres configuration.

# Enable pg_partman background worker
shared_preload_libraries = 'pg_partman_bgw'

# How many seconds between pg_partman background worker runs
# It's more realistic to run every 3600 seconds, or longer
pg_partman_bgw.interval = 10

# Which database pg_partman should target
pg_partman_bgw.dbname = 'postgres'

# It's best practice to use limited permissions for the background worker
# pg_partman_bgw.role = 'limitedrole'

# This was helpful when I was working on getting the settings working
# log_min_messages = 'DEBUG1'

With those four files in place, we can run Postgres like this:

docker build -t example-local-image .
docker run -it -d --name local-tembo -p 5432:5432 --rm example-local-image

In a separate shell, I connect into the Postgres container.

psql postgres://postgres:postgres@localhost:5432

Basic demo of saving old versions

After we are set up, we have version history and retention policy configured on the employees table, but both the employees table and the employees_history table are empty.

SELECT * FROM employees;
 name | department | salary | sys_period
(0 rows)
SELECT * FROM employees_history;
 name | department | salary | sys_period | created_at
(0 rows)

Adding data:

INSERT INTO employees (name, department, salary)
VALUES ('Bernard Marx', 'Hatchery and Conditioning Centre', 10000);

INSERT INTO employees (name, department, salary)
VALUES ('Lenina Crowne', 'Hatchery and Conditioning Centre', 7000);

INSERT INTO employees (name, department, salary)
VALUES ('Helmholtz Watson', 'College of Emotional Engineering', 18500);

Now, the employees table has some data, and employees_history is still empty.

SELECT name, salary, sys_period FROM employees;
       name       |   salary  |             sys_period
Bernard Marx | 10000.00 | ["2023-09-28 20:23:14.840624+00",)
Lenina Crowne | 7000.00 | ["2023-09-28 20:23:14.911528+00",)
Helmholtz Watson | 18500.00 | ["2023-09-28 20:23:14.913555+00",)
(3 rows)
SELECT * FROM employees_history;
 name | department | salary | sys_period | created_at
(0 rows)

Modifying data:

UPDATE employees SET salary = 11200 WHERE name = 'Bernard Marx';
UPDATE employees SET salary = 11400 WHERE name = 'Bernard Marx';
UPDATE employees SET salary = 11600 WHERE name = 'Bernard Marx';
UPDATE employees SET salary = 11601 WHERE name = 'Lenina Crowne';

Now, the employees_history table has past versions.

SELECT name, salary, sys_period FROM employees;
       name       |  salary  |             sys_period
Helmholtz Watson | 18500.00 | ["2023-09-28 20:23:14.913555+00",)
Bernard Marx | 11600.00 | ["2023-09-28 20:23:50.731597+00",)
Lenina Crowne | 11601.00 | ["2023-09-28 20:23:50.734214+00",)
(3 rows)
SELECT name, salary, sys_period FROM employees_history;
     name      |  salary  |                            sys_period
Bernard Marx | 10000.00 | ["2023-09-28 20:23:14.840624+00","2023-09-28 20:23:50.684293+00")
Bernard Marx | 11200.00 | ["2023-09-28 20:23:50.684293+00","2023-09-28 20:23:50.727283+00")
Bernard Marx | 11400.00 | ["2023-09-28 20:23:50.727283+00","2023-09-28 20:23:50.731597+00")
Lenina Crowne | 7000.00 | ["2023-09-28 20:23:14.911528+00","2023-09-28 20:23:50.734214+00")
(4 rows)

Looking up past versions

Let's say we want to look up Bernard's salary at a previous date. We can check the employees_history table to find the row where the time range matches our provided timestamp. However, this wouldn't find the correct salary if we provide a timestamp that is after the most recent update to Bernard's salary, since that row is in the employees table.

We can first create a view for this purpose. We only need to do this once, then we can query this view like a table going forward.

CREATE VIEW employee_history_view AS

SELECT name, department, salary, sys_period
FROM employees

UNION ALL

SELECT name, department, salary, sys_period
FROM employees_history;

Then, we can use this query to find Bernard's salary at any given date.

SELECT salary
FROM employee_history_view
WHERE name = 'Bernard Marx'
AND sys_period @> TIMESTAMP WITH TIME ZONE '2023-09-28 20:23:30+00';

@> is a containment operator; you might recognize it if you have used JSONB.
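The containment check on a tstzrange can be modeled as a half-open interval test: a range [start, end) contains a timestamp t when start <= t < end, and an open right bound means "still current". This little Python sketch (my own illustration, not Postgres code) mirrors what the query above does across the view.

```python
# Modeling `sys_period @> timestamp`: a half-open range [start, end)
# contains t when start <= t < end; end=None models an unbounded right
# bound, i.e. the current version in the employees table.

from datetime import datetime

def contains(start, end, t):
    """Half-open range containment, with end=None meaning unbounded."""
    return start <= t and (end is None or t < end)

v_from = datetime(2023, 9, 28, 20, 23, 14)  # historical version start
v_to = datetime(2023, 9, 28, 20, 23, 50)    # historical version end
lookup = datetime(2023, 9, 28, 20, 23, 30)  # timestamp we query for

print(contains(v_from, v_to, lookup))  # True: the historical row matches
print(contains(v_to, None, lookup))    # False: current version starts later
```

Because the bounds tile time with no gaps or overlaps, exactly one row in the union view matches any timestamp between the first insert and now.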

Comparing to the employees_history table shown above, it is returning the correct value.

(1 row)

It also works to look up the current salary:

SELECT salary
FROM employee_history_view
WHERE name = 'Bernard Marx'
AND sys_period @> now()::TIMESTAMP WITH TIME ZONE;
(1 row)
SELECT salary FROM employees WHERE name = 'Bernard Marx';
(1 row)

If I try to query a salary from the future, it will return the current salary. If I try to query a salary from before Bernard is known in the employees_history table, then I get an empty result.


What is partitioning?

The Postgres documentation has detailed information on partitioning, but to summarize: partitioning splits what is logically one large table into smaller tables. Typically, this is done for query performance. In our case, we are partitioning to expire old versions.

Partitioning tables is something I’m familiar with from Tembo’s work in PGMQ, which is a queueing extension for Postgres.



We should expect write performance to be slower, since we are writing to two tables for every update.

I created a new table that does not have versioning enabled to compare write performance.

-- Create a table like employees
CREATE TABLE employees_write_test
AS TABLE employees;

-- ...and insert one row
INSERT INTO employees_write_test (name, department, salary, sys_period)
VALUES ('Bernard Marx', 'Hatchery and Conditioning Centre', 11600.00, tstzrange(now(), null));

Then, I used EXPLAIN ANALYZE to compare the write performance. I ran the query a few times for each.

Without versioning:

UPDATE employees_write_test
SET salary = 11608 WHERE name = 'Bernard Marx';

Three samples:

 Planning Time: 1.654 ms
Execution Time: 1.540 ms

Planning Time: 0.760 ms
Execution Time: 0.707 ms

Planning Time: 1.707 ms
Execution Time: 2.079 ms

With versioning:

UPDATE employees
SET salary = 11610 WHERE name = 'Bernard Marx';

Three samples:

 Planning Time: 2.423 ms
Trigger versioning_trigger: time=2.430 calls=1
Execution Time: 4.783 ms

Planning Time: 2.311 ms
Trigger versioning_trigger: time=1.091 calls=1
Execution Time: 2.979 ms

Planning Time: 2.825 ms
Trigger versioning_trigger: time=1.711 calls=1
Execution Time: 5.686 ms

It's more than twice as slow on a single update. That's because we have to write to two rows instead of one, there is more data to write (the time ranges), and because there is some additional processing, for instance determining which range to put on each row. In the next section, I also compare how much time it takes to write 100,000 rows in each of these tables.


We created a view which is a union between employees and employees_history, then we query the view to find an employee's salary at a given time.

To generate some data, let's make a procedure to update a salary 100,000 times in a row. The below example uses PL/pgSQL. By default, PL/pgSQL functions run as a single transaction, so a function would only result in a single update to the employees_history table. For this reason, I am using a procedure with COMMIT so that each increment is a separate transaction; this way we also get 100,000 updates to the employees_history table. I had to explain that nuance to ChatGPT in order for this procedure to be produced properly.

-- Table name and employee name as inputs
CREATE OR REPLACE PROCEDURE increment_salary(p_name text, p_table_name text)
LANGUAGE plpgsql AS $$
DECLARE
    v_salary numeric(20,2);
    i integer;
    v_sql text;
BEGIN
    -- Dynamically construct the SQL to get the current salary
    v_sql := format('SELECT salary FROM %I WHERE name = $1', p_table_name);
    EXECUTE v_sql INTO v_salary USING p_name;

    -- Loop 100 thousand times
    FOR i IN 1..100000 LOOP
        -- Increment the salary
        v_salary := v_salary + 1;

        -- Dynamically construct the SQL to update the salary
        v_sql := format('UPDATE %I SET salary = $2 WHERE name = $1', p_table_name);
        EXECUTE v_sql USING p_name, v_salary;

        COMMIT; -- Commit the transaction, triggering the versioning procedure
    END LOOP;
END;
$$;

Run the procedure:

CALL increment_salary('Bernard Marx', 'employees');

This took 55 seconds to run on my laptop. I also tried it on the table without versioning enabled, and in this case it took 38 seconds. I ran it a couple more times on the table with versioning enabled, so that the versions would be distributed across multiple partitions. Now we have an employees_history table that's populated with many rows for Bernard.

SELECT count(*) FROM employees_history WHERE name = 'Bernard Marx';
(1 row)

Let's run the same type of query command we ran before, with EXPLAIN ANALYZE. I picked a timestamp that will not be found to ensure it's as slow as possible.

SELECT salary
FROM employee_history_view
WHERE name = 'Bernard Marx'
AND sys_period @> TIMESTAMP WITH TIME ZONE '2023-09-28 15:28:25+00';

Simplified query plan output:

-> Append
-> Bitmap Heap Scan on employees
Recheck Cond: (name = 'Bernard Marx'::text)
Filter: (sys_period @> '...')
Rows Removed by Filter: 1
Heap Blocks: exact=1
-> Bitmap Index Scan on employees_pkey
Index Cond: (name = 'Bernard Marx'::text)

... Empty partitions omitted ...

-> Seq Scan on employees_history_p2023_09_29_0030
Filter: ((sys_period @> '...') AND (name = 'Bernard Marx'::text))
Rows Removed by Filter: 31

-> Seq Scan on employees_history_p2023_09_29_0031
Filter: ((sys_period @> '...') AND (name = 'Bernard Marx'::text))
Rows Removed by Filter: 99969

... Empty partitions omitted ...

-> Seq Scan on employees_history_p2023_09_29_0035
Filter: ((sys_period @> '...') AND (name = 'Bernard Marx'::text))
Rows Removed by Filter: 97393

-> Seq Scan on employees_history_p2023_09_29_0036
Filter: ((sys_period @> '...') AND (name = 'Bernard Marx'::text))
Rows Removed by Filter: 102607

... Empty partitions omitted ...

Planning Time: 12.427 ms
Execution Time: 262.706 ms
(47 rows)

This query took 263 milliseconds. We notice this query needs to scan all partitions, because we are partitioning by created_at, and querying sys_period. We can improve the speed with indexes.

If this were a real workload, I doubt that employees' salaries would be updated so frequently, or at least that's been the case in my personal experience. However, at a big company, there could be a lot of employees. In that case, it would be best to add an index on the name (or, more realistically, employee ID) in the employees_history table. Then, within each partition, the index would find only the rows for the employee being queried, and scanning the few remaining rows (typically zero, one, or two) would locate the correct salary.

Expiring old versions

Earlier in this blog, we configured pg_partman to partition in 1-minute increments, to expire partitions that are older than 10 minutes, and to run its background worker every 10 seconds. On each run, any partition older than 10 minutes is deleted by the pg_partman background worker.

With this query, I can check how many rows and the total data size in each partition.

-- This query was produced by ChatGPT 4 with the prompt:
-- "How can I check the number of rows in each partition of employees_history?"
SELECT
    child.relname AS partition_name,
    pg_total_relation_size(child.oid) AS total_size,
    pg_relation_size(child.oid) AS data_size,
    pg_stat_user_tables.n_live_tup AS row_count
FROM pg_inherits
JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
JOIN pg_class child ON pg_inherits.inhrelid = child.oid
LEFT JOIN pg_stat_user_tables ON child.oid = pg_stat_user_tables.relid
WHERE parent.relname = 'employees_history';

In order to check that old versions are being dropped, I ran the procedure that creates a lot of salary increments several times in a row.

Then, running the above query, I find an output like this:

           partition_name           | total_size | data_size | row_count
employees_history_default | 16384 | 0 | 0
employees_history_p2023_09_28_2204 | 16384 | 0 | 0
employees_history_p2023_09_28_2205 | 16384 | 0 | 0
employees_history_p2023_09_28_2206 | 16384 | 0 | 0
employees_history_p2023_09_28_2207 | 16384 | 0 | 0
employees_history_p2023_09_28_2208 | 16384 | 0 | 0
employees_history_p2023_09_28_2209 | 16384 | 0 | 0
employees_history_p2023_09_28_2210 | 32768 | 8192 | 4
employees_history_p2023_09_28_2211 | 9584640 | 7995392 | 68267
employees_history_p2023_09_28_2212 | 4489216 | 3719168 | 31733
employees_history_p2023_09_28_2213 | 13180928 | 11018240 | 94144
employees_history_p2023_09_28_2214 | 868352 | 688128 | 5856
employees_history_p2023_09_28_2215 | 16384 | 0 | 0
employees_history_p2023_09_28_2216 | 16384 | 0 | 0
employees_history_p2023_09_28_2217 | 16384 | 0 | 0
employees_history_p2023_09_28_2218 | 16384 | 0 | 0
(16 rows)

In this output, we can see that we have 1 partition for every minute, and a total of 15 time partitions. I have old versions expiring after 10 minutes. It's interesting to note that pg_partman preemptively creates partitions for the future, in this case 5 minutes into the future.

If you refer to the original set up steps, I have configured infinite_time_partitions = true, and this means we will generate partitions even when we are not generating any data for them. I think this is the proper configuration since we also have a retention policy that will drop the old partitions. The concern of making infinite partitions as time passes, even if no data is being generated, is not applicable because old tables are being dropped.
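The retention behavior is simple to model: on every background-worker run, any partition whose time boundary has aged past the retention window is dropped. This Python sketch is my own illustration of that policy on 1-minute partitions; the partition bookkeeping here is hypothetical, not pg_partman's internals.

```python
# Sketch of a pg_partman-style retention check on minute partitions:
# on each background-worker run, drop partitions whose start time is
# older than the retention window. Illustrative only.

from datetime import datetime, timedelta

def expired(partitions, now, retention):
    """Return the partition start-times that fall outside the window."""
    return [p for p in partitions if now - p > retention]

now = datetime(2023, 9, 28, 22, 25)
# one partition per minute for the last 16 minutes (minute 0 = current)
partitions = [now - timedelta(minutes=m) for m in range(0, 16)]
to_drop = expired(partitions, now, retention=timedelta(minutes=10))
print(len(to_drop))  # 5: the partitions at minutes 11 through 15
```

Dropping whole partitions is what makes this expiry cheap: it's a metadata operation, not a row-by-row DELETE, which is exactly why partitioning by created_at pays off here.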

To confirm data was being deleted, I sampled the above query over time, and we can see the large body of inserts moving up into the oldest available partitions, then falling outside of the retention policy and being deleted.

partition_name | total_size | data_size | row_count
employees_history_default | 16384 | 0 | 0
employees_history_p2023_09_28_2207 | 16384 | 0 | 0
employees_history_p2023_09_28_2208 | 16384 | 0 | 0
employees_history_p2023_09_28_2209 | 16384 | 0 | 0
employees_history_p2023_09_28_2210 | 32768 | 8192 | 4
employees_history_p2023_09_28_2211 | 9584640 | 7995392 | 68267
employees_history_p2023_09_28_2212 | 4489216 | 3719168 | 31733
employees_history_p2023_09_28_2213 | 13189120 | 11018240 | 94144
employees_history_p2023_09_28_2214 | 876544 | 688128 | 5856
employees_history_p2023_09_28_2215 | 16384 | 0 | 0
employees_history_p2023_09_28_2216 | 16384 | 0 | 0
employees_history_p2023_09_28_2217 | 16384 | 0 | 0
employees_history_p2023_09_28_2218 | 16384 | 0 | 0
employees_history_p2023_09_28_2219 | 16384 | 0 | 0
employees_history_p2023_09_28_2220 | 16384 | 0 | 0
employees_history_p2023_09_28_2221 | 16384 | 0 | 0
(16 rows)

partition_name | total_size | data_size | row_count
employees_history_default | 16384 | 0 | 0
employees_history_p2023_09_28_2211 | 9584640 | 7995392 | 68267
employees_history_p2023_09_28_2212 | 4489216 | 3719168 | 31733
employees_history_p2023_09_28_2213 | 13189120 | 11018240 | 94144
employees_history_p2023_09_28_2214 | 876544 | 688128 | 5856
employees_history_p2023_09_28_2215 | 16384 | 0 | 0
employees_history_p2023_09_28_2216 | 16384 | 0 | 0
employees_history_p2023_09_28_2217 | 16384 | 0 | 0
employees_history_p2023_09_28_2218 | 16384 | 0 | 0
employees_history_p2023_09_28_2219 | 16384 | 0 | 0
employees_history_p2023_09_28_2220 | 16384 | 0 | 0
employees_history_p2023_09_28_2221 | 16384 | 0 | 0
employees_history_p2023_09_28_2222 | 16384 | 0 | 0
employees_history_p2023_09_28_2223 | 16384 | 0 | 0
employees_history_p2023_09_28_2224 | 16384 | 0 | 0
employees_history_p2023_09_28_2225 | 16384 | 0 | 0
(16 rows)

partition_name | total_size | data_size | row_count
employees_history_default | 16384 | 0 | 0
employees_history_p2023_09_28_2212 | 4489216 | 3719168 | 31733
employees_history_p2023_09_28_2213 | 13189120 | 11018240 | 94144
employees_history_p2023_09_28_2214 | 876544 | 688128 | 5856
employees_history_p2023_09_28_2215 | 16384 | 0 | 0
employees_history_p2023_09_28_2216 | 16384 | 0 | 0
employees_history_p2023_09_28_2217 | 16384 | 0 | 0
employees_history_p2023_09_28_2218 | 16384 | 0 | 0
employees_history_p2023_09_28_2219 | 16384 | 0 | 0
employees_history_p2023_09_28_2220 | 16384 | 0 | 0
employees_history_p2023_09_28_2221 | 16384 | 0 | 0
employees_history_p2023_09_28_2222 | 16384 | 0 | 0
employees_history_p2023_09_28_2223 | 16384 | 0 | 0
employees_history_p2023_09_28_2224 | 16384 | 0 | 0
employees_history_p2023_09_28_2225 | 16384 | 0 | 0
employees_history_p2023_09_28_2226 | 16384 | 0 | 0
(16 rows)

postgres=# select count(*) from employees_history;
(1 row)

partition_name | total_size | data_size | row_count
employees_history_default | 16384 | 0 | 0
employees_history_p2023_09_28_2214 | 876544 | 688128 | 5856
employees_history_p2023_09_28_2215 | 16384 | 0 | 0
employees_history_p2023_09_28_2216 | 16384 | 0 | 0
employees_history_p2023_09_28_2217 | 16384 | 0 | 0
employees_history_p2023_09_28_2218 | 16384 | 0 | 0
employees_history_p2023_09_28_2219 | 16384 | 0 | 0
employees_history_p2023_09_28_2220 | 16384 | 0 | 0
employees_history_p2023_09_28_2221 | 16384 | 0 | 0
employees_history_p2023_09_28_2222 | 16384 | 0 | 0
employees_history_p2023_09_28_2223 | 16384 | 0 | 0
employees_history_p2023_09_28_2224 | 16384 | 0 | 0
employees_history_p2023_09_28_2225 | 16384 | 0 | 0
employees_history_p2023_09_28_2226 | 16384 | 0 | 0
employees_history_p2023_09_28_2227 | 16384 | 0 | 0
employees_history_p2023_09_28_2228 | 16384 | 0 | 0
(16 rows)

postgres=# select count(*) from employees_history;
(1 row)

partition_name | total_size | data_size | row_count
employees_history_default | 16384 | 0 | 0
employees_history_p2023_09_28_2215 | 16384 | 0 | 0
employees_history_p2023_09_28_2216 | 16384 | 0 | 0
employees_history_p2023_09_28_2217 | 16384 | 0 | 0
employees_history_p2023_09_28_2218 | 16384 | 0 | 0
employees_history_p2023_09_28_2219 | 16384 | 0 | 0
employees_history_p2023_09_28_2220 | 16384 | 0 | 0
employees_history_p2023_09_28_2221 | 16384 | 0 | 0
employees_history_p2023_09_28_2222 | 16384 | 0 | 0
employees_history_p2023_09_28_2223 | 16384 | 0 | 0
employees_history_p2023_09_28_2224 | 16384 | 0 | 0
employees_history_p2023_09_28_2225 | 16384 | 0 | 0
employees_history_p2023_09_28_2226 | 16384 | 0 | 0
employees_history_p2023_09_28_2227 | 16384 | 0 | 0
employees_history_p2023_09_28_2228 | 16384 | 0 | 0
employees_history_p2023_09_28_2229 | 16384 | 0 | 0
(16 rows)

postgres=# select count(*) from employees_history;
(1 row)


If you got this far, thank you for reading this! I hope that you are inspired to try out extensions on your own and see what they can do. The next time you have some problem to solve with your data, consider that maybe it could just be handled by a Postgres extension.

If you want to try extensions without any local setup, you should try Tembo Cloud at

Just use Postgres!

· 12 min read

In my previous submission to this space, I described my experience with pgmq while using the Python library. In this post, I'll share what I found after inspecting the code.

So, first, I'll describe the general structure of the project. Then, I'll explain what happens when we install the pgmq extension. Finally, I'll describe how some of its functions work.

In this post, I'll be using version v0.25.0, which you can find here.

Project structure

After cloning the appropriate tag, we can see that the repository contains the following files:

$ ls -1

The project uses pgrx. From pgrx's README, we know that the relevant files for the extension are Cargo.toml, pgmq.control and the src and sql directories:

$ tree sql src
├── pgmq--0.10.2--0.11.1.sql
├── pgmq--0.11.1--0.11.2.sql
├── pgmq--0.8.0--0.8.1.sql
├── pgmq--0.8.1--0.9.0.sql
└── pgmq--0.9.0--0.10.2.sql
├── sql_src.sql

0 directories, 7 files

Installing the pgmq extension


This section assumes that you have successfully installed the pre-requisites as described in

To build the pgmq extension, we can do the following:

cargo build

Alternatively, to build and install the pgmq extension, we can do:

cargo pgrx install

In either case, we can see a shared library being created. The installation process also places the shared library in the lib directory of the Postgres installation, and the SQL files and the control file in the extensions directory. In my case:

$ ls -1 /opt/postgres/share/extension/pgmq*

$ ls -1 /opt/postgres/lib/pgmq*

To test the extension, we can do:

cargo pgrx run

and it'll start a psql prompt. In the prompt, we can execute the create extension statement to start using pgmq:

-- List installed extensions

-- Enable pgmq
create extension pgmq;

-- List installed extensions again

The output will look something like:

pgmq=# \dx
List of installed extensions
Name | Version | Schema | Description
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(1 row)

pgmq=# create extension pgmq;

pgmq=# \dx
List of installed extensions
Name | Version | Schema | Description
pgmq | 0.25.0 | public | A lightweight message queue. Like AWS SQS and RSMQ but on Postgres.
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(2 rows)

We can also list the available functions:

-- List available functions under pgmq schema
\df pgmq.*
pgmq=# \df pgmq.*
List of functions
Schema | Name | Result data type | Argument data types | Type
pgmq | archive | boolean | queue_name text, msg_id bigint | func
pgmq | archive | TABLE(archive boolean) | queue_name text, msg_ids bigint[] | func
pgmq | create | void | queue_name text | func
pgmq | create_non_partitioned | void | queue_name text | func
pgmq | create_partitioned | void | queue_name text, partition_interval text DEFAULT '10000'::text, retention_interval text DEFAULT '100000'::text | func
pgmq | delete | boolean | queue_name text, msg_id bigint | func
pgmq | delete | TABLE(delete boolean) | queue_name text, msg_ids bigint[] | func
pgmq | drop_queue | boolean | queue_name text, partitioned boolean DEFAULT false | func
pgmq | list_queues | TABLE(queue_name text, created_at timestamp with time zone) | | func
pgmq | metrics | TABLE(queue_name text, queue_length bigint, newest_msg_age_sec integer, oldest_msg_age_sec integer, total_messages bigint, scrape_time timestamp with time zone) | queue_name text | func
pgmq | metrics_all | TABLE(queue_name text, queue_length bigint, newest_msg_age_sec integer, oldest_msg_age_sec integer, total_messages bigint, scrape_time timestamp with time zone) | | func
pgmq | pop | TABLE(msg_id bigint, read_ct integer, enqueued_at timestamp with time zone, vt timestamp with time zone, message jsonb) | queue_name text | func
pgmq | purge_queue | bigint | queue_name text | func
pgmq | read | TABLE(msg_id bigint, read_ct integer, enqueued_at timestamp with time zone, vt timestamp with time zone, message jsonb) | queue_name text, vt integer, "limit" integer | func
pgmq | read_with_poll | TABLE(msg_id bigint, read_ct integer, enqueued_at timestamp with time zone, vt timestamp with time zone, message jsonb) | queue_name text, vt integer, "limit" integer, poll_timeout_s integer DEFAULT 5, poll_interval_ms integer DEFAULT 250 | func
pgmq | send | bigint | queue_name text, message jsonb, delay integer DEFAULT 0 | func
pgmq | send_batch | TABLE(msg_id bigint) | queue_name text, messages jsonb[], delay integer DEFAULT 0 | func
pgmq | set_vt | TABLE(msg_id bigint, read_ct integer, enqueued_at timestamp with time zone, vt timestamp with time zone, message jsonb) | queue_name text, msg_id bigint, vt_offset integer | func
(18 rows)

With this, we can now explore the extension from the inside. And, if needed, recompile and reinstall the extension to play with it.

The internals

We know that when an extension is created with pgrx, it generates a source file in src/ that serves as the extension's entry point. Let us explore it.

One of the first things we can see is that the other five files in the src/ directory are included as modules:

pub mod api;
pub mod errors;
pub mod metrics;
pub mod partition;
pub mod util;

After reviewing these files a little bit, we can notice that there's also some relevant code in another module, the one in core/. For example, in src/ we can find imports like:

use pgmq_core::{
    assign_archive, assign_queue, create_archive, create_archive_index, create_index,
    create_meta, grant_pgmon_meta, grant_pgmon_queue, grant_pgmon_queue_seq, insert_meta,
    // ...
};

So, at this point we know that we can find the source code in two places: src/ and core/.

If we continue exploring, we can see that a sql file (sql_src.sql) is executed when the extension is enabled:

CREATE TABLE pgmq.meta (
    queue_name VARCHAR UNIQUE NOT NULL,
    is_partitioned BOOLEAN NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT now() NOT NULL
);

We can actually see that table with psql:

-- List tables in the pgmq schema
\dt pgmq.*

-- List contents of pgmq.meta
select * from pgmq.meta;
pgmq=# \dt pgmq.*
List of relations
Schema | Name | Type | Owner
pgmq | meta | table | binidxaba
(1 row)

pgmq=# select * from pgmq.meta;
queue_name | is_partitioned | created_at
(0 rows)

The following diagram shows what the pgmq schema looks like right after CREATE EXTENSION is executed:


From this point, we can suspect that every time we create a queue, a new row is inserted into this table.

Let us see what pgmq.create() does...


Most of the functions provided by pgmq are defined in src/. There, we can find the function pgmq_create(queue_name: &str), and if we chase the call sequence, we can discover that the interesting function is init_queue(name: &str) in core/src/:

pub fn init_queue(name: &str) -> Result<Vec<String>, PgmqError> {
let name = CheckedName::new(name)?;

This function generates several sql statements that are later executed by pgmq_create_non_partitioned using a pgrx Spi client.

I'll skip the details, but the sql statements basically do:

  1. Create a table pgmq.q_<queue_name>.
  2. Assign the table to the pgmq extension.
  3. Create an index on the pgmq.q_<queue_name> table.
  4. Create a table pgmq.a_<queue_name>.
  5. Assign the table to the pgmq extension.
  6. Create an index on the pgmq.a_<queue_name> table.
  7. Insert a row on the pgmq.meta table.
  8. Grant privileges to pg_monitor.
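Based on the columns that pgmq.read returns and the index names we will see below, the generated DDL presumably looks something like the following sketch (column defaults and exact types are my reconstruction, not the extension's verbatim output):

```sql
-- Hypothetical reconstruction of the DDL generated by init_queue('my_queue')
CREATE TABLE pgmq.q_my_queue (
    msg_id      BIGSERIAL PRIMARY KEY,
    read_ct     INT DEFAULT 0 NOT NULL,
    enqueued_at TIMESTAMP WITH TIME ZONE DEFAULT now() NOT NULL,
    vt          TIMESTAMP WITH TIME ZONE NOT NULL,
    message     JSONB
);

-- Index on vt so that visible (expired-vt) messages are found quickly
CREATE INDEX q_my_queue_vt_idx ON pgmq.q_my_queue (vt ASC);

-- The archive table mirrors the queue table, plus an archived_at column
CREATE TABLE pgmq.a_my_queue (
    msg_id      BIGINT PRIMARY KEY,
    read_ct     INT DEFAULT 0 NOT NULL,
    enqueued_at TIMESTAMP WITH TIME ZONE DEFAULT now() NOT NULL,
    archived_at TIMESTAMP WITH TIME ZONE DEFAULT now() NOT NULL,
    vt          TIMESTAMP WITH TIME ZONE NOT NULL,
    message     JSONB
);

CREATE INDEX archived_at_idx_my_queue ON pgmq.a_my_queue (archived_at);
```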

We can see the effects of this in psql using the following lines:

-- Create a queue
select pgmq.create('my_queue');

-- List tables
\dt pgmq.*

-- List indexes
\di pgmq.*

-- See the contents of pgmq_meta
select * from pgmq.meta;

The output will show something like:

pgmq=# select pgmq.create('my_queue');
 create
--------

(1 row)

pgmq=# \dt pgmq.*;
List of relations
Schema | Name | Type | Owner
pgmq | a_my_queue | table | binidxaba
pgmq | meta | table | binidxaba
pgmq | q_my_queue | table | binidxaba
(3 rows)

pgmq=# \di pgmq.*
List of relations
Schema | Name | Type | Owner | Table
pgmq | a_my_queue_pkey | index | binidxaba | a_my_queue
pgmq | archived_at_idx_my_queue | index | binidxaba | a_my_queue
pgmq | meta_queue_name_key | index | binidxaba | meta
pgmq | q_my_queue_pkey | index | binidxaba | q_my_queue
pgmq | q_my_queue_vt_idx | index | binidxaba | q_my_queue
(5 rows)

pgmq=# select * from pgmq.meta;
queue_name | is_partitioned | created_at
my_queue | f | 2023-09-18 23:35:38.163096-06
(1 row)

The following diagram shows what the pgmq schema looks like at this point:


For the queue my_queue, we can see the underlying table and the corresponding archive table. Each table has an index associated with the primary key. The pgmq.q_my_queue table also has an index on the vt column, and pgmq.a_my_queue has an index on the archived_at column.

We can suspect that the pgmq.q_my_queue table is used in the send and read operations. Let us look at those two functions.


We can explore the send operation in a similar way. The relevant SQL is straightforward: it just inserts a new row in the underlying table:

INSERT INTO {PGMQ_SCHEMA}.{QUEUE_PREFIX}_{name} (vt, message)
VALUES {values}
RETURNING msg_id;



At this point, we can see the following pattern in the pgmq project:

  • the exposed SQL functions are defined in src/, and
  • the underlying SQL statements are defined in core/src/

So, let's see. If I were the one programming, I would perhaps do something like "get the first {limit} rows from the queue table whose {vt} has already expired, and for those rows, also update the visibility timeout to now() + {vt}." Naively, maybe something like:

update pgmq.q_my_queue
set
    vt = clock_timestamp() + interval '10 seconds',
    read_ct = read_ct + 1
where msg_id in (
    select msg_id
    from pgmq.q_my_queue
    where vt <= clock_timestamp()
    order by msg_id
    limit 10
);

In reality, pgmq's read is more interesting than that 😅. It performs the following DML:

WITH cte AS
(
    SELECT msg_id
    FROM {PGMQ_SCHEMA}.{QUEUE_PREFIX}_{name}
    WHERE vt <= clock_timestamp()
    ORDER BY msg_id ASC
    LIMIT {limit}
    FOR UPDATE SKIP LOCKED
)
UPDATE {PGMQ_SCHEMA}.{QUEUE_PREFIX}_{name} t
SET
    vt = clock_timestamp() + interval '{vt} seconds',
    read_ct = read_ct + 1
FROM cte
WHERE t.msg_id = cte.msg_id
RETURNING *;


Firstly, in pgmq's version, there is a CTE (Common Table Expression) to obtain the first {limit} message IDs whose vt has expired. (It would be interesting to discuss why pgmq developers used a CTE, but we can explore that in another post.)

There are two crucial things to notice in the CTE. One is the order by clause that ensures the FIFO ordering. The other one is the FOR UPDATE SKIP LOCKED clause, claiming the rows no one else has claimed. This part is essential because it ensures correctness in the case of concurrent operations.
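We can observe the effect of FOR UPDATE SKIP LOCKED directly with two psql sessions (a sketch; it assumes my_queue contains at least two visible messages):

```sql
-- Session A: claim one visible message and keep the transaction open
BEGIN;
SELECT msg_id
FROM pgmq.q_my_queue
WHERE vt <= clock_timestamp()
ORDER BY msg_id
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- Session B: running the same query in a second connection neither
-- blocks nor returns A's row; it skips the locked row and claims the
-- next visible message instead.
```

Without SKIP LOCKED, session B would block until session A commits; with SKIP LOCKED, concurrent consumers each claim distinct messages.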

The next step in the DML is to update the corresponding rows with a new vt value by adding the supplied {vt} to the current timestamp. Additionally, the read_ct value is incremented by 1. What is the use of this counter? In general, we can suspect that there is a problem processing a given message if it has a high read_ct value because users usually archive the message after successfully processing it. So, ideally, a message is only read once.
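Using the signatures from the \df listing above, the visibility-timeout behavior is easy to demo (a sketch, assuming my_queue exists and starts out empty):

```sql
-- Enqueue a single message
select pgmq.send('my_queue', '{"task": "hello"}');

-- First read claims it and hides it for 30 seconds (vt = 30, limit = 1)
select msg_id, read_ct from pgmq.read('my_queue', 30, 1);

-- An immediate second read returns zero rows: the message stays
-- invisible until its vt expires, after which a read would return it
-- again with an incremented read_ct.
select msg_id, read_ct from pgmq.read('my_queue', 30, 1);
```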


The next stage in the lifecycle of a message is archiving it. For that, pgmq uses the following insert statement:

WITH archived AS (
    DELETE FROM {PGMQ_SCHEMA}.{QUEUE_PREFIX}_{name}
    WHERE msg_id = ANY($1)
    RETURNING msg_id, vt, read_ct, enqueued_at, message
)
INSERT INTO {PGMQ_SCHEMA}.{ARCHIVE_PREFIX}_{name} (msg_id, vt, read_ct, enqueued_at, message)
SELECT msg_id, vt, read_ct, enqueued_at, message
FROM archived;

Essentially, it deletes the message with the provided msg_id from the queue table, and then the message is placed in the corresponding archive table.


One interesting thing to notice is that pgmq.archive() can be used to archive a batch of messages too:

select pgmq.archive('my_queue', ARRAY[3, 4, 5]);
pgmq=# select pgmq.archive('my_queue', ARRAY[3, 4, 5]);
 archive
---------
 t
 t
 t
(3 rows)

That is achieved in pgrx by declaring two functions using the same name in the pg_extern derive macro as follows:

#[pg_extern(name = "archive")]
fn pgmq_archive(queue_name: &str, msg_id: i64) -> Result<Option<bool>, PgmqExtError> {
    // ...
}

#[pg_extern(name = "archive")]
fn pgmq_archive_batch(
    queue_name: &str,
    msg_ids: Vec<i64>,
) -> Result<TableIterator<'static, (name!(archive, bool),)>, PgmqExtError> {
    // ...
}


Finally, let's talk about pgmq.drop_queue(). It essentially executes multiple statements:

  1. Unassign the pgmq.q_<queue_name> table from the extension.
  2. Unassign the pgmq.a_<queue_name> table from the extension.
  3. Drop the table pgmq.q_<queue_name>.
  4. Drop the table pgmq.a_<queue_name>.
  5. Delete the corresponding row from the pgmq.meta table.
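In psql, the effect of those steps looks something like this (a sketch, assuming my_queue still exists):

```sql
-- Drop the queue and verify its tables are gone
select pgmq.drop_queue('my_queue');

-- Only pgmq.meta should remain among the extension's tables
\dt pgmq.*

-- And the my_queue row should be gone from the metadata
select * from pgmq.meta;
```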

Nothing surprising in this one, and with it, we conclude our tour.


In this post, we explored how the pgrx tool is used to generate the pgmq extension. We saw how the metadata objects are created and how they are used in the basic send, read and archive operations. At least from an explorer's perspective, the internals of the extension are currently easy to read and understand.

I invite everyone to explore how the other pgmq functions work. You can explore the code in the pgmq repository, and you can learn more about pgrx in its documentation.

· 9 min read
Samay Sharma

In case you missed it, Postgres 16 came out last week - and this year it arrived earlier than the last few years. There are many features that I’ve been looking forward to for the last few months and I’m excited to see them get into the hands of users. Before we dive into the specific features of this release, let’s discuss what a Postgres major release actually means.


Postgres Releases

The PostgreSQL Global Development Group releases a new major version every year with new features.

In addition, Postgres releases minor versions of each major release every 3 months or so with bug fixes and security fixes. No new features are released in minor versions, and that’s what makes major version releases so exciting as it’s the culmination of about a year’s worth of development work on the project.

While there's a case to be made for faster release of features, Postgres prides itself on stability and this release cadence provides enough time for features to be proposed, reviewed, committed and tested before they get shipped.

Should I upgrade to Postgres 16?

If you are building a new application, yes, I would recommend that you start with the latest major version of Postgres. This will guarantee the latest and greatest features, and a continuous flow of minor releases that fix bugs and improve the security of your database.

If you are upgrading an existing system, there are more factors to consider. The general advice is to always upgrade minor versions - they contain security and bug fixes, and the risk of not upgrading is higher than the risk of the upgrade itself.

However, for major versions, you will need to weigh the tradeoffs, as major versions usually change the internal format of system tables and data files. That means you can’t simply reuse the previous version’s data directory — you’ll need to use pg_dump / pg_restore or pg_upgrade to upgrade. In addition, depending on the features you are using and the Postgres release, manual changes to your code or queries may also be required.

Obviously, another important factor if you are using a managed service provider is when they provide support for Postgres 16. At Tembo Cloud, we’ve already started working on supporting Postgres 16 and expect it to be available in a few weeks.

What’s most exciting about Postgres 16?

Postgres 16 delivers exciting features in all aspects of the database ranging from performance improvements, monitoring enhancements, better security and privilege handling, replication improvements, new server features and commands and a lot more.

If you’re interested in the complete list of features, you can read the detailed release notes. Below, I’ll talk about the aspects of this release which excite me the most and we will talk about a few not-so-talked about features which lay the groundwork for more exciting features in Postgres 17.

Logical replication improvements

Logical replication is a feature I’ve always been interested in, as it allows you to expand the capabilities of Postgres by moving data between different Postgres databases or from Postgres to other databases. It finds interesting use cases in: replicating selectively from one database to another, replicating across Postgres versions, online migrations and allowing consolidation from multiple databases.
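As a quick refresher, selective replication between two databases is driven by a publication on the source and a subscription on the target (the names and connection string here are illustrative):

```sql
-- On the source database: publish only the tables you care about
CREATE PUBLICATION pub_orders FOR TABLE orders, order_items;

-- On the target database: subscribe to that publication
CREATE SUBSCRIPTION sub_orders
    CONNECTION 'host=source.example.com dbname=app user=replicator'
    PUBLICATION pub_orders;
```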

Arguably the most exciting logical replication feature in this release is support for logical replication from standby servers. Prior to this, you could only create a logical replication slot on the primary, which meant adding more logical replicas put more load on the primary. With Postgres 16, standbys can also create replication slots, allowing that load to be distributed. What’s more, the replication slots on a standby are persisted even when the standby is promoted to primary. This means that subscribers won’t be affected even during a failover! You can read more about this feature in Bertrand’s blog post.

The short of it is that now you can do this on a Postgres 16 standby:

postgres@standby=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

postgres@standby=# SELECT * FROM pg_create_logical_replication_slot('active_slot', 'test_decoding', false, true);
  slot_name  |    lsn
-------------+------------
 active_slot | 0/C0053A78
(1 row)

On Postgres 15, the same thing would have errored out:

postgres@standby=# SELECT * FROM pg_create_logical_replication_slot('active_slot', 'test_decoding', false, true);
ERROR: logical decoding cannot be used while in recovery

In addition to this, there are a number of other logical replication performance improvements. This includes faster initial table sync using binary format, use of btree indexes during logical replication apply when tables don’t have a primary key (previously the table would be scanned sequentially) and parallel application of large transactions (~25-40% speedups).
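Both the binary initial sync and the parallel apply of large transactions are opt-in on the subscriber side; if I read the release notes right, they are enabled with subscription options along these lines:

```sql
-- binary = true: initial table sync (new in PG16) and apply use the
--                binary wire format instead of text
-- streaming = parallel: large in-progress transactions are applied by
--                parallel workers instead of being serialized first
CREATE SUBSCRIPTION sub_orders
    CONNECTION 'host=source.example.com dbname=app user=replicator'
    PUBLICATION pub_orders
    WITH (binary = true, streaming = parallel);
```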

Monitoring improvements

The other bucket of features that intrigued me is the monitoring enhancements. While Postgres provides a number of statistics tables with monitoring information, I believe more can be done to provide actionable insights to users. As an example, Lukas pointed out several interesting gaps in Postgres monitoring in his PGCon 2020 talk.

Coming back to this release, pg_stat_io has to be the most useful addition to the Postgres stats views in Postgres 16. It allows you to understand the I/O done by Postgres at a more granular level, broken down by backend_type and context. This means you can calculate a more accurate cache hit ratio by ignoring the I/O done by VACUUM, differentiate between extends and flushes, and separate out bulk operations when deciding which configurations to tune. Melanie talks about this and much more in her talk, and this blog post walks through how you would use it as a DBA.

Here is an example of the statistics you can see in pg_stat_io:

$ SELECT * FROM pg_stat_io ;
backend_type │ io_object │ io_context │ reads │ writes │ extends │ op_bytes │ evictions │ reuses │ fsyncs │ stats_reset
autovacuum launcher │ relation │ bulkread │ 0 │ 0 │ [NULL] │ 8192 │ 0 │ 0 │ [NULL] │ 2023-02-27 13:25:39.725072+01
autovacuum worker │ relation │ bulkread │ 0 │ 0 │ [NULL] │ 8192 │ 0 │ 0 │ [NULL] │ 2023-02-27 13:25:39.725072+01
client backend │ temp relation │ normal │ 0 │ 0 │ 0 │ 8192 │ 0 │ [NULL] │ [NULL] │ 2023-02-27 13:25:39.725072+01
background worker │ relation │ bulkread │ 268221 │ 268189 │ [NULL] │ 8192 │ 0 │ 268189 │ [NULL] │ 2023-02-27 13:25:39.725072+01
checkpointer │ relation │ normal │ [NULL] │ 32121 │ [NULL] │ 8192 │ [NULL] │ [NULL] │ 3356 │ 2023-02-27 13:25:39.725072+01
standalone backend │ relation │ bulkread │ 0 │ 0 │ [NULL] │ 8192 │ 0 │ 0 │ [NULL] │ 2023-02-27 13:25:39.725072+01
startup │ relation │ vacuum │ 0 │ 0 │ 0 │ 8192 │ 0 │ 0 │ [NULL] │ 2023-02-27 13:25:39.725072+01
walsender │ relation │ bulkread │ 0 │ 0 │ [NULL] │ 8192 │ 0 │ 0 │ [NULL] │ 2023-02-27 13:25:39.725072+01
walsender │ temp relation │ normal │ 0 │ 0 │ 0 │ 8192 │ 0 │ [NULL] │ [NULL] │ 2023-02-27 13:25:39.725072+01

In addition to this, there are other improvements including the addition of last_seq_scan and last_idx_scan on pg_stat_* tables which allow you to understand index usage better and figure out when plans for a query might have changed.
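For example, assuming the new last_idx_scan column surfaces in the per-index stats views as the release notes describe, spotting stale indexes becomes a one-liner:

```sql
-- Indexes that have never been scanned, or not scanned recently, are
-- candidates for a closer look (last_idx_scan is new in Postgres 16)
SELECT schemaname, relname, indexrelname, idx_scan, last_idx_scan
FROM pg_stat_user_indexes
ORDER BY last_idx_scan ASC NULLS FIRST
LIMIT 10;
```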

Special mentions

Like I said, each release comes with many improvements - and I could not outline them all in a blog post (and if I did, nobody would read it!). But I do want to mention a few other items in Postgres 16 that I won’t be able to dive deeper into but are interesting as well.

  • Load balancing with multiple hosts in libpq: This feature allows you to balance load across Postgres read replicas directly within libpq (the foundational Postgres client library) without having to run a separate load balancer. You can read this blog post on how this new feature is implemented and can be used.
  • Performance: I won’t repeat what’s in the release notes, but there’s a long list of performance improvements in this release. There’s support for more parallelism on FULL and OUTER JOINs and on more aggregates, greater use of incremental sorts, window function optimizations and even an up to 300% performance improvement in COPY.
  • VACUUM improvements: Last thing I’d mention is improvements to VACUUM which include freezing performance improvements, ability to increase (or decrease) shared buffer usage by VACUUM, and faster loading of VACUUM configs.

Laying the groundwork for an exciting Postgres 17 (and beyond)

All of the features mentioned above can immediately add a lot of value to your Postgres usage, but there are new features which lay the groundwork for powerful features in future releases. I’ll quickly touch upon three items that I believe are noteworthy:

  • Direct IO and Async IO in Postgres: In Postgres 16, several building blocks for implementing direct and asynchronous IO for Postgres were committed. This includes reduced contention on the relation extension lock and the addition of Direct IO as a developer-only option via the debug_io_direct setting. This important but hard work has been ongoing for several releases, and Postgres 17 will likely be the first release where users will be able to utilize these features.
  • Moving towards Active-Active replication: A feature was committed in Postgres 16 to let logical replication avoid replication loops, which occur when a transaction gets replicated from source to target and back. Postgres 16 allows subscribers to process only changes that have no origin, which prevents these loops. Bi-directional active-active replication is still very complicated and requires solving a lot of problems, but this feature tackles one of those important sub-problems.
  • Migration of the build system to Meson: This might have slipped under your radar but in this release, Postgres added support for a new build system which is expected to replace the Autoconf and Windows based build systems. Why, you might ask? Andres makes a compelling case for it in this thread if you’re interested but some reasons include faster build times, simpler dependency management and moving towards a common build system across Linux and Windows.
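For the replication-loop item above, the user-visible knob is the new origin subscription option. In a two-node active-active sketch (node names are illustrative), each node subscribes only to changes that originated locally on its peer:

```sql
-- On node A, replicate from node B but ignore changes that B itself
-- received via replication (origin = none), breaking the loop
CREATE SUBSCRIPTION sub_from_b
    CONNECTION 'host=node-b.example.com dbname=app user=replicator'
    PUBLICATION pub_app
    WITH (origin = none, copy_data = false);
```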

Postgres continues to deliver

With every new release, Postgres becomes more and more compelling and adds great features that improve the experience of its users. I recommend you check out the press release, and the release notes to see everything coming along with this new release.

And if you want to try out the full power of Postgres including its powerful extension ecosystem on a managed service, try out Tembo Cloud.