
· 26 min read

Build the thing you wish you had right now. That’s the story behind Hacking Postgres, Ep 9, where Ry sits down with Bertrand Drouvot of AWS Cloud. Today they talked about the struggles of trying to build from scratch vs having a template, what should be core vs an extension, and being able to see what’s really happening inside your database.

Watch below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Bertrand for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.

Transcript

[00:00:12] - Ry

Welcome to Hacking Postgres, a podcast about extensions, extension creators, and other interesting, innovative things in the world of Postgres. I'm Ry Walker, founder of Tembo, a managed Postgres company. And today I'm here with Bertrand Drouvot. And I know I said it poorly, but Bertrand, welcome to the show.

[00:00:34] - Bertrand

Thanks for having me. And no, that was perfectly said.

[00:00:39] - Ry

Yeah, I'd love to start with if you could give us a quick background. Where'd you grow up? I'm guessing it's France.

[00:00:49] - Bertrand

Yes.

[00:00:51] - Ry

What were you doing before you started working on Postgres? I'm always curious to hear background.

[00:00:57] - Bertrand

Yeah. So as you can hear, I am French. I grew up in the east of France. Before doing Postgres, I did about 20 years as an Oracle DBA, as a freelancer or working in some companies. Then I switched to Postgres about five years ago now, and I've been doing it as a full-time job for about four years.

[00:01:27] - Ry

Awesome. Yeah. Do you remember when you first started using Postgres? I mean, you probably touched it earlier in your career, or was it really like a first touch just five years ago?

[00:01:38] - Bertrand

Yeah, it was really a first touch, like five years ago. Before that it was mainly Oracle, but I wanted to see something else and wanted to contribute one way or another, not just use a black box. And so that's why I moved to Postgres.

[00:01:56] - Ry

Do you like Oracle better than Postgres?

[00:02:00] - Bertrand

That's not the same, but I prefer Postgres because I can be involved in the new features, bug fix and all this kind of stuff.

[00:02:10] - Ry

Yeah. Awesome. Let's talk about some of your Postgres work. I know you've worked on pgsentinel, but yeah, I'd love to know what you've built in the past and then maybe what you're working on now. But we can start with the past.

[00:02:26] - Bertrand

Yeah. So when I started to study Postgres, I was comparing it with what I knew from the Oracle world, and something like active session history was not there. So I thought, let's try to build this, and this is how pgsentinel came up. It's an extension that records active sessions: it regularly pulls active sessions from pg_stat_activity, and you can define the sampling period as well as the number of records to retain in its in-memory ring buffer. And before PG 14, I think the query ID was not present in pg_stat_activity. So I also had to make use of, I think, the [inaudible 00:03:18] analyze hook to get the query ID and show it, so that in every pgsentinel record you see what the session was doing and which query ID was associated with it at the time of the sampling.

[00:03:33] - Ry

It's interesting. We're literally building dashboards right now and I think we need pgsentinel, perhaps, because I want to know the history of the sessions. Does it get exposed? I don't know. Are people getting this data pushed out, because we consume all these metrics from Postgres via Prometheus? I wonder. You said it's stored in memory, but how do I access that besides running a query, I guess.

[00:04:06] - Bertrand

Yeah, so there is basically a view on top of the in-memory ring buffer. So you can query the data and I guess you can push it to Prometheus.
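For anyone who wants to try that, here is a minimal sketch of querying pgsentinel's sampled data, assuming the view and column names documented in the pgsentinel README (pg_active_session_history, ash_time):

-- Top wait events across the sessions sampled in the last 10 minutes.
-- Each row of the view is one sample of one active session.
SELECT wait_event_type,
       wait_event,
       count(*) AS samples
FROM pg_active_session_history
WHERE ash_time > now() - interval '10 minutes'
GROUP BY wait_event_type, wait_event
ORDER BY samples DESC;

An exporter can run a query like this on a schedule and push the counts to Prometheus, which is the usage Ry asks about here.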

[00:04:17] - Ry

Yeah, it's awesome. Yeah. Okay, I think I just found a solution to a problem by inviting you to the podcast. Thanks. Are you actively working on that or is that pretty much completed? What are you working on these days?

[00:04:36] - Bertrand

Yeah, so for [inaudible 00:04:37] it's more about, you know, when there is a new major version, ensuring it compiles and still works as it should, and no more than that, and fixing bugs when they are reported. I also wrote another extension, pg_orphaned, which can be useful to detect orphaned files. I mean, in Postgres it's easy to create orphaned files: you just begin, create a table, insert some data and crash the server. Then you will get orphaned files, so the extension detects them. Then you can move them to a backup directory, verify everything is still okay, and then delete them, or push them back to the original directory if anything went wrong. It also takes care of existing transactions using a dirty snapshot, so that you can see transactions that have started but not committed yet, and you don't move files that are part of any active transaction. And another one that is fun is pg_directpaths. It enables inserts that bypass shared buffers and write directly to disk. It has performance benefits because at the end you do less work; I mean, you don't need to find a free buffer and all this stuff, you just write to disk. And it also generates less [inaudible 00:06:13] because it only flushes [inaudible 00:06:17] images in batches during that time.
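As a side note, the orphaned-file scenario Bertrand describes is easy to reproduce on a throwaway instance; a minimal sketch (the table name is just for illustration):

BEGIN;
CREATE TABLE orphan_demo (id int, payload text);
INSERT INTO orphan_demo
SELECT g, repeat('x', 100) FROM generate_series(1, 100000) AS g;
-- If the server crashes here, before COMMIT, the transaction is rolled back
-- and the table never becomes visible, but the data file created for it stays
-- on disk as an orphaned file, which is what pg_orphaned is built to find.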

[00:06:21] - Ry

Interesting. I'm curious about the origin of these extensions, like for example pgsentinel. You work at AWS, right? Are you on, like, the RDS team? I'm curious how it's organized. Are you working directly on the features of that, or are you working with customers trying to use RDS and Aurora?

[00:06:47] - Bertrand

Yes. So currently I am part of the RDS contributor Postgres team, whose role is mainly to contribute upstream, fixing bugs and providing new features.

[00:06:59] - Ry

Got it. Okay. But like Sentinel, was that something that RDS team wanted or was that like a customer, what caused you to decide to work on that? Or was it like your own idea? Like, hey, you just personally wanted it to exist. I'm curious, like the origin of the project.

[00:07:17] - Bertrand

Yeah. So for pgsentinel, I was even not employed by AWS at that time. I did it when I started to learn Postgres, in fact, because I like to learn doing something and just not reading books or whatever.

[00:07:34] - Ry

Got it. All right, that's cool. Yeah. So are you building extensions for Postgres as part of your, I get like you're contributing to the core project, but is the extension work similarly just your own personal interests or is it related to making the RDS platform better?

[00:07:58] - Bertrand

I guess, yeah. So pg_orphaned is an extension that is being used internally because we see orphaned files, so we have to deal with them, and that's why this extension has been built. pg_directpaths was more for fun and to mimic an Oracle feature that allows direct path inserts.

[00:08:27] - Ry

Going back to building your first extension, what challenges were you faced with? What was hard about it?

[00:08:35] - Bertrand

Yeah, it was not that easy. So first of all, there is a lack of any kind of template. You have to find an existing extension and, at least that's what I have done, try to understand how the existing extensions work and try to find documentation around it. I think it was not that easy. I think it is going better with pgrx, because now it kind of provides an initial template so you can start building on top of this. But yeah, it was not that easy at the beginning.

[00:09:12] - Ry

Yeah. Would you use pgrx and Rust to build your next extension? Or, I don't know, maybe you're a hardcore C++ guy and prefer that. I'm curious, what's your thought on that?

[00:09:27] - Bertrand

Yeah, so the last one I built, I built it in C and in Rust, just because I am a newcomer to Rust and I wanted to learn Rust and pgrx as well. So yes, I started to learn about it. I think maybe there are some limitations in Rust; it all depends what you need to do for memory management and this kind of stuff. Maybe it's more difficult outside of the C world for Postgres.

[00:09:54] - Ry

Yeah, I think I've talked to Eric about not everything has been exposed in pgrx, so you may have to... If you're doing something super deep, it may not have the right hooks for you.

[00:10:07] - Bertrand

But the cool thing is there are modules that you can rely on. So it's easy if you want to interface with someone else's work.

[00:10:20] - Ry

The other thing I think it brings is a standardized template that helps a new extension developer; you don't have to go hunt for one similar to yours. I imagine you probably spent some energy looking at all the different extensions to find the right pattern, the close-enough pattern to borrow. Yeah. So are there big milestones, related to either your extensions or even Postgres itself, that you're looking forward to?

[00:10:54] - Bertrand

Yeah. What I would like to see is improvement in observability and this kind of stuff. For example, there is wait event, but for the moment there is no really details around it. For example, if you have a buffer content wait event, I would like to know for which relation, for example. And so this is kind of stuff I would like to try to contribute myself and to improve. That's an area I am interested in.
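For context, here is roughly what core Postgres exposes today: you get the wait event type and name per backend from pg_stat_activity, but not, for example, which relation a buffer_content wait refers to, which is the detail Bertrand is asking for.

SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;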

[00:11:29] - Ry

Is your goal to become a Postgres committer? Are you working towards that in your head?

[00:11:37] - Bertrand

Yeah, my goal is first to be a contributor because for the moment that's not the case. And yeah. Trying to move step by step. Yes. And to improve my knowledge and improve the product if I can.

[00:11:52] - Ry

Yeah. I think our CTO at Tembo did an analysis recently, and I think he saw it was, I forget, something like seven and a half years of average contributor time before you get to be a committer in Postgres. I'd say it's as much time as going to school to become a doctor, maybe even longer. It's a pretty high bar. I think it's great in some ways that Postgres is so selective.

[00:12:24] - Bertrand

So maybe I'm already too old to reach this goal.

[00:12:27] - Ry

I don't know. I don't know. Yeah. Well that's good. I'm curious, kind of shifting away a little from Postgres, but as an engineer, what are some of the most important lessons you've learned over your career? I know this is kind of an open ended question, but does anything come to mind?

[00:12:48] - Bertrand

Yeah. So listen to others, learn from others and accept remarks and different point of view. I think it's very important.

[00:12:59] - Ry

Yeah. What do you think about the way Postgres work is managed? Mailing list and diffs and I guess patches. Does it bother you at all or do you kind of find it a refreshing difference from the GitHub way?

[00:13:20] - Bertrand

Yeah. So I am not that young. So working with email for me is perfectly fine and I think it works pretty well. I mean, the project I think is working well that way. But yes, I can imagine that maybe younger people would like to work differently than by email, but for my personal case, that's okay for me.

[00:13:47] - Ry

Yeah, I think the reason why I think it works so well is it's truly a global community. Right. Obviously if you have everybody in approximately the same time zone on a team, you can make decisions and have conversations outside of email. But I think the discussion first, it's almost like a demo first, design first sort of mentality versus Hey, A PR just popped in. Let's talk about that code. I don't know, I think it's refreshingly different. I think some other open source projects that want to be notable could adopt this methodology, even though on the surface it feels very frictionful. But I think it's intentionally that way. Let's talk about this before we do it, right?

[00:14:38] - Bertrand

Yeah, that's right. And also explain why you would find this useful and everything like. Yes, exactly.

[00:14:45] - Ry

Yeah.

[00:14:46] - Bertrand

And also there is conferences, I mean during conferences we can reach out to people and speaking about ideas and what you are working on as well.

[00:14:55] - Ry

Yeah. Do you go to the US conferences?

[00:15:00] - Bertrand

Not so much. I went to PGConf New York, not the last one, but the one before, so two years ago. And yes, for the US, that's the only one I have done for the moment.

[00:15:12] - Ry

Got it. Yeah, I was in New York for this most recent one, for the first time. It was an interesting experience to see how much of the event is the vendors and builders of Postgres themselves. I think, again, it's just impressive to me that Postgres has gotten to the point that it can have an event like that and that it's sustainable. A lot of other open source projects, I think, can't pull that off without being somehow a commercial endeavor. So I think it's cool.

[00:15:49] - Bertrand

And there is also PGConf Europe. That is a very big one as well.

[00:15:55] - Ry

So talk to me, do you spend more of your. I would like to give people a chance to talk about the commercial product they're working on too. Are you spending more time on RDS or Aurora? Or are the lines between those pretty blurry in terms of how you work on your product?

[00:16:14] - Bertrand

No, I spend almost all my time on community side of things and I am still involved in some Aurora or [inaudible 00:16:23] related project because I was working in this area before I joined the Postgres contributor team.

[00:16:31] - Ry

Got it.

[00:16:33] - Bertrand

And yes, we are involved in case there is a bug or things like this.

[00:16:37] - Ry

Right, of course. Yeah. That's the idea; it's nice to have a pack of contributors nearby when things go wrong.

[00:16:47] - Bertrand

Yeah. That helps.

[00:16:49] - Ry

Yeah. Okay. All right. That makes sense. It sounds like the role is pretty pure now. Pure community.

[00:16:57] - Bertrand

Yes.

[00:16:58] - Ry

Except when asked, what are some of the developments in Postgres ecosystem that you're excited about, whether it's other extensions or even stuff in the core.

[00:17:13] - Bertrand

Yeah. So I contributed to logical decoding on standby, which is now part of PG 16. I think it's a very useful feature and a much-requested one, so I think that's good. And now I am mostly involved in slot synchronization: synchronizing slots on a standby from slots on the primary, so that after a failover you can resume working from the slot.

[00:17:47] - Ry

Tell me I'm new. I'm sort of new to Postgres. I'm meeting all these people, meeting you for the first time here. Who are some of your favorite people that deserve attention for their good work? I'm curious. Name two or three people and maybe potential future guests of the podcast.

[00:18:09] - Bertrand

Yeah. So, I mean, there is people working on the community side and also people not known. So it really depends. I mean, the community itself is very good and really kind to help and all this stuff. So that's good. I mean, everyone on [inaudible 00:18:26] is good people.

[00:18:28] - Ry

Yeah. So you could just randomly select someone and then you're going to find someone good. Yeah, I got it. Did someone recruit you? Who was it that got you into Postgres? Who hooked you in from Oracle Land?

[00:18:45] - Bertrand

Myself.

[00:18:46] - Ry

Oh, you just decided to defect one day, right?

[00:18:50] - Bertrand

Yeah, exactly. I was like, I really want to contribute to something open source. Having done databases for 20 years, I chose a database, and Postgres was the obvious choice for me.

[00:19:04] - Ry

Yeah. Okay. Have you ever messed with MySQL? Did you consider MySQL at the same time?

[00:19:10] - Bertrand

Not at all.

[00:19:11] - Ry

Not at all. Yeah. Why not now? Okay. All right, so if you had a magic wand and you could add any feature magically to Postgres, what would you use that magic wand on?

[00:19:26] - Bertrand

Yeah. So, observability, to improve this area. I think Postgres is now very rich in terms of features and development features, but observability, I think, is really an area to improve, in my opinion.

[00:19:43] - Ry

It's tricky with an open source project. Again, I'm sure at AWS you guys do it a certain way too, but we get, I don't know, maybe 350 Prometheus metrics out of it right now. How many more bits of information should we get? What's great observability? It seems like a lot, but I also think sometimes I'm looking for information that's just not there. So maybe a lot of low hanging fruit has been done, but not the hard stuff. Yeah, I'm kind of curious what's missing in the observability that's available today.

[00:20:26] - Bertrand

Yeah, so I think it's really, for example, wait events. I would like to have more details depending on each wait event. For a data file wait, for example, which file? Also, how long have you been waiting on this wait event, and stuff like this.

[00:20:47] - Ry

Do you feel like that data is really for the people running Postgres, not so much for the end user necessarily? Right. You're looking for better observability for the managed service provider.

[00:20:59] - Bertrand

Yeah, for DBAs, I think. DBAs, yeah. Especially the ones coming from Oracle; they're used to having a lot of observability, I think.

[00:21:14] - Ry

Yeah, it's interesting. I'm trying to think the role of DBA. I think the more managed Postgres there is, the less DBAs there are per se in terms of having access to those low level details. But yet in Postgres a lot of people still self host and you probably expect like that'll never go away. Right.

[00:21:44] - Bertrand

But even in managed services for the moment, we can only provide what is available. If you look at performance insight, for example, you have the wait events, but at some time you need to guess what is really happening because there is a lack of details, I think.

[00:22:04] - Ry

Yeah. So you would make that available again in your magic one scenario you would add that information and you would make it available even to end users in RDS because it can help them understand how to optimize how they're using RDS. Right?

[00:22:21] - Bertrand

Yeah, exactly.

[00:22:22] - Ry

Okay, I like that. Any other areas of observability that you think are big opportunities for improvements?

[00:22:33] - Bertrand

Yeah, for example, when a query is running, we'd like to know which part of the plan it is currently running, and have pg_stat_activity display the plan ID. Then I could also know which plan ID was used in the past and which one is used currently, and maybe there was a bad plan being used and that's why it was slow before. This kind of area.

[00:23:01] - Ry

Yeah.

[00:23:02] - Bertrand

Also being able to export and import statistics, table statistics.

[00:23:09] - Ry

Do you ever run a query and it's like I would say I'm more on the novice side. I mean, I can write some pretty nice queries with all kinds of features, but I really don't understand how files are interacting and so on. But sometimes I run a query and it's like, oh, this one's taking a long time. I wonder why obviously. Has it ever been thought like it'd be kind of cool to run that same query, but get a visual of what's going on inside? Are there services or have you seen anything like that where a novice user could visualize what's going on inside of a long running query kind of as it's happening? Or does it have to be like an analytical process to figure out why it's going to go slow?

[00:24:02] - Bertrand

I think so. Once you have the explain plan of the query, there are some tools, for example one that Dalibo provides, which can graphically represent your execution plan, with the hotspots and the nodes that are taking most of the time. It's very useful to visualize this kind of thing.

[00:24:28] - Ry

So which tool did you say does that?

[00:24:30] - Bertrand

I don't know the name by heart, but it's done by Dalibo.

[00:24:35] - Ry

Okay, Dalibo.

[00:24:37] - Bertrand

Yeah.
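The tool in question is presumably Dalibo's plan visualizer (explain.dalibo.com). Tools like that typically start from a captured plan; a minimal sketch, with a made-up query and tables, of producing one in the JSON format such visualizers accept:

EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '7 days';
-- Paste the JSON output into the visualizer to see where the time goes.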

[00:24:37] - Ry

Okay, I'll have to check that out. Yeah. Again, one of the things we're trying to do with Tembo is just bring advanced Postgres things to novice users. As you know, most people using Postgres have gone 1% deep into the depths of what's there, and it's pretty intimidating for new users, I think. In a good way. I think it's nice that it's deep and not shallow, but it's a lot. True. That's true. Yeah. I mean, the same thing with Linux, right? Or git. If you actually try to understand how git works, you'd be like, oh man. Is there anything about Postgres where almost nobody agrees with you? Do you have any contrarian opinions on anything? I know this is like me trying to get you to fight with the community, but I'm just curious. I love hearing contrarian views.

[00:25:38] - Bertrand

No, not that I can think about. For example, when I say there is a lack of observability, I don't think I have heard anyone saying the contrary.

[00:25:54] - Ry

Well, where does the observability belong, though? Does it belong inside of the core? I guess this could be a point. There's a bunch of extensions inside the core, you might argue; the pgsentinel capability probably should be available to everyone without having to install an extension. But if there are 100 things like that, it bloats up Postgres. But yeah, maybe. Do we need to put more observability stuff in the main project, or does it belong outside?

[00:26:27] - Bertrand

Yeah, I was mainly thinking in the main project. Yes. Provide more details about what's going on.

[00:26:34] - Ry

And turned on by default.

[00:26:39] - Bertrand

Okay. I mean, depending on the cost of it, obviously.

[00:26:44] - Ry

Does anyone? Well, like pg_stat_statements, that's not turned on by default, right?

[00:26:51] - Bertrand

On core Postgres, no. But on managed providers, I think so, yeah.

[00:26:59] - Ry

So would you argue that should be turned on by default all the time? I know there's a little cost. Right. Everything has a cost. That's a tricky one.

[00:27:10] - Bertrand

I think in production it has to be. I mean, at the end when something goes wrong you need to have something to analyze. If not.
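For readers following along, here is a minimal sketch of turning pg_stat_statements on and looking at it; the column names below are the PG 13+ ones:

-- In postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by cumulative execution time.
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;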

[00:27:19] - Ry

Yeah, I agree it's difficult. Okay. But once again, production issues. It's just a tricky thing to me, because the open source project is kind of like the inner core, and then there are things related to running it in production. Take, for example, a Kubernetes operator. I think we could probably all agree a Kubernetes operator shouldn't be inside of the Postgres project per se, but that means a backup strategy can't be inside of the project either. Yeah, these edges are interesting. I think it's just, how much observability should be inside of versus outside of the core? Would you argue all of it should be inside? Like, would you say pgsentinel ought to be merged into core?

[00:28:15] - Bertrand

No, I don't think so. Because an extension is easy enough to have this kind of observability.

[00:28:28] - Ry

But people don't know it exists. Right. Unless they find it. That's the tricky part.

[00:28:33] - Bertrand

Yeah, that's the problem.

[00:28:36] - Ry

Yeah. Discoverability of extensions, that's one of the things we're working pretty hard on too. I think it'd be great to have, we want to have metadata maybe at some point. We haven't really talked to the Amazon team about this, but I want to basically have a place where managed service providers can report which extensions are being used. Just so we can know what's popular, what goes well with something else, what pairs nicely. If I'm using five extensions, what's the sixth extension I ought to be using, given that profile? That would be nice, because I think digging into these extensions is a big opportunity for developers. Someone might be banging their head against the problem that pgsentinel solves. And I have, I've banged my head against that problem for hours, not days or anything like that. But I've thought about it, checked for it. Certainly there's got to be a history of active sessions somewhere in here, and I couldn't really find it, or even a way to make it very easily. That was my thought. I was like, there should be some way to create it. But anyway, curious. Do you listen to podcasts very much?

[00:30:00] - Bertrand

From time to time, yes. But it really depends what I am doing, and if one pops up, depending on the subject, yes, I can listen to it.

[00:30:12] - Ry

Yeah. What are your favorite podcasts? Even if it's non technical, I'm always looking for good new podcasts to listen to.

[00:30:21] - Bertrand

There is the Postgres podcast done by Datadab. You like that one that I like? Yeah.

[00:30:35] - Ry

Any others?

[00:30:36] - Bertrand

I am also following you all, by the way.

[00:30:39] - Ry

Great. Thank you. Yeah, it's interesting. About half of my feed is Postgres related podcasts, but I also love to get some other stuff mixed in. It's hard to work all the time. But yeah, it's nice to have. I think it's great that you can just kind of passively gain Postgres knowledge now, just by having that on while you're mowing the lawn or doing dishes or whatever.

[00:31:07] - Bertrand

Yeah. And the one done by Nikolai, by Dazarab is very.

[00:31:14] - Ry

His. Are you watching his Postgres marathon? Something marathon where he's posting content every day for a year? He's a madman. That's a lot of work.

[00:31:30] - Bertrand

Exactly. Yes.

[00:31:31] - Ry

But I appreciate that he's doing it.

[00:31:34] - Bertrand

Yeah, that's very good.

[00:31:35] - Ry

Yeah, it's really cool. Well, are you very active online, where people can follow you? And what's your favorite place to interact with others in the community besides the mailing list?

[00:31:52] - Bertrand

Yeah, I am mainly on Twitter, and I think that's it for social media, just Twitter.

[00:32:04] - Ry

Do you think Twitter is better or worse after Elon took over?

[00:32:12] - Bertrand

I think it works, but not as well as before.

[00:32:18] - Ry

Yeah. Okay. He liked the original. Yeah. I mean, the one thing I think I like is that he's, like, playing with it and trying some new stuff. Like the fact that we're posting this podcast to Twitter first; at least it's like the world's changing and we can do that. Otherwise it would have been just, clearly, drop it on YouTube. But, yeah, I kind of like that he's trying some new ideas. Obviously a lot of people are pissed off that you have to pay to get priority, blah, blah, blah. But anyway, cool. Well, I don't know if there's anything else you wanted to chat about, but I enjoyed learning more about this, and I'm going to immediately go try to install pgsentinel and see what it does for us.

[00:33:13] - Bertrand

Yeah, no problem. I can help you with that if you need to.

[00:33:17] - Ry

Yeah.

[00:33:17] - Bertrand

Awesome.

[00:33:18] - Ry

Well, thank you. Thanks for joining and appreciate it.

[00:33:24] - Bertrand

Thank you, Ry. Thank you for everything.

· 11 min read
Binidxaba

An elephant representing pgvector

Image credit: Generated with Bing Image Creator

In databases, indexes are data structures that speed up finding specific values in a table column. The analogous task in vector databases consists of finding the (approximate) nearest-neighbors of a given vector. So, to accomplish this task fast, we can similarly create specialized vector indexes.

But, speeding up a query is not just about blindly creating an index. When deciding whether to create one or more indexes on a table, several factors need to be considered—for example, the size of the tables, whether the table is modified frequently, how the table is used in queries, and so on. Similar considerations apply to vector indexes.

In today's post, let us explore vector indexes and their tradeoffs in the context of Postgres and Pgvector. In particular, let us compare their build time, size, and speed, and, based on that, derive some guidelines to decide which one to choose for a given application.

Indexes in Pgvector

Pgvector is an open-source Postgres extension for similarity search. It allows for exact and approximate nearest-neighbor search. In particular, for ANN it offers two types of indexes: IVFFlat and HNSW. Let us briefly discuss them.

IVFFlat

The IVFFlat (Inverted File with Flat Compression) index works by dividing the vectors in the table into multiple lists. The algorithm calculates a number of centroids and finds the clusters around those centroids. So, there is a list for each centroid, and the elements of these lists are the vectors that make up its corresponding cluster.

When searching for the K nearest vectors, instead of calculating the distance to all vectors, the search space is narrowed to only a subset of the lists, thus reducing the number of computations. Which lists are the candidates? The ones whose centroid is closer to the search vector.

IVFFlat

IVFFlat generates lists based on clusters.

So, we can infer that the effectiveness of the index depends on two parameters: the number/size of the lists and the number of lists that need to be examined during the search (aka probes).

In pgvector, these two parameters are selected in two distinct moments. First, the number of lists is chosen when creating the index, for example:

CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 1000)

Second, the number of lists to be explored is established during execution, e.g.:

SET ivfflat.probes = 32

The pgvector documentation suggests the following:

Choose an appropriate number of lists - a good place to start is rows / 1000 for up to 1M rows and sqrt(rows) for over 1M rows

When querying, specify an appropriate number of probes (higher is better for recall, lower is better for speed) - a good place to start is sqrt(lists)

So, imagine that we have a dataset of 1M vectors. With the parameters above, pgvector would generate 1,000 lists of approximately 1,000 vectors each. When executing a query, it would only probe ~32 of those lists and execute ~32,000 comparisons to find the closest neighbors to a search vector. That is, only about 3.2% of the comparisons of a full scan.
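Putting the two knobs together, here is a minimal sketch of an approximate nearest-neighbor query through such an index; the items table, its dimensionality, and the search vector literal are just for illustration:

SET ivfflat.probes = 32;

SELECT id
FROM items
ORDER BY embedding <=> '[0.5, 0.1, 0.9]'  -- cosine distance to the search vector
LIMIT 10;                                 -- the K in K nearest neighbors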

Of course, you can choose different parameters to achieve the desired recall. More on that later in this post.

HNSW

The Hierarchical Navigable Small Worlds (HNSW) index creates a graph with multiple layers. The nodes in the graph represent vectors, and the links represent distances. Finding the ANN consists of traversing the graph, looking for the shortest distances.

We can think of these layers as different zoom levels of the graph. Zooming out, we see a few nodes (vectors) and links. But as we zoom in, we see more and more nodes and more and more links.

The traversal of the graph resembles a skip list in that if no more candidate nodes are found in the current layer of the graph, the search continues in the next layer (zoom in), where more links should exist.

HNSW

HNSW creates a graph with multiple layers

For HNSW, two tuning parameters are decided at creation time: the maximum number of connections per layer (m) and the size of the dynamic candidate list for constructing the graph (ef_construction):

CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops) WITH (m = 16, ef_construction = 64)

A bigger value for m means that each node has more links in each layer of the graph. A big m hurts query time, since more connections need to be checked, but it also improves recall.

ef_construction is also used during the build phase of the index: it is the size of the dynamic candidate list kept while the graph is being built. Bigger values mean a more thorough (and slower) build, and lead to heavier indexes.

Pgvector does not mention any particular recommendation for HNSW, but the defaults are m=16 and ef_construction=64.

Another parameter, ef_search, determines the size of the dynamic candidate list of vectors. In pgvector, the default is ef_search=40, but can be modified as follows at query time:

SET hnsw.ef_search = 100;

Impact of Approximation on Recall

In the previous sections, I mentioned a couple of times that the build parameters affect recall. What do we mean by that?

Well, recall measures how many of the retrieved neighbors are indeed in the true kNN group. For example, a recall of 1.0 means that all returned neighbors are really the closest, whereas a recall of 0.5 means that only half of them are. Recall is an important metric because it helps measure the approximation error and tune the index parameters.

IVFFlat and HNSW are approximate indexes that work with heuristics. That means that there could be errors in their search results.

Take IVFFlat as an example. When deciding which lists to scan, the decision is taken based on the distance to the centroid of the list. Depending on the data and the tuning parameters, the closest vector to the search vector could correspond to a list that was not selected for probing, thus reducing the accuracy of the result.

One way to mitigate this problem and boost recall is to increase the number of lists to probe. But that, of course, would incur a performance penalty. So, improving the recall is not for free, and careful evaluation and tuning of the parameters is paramount.
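One way to estimate recall on your own data is to compare the index's answer against an exact scan. A minimal sketch, assuming the items table from the examples above and an illustrative search vector; disabling index scans forces pgvector to compute exact distances:

-- Exact top-10 neighbors (sequential scan, no approximation).
BEGIN;
SET LOCAL enable_indexscan = off;
CREATE TEMP TABLE exact_nn AS
  SELECT id FROM items ORDER BY embedding <=> '[0.5, 0.1, 0.9]' LIMIT 10;
COMMIT;

-- Approximate top-10 through the IVFFlat index.
SET ivfflat.probes = 10;
CREATE TEMP TABLE approx_nn AS
  SELECT id FROM items ORDER BY embedding <=> '[0.5, 0.1, 0.9]' LIMIT 10;

-- Recall = fraction of the true neighbors that the index returned.
SELECT count(*)::float / 10 AS recall
FROM exact_nn JOIN approx_nn USING (id);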

IVFFlat vs HNSW in the pgvector arena

Now, let us examine the two types of indexes quantitatively. We will use the ANN benchmark, which we modified to have both algorithms available.

For pgvector, the benchmark creates a table with a vector column, taking the chosen dataset and inserting the items into the table. Then, it builds the selected type of index, and after that, it executes a bunch of queries. The dataset contains both train and test data. The benchmarking program uses the test data to evaluate the recall for each algorithm.

For this comparison, let us use a small dataset of around 1M vectors of 50 dimensions. The test set consists of 10K queries, and we want to obtain the 10 nearest neighbors.

The aspects that we want to evaluate are:

  • Build Time
  • Size
  • Recall
  • Speed

For these experiments, let us ask ourselves: How are the different parameters affected if I want a recall of X? In particular, let us set a recall of 0.998.

In pgvector, such recall is achieved with the following settings:

Index type | Parameters
IVFFlat    | lists = 200, probes = 100
HNSW       | m = 24, ef_construction = 200, ef_search = 800
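Translated into SQL, those settings look roughly like this (you would normally build only one of the two indexes on a given column):

-- IVFFlat tuned for ~0.998 recall on this dataset.
CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 200);
SET ivfflat.probes = 100;

-- HNSW tuned for ~0.998 recall on this dataset.
CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops) WITH (m = 24, ef_construction = 200);
SET hnsw.ef_search = 800;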

Build Time

For the chosen parameters, the IVFFlat index can be created much more quickly (128 seconds) than HNSW (4,065 seconds). HNSW creation is almost 32X slower.

Build time

Size

In terms of index size, IVFFlat is again the winner. For a recall of 0.998, IVFFlat requires around 257MB, whereas HNSW requires about 729MB. HNSW requires 2.8X more space.

Index size

Speed

note

The benchmark uses one thread to execute the vector queries.

It is in speed where HNSW shines. With a recall of 0.998, HNSW can achieve a throughput of 40.5 QPS, whereas IVFFlat can only execute 2.6 QPS. HNSW is 15.5X better in this aspect.

Queries per second

Recall vs Index Updates

Another weakness of the IVFFlat index is that it is not resilient to index updates in terms of recall. The reason for that is that the centroids are not recalculated. So, the different regions in the vector space remain the same if vectors are added or removed. If, after a series of updates, the real centroids (if they were recomputed) are different, the previous mapping would be less effective, leading to a worse recall.

In fact, in psql you will get the following messages if you attempt to create an IVFFlat index when there are only a few rows in the table:

postgres=# CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
CREATE TABLE

postgres=# INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
INSERT 0 2

postgres=# CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);
NOTICE: ivfflat index created with little data
DETAIL: This will cause low recall.
HINT: Drop the index until the table has more data.
CREATE INDEX

postgres=# drop index items_embedding_idx;
DROP INDEX

The solution to this problem would be to recalculate the centroids and the lists... which effectively means rebuilding the index.
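In pgvector that is simply a REINDEX, or a drop and re-create; a minimal sketch using the index name from the psql example above:

-- Recomputes the centroids (and the lists) from the data currently in the table.
REINDEX INDEX items_embedding_idx;

-- Or equivalently:
-- DROP INDEX items_embedding_idx;
-- CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);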

HNSW doesn't show that:

postgres=# CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
CREATE INDEX

To exercise this behavior, I modified the benchmark to set up the database as follows:

  1. Insert 1/100 of the training dataset
  2. Build the index
  3. Insert the rest of the training dataset
  4. Execute the queries using the test dataset

For the chosen parameters, we can see that the recall is more sensitive to index updates for IVFFlat. For HNSW, the change is negligible.

IVFFlat recall

HNSW recall

Picking the right index for your use case

As a quick summary, these were the results that we obtained from our experiments:

                              | IVFFlat     | HNSW
Build Time (in seconds)       | 128         | 4,065
Size (in MB)                  | 257         | 729
Speed (in QPS)                | 2.6         | 40.5
Change in Recall upon updates | Significant | Negligible

With the results above, we can then make the following recommendations:

  1. If you care more about index size, then choose IVFFlat.
  2. If you care more about index build time, then select IVFFlat.
  3. If you care more about speed, then choose HNSW.
  4. If you expect vectors to be added or modified, then select HNSW.

Let us see some examples.

Imagine a case of a database of Constellations for Astronomy. Data updates would be infrequent (inverse of guideline #4), so IVFFlat would be a good candidate. The index would remain of modest size (guideline #1) and give good recall by tuning only two parameters.

Let's take another example. Imagine a system of Facial Recognition. You'd likely want a fast response time (guideline #3) with good accuracy. You may also be OK with the size of the index (inverse of guideline #1). So, HNSW would be the best choice.

The case of an IoT Sensor Data Database where read values keep changing (e.g., temperature, position, etc.) would also be a good candidate for HNSW (guideline #4). IVFFlat could not handle the index changes properly.

Finally, imagine a database of vector embeddings obtained from your company's knowledge base to generate a chatbot. If the knowledge base rarely changes (inverse of guideline #4) and rebuilding the index is acceptable if recall ever degrades (guideline #2), you may choose IVFFlat.

Wrapping up…

In this post, we discussed the two types of indexes currently available in pgvector: IVFFlat and HNSW. We discussed their build parameters and how they affect recall.

With the help of a benchmark, we compared the indexes quantitatively in terms of build time, index size, QPS, and recall. We derived some general guidelines for choosing the appropriate index type based on our results.

I invite everyone to try out the benchmarking program, which can be found here, and the modified version, which is here.

What other elements should we consider when choosing a vector index? Let us know your thoughts at @tembo_io.

Appendix

The experiments in this post were carried out using a machine with the following characteristics:

CPU              | Intel(R) Core(TM) i7-8565U @ 1.80GHz (8th Generation)
Number of Cores  | 4 (8 threads)
Cache            | Level 1: 256KiB write-back, Level 2: 1MiB write-back, Level 3: 8MiB write-back
Memory           | 16 GB SODIMM DDR4 2400MHz
Disk             | 150GB SSD
Operating System | Ubuntu 22.04.3 LTS, Linux 5.15.0-79-generic (x86_64)
Postgres         | 14

· 32 min read

Search is simple in theory. In practice? Anything but. In today’s Episode 8 of Hacking Postgres, Ry sits down with Philippe Noël of ParadeDB to talk about how search is evolving, the influence of AI, and the lessons you'd tell your younger self.

Watch below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Philippe for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.

Transcript

[00:00:12] - Ry

Welcome to Hacking Postgres, an interview podcast with the people working on open source projects around Postgres, which in my opinion is the world's best open source database. I'm Ry Walker, founder of Tembo, a managed Postgres company, and today I have Phil Noël, working on ParadeDB, as my guest. Phil, welcome to the show.

[00:00:37] - Phil

Thanks. Yeah, thanks. Nice to meet you. Thanks for having me.

[00:00:42] - Ry

Great to actually meet you. I'd like to start...maybe you could give us a quick background, like maybe where'd you grow up in the world? And what were you doing before?

[00:00:53] - Phil

Yeah, absolutely. Happy to. So I'm originally from Quebec City, the French part of Canada. I actually grew up in a small town called [inaudible 01:02], which is two hours east. I spent most of my life there, and then eventually I left to go to university. I stayed in Boston, studied computer science and neuroscience, did a few things. I started a browser company before ParadeDB, which I ran for about three years, and through that experience got acquainted with the joys of Postgres, you could say. It sounds somewhat similar to yours, maybe, although not at the same scale. And then my previous co-founder and I started ParadeDB.

[00:01:35] - Ry

Nice. Tell me about the browser company. What was it called, and how far did you get with that project?

[00:01:43] - Phil

Oh, that is both a short and a long story. It was called Wisp. We were building a cloud-based web browser, so the initial idea was to offload heavy processing to the cloud by streaming, which is an age-old idea, but we were young and dumb, so we thought we had invented something. We did that for a while, raised a couple of rounds of funding, grew a team to about 25 people, but we never quite found PMF and eventually pivoted to cybersecurity. We did that for a while; it was a much better market, but not our battle to win. So we closed down the company. Yeah.

[00:02:21] - Ry

Okay. I'm sure we both have a lot of battle stories we could commiserate over. Do you remember when you first started using Postgres? I can't remember. That's why I ask this question, because it kind of just happened at some point in my career. But do you remember specifically when you started with Postgres?

[00:02:45] - Phil

I don't know. Actually. That's a good question. I don't know. As a user? I don't know. As a developer since earlier this year or last year.

[00:02:58] - Ry

Well, cool. Obviously as a user, you've used it for years, I'm sure. Tell me about your Postgres work. What have you built in the past and what are you building now? Or is this all just the first thing?

[00:03:12] - Phil

I guess I would say it's the first major thing, so we can keep it to that, what we're doing now. We're building ParadeDB. ParadeDB is a Postgres database, essentially, or a Postgres extension, where we're integrating native full text search within Postgres, which is something that some people are quick to say already exists, and some people are quick to say the one that exists is very bad, and so they're excited about what we're doing. It depends which camp you fall in. The first thing we released is an extension called pg_bm25, where we're essentially integrating proper full text search within Postgres and then combining this with some of the existing innovations, like pgvector, to build hybrid search. And our goal is to build this Postgres-type database that is sort of the go-to choice for companies where search is critical to the product they're building.
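For reference, the built-in search that "already exists" is Postgres's tsvector/tsquery machinery; here is a minimal sketch of it, with an illustrative articles table, to show what pg_bm25 aims to improve on (this is not ParadeDB's API):

ALTER TABLE articles
  ADD COLUMN body_tsv tsvector
  GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;

CREATE INDEX ON articles USING gin (body_tsv);

-- Rank matching rows; ts_rank is not BM25, which is part of the complaint.
SELECT id, ts_rank(body_tsv, q) AS rank
FROM articles, to_tsquery('english', 'postgres & search') AS q
WHERE body_tsv @@ q
ORDER BY rank DESC
LIMIT 10;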

[00:04:08] - Ry

Nice. Why do you think you're working on this? What led you here?

[00:04:14] - Phil

Yeah, that's a good question, actually. This is a problem we faced ourselves, and that's kind of why we wanted to solve it. After my first company, my co-founder and I were trying to decide what we were going to do next, so we started doing some contracting work left and right. I was living in Paris at the time and working with a lot of French and German companies. And when we were working with this German automaker, they just really needed high quality full text search within Postgres for the product we were building for them, and instead we had to glue Elasticsearch or some vector database on top. And it was just a nightmare. We were very frustrated at how bad it was and decided, you know what, maybe we should just fix it ourselves. And so that kind of led us to today.

[00:05:00] - Ry

Got it. Yeah, it's interesting. There was a famous tweet that I can't remember the exact wording of, but it basically said something to the effect of: hey, tech founders, you'd do well to pivot to building something that would support whatever you're building today. Basically, go one level deeper than your current idea. It's similar to what we did at Astronomer, because we were basically building a competitor to Segment, clickstream data processing. Then we started using Airflow, and then we said, hey, let's just provide Airflow as a service, and made that switch to go down a level, and that was the winning pivot for that company. So it sounds like you've jumped down to more of an infrastructural issue, which I think is good. Well, what challenges? Obviously there's a lot of ground to cover, but what are the biggest challenges you'd say you've faced so far building this thing?

[00:06:12] - Phil

You mean like business wise or technical wise or process wise?

[00:06:16] - Ry

Actually, yeah. I'd love to hear all categories of challenge. Yeah, spill the beans. What's hard about this? I imagine fundraise. I mean, it's a tough time to fundraise, so that's probably not awesome. But yeah, maybe start technically.

[00:06:33] - Phil

Yeah, I can start technically. I think software is the easiest part of building a company, so I would say these are our simplest set of challenges. Not to say it's easy. You know, I was actually talking to Eric; I listened to the Hacking Postgres episode with Eric and I talked to him afterwards. At first we didn't know pgrx was a thing, and we were like, oh my God, how on earth is this even going to be possible? And then we found out pgrx was a thing and we were like, wow, it went from near impossible to quite doable, really. So, on the technical front, I mean, there's just so much you need to do when you're building search, right? We have covered a small portion of the features people expect, and there are so many more that we need to build, so we're trying to prioritize that.

[00:07:25] - Ry

Obviously Elastic is probably the big competitor. Can you estimate what percentage of their functionality you aim to replace? Is it essentially 100% and what's your progress towards that?

[00:07:41] - Phil

Yeah, that's a good question. I mean, our progress is small. It's small enough that you can use it, but Elastic is enormous, right? I don't think we aim to match 100% of their features, at least for transactional data, which is what we need today. I think we actually have a decent bit of the way done, and definitely enough to keep us busy for a couple of months. But I think that's sort of the first step of what we're looking to match. Beyond that, everything Elastic does for observability, that's another story, and that's another milestone that we will see depending on what our customers want.

[00:08:29] - Ry

Obviously I'm building a fresh new tech startup as well, and I understand you got to match the functionality of, I call it the table stakes features. But then you also have to come up with some differentiators or else no one will care. So what are the biggest differentiators you're thinking about with ParadeDB?

[00:08:49] - Phil

Yeah, so that's a good question. I think for ParadeDB right now, the differentiator has been, ironically, I would say the combination of two non-differentiated things, if that makes sense. If you want to do search over transactional data, ParadeDB is sort of the best solution today because it's the only thing that actually truly combines the two. It wasn't done before, and so that's where our initial interest has come from. Typically, when people wanted to do this, they would have some managed Postgres provider, of which there are many, and then they would use Elastic, and then they would need to combine the two and run the two and so on. It can be a ton of work. And so ParadeDB makes that a single system, and especially for small companies, which is where we've been really, it's a lot easier to manage, I would say. And so that really draws them to us. That's sort of our initial point, but we have some other things coming up which hopefully will expand on this initial interest.

[00:09:52] - Ry

Yeah, it seems to me, as I think about building my next search bar in a product, I have to decide: do I want to go what I'd call the old school way, versus vector search? Maybe I can get away with some sort of AI integration for search. Are people asking that question, or do they have the perception that vector search somehow replaces or displaces traditional full text search features?

[00:10:28] - Phil

Yeah, that's a very good question. I've talked to a lot of people who weigh in on both sides of the coin. My personal take is both will remain quite important, just because at the end of the day people will still have data and will still need to be able to find instances of that data. Right. There's one way to find it via similarity, but sometimes you also just want to start from a clean place. So I think both are quite valuable. For most of the people we talk to, hybrid search is very important, the combination of the two, both at big and small companies. And that's one of the reasons why we built ParadeDB the way we did, where pgvector exists and so many others. I would almost say vector search is kind of table stakes now; we don't really have to innovate on that so much ourselves. But traditional search is kind of the piece that's been missing, which is why we released this first extension, pg_bm25, and then we offer the combination of both as well [inaudible 11:29] with another extension called pg_search, which we're going to be publicizing more heavily soon.

[00:11:35] - Ry

Yeah, it seems to me like everything has to have an AI story now, or else perhaps it's outdated. Right? So I think even search, even full text search. I like that the hybrid answer is a good answer. Obviously then you have to figure out how to hybridize it all, and that's not necessarily trivial. But yeah, I'm curious to see what you come up with on this.

[00:12:07] - Phil

Yeah, it takes some effort. It takes some effort. I would say we definitely have an AI story, but our personal take on the whole AI world and fad is that AI is sort of a step function improvement on existing things being done, but you need a foundation of value proposition for customers on the fundamental non-AI part of the product, if that makes sense. And that's kind of how we think about it. There are so many people trying to bring in AI when there was nothing useful to begin with. I really think of AI as taking something that was useful to begin with and making it that much better. And so it plays quite well, actually, with the way we've been going about delivering this product and saying, hey, we'll just give you really good full text search, and obviously so much more can be done. But if that is not useful in the first place in Postgres, then who cares that I can add AI to it.

[00:13:01] - Ry

Yeah, yeah, I agree. I think about it this way: you can't build AI without training data, right? And you can't have training data without a product to collect some base level of data. So basically you have to start without AI, because you don't have data at all. And I think that's a challenge. A lot of founders maybe think they're going to go straight to AI, but there's nothing there; you've got to start basic. Yeah, I agree. What are some big milestones? How many big milestones do you have envisioned for ParadeDB at this moment?

[00:13:44] - Phil

That's also a good question.

[00:13:46] - Ry

I mean you probably have a next one, right? You usually know your next big milestone.

[00:13:50] - Phil

Hopefully you'd have that. Yeah, of course.

[00:13:53] - Ry

And you also tell me what it is. But I'm curious.

[00:13:58] - Phil

Happy to share. I mean, to be honest, our big milestone was to get a thousand stars on GitHub, and then some stranger posted our repo on Hacker News and somehow that happened in a couple of days, which was really exciting and rather unexpected. Now our big milestone is we're about to release our own small version of managed Postgres, ish, specifically for ParadeDB, to test how else we can deliver value to people. Many that have been using the product so far have been using our self-hosted version, and I'm sure, doing Tembo, you're extremely familiar with how people don't want to do this. So that's the next big milestone that's coming, trying to learn to manage this. And then after that we have a couple of... I think after that we'll see the exact number. We're just trying to get more people on it and see what they think, really.

[00:14:53] - Ry

Yeah, just grow users. This is another thing I've been thinking about: we have stacks at Tembo with different use cases for Postgres, and some things ought to be workloads separated from other things. For example, do you keep your analytical workload separate from transactional? That's clear. But search is interesting. Is that just work that the transactional database should be doing, or do you think the workload for search should be isolated from traditional transactional workloads, from an application standpoint?

[00:15:36] - Phil

That's a good question. I think it depends what product you're building, to be honest, because you don't want to slow down transactions. That's something that's very important to us. I think it's possible to integrate it, actually; I think it's up to you. The way we've built ParadeDB is we offer weak consistency over search. What this means is, the way we build search, we have inverted indices that store the tokenized data that you want to search over, but those are weakly consistent. So just like if you use Elastic, for example, with something like ZomboDB, and then you synchronize it, there will be some lag between transactions being updated in your database and eventually being reflected in your search engine. The same thing happens here, but it happens automatically for ParadeDB; instead of sometimes being minutes, here it will be a few seconds at the very most. But it ensures that your transactions are never slowed down, which is quite critical for people. So I think if you build it that way, it's actually quite possible to centralize it and have it be integrated with the actual database. If you don't use a product where search is built that way, then I don't think you want to slow down your transactions, so you have to isolate them.

[00:16:48] - Ry

Yeah, I think a lot of people just do it inside their transactional database because it seems like overkill to spin up a new Postgres, especially for the traditional, shitty full text search that's available. But if I wanted to build really good search into an application, I would go buy Algolia or something like that. And that clearly is a microservice at that point, right? I'm sending data to them to index and they're sending me back results really quickly, and it's not taxing my main database as part of that. So I kind of like thinking of search, let's call it great search, as a microservice candidate, for sure, because the better you make it, the more compute it requires, and the more compute it requires, the more it's competing. Right? So I don't know, I like the idea of taking that worldview, that great search needs to be separated from the transactional or analytical.

[00:18:03] - Phil

Yeah, I think you're right in a lot of ways. You do need to be able to orchestrate it properly at scale. This is definitely something that we will ponder more and more as we have more people using it at greater scale. I do think, even if it needs to be fully separate from what we have done today, one of the great things about managed Postgres companies like ParadeDB or Tembo is the ability to make the actual complexity of the orchestration behind the scenes transparent to the customer.

[00:18:37] - Ry

So, you know, you've been writing code for a number of years now. What would you say your most important lesson has been? Kind of switching from the technical to the interpersonal or just like professional. Do you have any top lesson? If you're going to meet with a young developer, you'd say.

[00:19:00] - Phil

Specifically on writing code? Yeah, I would say only write the code that you need to write. I think that's a big one. I like to think of products and startups now as science experiments rather than works of art. And I think at our previous company, we thought of it as a work of art. We came out of college, we were so incredibly into it and we wanted it to be perfect. We cared so tremendously about what we did that we always went above and beyond, and it always involved writing more code, but it did not always make the product better in a way that meaningfully improved the experience of the user or brought us closer to product market fit. Right. And so if you think of a work of art, it's never finished, it needs to be perfect, versus a science experiment, which is more like: what's the bare minimum I can do to validate or invalidate this hypothesis I have, or deliver value to a customer? So what I wish I could say to my younger self is, if no one has asked for this, I don't think you need to do it.

[00:20:12] - Ry

Yeah, it's a good point. So you're saying be more of a scientist than an artist, perhaps, to some degree, when you're starting on something and it's not validated.

[00:20:26] - Phil

I think a lot of great products are almost like works of art. People were so obsessed over them and you can feel that. So I don't think you want to drop all aesthetic approach or care, but definitely early on, before you sink so much time into it, you just want to get feedback quickly. Right. And that involves only writing the code you need to write. Yeah.

[00:20:49] - Ry

I'll tell you a quick story. When we started Astronomer, I had a good friend, Greg Neiheisel. He was CTO and basically my partner in crime, and we very much mind melded on a lot of things. But our attitude, following your principle, was that what we were really building was this beautiful lattice of glue, and everything inside was open source. For example, we used Google's Material Design, which had just come out, and we built a UI adhering to it perfectly, and it ended up being a beautiful UI because, guess what, those guys built a great framework and we had no designer on the team. Google was our designer. We would joke about it all the time, and we actually chose a color scheme for the company that was kind of Googley, with the same four main colors. So it was really easy to build something great. The first version of our product was entirely open source components just glued together in a unique way, and that was our art. It was this inner skeleton of orchestration, like I said, a lattice of glue holding all these things together. And it was a very successful product. How you arrange open source components is art. It's a creative endeavor, I think, and for a brand new product, it's enough art. You don't have to go and reinvent, for example, the Airflow UI during that really hard early project, and you don't have to go reinvent the Postgres UI. Maybe you want to eventually, but let's handle the first things first. So did you think of a lesson that was non-technical that you learned that you would tell your younger self, or not really?

[00:22:49] - Phil

Oh, I have so many. I spent a lot of time thinking after our previous company didn't work out. This is a broad question, I can go in every direction. Let me think which one is the best to share. Okay, one interesting lesson, and I'll focus on the youth angle you mentioned. When we were building our first company, our first product, we felt very righteous. We were doing something objectively, technically difficult, and a lot of people had tried before, and we were like, they must have done it wrong. Who were they, people with more experience and context than us, to know how to do this really well? And so we started everything from the premise that we were correct. Right. And then we strived to further validate the fact that we were correct. And I think this leads to a lot of cognitive dissonance when you turn out to be wrong, because you had this idea of what you were thinking, and it's now being challenged. Versus now, the way we think about it.

[00:24:07] - Phil

And I mean this from a personal standpoint, from the way we talk and hire people, from the way we talk to customers, from the way we try to build product now, we just think we're wrong. I always assume I'm wrong in what I think, and then I strive to prove myself correct. Right. And if we do manage to prove ourselves to be correct, then it's like a very positive moment. You're like, that's amazing. I found this great surprise that I wasn't fully convinced of before. And if you turn out to be wrong, you're like, well, kind of thought I could be wrong all along, not a big deal, and then sort of drop this and move to something else. And I think it's a mental framework that has served us a lot better in everything that we do now.

[00:24:45] - Ry

Yeah, I love that.

[00:24:47] - Phil

Yeah.

[00:24:48] - Ry

I think it's a tricky catch 22. Like, you have to, as a founder, live in a reality distortion field just to change the world. You have to have a crazy idea that most people disagree with. Those are the best ideas. Unfortunately, they're also the worst ideas.

[00:25:05] - Phil

Exactly.

[00:25:06] - Ry

Just accept that, hey, you might be wrong. At the same time, you have to also fight for your idea, even if you hear negatives. It's like, oh man, I built all this infrastructure for it to be righteous, and now it's hard to give up on it. I think it's wrong to give up on it a lot of times, because, again, some of the best ideas are ones almost everyone would disagree with at the start. I agree it's a much better mindset to understand that it's lottery ticket sort of odds. It's not like, hey, if you were a good student, you know you can get an A in a class, right? It's just a matter of doing the right amount of work, and you'll get an A. But it doesn't work that way with startups. Exactly.

[00:25:59] - Phil

Yeah.

[00:26:00] - Ry

Cool. So, well, I'm curious, like, are you paying attention to the Postgres ecosystem and other developments maybe that you're excited about that you've seen since you started working on.

[00:26:13] - Phil

I have. I mean, there's just so much that's happening in Postgres. Obviously I'm pretty excited about Tembo, I've been following you guys' journey. Just recently, and I forget the name of the developer, which is very embarrassing, but someone released PG branching, which maybe you've seen. So I was talking to Nikita, the CEO of Neon, right? And he was raving about their branching system, which I'm sure is world class and amazing. But it turns out other people might be able to do this someday, which is good for the community, perhaps. So I think that's quite an exciting extension that hasn't been made by us, for example, that we're very excited about. And there's many others that I could list.

[00:26:56] - Ry

Yeah, it's interesting. My first reaction: I saw AWS announce, oh, we have active-active replication available on RDS, it's a new extension. And I responded on Twitter, is it going to be open source? And crickets. So it makes me feel like we'll study that extension, and the community needs to just build the open source version of it if they won't do it. I'm not salty about how the big clouds operate, extracting value from open source, because I'm extracting value from open source, and you are, too. We all are, all the way down. Like I said, it's value extraction all the way down to Linux. But obviously their goal is to build proprietary alternatives, and I really wish there was a Postgres company out there that was big and was worrying about the whole open source ecosystem. That's what we're trying to do. Yeah, there's so much going on. I don't know if it's because I'm looking at it now or if things have changed in 2023 versus 2022. It's great that it's happening during what I would consider a lull in the venture ecosystem as well. So I think the stuff that's happening now will be strong. They always say the startups that start during the slow parts of venture usually end up being more successful than the ones at the hype cycle. So you and I are both lucky we're starting here, even though it might not feel that way sometimes.

[00:28:39] - Phil

I think it does feel that way though. We raised our seed round and things went quite well. It was a challenging raise, but challenging because people were very critical about what we're doing, and I think that's good, right? With the first company, we raised money on a terrible idea, and we thought raising money was the good part. But actually, if you have a terrible idea and people convince you of that early on, they're doing you a favor, right? So yeah, I think it's good. I'm very bullish on the companies that have been founded this year and that will be founded next year and so on, because I think you're right, overall they typically turn out to be quite good. Yeah.

[00:29:22] - Ry

So do you have a couple of favorite people that you've met in Postgres Land? Give them a shout out if you do.

[00:29:29] - Phil

Yes, sure. I mean, I met Eric, who I think will take the prize for my favorite. He's very humble about it. He keeps saying TCDI needed pgrx, but I think they really took it beyond what was needed, for the benefit of the community. So he's definitely gone above and beyond, and he continues to go above and beyond. Another one, maybe you know him or not, is the writer of one of the Postgres internals books, Hironobu, a Japanese man who lives in Europe now. I haven't engaged with him very much, but he's made this resource called, I think, InterDB.jp or something like that. I could send you the link if you're curious. It goes into Postgres internals in great detail, and it's been a good resource for us. So shout out to that guy.

[00:30:23] - Ry

Yeah, I definitely read the website, but I haven't spoken to the man.

[00:30:29] - Phil

Maybe we should try to email him a few times. Yeah, I tried to convince him to join us, but it was too difficult to sell.

[00:30:37] - Ry

Yeah, maybe. I mean, I tried to convince Eric Ridge to join Tembo early on, and he was too hard a sell.

[00:30:43] - Phil

I'm sure you did. Of course, I would have loved to. The naive part of me was like, I can't wait to convince Eric to join Parade. And then I talked to him and realized he basically is the co-founder and CTO of TCDI. And I was like, okay, this isn't happening.

[00:31:01] - Ry

Yeah, I was probably the 25th person to go after him. And you were probably the 27th person to go after him. Whatever.

[00:31:10] - Phil

Exactly.

[00:31:11] - Ry

Yeah, I think it's great. I'm kind of excited that these people, I'll call them the gods of Postgres, are kind of locked up on Mount Olympus. We don't get to see them or talk to them very much. That's exciting to me, to know that someday we'll get there. Someday, if our companies are strong enough, maybe we can get to work with those people. But it's great to be in an ecosystem where there is a tier of people that are just amazing. I was talking to Samay, who's the CTO here at Tembo, and he did a quick analysis and found that for most people, becoming a committer at Postgres takes around seven or eight years of contributing before you get the commit bit, which is like, wow, that is amazing. If you think of venture backed startups, it's like two startups' worth of effort, sticking around past your full vesting, before you get to be one. And it has nothing to do with the company that you're working for; you're just going to dedicate yourself to an open source project for the core of your professional life. Because once you get there, you probably want to stick around for another eight years, I'm sure.

[00:32:27] - Ry

Yeah, it's pretty cool.

[00:32:28] - Phil

Yeah. But these people are great, and I think that's one of the things that's so great about the Postgres community. Even though Eric and so on are amazing people doing great things and they're very busy, they are quite accessible, really. Eric is very accessible in the pgrx Discord and so on. So I do feel like by building open source, we still get to work with them, even if not in the same organization. And some other people I forgot to shout out, who perhaps deserve a lot of shout out, are Umur and Ozgun from Citus, right? They were truly visionaries of the Postgres world, building extensions back when people would criticize Postgres as a toy database.

[00:33:17] - Ry

Yeah, we're definitely standing on their shoulders. They did a lot of hard work early on, and it's great. If you had a magic wand and you could add any feature to Postgres tomorrow and it existed, is there anything that you would use that wand on? Do you have any things you wish Postgres was that it isn't?

[00:33:40] - Phil

That's very interesting. I think the answer would be better distributed support. What Citus has done is amazing, but the whole reason Yugabyte exists today, for example, is because there's only so much you can do as an extension, like Citus did, right? Yugabyte is basically trying to be the Postgres equivalent of CockroachDB. And I think if that was baked natively into Postgres, it would be good for a lot of people that are building on Postgres, ourselves included.

[00:34:13] - Ry

Yeah, that's a good one. Do you listen to podcasts very much?

[00:34:19] - Phil

Not very much, not very much. Only yours.

[00:34:22] - Ry

Okay, good, thank you. I listen to PostgresFM all the time; I'm a huge fan of those guys. I do listen to a lot of podcasts. I was just going to ask you what your favorites are, but you already said. All right, well, cool. It was great talking to you. Where can listeners find you online? Maybe just tell us about yourself, and then the ParadeDB URL and so on.

[00:34:49] - Phil

Yeah, so you can find us on paradedb.com, or on GitHub it's ParadeDB as well. It's open source. We welcome contributions and users and so on, and of course we're very responsive. As for me specifically, I think you're going to link my Twitter in the bio, but my Twitter is PhilippeMNoel. It's basically my handle across every social media; I feel like when you can find one, you sort of keep it. There's so many people now on the internet. So that's my handle everywhere. People can find me there, and I love chatting.

[00:35:20] - Ry

Awesome. Well, thanks for joining. I'd love to have you on again in the future. Again, it's super early in this podcast's life, but we'll track what's going on with ParadeDB, and if there's more stuff to talk about, we'll get back together.

[00:35:36] - Phil

Of course. Yeah, I would love to be there. Thank you for having me.

· 6 min read
Adam Hendel

image

Your app needs a message queue. Simple enough—until you try to do it, anyway.

Go set it up on Kafka? Sure...but now you have a Kafka cluster to manage. Redis could work, but now you're just managing Redis instances instead. SQS? That means you have to reconfigure your application to talk to AWS, and you also get an extra external bill as icing on the cake. Let's build it on Postgres! However, if you follow most blogs and guides, you'll probably end up building an agent, watcher process, or background worker to make sure the queue stays healthy. It's better, but if the background worker fails, you could end up with bigger problems.

Fortunately, there's a better way.

By designing with a visibility timeout, we remove the need for external processes for queue management. PGMQ is a Postgres extension built around exactly this sort of self-regulating queue. Today we're going to combine PGMQ with a pair of core Postgres features, FOR UPDATE and SKIP LOCKED, to cover all the needs of our message queue. FOR UPDATE helps ensure that just a single consumer receives a message in the queue. And SKIP LOCKED is required if you want to have multiple consumers on the same queue; without it, each consumer would wait for the others to remove their locks. Before we put all the pieces together, let's do a quick refresher on how each of these works.

FOR UPDATE: Ensuring Exclusive Access

Imagine a crowded store with a single cashier. As customers approach the counter, the cashier must serve them one at a time, ensuring each transaction is complete before moving on to the next. Similarly, when working with a queue, it's essential to ensure that only one process or transaction works with a particular row of data at a time. This is where FOR UPDATE comes into play.

The FOR UPDATE clause is used to lock selected rows, preventing other transactions from modifying or locking them until the current transaction ends. This ensures that once a task (or row in our queue) is picked up, it isn't grabbed by another worker or process simultaneously. It guarantees exclusive access, much like how our lone cashier attends to a single customer at a time.

SKIP LOCKED: Keeping the Line Moving

Back to our store analogy, if a customer isn't ready to check out and holds up the line, it can cause unnecessary delays for the other customers. What if, instead, those who aren't ready simply step aside, allowing others to continue? That would undoubtedly speed up the checkout process. This is the concept behind SKIP LOCKED.

When combined with FOR UPDATE, the SKIP LOCKED clause ensures that if a row is locked by another transaction, it gets skipped over, and the next available row is selected. This way, other workers or processes don't get stuck waiting for a locked row to be released; they simply move on to the next available task.

By using these two clauses together, you can ensure that tasks in your queue are processed smoothly and efficiently, with each task getting picked up by a single worker and other workers moving seamlessly to the next available task.
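
To make that concrete, here's a minimal sketch of a hand-rolled queue read that uses both clauses together. It assumes a hypothetical table named tasks with id, payload, and started_at columns; this is not a table PGMQ creates, just an illustration of the pattern PGMQ applies under the hood.

-- claim one unclaimed task; concurrent workers skip rows that are already locked
UPDATE tasks
SET started_at = now()
WHERE id = (
    SELECT id
    FROM tasks
    WHERE started_at IS NULL
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;

Each worker running this statement concurrently claims a different row, or nothing at all, without blocking on its peers.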

How can we take this one step further, though? Many queue implementations have us using FOR UPDATE to mark a message as “in progress”, which is great. However, that typically requires a service external to Postgres which monitors the queue and checks for messages that have been “in progress” for too long.

Self-Regulating Queues in PostgreSQL

If you're using FOR UPDATE SKIP LOCKED and setting messages to “in progress”, you most likely need a process to watch your queue and check for messages that have been processing for too long. Rather than running a background worker or external process, PGMQ implements a Visibility Timeout (VT). A VT is a designated period during which a message, once read from the queue, is invisible to other consumers or workers. This ensures that once a worker picks up a task, other workers won't attempt to process the same task for the duration of that timeout. If the original worker fails to complete the task within that window, the task simply becomes visible again once the VT has elapsed, which means PGMQ still provides an at-least-once delivery guarantee. The task is then ready for reprocessing by the same or a different worker.

In essence, the visibility timeout provides a grace period for tasks to be processed. It becomes a buffer against potential failures, ensuring that if a task isn’t completed due to any unforeseen reasons, it doesn’t get lost but rather re-enters the queue. Without something like the VT, queue systems will need to run a process to watch the queue. If that watcher crashes, or loses connection, then messages will stay unavailable until the watcher is recovered.

Queue regulation isn’t just for error modes

A common use case is a consumer that needs to process a long-running, I/O-bound task. Let's say there is a message with a task to create some infrastructure in your favorite cloud provider, e.g. create an EC2 instance if it doesn't already exist. That could take minutes to start up. Your consumer can submit the request to provision EC2, and then instead of waiting for it to finish, it can set the VT on that message to 60 seconds from now and move on to read the next message. In 60 seconds, the message will become visible again and can be picked up. That can look something like the following pseudo-code:

-- read a message, make it invisible for 30 seconds
select pgmq.read('task_queue', 30, 1);

-- ...check if the EC2 instance already exists
-- ...request to create the EC2 instance if it does not exist

-- if the instance is not ready yet, set the message VT to 60 seconds from now
select pgmq.set_vt('task_queue', <msg_id>, 60);

-- the consumer archives or deletes the message when it is finished with its job
select pgmq.archive('task_queue', <msg_id>);

With PGMQ's design, messages do not leave the queue until they are explicitly archived or deleted, so you could think of setting the VT directly as “returning the message to the queue”, even though it technically never left the queue. Rather, it is returned to a state where it can be read by other consumers.
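
For reference, the basic lifecycle from the producer and consumer sides looks roughly like the sketch below. This uses PGMQ's SQL API; exact function signatures can vary slightly between PGMQ versions, so treat it as illustrative rather than definitive.

-- install the extension and create a queue
create extension if not exists pgmq;
select pgmq.create('task_queue');

-- producer: enqueue a JSON message
select pgmq.send('task_queue', '{"task": "provision_ec2", "region": "us-east-1"}');

-- consumer: read one message and hide it from other consumers for 30 seconds
select * from pgmq.read('task_queue', 30, 1);

-- consumer: archive (or delete) the message once the work is done
select pgmq.archive('task_queue', <msg_id>);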

Choose the simplest queue architecture with PGMQ

Use FOR UPDATE to ensure messages are only read by one consumer at a time. Use SKIP LOCKED so that you can have multiple workers processing messages concurrently. Finally, implement a visibility timeout so that you no longer need to rely on an external process to handle messages that have failed to process. Install the open-source PGMQ extension for Postgres or try it on Tembo Cloud today and immediately benefit from all of these design decisions.

· 25 min read

Postgres for everything? How about building a new cloud provider? In episode 7 of Hacking Postgres, Ry talks with Burak Yucesoy of Ubicloud about new clouds, getting attached to extensions, and the future potential of Postgres.

Watch below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Burak for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.

Transcript

[00:00:12] - Ry

Hello. I'm Ry Walker, founder of Tembo, a managed Postgres company. And today I have Burak from Ubicloud. How do you say UB? Ubi.

[00:00:23] - Burak

Ubicloud.

[00:00:24] - Ry

Yeah, UB. Welcome to the podcast Burak.

[00:00:33] - Burak

Yeah, thanks a lot for having me here.

[00:00:36] - Ry

Yeah, well I'd love to start with giving us a quick background. I'm curious where'd you grow up and what were you doing before?

[00:00:48] - Burak

Well, hello everyone, this is Burak and I work as a software developer. In reverse chronological order I worked at Microsoft, Citus Data and SAP, on distributed databases at all three, and currently I work at a startup called Ubicloud. We are basically building a new cloud provider, and the primary thing we do differently than other providers is that all the code is open. You can think of it as an open alternative to existing cloud providers. You can go to GitHub, check out the code, set it up on a bare metal server and then you can have your own cloud, or you can use our managed offering of course. So yeah, this is a brief history of me working professionally.

[00:01:46] - Ry

Yeah, nice. So you're essentially open sourcing what AWS and GCP and Azure have done.

[00:01:55] - Burak

Yeah, definitely. I see it as: what Linux is to proprietary operating systems is what we are doing for cloud providers.

[00:02:07] - Ry

Yeah, and you only have to build 300 different services on top of it. Right, but luckily a lot of those are open source too.

[00:02:15] - Burak

Kind of. Well, I guess our current plan is not building all 200-something services, because most of the time people use only ten, at most 20. If you implement 20 of them, I guess you cover 80% of the use cases. So that's our initial plan, but we never know what the future will show.

[00:02:42] - Ry

Yeah, I think it's a great idea. I'm excited to watch it evolve and hopefully partner with you guys at some point.

[00:02:50] - Burak

Yeah, that would be awesome.

[00:02:52] - Ry

Do you remember when you first started using Postgres?

[00:02:57] - Burak

Yeah, I do. Well, to be honest, when I started programming I started as a web developer, and at that time the LAMP stack was very common, so my first database was MySQL. But then I started working at Citus Data, which is where I started working on the development side of Postgres. For people who don't know Citus, Citus Data was the company behind many popular extensions such as Citus, pg_cron or PostgreSQL HLL. So when I first joined Citus Data, I initially worked on building the Citus extension, and while doing that you need to dig into the Postgres code; first you need to understand it, then you build your extension on top of it. And then we built our own managed service, so I switched to that team to build a Postgres managed service. Some of our customers were heavily using the PostgreSQL HLL extension, and at that time the original authors of that extension went through an acquisition process and didn't have enough time on their hands to maintain it. And well, we knew the people, and at that time the PostgreSQL community was much smaller.

[00:04:37] - Burak

So we just called them and said, hey, we want to maintain this extension, what do you think? And they were pretty happy to find a new maintainer. So, long story short, I found myself as the maintainer of the PostgreSQL HLL extension, and then Citus Data got acquired by Microsoft, and well, I continued my Postgres journey at Microsoft, where we also built a managed service. Yeah, I guess that's how I started with Postgres development. Well, not just started; that's almost the whole journey.

[00:05:18] - Ry

Yeah. And are you taking a little bit of a break from it right now or are you working on Postgres stuff over at Ubicloud?

[00:05:31] - Burak

Yeah, well, first of all, we use Postgres in Ubicloud, and most of my professional community, like my friends, are from the Postgres community, so there's no way I can leave that community at all. But as a cloud provider, Ubicloud also needs to offer a Postgres service. So we are planning some things; it's not very well defined yet, but I think eventually we will have something for Postgres as well.

[00:06:12] - Ry

Out of curiosity, and I'm sure this is a hotly debated topic, but Kubernetes or no Kubernetes inside of Ubicloud?

[00:06:22] - Burak

Well, in our case right now, no Kubernetes. We have a pretty simple control plane, which I think you could move to Kubernetes if you wanted, but right now we don't use it. Not that we have anything against Kubernetes; it's just that right now we don't need that complexity, I believe.

[00:06:50] - Ry

Yeah, well, and I imagine you'll have managed Kubernetes. You'd have to have that. That'll be one of your first 20, most likely.

[00:06:57] - Burak

Yeah, definitely. Because managed Kubernetes is one of the most demanded products, so it needs to be one of the first.

[00:07:11] - Ry

Well, so again, you were working on Citus... When you started working on it, was it an extension, or was it, I don't know the full history, was it a fork at any point that then became an extension?

[00:07:25] - Burak

Yeah, at the beginning it was a fork, and just before I joined the team it became an extension. So basically Citus became an extension and became open source at the same time. And I think I joined the Citus team about one month after that.

[00:07:45] - Ry

Got it. So you were never part of that previous era. Were a lot of the hard problems already solved, would you say, with Citus? I just did a talk with the PostGIS team and they said early on is where they solved most of the problems, and it was more gradual after that. Is that the case with Citus too, or was there a big project that happened after you joined?

[00:08:19] - Burak

Well, I think most of the difficult problems were already solved. And to be honest, I think one of the problems with extension development is that there isn't good documentation about it. Postgres has pretty good documentation for user facing features, but if you want to develop, there aren't that many resources, so you usually need to read lots of Postgres code. To be honest, I think the Postgres code is pretty readable for a project of its size, that big and that old, so that's a huge plus. But still, you need to read lots of code to understand what you need to do. In our case, thankfully, Citus hired one of the most prominent contributors to Postgres, Andres Freund, and he primarily led the effort to make Citus an extension. I think at that time the Postgres extension framework also didn't have some of the features we needed, so we had to do some hacky workarounds. But eventually the Postgres extension framework improved and we had the chance to remove those hacky workarounds.

[00:09:48] - Ry

Yeah, that's great. It sounds like that's happening now with lots of things; a lot of the innovation in Postgres happens as a fork with the hope that in a version or two it can become an extension, and then maybe a couple of versions after that it becomes a less hacky extension. Right? You can streamline it, but it's a four year journey or so. Tell me about PostgreSQL HLL. What does HLL stand for?

[00:10:19] - Burak

Yeah, well, HLL stands for HyperLogLog, and it is an extension for cardinality estimation, which is a fancy way of saying doing count distinct, but approximately. Let me first explain why approximately. The reason is that doing count distinct as an exact number is pretty difficult. Well, not difficult, but maybe infeasible. If your data size is small, that's okay. But if you have lots of data, the usual way is keeping a hash map or hash set: every time you see an item you put it in the hash set, and at the end you count the number of items in it. But if you have lots of data, that becomes infeasible. If you have a distributed system, like if you are doing count distinct on two different nodes, that becomes even more difficult. Because let's say you bite the bullet and calculate the count distinct on one node and on the other node; it's not possible to merge the results, because there might be common elements, so you cannot just sum them up. So what HyperLogLog does is use an approximation algorithm (I can go into the detail of it as well) to keep an internal representation of the number of unique elements, which is both memory efficient compared to a hash map and also easy to merge.

[00:12:04] - Burak

So it allows you to do parallel computing. And the only gotcha is it is not an exact number, it's an approximation, but it turns out that we don't need an exact number most of the time. For example, especially in analytical use cases, let's say you want to count the number of unique users that visit your website. It doesn't matter if they are 4 million or 4.1 million; you want a ballpark number. And also the good thing is the error rate of HyperLogLog is quite small. It is usually around 2%, and you can make it even smaller, more accurate, if you give it a bit more memory. This HyperLogLog algorithm is just out there, and what PostgreSQL HLL does is implement this algorithm for PostgreSQL.

[00:13:09] - Ry

So how much of a resource reduction would you estimate you get by using an approximation? You lose a percentage or two of accuracy, but how much less compute is required?

[00:13:24] - Burak

Well, usually it's about 1.5 KB. By default, the HyperLogLog data structure takes about 1.5 KB of memory.

[00:13:38] - Ry

Orders of magnitude smaller.

[00:13:40] - Burak

Yeah, actually the "log log" part of the name comes from that. So if you are dealing with 32 bit integers, the count can go up to two to the 32. You take the log of that, it is 32. You take another log of that and you get five. So you need five bits of memory to be able to store one bucket. And then what HyperLogLog does is keep multiple buckets to increase the accuracy. So at the end it ends up at about 1.5 kilobytes.

[00:14:17] - Ry

Got it. So how involved were you in that extension? Did you start it or did you inherit it?

[00:14:25] - Burak

I inherited it. Another startup called Aggregate Knowledge actually built that extension. Then I think they got acquired by Neustar, and at that time the project was not maintained frequently. There were some fixes we needed to get merged in, and our customers were also using it. So we contacted the original authors and said, hey, we want to maintain this. And they were happy to hand over the maintainership to us. After that we did bug fixes, we did regular releases, and I presented a few conference talks about HyperLogLog at PGConf EU and PGConf US. Yeah, that's the overall story.

[00:15:24] - Ry

I'm curious, have you been able to disconnect from it? I imagine it's easier to disconnect from Citus as an extension after leaving Microsoft, but what about disconnecting from this extension, PostgreSQL HLL? Are you still kind of watching it, because you have that knowledge?

[00:15:47] - Burak

Yeah, I have a little bit of an emotional bond to that extension. For example, there were a few improvements that I wanted to make, but I didn't have time while working at Microsoft, and it just itches me from time to time. And it is open source, so I guess at some point in the near future I'll open a pull request and hopefully it will get merged. I hope.

[00:16:19] - Ry

Yeah, but the Microsoft team controls that extension as it sits.

[00:16:25] - Burak

Yeah, right now the Microsoft team controls it. They continue to do regular releases, and every time a new PostgreSQL version comes out they ensure that it works well and they update and release new packages. If there's a bug report, they are the ones who fix it.

[00:16:45] - Ry

How many extensions, I mean, obviously there's the Citus extension, this one. How many total extensions would you say like Microsoft has in Postgres? I know maybe it's hard to nail down a number, but there are a bunch of others too.

[00:17:01] - Burak

There's Citus, there's PostgreSQL HLL. There is pg_cron, which is also quite popular; it is a cron-based job scheduler.

[00:17:10] - Ry

Yeah, I just started using that.

[00:17:12] - Burak

Yeah, that's pretty cool. It is primarily developed by Marco Slot. There is also an extension to ensure that Postgres is well integrated with Azure, called PG Azure, I think; it's not open source, but if you start a Postgres instance on Azure and check the extensions, you'll see it. There is also TopN, which is another approximation based extension. It gives you the top N elements of a sorted list, and if you think about it, that is also an expensive operation on a big data set, because you need to sort everything first and take the top N. There are more optimized algorithms where you keep a heap, which is more memory efficient, but you still need to go over lots of data. At Citus we also developed this TopN extension. Actually, if you look at it, the general theme is about being able to do things at scale, because Postgres is already pretty good at doing things on a single node. Citus' primary use case was to make a distributed PostgreSQL, and we developed a few extensions to take operations that are not feasible to do at scale and find ways to make them more feasible.

[00:19:06] - Ry

So I'm curious, are there some big milestones in Postgres that you're looking forward to?

[00:19:14] - Burak

Yeah, well, actually I was looking forward to the Postgres 16 release mostly because of the logical replication improvements. There are a few, and I think there will be more upcoming, because I think logical replication is a very powerful concept, but it's still a bit cumbersome to use with PostgreSQL, especially when there's a change in the data, like when you run a DDL command or when you have a failover. So there are a few gotchas, and I think with Postgres 16 some of these are less problematic, and I hope in the upcoming versions it will be even easier to use. When you have very solid logical replication, it opens up lots of cool features. One thing that I am personally invested in is being able to do zero downtime failovers, and when I say zero downtime, I mean real zero downtime. Not just 1 second of downtime, but real zero downtime. And I think solid logical replication would open that up.

[00:20:45] - Ry

Yeah, I agree. It's one thing to do that with a single node versus a Citus cluster, right? Zero downtime gets complicated the more complicated your deployment is. But on a single deployment, I have this idea where we could basically build scaffolding around the existing thing, whatever it takes, to get another one working. In other words, temporarily have replication happening, and then somehow seamlessly up above you have to have some fulcrum you can switch over. It's a complicated thing to figure out, but I think it'd be great for a managed service provider to basically build up temporary infrastructure that helps the zero downtime thing happen. If that's true, then you can restart for all kinds of reasons with impunity; auto scaling is possible, stuff like that.

[00:21:50] - Burak

Yeah, and one reason I'm especially interested in this is that in our previous managed service we did lots of operations via failover. For example, if you want to scale up, what we would do is create another server with a higher number of cores and then do a failover. Or we might need to do maintenance, maybe because we found a security vulnerability or there is regular maintenance; instead of going and patching the existing server, we would create a new one and do a failover to that one. And each of those failovers takes some amount of downtime, which is obviously not preferable and not a good experience for customers. But if they were virtually free from the perspective of the customer, if they don't even notice that there's a failover, then you can do as many failovers as you want.

[00:22:55] - Ry

You share the same vision as I do there. I think it would be exciting to get there. So I'm curious though, if you had a magic wand and this weekend something new would be in Postgres core, what would it be? What would you use that wand on?

[00:23:11] - Burak

Yeah, that's difficult. I want lots of things, so picking one is difficult, but I guess one thing that bothers me is that for high availability and backups you always need to depend on a third party tool. I would really love that to be solved in Postgres. For example, Redis comes with very good default settings that you can use for high availability. For Postgres there are solutions, good solutions, but I would love them to be in core.

[00:24:03] - Ry

Yeah, I get that for sure. It's tricky when you have to buy a product or let's say you adopt a product and you immediately have to adopt x additional products right off the bat and that's not a good feeling. It feels complex, right? Yeah, that's cool, I would say. Do you have any opinions about Postgres that almost nobody agrees with you about? Are you a contrarian in any area that you can think of?

[00:24:35] - Burak

Let me see. I don't think so. Well.

[00:24:45] - Ry

Maybe that's one of the areas, that you think backup should be inside?

[00:24:48] - Burak

There is that. But I know there are people who also share that opinion. Yeah, I'm not sure it's okay.

[00:25:04] - Ry

Yeah, I was just curious. It's always fun to talk about those things, if they exist. Give you a soapbox.

[00:25:14] - Burak

Actually, there is one thing, though I think even Tembo kind of agrees with me on that one: I think many different use cases can be implemented in Postgres. So instead of having a lot of specialized databases, you can have Postgres, and with some configuration and maybe a few extensions you can implement Kafka in Postgres, you can implement Redis, you can implement NoSQL in Postgres. I guess if I had said this maybe two or three years ago, I probably would have gotten more raised eyebrows. But now I think more people are starting to think like that. I think Tembo is also thinking along similar lines, right?

[00:26:14] - Ry

Yeah, I think there's openness to it. I talk to a lot of people, and the question is how many people are on board with Postgres for everything, or what percentage of developers are actually on that bandwagon. Obviously on Twitter it just takes one person, and it's a big community too, especially if it's a contrarian view. But I'm kind of curious. One of the things I want to find out over the next few months is, if this Postgres for Everything was real, what percentage of developers would actually use it versus still saying "Ehh." And I think it all comes down to this: yeah, you can do Kafka-like work on Postgres right now, but it doesn't feel as clean as buying Confluent. That seems like a very safe decision, and doing something like Kafka on Postgres seems like you're just stringing together a Rube Goldberg machine; it doesn't feel solid. But the question is, if those solutions were solid, would people use them? And that's our big thesis: that if they were solid, people would use them. But I just don't know what percentage of people would do that.

[00:27:38] - Ry

A big percentage or a small percentage?

[00:27:40] - Burak

Yeah, I'm not sure. But there is one interesting thing that comes to mind. Today Postgres supports the JSONB type, but it was not like that all the time. In the earlier days, if you wanted to store key-value or JSON-like data, we had an extension called hstore, which we still have, but it's not as commonly used as before. And what hstore did, and this is one of the very powerful parts of the PostgreSQL extension framework, is define the data type, and on top of it define how you can hash and compare that data type. When you do this in Postgres, Postgres allows you to create an index on that data type. So suddenly you are not only able to store the data, you can index it. And at that time this was kind of rare, even for NoSQL databases. So I think it's a bit funny, and it also shows the power of the PostgreSQL extension framework: suddenly you are able to do what a NoSQL database does, but better. I mean, not in all perspectives, connection scaling was still a problem, but being able to index NoSQL data, being able to index JSON, was a rare feature even for NoSQL databases.

[00:29:19] - Burak

But you had it in Postgres. I don't know, maybe for some of these other databases or other use cases, Postgres might have something unexpected that would make it better.

[00:29:36] - Ry

An unexpected advantage. Yeah, it's the same way with pgvector right now; the great thing about doing vector embeddings inside of Postgres is that you don't have to move the data out of Postgres as part of the process. Right. You can just add a column and keep it where it is, whereas with anybody else, if it's an external vector database that's specific for that use case, you have to have data pipelines and all that kind of machinery. To me, one of the big benefits of keeping it all in Postgres is less data movement, and less data movement means data delays and all that kind of stuff go away. So yeah, I agree with you, there are a lot of unexpected benefits to keeping things together.

[00:30:25] - Burak

Yeah, I guess since Postgres provides pretty strong ACID guarantees, it allows you to build things on top of that. And when you have ACID, then you can be much more free to develop complex features. Because what I realize is that while developing software, most of the time as a developer I try to ensure that, hey, what I'm doing is atomic, or what I'm doing is isolated, so it doesn't cause any problems if something comes in, in an unexpected order. But you have the chance to delegate all this to Postgres, and I think that gives you quite a bit of advantage.

[00:31:21] - Ry

Well, one of the things I love too is that the Postgres core team is quite separated from the commercial products, so it seems like a very stable chassis to build these things on top of. You really can't say that if it's more of a captive open source project, say, like what Kafka is to Confluent. They can move Kafka quickly if they need to, to help their commercial product, but that could introduce more instability. I just don't see the Postgres team doing anything very risky, which to me is a great counterbalance to developers trying to move fast and build crazy cool new things. It's just nice to have that as a stability factor, I think, inside the product.

[00:32:17] - Burak

I think so, yeah. Well, I guess historically the Postgres community kind of divided into camps, and some of them would want to implement the new shiny thing and some of them would say, hey, let's just get stabilized. And I believe JSONB support comes from the camp that wants to innovate and try new things. At the beginning, I think they got some hesitation from the other camp. But at the end, what they did proved itself to be very valuable, and now JSONB support is one of the most widely used features of Postgres. So I guess there is some sort of balance between trying risky things and trying to stay stable.

[00:33:10] - Ry

If you look at the change log for Postgres 16, they did a lot of things. It's almost more than anyone can keep in their head. That's what I'm saying: the good stuff gets through, it's just that the bar is high, and there are a lot of assurances that they didn't break any part of Postgres in the process. I really appreciate that part of it, and it's one of the reasons why I'm so excited to be building a product on top of it.

[00:33:41] - Burak

Yeah, well, at the same time it is sometimes a bit frustrating, because sometimes you have a feature you want to be merged in; you might be the author, or you might just be someone watching from the side who desperately needs that feature. And then you see that there is a huge discussion going on, people cannot convince each other, and it falls to the next cycle, which is a year later. So that's a bit frustrating, but I guess it is the cost of having a quite stable system.

[00:34:18] - Ry

It's the cost. And like I said, obviously I haven't been here for the 26 years watching the mailing lists, and maybe I'm jumping in here relatively late in a cycle, but I just appreciate all the efforts and all the debates and all the fights that have happened there, because I think it's created such a great core engine. All right, well, so where can listeners find you online? I imagine you're on X.

[00:34:50] - Burak

Yeah, they can find me on X and on LinkedIn. I guess those two would be the places they could find me, but mostly on X. My alias is @BYucesoy, so basically the first letter of my first name and my last name. Great.

[00:35:10] - Ry

Well, we're excited to see the outcome as you guys are shipping Ubicloud. Excited to see that. And yeah, appreciate you joining us today.

[00:35:24] - Burak

Thanks a lot for having me here. It was a great talk, I enjoyed it a lot, and I'm definitely looking forward to the other episodes releasing so that I can listen.

· 6 min read
Adam Hendel

alt_text

So you have a database, and that database does something. (Probably several somethings, if we're honest). However, today, you need it to do something else.

Simple enough...you just create a tool to give it that new functionality. Job done.

Except it isn't. Because the requests for new "things" never stop. They get complicated. They slow things down. They conflict with one another. Sometimes they even screw up your database along the way. And if we're honest, you don't really want to be constantly building new tools for every new need anyway. Before you know it, all of these "things" you're asking the database to do are starting to get in the way of its core performance.

The good news is that Postgres has a rich ecosystem of tools and services built by the community. Many of these run in the database as Postgres extensions, while others run outside the database as external services. Some of the most well known examples are PostgREST, an out-of-the-box REST API for Postgres, and PgBouncer, a production-ready connection pooler. A scalable way to run these pieces of software with the Tembo operator is to utilize a new feature called Application Services, which runs these applications in containers next to Postgres.

Understanding the Essence of Tembo Operator

We developed the Tembo Operator to add new capabilities for developers and enterprises. We believe it stands out from other approaches in many ways; in particular, one important feature is that it lets users deploy applications in containers right alongside their PostgreSQL instances, ensuring an efficient, very low latency connection to Postgres.

The Advantage of Running Applications in Separate Containers

PostgreSQL is a very powerful piece of software. When running in Kubernetes, a useful pattern is to run important applications in a separate container, in the same namespace. By running these applications in separate containers, we isolate application requirements, such as resource allocations. This means that each container can have dedicated CPU and Memory, ensuring there's no competition with the resources reserved for running PostgreSQL. This segregation ensures that the Postgres database and other applications can function and scale efficiently.

Spotlight on PostgREST: A Prime Example

PostgREST is a perfect example where an application can run with this pattern. PostgREST serves as a standalone web server that turns your database directly into a RESTful API. The immediate advantage? Developers can use the auto-generated API to build robust applications without writing any code. By simplifying the process and reducing the need for middleware, PostgREST has become a popular tool in the Postgres ecosystem.

However, let’s remember that the main advantage of this method is not just resource allocation. It's about ensuring the optimal performance of Postgres without bogging it down with additional tasks.

Let’s look at an example of a spec that would run the workload that we just described. This will look familiar if you’ve worked with the Kubernetes Pod spec.

apiVersion: coredb.io/v1alpha1
kind: CoreDB
metadata:
  name: my-postgres-deployment
spec:
  image: "quay.io/tembo/standard-cnpg:15.3.0-1-1096aeb"
  appServices:
    - name: postgrest
      image: postgrest/postgrest:v10.0.0
      routing:
        - port: 3000
          ingressPath: /
      env:
        - name: PGRST_DB_URI
          valueFromPlatform: ReadWriteConnection
        - name: PGRST_DB_SCHEMA
          value: public
        - name: PGRST_DB_ANON_ROLE
          value: postgres

The Tembo operator always starts Postgres with default configurations, so let's focus on the appServices section of the spec, which tells us what application container we'll run and how.

We can run any number of applications in containers near the Postgres instance. Only the name and image parameters are required, but you can configure commands, arguments, environment variables, CPU and memory, and readiness and liveness probes for the application.

If you need network communication, you can also configure the ingress by specifying a port and a path. In our example, PostgREST runs on port 3000 and expects traffic routed to the root path. appServices also support various middleware configurations, but we will cover those in a future blog.

Environment variables also have some advanced configuration. The Tembo operator creates a few roles in Postgres and tracks those credentials in a Kubernetes secret. If we want to pull those credentials into an application, we can do so by using the valueFromPlatform option. The service currently supports pulling in credentials for ReadWriteConnection and ReadOnlyConnection, but we'll be building out more role assignments soon.

Arbitrary container deployments

The Tembo operator is not limited to PostgREST; it can run nearly any containerized application, for example your Rails application or FastAPI web server. Specify the image and other configurations as necessary, and the Tembo operator will provision the resources.

apiVersion: coredb.io/v1alpha1
kind: CoreDB
metadata:
  name: my-postgres-deployment
spec:
  appServices:
    - name: my-app
      image: quay.io/myImage:latest
      routing:
        - port: 3000
          ingressPath: /
      command: [ "python", "-m", "myapp.py" ]
      env:
        - name: MY_ENV
          value: my_value

Kubernetes Footprint

Let’s take a look at what actually gets created when you specify an appService; we’ll use the postgREST example as an illustration.

Every application service gets its own Kubernetes Deployment. If the appService has any ingress requirements, then a Kubernetes Service and an ingress resource (Tembo currently uses Traefik) are created for that appService. Middleware is also created (also Traefik) if the appService has any middleware configuration specified. Here's an image of the pods you'd likely see in the namespace you create for Postgres.

alt_text

More to follow

The Tembo operator is under continuous development and runs the entire Postgres Stack, which is not limited to just the core Postgres engine. It also includes auxiliary container services that run outside of Postgres. These auxiliary services augment the Postgres experience by running complex applications outside of the database, which isolates their workload from it. The Tembo Operator makes it simple to get these services up and running.

And if you want to try out the full power of Postgres without being concerned with how it will run, try out Tembo Cloud. Drop into our Slack channel to ask questions and get help from the Tembo team and other community members.

· 31 min read

In this episode, Ry, Regina, and Paul talk about geospatial development, the challenges of creating and maintaining an extension across multiple Postgres development cycles, and what they’re hoping for in the future of Postgres.

Watch below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Regina and Paul for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.

Transcript

[00:00:12] - Ry

Hi, I'm Ry Walker, founder of Tembo, a managed Postgres company. And today I have Regina Obe and Paul Ramsey, two people who have been working on PostGIS for quite some time. Welcome to the show, both of you.

[00:00:28] - Regina

Thanks, Ry.

[00:00:28] - Paul

Thanks, Ry. Nice to be here. Great. Yeah.

[00:00:32] - Ry

So I think you guys have all been using Postgres for quite some time. Maybe. Paul, when did you start using Postgres?

[00:00:42] - Paul

I started using Postgres, and I'm sure I can give you a version number, I think like 6.4, which I think puts me in like the '98, '99 era. And yeah, I was using it as a consultant. My first career was in consulting and working for the provincial government. And we were doing a big data analysis project and it was a geospatial project, but all the geospatial processing was being done by an external piece of software and we used Postgres as the backing store for what was a very long, like, 20 day compute cycle to spit out all the results for the entire province.

[00:01:17] - Ry

Nice. How about you, Regina?

[00:01:18] - Regina

I started out as a SQL Server person, so yeah, Paul dragged me into Postgres. So my first introduction was via PostGIS in 2001, I think it was. So it was like 7.1 or 7.3, I can't remember somewhere between 7.1 and 7.3.

[00:01:39] - Paul

It was 7.1 because that was the first version of Postgres that you could actually do a geospatial extension. Oh, nice.

[00:01:45] - Ry

And do you regret leaving SQL Server?

[00:01:49] - Regina

Well, I still consult for SQL Server, and what's interesting about that is I think I've gotten to be a better SQL Server expert knowing Postgres because Postgres would always introduce things first, like all the lead functions, all the window functions, Postgres introduced those before SQL Server had them and the CTEs and everything.

[00:02:18] - Ry

Yeah, got it. What's the state of geospatial in Microsoft SQL Server, would you say compared to Postgres? I don't know too much about that ecosystem, but I'm curious, how much parity have they achieved?

[00:02:32] - Regina

Well, I think their geography is still better than ours, but in terms of the geometry support, they're way behind, which is what most of the state people care about.

[00:02:42] - Ry

Awesome. Cool. All right, well, I'm going to kind of go off my standard script because how long have you guys been working on PostGIS for? This is one of the oldest extensions, or is it the oldest extension?

[00:02:59] - Paul

Yeah, well, I mean, so when we started, extensions didn't exist; this predates the extension-as-a-package concept. It's just the idea of adding things at runtime to the database, which was there from the very beginning. That was a Stonebraker original. Like, this is one of the things Postgres can do. PostGIS is from 2001; as Regina mentioned, she started using it within three months of the initial release. So, yeah, May of 2001. So yeah, we're at 22 years.

[00:03:28] - Ry

So how has it, has it changed much? Would you say since? Has it been pretty steady progress towards what it is now? Or were there any big milestones, big changes that you can remember that are super noteworthy?

[00:03:42] - Paul

It has not been like one linear rise. It's definitely been punctuated equilibrium. And as these things go, the most new features and new capabilities happened right at the start, because we started with nothing, right? We started with the first release. It had geospatial types, it had a geospatial index. It had eight functions, I think only one of which was analytical in any way. But that was like release 0.1 in 2001. I think we got to 0.8 in two years, like 2003. And that had the first full set of spatial functions where you could really test all the relationships between the geometries. You could do constructive geometry operations like buffering, intersections, and unions and so on on the geometries. That was the really big one; you could have stopped there in many respects. If you look at what SQL Server has, that's kind of what they did until 2008. They had no spatial at all. And then they came out with their spatial extension, which was basically what they have now, because it was a capable, complete vector spatial database, full stop. But since then, we've kept adding stuff. Maybe Regina will take another tour of what she thinks the biggest things to happen since 0.8 were.

[00:05:08] - Regina

Yeah, so I think PostGIS just improved as Postgres improved. So they introduced GiST, and all our indexes changed from the old R-tree to the GiST index. They improved aggregation in Postgres, which I think was a huge milestone for us, because a lot of the processing we do involves aggregating geometries together. So we would see something like a tenfold speed improvement in terms of aggregation of geometries. And then the other things... I think CTEs were pretty useful. Now, nobody does any queries in spatial without using a CTE anymore.

[00:06:01] - Ry

Would you say, like those advances in Postgres? I don't know. Let's say that they did 80% of the work towards the thing working ten times faster, and you guys had to do 20%. Or what was the ratio of effort from where were you getting great improvements virtually for free without any effort on your side?

[00:06:21] - Regina

Yeah, I think we were getting great improvements for free, pretty much.

[00:06:25] - Ry

Oh, that's great.

[00:06:25] - Regina

And then there's the whole KNN thing, which they drastically improved from 9.2 to yeah, I was making fun of Bruce, and then Bruce said, I can't believe you'd have to do that. We need to fix it. And so after 9.2, the KNN became real KNN instead of just box KNN. But yeah, in all those cases, there wasn't that much we needed to do to get the improvement.

[00:06:49] - Paul

Parallelism.

[00:06:50] - Regina

Oh, yeah, parallelism.

[00:06:51] - Paul

Another one of which is like a huge freebie. Like, look at that. We do parallel processing.

[00:06:57] - Regina

Yeah, the only thing we needed to do is tell people how to configure their Postgres confs to take advantage of parallelism.

[00:07:05] - Ry

Curious, what kind of challenges did you guys face? I'd say early on building this.

[00:07:14] - Paul

The biggest thing? Well, I don't know. There are a lot of things. As a geospatial extension, our code isn't packed like into one hunk of code that we control. We use a lot of library dependencies. And so we end up having, and we still do having a really complex build almost right out of the gate compared to other extensions, because other extensions are like, hey, we're a self contained thing. You just type make and you're done. Whereas with us it was always like, hey, guess what? You get to track down three or four different other libraries in addition to the thing that we have and make sure they're the right versions, and here's the configuration, so on. So we end up with this really knotty configuration setup. And initially, like, if you're going to start using this stuff back in the day, step one was always to build it. They weren't prepackaged. Postgres itself wasn't prepackaged, everything was from source, so it put this fairly steep entryway in for new users.

[00:08:14] - Regina

Yeah, although early on, we didn't have any other dependencies except Postgres, so it was much easier.

[00:08:19] - Paul

Well, GEOS after two years and then, well, proj at the same time. I mean, we had GEOS and proj almost from the start, and then started picking up format libraries after that.

[00:08:28] - Regina

Do we have proj?

[00:08:29] - Paul

Yeah, I don't remember projection because proj already existed, so tying it in was a pretty straightforward thing to do.

[00:08:35] - Regina

Okay, I don't remember that. Or maybe it was an optional dependency, so I just never built with it.

[00:08:43] - Paul

Everything was optional.

[00:08:44] - Ry

Tell me what GEOS is. I've seen that mentioned, but I didn't dive in. Is that powering other geospatial solutions besides PostGIS?

[00:08:55] - Paul

Yeah, it is. GEOS is an acronym of course; it stands for Geometry Engine, Open Source. It provides the computational geometry underpinnings for a bunch of sort of the key functions. I mentioned buffer, that's a GEOS function at the back end. Intersection is a GEOS function at the back end. The Boolean predicates, intersects, contains, within, those are all GEOS functions at the back end. Some of the fancier stuff that we've added to GEOS is now exposed in PostGIS. So if you ask for Delaunay triangles or [...] polygons, you get those from GEOS. And then GEOS, because it's like this useful Swiss Army knife of computational geometry, is used by other programs in the geospatial ecosystem. So if you use the Google library, it backs out of GEOS for its geospatial operations. And like most prominently, if you use the QGIS desktop GIS, you'll find that GEOS backstops its geospatial algorithms.

[00:09:54] - Ry

So did you guys refactor code into that, or did you just end up replacing some stuff you had built early on with that library later on?

[00:10:01] - Regina

Well, Paul started GEOS too.

[00:10:04] - Ry

Okay. All, it's okay. So it sounds like refactored into that.

[00:10:09] - Regina

Well, not so much refactor. I think it was always kind of a separate thing. Right. It was always intended to do more.

[00:10:15] - Paul

And as an external library, there's actually a slight penalty to using it because you've got to take your data out of the Postgres memory space and copy it over into the GEOS memory space to work with it. And that's just expensive enough that for simple things, we still actually kept the original native PostGIS-side implementations for things like area or length; the most complex one that we kept is distance. Yeah.

[00:10:45] - Ry

Are there any milestones going in the future for PostGIS that you're looking forward to or is it just kind of stability and continuous improvement?

[00:10:56] - Paul

You go, Regina.

[00:10:58] - Regina

Oh, no, you're not going to ask me that. I think speed is always good, and my concern, I think, is mostly improving raster. And I'm looking forward to the TOAST API changes that are coming along and how we could leverage those.

[00:11:16] - Paul

That's funny you bring up raster. Maybe we should talk about this in the part we have a little get together, because it's one of the things which in front of me lately is like the underlying infrastructure we have for handling rasters was built well, no, 2010, anyways. It was built, like, in an era when the idea of, say, doing cloud based raster processing was kind of not what would be done. It was built around the idea that you would store your own rasters kind of locally in your local area network, that increasingly organizations just don't do that. They still want to have raster access. And while you can do remote raster access with what we have, it's kind of clunky. It's not optimized for that at all. I feel like just like a relook at what the raster use cases are and kind of a reevaluation of whether how we handle rasters is right is due. I was looking at the API that Alibaba offers on their cloud databases and thinking, yeah, that's kind of an interesting way of tackling rasters.

[00:12:17] - Ry

Yeah, you're talking about like Alibaba is like a they have a proprietary database or not, their Postgres.

[00:12:25] - Paul

They don't say. They're really cagey about it. So I'm not sure whether it's backed by Postgres or not, but their take on rasters is very much all their own. I haven't seen anyone else go at it that way.

[00:12:35] - Regina

Oh, I haven't seen that. I should check that out. But yeah, that's one of the complaints that people have, at least the clients I have, that the outdb is much slower and yeah, that could be improved.

[00:12:48] - Paul

Yeah, great.

[00:12:51] - Ry

I'm curious if you're paying attention. I don't know the difference. I've not studied how PostGIS works or raster, and it is but I'm curious, is the vector search stuff happening that's happening in the ML space? How closely related is the math of that to the math of I'm sure you've kind of paid a little bit of attention to that space. Is it wildly different or is it kind of remarkably similar or neither of those?

[00:13:23] - Paul

Yeah, well, I mean, insofar as a 2D vector is the same as a [...] vector. At a conceptual level, they're the same. But from a practicality point of view, the practicalities of handling super high dimensional stuff are just different. Like one of the first things you learn, even like we go to four dimensions, even at four dimensions, the indexing properties start to break down if the kind of sort of standard R-tree stuff just doesn't work as nicely. You don't have to get very high up into a complex dimensional space for that form of indexing to be like, it doesn't work, it's not doing what we need to do. And you can really see that in just how different the indexing approaches are for ML vectors compared to sort of 2D vectors.

[00:14:10] - Ry

So, like, high dimensionality just requires an entirely different mindset and solution set. So I'm hearing you say.

[00:14:17] - Paul

Yes, totally.

[00:14:18] - Ry

Just curious if it somehow scales into that or not. That's cool. Yeah. Well, tell me, I guess, real quick, I'd love to learn a little bit more about the commercial products, I guess. How does this manifest? How does PostGIS manifest commercially for both you and Regina?

[00:14:41] - Regina

For my side, it's mostly consulting. Yeah, I don't have any commercial things around.

[00:14:52] - Paul

I make a sideline in talking about and studying the economics of open source. One of the things that's kind of obvious once you start looking at this stuff is that there's like a project size threshold before you start to see enough money around a project to support full time maintainership or people whose jobs are mostly around the project. And PostGIS is interesting in being like one of the few Postgres extensions which has achieved that level. But even at that level, it's quite small. So you've got Regina, who has a good consulting business. I work for Crunchy Data, which is a professional open source support company, which is to say they sell support contracts to Fortune 100 companies and the US Federal government and increasingly a number of international organizations of similar size and scale, big organizations. And then it also has a software as a service called Crunchy Bridge, which is basically database in the cloud of the sort that everyone's gotten used to. So, I mean, in that respect, I'm kind of like Regina. I work for Crunchy because they value my expertise as a PostGIS committer and my ability to help their customers who deploy PostGIS.

[00:16:20] - Paul

So it's still very much like skills for hire. No one has wrapped PostGIS itself in particular up as a specific product.

[00:16:28] - Regina

Yeah, I mean, others have, it's just.

[00:16:30] - Paul

We haven't and then yeah, other members of the development team are also still sort of on the consulting bandwagon and that's how it bundled.

[00:16:41] - Ry

I'm not familiar with anyone who's bundled it up as a product per se, who's done that?

[00:16:47] - Regina

I mean, it's not so much a product, but like all the cloud providers so Amazon has it, Microsoft has an installable extension. Yeah. As an installable.

[00:16:59] - Paul

From the point of view of ubiquity and market spread.

[00:17:03] - Regina

Yeah. CartoDB used to be the closest though. But do they still use Postgres or did they switch to something else?

[00:17:09] - Paul

Which DB?

[00:17:10] - Regina

CartoDB. Carto.

[00:17:12] - Paul

They still use Postgres. Yeah. They haven't moved up. That would be a good sort of like example of a productization of PostGIS. Certainly in their earliest incarnation they had a software as a service which did a very good job of allowing you to put data in, visualize it in a whole bunch of ways. And that exposed like SQL as the language for customization of what you were seeing. And it was all sitting on top of PostGIS, but it was marketed as CartoDB. So they had productized around a software service that more or less made the database not invisible, but the actual brand of the database was irrelevant.

[00:17:51] - Ry

Do you see real old versions of PostGIS surface? Like, I'm sure you probably don't see 0.8 anymore, but no. How good are people at staying up on the latest, would you say? I have not as good as you.

[00:18:08] - Regina

Haven't seen any 1.5 recently. I think there might have been one.

[00:18:16] - Paul

Your standards are different from mine, Regina, because I... I'd freak out if someone brought me a 1.5. I'm shocked at how many version twos are still in the wild. Let me count back: that first digit is worth about a year. So we're at 3.4 now, so 3.0, that's five years. So yeah, someone shows up with a two point something, it's a five year old installation.

[00:18:35] - Regina

Yeah.

[00:18:35] - Ry

And they're probably five versions of Postgres old too, right?

[00:18:39] - Paul

Yeah, exactly.

[00:18:41] - Ry

What's the biggest jump you'll do? Will you take someone from eleven to 15 or Postgres? That is the equivalent?

[00:18:48] - Regina

Yeah, because the latest wouldn't even run on that. Well, in theory, anybody from PostGIS 2...

[00:19:00] - Regina

should be able to go straight to 3.4 without any issue as long as they upgrade their Postgres too. What happens to their development, their applications breaking, that's on them.

[00:19:15] - Ry

Well, it must be nice to be relying on Postgres. I think you can criticize, if you would like, various aspects of how Postgres is built, but I think that it's really great how stable, I want to say slow, the progress is, because it gives you a very stable and reliable chassis to build this on top of. I'm sure you guys agree with that.

[00:19:44] - Regina

Yeah, I think in terms of slowness, they're actually much faster than other relational databases.

[00:19:49] - Paul

Much, much faster in terms of how fast things change. Yeah, I guess there's always that push particularly thing. New SQL dialect stuff has come in pretty quick.

[00:19:58] - Regina

Yeah, because I remember when I'd be talking to my SQL Server friends and they're still waiting for the LEAD function. That happened like five years ago back in the day. But yeah, I think it moves in terms of the SQL standard a lot faster than the others. I think even faster than Oracle, though I don't have too many Oracle friends to talk about it with.

[00:20:20] - Paul

Yeah. I'm frequently surprised by how much internal churn there is, because I always feel like, oh, we're the supermature extension. We don't reach like super deep into the core. We're not hooking into like executor hooks or planner hooks or stuff like that. And yet there's always this medium to long list of things, in terms of our code, that have to be tweaked at each major Postgres release for it to build and still run correctly.

[00:20:57] - Regina

Yeah, because I can't think of any release we've done where we didn't have to tweak it for the in-development major release of Postgres. So they change it enough that it always affects us, yeah.

[00:21:10] - Paul

Yeah.

[00:21:10] - Ry

So even if the outer shell seems pretty stable, the insides are changing and you guys have some stuff poking down at least to the middle, if not the bottom. That's great. Yeah, like I said, I think it's to me a really perfect pace because we do get that kind of like annual innovation and if there's something that's really important, it'll get taken care of, I think. I'm curious, are there things happening in Postgres core that you guys are excited about? Maybe for we could talk about 17 or whatever, 18 future? Is there anything in particular that you guys are excited to see?

[00:21:53] - Regina

Well, there's bi directional seems to be making its way. I don't know if it's going to make it into 17, but it looks like they're putting in the hooks for it anyway, so that's kind of exciting.

[00:22:05] - Paul

Yeah, for me, that's what's exciting. I've been watching the decade long crawl towards being able to do a replicated OLAP system where you get full push down to all the nodes. So every release, the postgres_fdw folks from Toshiba and other places add a couple more turns of stuff that you can push down, which is exciting to me because we're that much closer. We're that much closer to being able to do big multi node queries, because there's OLAP workloads and OLTP workloads for spatial databases. Really, the bias is towards OLAP for sure, in terms of how you see people using the database. I mean, they like transactionality, they like to be able to manipulate their data. But when it comes time to ask, what is this for? It's like, oh yeah, we run big analyses. So the ability to push stuff out to multi node as that gets more mature, as it gets more possible, like, that becomes something that's really exciting on the spatial side. So I watch every little tick of the gears towards that endpoint and get very excited. So it's been pushed down in the last couple of releases.

[00:23:22] - Paul

Last release had good stuff around parallelization in the planner and executor as well for partitions, which is like, that's a big deal, because the holy grail is you've got your big table partitioned out across the remote nodes. So each of your partitions is actually a foreign table. And when you run the query, the planner says, oh, look, I can just ask all the partitions to run it simultaneously; it gets back the result, assembles it and says, here you go. We're getting close to that, which would be a big deal, because there's a whole bunch of workloads that you can just turn into sort of like a standard OLAP star schema: one big fact table, a bunch of dimensions, and you'd be able to do lots of really cool spatial stuff that right now, not so much. I don't know when we'll ever get to the holy grail, which is to be able to ship data between nodes in order to allow nodes to do things like join across two big fact tables. That might never happen. I don't see anyone doing that. But that's the one that would have me tearing my clothes off and dancing in the street.

[00:24:34] - Ry

Yeah. So that's interesting. You're talking about like I haven't followed the development of Postgres FDW. I used it recently and it's pretty powerful at low scale, but you're talking about at high scale, at OLAP scale, just optimization across.

[00:24:53] - Regina

And Paul has this fantastic foreign data wrapper, a spatial foreign data wrapper, which can read how many formats? 800?

[00:25:03] - Paul

Yeah, no, there's not that many formats in the world, but I don't know several different yeah.

[00:25:08] - Ry

Formats of geospatial formats. Is that what you said? Or other types of formats.

[00:25:14] - Paul

Geospatial formats. But to an extent, geospatial formats is a category which also includes non geospatial things because all you have to do is not have the geospatial column and then what do you call it? Oh, it's just a format. So Excel can be a geospatial format. Got it. And it's something that you can read. SQL Server.

[00:25:32] - Ry

What's the name of that extension?

[00:25:34] - Paul

Oh, that extension is called OGR FDW, ogr_fdw.

[00:25:37] - Ry

OGR.

[00:25:38] - Paul

It stands for, well, OGR is like, it's not even worth trying to unpack it. OGR is just the vector side of the GDAL library, which also has an acronym which is not worth unpacking anymore because it's a 20 year old acronym, but it refers to the library that it's binding. This is the OG library for vector formats? Yeah.

[00:25:58] - Ry

Cool. Yeah, I'll check that out. That's cool. Kind of wrapping this up on the core. Like if you had a magic wand and you could add any feature to know. Of course, if that was added, then you immediately have work to do in PostGIS as well. But what would your magic wand manifest this weekend in Postgres if you could pick one thing?

[00:26:20] - Regina

Yeah, I can't think of any.

[00:26:21] - Paul

I know what Regina's is. Oh, come on. I'll tell you what yours is then, Regina, if you're not going to say.

[00:26:30] - Ry

You guys can answer for each other if you want.

[00:26:32] - Paul

It's handling extensions in a slightly different way, or sorry, handling extension updates in a slightly different way. So, extension versioning.

[00:26:40] - Regina

Oh, yeah. How did you read my mind? I completely forgot about it because I gave up hope on it.

[00:26:47] - Ry

Yeah, tell me about that.

[00:26:48] - Regina

So Sandro, who's also on the PostGIS team, he's been working on an update to the extension machinery that would allow us to reduce our upgrade script to one file, because right now we ship like 500 files, which are all pretty much just symlinks to the same thing. And so it's just a way to have the extension machinery understand that, hey, this script can be used to upgrade this version, this version, this version and this version, instead of having to itemize every single version upgrade.

[00:27:30] - Paul

Yeah, the extension upgrade machinery is very clever, but it starts with an immediate assumption, which is that as a developer, you will manage your extension upgrades as incrementals between versions and that it will cleverly find the path through the incrementals from the version you have to the version you're going to, applying all the little slices on the way.

[00:27:55] - Regina

And it uses Dijkstra for that.

[00:27:58] - Paul

Yeah, it's super clever. Even like in the best case scenario where you'd already been doing that, it's probably not super ideal for any project where the development path isn't linear. So PostGIS has, I don't know, 25 or so minor releases, like X.Y, and then within those, there's maybe five patch releases across each of those. So we'll have a whole bunch of parallel version trees, right? You're on 2.3.5, or you're going to go to 2.3.6, but then you might want to go to 2.4.8. And that means you have to have all these individual, even if you're doing things like one tiny step at a time, you would have all these individual little hops across the branches. If you had just this one line, it kind of worked; you just sort of chained together this one little line and you don't have very many files. When you have all these little branches, all of a sudden you need all these extra little hops across the branches. And it's made worse because all of our management of the SQL side of it, the SQL side of the extension where you define the functions in SQL land and say, oh, it's over here in the dynamic library.

[00:29:24] - Paul

We've been managing that since the pre-extension world. And our way of managing it was to have a SQL file which can always be cleanly applied against any previous version. So it's quite long, because the SQL file has every single definition of every single thing in it. So how do you handle the incremental thing from 1.2 to 1.3? Well, you have one copy for 1.3 and one copy for the upgrade as well. So every little upgrade has a full copy of that fairly large SQL file. On Unix systems now we just ship symlinks instead of shipping the whole file. But you end up with just a huge pile.

[00:30:05] - Regina

Yeah, actually we changed from that to just a file that has nothing in it. Right. To just on any version which kind.

[00:30:13] - Paul

Of the chain eventually arrives at the full one.

[00:30:17] - Regina

So now that's the same across. But I think ours is more complicated too because for each version we support multiple versions of Postgres and we also enable new features. If you're on like twelve you get something, if you are eleven you don't get that something.

[00:30:40] - Paul

Certainly something which is not contemplated by the original designers is our file. Our input file actually goes through, we put it through the C preprocessor before we give it to Postgres, because we have a whole bunch of ifdefs against what Postgres version you're on living inside the SQL file that have to be preprocessed before it's usable.

[00:31:02] - Ry

Yeah, I understand. Maybe you're saying the current design is just naive, thinking that you're not going to try to support multiple versions of Postgres with one of your versions of the extension, and there's no home for that information, I guess, for what the range is, to some degree.

[00:31:22] - Paul

Yeah, I mean although the extension framework does contemplate the idea of versioned extensions, again, it doesn't really contemplate them as anything except for a linear chain. And once you have a more complex situation than that it's kind of hard. Like we for a very long time supported being able to run different versions of PostGIS inside the same Postgres cluster. We still do actually support that, but it's a feature that it seems like mostly OA developers use. So it's optional now and we default just to like one version of Postgres or PostGIS for each Postgres cluster. But that functionality was always there. But the extension facility did not grok that.

[00:32:05] - Regina

Yeah, and packagers did not grok that either. So they always, always ship one.

[00:32:10] - Ry

Great. I'm curious, try to wrap up here a little. I realize now I've been kept you here for quite some time, but do either of you listen to podcasts very much?

[00:32:20] - Paul

I do all the time. It's my gym thing. I go down to the garage gym and that's what keeps me from going crazy.

[00:32:28] - Ry

Yeah, give me some of your favorite podcasts.

[00:32:31] - Paul

I tend to go on the current affairs side, so I listen to the Ezra Klein show from New York Times a lot and Odd Lots from Bloomberg, a little bit of financial news.

[00:32:40] - Regina

Yeah, you can tell he's the son of a politician.

[00:32:42] - Paul

It's interesting stuff.

[00:32:44] - Ry

Yeah. How about you, Regina?

[00:32:47] - Regina

No, I'm not a podcast person. I go swimming. But yeah, I can't really hook up a podcast.

[00:32:55] - Ry

Yeah, underwater is probably possible, but not super comfortable. Right, great. All right, well, so where can listeners find you online? Maybe share your websites or Twitter? Mastodon handles.

[00:33:13] - Paul

On the site formerly known as Twitter, I'm pwramsey. I'm also pwramsey at Mastodon Social, and in the blog world, I'm at cleverelephant.ca.

[00:33:25] - Regina

Yeah, I have, how many blogs do I have? I have BostonGIS.com and Postgresonline.com. Twitter is just Reginaobe, I think, and my website, Paragoncorporation.com, I guess those are it. Oh, and my book site, PostGIS.us.

[00:33:47] - Paul

Yeah, the book site.

[00:33:49] - Ry

What's that? I always say PostGIS, by the way. I got to learn it's PostGIS. But do you have a book for around it, or have you written many books?

[00:33:57] - Regina

Oh, yeah. So I wrote "PostGIS in Action." I'm working on "PG Routing." I'm also supposedly working on a Postgres book, both of which I'm very behind on. And I did "SQL in a Nutshell." And let's see, what else. Is that it? Oh, and "Postgres Up and Running." That's a pretty popular book. Surprisingly popular.

[00:34:25] - Ry

Yeah, it's sitting right there. I own that one.

[00:34:28] - Regina

Oh, really?

[00:34:29] - Ry

Thanks for writing it.

[00:34:29] - Regina

Oh, yeah.

[00:34:31] - Ry

Thank you both for joining. Appreciate you, all the work you've done for Postgres and PostGIS, and appreciate having you on the show.

[00:34:42] - Paul

Thanks for having us, Ry.

[00:34:44] - Regina

Thanks, Ry.

[00:34:44] - Paul

Thanks. Bye.

· 8 min read
Ian Stanton

At Tembo, we’ve been developing an open-source Kubernetes Operator for Postgres. We use this operator to power our managed Postgres platform, Tembo Cloud. We’re excited to share our progress, experience, and vision for this project. This post aims to assist anyone interested in utilizing Kubernetes operators for Postgres or writing Kubernetes operators using Rust.

What is a Kubernetes Operator?

Kubernetes was designed with automation in mind, and operators allow users to extend native Kubernetes behavior and principles to manage custom resources and components.

With a Kubernetes operator, users can write code that defines how their application should be deployed and managed on Kubernetes. This code is then packaged into a container image and deployed to Kubernetes. The operator then watches for changes to the custom resource and takes action to reconcile the state of the application’s components with the desired state of the custom resource.

In short, using a Kubernetes operator is the most effective way to run applications on Kubernetes in 2023.

You can read more about Kubernetes operators on this CNCF blog post, which is where the image below comes from.

(Image: Kubernetes operator diagram.)

*Image credit: CNCF blog*

Kubernetes Operators and the Rise of Rust

Because Kubernetes itself is written in Go, the majority of Kubernetes operators available today are also written in Go. The kubebuilder project simplifies the process of building Kubernetes operators in Go and is widely considered the de facto standard for doing so.

With the increasing popularity of Rust, it was only a matter of time before someone developed a framework for building Kubernetes operators in Rust. The kube-rs project allows developers to build Rust-based Kubernetes operators in a similar manner to the kubebuilder project. This project excited us for a few reasons:

  1. We were interested in learning Rust.
  2. We wanted to explore whether Rust could be a viable alternative to Go for writing Kubernetes operators.
  3. We were inspired by the success of companies like Stackable, who have developed numerous Kubernetes operators in Rust.

This excitement led us to the decision to write our Kubernetes operator in Rust.
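
To give a feel for the shape of a Rust operator, here is a minimal, illustrative kube-rs controller skeleton. It is not the Tembo Operator's actual code: the CRD fields are made up, and exact signatures vary between kube crate versions (this sketch follows the 0.8x runtime API and assumes the kube, k8s-openapi, schemars, serde, thiserror, futures, and tokio crates are in Cargo.toml).

use std::sync::Arc;
use std::time::Duration;

use futures::StreamExt;
use kube::{Api, Client, CustomResource};
use kube::runtime::controller::{Action, Controller};
use kube::runtime::watcher;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Hypothetical, heavily simplified spec for illustration only.
#[derive(CustomResource, Clone, Debug, Serialize, Deserialize, JsonSchema)]
#[kube(group = "coredb.io", version = "v1alpha1", kind = "CoreDB", namespaced)]
pub struct CoreDBSpec {
    pub image: String,
    pub stop: bool,
}

#[derive(Debug, thiserror::Error)]
pub enum Error {
    #[error("kube error: {0}")]
    Kube(#[from] kube::Error),
}

// Called whenever a CoreDB resource (or something it owns) changes.
async fn reconcile(cdb: Arc<CoreDB>, _ctx: Arc<Client>) -> Result<Action, Error> {
    println!("reconciling CoreDB {:?}", cdb.metadata.name);
    // Re-check periodically even if nothing changes.
    Ok(Action::requeue(Duration::from_secs(300)))
}

// Decide what to do when reconciliation fails: here, retry after a minute.
fn error_policy(_cdb: Arc<CoreDB>, _err: &Error, _ctx: Arc<Client>) -> Action {
    Action::requeue(Duration::from_secs(60))
}

#[tokio::main]
async fn main() -> Result<(), kube::Error> {
    let client = Client::try_default().await?;
    let coredbs: Api<CoreDB> = Api::all(client.clone());

    Controller::new(coredbs, watcher::Config::default())
        .run(reconcile, error_policy, Arc::new(client))
        .for_each(|result| async move {
            if let Err(e) = result {
                eprintln!("reconcile failed: {e:?}");
            }
        })
        .await;
    Ok(())
}

A real reconciler would, of course, create and patch the downstream resources (CNPG clusters, Deployments, Services, and so on) rather than just logging and requeueing.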

Building the Tembo Operator

Tembo Cloud distinguishes itself from other managed Postgres offerings in several ways, one of which is the ability to install and enable Postgres extensions on the fly. This experience is in part powered by Trunk, a Postgres extension registry and companion CLI that provide a simplified extension management experience.
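
As a tiny illustration of that CLI (the exact version installed depends on what the registry serves at the time):

trunk install pgvector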

It also introduces the concept of Stacks, which are pre-built, use-case-specific Postgres deployments, optimized and tuned to serve a specific workload.

Roll Your Own

In order to build these unique capabilities, we knew we’d need to harness the power and flexibility of a Kubernetes operator in our own way. Although there are several Kubernetes operators for Postgres available, none of them offer the same unique Postgres extension management experience or the concept of Stacks.

Initially, we attempted to build our own operator from scratch. We had successfully built the extension management piece, but soon realized that we were duplicating existing efforts. We had a comprehensive list of baseline features to develop, which included:

  • Backup
  • Recovery
  • Connection Pooling
  • Failover
  • Upgrades

CNPG to the Rescue

Enter CloudNativePG (CNPG). CNPG is a Kubernetes operator for Postgres created by the folks at EDB. We found it to be the most compelling of the many Kubernetes operators for Postgres out there. It provided many of the features we needed, including backup, recovery, connection pooling, failover, and upgrades. However, we still needed the ability to install and enable any Postgres extensions on the fly and define Stacks.

This is where the Tembo Operator comes in. We built the Tembo Operator in a way that utilizes CNPG, which enables us to offer a distinctive management experience for Postgres extensions and Stacks while utilizing a reliable and stable Postgres solution.

Using the Tembo Operator

Let’s take a look at what a custom resource spec looks like for the Tembo Operator. Here’s an example for our Machine Learning Stack. We can see this sample spec makes use of our Machine Learning Stack and includes a handful of extensions. Keep in mind, these extensions are installed at runtime with Trunk and are not built into the container image.

apiVersion: coredb.io/v1alpha1
kind: CoreDB
metadata:
  name: sample-machine-learning
spec:
  image: "quay.io/tembo/ml-cnpg:15.3.0-1-a3e532d"
  stop: false
  stack:
    name: MachineLearning
    postgres_config:
      - name: pg_stat_statements.track
        value: all
      - name: cron.host
        value: /controller/run
      - name: track_io_timing
        value: 'on'
      - name: shared_preload_libraries
        value: vectorize,pg_stat_statements,pgml,pg_cron,pg_later
  trunk_installs:
    - name: pgvector
      version: 0.5.0
    - name: pgml
      version: 2.7.1
    - name: pg_embedding
      version: 0.1.0
    - name: pg_cron
      version: 1.5.2
    - name: pgmq
      version: 0.14.2
    - name: vectorize
      version: 0.0.2
    - name: pg_later
      version: 0.0.8
  extensions:
    # trunk project pgvector
    - name: vector
      locations:
        - database: postgres
          enabled: true
          version: 0.5.0
    # trunk project postgresml
    - name: pgml
      locations:
        - database: postgres
          enabled: true
          version: 2.7.1
    # trunk project pg_embedding
    - name: embedding
      locations:
        - database: postgres
          enabled: false
          version: 0.1.0
    - name: pg_cron
      description: pg_cron
      locations:
        - database: postgres
          enabled: true
          version: 1.5.2
    - name: pgmq
      description: pgmq
      locations:
        - database: postgres
          enabled: true
          version: 0.14.2
    - name: vectorize
      description: simple vector search
      locations:
        - database: postgres
          enabled: true
          version: 0.0.2
    - name: pg_later
      description: async query execution
      locations:
        - database: postgres
          enabled: true
          version: 0.0.8
  runtime_config:
    - name: shared_buffers
      value: "1024MB"
    - name: max_connections
      value: "431"
    - name: work_mem
      value: "5MB"
    - name: bgwriter_delay
      value: "200ms"
    - name: effective_cache_size
      value: "2867MB"
    - name: maintenance_work_mem
      value: "204MB"
    - name: max_wal_size
      value: "10GB"

To create our Postgres instance, we run the following command:

❯ kubectl apply -f yaml/sample-machine-learning.yaml
coredb.coredb.io/sample-machine-learning created
❯ kubectl get po
NAME                                               READY   STATUS    RESTARTS   AGE
sample-machine-learning-1                          1/1     Running   0          19s
sample-machine-learning-metrics-5fbcf9b676-hkxtk   1/1     Running   0          31s

Once we’ve connected to the Postgres instance, we can run \dx to confirm the extensions were installed and enabled as expected:

export PGPASSWORD=$(kubectl get secrets/sample-machine-learning-connection --template={{.data.password}} | base64 -d)
❯ psql postgres://postgres:$PGPASSWORD@sample-machine-learning.localhost:5432
psql (16.0 (Ubuntu 16.0-1.pgdg22.04+1), server 15.3)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

postgres=# \dx
                                       List of installed extensions
        Name        | Version |   Schema   |                          Description
--------------------+---------+------------+------------------------------------------------------------------------
 pg_cron            | 1.5     | pg_catalog | Job scheduler for PostgreSQL
 pg_later           | 0.0.8   | pglater    | pg_later: Run queries now and get results later
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 pgmq               | 0.14.2  | public     | A lightweight message queue. Like AWS SQS and RSMQ but on Postgres.
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 vector             | 0.5.0   | public     | vector data type and ivfflat access method
 vectorize          | 0.0.2   | vectorize  | The simplest way to do vector search on Postgres

Let’s install a new extension by adding the following to our sample spec:

...
  trunk_installs:
    - name: pg_bm25
      version: 0.4.0
...
  extensions:
    - name: pg_bm25
      locations:
        - database: postgres
          enabled: true
          version: 0.4.0

After applying the updated spec and connecting to Postgres, we can see the new extension pg_bm25 is installed and enabled as expected:

❯ psql postgres://postgres:$PGPASSWORD@sample-machine-learning.localhost:5432
psql (16.0 (Ubuntu 16.0-1.pgdg22.04+1), server 15.3)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

postgres=# \dx
                                       List of installed extensions
        Name        | Version |   Schema   |                          Description
--------------------+---------+------------+------------------------------------------------------------------------
 pg_bm25            | 0.0.0   | paradedb   | pg_bm25: PostgreSQL-native, full text search using BM25
 pg_cron            | 1.5     | pg_catalog | Job scheduler for PostgreSQL
 pg_later           | 0.0.8   | pglater    | pg_later: Run queries now and get results later
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 pgmq               | 0.14.2  | public     | A lightweight message queue. Like AWS SQS and RSMQ but on Postgres.
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 vector             | 0.5.0   | public     | vector data type and ivfflat access method
 vectorize          | 0.0.2   | vectorize  | The simplest way to do vector search on Postgres

Up Next

We’re currently working on exciting new features that enable the deployment of custom applications alongside Postgres. These features include a REST API, GraphQL, and more. Stay tuned for future updates!

For more information on running the Tembo Operator, check out our docs at:

If you're interested in contributing to the project, check out our Github repo at:

And if you want to try out the full power of Postgres and fully delegate extension management to us, try out Tembo Cloud.

· 3 min read
Steven Miller

Someone on X wanted to know how to get an anonymized dump of Postgres data, but didn't want to install an extension in their production DB. I want to show how you can start a local database, dump the production data into it, then do an anonymized dump from that without too much hassle.

Getting set up

Dockerfile:

FROM quay.io/tembo/tembo-local:latest

# Install extensions from Trunk
RUN trunk install pgcrypto
RUN trunk install postgresql_anonymizer

# Setting samples to use for anonymization
RUN cd /var/lib/postgresql/data/tembo/extension/anon && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/lorem_ipsum.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/identifiers_category.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/identifier_fr_FR.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/identifier_en_US.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/address.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/city.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/company.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/country.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/email.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/first_name.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/iban.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/last_name.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/postcode.csv && \
wget https://gitlab.com/dalibo/postgresql_anonymizer/-/raw/master/data/siret.csv

Build and run it like this:

docker build -t example-local-image .
docker rm --force local-tembo || true
docker run -it --name local-tembo -p 5432:5432 --rm example-local-image

This guide covers how to quickly try out Postgres extensions locally in more detail.

Dump the data into your local DB

pg_dump 'your-connection-string-here' | psql 'postgres://postgres:postgres@localhost:5432'

Anonymize the local DB

Initialize the extension:

SET session_preload_libraries = 'anon';
LOAD 'anon';
CREATE EXTENSION IF NOT EXISTS anon CASCADE;
SELECT anon.init();

For example, I have a table called "extension_owners", and I would like to anonymize the user_name column:

postgres=# select extension_id,user_name from extension_owners limit 1;
extension_id | user_name
--------------+---------------------
26 | coredb-service-user
(1 row)

I configured anonymization on that column like this:

SECURITY LABEL FOR anon ON COLUMN extension_owners.user_name
IS 'MASKED WITH FUNCTION anon.lorem_ipsum( words := 1 )';

There are a lot of other options for anonymizing data, and you can even write your own functions. More information in these docs.
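
For instance, a column holding email addresses could be masked with one of the extension's fake-data generators; this assumes a hypothetical email column on the same table, and uses anon.fake_email(), which ships with postgresql_anonymizer:

-- Hypothetical column; replaces real addresses with plausible fake ones
SECURITY LABEL FOR anon ON COLUMN extension_owners.email
IS 'MASKED WITH FUNCTION anon.fake_email()';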

caution

This next step replaces data in the local database.

Since we are working on a local copy of the data, we can just use this function to replace anonymized columns in-place.

SELECT anon.anonymize_database();

We can see now this column has been anonymized.

postgres=# select user_name from extension_owners limit 10;
user_name
------------
First
They
Cell
Price
Them
Go
Parent
Republican
With
Between
(10 rows)

You can do further modification from here, for example masking and replacing additional columns, formatting columns, etc.

Done!

Now you have an anonymized database locally. From here, you can pg_dump to a file, or do something else!
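
For example, dumping the now-anonymized local copy to a file is just a plain pg_dump against the local container:

pg_dump 'postgres://postgres:postgres@localhost:5432' > anonymized-dump.sql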

If you think this kind of thing is cool, follow me on X (@sjmiller609) for more content. At Tembo, we are all about Postgres extensions. You can try out extensions on Tembo Cloud for free.

· 24 min read

In this episode, Ry and Alexander talk about OrioleDB (and the challenge of fighting bloat), fuzzy and vector search, and the challenges behind database management. Watch below, or listen on Apple/Spotify (or your podcast platform of choice). Special thanks to Alexander for joining us today!

Want to know more about something they mentioned? Here’s a starting point:

Did you enjoy the episode? Have ideas for someone else we should invite? Let us know your thoughts on X at @tembo_io or share them with the team in our Slack Community.

Transcript

[00:00:12] - Ry

Hi, I'm Ry Walker, founder of Tembo, a managed Postgres company. And today I have Alexander Korotkov from OrioleDB as my guest. Alexander, welcome to the show.

[00:00:24] - Alexander

Hello, thank you very much for inviting me. It's a pleasure to participate.

[00:00:32] - Ry

Awesome. Maybe I'd like to start by having you give us a quick background, like where did you grow up and what were you doing before you started working on Postgres?

[00:00:41] - Alexander

Even before Postgres?

[00:00:43] - Ry

Yeah, way back. Were you only like ten years old then?

[00:00:49] - Alexander

Yes, I started actually from web development. Yes. And for web development, Postgres became my favorite database management system. At that time there were basically two popular open source database management systems, MySQL and Postgres. And Postgres behavior looked way more consistent to me. And this is why Postgres became my favorite. And also a thing which attracted my attention was generalized indexing in Postgres, that Postgres even at that time had GiST and GIN indexes which you could apply to different data types, different search operators. And that was very interesting for me. And I also studied for a PhD in university and I was interested in fuzzy search and features like this. And in Postgres I found the fuzzystrmatch contrib module. And that module contained the levenshtein function which defines the editing distance between two strings, the number of editing operations. And I found that it doesn't work with multibyte encoding. So it just compares two strings byte per byte. And my first patch was to just fix this bug. I actually didn't know if I could produce it; I just downloaded the sources. Thankfully by that time compiling Postgres already wasn't a problem. So I just had Linux on my desktop and just cloned it.

[00:03:08] - Alexander

Probably at that time it was a CVS repository. I cloned it and it wasn't difficult to locate the place in the source code responsible for this function. And I just had to find which functions are responsible for getting lengths of multi-byte strings in Postgres and such. The work was very easy, and I submitted my first patch to Postgres, but then Robert Haas picked my patch up to work on it, to review and commit. And at that time I got that this community process is not so easy, because we had quite a long thread studying possible performance regressions and so on, and I rewrote this small patch many times. But finally we found a way where this patch not only fixes the problem of multibyte encoding, but also doesn't produce noticeable overhead when it's a single byte encoding or the strings just use single byte characters. In both of these cases, the overhead was negligible. And then Robert did commit my patch.

[00:04:40] - Ry

That's amazing. Yeah, it's interesting. How many lines of code was the patch, I wonder?

[00:04:46] - Alexander

I don't remember exactly. You could find it on the list, but this patch should be in the dozens of lines, probably 20 or something like this, really small. But also with this fix, I started to work on improving this function, improving the performance, because if you use the levenshtein function you are typically looking for strings which are close to your string. Right. And the thing is you basically don't care about large distances. For instance, you are looking for strings with editing distance three or less. And that means that if it would be four, you don't care how much bigger that distance is. And if you need only this, then you could easily, not easily, but you could significantly accelerate the Levenshtein calculation many times, and this functionality took even more work for me and Robert. But you could use it, it's levenshtein_less_equal. Nice function.

[00:06:09] - Ry

That's great. Yeah. I also came to Postgres from being a web developer prior and I've realized in recent weeks actually, that the reason why I went to Postgres and not MySQL is primarily I was using Ruby on Rails, which just kind of was Postgres first and then I really didn't like PHP. Hopefully you weren't a PHP developer, but the lamp stack had MySQL and PHP together and I always just like I don't want to go anywhere near anything near PHP. That's really not a great reason, but it's just a fact.

[00:06:48] - Alexander

Yes, probably one of the great features I discovered in Postgres in the early days was DDL transactions. So that if you need to do a database schema migration, you can wrap all the changes into a transaction, and if something goes wrong you can roll back. Yes, that's just amazing. And it's interesting that even old and mature database management systems like Oracle or MS SQL Server lack this functionality. I'm not sure about the present day, but at that time they definitely were lacking it, and this was very useful.

[00:07:36] - Ry

Yeah, that's great. So you built obviously you started with that first patch and worked on other fuzzy search stuff. Have you worked on any Postgres extensions?

[00:07:52] - Alexander

Yes, I continued to work on Postgres. Then I got to know Oleg and Teodor, who were the Russian contributors to Postgres, and I got familiar with their work. Some of their work I already knew, GiST and GIN. But from their other work, it was interesting for me that we could also accelerate search for LIKE patterns. Not just prefix patterns, but imagine you are looking for something that could be in the middle of the string. And there was at that time the pg_trgm module. But at that time it only supported trigram similarity search using indexes. But I found that it's pretty easy, if you decompose a string into trigrams, it's pretty easy to implement LIKE search. So you could just extract trigrams from the LIKE patterns and search for them. And thanks to the fact that trigrams are extracted all over the string, you can find this substring anywhere. And at that time I would say my feeling was just great. So it's just amazing. So with quite a small patch you could teach the database server some amazing advanced functionality, and I got even more encouragement when, sorry if I'm confusing the names, Hubert, I don't know, the blogger from Poland who was posting the "waiting for" posts to Planet PostgreSQL, he also posted "waiting for" this feature, LIKE search for pg_trgm.

[00:10:15] - Alexander

This was one of my first patches. Another thing is that during my PhD research I also researched split algorithms for the R-tree, and I found a more advanced algorithm and I pushed this algorithm into Postgres core and into PostGIS. That took time. Yes, because communities are quite conservative, but it was also good.

[00:11:00] - Ry

Yeah, great. Well, I want to talk about OrioleDB, but before we do that, I was thinking are you paying attention to the whole vector search since you spent so much time on Postgres search features or I don't know, at least sometime. I'm curious, are you kind of tracking what's going on with vector search?

[00:11:20] - Alexander

Yes. That's interesting. That's an interesting subject. Yes. While I was researching the split algorithms, I also experimented with the cube contrib module, which supported basically multidimensional rectangles with different numbers of dimensions, up to 100 dimensions. And what I found is that if you have low dimensionality, two, three, four or five, then when you are producing a split, your split will be good for one dimension, but almost doesn't differentiate the other dimensions. So you can imagine this: if you have a two dimensional space filled with points and you need to split them into two rectangles, there are just two good ways to do this, split vertically or horizontally. All other ways your rectangles would have a huge overlap. But things change when your dimensionality increases. There is a so-called Guttman quadratic split algorithm, and this algorithm actually does clustering. So it just tries to divide the set of points into two clusters. And it doesn't work well for low dimensionality, but for a high dimensionality space it becomes better than if you just pick one axis and split in this dimension. And that was interesting for me, and I got familiar with the curse of dimensionality, so that if you have a low dimensionality space, you can have some guaranteed and quite good search performance.

[00:13:36] - Alexander

Right? So imagine the best example is a unidimensional space. You can index this with just a B-tree and you have some guaranteed O(log n) time for a point search request. Right? But when you have high dimensionality, that becomes a problem, and uniform random data, which you could handle very well with low dimensionality, in high dimensionality it becomes almost unindexable. So if you have 1000-dimension vectors and you need to search for similarity, then search in uniform data would be almost impossible to accelerate, because you can't identify which particular dimension could give you a match. You just get the cumulative result of all dimensions, and it's almost impossible to accelerate. I have participated a bit in the improvement of pgvector, and with some developers of Supabase we really found that indexing methods, if they are applied to uniform data, then they give nothing. So indexing methods, when you have a lot of dimensions, they are based on some clustering. The idea is to find, how to say, the non-uniformity of the distribution, and then exploit it. And this is how indexing works. There are different interesting methods for indexing multidimensional space, but I think the favorite is the HNSW method.

[00:16:05] - Alexander

And I have read about this method in the scientific paper way before this AI Hype ten years ago. And that was interesting how far these methods from all others. So it's very self.

[00:16:26] - Ry

It seems like just vector search is just on the exact edge of search. And the LLM AI, like you said, it's kind of blending both sides. Well, that's cool. So I wanted to obviously chat about OrioleDB, I imagine. Are you spending trying to spend 100% of your time on OrioleDB, or? It's probably hard to I mean, it sounds like there's lots of great things happening in Postgres to distract you from your own work.

[00:16:57] - Alexander

The thing vector search definitely distract me. And with this AI Hype, it's hard to not be yeah, yeah, I know.

[00:17:06] - Ry

For say, I would just love to kind of have the top three or four things you're trying to accomplish with Oriole maybe and maybe give us a progress report.

[00:17:20] - Alexander

Yes. But before this, I'd like to mention about how I use AI in my work.

[00:17:26] - Ry

Oh, yeah, sure.

[00:17:27] - Alexander

So I have written the blog post, "No more vacuum, no more bloat." You may be aware it was at the top of the news, and it is interesting that I mostly generated this with ChatGPT. So I just wrote a short item list and asked ChatGPT to write me a blog post. Then I corrected it a little bit, then I added some graphs and asked ChatGPT to add another paragraph about them, and then it was done. And I was curious; I expected that I would be revealed and blamed for this. But actually the comments I got were, oh, this blog post is so well written.

[00:18:23] - Ry

I know, it's funny. I agree with you. I'll take just like, notes. Like, for example, here I've got like a list of 20 questions that I'm trying to ask you during this interview, possibly ask you. I guarantee you if I sent this list of 20 questions to ChatGPT and say, hey, make these questions better, it would crush it, give me all kinds of much better questions. But anyway, yeah, I agree. People who aren't using it are really missing out on a great assistant. All right, so back to Oriole. What are the top three or four.

[00:19:02] - Alexander

I'd like to first say something about the thing which brought me to Oriole. When I studied Postgres, it was just like magic how MVCC works. You can run multiple sessions in parallel, and each session will have its own snapshot of the data, and that was just amazing. I was very interested in what is under the hood; how it works from the user's side was perfect, but I always wondered how it was implemented internally. Because in Postgres, when you're doing an update, you have to mark the old tuple and insert the new tuple into the heap. And if a HOT update is not applied, then you also have to insert all the index tuples, even if the indexed values are not updated. And I wondered if we could do this better. I think I had these background thoughts for years. So I studied how MVCC is implemented in MySQL, and then how it's implemented in Oracle. It was very interesting for me to learn how it's implemented in Oracle, because I heard that Oracle has block-level undo. And then I thought, how could it work on a block level?
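
As a rough illustration of the heap behavior described above, here is a toy Python model (not Postgres internals): an update keeps the old tuple version in the heap, appends a new one, and, when the HOT optimization does not apply, adds a new entry to every index even if that index's key did not change.

```python
# Toy model only: heap-based MVCC where updates duplicate index entries.
heap = []                                  # list of tuple versions
indexes = {"by_id": [], "by_email": []}    # index name -> [(key, heap slot)]

def heap_insert(xid, row):
    heap.append({"xmin": xid, "xmax": None, "row": row})
    slot = len(heap) - 1
    indexes["by_id"].append((row["id"], slot))
    indexes["by_email"].append((row["email"], slot))
    return slot

def heap_update(xid, slot, **changes):
    old = heap[slot]
    old["xmax"] = xid                      # old version stays visible to old snapshots
    return heap_insert(xid, {**old["row"], **changes})

s = heap_insert(1, {"id": 42, "email": "a@example.com"})
heap_update(2, s, email="b@example.com")

# Two heap versions, and *both* indexes grew, even though "id" never changed.
print(len(heap), {name: len(entries) for name, entries in indexes.items()})
```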

[00:20:54] - Alexander

Because if two transactions modify the same tree page and this page gets split, and then one of the transactions is rolled back, then it's not a linear list of log records which you could just apply one after another. If you need to roll back some change which was made before the page split, you need to do some adjustments to it. And then I understood that I needed to go deeper and learn how it works, and then I learned more things, how the write-ahead log works and so on. I think in 2017 I started to work on the design of my own storage which could work around most of the shortcomings which I see in the Postgres engine. For sure there are no plain wins; in some situations this engine works worse. But I would just like to design a new trade-off which could be better in the typical situation. And I can highlight some things in OrioleDB. So the first thing it is fighting is bloat, and in order to eliminate bloat it has an undo log. So if you update a row, you just have to add the new version of the row into the undo chain, and you only need to update indexes if their values are updated.
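
For contrast, a toy sketch of the undo-log idea just described (purely illustrative, not OrioleDB code): the table keeps only the latest version of each row in place, older versions hang off an undo chain, and a secondary index is touched only when the column it covers actually changes.

```python
# Toy model only: undo-chain MVCC where indexes are updated only when needed.
table = {}         # primary key -> {"row": latest version, "undo": [older versions]}
email_index = {}   # email -> primary key

def insert(pk, row):
    table[pk] = {"row": dict(row), "undo": []}
    email_index[row["email"]] = pk

def update(pk, **changes):
    entry = table[pk]
    entry["undo"].append(dict(entry["row"]))   # keep old version for old snapshots
    old_email = entry["row"]["email"]
    entry["row"].update(changes)
    if "email" in changes:                     # index maintenance only if its key changed
        del email_index[old_email]
        email_index[entry["row"]["email"]] = pk

insert(42, {"id": 42, "email": "a@example.com", "name": "Ann"})
update(42, name="Anne")            # no index maintenance at all
update(42, email="b@example.com")  # only now does the email index change
print(len(table[42]["undo"]), "old versions in the undo chain,", len(email_index), "index entry")
```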

[00:23:03] - Alexander

And in OrioleDB, indexes are versioned trees as well. I have heard that it is so in Oracle, and I found that this is a very attractive idea. Thanks to that, index-only scans become very simple, because your secondary index already contains all the information and you don't need to look into any additional structures. I have heard that in Postgres, index-only scans are amazing until you run into some problem: if you have intensive workloads and so on, you might have a significant part of the visibility map zeroed, and your query could get into trouble. Yes, that can happen. And this is why I found versioned secondary indexes an attractive idea. OrioleDB has a mixed undo log containing both row-level and block-level records, and block-level records allow handling changes like eliminating the dead tuples of a page. So for instance, your page contains some tuples which are deleted but still visible to some transaction. Using the block-level undo log, you can issue a new version of this page, get rid of all the dead rows, and reclaim their space. But the transactions which need the old version can traverse the page-level undo log and find the tuples that they need. And another way to eliminate bloat is automatic page merging.

[00:25:12] - Alexander

Anyway, even if you have an undo log, it could happen that your workload leaves you with a lot of sparse pages: you have a lot of data inserted, but then the data was deleted. OrioleDB supports automatic merging of sparse pages to eliminate bloat. Okay, and you had some questions.
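
And a minimal sketch of the sparse-page merging idea (the page capacity and page sizes here are invented for illustration): adjacent pages whose remaining live tuples fit together are collapsed into one page, reclaiming the space that deletes left behind.

```python
# Illustrative only: collapse adjacent sparse pages when their contents fit in one page.
PAGE_CAPACITY = 100

def merge_sparse_pages(pages):
    merged = []
    for page in pages:
        if merged and len(merged[-1]) + len(page) <= PAGE_CAPACITY:
            merged[-1].extend(page)            # merge into the previous page
        else:
            merged.append(list(page))
    return merged

# Pages left sparse after heavy deletes: 60, 30, 80 and 3 live tuples.
pages = [list(range(60)), list(range(30)), list(range(80)), list(range(3))]
print(len(pages), "pages ->", len(merge_sparse_pages(pages)), "pages")   # 4 -> 2
```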

[00:25:34] - Ry

Yeah, I was just remembering that there was a blog post or a presentation where you said, like, here are the ten worst things about Postgres. I don't know the exact phrasing you chose, but yeah, basically here are ten problems with Postgres.

[00:25:52] - Alexander

That was a blog post by Rick Branson, I hope I pronounced the name correctly, and yes, I found it was quite a popular blog post. The most famous one is definitely the Uber blog post about why they moved from Postgres to MySQL. But this blog post from Rick was very interesting for me, because the issues he highlighted were a very good fit for my vision and for the things which I'm going to improve with OrioleDB.

[00:26:36] - Ry

You'll probably never be done with OrioleDB, because products are never finished, but how has your progress been through your initial target list of things you wanted to accomplish with Oriole? Are you making good progress through that? Give us a progress report.

[00:26:53] - Alexander

Yes, I'm making good progress, but currently the main target is not to add new features or handle more things; the current target is stability, to get more users. It doesn't work to have an amazing product if nobody uses it. And I would say the database world is quite conservative, because obviously people don't want to lose their data. This is why, before using some new database technology, you need to ensure that it's really stable and mature. And this is the main challenge for a new database management system or a new storage engine, especially when it's OLTP, and OrioleDB currently mainly targets OLTP, because OLTP is typically the source of your data, right? It's where your business operations happen. OLAP, for instance, could be less critical: you pull your data from the OLTP system and put it into OLAP for analysis, and if it disappears from the OLAP system you can just repeat the procedure. But that doesn't work for OLTP, which is the initial source of the data. So this is the main difficulty, I think, but we have already done very good work with the team on eliminating bugs. We definitely need more beta testers, though.

[00:28:46] - Ry

Yeah, I mean, it's a big challenge. It's similar for us as well: you have to create enough differentiation to get people to want to try it, but you can't go too far, because like you're saying, the further the distance is from what they consider stable, the riskier it is. But if you're too close to the same thing, there's not enough value to make the switch.

[00:29:12] - Alexander

Right.

[00:29:12] - Ry

So it's a tricky catch-22: you have to do something a little bit dangerous to get the attention. If it's too close, then people will be like, if it's 10% faster, who cares? Right?

[00:29:28] - Alexander

Yes, exactly. This is why I am highlighting the cases where OrioleDB is several times faster, or even dozens of times faster.

[00:29:42] - Ry

Have you now released kind of a stable candidate? You said you need more testers. Is that kind of your stage now?

[00:29:51] - Alexander

Yes. We are currently discussing with our team and advisors when we can mark a release candidate or the release. At some point, this is just, how to say, a decision of will. There is no strict procedure you can go through and say, okay, your product should now be beta, or your product should now be generally available. You just need to decide for yourself and make this decision weighing all the risks.

[00:30:30] - Ry

Yeah, it was funny. I'll tell you a quick story about that. With my previous company, Astronomer, we were at 0.1, 0.2, 0.3; I think we got to like 0.17. And there was nothing big happening in any release that would cause us to say, oh, this should be the 1.0. Then we were also thinking, and this was partly me thinking this, because I was like, let's just make the next one 1.0, goddamn it, but let's save the 1.0 for a big marketing push. I don't know that they ever did it. We waited so long that we never had a 1.0 of that platform; by the time we got to it, it was like, oh, let's rewrite it. I think it's a tricky thing to ship, but I'm a big fan of shipping early and often. And just mark your thing as an RC; it doesn't matter. It will attract more attention than that zero-dot version.

[00:31:35] - Alexander

Yes. Actually, the number in the version is marketing, like what you can find in a shop. The price for some good could be $20, but you go away, you come back, and now it's $30 crossed out to $25. One piece of advice I heard about version numbering was: if you have a new product, never number your version 1.0.0, because nobody will trust that it's stable. So you should number it something like 1.3.11.

[00:32:21] - Ry

Yeah, well, I'm kind of a big fan of CalVer, where the version of your software is basically the year plus the release count, like 23-dot-whatever.

[00:32:35] - Ry

And here's our sixth release in 2023. And that kind of gets rid of all of the traditional semver stuff. But again, that's kind of hard to do as well. Well, I had a bunch of extra little questions I wanted to ask you just to get to know you a little better. But if you had a magic wand that you could add any feature to Postgres, and tomorrow morning we wake up and it's there, what would it be? What's the most important thing, or what would you be most excited to see?

[00:33:06] - Alexander

My answer could probably sound selfish, but I have a patch set to improve the Postgres table access method API. This patch set definitely needs some work, and if I had a magic wand, I would like this work to be just done, with everything perfect and in the Postgres core. That would make OrioleDB a pure extension. But not only that, I think it could clear the path for more table access method implementations.

[00:33:48] - Ry

Got it. That's good, that's a great one.

[00:33:50] - Alexander

And one more thing, if you have a few minutes: Postgres extensibility is probably the idea which comes from Postgres's early design and remains Postgres's differentiating feature till now. A lot of the other ideas from that early design sound weird today, but extensibility does not.

[00:34:19] - Ry

Yeah, extensibility, honestly, I always just thought MySQL and Postgres were just two different databases. I didn't realize how much more extensible Postgres at least tries to be, and has been, and how many extensions exist for it. Once you start looking at that list, it's pretty amazing, all the work people have done to extend it. Is there anything you would say about Postgres that almost nobody agrees with you about? Do you have an opinion that's controversial, or that's just hard for people to agree with you about?

[00:35:03] - Alexander

Probably I don't have such an opinion. There are opinions where we have disagreements in the community, but I can't remember an opinion that leaves me standing alone. So probably not. Yeah, there are opinions where I end up in a minority, but nothing more than that.

[00:35:28] - Ry

Yeah, a lot of times that's just because you don't understand everything yet, right? People come in with a hot take and then the mailing list will educate them on other factors.

[00:35:40] - Alexander

Yes, there are disagreements, because we are writing patches to Postgres and we see these patches as improvements. But actually there is never a pure win. If you achieve something, you always lose something. That might not be obvious, but, I don't know, if you are adding too many SQL commands, your parser gets complicated, the parser state machine cannot fit into the processor cache and gets slower, and your source code also becomes bigger and harder to understand, and so on. And finally, we are just a group of people which needs to make some decision of will about where to go. No way is a clear win. We are trying our best not to discourage existing users, but even with the best efforts, there are some downsides for sure. At the end of the day, we need to negotiate and make a decision.

[00:36:54] - Ry

Where can people find you online? And in particular, what's the best way for them to get involved with testing OrioleDB?

[00:37:04] - Alexander

Yes, any person can reach me via email. And for testing Oriole, you just go to GitHub, download and compile the sources, or even easier, go to Docker Hub and get the Docker image, then just experiment with your data and your workload, raise an issue or a discussion, and share your experience.

[00:37:32] - Ry

Refresh us one last time: what are the use cases where Oriole is going to outperform vanilla Postgres most dramatically, I guess?

[00:37:43] - Alexander

So the situations are when you have a lot of updates, a very update-intensive workload; then OrioleDB can outperform thanks to the undo log. Another case is huge multi-core machines, where OrioleDB eliminates a lot of bottlenecks and can perform several times faster. Also, OrioleDB implements a row-level write-ahead log, so if you have a huge write-ahead log stream and you need geographical replication, OrioleDB can also help a lot. Awesome.

[00:38:31] - Ry

Okay, great. Thank you. Yeah. If there's anything else you want to share, feel free, but otherwise it was great chatting with you.

[00:38:40] - Alexander

No more from me. Thank you very much for inviting me. It's always a very lovely and friendly talk with you.