localfirst.fm
All episodes
January 14, 2025

#19 – Brooklyn Zelenka: UCAN, Beehive, Beelay

Sponsored by Convex and ElectricSQL
Transcript

0:00:00 Intro
0:00:00 we've restricted ourselves down to making things look like access
0:00:03 control lists on the outside.
0:00:06 And so it should feel very, very similar to doing things with
0:00:10 role-based access control using, say, OAuth.
0:00:14 That should all feel totally normal.
0:00:17 You shouldn't really have to think about it in any special way.
0:00:20 In the same way that, you know, if you have a sync server, other than having
0:00:22 to set up the sync server, or maybe you point it at an existing one, knowing that
0:00:26 it's there doesn't mean that you have to, like, design it from first principles.
0:00:30 Welcome to the localfirst.fm podcast.
0:00:33 I'm your host, Johannes Schickling, and I'm a web developer, a
0:00:35 startup founder, and love the craft of software engineering.
0:00:39 For the past few years, I've been on a journey to build a modern, high quality
0:00:42 music app using web technologies.
0:00:45 And in doing so, I've fallen down the rabbit hole of local-first software.
0:00:49 This podcast is your invitation to join me on that journey.
0:00:53 In this episode, I'm speaking to Brooklyn Zelenka, a local-first
0:00:57 researcher and creator of various projects, including UCAN and Beehive.
0:01:01 In this conversation, we go deep on authorization and access control
0:01:05 in a local-first decentralized environment and explore this topic
0:01:10 by learning about UCAN and Beehive.
0:01:12 Later, we are also diving into Beelay, a new generic sync server implementation
0:01:17 developed by Ink and Switch.
0:01:19 Before getting started, also a big thank you to Convex and Electric
0:01:23 SQL for supporting this podcast.
0:01:26 And now my interview with Brooklyn.
0:01:28 Hey Brooke, so nice to have you on the show.
0:01:31 How are you doing?
0:01:32 I'm doing great.
0:01:32 Super excited to be here.
0:01:33 I'm glad that we made this happen.
0:01:35 Thanks so much for having me.
0:01:37 I was really looking forward to this episode and honestly, I was quite nervous
0:01:42 because this is certainly bringing me to an aspect of local-first where I have
0:01:47 much less first hand experience myself.
0:01:49 I think overall local-first is a big frontier of pushing the boundaries of what's
0:01:54 possible technologically, et cetera.
0:01:57 And you're pushing forward even a further frontier here all around local-first auth.
0:02:03 So the people in the audience who are already familiar with your work,
0:02:08 I'm sure they're very thrilled for you to be here, but for the folks
0:02:11 who don't know who you are, would you mind giving a brief background?
0:02:15 Yeah, absolutely.
0:02:16 I'll maybe do it in slightly reverse chronological order.
0:02:20 So, these days I'm working on an auth system for local-first, mostly
0:02:25 focused on Automerge, called Beehive, which does both read controls with
0:02:29 encryption and mutation controls with something called capabilities.
0:02:33 I'm sure we'll get into that.
0:02:35 Prior to this, for a little over five years, I was the CTO
0:02:40 at a company called Fission.
0:02:41 So in 2019, we started doing local-first there.
0:02:44 And we worked on the stack we always called auth, data, and compute, and so
0:02:48 we ranged out way ahead on a variety of things, trying local-first, you know:
0:02:53 encrypted-at-rest databases, a file system, an auth system that has
0:02:58 gotten some adoption called UCAN, and a compute layer, IPVM. And prior to that,
0:03:04 I did a lot of web and, temporarily, did work with the Ethereum core
0:03:08 development community, mostly working on the Ethereum Virtual Machine.
0:03:11 That is super impressive.
0:03:13 I am very curious to dig into all of the parts really around
0:03:18 auth, data, and compute.
0:03:20 However, in this episode, I think we should keep it a bit more
0:03:23 focused, particularly on auth.
0:03:26 Maybe towards the end, we can also talk a bit more about compute.
0:03:29 Most of the episodes we've done so far have been very centric around data.
0:03:34 Only a few have also been exploring what auth in a local-first setting
0:03:39 could look like, but I think there is no better person in the local-first space
0:03:44 to really go deep on all things auth.
0:03:47 So through your work on Fission, and previous backgrounds, et cetera, you've
0:03:53 both participated in, contributed to, and started a whole myriad of different
0:03:58 projects, which are now really, like, at the forefront of those various fields.
0:04:03 One of them is UCAN.
0:04:04 You've also mentioned Beehive at Ink & Switch.
0:04:08 Maybe starting with UCAN, for those of us who have no idea what UCAN, that four-letter acronym, stands for?
0:04:18 UCAN
0:04:18 Yeah, absolutely.
0:04:19 So UCAN, U-C-A-N, User Controlled Authorization Networks, is a way of
0:04:25 doing authorization, so granting the ability to somebody else to perform
0:04:30 some action on a resource, in a totally peer-to-peer, local-first way.
0:04:36 It uses a model called capabilities.
0:04:39 So instead of having a database that lists all of the users and what
0:04:43 they can do, you get certificates that are cryptographically provable.
0:04:48 And so if I wanted to give you access to some resource I controlled, I
0:04:52 would sign a certificate to you.
0:04:54 And then if you wanted to give access to someone else, you
0:04:56 would sign a certificate to them.
0:04:58 And then when it came back to me, I could check that that whole chain was correct.
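The chain-of-certificates idea described above can be sketched in TypeScript. This is not the actual UCAN token format (real UCANs carry expirations, proof links, and policies); the `Cert` shape, `issue`, and `verifyChain` are illustrative names. The check is: each link must be signed by its issuer, and each link's issuer must be the audience of the previous link, starting from the resource owner.

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Illustrative certificate: an issuer delegates a capability to an audience
// by signing over the audience's public key and the capability name.
interface Cert {
  issuer: KeyObject;   // public key of whoever granted the capability
  audience: KeyObject; // public key of whoever received it
  capability: string;  // e.g. "store/write"
  signature: Buffer;
}

const der = (k: KeyObject) => k.export({ type: "spki", format: "der" });
const payload = (audience: KeyObject, capability: string) =>
  Buffer.concat([Buffer.from(capability), der(audience)]);

function issue(
  issuerPriv: KeyObject,
  issuer: KeyObject,
  audience: KeyObject,
  capability: string
): Cert {
  return {
    issuer,
    audience,
    capability,
    signature: sign(null, payload(audience, capability), issuerPriv),
  };
}

// A chain is valid when it starts at the resource owner's key, every
// signature checks out, and each link's issuer is the previous audience.
function verifyChain(owner: KeyObject, chain: Cert[]): boolean {
  let expected = owner;
  for (const cert of chain) {
    if (!der(cert.issuer).equals(der(expected))) return false;
    if (!verify(null, payload(cert.audience, cert.capability), cert.issuer, cert.signature)) {
      return false;
    }
    expected = cert.audience;
  }
  return true;
}

// Alice (resource owner) delegates to Bob; Bob re-delegates to Carol.
const alice = generateKeyPairSync("ed25519");
const bob = generateKeyPairSync("ed25519");
const carol = generateKeyPairSync("ed25519");
const toBob = issue(alice.privateKey, alice.publicKey, bob.publicKey, "store/write");
const toCarol = issue(bob.privateKey, bob.publicKey, carol.publicKey, "store/write");
```

When the chain comes back to Alice, `verifyChain(alice.publicKey, [toBob, toCarol])` succeeds with no central list to consult; a chain missing a link, or rooted at the wrong key, fails.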
0:05:02 And so people have used this to, do all kinds of things.
0:05:05 So at Fission, we were using it for CRDTs.
0:05:08 For example, there's a CRDT-based file system that we had developed,
0:05:12 using UCAN to guard whether or not you were allowed to write into it.
0:05:15 There's a bunch of teams now using it for managing resources.
0:05:19 So, storage quotas:
0:05:20 how much are you allowed to store inside of some data volume?
0:05:23 And for them, it's really helpful because then they can say, okay,
0:05:26 here's a certificate from us to, you know, say, a developer, and then they can
0:05:31 portion that out to all of their users without having to always register all of
0:05:35 their users back to the storage company.
0:05:37 And so it can both lower the amount of interaction that they have to do
0:05:41 with, you know, registering all of these different people, but it also
0:05:44 means that they can scale up their service really nicely. As long as
0:05:48 they know about the root signature,
0:05:50 they can scale horizontally very, very easily, or interact with other teams very
0:05:55 easily by just issuing them certificates.
0:05:56 So, like, people are doing that kind of thing.
0:05:58 So, you've mentioned the term capabilities before, and I think
0:06:01 that's also a central part in UCAN.
0:06:04 I'm most familiar with auth from my more traditional background of, like, building
0:06:08 more centralized server applications, et cetera, and how you implement auth is
0:06:13 always very, very dependent on the kind of application that you want to build.
0:06:17 If you want to start out a bit more easily, then you could maybe lean
0:06:21 on some of the primitives that a certain technology or platform is
0:06:25 giving you, maybe using Postgres and, sort of, the role-based
0:06:29 access control patterns that you have in Postgres, or maybe something
0:06:33 even as off-the-shelf as Firebase.
0:06:36 Is this sort of, like, a useful mental model, to think that UCAN
0:06:40 gives me similar building blocks? Or how much more fine-granular can
0:06:45 I get with what UCAN offers me?
0:06:48 Yes, it's a great question.
0:06:50 So, in role-based access control, or any of these
0:06:55 access-control-list-based systems, right,
0:06:59 you put up a database that has, you know, a list of users and what they're able to do.
0:07:05 So often their role, are they an admin?
0:07:07 Are they a writer?
0:07:08 Are they a reader?
0:07:08 You know, all of these things.
0:07:10 And to update that list, you have to go to that database, update that
0:07:15 database, and on every request that you make, you have to check the list.
0:07:20 So sometimes we call this like, it's like having a bouncer at a club.
0:07:23 You know, you show up, you show them your ID.
0:07:25 They check, are you on the VIP list?
0:07:27 And then you're allowed into the club or not. And what those rules are is set
0:07:31 by, you know, that bouncer, right?
0:07:34 These are the only rules, no others.
0:07:36 In a capabilities world, the analogy is often to having, like, a ticket to go
0:07:40 see a movie. So this last weekend, I went to go see Wicked; it was awesome.
0:07:45 But I bought my ticket online, it showed up in my email, they didn't ID me on
0:07:49 the way in, I just showed them my ticket and they're like, Oh, great, yeah,
0:07:52 Theater 4, you can go in.
0:07:54 So as long as I had that proof with me,
0:07:57 I'm allowed in.
0:07:58 They didn't have to check a list.
0:07:59 There was no central place to look.
0:08:02 Capabilities are not a new model.
0:08:05 They've existed for some time.
0:08:07 In fact, a big part of the internet infrastructure runs on top of
0:08:11 capabilities as well, or a subset of them.
0:08:15 But it hasn't found its way as much into applications because we're
0:08:18 so used to access control lists.
0:08:20 The granularity that you mentioned before is really interesting because,
0:08:24 in the capability system, anytime I make that delegation to somebody else,
0:08:28 I say, you're allowed to use this thing, or then you go to somebody else
0:08:31 and say, you can also use this thing.
0:08:33 You can grant them the ability to see or to use that,
0:08:37 or fewer capabilities.
0:08:38 So if it was like, here's a terabyte of storage, you could turn around and say,
0:08:42 well, here's only 50 MB, to somebody.
0:08:44 And so you can get as granular as you want with it.
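That attenuation rule, that you can pass on the same rights or fewer but never more, can be sketched like this. The `StorageGrant` and `delegate` names are made up for illustration; they only show the subset check at the heart of capability attenuation.

```typescript
// Illustrative grant: how many bytes the holder may store.
interface StorageGrant {
  maxBytes: number;
}

// Attenuation: a re-delegation may only shrink what the delegator
// holds, never grow it.
function delegate(held: StorageGrant, requested: StorageGrant): StorageGrant {
  if (requested.maxBytes > held.maxBytes) {
    throw new Error("cannot delegate more than you hold");
  }
  return requested;
}

// The terabyte example from the conversation: hand out only 50 MB of it.
const terabyte: StorageGrant = { maxBytes: 1_000_000_000_000 };
const fiftyMB = delegate(terabyte, { maxBytes: 50_000_000 });
```

Attempting the reverse, delegating a gigabyte out of a 50 MB grant, throws, which is exactly the "no escalation" property the chain of custody relies on.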
0:08:47 And there's never any confusion about who's acting in what way, right?
0:08:54 So in a traditional system, if we had, you know, access control lists,
0:08:59 and you, say, ran a service
0:09:02 between the user and me, and they made a request to you:
0:09:05 Well, they only have a link to you and you only have a link to me.
0:09:08 So when you'd make the request to me, you'd be using your terabyte of storage.
0:09:12 And so there are some cases where that can confuse the resource.
0:09:16 So it's like, oh yeah, you can totally store it, you know, use a terabyte
0:09:19 of storage, even though the actual user shouldn't be able to do that.
0:09:22 With capabilities, we get rid of that completely.
0:09:25 We have this entire chain of custody, basically, of this.
0:09:28 As granular as you want to get, it's very clear on every request,
0:09:32 what that request is allowed to do.
0:09:34 So I think this is going to become really important for things like LLMs and other
0:09:38 sorts of automated agents, where you can tell it, Hey, go do things for me, but
0:09:43 not with all of my rights, not as sudo.
0:09:46 Only with, in this scenario for the next five minutes, these things
0:09:50 are what you're allowed to do.
0:09:51 And even if it hallucinates some other intention, those are the
0:09:54 only things it's able to do.
0:09:55 Yeah, I think this is such an important aspect,
0:09:59 since I think you don't even need to reach as far as giving agency to an
0:10:06 agent, to an AI. But even if you want to go a bit more dumb and a bit more
0:10:11 traditional, if you want to use some off-the-shelf SaaS service, then maybe that
0:10:17 thing integrates with your Google account.
0:10:20 Then you also, like, need to give the thing access somehow.
0:10:23 So you do like the, OAuth flow with Google and then it asks you
0:10:27 like, Hey, is it okay that we have access to all of those things,
0:10:30 that we can do all of those things?
0:10:33 And even though Google already offers some pretty fine-granular things there,
0:10:37 often I feel like, Oh, actually I want to make it even more fine granular.
0:10:42 Wait, you're going to have like access to all of my emails.
0:10:44 Can I maybe just give you access to my invoice emails
0:10:48 if this is an invoicing thing?
0:10:50 So I feel like it's both a bit overwhelming to make all of those
0:10:54 decisions upfront, like what should be allowed, both from an application
0:10:59 end-user perspective, me using the thing, but then particularly also from, like,
0:11:03 an application developer perspective.
0:11:05 And, yeah, it feels like a really, really important aspect of using the
0:11:10 app and building, designing the app.
0:11:12 And if that is not intuitive and ergonomic, then I feel
0:11:18 everyone's going to suffer.
0:11:19 The application developer is probably just going to wing it, and
0:11:23 that will mean probably too coarse of a granularity for application users, etc.
0:11:30 So I'm really excited that you're pushing forward on this.
0:11:33 Maybe also to draw the analogy between more traditional OAuth
0:11:38 flows and what UCAN is providing:
0:11:40 should I think about UCAN as a replacement for OAuth, from, like, both an
0:11:46 end-user perspective as well as an application developer perspective?
0:11:50 Yeah, exactly.
0:11:52 So the underlying mechanism is different, but we really wanted
0:11:56 it to feel as familiar as possible.
0:11:59 So even the early versions of UCAN used the same token
0:12:02 format and things like this.
0:12:04 We've since switched over to some more modern formats.
0:12:08 There are problems with JWTs.
0:12:10 But yeah, exactly.
0:12:11 You can think of it as local-first OAuth, is one way of thinking about it, exactly.
0:12:16 Right.
0:12:17 So as an application developer, I need to make up my mind once to
0:12:22 say, like, this is what's possible,
0:12:23 this is what is allowed, and, like, define that, and then the system
0:12:28 enforces those rules. But often I, as an application developer, get it
0:12:32 wrong, and I need to, like, either make the rules more permissive
0:12:38 or less permissive over time.
0:12:40 And similar to how I might get a database schema wrong and then later need
0:12:46 to do those dreaded database schema migrations, what is the equivalent
0:12:49 of a schema migration, but for UCAN capability definitions, et cetera?
0:12:56 So all of the information that you need to fulfill a request in UCAN
0:13:00 is contained in the token itself.
0:13:02 So, these days we have a little policy language, think of it a little bit
0:13:06 like SAML, inside the token.
0:13:08 And it says, okay, when you go to actually do something with this token, the action
0:13:14 has to match the following criteria:
0:13:16 you're sending an email,
0:13:17 so the to field has to only be people inside of the company.
0:13:22 Or you can only send newsletters on Mondays, or whatever it is.
0:13:28 Right.
0:13:28 And you can scope that down arbitrarily, syntactically.
0:13:32 So updating those policies is just issuing a new certificate, to say
0:13:35 this is what you're allowed to do now.
0:13:36 And, you know, you can revoke the old ones if that's needed.
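The token-policy idea described here can be sketched as predicates checked against an action. Note the hedge: UCAN's real policy language is a declarative syntax carried inside the token, not TypeScript functions; `EmailAction`, `allowed`, and the example predicates are illustrative only.

```typescript
// Illustrative action shape for an email-sending request.
interface EmailAction {
  kind: string;
  to: string[];
}

type Predicate = (action: EmailAction) => boolean;

// A token's "policy": every predicate must hold for the action to proceed.
function allowed(policy: Predicate[], action: EmailAction): boolean {
  return policy.every((check) => check(action));
}

// Example policy: may only send email, and only to company addresses.
const companyOnly: Predicate[] = [
  (a) => a.kind === "email/send",
  (a) => a.to.every((addr) => addr.endsWith("@example.com")),
];
```

Updating a policy is then just issuing a new token with a different predicate list, which matches the "schema migration" question: no central database row to alter, only a new certificate (and optionally a revocation of the old one).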
0:13:40 But I think the more interesting part of this actually is on the far other end.
0:13:44 So we were talking about, you know, the developer sets these policies.
0:13:47 And that's true, I would say, the majority of the time.
0:13:50 But it's not very... it doesn't respect user agency, right?
0:13:55 You're giving the developer all of the agency, but the user's the one
0:13:58 who owns whatever, let's say that it's a text editing app, right?
0:14:02 You know, so they own the document.
0:14:04 Why can't they decide, you know, when they share with somebody else what they
0:14:07 should be able to do with that document?
0:14:09 So in, say, you know, Google Docs, you've got that little share button in the top
0:14:12 corner, and then it says, you know, invite people, and then you can say, well, they're
0:14:15 an editor, and this person is, you know, an admin, and this is another viewer.
0:14:19 This person can only comment.
0:14:21 I think the UI is,
0:14:22 you know, usually going to stay like that, but you could add whatever
0:14:26 options you wanted in there, right?
0:14:28 Why not?
0:14:29 So when we were doing, back at Fission, the file system work, you could scope
0:14:33 down to say, like, well, you're allowed to write into only this directory, for
0:14:37 example, and that was very, very flexible.
0:14:39 Or, you're allowed to write files under a certain size limit, right?
0:14:43 And so the user now can make these decisions of like, I'm giving
0:14:46 you access to my file system.
0:14:48 I only want you... you know, maybe I'm thinking back to my school days,
0:14:52 you know, a teacher having students submit assignments to them.
0:14:55 Well, you can only submit them to this one directory and I don't
0:14:59 want you filling up my entire disk.
0:15:00 So they have to be under a gigabyte or whatever, right?
0:15:04 And so you can imagine scenarios like this, where we're now inviting
0:15:07 the end user to participate in what should the policy be.
0:15:11 It's not all set completely.
0:15:13 The developer can absolutely set it in advance, but you can
0:15:16 also then refine it further and further, for the user's intention.
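The teacher example above, a delegation scoped to one directory with a per-file size cap, can be sketched as a small check. `WriteScope` and `mayWrite` are hypothetical names for illustration, not part of UCAN or WNFS.

```typescript
// Illustrative write scope a user might hand out: a directory the grantee
// may write into, and a per-file size cap.
interface WriteScope {
  pathPrefix: string;
  maxFileBytes: number;
}

// A write is in scope only if it targets the granted directory and
// stays under the size limit.
function mayWrite(scope: WriteScope, path: string, sizeBytes: number): boolean {
  return path.startsWith(scope.pathPrefix) && sizeBytes <= scope.maxFileBytes;
}

// The teacher's refinement: submissions only, each under a gigabyte.
const submissions: WriteScope = {
  pathPrefix: "/classes/cs101/submissions/",
  maxFileBytes: 1_000_000_000,
};
```

The developer's coarse policy and the user's refinement compose naturally here: the user's scope is just a further attenuation of whatever the developer granted.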
0:15:19 Right.
0:15:19 I love that.
0:15:20 Since particularly now, with, like, LLMs and AIs in general, a non-technical
0:15:26 user can, just in the way they would talk to another person, say, like,
0:15:30 Hey, I want to give Alice access to this file, but Alice is only allowed
0:15:36 to like read the first page here.
0:15:38 The second two pages, those are like my private notes.
0:15:42 Please don't give anyone access to this.
0:15:44 You know what?
0:15:44 Like actually Alice is allowed to also like comment on it.
0:15:48 Like, just from, like, a very colloquial sentence like that,
0:15:52 a computer can now derive those capabilities very accurately,
0:15:56 represent them to the user, like, Hey, does this look right to you?
0:16:00 And level up the entire application user experience.
0:16:04 So it's very reassuring to me that all of this is built on top of very sound
0:16:10 cryptography. However, even though I've studied computer science and, like,
0:16:14 I have done my cryptography classes,
0:16:17 that being said, that's not my day-to-day thing.
0:16:20 And as an application developer, I'm trying to steer away from like low
0:16:25 level cryptography things as much as possible, just because I don't
0:16:28 consider myself an expert in this.
0:16:31 So it's very good to know that everything there is built on top of very solid
0:16:36 cryptography. But as an application developer, how much do I
0:16:40 need to deal with, like, signing things, et cetera, or how much of that is
0:16:45 abstracted away from what I'm dealing with?
0:16:47 Yeah.
0:16:48 So I would say that there's two layers here that people find,
0:16:52 correctly find, scary, myself included, right?
0:16:56 Cryptography and auth in general, both super scary topics.
0:16:59 I remember, you know, as a web dev, whatever, 10 years ago, adding in a
0:17:04 web app, you know, the auth plugin, and kind of going, if I don't
0:17:09 touch it, hopefully it'll work, right?
0:17:11 Really, the goal with all these projects was to hide as much of the scary
0:17:15 complexity in there as possible.
0:17:18 So we handle all of the encryption and signing and all of this
0:17:21 stuff in a way that should make it, if we do our job well,
0:17:24 completely invisible to the developer.
0:17:27 So, you know, we haven't talked about Beehive very much.
0:17:29 Beehive, which is this project I'm doing at Ink & Switch
0:17:33 to add access control to Automerge,
0:17:36 has both an encryption side, so that's read controls, and then capabilities for
0:17:41 these mutations, or write controls.
0:17:44 and for encryption, there's a bunch of things that have to happen.
0:17:48 We have to serialize things in an efficient way.
0:17:51 We have to chunk them up.
0:17:52 We have to make sure that we share the encryption key with everyone,
0:17:57 but nobody else, right?
0:17:59 And that could be thousands of people, potentially, and we've set
0:18:02 ourselves these goals of, you know, you should be able to run
0:18:05 this inside of a large organization or a medium-sized organization.
0:18:08 How do you do all that stuff efficiently?
0:18:09 And our goal is you should be able to say, add these people, and it just works.
0:18:16 You do all your normal Automerge stuff, and, you know, when you
0:18:19 persist to disk, or when you send it out to the network, then it gets
0:18:22 encrypted, then it gets secured, then it gets signed, all of this stuff.
0:18:25 And you don't have to worry about any of it.
0:18:27 when you set up Beehive, it generates keys, it does all the key
0:18:31 management for you, it does all of the key rotation, all of this stuff.
0:18:35 So, again, it's one of these things where it's like, I'm really excited about this,
0:18:40 and it's, like, super cool to get to work on.
0:18:42 And there's a lot of interesting detail on the inside, but in an ideal
0:18:47 world, nobody has to think about this other than I want to grant these
0:18:50 rights to these people and everything else is taken care of automatically.
0:18:54 I love that.
0:18:55 So you've explained initially that UCAN happened as a project while you've been
0:19:01 working on various projects at Fission,
0:19:05 and right now you're mostly focused on Beehive.
0:19:08 So can you share a bit more what the impetus was for Beehive coming
0:19:14 into existence, and then get into what Beehive is exactly?
0:19:19 Beehive
0:19:19 Absolutely.
0:19:20 So, you know, we started UCAN very, very early, in 2020. It came out of
0:19:26 normal, regular product requirements of, like, oh, well, we probably want
0:19:30 everyone to read this document.
0:19:32 How do we do that?
0:19:33 Or I don't want somebody to fill up my entire disk.
0:19:36 How do we prevent that?
0:19:37 And, that went through a bunch of iterations and we, we had a lot
0:19:40 of learnings come out of that.
0:19:42 I'd say that really the big one was: in a traditional app stack, you have data
0:19:47 at the bottom, you know, let's say Postgres, and that's your source of truth.
0:19:49 And then above that, you have some compute, maybe you're running,
0:19:52 whatever, Express.js,
0:19:53 or Rails, or Phoenix, or, you know, one of these.
0:19:57 And then on top of that, you put in an Auth plugin, right, that uses all
0:20:02 the facilities of everything below it.
0:20:04 But that requires that you have a database that has all this information
0:20:10 in it that lives at a location.
0:20:11 We call this, internally at Ink & Switch, auth-as-place.
0:20:15 Right?
0:20:15 Because your auth goes to somewhere, right?
0:20:18 And on every request, you present your ID, they go, okay, sure, you know, here's
0:20:22 a temporary token, then you hand that to the application, the application
0:20:25 checks with the auth, you know, server again, and you do this whole loop.
0:20:28 And that has, you know, problems with latency, if you go offline,
0:20:32 this doesn't work, and it doesn't scale very well, right?
0:20:34 Like, even Google ran into problems with this and started,
0:20:37 adjusting their auth system.
0:20:38 We found at Fission, and I think this very much holds true, like
0:20:42 we just kept learning this over and over again, that you can't rely on that system.
0:20:47 In fact, auth has to go at the bottom of the stack.
0:20:50 Your auth logic, the thing that actually does the guarding of your
0:20:55 data has to move with the data itself.
0:20:57 So we call this "auth as data".
0:20:59 So for read control, it's no longer, oh, I'm making a request to a web server and
0:21:04 they may or may not send something to me.
0:21:05 It's, I've encrypted it.
0:21:06 Do you have the key?
0:21:08 Yes or no.
0:21:09 If you
0:21:10 have the key,
0:21:10 you can read it.
0:21:11 If you don't, you can't.
0:21:11 And it doesn't matter where you are.
0:21:14 You could be on a plane disconnected from the internet.
0:21:16 You can decrypt the data, right?
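The "auth as data" read gate can be shown in miniature with symmetric encryption. This is emphatically not Beehive's actual scheme (which involves key agreement, rotation, and chunking); the `seal`/`tryOpen` names are made up, and the only point is that reading depends on key possession, not on reaching a server.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// A sealed blob: AES-256-GCM ciphertext plus its nonce and auth tag.
interface Sealed {
  iv: Buffer;
  ciphertext: Buffer;
  tag: Buffer;
}

function seal(key: Buffer, plaintext: Buffer): Sealed {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Works identically online or on a plane: key present yields plaintext,
// no key yields nothing. No server is consulted either way.
function tryOpen(key: Buffer | null, box: Sealed): Buffer | null {
  if (key === null) return null;
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]);
}

const docKey = randomBytes(32);
const box = seal(docKey, Buffer.from("my document bytes"));
```

Because GCM is authenticated, a holder of the wrong key does not get garbage plaintext; decryption fails outright, which is the behavior you want when the ciphertext itself is the thing being synced around.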
0:21:19 So we developed these ideas with UCAN and the Web Native File System
0:21:23 in particular. Fission unfortunately didn't make it earlier this year,
0:21:27 or, I'm not sure when this will be released, early in 2024.
0:21:31 And Ink & Switch reached out.
0:21:32 So we've known those folks for a while, 'cause we've been, you
0:21:34 know, obviously working in the same space for a while, and PVH, the lab
0:21:39 director, was actually an advisor at Fission, and said, Hey, we have a bunch
0:21:42 of people that are interested in getting auth for Automerge in particular.
0:21:48 could you apply UCAN and WNFS to Automerge?
0:21:53 And I said, I don't see why not.
0:21:55 Right.
0:21:56 And so we looked at it a little bit deeper and went, well,
0:21:59 yes, like, we could use these things directly, but they're tuned
0:22:02 for slightly different use cases.
0:22:04 UCAN is extremely powerful.
0:22:06 It's very flexible.
0:22:07 and it has a bunch of stuff in it for this, you know, network
0:22:10 layer, in addition to CRDTs.
0:22:13 You pay for that in space, right?
0:22:15 The certificates get a little bit bigger.
0:22:17 And so we said, well, okay, maybe, you know, we want these
0:22:20 documents to be as small as possible.
0:22:23 You know, there's been a lot of work in Automerge to do compression, right?
0:22:26 Really, really, really good compression on them.
0:22:28 So the documents are tiny and, you know, you're not going to get that with UCAN.
0:22:32 So could we take the principles and the learnings from UCAN and
0:22:35 WNFS and apply them, to Automerge?
0:22:37 And so ultimately that's what we've done.
0:22:41 And there are a couple of different requirements that
0:22:43 have come out of it as well.
0:22:44 So it's tuned for a slightly different thing.
0:22:46 But essentially, Beehive says: what if we had end-to-end encryption?
0:22:50 So in the same way that, you know, say, Signal end-to-end encrypts your chats,
0:22:55 what if I had end-to-end encrypted documents
0:22:58 that only certain people could write into, and I can control who can write into them?
0:23:03 Has there been any prior art, in regards to CRDTs, to fulfill those sorts of,
0:23:09 like, end-user-driven authentication and authorization requirements?
0:23:14 There's some nearer-term stuff that was also exploring things with CRDTs.
0:23:19 But, you know, if you go really, really, you know, further back,
0:23:22 there's, uh, the Tahoe Least-Authority File System, for example,
0:23:27 which was, you know, this encrypted-at-rest file system, capabilities
0:23:30 model, you know, the whole thing.
0:23:32 Mark Miller was doing capabilities-based work going back into, you know, uh, the
0:23:38 late 90s. There's capability stuff that goes even further back, but he, you
0:23:41 know, really did the work that everybody points at in this stuff.
0:23:44 But for CRDTs and for a local-first context where we don't assume at
0:23:48 all, like there's no server in the middle whatsoever, we may have been
0:23:54 the first to do this at Fission.
0:23:55 It's, it's possible.
0:23:56 I mean, when we got started, the local-first essay hadn't
0:23:59 even been published, right?
0:24:00 We were doing local-first without, without the term.
0:24:02 but there was a bunch of others in the space.
0:24:04 So, Serenity Notes has done related work; Matrix and Signal, obviously, have done
0:24:09 a bunch of the end-to-end encryption stuff; and localfirst/auth is
0:24:13 a project that has also worked with Automerge to do similar things.
0:24:17 So most of these projects showed up after the fact.
0:24:20 But yeah, so we're drawing from, in fact, we've talked to all these
0:24:23 people, and all of the fantastic work that they've done over the past few
0:24:26 years, and collected the learnings from them into Beehive.
0:24:31 That's awesome.
0:24:32 I would love to get a better feeling for what it would mean
0:24:35 to build an app with Beehive.
0:24:38 My understanding is that Beehive right now is very centric around Automerge.
0:24:42 However, it is designed in a way that over time, other CRDT systems,
0:24:48 other sync engines, et cetera could actually embrace it and integrate
0:24:52 it into their specific system.
0:24:54 I would like to get into that in a moment as well, but zooming into
0:24:58 the Automerge use case right now, let's say I have already built a
0:25:02 little side project with Automerge.
0:25:04 I have like some Automerge documents that are happily syncing the
0:25:09 data between my different apps.
0:25:11 So far I've maybe
0:25:13 put the entire thing... maybe I don't even have any auth fences around it at all.
0:25:19 Hopefully no one knows the endpoint where all of my data lives.
0:25:22 And if so, okay,
0:25:24 it's, like, not very sensitive data.
0:25:26 Or maybe I'm running all of that behind, like, a Tailscale network or something
0:25:30 like that, which I think in a lot of use cases, simpler use cases, can also
0:25:34 be a very pragmatic approach, by the way,
0:25:37 when you can run the entire thing already, like, in a fully secured frame, like
0:25:44 a guarded network, and you're just going to run this for yourself,
0:25:47 or, like, in your home network or for your family, and you're all on, like, the
0:25:51 same Tailscale WireGuard network.
0:25:54 I think that's also a very pragmatic approach.
0:25:56 But let's say I want to build an app that I can share more publicly on the
0:26:01 internet, where maybe I want to build a tldraw-like thing where I can send over
0:26:06 a link where people can read it, but they need to have special permissions to
0:26:11 actually also write something into it.
0:26:14 I want to build the thing with Automerge.
0:26:16 What does my experience look like?
0:26:18 Yeah.
0:26:19 There are, I would say, two parts to that question, right?
0:26:22 One is: I have an existing document,
0:26:25 how do I migrate it in?
0:26:27 And, you know, could I use it with something... you know, you alluded to
0:26:30 other systems, in the future.
0:26:33 And: what does the actual experience of building something
0:26:36 with Beehive look like?
0:26:38 So Beehive is still in progress.
0:26:40 We're planning to have a first release of it in Q1.
0:26:44 And, you know, we're currently going at this with the viewpoint
0:26:47 that, like, adding any auth is better than not having auth right now.
0:26:50 So, like, there's definitely further work where we want to really
0:26:54 polish off the edges of this thing, but getting anything into people's hands is
0:26:57 better than not having it, right?
0:27:00 And there are some changes that we need to make to Automerge because,
0:27:04 as I mentioned before, you know, auth lives at the bottom of the stack, so
0:27:08 anything above in the stack needs to know something about the things below.
0:27:12 Auth being at the bottom means that if you want to do, in particular, mutation
0:27:15 control, Automerge needs to know how to ingest that mutation.
0:27:18 So we do need to make some small changes to Automerge to, to make this work.
0:27:22 But the actual experience is, we're bundling it directly into Automerge,
0:27:26 or the current plan at least is we're bundling it directly into the Automerge
0:27:30 Wasm, and then exposing a handful of functions on that, which is: add
0:27:36 member at a certain authority level,
0:27:40 remove member,
0:27:41 and that's it.
0:27:42 so your experience will be, we're going to do all the key management for you,
0:27:46 behind the scenes, under the hood.
0:27:48 if you have an existing document, it'll get serialized and encrypted
0:27:53 and put, you know, into storage.
0:27:56 And you can add other people to the document
0:27:58 by inviting them using add member, or remove them from it using remove member.
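As a rough sketch of the surface described above, add member at an authority level plus remove member, here's a toy TypeScript model. Everything in it (the `DocHandle` class, the `Authority` levels, the `did:key` ids) is a made-up illustration of the shape, not the actual Beehive API:

```typescript
// Toy sketch of the described surface: add/remove members at an authority
// level, with key management hidden behind it. Names here are hypothetical.
type Authority = "pull" | "read" | "write" | "admin";

class DocHandle {
  private members = new Map<string, Authority>();

  // Add a member (a device, group, or document id) at some authority level.
  addMember(id: string, authority: Authority): void {
    this.members.set(id, authority);
  }

  // Remove a member; re-keying would happen behind the scenes in a real system.
  removeMember(id: string): void {
    this.members.delete(id);
  }

  authorityOf(id: string): Authority | undefined {
    return this.members.get(id);
  }
}

const doc = new DocHandle();
doc.addMember("did:key:zAlice", "admin");
doc.addMember("did:key:zSyncServer", "pull"); // a sync server is just another member
doc.removeMember("did:key:zSyncServer");
```

The point of the sketch is the small surface area: two mutating calls, with keys and encryption invisible to the caller.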
0:28:03 maybe, maybe also worth noting, this gives you a couple extra, concepts to work with.
0:28:08 So today we have documents, and you can have a whole bunch of them, and
0:28:11 they're really independent pieces, right?
0:28:14 And maybe they can refer to each other by, you know, an Automerge URL.
0:28:17 instead, or in addition, I should say, not instead, you want to be able
0:28:22 to say, I'm building a file system.
0:28:24 If I give you access to the root of the file system, you should have access to
0:28:27 the entire file system.
0:28:28 I don't want to have to share with you every individual thing.
0:28:32 So we have this concept of a group.
0:28:34 so you have your individual device, you have groups, and you have documents.
0:28:39 Each individual device has its own, under the hood, you don't have to worry about
0:28:43 this specific detail, but has its own key.
0:28:45 So it's uniquely identifiable.
0:28:48 Somebody steals your phone, you can kick your phone out of the group, right?
0:28:52 Or out of the document and that, that's fine.
0:28:54 then we have groups.
0:28:55 So let's say that I have a group for everyone at Ink & Switch.
0:28:59 And then I can add everybody to that, but it doesn't have
0:29:01 a document associated with it.
0:29:03 It's purely just a way of managing people and saying, I want to add
0:29:07 everybody in this group to this document.
0:29:10 Right?
0:29:11 And so you can have groups contain users and other groups.
0:29:15 Then you have documents, which are groups that have some
0:29:18 content associated with them.
0:29:19 So I say on this document, here's who's allowed to see it.
0:29:21 So it could be individuals or other groups or other documents.
0:29:25 Other documents is interesting because I can say then you have
0:29:28 access to this document, this document represents a directory.
0:29:31 And so you also have access to all of its children, right?
0:29:33 In a, in a file system, you can do things like this.
0:29:36 So Add member, remove member becomes very, very powerful because now you can
0:29:40 have groups and, you know, set up these, hierarchies of, here's all of my devices.
0:29:46 All of my devices sit in a group of Brooke's devices.
0:29:49 All of Brooke's devices should be added to Ink & Switch, and Ink
0:29:53 & Switch has the following documents.
0:29:54 And then, you know, whenever my contract finishes and I get
0:29:57 kicked out of Ink & Switch, then they can kick all of my devices out
0:30:00 by revoking that group, right?
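The device, group, document chain described above can be pictured as reachability in a membership graph. This is only an illustrative sketch, with hypothetical ids and helper names, of how revoking one group edge cuts off every device beneath it:

```typescript
// Illustrative membership graph (not Beehive's actual code): documents and
// groups both hold members, and access resolves transitively.
type Id = string;

const members = new Map<Id, Set<Id>>(); // parent -> direct members

function addMember(parent: Id, child: Id): void {
  if (!members.has(parent)) members.set(parent, new Set());
  members.get(parent)!.add(child);
}

function removeMember(parent: Id, child: Id): void {
  members.get(parent)?.delete(child);
}

// A device can access a document if some membership path connects them.
function hasAccess(docId: Id, deviceId: Id): boolean {
  const seen = new Set<Id>();
  const stack: Id[] = [docId];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (node === deviceId) return true;
    if (seen.has(node)) continue;
    seen.add(node);
    for (const m of members.get(node) ?? []) stack.push(m);
  }
  return false;
}

// Brook's phone sits in "brooks-devices", which sits in "ink-and-switch",
// which is a member of the design doc.
addMember("brooks-devices", "brooks-phone");
addMember("ink-and-switch", "brooks-devices");
addMember("design-doc", "ink-and-switch");
// Revoking the one group edge kicks out all of Brooke's devices at once.
removeMember("ink-and-switch", "brooks-devices");
```

One removal on the group edge is enough; no per-device or per-document cleanup is needed, which is exactly the convenience the hierarchy buys.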
0:30:04 So using, Beehive is going to feel like that.
0:30:07 It's going to say, yeah, I know about the ID for Brooke's devices.
0:30:11 Please add her or, you know, contract finishes, please remove her.
0:30:15 all of the rest of the stuff should be completely invisible to you.
0:30:19 So when you persist things to disk or you send them to a sync server,
0:30:23 that all gets encrypted first.
0:30:24 And even the sync servers have permissions.
0:30:28 There's a permission level in here of: you're allowed to ask for
0:30:33 the bytes from another node.
0:30:35 And they can prove that because you have these certificates under the hood, right?
0:30:40 because, and this is an uncomfortable truth, all cryptography is breakable.
0:30:44 So in 10 years, maybe they break all of our current ciphers.
0:30:48 Right?
0:30:48 It could happen.
0:30:49 In fact, older ciphers are already, you know, broken.
0:30:52 Or maybe quantum computing gets very, very advanced, and it becomes
0:30:56 practical to break keys, right?
0:30:58 Whatever it is.
0:30:58 Or there's an advancement in the discrete log problem, or whatever the thing is, right?
0:31:03 You know, we have some mathematical advance, and it gets broken.
0:31:05 the best thing to do, then, is to just not make those bytes available.
0:31:10 Make the encrypted content only pullable by people that you trust.
0:31:13 And yes, somebody could break into the sync server, let's
0:31:17 say, and download everything.
0:31:18 But that's a much higher bar than anybody being able to download:
0:31:21 anybody on the internet downloading whatever chunk they want, right?
0:31:23 But all of that is handled really for the developer to say, this is
0:31:26 the sync server, sync server has the ability to pull down these documents.
0:31:30 Or even the user could say, I want to sync to this sync server, I'm going
0:31:34 to grant that sync server access to my documents to replicate them.
0:31:37 But really, we're trying to keep the top level API for this
0:31:41 as boring as possible, right?
0:31:43 That is a top line goal.
0:31:45 Add member, remove member, and the sync server is just
0:31:48 another member in the system.
0:31:51 Got it.
0:31:52 So in terms of the auth as data, that, that mental model, that's very intuitive.
0:31:58 And, as you're like rewiring your brain as an application developer, like how
0:32:02 data flows through the system, now to understand that, like everything that's
0:32:07 necessary to make those auth decisions, should someone have access to, to read
0:32:12 this, to like write this, et cetera, that this is just data that's also being
0:32:17 synchronized, across the different nodes.
0:32:20 That is very intuitive.
0:32:22 is this something that in this particular case, at least with Beehive and Automerge,
0:32:27 is this purely an implementation detail?
0:32:29 And this is like your internal mental model of this data, or is this actually
0:32:34 data that is available somehow to the application developer that the application
0:32:38 developer would work with that as they work with the normal Automerge documents?
0:32:43 Yeah.
0:32:44 So, Again, we're trying to hide these details as much as possible.
0:32:48 So, you'll hear me talking about things like add member or groups, right?
0:32:52 And that sounds very access control list like.
0:32:56 Capabilities, and there's a formal proof of this, are more powerful.
0:33:00 Like, they can express more things than access control lists.
0:33:03 So at least for this first revision, we've restricted ourselves down
0:33:06 to making things look like access control lists, on the outside.
0:33:11 and so it should feel very, very similar to doing things with role
0:33:15 based access control using, say, OAuth.
0:33:20 That should all feel totally normal.
0:33:23 You shouldn't really have to think about it in any special way.
0:33:25 In the same way that, you know, if you have a sync server, other than having
0:33:28 to set up the sync server, or maybe you pointed at an existing one, knowing that
0:33:32 it's there doesn't mean that you have to, like, design it from first principles.
0:33:35 Or, you know, same thing with Automerge.
0:33:38 Technically, you have access to all of the events.
0:33:41 But really you're going to materialize a view and treat it like it's JSON.
0:33:45 And so we're saying the same thing here with Beehive: you will automatically
0:33:50 get only the data that you can decrypt and that you're allowed to receive from
0:33:54 others. So, essentially, Beehive takes things off the wire, decrypts
0:34:00 them, and hands them to Automerge, and then Automerge does its normal Automerge stuff.
0:34:03 The one wrinkle is if an old write has been revoked. So it turns out
0:34:07 that somebody was, like, defacing the document and doing all this horrible
0:34:10 stuff, and we had to kick them out; we have to tell Automerge:
0:34:13 hey, ignore this run of changes.
0:34:15 And then it has to recalculate.
0:34:17 So that's the one change that we have to make inside of Automerge.
0:34:19 but really you will use Automerge as normal.
0:34:22 you will have an extra API that is add this person to this document or
0:34:25 to this group, and remove them, right?
0:34:28 As needed.
0:34:29 And you shouldn't have to think about any of these other
0:34:31 parts, even the sync server.
0:34:33 Like, Alex Good, who's the main maintainer of Automerge,
0:34:37 has been working on sync and improving sync.
0:34:41 and that project started around the same time as Beehive and we realized,
0:34:44 Oh, there's actually this challenge because we're, you know, on the
0:34:47 security side, trying to hide as much information from the network as possible,
0:34:50 including from the sync server, right?
0:34:52 Sync server shouldn't be able to read your documents.
0:34:54 To do efficient sync, you want to have like a lot of information about the
0:34:56 structure of the thing that you're syncing so that you have no redundancy.
0:34:59 Right?
0:35:00 And you can do it in a few round trips, all of this stuff.
0:35:02 So we ended up having to co-design and essentially, like, negotiate
0:35:06 between the two systems, like, how much information can we
0:35:09 reveal and still have it be secure?
0:35:11 And given that you can't read inside the documents, like, how do we
0:35:15 package things up in an efficient way?
0:35:17 But again, none of that information should be a concern for a developer
0:35:22 in the same way that the sync system right now, you don't really interact
0:35:24 with the sync system, other than you say, that's my sync server over
0:35:26 there and the bytes go over there.
0:35:28 There's an extra layer now of, it gets encrypted first
0:35:31 before it goes over the wire.
0:35:32 That makes sense.
0:35:33 I think as an application developer, there's typically sort
0:35:36 of this two pronged approach.
0:35:39 There is like, on the one hand, ideally you want to embrace
0:35:43 that things are hidden from you.
0:35:45 That you don't need to understand them to use it correctly, et cetera.
0:35:49 But particularly if something's new, some, maybe you're like an
0:35:52 early adopter of the technology.
0:35:54 you would like to figure out like, what are the worst case scenarios?
0:35:57 Maybe the thing is no longer being developed.
0:35:59 Could I take it over, and, like, can I become a contributor or maintainer
0:36:03 of that? Or you'd still like to understand it for the sake of
0:36:08 really understanding:
0:36:11 is this the thing that I want?
0:36:13 and just by like understanding how it works, you can come to the right
0:36:16 conclusion, like, is this for me or not, particularly if it's not
0:36:19 yet as well documented, et cetera.
0:36:21 So channeling our like inner understanding application developer.
0:36:27 I'd like to understand a bit better of like how, Beehive and in that regard,
0:36:32 also the sync server works under the hood.
0:36:35 Like, it's hard enough to build a syncing system.
0:36:38 and now, you build an authorization layer on top of it.
0:36:42 What sort of implications does this have for the sync server?
0:36:46 And my understanding is that Alex Good is working on this and I think
0:36:50 this has been semi public so far.
0:36:52 And that there's like a, you know, like a sibling product or a sibling
0:36:56 project, next to Beehive called Beelay, which I guess like relays
0:37:01 messages in the Beehive system.
0:37:03 And I think that's a step towards what eventually, we're all dreaming about as
0:37:09 like a generic sync server that ideally is compatible with like as many things
0:37:14 as possible, I guess, at the beginning for Automerge, but also beyond that.
0:37:19 So what is Beelay?
0:37:21 What are its design goals and how does it work?
0:37:25 Beelay
0:37:25 So Beelay has a requirement that it has to work with encrypted chunks.
0:37:30 So, you know, we do this compression and then encryption, on top of it,
0:37:34 and then send that to the Sync Server.
0:37:36 The Sync Server can see the membership, because it has to know who
0:37:39 it can send these chunks around to.
0:37:41 So the Sync Server does have access to the membership
0:37:44 of each doc, but not the content of the document.
0:37:47 So if you make a request, it checks, you know, okay, are you somebody
0:37:50 that has the rights to have this sent to you, yes or no,
0:37:53 and then it'll send it to you or not.
0:37:55 And this isn't only for sync servers, you know, if you connect to somebody,
0:37:58 you know, directly over Bluetooth, you know, you'd do the same thing, right?
0:38:01 Even if, you know, you can both see the document.
0:38:04 There's nothing special here about sync servers.
0:38:06 To do this sync, well, we're no longer syncing individual ops, right?
0:38:10 Like, we could do that, but then we lose the compression.
0:38:13 It's not great, right?
0:38:15 And ideally, we don't want people to know, you know, if somebody were to
0:38:19 break into your server, hey, here's how everything's related to each other, right?
0:38:22 Like, that compression and encryption, you know, also hides
0:38:25 a little bit more of this data.
0:38:27 We do show the links between these, you know, compressed chunks, but
0:38:30 we'll, we'll get to that in a second.
0:38:32 Essentially what we want to do is chunk up the documents in such a way where
0:38:38 there's the fewest number of chunks to get synced, and the longer the ranges of
0:38:43 Automerge ops that we have, the better they compress before we encrypt, right?
0:38:48 On the, I'll call it client.
0:38:50 It's not really a client in a local-first setting, right?
0:38:52 But, like, on the not-sync-server side, when you're sending it to it.
0:38:55 the more stuff that you have, the better the compression is.
0:38:58 And chunking up the document here means basically, you're really
0:39:02 chunking up the history of operations that then get internally rolled up
0:39:07 into one snapshot of the document.
0:39:09 And that could be very long.
0:39:11 And, there's room for optimization.
0:39:14 That is like the compression here, where if you set a ton of times, like,
0:39:19 hey, the name of the document is Peter.
0:39:22 And later you say like, no, it's Brooke.
0:39:24 And later you say, no, it's Peter.
0:39:26 No, it's Johannes.
0:39:28 Then you can, like, compress it into, for example, just the latest operation.
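A toy version of that idea, collapsing a run of overwrites down to the last one, might look like this. Note this is just an illustration of the intuition from the conversation; Automerge's actual compression is a columnar run-length encoding of the full history, not a deletion of old ops:

```typescript
// Toy illustration of why compressing a run of ops pays off before
// encryption: repeated sets of the same field collapse to the last write.
interface Op {
  field: string;
  value: string;
}

// Keep only the last write per field, i.e. the value a last-write-wins
// register would materialize anyway.
function compress(ops: Op[]): Op[] {
  const latest = new Map<string, Op>();
  for (const op of ops) latest.set(op.field, op);
  return [...latest.values()];
}

const history: Op[] = [
  { field: "name", value: "Peter" },
  { field: "name", value: "Brooke" },
  { field: "name", value: "Peter" },
  { field: "name", value: "Johannes" },
];
const compact = compress(history); // one op survives: name = "Johannes"
```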
0:39:33 Yeah, exactly.
0:39:34 So, you know, to get more concrete:
0:39:37 if you take this slider all the way to one end and you take
0:39:40 the entire history and run-length encode it, you know, do this Automerge compression,
0:39:45 you get very, very good compression.
0:39:47 If we take it to the far other end, we go really granular.
0:39:50 Every op is sent on its own, so each individual
0:39:55 op doesn't get compressed, and you don't get compression.
0:39:56 So there's something in between here of like, how can we chop up
0:39:59 the history in a way where I get a nice balance between these two?
0:40:04 When Automerge receives new ops, it has to know where in the history to place them.
0:40:10 So you have this partial order, you know, you have this, you
0:40:12 know, typical CRDT lattice.
0:40:14 And then, we put that, or it puts it into a strict order.
0:40:18 It orders all the events and then plays over them like a log.
0:40:21 And this new event that you get, maybe it becomes the first event.
0:40:24 Like you could go way to the beginning of history, right?
0:40:26 Like you, you don't know because everything's eventually consistent.
0:40:29 So if you do that linearization first and then chop up the documents,
0:40:34 you have this problem where,
0:40:36 if I do this chunking, or you do this chunking, well, it really depends
0:40:39 on what history we have, right?
0:40:41 And so it makes it very, very difficult to have a small amount of redundancy.
0:40:46 So we found, two techniques helped us with this.
0:40:49 One was, we take some particular operation as a head and we
0:40:55 say: ignore everything else.
0:40:56 Only give me the history for this operation.
0:40:58 Only its strict ancestors.
0:41:00 So even if there's something concurrent, forget about all of that stuff.
0:41:04 So that gets us something stable relative to a certain head.
0:41:08 And then to know where the chunk boundaries are, we
0:41:13 run a hash hardness metric.
0:41:15 So, the number of zeros at the end of the hash for each op gives
0:41:20 you a knob: you can basically say, for each individual op,
0:41:23 I don't require any trailing 0s, so I'm happy with anything.
0:41:28 Or if I want it to be a range of, you know, 4, then give me two 0s at the
0:41:32 end, because 2 to the power of 2 is 4, so I'll chunk
0:41:35 it up into 4s, and you make this as big or as small as you want, right?
0:41:38 So now you have some way of probabilistically chunking up the
0:41:41 documents, relative to some head.
0:41:44 And you can say how big you want that to be based on this hash hardness metric.
0:41:47 the advantage of this is even if we're doing things relative to
0:41:51 different heads, now we're going to hit the same boundaries for these
0:41:54 different, hash hardness metrics.
0:41:56 So now we're sharing how we're chunking up the document.
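The boundary rule described here, an op is a chunk boundary when its hash ends in at least k zero bits, can be sketched as follows. The SHA-256 choice and the helper names are assumptions for illustration; the point is that boundaries are a pure function of op hashes, so different peers agree on them:

```typescript
// Probabilistic chunk boundaries from a "hash hardness" rule: an op closes a
// chunk when its hash has >= k trailing zero bits, so boundaries land every
// ~2^k ops on average and are stable across peers.
import { createHash } from "node:crypto";

function trailingZeroBits(buf: Buffer): number {
  let zeros = 0;
  for (let i = buf.length - 1; i >= 0; i--) {
    const byte = buf[i];
    if (byte === 0) {
      zeros += 8;
      continue;
    }
    let b = byte;
    while ((b & 1) === 0) {
      zeros++;
      b >>= 1;
    }
    break;
  }
  return zeros;
}

function isBoundary(opBytes: string, k: number): boolean {
  const digest = createHash("sha256").update(opBytes).digest();
  return trailingZeroBits(digest) >= k;
}

// Chop an op log into chunks at boundary ops; larger k means coarser chunks.
function chunk(ops: string[], k: number): string[][] {
  const chunks: string[][] = [];
  let current: string[] = [];
  for (const op of ops) {
    current.push(op);
    if (isBoundary(op, k)) {
      chunks.push(current);
      current = [];
    }
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Because the boundary decision depends only on each op's own hash, two peers chunking relative to different heads still cut at the same ops wherever their histories overlap.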
0:41:59 And we assume that on average, not all the time, but on
0:42:04 average, older operations will have been seen by more people,
0:42:08 or, you know, more and more peers.
0:42:11 So, you're going to be appending things really to the end of the document, right?
0:42:17 So you, you will less frequently have something concurrent with the
0:42:20 first operation using this system.
0:42:22 That means that we can get really good compression on older operations.
0:42:28 Let's take, I'm just picking numbers out of the air here, but let's take
0:42:30 the first two thirds of the document, which are relatively stable, compress
0:42:34 those, we get really good compression.
0:42:36 And then encrypt it and send it to the server.
0:42:38 And then for the next, you know, of the remaining third, let's take the
0:42:42 first two thirds of that and compress them and send them to the server.
0:42:46 And then at some point we get each individual op.
0:42:48 This means that as the document grows and changes,
0:42:52 we can take these smaller chunks, and as they get pushed further and further into
0:42:56 history, whoever can actually read them can recompress those ranges.
0:43:02 So, Alex has this, I think, really fantastic name for this, which is
0:43:06 sedimentree, because it's almost acting in sedimentary layers, but it's sedimen-tree
0:43:12 because you get a tree of these layers.
0:43:14 Yeah, it's cute, right?
0:43:15 and so if you want to do a sync, like let's say you're doing a sync
0:43:18 of like completely fresh, you've never seen the document before.
0:43:21 You will get the really big chunk, and then you'll move up a layer,
0:43:25 and you'll get the next biggest chunk of history, and then you move
0:43:27 up a layer, and then eventually get like the last couple of ops.
0:43:30 So we can get you really good compression, but again, it's this
0:43:32 balance of these two forces.
0:43:35 Or, if you've already seen the first half of the document, you
0:43:38 never have to sync that chunk again.
0:43:39 You only need to get these higher layers of the sedimentary sync.
0:43:44 So that's how we chunk up the document.
0:43:46 Additionally, and I'm not at all going to go into how this thing works,
0:43:49 but if people are into sync systems, this is like a pretty cool paper.
0:43:53 It's called Practical Rateless Set Reconciliation; that's the name of the paper.
0:43:57 And it does really interesting things with compressing all the information
0:44:02 you need to know what the other side has.
0:44:04 So in half a round trip, so in one direction, on average, you can get all
0:44:09 the information you need to know what the delta is between your two sets.
0:44:13 Literally, what's the handful of ops that we've diverged by, without
0:44:18 having to send all of the hashes?
0:44:20 so if people are into that stuff, go check out that paper.
0:44:22 It's pretty cool.
0:44:23 but there's a lot of detail in there that we're not
0:44:25 going to cover on this podcast.
0:44:26 Thanks a lot for explaining.
0:44:29 I suppose it's just the tip of the iceberg of how Beelay works,
0:44:33 but I think it's important to get a feeling that this is a new world,
0:44:37 in a way, where it's decentralized, it is encrypted, et cetera.
0:44:42 There are really hard constraints on what certain things can do, since, you could
0:44:47 say, in your traditional development mindset, you would just say, yeah,
0:44:52 let's treat the client like it's just, like, a Kindle, with no
0:44:56 CPU in it, and let's have the server do as much of the heavy lifting as possible.
0:45:01 I think that's like a, the muscle that we're used to so far.
0:45:04 But in this case, the server, even if it has a super beefy machine, et cetera,
0:45:11 can't really do that, because it doesn't have access to do all of this work.
0:45:15 So the clients need to do it.
0:45:17 And when the clients independently do so, they need to
0:45:21 eventually end up in the same spot.
0:45:23 Otherwise the entire system falls over, or it gets very inefficient.
0:45:27 So that sounds like a really elegant system that, that you're
0:45:30 like working on in that regard.
0:45:32 So with Beehive overall, like again, you're starting out here with
0:45:38 Automerge as the driving system that drives the requirements, et cetera.
0:45:43 But I think your bigger ambition here, your bigger goal, is that this
0:45:48 actually becomes a system that at some point goes beyond just
0:45:54 applying to Automerge, and becomes a system that applies to many more
0:45:59 local-first technologies in the space.
0:46:01 If there are application framework authors or, like, other people building a
0:46:07 sync system, et cetera, and they'd be interested in seeing, hmm, instead
0:46:11 of us trying to come up with our own research here for what it
0:46:17 means to do authentication and authorization for our sync system, particularly if
0:46:23 you're doing it in a decentralized way:
0:46:25 what would be a good way for those frameworks, those technologies, to
0:46:30 jump on the Beehive wagon?
0:46:33 so if they're already using Automerge, I think that'll be
0:46:37 pretty straightforward, right?
0:46:38 You'll have bindings, it'll just work.
0:46:40 but Beehive doesn't have a hard dependency on Automerge at all.
0:46:45 Because it lives at this layer below, and early on we were like, well, should
0:46:50 we just weld it directly into Automerge?
0:46:51 Or, like, you know, how much does it really need to know about it?
0:46:55 and where we landed on this was you just need to have some kind
0:46:58 of way of saying, here's the partial order between these events.
0:47:02 and then everything works.
0:47:04 So, just as an intuition:
0:47:07 You could put Git inside of Beehive, and it would work. I don't think
0:47:11 GitHub's gonna adopt this anytime soon, but, like, if you had your own
0:47:14 Git syncing system, you could do this, and it would work.
0:47:18 you just need to have some way of ordering, events next to each other.
0:47:22 and yes, then you have to get a little bit more into slightly lower level APIs.
0:47:27 So I, when I build stuff, I tend to work in layers of, like, here are the very
0:47:32 low-level primitives, and then here's a slightly higher level, and a slightly
0:47:35 higher level above that.
0:47:37 so people using it from Automerge will just have add member, remove
0:47:40 member, and like, everything works.
0:47:41 to go down one layer, you have to wire into it, here's how to do ordering.
0:47:47 And that's it.
0:47:48 And then everything else should, should wire all the way through.
0:47:51 And you have to be able to pass it, serialized bytes.
0:47:53 So, like, Beehive doesn't know anything about this compression that we were
0:47:56 just talking about that Automerge does.
0:47:58 But you tell it, hey, this is, you know, this is some batch, this is
0:48:02 some, like, archive that I want to do.
0:48:03 It starts at this timestamp and ends at that timestamp,
0:48:06 or, you know, logical clock.
0:48:07 please encrypt this for me.
0:48:09 And it goes, sure, here you go.
0:48:10 Encrypted.
0:48:11 And, you know, off it goes.
0:48:12 So it has very, very few assumptions.
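A hedged sketch of that small integration surface: Beehive only needs (1) a partial order over events and (2) opaque bytes to encrypt. The types below are invented for illustration, and the "encryption" is a XOR stand-in, explicitly not real cryptography:

```typescript
// Illustrative integration surface: a causal DAG of event ids plus an opaque
// archive. Beehive never reads inside the archive bytes.
interface CausalEvent {
  id: string;
  parents: string[]; // event ids this event causally depends on
}

interface SyncPayload {
  events: CausalEvent[]; // the partial order, e.g. a Git-style DAG
  archive: Uint8Array; // opaque compressed bytes (e.g. an Automerge chunk)
}

// Stand-in for "please encrypt this for me". A real implementation would use
// an authenticated cipher with keys from the group's key agreement; this XOR
// keystream is purely illustrative and NOT cryptography.
function sealForGroup(payload: SyncPayload, key: Uint8Array): Uint8Array {
  const out = new Uint8Array(payload.archive.length);
  for (let i = 0; i < out.length; i++) {
    out[i] = payload.archive[i] ^ key[i % key.length];
  }
  return out;
}
```

The events ride alongside only so the layer can order archives relative to each other; any system that can produce that partial order (Automerge, an event log, even Git) fits the shape.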
0:48:15 That's certainly something that I might also pick up a bit further down the
0:48:18 road myself for LiveStore, where the underlying substrate to sync data
0:48:23 around is an ordered event log.
0:48:26 And if I'm encrypting those events,
0:48:29 then I think that perfectly fulfills the requirements that you've listed,
0:48:34 which are very few, for Beehive.
0:48:37 So I'm really looking forward to once that gets further along.
0:48:40 So speaking of like, where is Beehive right now?
0:48:43 I've seen the, lab notebooks from what you have been working on at Ink & Switch.
0:48:49 can I get my hands on Beehive already right now?
0:48:52 Where is it at?
0:48:54 what are the plans for the coming years?
0:48:56 So at the time that we're recording this, at least, which is in early
0:48:59 December, there's unfortunately not a publicly available version of it.
0:49:02 I really hoped we'd have it ready by now, but unfortunately we're still wrapping
0:49:06 up the last few items in there.
0:49:09 but, Q1, we plan to have, a release.
0:49:12 As I mentioned before, there are some changes required to Automerge to consume this,
0:49:16 specifically to manage revocation history.
0:49:19 So somebody got kicked out, but we're still in this eventually consistent world.
0:49:23 Automerge needs to know how to manage that.
0:49:24 But.
0:49:25 But managing things, sync, encryption, all of that stuff, we hope to have
0:49:30 in, I'm not going to commit the team to any particular timeframe
0:49:33 here, but we'll say in the next coming weeks.
0:49:37 right now the team is, myself.
0:49:39 John Mumm, who joined a couple months into the project, has been working
0:49:43 on, focused primarily on, BeeKEM, which is, again, I'm just going to
0:49:48 throw out words here for people that are interested in this stuff, related to
0:49:51 TreeKEM but made concurrent, which is based on MLS, one of the primitives
0:49:55 for Messaging Layer Security.
0:49:57 he's been doing great work there.
0:50:02 And Alex, amongst the many, many things that Alex Good does, between
0:50:07 writing the sync system and maintaining Automerge and all of the, you
0:50:11 know, community stuff that he does, has also been lending a hand.
0:50:11 So I'm sure that for Beehive, in a way, you're just
0:50:15 scratching the surface, and there's probably enough work here to
0:50:19 fill another few years, maybe even decades, worth of ambitious work.
0:50:24 Can you paint a picture of that? Like, right now
0:50:28 you're probably working through the kind of POC, or just the table stakes things.
0:50:33 What are some of the way more ambitious, long-term things
0:50:36 that you would like to see under the umbrella of Beehive?
0:50:39 Yeah.
0:50:40 So, there's a few, yes.
0:50:42 And we have this running list internally of, like, what would a V2 look like?
0:50:45 So, one is, adding a little policy language.
0:50:48 I think the bang for the buck that you get on having
0:50:51 something like UCAN's policy language
0:50:53 is just so high.
0:50:54 It just gives you so much flexibility.
0:50:56 Hiding the membership from even the sync server is possible.
0:51:00 It just requires more engineering.
0:51:02 So there are many, many places in here where zero-knowledge proofs, I
0:51:06 think, would be very useful, for people who know what those are.
0:51:09 essentially it would let the sync server say, yes, I can send you bytes
0:51:14 without knowing anything about you.
0:51:16 Right,
0:51:17 but it would still deny others.
0:51:19 And right now it basically needs to run more logic to actually
0:51:22 enforce those auth rules.
0:51:25 Yeah.
0:51:25 So today you have to sign a message that says, I signed this with the same
0:51:30 private key that you know the public key for in this membership. We
0:51:36 could hide the entire membership from the sync server and still do this,
0:51:39 without revealing even who's making the request, right?
0:51:41 Like, that would be awesome.
0:51:43 in fact, and this is a bit of a tangent, I think there's a number
0:51:45 of places where, that class of technology would be really helpful.
0:51:49 Even for things like, in CRDTs, there's this challenge where you have
0:51:53 to keep all the history for all time.
0:51:55 and I think with zero knowledge proofs, we can actually, like, this, this would
0:51:58 very much be a research project, but I, I think it's possible to delete history, but
0:52:02 still maintain cryptographic proofs, that things were done correctly and compress
0:52:06 that down to, you know, a couple bytes, basically, but that's a bit of a tangent.
0:52:10 I would love to work on that at some point in the future, but for, for
0:52:13 Beehive, yeah, hiding more metadata, Hiding, you know, the membership
0:52:17 from, from the group, making it, all the signatures post quantum.
0:52:21 That is, even the main recommendations from NIST, the U.S.
0:52:26 government agency that handles these things, only just came out.
0:52:30 So, you know, we're still kind of waiting for good libraries on it and, you know,
0:52:34 all of this stuff and what have you.
0:52:36 But yeah, big chunks of it are already
0:52:40 post-quantum, but making it fully post-quantum would be great.
0:52:43 And then, yeah, adding all kinds of bells and whistles and features, you know,
0:52:46 making it faster. It's not going to have its own compression, because it
0:52:50 relies so heavily on cryptography, so it doesn't compress super well, right?
0:52:54 So we're going to need to figure out our own version of, you know,
0:52:58 Automerge has run-length encoding.
0:52:59 What is our version of that, given that we can't easily run-length encode
0:53:02 encrypted things, right?
0:53:04 Or signatures, or, you know, all of this.
0:53:06 So there's a lot of stuff down in the plumbing.
0:53:08 Plus I think this policy language would be really, really helpful.
0:53:11 That sounds awesome.
0:53:12 Both in terms of new features, capabilities, no pun intended, being
0:53:16 added here, but also in terms of just, removing overhead from the system and like
0:53:22 simplifying the surface area by doing, more of like clever work internally,
0:53:27 which simplifies the system overall.
0:53:29 That sounds very intriguing.
0:53:31 The other thing worth noting with this, I think both to point
0:53:35 away into the future and also to draw a boundary around what Beehive
0:53:39 does and doesn't do, is identity.
0:53:41 so Beehive only knows about public keys because those are universal.
0:53:46 They work everywhere.
0:53:47 They don't require a naming system, any of this stuff.
0:53:50 we have lots of ideas and opinions on how to do a naming system.
0:53:55 But, you know, if you look at, for example, uh, BlueSky, under
0:53:58 the hood, all of the accounts are managed with public keys, and then
0:54:02 you map a name to them using DNS.
0:54:04 So either you're using, you know,
0:54:07 myname.bluesky.social, or you have your own domain name, like I'm
0:54:12 expede.wtf on BlueSky, for example, right?
0:54:13 Because I own that domain name and I can edit the text record.
0:54:15 And that's great, and it definitely gives users a lot of agency over
0:54:20 how to name themselves, right?
0:54:21 Or, you know, there are other related systems.
0:54:24 But it's not local-first because it relies on DNS.
0:54:28 So, like, how could I invite you to a group without having to know your public
0:54:32 key? We're probably going to ship, I would say, just because it's
0:54:35 relatively easy to do, a system called Edge Names, based on pet names, where
0:54:40 basically I say, here's my contact book.
0:54:42 I invited you.
0:54:43 And at the time I invited you, I named
0:54:45 you Johannes, right?
0:54:46 And I named Peter "Peter," and so on and so forth, but that's just my
0:54:52 name for them, for these people;
0:54:54 there's no way to prove it.
0:54:54 And having a more universal system where
0:54:59 I could invite somebody by, like, their email address, for example, I
0:55:02 think would be really interesting.
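To make the pet-name idea concrete, here's a minimal sketch (the names are hypothetical, and random bytes stand in for real 32-byte public keys): each participant keeps a purely local mapping from public key to name, and nothing makes one mapping authoritative over another.

```python
import os

# Random bytes stand in for real 32-byte public keys.
johannes_key = os.urandom(32)
peter_key = os.urandom(32)

# My "contact book": a purely local mapping from public key to pet name.
my_contacts = {johannes_key: "Johannes", peter_key: "Peter"}

# Someone else can know the same key under a different name; neither
# mapping is globally authoritative, and nothing proves mine is "right".
your_contacts = {johannes_key: "Jo from the podcast"}

print(my_contacts[johannes_key], "vs", your_contacts[johannes_key])
```

That locality is exactly the limitation being described: pet names work offline with no naming infrastructure, but they can't tell a third party who "Johannes" is.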
0:55:03 Back at Fission, Blaine Cook,
0:55:06 who's also done a bunch of stuff with Ink & Switch in the past, had proposed
0:55:09 this system, the NameName system, that would give you local-first names
0:55:12 that were rooted in things like email, so you could invite somebody with
0:55:17 their email address, and a local-first system could validate that that person
0:55:21 actually had control over that email.
0:55:23 It was a very interesting system.
0:55:25 So there's a lot of work to be done in identity as separate from, authorization.
0:55:29 Right, yeah.
0:55:30 I feel like there's always so much interesting stuff happening
0:55:35 across the entire spectrum, from, like, the world that we're currently in,
0:55:40 which is mostly centralized just in terms of making things work at
0:55:45 all (and even there, it's hard to keep things up to date and working,
0:55:50 et cetera), but we want to aim higher.
0:55:54 And one way to improve things a lot is by going more decentralized, but
0:55:59 there are so many hard problems to tame, and we're just starting to peel
0:56:04 off the layers of the onion here.
0:56:07 And Automerge, I think, is a great, canonical case study there: it
0:56:12 started with the data, and now things are moving to authorization, et cetera,
0:56:17 and then authentication and identity. There, we probably have
0:56:21 enough research work ahead of us for the coming decades.
0:56:25 And super, super cool to see that so many bright minds are working on it.
0:56:29 maybe one last question in regards to Beehive.
0:56:34 When there's a lot of cryptography involved, that also means there's
0:56:38 even more CPU cycles that need to be spent to make stuff work.
0:56:43 Have you been looking into some performance benchmarks? Let's
0:56:48 say you want to synchronize a certain history for some Automerge
0:56:54 documents, with Beehive disabled and with Beehive enabled. Do you see, like,
0:57:00 a certain factor of like how much it gets slower with, Beehive and sort of
0:57:05 the authorization rules applied both on the client as well as on the server?
0:57:10 Performance benchmarks
0:57:10 Yeah.
0:57:10 So, it's a great question.
0:57:12 So obviously there are different dimensions in Beehive, right?
0:57:14 So for encryption, which is where I would say most people would
0:57:19 expect the performance overhead to be:
0:57:21 There's absolutely overhead there.
0:57:22 You're doing decryption, but we're using algorithms that decrypt on the
0:57:26 order of like multiple gigabytes a second.
0:57:29 So it's fine, basically.
0:57:32 And that's also part of why we wanted to chunk things up in this way,
0:57:35 because then we get good compression, you know, all of this stuff.
0:57:37 So if you're doing, like, a total sync, you know, the first time you've seen this document,
0:57:42 you've got to pull everything and decrypt everything and hand it off to Automerge;
0:57:45 the encryption's not
0:57:46 going to be the bottleneck.
0:57:48 And then on, like, a rolling basis, like, you know, per keystroke, yes,
0:57:53 there's absolutely overhead there, but remember this is relative to latency.
0:57:59 So if you have 200 milliseconds of latency, that's your bottleneck.
0:58:03 It's not going to be the five milliseconds of encryption that we're doing, or
0:58:08 signatures, or whatever it is. There's a space cost, because now we have to keep
0:58:14 public keys, which are 32 bytes, and signatures, which are 64 bytes.
0:58:19 So there is some overhead in space
0:58:22 that happens.
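To put those sizes in perspective, a quick back-of-the-envelope sketch (illustrative arithmetic only; this is not Beehive's actual on-disk layout):

```python
# Sizes mentioned above (Ed25519-style keys and signatures).
PUBLIC_KEY_BYTES = 32
SIGNATURE_BYTES = 64

per_op_overhead = PUBLIC_KEY_BYTES + SIGNATURE_BYTES  # 96 bytes per signed op

# For, say, 10,000 keystroke-sized operations:
ops = 10_000
total_overhead = ops * per_op_overhead
print(total_overhead)  # 960000 bytes, i.e. a bit under 1 MB of metadata
```

So the space overhead is real but bounded and predictable, which is why the speed of the chosen algorithms matters more in practice.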
0:58:23 But for the most part, we've chosen algorithms that
0:58:26 are known to be very, very fast.
0:58:28 They're sort of, like, the best in class.
0:58:30 So I'll just rattle down a list for the
0:58:33 people that are interested.
0:58:34 So we're using EdDSA (Edwards keys) for signatures and key exchange, ChaCha
0:58:40 for encryption, and BLAKE3 for hashing.
0:58:44 BLAKE3 is very interesting: it lets you do
0:58:45 things like verifiable streams.
0:58:47 So, like, as you're streaming the data in, you can start hashing even
0:58:50 parts of it as you're going along.
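The incremental-hashing idea can be sketched with the standard library's BLAKE2 as a stand-in, since BLAKE3 itself ships as a third-party package (e.g. `blake3` on PyPI). Note that BLAKE3 additionally arranges chunks in a tree so streamed pieces can be verified independently, which plain BLAKE2 does not give you:

```python
import hashlib

# Hash a "document" as it streams in, chunk by chunk, rather than
# waiting for the whole thing. (BLAKE2 here as a stdlib stand-in for BLAKE3.)
hasher = hashlib.blake2b()
for chunk in [b"first chunk, ", b"second chunk, ", b"third chunk"]:
    hasher.update(chunk)          # hash as data arrives
streamed = hasher.hexdigest()

# The result matches hashing the concatenated whole in one go.
all_at_once = hashlib.blake2b(
    b"first chunk, " + b"second chunk, " + b"third chunk"
).hexdigest()
print(streamed == all_at_once)
```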
0:58:52 The really big bottleneck, like, the heaviest part of the system,
0:58:57 or, sorry, the part where we were least happy with our original design and
0:59:00 that we then ended up doing a bunch of research on, was doing key agreement.
0:59:06 So if I have, whatever, a thousand people in a company, and they're all,
0:59:12 you know, working on this document, I don't want to have to send a
0:59:14 thousand messages every time I change the key, which will be rotated
0:59:18 every message, let's say, or, you know, once a day if we're being
0:59:22 more conservative with it.
0:59:24 and that's a lot of data and a lot of just like latency on
0:59:27 this and just a lot of network.
0:59:29 So we switched to, instead of it being linear, we found a way
0:59:32 of doing it in logarithmic time.
0:59:35 So we can now do key rotations concurrently, like totally eventually
0:59:39 consistently, in log n time.
0:59:41 A lot of research happened in there, but then that let
0:59:47 us scale up much, much, much more.
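To see where the logarithm comes from, compare a naive pairwise rekey with a binary-tree arrangement in the style of TreeKEM. This is a sketch of the shape of the cost only, not Beehive's actual concurrent, eventually consistent construction:

```python
import math

def rotation_cost_linear(n):
    # Naive approach: re-encrypt the fresh key for every other member.
    return n - 1

def rotation_cost_tree(n):
    # Tree-based key agreement (TreeKEM-style): a rotating member only
    # replaces the keys on their leaf-to-root path, i.e. the tree depth.
    return math.ceil(math.log2(n)) if n > 1 else 0

for n in [128, 1_000, 50_000]:
    print(n, rotation_cost_linear(n), rotation_cost_tree(n))
```

At 128 members the tree path is 7 nodes instead of 127 messages, and at 50,000 members it is 16 instead of 49,999, which is why the group sizes discussed next become feasible at all.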
0:59:48 So the prior algorithm that we were using off the shelf from a paper
0:59:52 scaled up to, in the paper they say, about 128 people, right?
0:59:55 That's sort of your upper bound, and we're like, uh, you know, we had set
0:59:58 ourselves these higher levels that we actually want to work with,
1:00:02 and so now we can scale into the thousands.
1:00:05 When you get up to 50,000 people, yeah, it starts to slow down.
1:00:07 You start to get, you know, closer to a second if you're doing
1:00:11 very, very concurrent rotations, you know, uh, 40,000 of the 50,000 people
1:00:14 are doing concurrent key rotations.
1:00:16 That doesn't happen very often, but, like, it could happen.
1:00:19 If one person's doing an update, then it'll happen in,
1:00:21 like, you won't even notice it.
1:00:23 Right.
1:00:24 So it depends on how heavily concurrent your document is.
1:00:26 Do you have 40,000 people writing to your document?
1:00:28 Yeah.
1:00:28 You're going to see it slow down a little bit.
1:00:30 It's so amazing to see that.
1:00:32 I mean, in academia, there is so much progress in those various fields.
1:00:36 And I feel like in local-first, we actually get to benefit and directly
1:00:42 apply a lot of those great achievements from other places.
1:00:45 It makes a big difference for the applications that
1:00:49 we'll be using, whether there is a cryptographic breakthrough in efficiency
1:00:53 or in being more long-term secure, et cetera.
1:00:57 And, like, I fully agree that latency is probably by far the most important
1:01:02 one when it comes to whether it makes a difference or not, but, like,
1:01:06 battery usage, et cetera, is another one.
1:01:08 And, like, if I synchronize data a lot, maybe I open a lot
1:01:13 of documents just once, because maybe I'm reviewing documents a lot and,
1:01:17 like, someone sends them to me, or maybe I'm an executive who gets to review a lot of
1:01:20 documents, and I don't really amortize the documents too much because
1:01:26 I don't reuse them on a day-to-day basis.
1:01:28 I think that initial sync also tends to matter quite a bit.
1:01:33 But it's great to hear that efficiency seems to be already
1:01:37 very well under control.
1:01:39 So maybe rounding this out: you've been at Fission, you've been seeing, like, the
1:01:45 innovation around local-first in, like, three buckets: auth, data, and compute.
1:01:51 As mentioned before, on this podcast we've mostly been
1:01:54 exploring the data aspect.
1:01:56 Now we went quite deep on some of your work in regards to auth.
1:02:01 We don't have too much time to spend on something else, but I'm curious
1:02:06 whether you can just seed some ideas in regards to where compute
1:02:12 fits in this new local-first world.
1:02:15 Like, if you could fork yourself and, like, do a lot more work, what would you do?
1:02:23 The "compute" role in local-first
1:02:23 Yeah.
1:02:24 So we had a project related to compute at Fission,
1:02:27 right at the end.
1:02:29 And I'm very fortunate that I actually have some grants to continue that
1:02:32 work after I finish with Beehive.
1:02:33 I'll switch to that and then, after that project, see what else is
1:02:36 interesting kicking around.
1:02:38 But essentially the motivation is: all the compute for local-first stuff happens
1:02:43 completely locally today, or you're talking to some cloud service, right?
1:02:47 Like, maybe you're using an LLM.
1:02:48 So you go to, you know, use the OpenAI APIs, that kind of thing.
1:02:53 but what if you're on a very low powered device and you're on a plane?
1:02:58 Right.
1:02:59 you know, you still need to be able to do some compute some of the time.
1:03:02 So the trade-off that we're trying to strike in these
1:03:05 kinds of projects is: what if I can always run it, even slowly?
1:03:08 So let's say I'm rendering a 3D scene and it's gonna take a minute
1:03:11 to paint, versus I have a desktop computer, you know, nearby and I can
1:03:18 farm that job out to that machine, because it's nearby in latency
1:03:22 and it has more compute resources.
1:03:25 Or maybe I need to send email to a mail server that only exists in one place.
1:03:30 Like, how can I do this compute dynamically, where I can
1:03:35 always run my jobs or do my resource management whenever possible?
1:03:40 An email server is a case where you can't always do this, right?
1:03:42 But when somebody else could run it,
1:03:45 maybe I can farm that out to them instead.
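The "always runnable locally, faster when a peer is reachable" policy might be sketched like this (hypothetical names and time estimates; real placement would also weigh privacy, battery, and capability):

```python
from dataclasses import dataclass

@dataclass
class Runner:
    name: str
    online: bool
    seconds_estimate: float  # rough time-to-finish for this job

def place_job(local, peers):
    # Farm out only when a reachable peer is strictly faster;
    # otherwise fall back to the always-available local machine.
    reachable = [p for p in peers if p.online]
    best = min(reachable, key=lambda r: r.seconds_estimate, default=None)
    if best is not None and best.seconds_estimate < local.seconds_estimate:
        return best
    return local

laptop = Runner("laptop", online=True, seconds_estimate=60.0)   # a minute to paint
desktop = Runner("desktop", online=True, seconds_estimate=5.0)  # beefy box nearby

print(place_job(laptop, [desktop]).name)  # desktop when it's reachable
print(place_job(laptop, []).name)         # laptop when we're on a plane
```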
1:03:46 So there's a lot of interest, I think, in how to bridge between what is
1:03:53 sometimes called, in the BlueSky world, big world versus small world, right?
1:03:56 So I have my local stuff.
1:03:57 I'm doing things entirely on my own.
1:03:59 I'm completely offline.
1:04:00 And that is the baseline.
1:04:02 But when I am online, how much more powerful can it get?
1:04:06 You know, I'm not going to ingest the entire BlueSky firehose myself.
1:04:10 I'm going to leave that to an indexer
1:04:12 to do for me.
1:04:13 So when I'm online, maybe I can get better search, right?
1:04:17 Things like this, or maybe if I'm rendering PDFs, maybe I want to farm
1:04:20 that out to some, server somewhere rather than doing that with Wasm in my browser.
1:04:25 So kind of progressively enhancing the app.
1:04:28 And I think this is particularly relevant
1:04:31 with AI, because
1:04:35 now suddenly we get a lot of work
1:04:38 to be done that massively benefits from a lot of compute.
1:04:43 And with AI in particular, I think we're now
1:04:47 in this tricky spot.
1:04:49 Either we already get to live in the future, but that means typically all of
1:04:54 our AI intelligence is coming from some very beefy servers in
1:04:58 some data centers, and the way I get those enhancements
1:05:03 is by just sending over all of my context data to those servers.
1:05:09 Well, I guess you could get those beefy servers also next to your
1:05:13 desk, but that is very expensive and, I think, not very practical.
1:05:17 I guess step by step: like, now the newest MacBooks, et cetera, are already
1:05:21 very capable of running things locally, but there will always be
1:05:26 a reason that you want to fan things out a bit more, while doing so in a
1:05:30 way that preserves, like, your privacy around your data, et cetera, and
1:05:34 leverages your resources properly.
1:05:37 Like, if I'm just looking around myself, I have an iPad over here
1:05:41 which sits entirely idle, et cetera.
1:05:44 So,
1:05:45 as with most things, in regards to application developers, if it's
1:05:50 the right thing, it should be easy, and doing compute in sort of a
1:05:55 distributed way is by far not easy.
1:05:58 So very excited to, to hear that you want to explore this more.
1:06:02 Yeah.
1:06:02 Well, and, you know, especially with things like AI, the question
1:06:06 always is: I should never be cut off from performing actions, when
1:06:10 possible. Sometimes something lives at a particular
1:06:13 place and I'm not connected to it.
1:06:15 Fine, right?
1:06:16 Email being, you know, the canonical example here.
1:06:18 The mail server lives in one place.
1:06:19 Okay, fine.
1:06:21 But why not with an LLM?
1:06:23 Like, maybe I run a smaller, simpler LLM locally.
1:06:27 And then again, when I'm connected and I'm online, I just get better results.
1:06:30 I get better answers.
1:06:32 So I'm never totally, totally cut off.
1:06:34 I mean, there's plenty of research on distributed machine learning
1:06:38 and all of this stuff, but that's, I would say, in the future.
1:06:41 Just kind of to put an arc on all of this stuff:
1:06:43 anybody who's seen my talks before has probably heard me give
1:06:46 this short spiel once or twice.
1:06:48 but you know, in, in the nineties, when we were developing the web, right.
1:06:52 As opposed to the internet.
1:06:54 the assumption was that you had a computer under your desk.
1:06:57 It was a beige box that you would turn on and you would turn it off sometimes.
1:07:00 Right.
1:07:00 When was the last time you actually turned off your laptop,
1:07:02 or your phone for that matter?
1:07:04 And when you wanted to connect to the internet, you'd tie up your phone line.
1:07:08 That's no good.
1:07:09 So you would rent from somebody else, something that was always
1:07:12 online with a lot of power.
1:07:14 And we now live in a different world, but the centralized,
1:07:18 you know, or the cloud systems rather, all still have this assumption of,
1:07:23 well, we have more power and we're more online and better connected than you.
1:07:28 Okay.
1:07:29 That's true, but how many things does that actually matter for?
1:07:32 And with systems like Automerge and, you know, local-first things developing, it's
1:07:36 like, actually, you know what, my, my machines are fast enough now where I can
1:07:41 keep the entire log of the entire history.
1:07:43 And it's fine because we can compress it down to a couple hundred K and it's okay.
1:07:48 And I'm fast enough to play over the whole log.
1:07:50 And we can do all of this eventually consistent stuff and it doesn't
1:07:53 completely, you know, hurt the performance of my application.
1:07:56 It's massively simplifying the architecture.
1:07:59 Things have gotten out of hand.
1:08:00 So there is this dividing line. You know, the
1:08:06 cloud isn't completely the enemy.
1:08:09 It does have some advantages, right?
1:08:12 But not everything needs to live there.
1:08:14 And so we're moving into this world of like, how much can we
1:08:20 Outro
1:08:20 Yeah, I love that.
1:08:21 I think that very neatly summarizes a huge aspect why
1:08:26 local-first talks to so many of us.
1:08:29 So I've learned a lot in this conversation, and I'm really
1:08:34 excited to get my hands on Beehive
1:08:37 as it becomes more publicly available, hopefully a lot closer to the
1:08:43 time when this episode comes out.
1:08:45 In the meanwhile, if someone got really excited to get their hands dirty and,
1:08:51 like, dig into some of the knowledge that you've shared here, I certainly
1:08:55 recommend checking out your amazing talks.
1:08:59 I still have a lot of them on my watch list, and I think
1:09:02 there are many shared interests that we didn't go into in this episode.
1:09:06 Like, you're also a lot into functional programming, et cetera,
1:09:09 and I think you're, like, going really deep on Rust as well.
1:09:13 So lots for me to learn.
1:09:15 But if you can't wait to get your hands on Beehive, I think it's also very
1:09:20 practical to play around with UCAN.
1:09:23 I think there are a bunch of implementations for various language
1:09:26 stacks, and that is something that you can already build things with today.
1:09:32 And I think it's not like Beehive will fully replace
1:09:34 UCAN or the other way around.
1:09:36 I think there will be use cases where you can use both, but this way you
1:09:39 can already get in the right mental model and be Beehive-
1:09:44 ready when it gets available.
1:09:47 So that's certainly what I would recommend folks to check out.
1:09:50 Is there anything else you would like the audience to do, look up or watch?
1:09:56 Yeah, so definitely keep an eye on the Ink & Switch webpage.
1:10:00 We have lab notes; at the time of this recording,
1:10:03 there's just the one note up there, but I have a whole bunch of
1:10:06 them, like, many in draft that I just need to clean up and publish.
1:10:10 We'll also be releasing an essay, an Ink & Switch-style essay, on
1:10:14 this whole project in the new year.
1:10:16 And, yeah, keep an eye out for when this all gets released.
1:10:20 There's a bunch of stuff coming in Automerge in the new year. I
1:10:23 can't remember if it's Automerge V2 or V3, but there's, you know,
1:10:27 some branding with it of, like, much faster, lower memory footprint,
1:10:31 better sync, and security,
1:10:33 like all of these sort of, you know, big headline features.
1:10:35 So definitely keep an eye on all the stuff happening in Automerge.
1:10:38 That's awesome.
1:10:39 Brooke, thank you so much for taking the time and sharing
1:10:42 all of this knowledge with us.
1:10:44 super appreciated.
1:10:45 Thank you.
1:10:45 Thank you so much for having me.
1:10:47 Thank you for listening to the Local First FM podcast.
1:10:49 If you've enjoyed this episode and haven't done so already,
1:10:52 please subscribe and leave a review.
1:10:54 Please also share this episode with your friends and colleagues.
1:10:57 Spreading the word about the podcast is a great way to support
1:11:00 it and to help me keep it going.
1:11:03 A special thanks again to Convex and ElectricSQL for supporting this podcast.
1:11:07 See you next time.