Ep. #125, Life After Cold Starts with Matt Butcher of Fermyon
In episode 125 of Jamstack Radio, Brian speaks with Matt Butcher of Fermyon. Together they explore cloud computing, the evolution of programming languages, the unsolved problems that led to WebAssembly, the limits of user attention, and why developers love serverless functions.
Matt Butcher is CEO of Fermyon. He previously founded Helm, was a Principal Software Development Engineer at Microsoft, and was a Deis core contributor at Engine Yard.
Transcript
Brian Douglas: Welcome to another installment of Jamstack Radio. On the line we've got Matt Butcher from Fermyon. Matt, how are you doing?
Matt Butcher: Great, and thanks for having me. I'm really excited about this.
Brian: Yeah, pleasure. Honestly, I saw you come through one of my feeds on another podcast and I meant to listen to that before this podcast and I never got to it. I've been very, very, very busy with the day job and so podcasts have been on the wayside. No commute to listen to a podcast anymore.
Matt: Yeah, I'm way behind on podcasts now.
Brian: Excellent. We got to do some yard work. That's the one time I get, when it's not raining down here. Are you in Seattle?
Matt: No, I'm in Colorado.
Brian: Colorado, okay.
Matt: Yard work for me is shoveling snow.
Brian: Excellent, yeah. Well, that's podcast time. Why don't you introduce yourself? Since we're so comfortable with each other we're talking about our yard work. Who's Matt and what's Fermyon?
Matt: I think I can start, Brian, with where you and I met, because we met at Fermyon's first appearance ever at a conference. We were at Open Source Summit North America last summer. Before that we actually crossed like ships in the night, I think, because I came out of Microsoft, I was very early in the container ecosystem, and I was working at a company called Deis.
Containers had just reached that point at which they were looking stable enough that you could build a real thing on top of them, and at Deis we wanted to build a platform as a service, we wanted to build a competitor to Heroku. So we just dove into all of these rapidly emerging standards and rapidly emerging open source projects. Early on we were using the Fleet scheduler from CoreOS; Fleet got EOL'd a little while after Kubernetes came out, and CoreOS got acquired.
Brian: Yeah, I was going to say "Core-e-os" is what we used to call it on the street. Yeah.
Matt: Yeah, they missed a major branding opportunity there to have a sandwich cookie for a logo. But yeah, we were really into the early container ecosystem, and I had worked at Google in the past, so when Google dropped this open source project that none of us knew how to pronounce initially and said, "This is how we're going to schedule containers. Check this out," we all started playing around with Kubernetes at Deis, and the experience was game changing.
This was when Kubernetes was actually a fairly simple and fairly straightforward technology to use, and we were like, "Oh, we've got to replatform. We are going to replatform this entire PaaS on top of Kubernetes and be the first PaaS offering on Kubernetes." And so we started going all out on this. Around the same time as we were making this big transition, Kubernetes started gaining some traction and Microsoft pulled one of those moves where they're like, "Hey. Brendan Burns, guy who invented Kubernetes. How about you leave Google and come over here, and build the world's greatest Kubernetes team?"
And Brendan is like, "Yeah. Why would I pass that up?" So he left Google and went over to Microsoft, and a big part of his job was assembling a big team that could turn Kubernetes into the next big Azure service. So he and a couple of other folks like John Gossman orchestrated the acquisition of Deis, which was awfully flattering, to know that somebody like Brendan was paying attention to our little Boulder startup, and we all joined Microsoft.
They divided us into two teams, and one team became the team that built AKS, the Azure Kubernetes Service. A really, really awesome team that just knew how to get stuff done and they built this service in, I think, record time. I know that it was the fastest growing Azure service in the history of Azure, they just knocked it out of the park. My team became the container open source development team, so we were kind of R&D and we had two things that were our mandate.
On one hand Brendan was like, "Look, just build open source tools that developers love to use, and all in the container ecosystem and the Kubernetes ecosystem to get folks trying new tools and going, 'Oh, this Kubernetes thing is really powerful. I can use it to do this, I can use it to do ETL transforms and stuff like that.'" So it was a great, great job.
The other half of that job was really more from Satya, Satya had really wanted to pivot Microsoft from being perceived as hostile to all open source, to being actually good, healthy, honest participants in the open source ecosystem. So it wasn't like a marketing thing, it was like, "No, you actually have to do the work. You have to be a reasonable human being that loves people and wants to contribute upstream."
So we were part of that kind of movement inside of Microsoft. The combination of those two things made this what I really think was one of the best jobs one can have inside of Microsoft. We got to go to all the open source summits, all the KubeCons and things like that, and we got to talk to internal Microsoft teams, the rest of the core Azure teams, the .NET team, the HoloLens team, and just learn what they were trying with cloud, what they were succeeding with, and what was really hard. Then we'd go talk to customers and do kind of the same thing with them.
Out of this we would fill our hopper with ideas of things that people were struggling with, and we came into this with Helm. Helm was the package manager for Kubernetes and one of our team's projects, one of the things that we'd built at Deis that Microsoft acquired. But then we just started piling more and more open source projects on top of these.
Draft was a tool for quickly building Kubernetes applications, Brigade was a tool for doing data pipelining. By the time 2020 hit, I think we were at eight or nine open source projects that were all inside of the CNCF. One of the things that we discovered, though, was that there were problems that we couldn't solve terribly well, so the team got together and we said, "We've got this bundle of problems that we cannot figure out how to do with containers. We can't actually even figure out how to do them with virtual machines."
These are things like we wanted to be able to scale down to zero, so when no traffic is coming in, why do you need three copies of your server running? Can't you just scale it all down to zero, and then when traffic comes in, scale it up to three or five, or tens of thousands or however many you need? And we tried to do this with containers, the startup time on containers was just too slow. We tried all kinds of hyperoptimization techniques on virtual machines, still same story. So we had that and a couple other problems, cross platform, cross architecture problems and things like that that were starting to pile up.
Try as we might, we couldn't come up with a good solution. So we were sitting at dinner one night, after we'd spent the whole day planning, the whole team was off site doing this planning, and we were just in that decompression mode. You know how sometimes you get into the decompression state and rather than becoming exhausted and just being like, "Ahh," you get into this creative mode? Where you're kind of like, "Wouldn't it be wild if...," or, "How come we never thought about this?"
And we started talking about maybe there's a third kind of cloud computing that nobody has noticed. Maybe we've got virtual machines, containers, and an empty bucket where we could put a third kind of cloud computing, this kind of thing that should be able to scale up and down rapidly, that should have near instant start times, where you build it once and you should be able to run it on any hardware in any operating system. Wouldn't that be cool?
And we brainstormed a little bit about that and a couple of us got interested, and we really started running with it. That was, I think, late 2018 and so two or three of us played around with all kinds of different things and ended up looking at Web Assembly as a potential candidate for solving it. This is probably a good time to pause and explain what Web Assembly is.
Brian: Yeah. I think a lot of people have heard the terms, and I think we've all seen some conferences and maybe seen Mozilla employees talk about it. But yeah, a refresher would be great because I know it's been around for quite a few years.
Matt: Yeah. So Web Assembly, okay, you can take a really positive, awesome vibe, kind of, "Here's what Web Assembly is!" Or you can take the realist vibe, I'm going to go for the realist one here, just to throw it in for a change. It will not solve all the problems in the world, let's just get that out there in front. Realistically, the browser has a checkered history with language support, right? So the first real language that was supported in browsers, at least in Netscape, was Java and they needed a language to tie Java together with the rest of the platform.
Brendan Eich famously spiked out, in like a week or two, this thing that he originally called LiveScript, which they renamed to JavaScript because Java had so much marketing momentum. So you had Java and JavaScript in the browser. I'm old enough that I wrote JavaScript when it was still called LiveScript, and you could pop up alert dialogs and put things in the status bar, which I don't think you can do anymore. It was a toy language. Java applets were supposed to be the way that we wrote really rich, in-browser code.
As we all know, I don't even think applets are supported in any major browser anymore. It was just a technology that came and went. Then there were a whole bunch of others: there was ActiveX, there was Silverlight, there was Flash. All of these things, at the core, were really attempts to take the browser and treat it as a runtime for another language, or embed a runtime for another language inside of the web browser.
So meanwhile, all of these projects are coming and going and languishing and then, ultimately, support just fizzles out and they drop out of the browser. Well, JavaScript is totally kicking butt and taking names. It goes from this toy language that you can't do anything with, to this language where you can start writing rich, in browser UIs.
Then you get to the point where a couple of people start saying, "Wait, we can take the JavaScript engine out of the browser and build Node.js." And then Ryan Dahl just keeps building cool stuff: Node.js, then Deno, then we start to see Vertex, which I think also came and went. All these technologies are pulling JavaScript engines out and starting to do server side JavaScript and cloud side JavaScript.
Meanwhile, the browser is stuck in this JavaScript-only mode, so that's the stage set for where Web Assembly came from. Luke Wagner and a couple of other people working at Mozilla were looking at this problem and going, "Well, maybe it's just that up to this point, a lot of technologies have been dropped into the browser as extensions, third-party things that were really designed to help Microsoft or Adobe or Sun, or now Oracle.
"What if we wrote a generic runtime, a specification that ideally any language could compile to, and then we could run that binary format inside of the browser? So we should be able to start with C, compile C to this web-like assembly language, and then be able to execute it in the browser and tie together the JavaScript and the compiled C code and get some higher degree of interactivity." That was really the origin story, with a bunch of details omitted.
That's the origin story of where Web Assembly came from. So early on, the goal was to support languages like C, and then, because this came out of Mozilla and Rust came out of Mozilla, Rust became a very early language. Then you started to see a trickle of other programming languages start to support this in-browser model. The browser version didn't quite take off to the extent that I think we had all expected it to.
Maybe it's because the core thesis there wasn't right, that it's not the case that what we really wanted was a generic runtime in the browser, or maybe it's just because JavaScript has by this point gotten so powerful and so prolific that the use cases were much narrower than what people originally thought. But the engine itself was really awesome. Figma is probably my favorite example of people who did cool stuff in the browser with Web Assembly. They took a C++ codebase, compiled it to Web Assembly, exposed it to their JavaScript, and Figma is one of my favorite tools, period. So I'm totally happy with the way that worked out for them.
Brian: Yeah. A powerful product too, as well. Figma, they've kind of unearthed... everyone tries to do multiplayer in their applications, and I think Figma successfully made that work and now I'm seeing a lot of other cool tools. Which, actually, CenoPi has a tool that he's working on. I don't think he's announced it publicly yet, but it's not in stealth mode. That does really cool things.
Matt: So you heard it here first.
Brian: Yeah. But please go on too, as well. So we've got Figma, C++ in the browser, using Web Assembly. Actually, I want to get to Fermyon too, as well. We all understand Web Assembly exists, I definitely have seen all the talks every year from Mozilla employees, talking about this thing, the future. But also I do write a ton of JavaScript, and I probably won't be like... It's good enough.
We were talking a little bit before we actually hit record, I have enough gray hairs that I don't think I'm going to learn new languages every weekend. I know the ones I can write code with and can ship a product with, so that's me. But where are we today with Web Assembly?
Matt: Yeah, I think I can pull the two stories I was just telling, together. Tell you where Fermyon came from, and then answer your question in the process there. So on the one hand, we've started with the story of a frustrated team at Microsoft that was really trying to push the limits and find a new kind of cloud compute. Then on the other hand, we're telling a story about a really awesome technology in the browser that just, for whatever reason, didn't take off in the way that maybe we had all looked at it and expected it to.
So here we are, looking for a really cool candidate to be a third kind of cloud computing, and our set of criteria was fairly well fleshed out, but one of them was that it needed to run in a really secure sandbox. That was the one that got us looking at Web Assembly, because we're looking, going, "OK. Where do people build highly sandboxed runtime environments? Well, the cloud. Okay. We've already exhausted that one.
We can't find anything new there that's going to cut the mustard, pass the checklist. But the browser is another one." When you think about it, the browser is one of the most interesting, highly trusted platforms we run. We load random webpages that we have no idea who authored these things and they're loaded with JavaScript and we don't give a second thought about the fact that it's executing an entire application because we trust the security sandbox.
Web Assembly had to have an even tighter security sandbox than JavaScript did. Really, it was a capabilities-based model where, by default, the code executing in the Web Assembly runtime isn't allowed to access anything. You have to tell it, "Okay, you can call these functions. Okay, you can do this." The JavaScript outside can call into it, and that is really the security model that is necessary for the browser.
That's the security model we wanted for the cloud, because what we wanted was to be able to run untrusted customer workloads. Say, "Yeah, whatever, we're not going to need to scan your software before you can execute it in the cloud. We don't do it with containers, we don't do it with virtual machines." We certainly didn't want to do it in whatever our third cloud runtime was. That got us looking at Web Assembly.
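To make that capability model a little more concrete, here is a minimal sketch of embedding a Wasm module from a Rust host with the wasmtime crate. The module name, the exported function, and the single host "log" function are hypothetical, and the exact wasmtime API surface varies by version; the point is simply that the guest can only call what the host explicitly links in.

```rust
// Sketch only: assumes the `wasmtime` and `anyhow` crates; names like
// "guest.wasm", "host::log", and the "run" export are made up for illustration.
use wasmtime::{Engine, Linker, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "guest.wasm")?;

    let mut linker: Linker<()> = Linker::new(&engine);
    // The guest gets only what we expose. No filesystem, network, or clock
    // access exists unless the host chooses to link it in.
    linker.func_wrap("host", "log", |value: i32| {
        println!("guest says: {value}");
    })?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &module)?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;
    Ok(())
}
```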
We ended up starting Fermyon, the company, with the idea of saying, "OK. Based on what we have learned, if we were to start over and write this third kind of cloud computing thing, what would it look like? Let's go build it." So that brings us up to about where you and I met, Brian, Open Source Summit North America last year. It was kind of our first real, public demonstration of what we had built, which was Spin, an open source developer tool designed to build applications.
Really focusing very much on the serverless mode of writing applications, serverless functions. I can come back and talk about that in a minute, but that's really been our focal story. We've got a developer tool, Spin, that can do that, and then a cloud platform, Fermyon Cloud, where you can execute these things. So you can deploy your application up to Fermyon Cloud, or, if you want to run it on your own devices, on your own cluster, on your own bare metal, you can use Fermyon Platform and install this kind of platform on DigitalOcean or Equinix or whatever.
That brings us to your question, so we tie the two together and then go, "Okay. What is Web Assembly now? And what's it doing? And how are people using it?" We were not the only people to look at Web Assembly and say, "Oh, this is a really promising technology." In fact, as you noticed, Brian, in all these conferences and podcasts and articles, people are finding a bunch of really novel applications for this technology, because the same virtues that attracted us are attracting people in other places.
I tend to think there are four big places where Web Assembly seems to have gained serious traction. The first one is the browser, we talked about that. Places like Figma and Adobe. But IoT is another surprising one. I worked in that field for a long time and the challenge is, to some extent, you're stuck writing a lot of low level code like C. Whether you're doing industrial IoT or whether you're doing consumer IoT, you've got this low level C codebase and then you've got to somehow build the application on top of those low level drivers for sensors and processors and things like that, that give the user a good experience.
So Web Assembly, because it's so easily embeddable, and again because of that great security model, turns out to be a really good fit for the IoT model where you can write the low level C code, add on a Web Assembly runtime, build all of your application in a higher level language of your choice, compile it down to Web Assembly, and execute it. The BBC, Amazon Prime, Disney+, they're all big users of Web Assembly for their streaming players, because they can write some low level shim code for every TV and set-top box and Roku and Apple TV that you have out there, but then the application code that runs on top of it can use common libraries and you've got a lot of code reuse that can happen at the higher levels. It gives you a good over-the-wire update story. It's kind of a cool technology.
So that's number two, right? Number one, browser. Number two, IoT. Number three, I think, is more on the plugin side. With a plugin or an extension, you start introducing these to your tool when you want to make it possible for other developers to bring value to your tool. The browser is a big example of this, and even web platforms like Shopify: Shopify uses Web Assembly so you can write custom extensions and deploy them in your Shopify environment.
There are all these cases where really the use case is, "I have a platform, I want to expose to you, the developer, a way to extend that platform to get more of your stuff done so that, A, I don't have to do all of the work and, B, you can get stuff done the way you really prefer to get stuff done." I think one of the most novel ways I've seen this kind of plugin architecture used is from SingleStore, and now I think other database companies are following along, but SingleStore went, "Oh, nobody likes writing stored procedures in SQL. What if we just made it possible to write stored procedures in Python and compile them to Web Assembly, and push them into your database?"
So instead of having to suck data out of the database, transform it, and then put it back in the database, you can just transform it inside of the source that already has all the data. Bring the code to the data is a really nice way of applying that kind of plugin model, so I'm a fan of that kind of thing. Then the fourth one is the one I love, and that's the cloud environment where I just feel like Web Assembly is a great cloud runtime that has some very specific applications and, consequently, Fermyon has built our platform on that.
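As a tiny illustration of that "bring the code to the data" plugin idea (this is not SingleStore's actual API, just a sketch), a plugin can be nothing more than an ordinary exported function compiled to Wasm; the host loads the module and calls the export per record instead of shipping the records out.

```rust
// Hypothetical guest-side plugin, compiled with something like
// `cargo build --target wasm32-unknown-unknown --release`.
// The host (a database, an editor, a SaaS platform) loads the .wasm file and
// invokes this export for each record, so the code travels to the data.
#[no_mangle]
pub extern "C" fn normalize_score(raw: i32) -> i32 {
    // A small, pure transform -- the kind of logic you'd otherwise have to
    // pull rows out of the database to run.
    raw.clamp(0, 100)
}
```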
Brian: Yeah. So to be clear, at Fermyon, am I writing the code I want and then using Fermyon to deploy that and have fast compute without... The complaint is it's not fast enough, so you've got to do Rust or Zig or whatever the new hotness is. What am I actually deploying to Fermyon? Because you had mentioned serverless functions in compute, so am I writing my code and deploying that directly to your cloud offering?
Matt: The problem that we identified to solve was what we were hearing, even back when we were at Microsoft: developers really liked serverless functions as a developer model because you dive right into the code you care about.
Brian: I'm a fan.
Matt: Yeah. You don't have to stand up an HTTP server, you don't have to spend half an afternoon configuring TLS, you don't have to do any of the process management. You just write a request and response handler. But the problem was, most of the architectures built on that are built on fairly aged technologies. Take Lambda: every function is executing on a virtual machine. Virtual machines are a very heavyweight object, and they are not fast to start up.
Consequently, to be able to execute Lambda functions, Azure functions, Google functions, those kinds of things, essentially they have to keep around a pre-warmed instance of a VM that they can drop the code on in the last mile and try to execute it. Even so, they're talking about 200 milliseconds to 500 milliseconds to cold start your application, even though your application is just a function.
So we looked at that and went, "Oh, well. That's an architectural thing, the problem is it's the wrong cloud runtime." Now, there's another aspect to this same problem which is that developers complained to us quite a bit that writing serverless functions is great when you're in your IDE, and then you spend 45 minutes configuring an environment to run this thing.
I actually timed it, I did a straight Lambda function start to finish from an existing account, to, "OK. What does it take me to get there?" And I was in at about 47 minutes from that to having Hello World deployed and I'm like, "OK. I understand why people are complaining, this is just a long time." So we looked at this and said, "All right. Developers have told us about something they love, serverless functions. They've told us about a couple things they really don't like, really long cold starts, really clunky developer experience."
We really wanted to solve those two problems, so we built Spin, an open source tool, to make it super easy to write these kinds of applications. Then Fermyon Cloud is one way to deploy these applications into the cloud and run them. Where a Lambda function takes 200 milliseconds or more to cold start, we cold start in about a millisecond, so a couple of orders of magnitude faster. The reason why is that Web Assembly is just that fast, and when you optimize your binaries before you upload, you can make it even faster. One millisecond is a good, solid starting point.
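For a sense of the developer-experience point, here is roughly what a Spin HTTP component looks like in Rust with the spin-sdk crate. Treat the exact types and builder methods as illustrative rather than authoritative; the SDK has changed across versions, and the handler name is made up.

```rust
// Sketch only: assumes the `spin-sdk` and `anyhow` crates; types follow the
// SDK's general shape but may differ in the version you install.
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

// A single request/response handler: no HTTP server, TLS setup, or process
// management. Spin instantiates the Wasm component and calls this per request.
#[http_component]
fn handle_hello(_req: Request) -> anyhow::Result<impl IntoResponse> {
    Ok(Response::builder()
        .status(200)
        .header("content-type", "text/plain")
        .body("Hello from a Wasm serverless function")
        .build())
}
```

In practice you would scaffold something like this from one of Spin's starter templates and run it locally before deploying, but again, treat the snippet as a sketch rather than a copy-paste recipe.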
Brian: Yeah, I don't mind that one.
Matt: Yeah, not at all. Right?
Brian: Yeah. Now that I have a better understanding of the platform, it sounds like we at Open Sauce, my day job, have things that run in the cloud, and the question that we have now is like, "We are choosing to use Jamstack projects to host our serverless functions, Netlify functions." Don't build a whole Lambda and then maintain that and upload it with a ZIP folder, or anything like that. It's nice to have the function to compute, but it's the same cold start, it's the same. We're not owning that infrastructure. So with our enterprise product that we have not started working on as of yet, but anyone who's a VC, it's going to be done soon.
Matt: Same for us, by the way.
Brian: Yeah. What we need is to be able to host this stuff in the cloud so that people can run it, or host it on their own machines, on their own compute. But we also have the slowness of, "Ah, it's going to take..." So, for example, the Linux Foundation Insights is a competitive product to what we're building, except it's only for Linux Foundation companies. Their data takes at least 24 hours or even a couple of days to update, and the reason for that is it's got to go through all these Git commits and turn them into insights. What we're trying to do is build something that's much faster, that we can do on demand and folks can use on their local infrastructure or host in the cloud. But it sounds like if we just wrote some JavaScript and put it up on Fermyon and leveraged the power of one-millisecond Web Assembly, we could be extremely competitive.
Matt: Yeah. I think any of these kinds of cases where you're looking at a FaaS kind of system like Lambda and going, "Okay, well, we can write pipelines to do it this way." Yeah, that's the kind of workload that we love to do. That, and web application backends.
Another nice thing about this technology: user research suggests that user attention starts to dwindle after around 100 milliseconds of waiting, before you're even aware of the fact that your attention is starting to wane. So if you're waiting two or three hundred milliseconds, or 500 milliseconds, just to cold start before your application even starts running, and then you're returning data to the user, you're playing at a supreme disadvantage.
So we were like, "Serverless functions should be the kind of thing where we can write them and there's no cold starting there. You can deliver things nearly instantly."
Fermyon.com actually runs on our own platform, and a big part of the goal was, okay, on the Google PageSpeed test we need to score in the high 90s. We got all the way up to 100 at one point, and then I think we're back at 99 or 98. But part of that was because, okay, we really want to be able to say to people with a straight face and in all honesty, "Hey, this will do better for you if you're writing your applications this way. It is legitimately faster than these systems that were built on a technology that's 10 or 15 years old."
Brian: Yeah. I'm excited to try this out. As I mentioned before I hit record, I'm no longer doing full time engineering, but I'm happy to do a weekend project and this sounds like perfect weekend project stuff.
Matt: There you go. That's what I want everybody to say, "Yeah, I'll try it over the weekend. Then, on Monday, I'm going to spend just a little more time working on it. Then, Tuesday, I'm going to start a new project." I love that, I love that.
Brian: Yes, exactly. It'll become a huge spike, and then, little do you know, it's now embedded in an entire infrastructure.
Matt: Right, yeah. That's right.
Brian: Perfect. Like I said, I'm excited. I hope the listeners are excited to try this out. I did want to actually ask the question of the name, what's the history behind the name?
Matt: Oh, and I'm the worst person to ask because I'm... Okay, so my actual education background: I have a PhD in philosophy, so I have basically nothing in the realm of the hard sciences. Everybody else at Fermyon is more interested in physics and stuff like that, and fermion is a term in physics. There are fermions and there are bosons, and nobody wants to have a company named Boson. I don't know why.
So fermions are small particles of which everything else is composed, so we liked that idea: a function as a service is this little, particulate piece that other, bigger applications are composed of. Fermions tend to have a couple of characteristics, one of which is spin, so Spin is actually a pun on fermion. If you ever got really bored and perused our codebase, you would find all kinds of very nerdy physics references that I actually don't understand.
Brian: That's amazing. Next time I see y'all at an Open Source Summit, or a KubeCon, I will corner the team and ask them more questions about these physics terms. But very clever. Cool. Anything else you want to mention? We're actually winding down the conversation before we hit picks, but is there anything else in the Fermyon world that you want to share?
Matt: Yeah. I guess we didn't really talk about languages much. Another big deal to us was to be able to support a broad swathe of different languages. We started with the core compiled languages, Rust and Go, and we've worked our way into scripting languages. Most recently we added Python support and then we added JavaScript support. But that's a big deal, and one of the things we think is a key differentiation point is that if you can compile it to Web Assembly with the system interface extensions, you can run it on Fermyon. I think that's a really good thing. Oftentimes these serverless platforms have to zero in on either one language or a small set of languages, and we've been trying to make it possible to add a plethora of different languages.
Brian: Excellent. Yeah. Looking forward to seeing more language support. It seems like there's a few new languages that are coming out, I mentioned Zig in passing but, yeah, Zig is something I've been looking at as well because it seems like people are getting a lot of success from that. At least a handful of companies are. Yeah, maybe you'll have Zig support pretty soon as well.
Matt: You actually can write Zig apps today, if you want. It's usually not the most catchy one to lead with, it wows all of like nine people. But Zig is an awesome programming language, I really do like it. It's actually easier to compile C to Web Assembly with the Zig toolchain than it is with the C toolchain. It's a great tool and it's supremely architected. I'm a big fan of the Zig team and the work they do.
Brian: Cool. Yeah, I've got to reach out to the Foundation, and I'm looking forward to connecting with them and learning more. Well, with that I do want to transition us to picks. So folks, definitely try out Fermyon, at Fermyon.com. Yeah, actually it's spelled the way it's said, so if you're not a physics major you'll also find it in the show notes. But Matt, we have Jam picks, these are things that we're jamming on: could be music, food, technology related, all of the above. Everything is on the table. And, if you don't mind, I'll go first. I've got two picks. One pick is I've been using lots of Looms recently. I've been doing this thing because our team is all remote, distributed. Actually, is Fermyon also distributed or do you guys have a-
Matt: Yeah, yeah. Very similar to you.
Brian: Yeah. So I've been doing these Monday morning, like, "Here are our metrics. How many user signups, how many people have done this interaction on the page." And I try to accompany that with a video. Actually, what I've been doing with those is Slack videos, which is very much like a Loom, five minute limit, quick, like, "Hey, here are the numbers, here's a quick demo, here are some things that are high priority fixes, here are conversations that I had last week." And summarize the day.
I find it's better than even having All Hands, like a whole hour long conversation. It's a quick, five minute, "Hey. State of the union. This is what we want to start the week out." Then we have our normal meetings and stuff like that. But I've been doing Loom videos and like, "Hey, I just talked to a potential customer. Let me just do a quick video, walk through that demo, send it over to the designer and get that feedback loop going."
It's so much better than jumping on an hour long call because these meetings tend to be an hour, I don't know why they're always an hour long. But it's like, "Hey, can we just chat?" And then it ends up being like, "Lets talk about our weekend." So I'm just like, hey, we can still have those calls, but let's do the Loom video, summarize whatever we're trying to get out and then we can be better prepped when we do need to get on a Zoom or a Slack call, or something like that.
Matt: I'm so glad to hear you say this because we've been trying the same thing with Slack videos and Loom, particularly for internal messaging and it's so great to be able to... I'll record a two minute video announcement in the morning, we've got about 30 something people at Fermyon. Two minute announcement video and it takes two minutes of my time, and everybody feels a little more connected and it's a meeting nobody has to go to, so I'm a big fan of Loom and the Slack videos.
Brian: Yeah. That's Loom's whole tagline, it's always about how many meetings you were able to eliminate from the week. And I started noticing this thing, maybe this is another pick, but Calendly. I've got a Calendly but I can only book meetings two weeks out, and the reason for that is because I was getting to the point where my entire month would be booked up and it would just be impossible to find time to even have some deep work, because folks would just jump on the calendar whenever they could because they had the link.
So I started turning off different Calendly links... it's like a season. Every three months I'll turn off one, and then the next quarter starts and I'll turn on the other one. But then you can only book two weeks out, which basically helps me control the amount of meetings I have because, at this point, we're six people and we don't need that many meetings to get the job done.
Matt: No, that Calendly trick, that's a great tip. I'm going to do that too now.
Brian: Yeah, and I had one more pick as well, which was my GitHub notifications, my notification management. Working at GitHub as a full time employee, doing notifications is almost near impossible because if you have any subscription to any big GitHub repos, like github/github, it's just impossible to get any sort of work done when you have the monorepo turned on. But it was not even just that, every team has their own repo at GitHub, so when I left GitHub it was like a breath of fresh air to actually use notifications properly.
And so my whole thing is if you need me to look at something, @ mention me and it will show up in my feed, and I only look at mentions. I'll just go down either in the morning or evening, just like email, I'll go down the mentions and respond to stuff. That way I can keep up to date on what I need to review as far as code goes, or answer questions and random issues, or provide a quick little video or whatever it is. But my pitch to everyone on the team, "@ mention me. I'll get to it pretty quickly. If you don't want me to get to it quickly, don't @ mention me and I'll find it in the next 30 days."
Matt: So is there a GitHub notification equivalent of Inbox Zero then? You're at @mention zero at the end of every day?
Brian: I've given up on inbox zero and notification zero. If something is really important, people will bump it and I think that's just, as engineers, we just know. A quick little bump and it will ping, just to get it back on top of the inbox, I think it's a good standard. Every now and then people will abuse that and be like, "Hey, I opened this up two hours ago. Bump." And you're like, "Ah, it doesn't work that way."
Matt: "Aren't you back from lunch yet? Aren't you back from lunch yet?"
Brian: Exactly, yeah. I've definitely had to cut, because a lot of my team is in Europe, myself off on how early I show up at the laptop because I can get sucked in pretty quickly and next thing I know it's 12:00 PM and I have no idea what happened in the morning. But I got so much work done.
Matt: All right, well, I've got two. Of course I smuggled a Web Assembly one in here, a project from the Bytecode Alliance called Wizer. Wizer is basically a way to pre-initialize... So with Web Assembly, you can execute it, freeze it, store the frozen version, and then start execution from there. Now, that's a generally dangerous thing to do, but Wizer is a tool that makes use of that to say, "OK, you can pre-initialize your code, the early part of startup, and then freeze it, and then you can cut that much time off your startup time in actuality."
This has really awesome implications if you're doing scripting languages. So for our JavaScript and our Python implementations, for example, the first step is you initialize the interpreter, then you start loading scripts in, then you start executing. So what if you could start up the interpreter, load in all the scripts, and then freeze out the binary image there? That's what we do with Wizer. It's a really awesome way of doing some performance optimization for scripting languages in particular. I think we also use it for .NET to start up the .NET runtime. So that's my Web Assembly one.
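For the curious, Wizer can be driven from its CLI or as a library; here is a hedged Rust sketch of the library route. The method names follow my reading of the wizer crate and may not match the current release exactly, so check the Bytecode Alliance's documentation before relying on it.

```rust
// Hedged sketch: pre-initialize a module with the `wizer` crate (assumption:
// these method names exist in the version you use). Wizer runs the module's
// initialization export, then snapshots memory and globals into a new .wasm
// file so that startup work never has to happen again.
fn preinitialize(wasm_bytes: &[u8]) -> anyhow::Result<Vec<u8>> {
    let mut wizer = wizer::Wizer::new();
    wizer.allow_wasi(true)?;              // let the init code use WASI (assumed option)
    let snapshot = wizer.run(wasm_bytes)?; // returns the frozen module bytes
    Ok(snapshot)
}
```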
Then, this week Apple released an app that I really do love and use on my phone. Whenever I'm coding, or even more so when I'm answering email and I need that kind of soothing vibe going on, I'm a big fan of a variety of classical music, especially minimalism and stuff like that. Apple released a kind of version, I guess, of Apple Music that's called Apple Music Classical, which is just oriented more towards classical music. I was skeptical when I first got it and thought, "This is going to be Apple Music with a bunch of different stuff in it."
But actually it really seems to be tuned into what somebody who's looking for classical music is looking for, whether it's a specific genre, maybe a particular time period, a particular composer, particular ensembles, or forms of music performance. That kind of thing. It's clever, I like it, and I'm a big fan and have probably used it for about 15 hours so far this week.
Brian: Oh, so cool. Today I learned. I feel like I've been under a rock all week, real focused on the day job. But yeah, I'll definitely check out that app, for sure, and Wizer. You said it was dangerous, but when you said you can freeze compute I was like, "Wow, that's amazing, to be able to say this process can start on Monday as opposed to work through the weekend." I'm trying to think of other use cases where I would use that, but if it's not recommended, maybe I'll give that thought process a break.
Matt: Yeah. I didn't want to give the impression that I didn't recommend going and trying random things with this because it's always fun just to lower your expectations about your success off... But we tried to do this with containers, many, many years ago, and see if we could figure out a way to freeze a container and then move it somewhere, and then reconstitute it. It was fraught with peril. Web Assembly is maybe a little closer to the kind of format you'd need for that and you still would need to figure out how to do things like handles to files or handles to network sockets outside of the runtime. But if you don't need to worry about those things, there's a lot more that you can snapshot safely.
Brian: Yeah, that's so cool. This has been an awesome conversation, I feel like I'm significantly caught up to speed on what's happening in the Web Assembly world. Super excited about the success of Fermyon and what you guys are going to be doing in the future, so I'm looking forward to rubbing shoulders again at a future conference and seeing more ships and releases, for sure. Now it's on my radar.
Matt: Thanks. Yeah, this was so much fun. Thanks for having me.
Brian: Yeah, my pleasure. And folks, keep spreading the jam.