Ep. #90, Performance First with Ishan Anand of Limelight Networks
about the episode

In episode 90 of JAMstack Radio, Brian Douglas speaks with Ishan Anand of Limelight Networks. They discuss Ishan's performance-first career journey as well as tools and tactics for optimizing build times.

Ishan Anand is VP of Product at Limelight Networks. At the time of this recording he was CTO of Layer0 (recently acquired by Limelight Networks). Ishan has been launching and developing mobile applications for the iPhone (and other mobile platforms) since the day it was released.

transcript

Brian Douglas: Welcome to another installment of JAMstack Radio.

On the line we got Ishan Anand talking to us about the evolution of the Jamstack. Ishan, welcome on.

Ishan Anand: Yeah, thanks for having me.

A long time listener, really excited to be here.

Brian: Yeah, yeah, it's awesome.

I guess the podcast has been around long enough that folks who are now guests have been listening to it since almost the beginning, beginning-ish.

My question to you is, what's your relation to the Jamstack and what brings you on today?

Ishan: Yeah. I've taken, I like to say, a convergent evolution path to the Jamstack.

It gives me a different lens to look at the whole space.

But just by way of background, obviously I've been working my way up the stack.

Right out of college, I was actually doing device drivers in the computer music industry. If you've heard of this program Pro Tools?

Brian: Yep.

Ishan: I worked on that.

Brian: Oh, you actually worked on the actual software Pro Tools?

Ishan: Yes. Yes.

Brian: Okay, wow. I mean, I was a bedroom studio musician in college, so I'm familiar with Pro Tools.

Thank you very much for your hard work.

I haven't used it in a while, so I'm not sure if I actually used it while you were working on it, but thank you anyway.

Ishan: Yeah, this was actually in the early 2000's, and actually one of my first projects was writing the device driver for the Mbox, which was one of the first USB recording tools they supported, back when you had FireWire or USB and people were still doing PCI cards.

But it's a great place to work, half the people there are musicians, and I was a much better engineer than a musician so that's where I ended up.

Then I met my co-founder and started doing more application programming, mobile and cloud, but the consistent theme has usually been something about performance.

The reason I say it's a convergent evolution is that a lot of folks in the Jamstack ecosystem, I feel, start with static site generators, and they're typically using them on, shall we say, smaller sites like blogs or marketing pages.

Then what we've seen over the last few years is people trying to apply those benefits to larger sites with more pages or more frequent updates.

And my Jamstack journey starts at the other end, at the large complex sites.

It starts with the prior company with my co-founder Ajay Kapur, and it was helping very large companies, typically e-commerce sites, go mobile through a kind of server side responsive platform.

We kind of stumbled into a lot of the underpinnings of Jamstack, but on really large sites without having to do static.

There were really four key areas there that were signs of what was to come, before we went off and did Layer0.

The first one was serverless, we were doing these transformations of desktop sites to mobile on the server, and what do front end developers love?

They want to do it in JavaScript, which is a language they already know.

We built out our own serverless platform running JavaScript on top of EC2. This is before AWS Lambda was out.

Brian: What year was this, that you were tinkering with this?

Ishan: This is 2014, 2015.

We started building it before Lambda came out and while we were building it, Lambda was announced and it's actually still running today. It's one of the oldest running pieces of serverless JavaScript.

Now there've been others actually around the same time within a year or two of us.

I don't know if folks remember Parse for example, also was doing server side JavaScript, so we weren't the only ones.

But what was unique about ours is it was really tuned for serving web requests, and at the time Lambda wasn't yet there.

The second key thing that we were working on is, because it was the early days, mobile phones were really slow, networks were slow, and so performance was, and still is, a really important part of mobile.

And the value of caching at the edge was hugely important.

We actually spun up our own mini CDN with our own Varnish nodes in order to get the best possible performance.

But one of the really interesting lessons I got out of that is, you can take a page and if it hasn't been designed to be cacheable from the beginning, it's really hard to get that engineering team to go and make it cacheable afterwards.

To make that really concrete for the listener, you've got an e-commerce page and it says, "Welcome back, Bob," at the top.

And you realize everything on this product page, except the, "Welcome back, Bob," and the cart count is cacheable.

Why don't we just serve this thing from the edge?

Can we pull that out so the server doesn't send it on first load? Then we can cache the whole thing and just late load that piece.

It's a lot of work to get people to go through and do that page, by page, by page.
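To sketch what that late loading looks like in practice: the cached page ships the same generic HTML to everyone, and a small script fills in the personalized pieces afterwards. This is an illustrative sketch only; the /api/session endpoint, element IDs, and field names are made up.

```javascript
// Hypothetical sketch: the product page itself is served from the CDN cache,
// and only the personalized fragments are fetched client side after load.
async function hydratePersonalizedBits() {
  // /api/session is a made-up, uncacheable endpoint returning per-user data.
  const res = await fetch('/api/session', { credentials: 'include' });
  if (!res.ok) return; // anonymous visitor: keep the generic cached page as-is

  const { firstName, cartCount } = await res.json();
  document.querySelector('#greeting').textContent = `Welcome back, ${firstName}`;
  document.querySelector('#cart-count').textContent = String(cartCount);
}

document.addEventListener('DOMContentLoaded', hydratePersonalizedBits);
```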

But when you're building something statically, it's, I like to say a Jedi mind trick, that gets you to think about, well how is this page going to react if it was just served from the CDN and no server was around?

That was the second thing. And then the third and the fourth, which I think were really interesting: the way we had done this transformation, we preserved all the functionality so it could still pass through, and we were just changing how the front end looks in order to make a mobile version of the site.

What we effectively had done is decoupled the front end from the back end.

It really became very clear to us, I remember in 2015 we had a chart and it showed one customer, their desktop team during their holiday freeze period...

Typical large e-commerce site, they go into a whole freeze, and they made no deploys during that time, during their peak traffic season.

And their mobile team, which was on our platform, did changes multiple times a day.

I think it was one change for the desktop team and 186 times for the mobile team.

And they're measuring and experimenting with ways to improve the conversion funnel, and one of the things we like to say is that as you go lower down the stack, it gets slower to iterate.

So making a database change is a lot slower and more cautious than changing HTML.

When you decouple those two, you can let the HTML move a lot faster than application logic or even the database.

Then the other thing that we had also stumbled into was this deploy preview type workflow. We had built these things out called modes which were originally designed so you could create a mobile version of your site, a tablet version of the experience, all different devices. But people started using them to just push the latest version of their code so they could show it to somebody else to preview, and it very quickly got used as an ad hoc staging.

We saw a lot of these same benefits that come from decoupling, we saw these benefits that come from speed, but these are already highly dynamic sites.

We started at the other end of the market and so we realized that we can take this as a key set of benefits and bring that to large e-commerce sites, without having to deal with purely static.

That's where I come from, and why my perspective on the Jamstack is what we call a serverless-first perspective rather than a static-first perspective.

Brian: Yeah. I mean, that's an interesting introduction to the Jamstack itself, and you're also doing it pretty early around--

Because I started doing Jamstack type sites around 2014, 2013, probably even when I first started writing code.

I always approached it as I'm learning the front end, I'm going to forget about the back end.

And I think a lot of bootcamp students today can attest to that separation of concerns as they learn different things.

You might go really deep on React, but just use Mongo to interact with a database.

That note that you brought up about doing database migrations or iterations can be really complicated.

And if you have to create a new dashboard, but also touch that dashboard and make sure it connects directly to the SQL, it can be mind numbing.

So you felt the pain pretty early on, which is interesting because it gives you such a deep understanding of those pain points.

You've been giving this talk recently about the evolution of the Jamstack, so could you talk more about that and what you've been going around preaching at remote conferences and the like?

Ishan: Yeah sure.

I've been talking about this problem of build friction, but what you're prompting in my head, when you talk about not wanting to worry about database migrations and the back end stuff, is what's really happening here. If you go through the classic product manager five whys, the Jamstack, in some sense, is a solution to an organizational problem, like microservices is.

That organizational problem is that front end developers now have more capabilities than they've ever had before, now that JavaScript is more powerful, and they finally need their own space.

Their own home, where they can say, "This is my area where I can practice independently without having to be encumbered by the other parts of the system."

That's I think why Jamstack has been so popular and it's really resonated with a lot of folks.

I am more a front end developer these days, despite my background in systems programming, than I am a back end developer.

I remember back in the day, when Node came out, I was so excited we can finally have the same language on the front and the back end.

I was actually excited, I don't know if people remember Jaxer, which was another early attempt to do JavaScript server side.

I think that's why there's a lot of resonance here, but I think the thing that's been caught up in the Jamstack is this very static first mentality.

I think it's great, static is great for when it works, but it creates this build friction.

In the classic static version of the Jamstack, you basically build all the pages ahead of time, in a deploy, before they're served to the user.

If you have a lot of pages or if your pages are changing frequently, then that build time can be a source of friction.

That latter one also gets forgotten, because in a large, say e-commerce organization, not only do they have a lot of products, but they also have people whose whole job it is to be constantly changing what categories...

The merchandisers are changing what categories things are in, what the copy is on a product, what different sets of landing pages for categories are, and there's a lot of iteration happening there.

Or maybe they just want to throw a promo banner at the top of all their pages and they don't want to hurt the performance of their website and do it client side, so they need to change the header across every single page.

If you have thousands or hundreds of thousands of pages, that can be a real problem.

I know at the Headless Commerce Summit last year in 2020, and there's one in 2021, it was something like three quarters of the respondents were working on sites that had 10,000 pages or less.

But if you think about large e-commerce sites or household names, these are sites that can have easily over 10,000.

If you just take as a comparison, number of products, your typical physical retail grocery store is 60,000 products, 60,000 SKUs.

Online stores can be even larger, Staples is 200,000 and it's not hard to get into the millions.

I use an example, there's a B2B forklift parts company, and you'd think it's a small niche, but it's parts, so they've got 8 million SKUs.

On the flip side, it's not just e-commerce, so content, if you look at folks like Buzzfeed, the New York Times, Wall Street Journal, Washington Post, they're doing roughly 80,000 stories a year.

In fact, Washington Post is 182,000 per year, the last data I looked at, and that doesn't even count their archive.

So imagine if you change the header on that and everything else, or there's a breaking news update and they need to put it on the header of every single page, they have to rebuild all those pages.

There's a question both in terms of cost and computation, as well as time.

And then layer on top of that things like A/B testing and personalization, and just having to do frequent updates, and that becomes really, really hard.

That build friction can become really problematic.

Brian: Yeah. I've chatted with Kyle Mathews from Gatsby and some of the stuff, the computation that they had to do.

Even with the Gatsby examples and templates, but also a lot of their customers, they come to mind because when you think of any customer that has the 10,000-plus pages, there are a lot of things you have to do so you don't build every single page.

We're seeing a trend toward ISR, that's the acronym, incremental static regeneration.

What's your thought on where that movement's going?

Ishan: I break it down into two sets of techniques.

There's ones that I'll say are still static techniques.

The first set is optimizing build times. There's something called incremental builds, separate from incremental static generation, and that's just: any time there's a change, you only rebuild the pages that changed.

It's kind of the classic optimization you'd expect.

There's some other things you can do, like there are static site generators that are just simply faster than others.

Hugo is famously fast, there's a new one called Toast that's written in Rust.

These tend to actually not be written in JavaScript and they can be very fast.

Another technique is simply to just make everything render client side, but that has SEO problems and performance problems typically.

Then there's this slow evolution we've had over the years, towards dynamic techniques.

I like to actually say the very first one was the pre-history. I follow music, right? So before there was punk, there was proto-punk.

We looked back later and we said, "Oh, that was proto-punk," even though nobody called it that at the time.

And my favorite example of that is actually the one Phil Hawksworth did, which I'm sure you remember, the V-lolli experiment.

And he used webhooks to dynamically trigger a rebuild anytime somebody created some user generated content, in this case the user generated content was a lollipop.

But the point was, he didn't have to know all of those ahead of time, at build time.

The next kind of evolution about a year later was when the folks at Next.js baked into the framework this thing they call incremental static generation.

They were really inspired by stale-while-revalidate, which is a caching policy that, having built our own CDN nodes, we were actually intimately familiar with.

But it basically says: when a request comes in, if it's already in the cache, let me just serve that up to you, and then I'll also go refetch it in the background.

So maybe you get a very out of date version of the page, or a somewhat out of date version of the page, but everyone after you will get a fresh version.
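In plain HTTP terms, that caching policy is just a Cache-Control directive. A minimal Express-style sketch, with arbitrary durations and a stand-in page renderer:

```javascript
// Sketch of the stale-while-revalidate caching policy on an origin response.
// The route, durations, and renderProductPage() are hypothetical.
const express = require('express');
const app = express();

const renderProductPage = (id) => `<h1>Product ${id}</h1>`; // stand-in renderer

app.get('/products/:id', (req, res) => {
  res.set(
    'Cache-Control',
    // Fresh for 60s; for the next 10 minutes a CDN may keep serving the stale
    // copy while it fetches a fresh one in the background.
    'public, max-age=60, stale-while-revalidate=600'
  );
  res.send(renderProductPage(req.params.id));
});

app.listen(3000);
```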

The way incremental static generation works, basically, from the user experience: they hit a page, and this page has never been rendered yet, so it's a fresh request.

The CDN will show them what's considered a placeholder page. It'll just be those squares and skeletons that give you the shape of the page you're supposed to see.

And under the hood, the infrastructure quickly tries to build that page content.

When it's done, the placeholder page will load the JSON data in and actually display the page to the user.

It's essentially being client side rendered at that point, but anybody else who comes and visits after that will get the statically built version of the page, served from the cache. What this lets you do is, as your traffic comes in, you're able to build out those pages.

I say it was in the framework, even though what's in the framework is the commands you as a developer use to trigger it; you're setting either getStaticPaths or getServerSideProps.

But the actual implementation is really a product of your framework and your architecture.

There's a lot of orchestration under the hood that needs to happen to be able to say, on a cache miss, serve up this placeholder page.

Now go build the page content, serve up the JSON to the client side thing that's waiting and then store this statically rendered version separately into, say something like S3, so it can be served up.

It's actually now a product of your framework and your infrastructure.
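From the developer's seat in Next.js, the commands are small while the orchestration happens underneath. A rough sketch using the standard getStaticPaths/getStaticProps API; the data URL is made up:

```javascript
// pages/products/[id].js -- a sketch of incremental static generation in Next.js.
export async function getStaticPaths() {
  return {
    paths: [],       // build nothing up front...
    fallback: true,  // ...serve a placeholder and render the page on first request
  };
}

export async function getStaticProps({ params }) {
  // Hypothetical data source.
  const product = await fetch(`https://api.example.com/products/${params.id}`)
    .then((r) => r.json());

  return {
    props: { product },
    revalidate: 60,  // stale-while-revalidate-style refresh window, in seconds
  };
}

export default function ProductPage({ product }) {
  // With fallback: true, this renders without props (the placeholder) while the
  // page is being generated; later visitors get the statically built version.
  if (!product) return <p>Loading…</p>;
  return <h1>{product.name}</h1>;
}
```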

This is like a crack in the Jamstack wall in the sense that we lose a couple of principles that are classically Jamstack.

We lose certain things around atomicity and immutability because some people are getting out of date versions, and some are not.

It's also less portable; you can't just take this thing and put it on an FTP server anywhere else.

There was another technique, very similar, that Netlify has put out an RFC for, called Distributed Persistent Rendering.

Essentially what Distributed Persistent Rendering says is, when a request comes in for a page that has not already been rendered--

I should take a step back: all of these solutions let you say, "Let me make some of my site static and some of it 'dynamic'."

For the dynamic piece in DPR, or Distributed Persistent Rendering, when the request comes in, they just generate the page content.

There's no placeholder page, there's no stale-while-revalidate, and that page, I think they want it to persist through rollbacks as well, so you can roll backwards and forwards.

But under the hood the idea is very much still the same of, as traffic comes in, it's actually building out the pages that are necessary to build.

We support ISG and ISR, and we also have something we introduced called parallel static rendering, which is: you stick a CDN in front of serverless server side rendering, and when a deploy happens you basically predictively look at, well, what are the highest traffic pages?

Why wait for that traffic to come in? On a lot of really large sites, it follows a power law.

You can say, I know these are the most popular pages in terms of traffic, so as soon as I deploy, start building just those. If another request comes in organically for some other page, obviously build that one on demand, but in the idle time, start building out the rest.

You kind of actually decouple the build time from the deploy time so that you can move really, really fast and not have to worry about stuff.
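Layer0's actual implementation isn't spelled out here, but the predictive piece can be sketched as a simple post-deploy warm-up: request the highest-traffic URLs so the cache is populated before organic traffic arrives. The origin, URL list, and marker header below are all hypothetical.

```javascript
// Hypothetical post-deploy cache warm-up: request the most popular pages so the
// CDN/SSR layer renders and caches them before real traffic arrives.
const ORIGIN = 'https://www.example.com';          // made-up site
const topPages = ['/', '/sale', '/products/123'];  // e.g. top URLs from analytics

async function warmCache() {
  for (const path of topPages) {
    const res = await fetch(ORIGIN + path, {
      headers: { 'x-cache-warmup': '1' },          // hypothetical marker header
    });
    console.log(`${path} -> ${res.status}`);
  }
}

warmCache().catch(console.error);
```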

But all of these are really very dynamic techniques, and for some folks it's, I kind of call it an identity crisis.

For a long time Jamstack was defined as being static only, but now suddenly we've got DPR, ISR and these other techniques that are serving HTML from dynamic server side functions, and what is and what is not Jamstack suddenly gets a lot more gray.

Brian: I mean, you prefaced all this with the idea of punk and what we call proto-punk, even though it didn't have that name.

I think what we thought of the Jamstack as being, static sites rendered at build time, is now getting dynamically rendered.

So now we're moving to another evolution, and I don't know if we're in the post-Jamstack era.

I don't think we're post-Jamstack, but perhaps we're in this... I know there was a pop-punk phase we went through in the early 2000s, around 2004, and then we had emo, which was almost kind of like punk.

But anyway, if we really get into splitting hairs on the music analogy, I think we're in the evolution, and I think maybe two, three, five years from now we'll probably look back and be like, "Yeah, it was a fun time. I'm so glad we're here, but while we were figuring these things out, it was cuckoo crazy, trying to figure out if this is the right way to go moving forward."

Case in point, I actually completed a site over the weekend where I was using a GraphQL database or API and using Next.js.

And I had to make a decision whether I was going to use getStaticProps or getServerSideProps.

And I did it both ways, and I think I ended up going with getServerSideProps because I liked the way the data was being rendered.

I know I could just throw that on Vercel and it works, with a bunch of magic that they're already doing as well.

That's a decision I made just at a whim on a weekend because it was a project that probably will die in the future.

But I just had to make the decision, so at least I can show somebody, hey this is the working app, who cares how the data's getting there?

But there's a lot of practices and things I use to get that site up and running, and consuming the GraphQL API and having the database separate, that I still use from the Jamstack.

I just happen to not be building everything statically when I'm deploying it.

Ishan: Yeah, I'm famous for driving analogies into the ground. I really like the music one.

I'm going to try and actually hold myself back from doing what I habitually do, because it's like maybe Kurt Cobain era is over and now it's like, what comes next?

We're suddenly lost in a daze. But what's really interesting about what you were just talking about is... And it points to...

I gave this talk at the Certified Fresh Events that Brian Rinaldi runs, where I elaborate on this point a little bit more.

I think there's a new definition of Jamstack we're evolving towards, and that definition is one of really just two things for me.

It's serving data from the edge and doing it with a front end developer experience or empowerment.

I've used this analogy before, which is, I feel like at the end of this journey the ecosystem is on, we'll come back to caching, putting a CDN in front of servers.

And it's kind of like the T.S. Eliot quote, we'll know that for the first time, and it'll be the same primitives, but in new ways.

When you're picking between getStaticProps or getServerSideProps, you're not thinking in the old ways of cache control headers.

There's so much actually happening under the hood that you don't have to think about.

You're up in your front end React code, but it's actually orchestrating a ton of things under the hood for you, and that's what's new and that's what's different.

The static piece is a red herring, I feel.

It doesn't really matter how it gets to the CDN, whether it's statically built or it had to be rendered the first time, but what's crucial is that it's been designed such that it's served from the edge as much as possible.

If you actually take a look at the Jamstack benefits, and you just go to jamstack.org, there's performance, security, scalability, developer experience, almost every single one of those, the CDN is what enables it or some combination of the CDN plus an improved developer experience.

I sometimes have said it's made it somebody else's problem. Scalability and performance, it's because the CDN's handling it.

And if you've run your own CDN nodes, you know that's not trivial.

But as a front end developer, it's scalability for you-- Because you don't have to think about it; somebody else is doing it.

That's true for almost everything else when you look at those benefits.

The CDN really is the underappreciated linchpin, and that to me, I think points the way to what the new definition of Jamstack should be in the future. I don't think it's binary, I think it's really a spectrum.

DPR and ISG might just be the tip of the iceberg of a whole set of different architectures that just try to do that, serving data from the edge in a way that front end developers love, and there could be a whole spectrum of those architectures out there.

I don't know, we'll see if that really is the case.

Brian: Yeah. The one thing I want to touch on real quick is this whole idea of, I don't have to worry about all that invalidating cache and stuff like that because that's a solved problem.

It's abstracted under those functions that I'm able to dig into.

But the one thing I think of, and that's actually been taking over the web world, at least with the front end web, is dark mode.

Now understanding how to implement dark mode into a site is trivial, but back when I was doing this, what?

Five years ago, we had to do so much more work to figure out, light or dark mode or themes, CSS themes?

And actually use server side rendering to keep that cache for, hey Ishan's going into my site, he's using this browser on this machine.

He wants a different theme, he's already chosen it, so next time he comes back, he's got that theme ready to go.

Now that's stuff that I don't even have to think about.

I set my themes and my CSS, someone sets it and it's good, it's in local storage in the browser, and it's not actually affecting the way I write code for my site at all.

At least moving forward.
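A minimal sketch of that pattern: the theme choice lives in the browser's localStorage and a CSS attribute, so the cached HTML stays identical for every visitor. The data-theme attribute and storage key are arbitrary names.

```javascript
// Theme preference kept entirely client side, so the page itself stays fully cacheable.
const saved = localStorage.getItem('theme');               // arbitrary storage key
if (saved) {
  document.documentElement.setAttribute('data-theme', saved);
}

function toggleTheme() {
  const current = document.documentElement.getAttribute('data-theme') || 'light';
  const next = current === 'light' ? 'dark' : 'light';
  document.documentElement.setAttribute('data-theme', next); // CSS targets [data-theme="dark"]
  localStorage.setItem('theme', next);                       // persists across visits
}
```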

Ishan: Yeah, that's a great example. A, that you don't have to think about it.

And B, it's actually a good example of something that comes up a lot and has a lot of dollar value attached to it, which is personalization.

As we try to get Jamstack not just on larger and larger sites, but sites that need to have personalization, how are we going to make that work?

Client side only is possible in some cases where you're doing rough segmentation, but if you're down to a cohort of one, it's going to get a lot harder.

The solutions that appear to be on the horizon usually involve some type of computation at the edge to figure out which version or bucket the user's in and then stitching together the right content.

There's a deep philosophical question to ask, if we're saying Jamstack is only static, why is it okay for the edge to execute code on every request, but it's not for the origin?

And so it's like, well, if we're going to solve these problems and we're really looking to what Jamstack would be five years from now, I think it needs to basically embrace some form of dynamic, moving forward.
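One hedged sketch of that edge bucketing, written against the worker-style fetch handler that several edge runtimes expose; the cookie name and variant URL layout are made up.

```javascript
// Sketch of edge-side bucketing: pick a variant per request, then serve the
// matching pre-built (cacheable) version of the page from a worker-style edge runtime.
addEventListener('fetch', (event) => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const cookie = request.headers.get('Cookie') || '';
  // Reuse an existing bucket cookie, otherwise assign 50/50.
  const bucket = cookie.includes('bucket=b')
    ? 'b'
    : cookie.includes('bucket=a')
      ? 'a'
      : Math.random() < 0.5 ? 'a' : 'b';

  const url = new URL(request.url);
  url.pathname = `/variants/${bucket}${url.pathname}`;   // made-up variant layout

  const response = await fetch(new Request(url.toString(), request));
  const out = new Response(response.body, response);
  out.headers.append('Set-Cookie', `bucket=${bucket}; Path=/; Max-Age=86400`);
  return out;
}
```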

Brian: Yeah, cool. I mean, this is a super deep thought out conversation.

I'm really loving where we're taking this. I just want to ask you one final question.

Anything else folks should be looking out for and considering when thinking about the Jamstack and where it's going?

Ishan: Yeah. Well, one thing that I also do is we're very focused on performance.

It's really important to a lot of customers on our platform, and at the previous company I did as well.

I had a newsletter called the Core Web Vitals Newsletter, and everyone knows that faster performance typically means better conversion rate on your website.

But what's really new is Google has now been very clear about how they're going to reward performance with search engine rank.

Now performance doesn't just mean better conversion rate from your existing traffic, it actually means growth.

It means new revenue that you're actually probably stealing from a competitor, because SEO can be a zero-sum game.

But the challenge I think for Jamstack, and that Jamstack developers should just be aware of, is that it's not a free pass on performance.

Sometimes we talk about it as if it is, but often when you're running without a server, JavaScript on the front end ends up replacing what your server used to do.

And while not all Jamstack sites are JavaScript heavy, a lot of them are, especially those written in React, Angular, Vue, and typically they're single page apps.

And unfortunately some parts of Google's Core Web Vitals can be particularly harsh on JavaScript heavy sites.

If you're a Jamstack developer building one of those sites, you should be armed on how to recognize and handle those issues.
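One way to get armed is to measure those metrics in the field. A small sketch using Google's web-vitals package (assuming its onCLS/onLCP/onINP callback API and a hypothetical /analytics collection endpoint):

```javascript
// Field measurement sketch using the web-vitals package (callback API assumed from v3+).
import { onCLS, onLCP, onINP } from 'web-vitals';

function report(metric) {
  // /analytics is a hypothetical collection endpoint.
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,   // e.g. 'CLS', 'LCP', 'INP'
    value: metric.value,
    id: metric.id,
  }));
}

onCLS(report);
onLCP(report);
onINP(report);
```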

Thankfully we've been hounding the Google Chrome team, as have many others, saying, "You're not tracking single page apps properly for certain cases."

And they just, for example, rolled out a change to Cumulative Layout Shift, which is one of their metrics, and it's going to be a little more forgiving to single page apps.

And they've said they're going to do more improvements in the future so I'd highly recommend folks pay attention to that.

Brian: Awesome. Well Ishan, thanks so much for all the tips and tricks, and storytelling as well.

I would love to transition us into JAM picks, so these things are what we're jamming on; could be music, food, or tech related as well, so everything's fair game.

And since you're prepared, how about you start us off with some picks?

Ishan: Yeah. I'll give you two technical and one non-technical.

As you can tell, I'm really focused on performance, and there are some really smart things being done by some of the emerging frameworks other than React.

Sometimes a lot of folks just think it's React or Vue, but Svelte and SolidJS, which just reached 1.0, are doing some very interesting things in terms of shipping less JavaScript to the browser, and baking in more fine-grained reactivity so that you don't have to have the virtual DOM taking up space and performance.

We've helped a lot of folks on the platform deal with their client side scripts, doing a lot of things in terms of improving their TTI and stuff like that.

And hopefully we'll get some of this in React, I'm hopeful for React server components, but I think people should just broaden their mind and take a look at what SolidJS is doing and what Svelte is doing.

We have a component in our platform that's a little debugging tool in your webpage that lets you understand its cacheability.

We wrote that in Svelte in order to be as lightweight as possible, and the team really loved it.

I recommend that. The second thing on performance is I'm really excited about the lessons from AMP getting into other frameworks.

I gave a talk called What You Didn't Know About AMP at CascadiaJS a few years ago.

AMP has been really underappreciated and I think maligned, there's a lot of technical great things in there, but developers were up in arms because they were forced to use AMP.

But we're finally starting to see some of those lessons that the AMP team was really ahead of the curve on, with signed exchanges and workerized JavaScript.

I think that's the second thing I'd say that's really exciting.

Then in non-technical, I've been spending a lot of time on reading up on health and nutrition.

I think we're at the calm before the storm in a potential watershed moment if the Apple Watch really does come out with some type of metabolic sensor or a glucose sensor.

I think what we've seen the tech industry go through with a backlash, I think we might actually see the food industry go through.

I've been reading a lot there and really interested in that.

And highly recommend, I think for engineers, especially if you're working from home, don't neglect your health.

Some of the people I recommend following are Kevin Hall and Ted Naiman and Peter Attia.

I think they'll appeal to... Well, they appeal to me and I think some of this audience, because they're doctors but they also have technical engineering background, so I think their way of explaining things would appeal to an engineering mind.

Brian: Cool. Yeah, thanks for sharing.

I haven't actually looked into the whole Apple Watch and all the rumors that folks are circling around.

But yeah, I have an Apple Watch myself and that would be very intriguing if it could tell me, "Hey, you should probably run around the block, so that way that pizza you had for lunch is not going to follow you around for the next 30 years of your life."

Ishan: Oh yeah. I mean, it's really fascinating because in metabolism it's one of the few things that has such a short, immediate duration, you can just see immediately, immediately being one to two hours.

So when people get that feedback mechanism, it can be a powerful form of behavior change.

But maybe it's two years away, I don't know. But it'll be interesting to watch.

Brian: Yeah, yeah, for sure. I've got a couple picks and I actually had some exercise-related picks too as well.

Ishan: Oh good.

Brian: Two exercise things I've been doing. Kettlebell is something I picked up a couple months ago, back when there was a shortage on actual free weights, just weights, any sort of metal in general, with the blockage of the canal and everything like that in the pandemic on top. I couldn't find any free weights, and I needed some free weights to keep up with my fitness; that's my exercise equipment of choice.

So picked up kettlebells because you can get one kettlebell and do a lot of pretty good workouts.

Changed a lot of my workout into swinging motions and some active movement, so almost like high intensity training as well, and it's changed a lot.

The other thing I couldn't do a lot of, because of rain and weather, is cardio outside, because I don't have any inside cardio equipment either.

The reason for all this is mainly because gyms were closed.

So because there's no gym that I could attend for the longest time in the Bay Area, I had to re-figure things out.

Then my neighborhood was just very highly populated with people walking with masks on, so I had to learn a different running route and different times of day for that reason.

The kettlebell has been a godsend, as well as a weighted jump rope.

It's something I've always done, like speed ropes and stuff like that, doing sports in high school and college, and fell off that after years of being married and having kids.

But now getting back into it and I enjoy the weighted jump rope because, same reason.

You only have so much time in the day, so I can get a really good 12 minute workout with the weighted jump rope and really feel the cardio uptick, and not have to feel like I have to be on a treadmill for an hour to get that same sort of... Not even close to the impact.

Ishan: I'm really curious because I actually was thinking of taking up kettlebell and I'm curious what you'd recommend as the best starting weights to start using, and how to get started?

Because I was really concerned about all the movement, actually just tearing something and it felt like it was very easy to mess up and hurt yourself.

Brian: Yeah. You would think that.

With the free weights, I always went for a heavier 50 pounds, 45 pounds or whatnot, just to make sure... I can't really buy a bunch of them, so I'll get the weight that would probably have the most impact.

But with the kettlebells, because there's so much movement, you don't actually need as much weight. As long as you're not just doing simple curls or anything like that, but you're doing kettlebell swings and then the actual jumping jack stuff, you can actually get away with 15, 17 pounds.

Mine is 30 kilos, which I think that's about 17 pounds.

Sorry, my math is not going to happen right now, I might have to do that backwards.

But yeah, that's the one I have and I've not deviated from that weight at all.

I feel like rather than going for strength, I'm going for more of cardio and just sustainability exercises.

And changing my way of life instead of trying to get cut and swoll, it's more of like just make sure the heart's always beating, is my goal.

Ishan: Is there a lesson plan or a YouTuber or somebody you'd recommend for actually learning the movements?

Brian: I haven't really got attached to a specific YouTuber.

I know Athlean-X... Yeah Athlean-X is the YouTube channel.

They do more than just kettlebells, they do a lot of body strength and body weight movements and stuff like that, in addition to traditional weight lifting, but they also talk about nutrition and stuff like that.

He's been my go-to. If I need a kettlebell workout specifically for the day, like one arm kettlebell workouts, I'll just Google it and find a playlist on YouTube.

That's been helpful.

I'm just usually on just in time, on demand, I'll just create a workout for the week, knowing that, hey I've got no meetings this week, I should probably integrate workouts throughout the days, throughout the weeks.

And that's what I've been doing, since.

Ishan: Oh, very cool.

Brian: Cool. And then the other thing I wanted to mention, I kind of alluded to it, I'll skip my actual TV pick and just go straight to urql, which is a front end... A JavaScript client to consume GraphQL.

If you know Apollo or Relay, urql's another one of those.

But they've also made a lot of different decisions to keep more best practices of GraphQL intact, as far as the GraphQL spec.

They also do some pretty cool things with caching, so if you...

As I mentioned, when I was trying to figure out getStaticProps or getServerSideProps, they actually have some really cool built in functionality, so I don't have to make those decisions at the client level either.

Which has always been, I guess, my problem with GraphQL for the past years I've been using it, is that caching has always been hard in GraphQL, even with the clients, and I feel like with urql, their approach has just made sense to me.

Maybe it's a benefit of them coming so late in this evolution of GraphQL, that they could make those decisions and they could figure it out, but I'm loving it, just like McDonald's.
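For context, a minimal urql setup looks roughly like this, assuming urql's createClient with the cache and fetch exchanges; the endpoint and query are made up.

```javascript
// Minimal urql client sketch; endpoint and query are hypothetical.
import { createClient, cacheExchange, fetchExchange } from 'urql';

const client = createClient({
  url: 'https://api.example.com/graphql',
  exchanges: [cacheExchange, fetchExchange], // document cache in front of the network
});

const PRODUCTS_QUERY = `
  query {
    products { id name }
  }
`;

client.query(PRODUCTS_QUERY, {}).toPromise().then((result) => {
  console.log(result.data);
});
```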

I'm going to be definitely using it for a bit in a couple different projects.

And as I mentioned, this project I just built, it's actually, it's on GitHub it's Zora Next, it's under my GitHub handle, bdougie/zora-next.

I basically was building an NFT dashboard, so blockchain.

I was messing around with blockchain, trying to figure it out and learn it myself for some videos that I'm making on YouTube, and I wanted to make it in Next.js and I wanted to use a GraphQL API, and it checked all the boxes.

Able to throw it together in a couple of weekends.

Ishan: Well, we've actually heard that about GraphQL a lot, and we've actually had to make some changes to the platform in order to allow caching of GraphQL as well.

Because it's using POST, and so much of internet infrastructure just assumed that if it's GET, that's the only thing cacheable, and if it's POST, it's not.

We've had to do a lot to help customers do that on our platform, so I'll definitely check that out.
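One concrete angle on the GET-versus-POST point: the GraphQL-over-HTTP convention also allows sending a query as a GET request with URL parameters, which ordinary HTTP caches can key on, provided the server supports it. A rough sketch with a made-up endpoint:

```javascript
// Sketch: issuing a GraphQL query over GET so CDNs and HTTP caches can key on the URL.
// Only works if the GraphQL server accepts queries via GET; endpoint is hypothetical.
const endpoint = 'https://api.example.com/graphql';
const query = '{ products { id name } }';

const url = `${endpoint}?query=${encodeURIComponent(query)}`;

fetch(url, { method: 'GET' })
  .then((res) => res.json())
  .then((data) => console.log(data));
```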

Brian: Cool. Folks hopefully check out Ishan's work and articles that you're throwing out there as well as Layer0, and keep spreading the jam.