On September 11th, 2018, Heavybit member Serverless Inc. hosted the Serverless user meetup at our San Francisco Clubhouse. The event was co-hosted by Cloudflare to highlight their latest serverless offering, Cloudflare Workers. In these videos, learn directly from the engineers and product managers who are building the future of serverless at Cloudflare. Cloudflare Developer Relations team member Connor Peshek wrote the post below with additional information relevant to each of the talks. RSVP here to attend the next Serverless Meetup in person.
To understand the future of serverless, it's important to grasp the history of web applications. In the early '90s, hosting web applications required buying and managing huge mainframes, regardless of your actual needs. Since then, the industry has worked tirelessly to remove the barriers to hosting: first by building commodity servers, then by sharing hardware via virtual machines, and more recently, containers. Now, with serverless, we can easily run a container to handle a specific event.
Still, the difference between running your own cloud server and running a serverless application hasn’t been a meaningful one for users, because serverless is just using someone else’s container instead of someone else’s server. Your serverless functions are still running in massive data centers at a handful of pre-selected locations around the globe. Cloudflare recognizes that there are still plenty of opportunities to increase efficiency, and we’ve been working on new products to address them.
What if all of the APIs you needed to call from your serverless code were hosted in the same location as your serverless function? If that were the case, you could add as many layers to your code as you wanted without worrying about increased latency. And what if people integrating with your code didn't need their API calls to travel all the way to San Francisco when they're in Australia? If a group of people in Australia are collaborating in a Google Doc, their keystrokes shouldn't have to travel across the world to update the document. And we don't want to use a peer-to-peer network, because users' devices can be unreliable.
Kenton Varda wants the world to focus on running code everywhere instead of inside one massive central data center. Cloudflare Workers currently runs in 154 data centers, and your serverless code can run in all of them at once. Workers ensures that users connect to the closest data center, meaning that people using your application in Australia connect to a data center in Australia instead of one in San Francisco.
The next major hurdle for serverless is reliable and persistent distributed storage. Today, even when a serverless application runs in the data center closest to you, it must still connect back to the origin server to pull information from the database. Kenton's goal is to change that. Cloudflare is working on storage at the edge using serverless technology. The broad concept is for most of a user's database information to be stored in the data center nearest them, while remote databases regularly communicate with each other, keeping all of a user's data as close to them as possible without being stored on their own machine.
Traditionally, code has run in one of two places: the client's browser or an origin server. Running code in the browser can be problematic for many reasons, not least of which is that you don't always control the browser. Code on your origin, on the other hand, can be difficult to update without breaking things. That's why Cloudflare introduced Workers, a serverless solution that runs at the CDN layer, between your origin server and the client's browser.
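To make that concrete, here is a minimal sketch of a Worker built on the Service Worker API that Workers exposes. The `rewritePath` rule and the `/legacy/` path are hypothetical examples, not from the talks; the routing logic is kept in a plain function, and the event registration is guarded so the logic can also be exercised outside the Workers runtime.

```javascript
// Minimal sketch of a Cloudflare Worker (Service Worker API).
// The rewrite rule below is a hypothetical example for illustration.

// Pure routing logic, kept separate so it is easy to test anywhere.
function rewritePath(pathname) {
  // Example rule: send /legacy/* requests to a static maintenance page.
  return pathname.startsWith('/legacy/') ? '/maintenance.html' : pathname;
}

// Registration only applies inside a runtime that exposes a global
// addEventListener and dispatches 'fetch' events (like Workers).
if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    const url = new URL(event.request.url);
    url.pathname = rewritePath(url.pathname);
    // Forward the (possibly rewritten) request to the origin.
    event.respondWith(fetch(url.toString(), event.request));
  });
}
```

Because the Worker sits between the browser and the origin, a rule like this ships instantly to the edge without touching origin code.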
Some of the solutions we've seen implemented are simple tasks, such as setting Cache-Control or CSP headers, or handling CORS. The product is also frequently used for country-based actions, such as redirecting users to country-specific site variations or automatically translating pages into a visitor's language. Workers can be leveraged to protect and cache private content too, including premium videos or images. And since Workers see both incoming requests and outbound responses, you can use a Worker to monitor your responses for sensitive information and alert your teams accordingly.
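The header use cases above boil down to a small transform over each outbound response. The sketch below assumes the standard Workers pattern of cloning a response before editing its headers; the specific header values are illustrative defaults, not recommendations.

```javascript
// Sketch: attach cache, CSP, and CORS headers to every outbound response.
// Header values here are illustrative, not recommended settings.

function securityHeaders() {
  return {
    'Cache-Control': 'public, max-age=3600',
    'Content-Security-Policy': "default-src 'self'",
    'Access-Control-Allow-Origin': 'https://example.com', // CORS
  };
}

if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    event.respondWith((async () => {
      const upstream = await fetch(event.request);
      // Responses from fetch are immutable; clone before editing headers.
      const response = new Response(upstream.body, upstream);
      for (const [name, value] of Object.entries(securityHeaders())) {
        response.headers.set(name, value);
      }
      return response;
    })());
  });
}
```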
A/B testing is another popular use for Workers; check out the helpful code samples in our documentation.
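One common shape for such a test (sketched here with hypothetical names; see the official docs for the canonical samples) is a sticky cookie: returning visitors keep their assigned variant, while new visitors are bucketed at random at the edge.

```javascript
// Sketch of sticky A/B bucketing with a cookie. The cookie name
// `ab_test` and the 50/50 split are assumptions for illustration.

function chooseVariant(cookieHeader) {
  // Returning visitor: honor the variant stored in the cookie.
  const match = /(?:^|;\s*)ab_test=(control|experiment)\b/.exec(cookieHeader || '');
  if (match) return match[1];
  // New visitor: assign a bucket at random (50/50).
  return Math.random() < 0.5 ? 'control' : 'experiment';
}

if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    event.respondWith((async () => {
      const variant = chooseVariant(event.request.headers.get('Cookie'));
      // Fetch the variant-specific page, then pin the choice in a cookie.
      const url = new URL(event.request.url);
      url.pathname = `/${variant}${url.pathname}`;
      const upstream = await fetch(url.toString(), event.request);
      const response = new Response(upstream.body, upstream);
      response.headers.append('Set-Cookie', `ab_test=${variant}; Path=/`);
      return response;
    })());
  });
}
```

Keeping the bucketing in a pure function makes the split easy to unit-test without spinning up the Workers runtime.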
Workers can also house entire applications. One recent example is npm, which removed its origin server entirely and now serves its whole site from a Worker, with assets stored in a storage bucket.
We’ve been hearing a lot about serverless lately, but why would someone want to use it? Stephen Pinkerton explains that one of the primary reasons companies are moving towards serverless is because they want things to be “simple rather than complicated”. Focusing on Cloudflare Workers as the serverless platform, Pinkerton highlights the benefits an organization might enjoy by using a serverless platform for their applications.
In most circles, shipping an MVP is a pretty big ordeal. You need to write your application, spend money on servers to host it, manage all of the ops and deployment processes, and even then you still need to maintain it. Developers are increasingly moving towards serverless in an effort to eliminate these difficulties. Using serverless allows engineers to focus on their primary business and what adds value to their customers. There’s a diminished threat of being paged in the middle of the night to fix your infrastructure when it’s being handled by your serverless provider.
As with any infrastructure change, there are trade-offs to consider when going serverless. You should weigh limitations like cold-start and deployment times, cost, and how you plan to run your new serverless code alongside your more traditionally deployed projects. Many serverless providers still have long cold-start times, take a long time to propagate deployed changes to all endpoints, and can be expensive to run. Cloudflare Workers takes a different approach to eliminate some of these trade-offs: instead of using Node.js as our runtime, for example, we use the Service Workers API, which can make cold-start times as much as 400% faster than other serverless providers. Whether you're experimenting with serverless for fun or at work, be sure to consider Workers for your next project.
Do you have valuable insights and experiences in the developer tool space that you’d like to share with our community? We want to hear from you – join our contributor program today.