Stormpath is basically a user management and authentication service for developers. We provide, as a SaaS service, a REST+JSON API that automates security for your apps, the apps that you're building. We provide user security workflows such as password reset and account email verification, and we implement security best practices. How many people heard of the LinkedIn breaches and the Twitter attacks and things like that due to password problems? The Sony PlayStation network issues with password reset?
We solve those problems so you as developers don't have to worry about that issue anymore. Of course, we offer developer tools, open source SDKs that wrap our REST+JSON API. That's us. Give us a shot. Just like you might use Stripe to offload payment processing, you can use Stormpath to offload user management security.
Tonight we are going to cover a decent amount of stuff. I'm going to go through it fairly quickly, and although it might look like a lot, there's a lot here that pertains to problems we solve every day as REST developers trying to implement best practices. You'll see a lot of things here that you've probably seen before. A lot of the things we're going to talk about tonight are best practices or de facto conventions for solving these particular problem sets.
APIs. We're here to talk about APIs first and foremost so that means our focus is on building apps, supporting developers.
Developers are your customers. Whether they're internal or external, we want to make things easy for them. The easier it gets for them the more likely they are to adopt your service and be happy and spread word of mouth and all that good stuff.
This presentation will focus mostly on pragmatism over ideology. How many people here have heard of the term RESTafarian? You guys heard about that? A RESTafarian is an ideologue who won't compromise. We try to implement the best practices and the proper techniques but every now and then we might diverge from some of them if they're overly complex. We want to focus on pragmatism so you can get your APIs out quickly and please your customers.
Then of course we're going to focus on adoption and scale. By scale in this presentation, I really mean distributed systems and web scale. In order to get large adoption and to propagate or proliferate your service, things need to operate at web scale. And this is really, how many heterogeneous systems can connect to yours and how many can you connect to? That's really what I mean by web scale. Not so much scale based on performance.
Why are we talking about REST? There are 6 major points in Dr. Roy Fielding's thesis on REST architectures. He lists them as scalability, generality, independence, latency with respect to caching, security and encapsulation. We just mentioned scale: when he talks about scalability he's talking about ubiquity. Internet, web-level scale, not necessarily machine-level performance scale.
Generality. REST is a general architecture. It piggybacks on the common HTTP methods that are already part of HTTP specification. It's very general in its approach in that it can be leveraged by pretty much anybody who understands HTTP.
There is independence. Your implementation of a REST API can be completely independent from the things that consume it. You might have a Ruby on Rails back end, a Python/Django back end, or a Java back end. It doesn't matter.
You can be independent and you can interact with third party services and vice versa because of the independent nature of HTTP.
Latency is also a very important point that he brings up in his thesis specifically with regards to caching. It's something that people kind of step over a lot when they talk about REST APIs. Latency and caching are very important in the REST paradigm and we'll cover a little bit about what that means later on.
Security is important. There are secure headers or things that support security in HTTP headers and you can leverage those in REST based authentication schemes. There's security already built into the HTTP protocol and you can leverage those in REST.
Encapsulation. You can encapsulate details or complexity or migration paths inside of your application without exposing those details to your end users.
REST affords these 6 general properties as a general architecture.
This is a REST+JSON talk. Why JSON? I don't need to spend a lot of time on this. It's pretty much ubiquitous nowadays. I think there was a study published recently where I think somewhere north of 55% of all application development is being done in JSON now, and of course that covers web-based applications which I'm sure fill up a large percentage of that number.
JSON's very important. It's becoming usable and used by almost all developers nowadays. It's very simple: the JSON grammar is incredibly simple. It's very easy for people to read, it's very easy for computers to parse, and it's scalable in that it can handle many different use cases. It's flexible.
You can represent very many different data types and data formats, data models with JSON's relatively primitive structure.
HATEOAS. How many people in here have heard this term before? A decent number of people. HATEOAS is an acronym for, if you want to call it that, "Hypermedia As The Engine of Application State." That's really a big mouthful that means everything that a computer needs to know about your REST service is discoverable from a single initial URL endpoint.
If you can imagine yourself as a web browser: if you go to the homepage of a particularly complex site, via the links you can reach every other page that's publicly accessible. You can get sitemaps, you can traverse links that reference resources that reference other resources, and you're able to traverse that entire document graph, if you will, because you've got that initial resource.
That's really what HATEOAS means in the REST scope: if I interact with a REST API, I should only need to interact with one and only one initial resource, and from that I can figure everything out on my own. This is not a mandate for REST architecture. It's not something that Fielding covers too much in his thesis. A lot of people who think about these things view it as a further restriction on REST architecture. It is RESTful, and the purists like to think this is pure REST, but it's really a further restriction. You can be RESTful without strictly adhering to this principle. Keep that in mind if you see this term come up.
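The "discover everything from one URL" idea can be sketched as a graph walk. This is a minimal Python sketch over an in-memory dictionary standing in for a server; the resource names and link structure here are hypothetical, not the actual Stormpath API.

```python
# Sketch: discovering an entire API from a single root resource by
# following href links. RESOURCES simulates what a server would return.
RESOURCES = {
    "/": {"href": "/", "applications": {"href": "/applications"}},
    "/applications": {"href": "/applications",
                      "items": [{"href": "/applications/a1b2c3"}]},
    "/applications/a1b2c3": {"href": "/applications/a1b2c3",
                             "name": "My App"},
}

def discover(start="/"):
    """Walk every href reachable from the start resource."""
    seen, stack = set(), [start]
    while stack:
        href = stack.pop()
        if href in seen:
            continue
        seen.add(href)
        resource = RESOURCES[href]
        # Collect hrefs from nested link objects and lists of links.
        for value in resource.values():
            if isinstance(value, dict) and "href" in value:
                stack.append(value["href"])
            elif isinstance(value, list):
                stack.extend(v["href"] for v in value
                             if isinstance(v, dict) and "href" in v)
    return seen
```

The client only needed to know the root URL; everything else was discovered by traversal, which is the HATEOAS property in miniature.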
REST is easy, right? It's just JSON, it's just HTTP. HTTP has been around for forever so has JSON. It must be really simple to create a REST+JSON API, right? Not really. REST is really, really hard to get right for providers. If you're providing, if you're implementing a REST API and you're exposing this to potential customers it's very difficult to get this right. And the reason why is, REST is not a specification. There's no RFC or W3C standard for REST architectures.
It's a design approach, it's an architectural philosophy, and because it's a philosophy or design that doesn't have a concrete specification, it can be interpreted by people differently.
If you go on Stack Overflow and you do general searches on the internet, you'll find various opinions about how to do links and how to reference other resources and all sorts of other techniques, and it's because this is not a standard. Everything we're going to talk about tonight is really an accumulation of probably a year and a half of research on many different REST APIs, both JSON and XML based, many third party providers, public REST SaaS services. A lot of it is what we at Stormpath consider best practices and de facto conventions, and we hope it's useful for you as well.
The punch line here is that REST can be easy if you follow some of these guidelines and conventions.
In the course of this presentation we're going to cover a bunch of examples so you can see some of these principles in action. I'll use, just for the sake of simplicity for myself, Stormpath's domain. Stormpath, as a security and identity service, models things like applications and directories, which hold accounts and groups. Of course, almost everyone here is familiar with accounts and groups.
There are also associations between a lot of these things: One-to-Many and Many-to-Many associations. How do you effectively represent those in a REST or JSON based representation? Also developer workflows, so password reset and account email verification, behavior essentially. How do I represent that in a REST+JSON world? I'll reference these continually throughout the presentation. I think it's a concept most people are pretty comfortable with, and most apps need this stuff.
Let's talk about fundamentals. The first concept I want to introduce is that of a resource. REST as a paradigm says nothing about JSON and nothing about XML. It's basically a mechanism for data exchange and hypermedia traversal, which means you can represent data in any format you want. Because of that flexibility, we need a consistent vocabulary for how we represent resources.
Every time I talk about a resource and as a best practice, you should always think of them as nouns, not verbs.
An account is a resource, a directory is a resource, a group is a resource, a user is a resource. These are things that can be interacted with. They have properties. But you're not going to hear me talk about resources as verbs. I'm not going to have a resource that says "get application" or "delete a group." I'll cover some use cases as to why in just a second.
Also resources are very coarse grained, they're not fine grained. If you have an account, you want to represent the account with all of its properties. You have no idea that your consumers of your API are going to use or how they're going to use that information. You want to be very coarse grained. Give them as much information as makes sense in a single resource representation and then they will use that information in ways that you probably never could have envisioned.
They might find value in the data or the attributes or how it's referenced to other links, to other resources that you could have never really thought of. By putting the power in their hands to decide how they want to use their service, they can do a lot more than you probably could have supported out of the box. Keep your resources coarse grained. Represent data in larger chunks not finer, smaller kind of resource representations.
REST is an architectural style for use case scalability and when you keep things coarse grained again, you help your user solve problems without having to interact with you for very specific use cases.
As an example of why this is important: what if I am creating a REST API and I make my URLs reflect operations on resources? Maybe I would have a getAccount URL and a createDirectory or updateGroup URL, or maybe I want to verify an account's email address. This looks fairly innocuous. It doesn't look very difficult or challenging, or even that dangerous. The problem is: what if you need to introduce a whole lot more behavior to support customers? Maybe someone wants to be able to search accounts, or find a group based on the directory, or verify account email addresses by a certain token.
As you can see, this quickly explodes. This is not maintainable, it's not scalable, it's not supportable as a development team. The more behaviors you add to the URLs themselves, the more difficult it is for you to know what's happening in your system. If you've dealt with RPC back in the SOAP days, this smells like bad RPC. Don't do this. This is not a good idea. There's definitely a better way.
We really want to keep it simple. We want to simplify resources and then how they're referenced via URLs.
How do we do that? There are really only two types of resources you have to worry about to implement a REST API: what we call Collection Resources and Instance Resources. A Collection Resource is itself a first-class resource, with its own properties, but it also contains other instances, other RESTful objects. It is a container. The other resource type is an Instance Resource. It represents one thing in particular, not multiple things. It's just a single instance of a type.
Here are some examples:
We can have a Collection Resource, in this case it's labeled as /applications. The takeaway here is that this is a plural name. It's not /application, it's /applications. It's plural, it's self-documenting, it's readable. The name being plural is important for intuitiveness and readability. When you interact with this, you know that this thing exists for the purpose of one or more things. It's recommended that you keep your collection resources or endpoints in the plural. It's self-documenting and it's helpful.
Instance Resources, however, almost always tend to be a child or referenced or owned by a collection. In this case I'm showing an instance resource. It's owned by the applications collection, or it's a child of that collection, and it has its own unique identifier. Applications/a1b2c3 is a specific instance of an application. That's it. There's really collections and instances. That represents resources, it represents state.
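The collection/instance split above can be captured with two tiny helpers. This is a minimal sketch; the base URL and resource names are hypothetical, and it assumes you pass the collection name already in the plural.

```python
# Sketch: building collection and instance URLs.
BASE = "https://api.example.com"  # hypothetical base URL

def collection_url(name):
    """Collection resources get plural, self-documenting names,
    e.g. /applications, not /application."""
    return f"{BASE}/{name}"

def instance_url(name, resource_id):
    """An instance resource is a child of its collection, keyed by
    its unique identifier, e.g. /applications/a1b2c3."""
    return f"{collection_url(name)}/{resource_id}"
```

With just these two shapes, the whole URL space stays flat and predictable: a plural container, and a container plus an identifier.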
What about behavior? How do I not blow up my URLs with all these different verbs inside them? You handle that through the HTTP methods. HTTP as a specification has 5 primary methods that are well-defined and that indicate how you interact with resources or documents on a web server. There are, of course, GET, PUT, POST, DELETE, and then there's HEAD for metadata.
How many people here think there is a 1:1 correlation between Create, Read, Update and Delete and GET, PUT, POST, DELETE? Okay, we got a couple people. It can be modeled as such.
It's very important to understand that it's not strictly a 1:1 correlation. GET, PUT, POST and DELETE can be used for Create, Read, Update, Delete, but it is not a 1:1 correlation. This confuses a lot of people.
If you ever go to Stack Overflow you'll see tons of questions like "Should I use POST or PUT in this particular scenario? I don't know what to do, can you please help me?" Hopefully I'm going to make that very clear here. We'll cover the difficult ones in a second, but some of these really do map one-to-one.
GET is actually a Read: "I really do want to read something back from the server." DELETE does mean: "Please remove it from the server." And HEAD is a metadata operation: "I want you to give me back some information. You don't have to return the actual body, the actual resource." These really do exactly what you think. But PUT and POST are not really obvious. They can be both used for Create and Update, and I'll explain exactly the scenarios where it makes sense to do these things.
PUT, most people associate with a general update command: "I've already got something and I just want to modify it or update it." But, it definitely can be used for Create if you allow your clients to specify the unique identifiers or the identifier of that particular resource. In this case, maybe I allow the client to specify the ID of an application. Maybe it's a global unique name or something like that. You can use PUT in this scenario because you're giving the server everything that it needs and you're telling it where you want it stored. This is a legal operation. PUT can be used for Create in this scenario.
It can also be used for Update. In this example, I'm putting some data to an existing location that already has an ID associated with it. But the key here is that name and description have to be, in this particular example, the only two properties for the application resource. There shouldn't be any other properties associated with this request, and the reason why is:
PUT must be a full replacement operation. It can't have only some of the properties with the rest excluded. Every single property for that resource must be specified in the PUT. The reason why, and this is really important, is that this is mandated by the HTTP spec. This is not REST, this is HTTP. PUT operations must be idempotent.
An idempotent operation is any operation that when executed one or multiple times results in the exact same state on the server.
As an example, if I have a name and a description and I send in only the name via PUT on request A, and then a little bit later I send in just the description attribute, the server state between those two operations could be different. Maybe the name was updated by a different request, or the description was changed by some background process. You can't guarantee identical state on the server when you only submit partial updates or partial representations. That breaks a fundamental property of the HTTP spec. Just keep in mind: PUTs must be full replacements.
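The full-replacement rule can be simulated in a few lines of Python. This is a sketch, not a real server: the `store` dict stands in for server state, and the required name/description properties follow the hypothetical application resource from the example.

```python
# Sketch: why PUT must be a full replacement to stay idempotent.
REQUIRED = {"name", "description"}  # the resource's complete property set
store = {}                          # stands in for server-side state

def put(path, representation):
    """PUT semantics: full replacement only. Partial representations
    are rejected, because replaying a partial PUT cannot guarantee
    identical server state."""
    if set(representation) != REQUIRED:
        raise ValueError("PUT requires the complete resource representation")
    store[path] = dict(representation)  # same input always yields same state

put("/applications/a1b2c3", {"name": "Best App", "description": "Awesome!"})
snapshot = dict(store["/applications/a1b2c3"])
put("/applications/a1b2c3", {"name": "Best App", "description": "Awesome!"})
assert store["/applications/a1b2c3"] == snapshot  # replay: state unchanged
```

Sending the same full representation twice leaves the server in exactly the same state, which is the idempotency guarantee the spec demands.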
Another important thing: idempotency as a property, if you do some research around Google and whatnot, only pertains to the server state. It does not have anything to do with the client state. As an example, let's say I use a PUT to create that application, and it's created, everything works fine, there are no constraint violations. But let's say I have a unique constraint on application names in my system. It means every app you create has to have a unique name across all of the applications.
I can request this the first time or I can send in a request the first time with this payload, and then as a client, I can send that exact same payload in, and the server can respond to me with an error condition stating, "Hey, this is a unique resource. There's already something else with that name. You're not allowed to save this to the server again." While that is different behavior for the client between the two requests, the server state is the same between the two requests. Therefore, idempotency is maintained. It's important to know that idempotency only matters per the HTTP spec. For the server state not for the client. We talked about PUT.
This is POST being used as a Create. This is a lot more common; you'll see this referenced a lot more. POST can be used to create things, typically instance resources, by interacting with a parent resource. In this case, I'm interacting with the applications collection resource and I am indicating, "I want you to create a new child of that collection, and here is the name of the application to be created."
In this case 201 is returned and it's important when a request is successful and a new resource is created, you always return a 201, not a 200. 200 OK just means that the request was successful. 201 means: "Not only was the request successful, but I created something for you and by the way, here's the location header that tells you the canonical location of where that's going to exist."
You always want to return 201s when you create things and you always want to set a location header to tell the client the canonical location of where that resource now resides, and then it can reference that in future requests to go interact with that resource.
You might notice here that this is only a partial representation: the application has a name and a description property, but I'm only specifying one in the POST. The reason that's okay is that POST is the only one of the HTTP methods that does not need to be idempotent. The HTTP spec is actually pretty vague on this, but POST basically means a server processing directive and not a whole lot more. The server can do whatever it wants with a POST operation. That's why it's legal to send partial representations via POST: it does not have to be idempotent per the HTTP spec.
This is Create. Same for an Update. In this case, as you see, I'm only giving a partial representation when I want to update the data. The idea here is that only the name property of the particular application is being updated. I'm specifying its full resource URI, /applications/ID, and I'm only specifying the name. In this case 200 is OK, pun intended.
200 is an acceptable response code because you already know the identifier, you already know its location of where it resides. You don't need any additional data back to the client, 200 OK is perfectly acceptable. Again, that's because POST is not idempotent. You're allowed to do partial updates and partial data creates.
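Both POST behaviors, create-on-collection with 201 plus a Location header, and partial-update-on-instance with 200, can be sketched together. This is a toy in-memory model, not a real HTTP server; the paths, the id scheme, and the merge-on-update semantics are hypothetical.

```python
# Sketch: POST as "create" on a collection and "partial update" on
# an existing instance. 'store' stands in for server state.
import uuid

store = {}

def post(path, body):
    """Returns (status, headers, resource)."""
    if path in store:                  # existing instance: partial update
        store[path].update(body)       # merge only the supplied properties
        return 200, {}, store[path]
    new_id = uuid.uuid4().hex[:6]      # server assigns the identifier
    location = f"{path}/{new_id}"      # canonical location of the child
    store[location] = dict(body)
    return 201, {"Location": location}, store[location]

# Create: 201 plus the Location header the client uses from now on.
status, headers, _ = post("/applications", {"name": "Best App Ever"})
assert status == 201 and headers["Location"].startswith("/applications/")

# Partial update: 200 OK, only the supplied property changes.
status, _, resource = post(headers["Location"], {"description": "Awesome!"})
assert status == 200 and resource["name"] == "Best App Ever"
```

Notice how the client never invents an identifier here; that is exactly the case where POST, not PUT, is the right verb.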
Media types. This is something people come to us with a lot of questions about. Should I use media types, should I not use media types? What are the best practices? The reason I bring this up in the fundamentals section is that this is really, really core to how REST as an architecture should function. Fielding specifically states two really important concepts that exist in RESTful architectures. One is the notion of distributed hypermedia: basically the ability of resources or documents to link to or reference other resources and documents.
The other half of that is media types, you should have theoretically a rich media type library, if you will, or repertoire that allows you as a developer to describe your resources in detail using the media type specification.
A media type really is just a format specification, how data is structured and paired with a set of parsing rules so that machines know how to interact with that data structure and interpret it accordingly.
The client can tell the server what media types it wants back from the server via the Accept header. When the client sets the Accept header, it's telling the server, "These are the things that I accept, these are the media types that I understand, so please give me back the data in one of those formats so I can parse the information."
The response from the server to the client contains a content type header that says, "I get it, I know you're asking for these various different types of media types, but here's the one I'm actually sending back to you." Most of the time those two content types should be identical. Sometimes they don't align and then you can do content type negotiation in various ways. We'll talk about that in a little bit.
Here are some examples of media types:
Application/json is something everyone is familiar with. Application/foo+json says this is not just a JSON document, but it's a JSON document according to the Foo specification or Foo semantics. Then you can do other things. You can create key value pairs with semicolon attribute delimiters. You can provide more information to the media type so that clients can read that and figure out how to do things in a more specific manner. We'll touch on media types a little bit pretty soon.
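A media type with parameters is just structured text, so parsing one is simple. Here's a minimal, hand-rolled sketch (a real server would use its framework's content-negotiation machinery); the `application/foo+json` type is the hypothetical one from the example above.

```python
# Sketch: splitting a media type like "application/foo+json;v=1"
# into the type itself and its key=value parameters.
def parse_media_type(value):
    parts = [p.strip() for p in value.split(";")]
    media_type, params = parts[0], {}
    for p in parts[1:]:
        if "=" in p:
            key, _, val = p.partition("=")
            params[key.strip()] = val.strip()
    return media_type, params
```

For example, `parse_media_type("application/foo+json; v=1")` yields the type `application/foo+json` plus the parameter map `{"v": "1"}`, which is exactly the information a server needs to pick a rendering.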
These are really important. The RESTafarians say this is mandatory, you should have all of your data exchanged and identified via media type. We might veer from that or diverge from that kind of recommendation. If you can do this, it's very important that you do it. This is a really good practice. Highly, highly recommended, but there are some other ways to solve this problem and we'll talk about that in a bit.
We talked about fundamentals, let's talk about designing. We're going to create a REST API from the ground up. What are the things we have to worry about? I almost hesitate to introduce this first because URLs, I think it's just naturally something that developers think about. What are my URLs going to look like? Are they going to be intuitive to understand and can I put resources there?
URLs actually don't really mean a whole lot in the REST paradigm. What does matter is links and HREFs. As long as you can have links from documents or from resources to resources, that's really what matters.
But nevertheless, people like to think about these things because I guess it's easy to grab onto and it's one of the first things you think about.
Talking about URLs, if you were a potential customer, what would you want to interact with? What would you want to see if you're trying to hack out a script really quick to see if you can maybe put cURL together to test API interaction or integration? That's pretty obvious, right? Everybody likes the top one. It's easy to remember. You don't have to type so much. It's less error prone.
The bottom one, while totally valid, and there's nothing technically wrong with it, is not pleasant to look at; it's confusing. Do I need /service, or was it /api, or /rest? You don't know what those intermediate paths mean, and it's just distraction, it's noise.
You want to provide a comfortable and pleasant experience to your developers or to your customers. Give them an easy to use host name that they could just interact with from day one.
People like the top one.
Sometimes I get questions about, "If I interact with that API via a REST client like cURL or command-line client, versus when I visit this in a browser, what am I supposed to see?" Because I had mentioned with content negotiation, the browser can tell the server, "Hey, I support XML and JSON and HTML and plain text. I support all these things."
The server can come back and say, "Oh, you support all those things. Well, I'm going to give you HTML because that's listed as the first in that list of Accept media types and so clearly, that's the one you prefer. Here's an HTML page that renders that information," which is totally valid. That's the whole reason why content header and negotiation exists and why it's functional and you can leverage that.
There's no right or wrong here. You can do whatever makes sense for your customers. We, however, at Stormpath chose, and I think a lot of other companies choose the approach where no matter what the Accept header says, as long as at least it says application/json somewhere in there, we always return JSON back to the caller. The reason why we do that is that, again, we want to help developers.
If they're experiencing problems: maybe their REST command client doesn't work that well or they've configured cURL incorrectly, or who knows maybe the command line tool or IDE tool that they're using is just blowing up. It's really convenient to just open up a web browser, go to a URI, hit enter and see the data that comes back.
For debugging purposes and tracing and developer level information, we find it much easier to represent JSON to the browser and to command-line clients equally. We just feel it's more convenient and it's been helpful for people.
Again, this is just a recommendation. There's no hard-set fast rule.
Versioning. How do I handle versions? This particular topic brings up a lot of debate in the REST community. The RESTafarians say you should only encode or represent versioning information in the media type. As you see here, the media type example shows that it's a JSON document, according to the Foo specification, that represents an application resource, and the version is at version 1. The data that comes back is applicable to version 1 parsing rules or data format rules. The top one states that you could just put a version identifier at the end of the URL and everything from that point down comes back at version 1.
The bottom one, the media type approach, is the purist's approach. It is the cleaner approach and it is the one recommended by most people who advocate REST architectures. Roy Fielding recommends this as well, if it's possible for you. The reason why is that your URLs don't have to change over time as you upgrade your service. They can remain the same. You don't have to redirect clients to new URLs if they're interacting with an older version. The server can look at that v=1 parameter and know, "Oh, they're requesting a version 1 representation of my resource. Here, I'll render version 1 for them even though we're on version 4."
This is a much cleaner approach. The URLs stay the same. There's less change in the most important part of the request, which is the URI. It's backwards compatible. There are a lot of really great benefits to this. The big deal with the media type approach, though, is that not many people understand this stuff. You've got to understand, especially if you're a SaaS provider, you're going to have customers of all levels of education, kids right out of high school, PhDs.
You're going to have so many people from different walks of life. The media type recommendation or approach is a very technical answer to something that could be simplified.
A lot of people don't like using this stuff because it imposes a significant technical challenge on the end user who's interacting with your API. It's correct, but if you're really going after adoption and you want to make your customer's lives easy, the URL at the top is the one that most public service providers kind of adopt. Stormpath does this, I think Yahoo and Google, and a whole lot of people, Twilio. I think most of the public SaaS providers do this approach because it's just so easy, and you know that any of your customers can go to that URL and everything from that point on is a particular version.
That being said, if they want to start using version 2, they have to change the URL and their source code. They have to go through maybe some configuration updates to be able to interact with these versioned resources properly. It imposes more change and more risk, but a lot of people don't think it's that big of a deal. The thing I would like to point out about the top approach, though, is that you want to keep your version information or version identifiers very succinct. Preferably atomic integers.
You might have internally an API that's at version 1.5.42 or 1.5 or some date. Your customers don't care about that. As long as whatever is being served from that endpoint is backwards compatible from day one when you produced your first resources from that version identifier, it can stay on that identifier. You don't want to introduce change. Every time you release a new version, you don't want to require your customers to go reconfigure the REST clients or their scripts or whatever they've built. It's really annoying, really disruptive.
Think long and hard about when it's appropriate to increment your version numbers and keep them atomic integers for simplicity.
It's also a good gut check: if you want to increase your version numbers a lot, maybe that means there's a little too much churn in your API, and you might want to rethink as an engineering team whether that's a good idea. Just be careful with version numbers. At Stormpath we do the top approach; we've found it to be easier for our customers. The good news here is that if a media type is specified with a version number, it can, if you want it to, override whatever is specified in the URL. That may or may not be a good approach; I'll explain a little later.
It's a good idea to support this if you can get around to it. It is the clean and appropriate way, and the customers who are savvy enough to use it will get all of those additional benefits.
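The "URL sets the version, media type overrides it" policy described above can be sketched in a few lines. This is one possible policy, not a standard; the /v1/ path shape and the v= media type parameter follow the examples in the talk.

```python
# Sketch: resolving the API version from a /vN/ URL prefix, letting a
# v=N parameter in the Accept header override it if present.
import re

def resolve_version(path, accept=None, default=1):
    version = default
    match = re.match(r"/v(\d+)/", path)        # e.g. /v1/applications
    if match:
        version = int(match.group(1))
    if accept:
        param = re.search(r"[;\s]v=(\d+)", accept)  # e.g. ...+json;v=2
        if param:
            version = int(param.group(1))       # media type wins
    return version
```

So a savvy client sending `Accept: application/foo+json;v=2` gets version 2 even against a /v1/ URL, while everyone else gets the version their URL names.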
We talked about resource format already, we talked about content types. In this presentation we're going to be covering application/json a lot. And, as I just said, start out with application/json and then allow or specify these additional media type specs or media type representations when you have time to do it. It will be helpful.
Dates, times and timestamps. We've gotten some questions: "How do I represent time? There are like 20 different ways to do this." I've seen various posts: "Just the time, or just the date, or do I introduce a time zone?" I find it amusing because there's a standard that's been around forever. It's called ISO 8601, and it codifies how dates and times should be represented in text format. Any time you have to represent time to an end user, stick with the standard. There are a lot of parsing tools and libraries that already know this stuff and can parse it without any work on the developer's end.
In this case you'll see it's a date, then the letter T, then a time, and finally a time zone indicator. Here we're using Z for Zulu, which is the same thing as UTC, Coordinated Universal Time. We recommend that any time you represent timestamps to your customers, your developers, you always represent them as UTC. We've seen from an implementation side, at least, that there are a lot of problems with how different data stores represent timestamps. MySQL, for example, handles this terribly. It's really hard to represent accurate timestamps with time zone metadata in MySQL. It has never really handled that scenario well. Other databases don't do so well with it either.
But on top of that, your customers are likely to be in many different time zones: East Coast and West Coast United States, Asia-Pacific, Europe. Those are all different time zones as well. If you represent everything in your system as UTC, ISO 8601, you can let the customer format or render that data in their time zone as they see fit, and it doesn't cause any confusion or problems. There are no time collisions between different representations of time. If you stick with Zulu or UTC time you're going to save yourself a tremendous amount of problems and trouble.
We highly, highly recommend that you use that format and you use UTC from the operating system all the way up through your API. It's going to save a lot of people a lot of heartache, both yourself and your customers. I strongly recommend you stick with 8601.
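As a concrete sketch of that recommendation, here is how Python's standard library can emit the ISO 8601 UTC format with the Z suffix (the helper name is ours, not something from the talk):

```python
from datetime import datetime, timezone

def utc_timestamp(dt):
    """Render a zone-aware datetime as an ISO 8601 UTC timestamp with the Z suffix."""
    utc = dt.astimezone(timezone.utc)          # normalize whatever zone came in
    # isoformat() emits '+00:00'; 'Z' (Zulu) is the conventional UTC shorthand.
    return utc.isoformat(timespec="seconds").replace("+00:00", "Z")

utc_timestamp(datetime(2016, 7, 5, 12, 30, tzinfo=timezone.utc))
# → '2016-07-05T12:30:00Z'
```

Keeping the conversion in one helper like this makes it easy to enforce the "UTC everywhere" rule at the API boundary.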
HREFs. How do we represent links and references to applications? Again, as I mentioned previously, HREFs and Hypermedia, specifically in Fielding's thesis, are paramount. He listed it as probably the most important part of HTTP and the REST architecture: documents and resources should be able to reference other documents and resources. Every resource, therefore, should have a publicly accessible unique URL.
At Stormpath we encode this information as a first-level property of a resource. We happen to call it HREF. I've heard other people call it link or URL or whatever. We call it HREF just to be a little bit semantically close to the XLink specification for XML. One of the interesting things about this particular representation, where it's a first-class property alongside all the other properties, is that you're going to see some really cool tricks for entity or resource expansion: being able to expand the link into its full object resource. It makes the JSON representation pretty elegant and very easy to consume for customers and people that are parsing the JSON. There are some really nice attributes or qualities to this particular approach that I'll cover in a little bit.
Response bodies. Do you have to return them as part of the response? GET is obvious. If I'm issuing a GET request, I absolutely want that data back. With POST it's not so obvious. If I'm going to send some information to the server, maybe it's the entire resource representation, do I want to get that same exact data back as a response? There's no mandate for it. We actually do recommend that you do that, and the reason why is that when a client gets the data back, there may be some data that's been updated that they don't have control over, and they want to see the freshest value.
For example, maybe there's an update timestamp. They have no control over that. Maybe it's only updated in the server, but when you return the data back to the customer they can see those fields that might have been updated automatically that they don't have control over and also, they know that they're looking at the freshest most recent version of that data. And it's important because if they institute client side caching, they can invalidate or update their cache locally to use the freshest, most recent version of that information from the server.
We do actually recommend that you send the data back. It will greatly simplify client code as well.
If they know there is always data to come back they can always relay it to whatever their storage mechanism is or their parsing mechanism. There's no special cases. It's easier to code for that particular use case. That being said, you only want to do this when it's feasible. If I'm uploading a 10 gigabyte video file, I clearly do not want that back. That's going to eat up bandwidth. Use some prudence with this.
You can also, if you want, allow customers to control this themselves. You can specify a control query parameter like _body=false. Maybe you have a data-limited API, maybe there's a quota and they can only exchange so much data per month on your public API. This control parameter allows them to reduce the amount of data that's exchanged over the network. You can give them the option to reduce the amount of data that comes and goes. Convenient if you want to use it.
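A minimal sketch of how a server might honor that control parameter (the `_body` name follows the talk; the function shape is our own illustration):

```python
def respond(resource, query):
    """Return (status, body), honoring a hypothetical _body=false control parameter."""
    if query.get("_body", "true").lower() == "false":
        return 204, None   # No Content: the operation succeeded, body suppressed
    return 200, resource   # default: echo the freshest resource state back

respond({"href": "https://api.example.com/accounts/a1"}, {"_body": "false"})
# → (204, None)
```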
Content negotiation. I'll just kind of jump over this quickly, we've covered this. The client specifies the Accept header. The important part here is that the preference for what you want to get back should be comma delimited in the order that you wish to receive information or that the client wishes to receive something. In this example, application/json comes before text/plain and that tells a server, "Hey, I prefer JSON over plain text. Please give me back JSON if you can, if not I'll deal with plain text but I prefer JSON so please send it." That's how clients tell the servers what they prefer.
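The ordering rule described above can be sketched as a small parser. Per HTTP content negotiation, an explicit q-value outranks position, and position breaks ties among equal q-values (this helper is illustrative, not a full RFC-compliant implementation):

```python
def accept_preferences(accept_header):
    """Parse an Accept header into media types ordered by q-value, then position."""
    prefs = []
    for pos, part in enumerate(accept_header.split(",")):
        pieces = part.strip().split(";")
        media_type = pieces[0].strip()
        q = 1.0                                   # default quality per HTTP
        for param in pieces[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((media_type, q, pos))
    # Higher q wins; earlier position breaks ties, matching the client's stated order.
    prefs.sort(key=lambda t: (-t[1], t[2]))
    return [media_type for media_type, _, _ in prefs]

accept_preferences("application/json, text/plain;q=0.5")
# → ['application/json', 'text/plain']
```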
Resource extensions. In addition to specifying the media types in the Accept header, you can specify an extension that goes on the resource URI. Instead of just the resource URI, I can have the URI plus .json or .csv. This technique, if you want to support it, conventionally overrides the Accept header if it happens to be set. The reason why is that the URI is the most important part of the request; it's closest to the identity of the request. If I specify .json on the URL, that means that no matter what my browser is sending in the Accept header, I want to get back JSON instead of plain text or whatever.
You can support many different formats, .csv, .xml if you want to, but this is a nice technique especially for developers that might want to test different formats and it's not so easy to set headers if you're just using a regular browser.
Resource References and Linking. Again, we said this is extremely important. The difficulty here is that this is really kind of tricky in JSON. JSON as a format has no official specification for linking or referencing other data. XML has it. There's an actual W3C standard called XLink, a specification that says this is how we reference Hypermedia across documents or across resources. Since JSON doesn't have that, how are we going to do this? There's a lot of contention right now on public discussion forums about the best way to do it. Should I have X, Y and Z attributes? How do I model this in JSON?
We went over and over and over again in our heads as to the best approach, and the KISS Principle won out for us. We really wanted to keep it super simple and reduce complexity. As you can see in this example, I'm showing an Account instance and it's got a couple of properties. It's got its HREF, which is the canonical location of where it lives, a given name, and a surname. The tricky thing here is that there's this Directory attribute, and the Directory is itself a complex object, and the Directory owns the Account. The Account here is referencing its parent, but it's a complex object, so how do I reference that in JSON?
We found that the simplest way to do this was just to use a complex JSON object that itself has an HREF property. One of the interesting things about this approach is that HTML anchors, much like the XLink specification, have this REL, R-E-L attribute: a href=blah, rel=blah. And REL means relation. In that REL attribute, you're giving metadata about what kind of relationship this is. What does this mean for that particular link?
The interesting approach here is that in JSON, the attribute name itself is kind of the REL in this case, the relation. It's a directory attribute but it's also implied that that's the relationship that's being represented here. The account is now referencing its directory and it does it via a simple HREF. Again, we're going to talk about resource expansion in a little bit and you'll see how this is really, really elegant. It requires no change of code for your clients to represent more complex versions of this resource.
If it's just a single property, you know: "I have an attribute, it's a complex object, it's got one HREF property, it's clearly a link." That's how you can reference other resources.
This is particularly interesting. It's pretty useful and it's an often requested feature in REST APIs. It's also known as Entity Expansion or Link Expansion. Basically, the concept works like this. What if in a single request, I wanted to get back the Account object and its parent Directory in a single payload? I want all of that information in one request. I don't want to have to get the account object and then look at the HREF and then go execute another HTTP request to get to the Directory. Please just give it back to me in one lump sum.
The way we've solved this, and the way we've seen other people solve it as well, is that we support this notion of an expand query parameter. We've seen people do this also as headers that the server interprets to expand the data. We think it's kind of nice to have it as a query parameter because it's a directive about that specific resource, but it's not the core identifier for that particular resource. You can convey additional metadata as query parameters and it's all represented fairly easily. You don't have to go through the effort of setting headers, but this is a personal preference. There's nothing wrong with using headers. We just feel that this is a little easier to understand and self-documenting for people that want to interact with URLs.
In this case we've specified the expand=directory parameter, and that's a directive to the server that says, "Okay, I'm going to give back the Account resource, but in the directory, in addition to the HREF, I'm also going to populate all the other attributes for that particular directory." If you look at the structure of this JSON, it's identical to the previous example without the expanded directory. It's totally identical. Nothing has changed in the structure of the document except that we've added a few more properties to that complex object.
The beauty of this particular approach is that your client, when it's consuming information, doesn't have to know, "Oh, I have to look at this particular data format for links, and then I have to look at a totally different format for expanded references." This technique basically allows the client to say, "Oh, I've got one HREF and nothing else. Clearly a link. It's got more than an HREF, it's been materialized. I don't have to worry about anything else. I can process them both identically."
Really, really nice technique, somewhat self-documenting and the fact that it's just a simple HREF property to represent a location means that you're providing a low complexity resource representation to your customers. It's easy for people to understand this approach.
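That uniform treatment can be sketched in a few lines. The rule is exactly the one described above: a complex object whose only property is href is a link; anything richer has been materialized (the names and URLs here are illustrative):

```python
def is_link(value):
    """A complex object whose only property is 'href' is an unexpanded link."""
    return isinstance(value, dict) and set(value) == {"href"}

def directory_name(account, fetch):
    """Resolve the directory's name whether or not the server expanded it."""
    directory = account["directory"]
    if is_link(directory):
        # Collapsed form: one extra GET materializes the full resource.
        directory = fetch(directory["href"])
    return directory["name"]
```

The client code is identical for both representations; the only difference is whether a second request is needed.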
What about partial representations? Instead of getting an object and all of its properties, what if I only want certain properties of a particular object? Again, maybe you have a quota limit on your API for data exchange. In this case, we've seen people offer a fields query parameter where they can specify the attributes inside the resource, and then your resource only has to return that particular data. An interesting point that I should note here is that whether you use the expand=directory query parameter or the fields mechanism, you're interacting with the same resource both times. The same resource within your server.
The interesting thing to note here is that query parameters are taken into account by caching and proxy servers when determining whether a particular resource is unique. A caching server will see this URI and then the other one, and even though in your server it's the same resource, the caching server treats them as two separate resources. Query parameters are taken into account for unique identities when caching. Just be aware of that. It may or may not have an impact on you depending on whether or how you cache data. I just wanted to bring that up.
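Server-side, the fields projection described above is a small filter. This sketch assumes a fields parameter with comma-separated attribute names, and deliberately keeps href so the partial body still identifies the resource (that last choice is ours, not from the talk):

```python
def apply_fields(resource, query):
    """Honor a 'fields' query parameter by projecting the resource onto it."""
    requested = query.get("fields")
    if not requested:
        return resource  # no projection requested: return the full body
    names = {f.strip() for f in requested.split(",")}
    # Keep 'href' regardless, so the partial body still identifies the resource.
    return {k: v for k, v in resource.items() if k in names or k == "href"}
```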
Pagination. What if I have a directory with half a million accounts in it? I'm not going to give you half a million accounts when you request that directory. That would be absurd and maybe it would slow things down and cause frustration. Clearly you'd probably want to chunk up a response like that into many different pages. We've seen a bunch of ways that different people support this. There seems to be some consensus on the approaches to support pagination. Most of the time, the query parameters are actually named Offset and Limit. Offset is the starting position from 0. In the overall collection, what index do I start my page at? Limit is once I started that index, how many results are you going to give me back?
This particular query says, "I want to start on the third page of data and I want 25 results back for that third page." You can use these two parameters, and I've also seen people use a cursor query parameter that might retain a serialized data cursor that they can use with their underlying database technology. Maybe it's an Oracle cursor into the actual table, or there's some kind of indexing mechanism in a distributed cache or NoSQL data store. Using a cursor query parameter is pretty common as well; that's another approach you can use.
One of the other things that's interesting about collections and collection resources is that in the spirit of HATEOAS, instead of the client having to worry about how to calculate pages ("Do I have to worry about offset and limit?"), you can provide additional attributes in your response representation that are themselves links to other pages. If they want to get the first page or the previous page or the last page of the overall collection, they can just interact with that particular HREF. They don't have to worry about "what's my starting index" and "how many results can I get back in a page?"
You can provide this as a really nice convenience mechanism to simplify user interfaces that might consume this directly or people that just don't want to mess with the offset and limit query parameters. This is a nice technique if you want to simplify usage for collections.
In this case you'll see, and this is a good illustration of why we call collections collection resources, that they're first-class citizens. You see they have their own properties: an offset property, a limit property, a first property. These are first-class resource properties, and then you have an items property, which is an array of the things the collection contains for that particular page.
It's a resource, it's got its own properties but it does happen to have an array of things that it encapsulates.
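The first/previous/next/last links described above can be computed from offset, limit and the total count. This is a sketch under the talk's conventions; the base URL is a placeholder:

```python
def page_links(base_url, offset, limit, total):
    """Build HATEOAS-style first/previous/next/last links for a collection page."""
    def href(off):
        return "%s?offset=%d&limit=%d" % (base_url, off, limit)
    # Offset of the last full or partial page in the collection.
    last_offset = ((total - 1) // limit) * limit if total else 0
    links = {"first": {"href": href(0)}, "last": {"href": href(last_offset)}}
    if offset > 0:
        links["previous"] = {"href": href(max(0, offset - limit))}
    if offset + limit < total:
        links["next"] = {"href": href(offset + limit)}
    return links
```

Clients then follow the links blindly instead of doing page arithmetic themselves, which is exactly the HATEOAS convenience the talk describes.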
Many-to-Many. How do I handle many-to-many relationships? In this example, accounts can be in many groups, or can be assigned to many groups, and of course a group can have many accounts. Via the URL that you see here, it's a unidirectional association. I have to go to an account first in order to get its groups, but perhaps I want to go to a group first in order to get its accounts. How do I represent the data relationship if it can be referenced in two different ways? You can do what I just said: have two different URLs that represent the data in those two ways.
We have chosen in our API, and we've found this to be pretty convenient (I don't know if I'd call it a best practice, but I think it is; it's been very useful for us), to represent all of our relationships as resources in and of themselves. In this case it's called a GroupMembership. You can call it whatever you want, a mapping or whatever. We call this a GroupMembership. That means that you can interact with it as a first-class citizen resource as well. Just like all the other resources, it's got its own HREF, and it's got links to the account and the group that are linked together via this particular membership resource.
One of the nice things about that is you can interact with the membership and you can find out what the account or the group is. Maybe if you support querying, you can query for the account or the group and get back the other. One of the nice things about this approach too, is if you choose to offer metadata about the relationship at a future date, you already have the resource in place to do that.
Customers can already interact with it. It can already be part of your API. If I just added a property like: created by or made by John Smith at time, December 14th. You can provide additional metadata that might be useful to your clients if they want to know more information about these relationships.
By representing it as a resource in and of itself, you get a lot of extra power and capability.
Another nice intrinsic benefit of this is that if I issue a DELETE request to this URI, I immediately delete the association, but I don't delete the respective group or account. I can add an account to a group; I can remove an account from a group just by deleting this particular resource. It's a nice side benefit. You can attack this from many different angles depending on what makes sense to the customer. You'll see: how do I reference memberships? In this particular example I have my normal groups URI, and if I go to that particular collection resource, I'm going to get back a collection of groups, not the memberships, just the actual groups. But I can also additionally represent the memberships as another collection property.
Forgive me, this is not a relative path, these are ellipses. I'm truncating the beginning of this for brevity.
But the idea here is that you can reference, via a fully qualified URL, a collection if you only care about the groups, or a collection resource for the memberships themselves if you care about the actual metadata associated with those associations. You can let the client choose which of the two he or she prefers for a particular use case.
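The membership-as-a-resource idea, including the DELETE-the-association behavior, can be sketched like this. The URLs and the in-memory store are illustrative, not Stormpath's actual endpoints:

```python
# A membership is a first-class resource: its own href plus links to both sides.
membership = {
    "href": "https://api.example.com/groupMemberships/m1",
    "account": {"href": "https://api.example.com/accounts/a1"},
    "group": {"href": "https://api.example.com/groups/g1"},
}

def delete_membership(store, membership_href):
    """DELETE on a membership removes only the association, never its endpoints."""
    store["memberships"] = [
        m for m in store["memberships"] if m["href"] != membership_href
    ]
```

Because the account and group survive the delete, removing an account from a group is just a DELETE on one small resource.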
Errors. How do I represent errors in a RESTful way? This is a really important point that a lot of people overlook. I've got to give props to Twilio. I don't know if there are any Twilio people in the house, but they were one of the first to do this the right way, I think. We've adopted much of their particular approach and expanded on it a little bit. This is how they represent, and we represent, RESTful errors.
In this case you see, maybe I'm posting to the directories endpoint and I'm trying to create a directory that has the same exact name of something that's already there. I will get back a 409 conflict from the server in this particular case. The error code could be any number of error codes, but in this particular example 409 indicates conflict with a resource state on the server and so I'm not allowed to create something that already exists. But I'm also in the response body returning a lot of information. I'm giving them the status which is the same exact HTTP status code as what's set in the response header.
The reason why I do that is that you'll only have one place to go to. You have one place and only one place to look in the response payload for all the information that's important to you. You can of course, or your clients can of course use the HTTP header and maybe that's more beneficial, but by repeating it here you give them another option that could be convenient for them.
There's also a code property which is a code unique and specific to your specific REST service. The reason why is I think there's something like, I want to say like 23 or 24 HTTP error codes for 4xx and 5xx. If you think about it, that's not a lot. That's not a lot of error codes to help you fully describe what went wrong to your customers.
By having a company or service specific error code for your API, you're able to convey particular conditions and errors in a much more specific manner.
We also like to provide a code attribute in the payload, kind of like the Oracle and MySQL error numbers you've seen, that indicates very, very specific conditions. That's very helpful for people debugging and trying to figure out why something failed. In this case we've set a property called property, and its value, name, indicates "this request failed specifically because the name property of what you gave me is not valid."
Of course, they can choose to ignore this and come up with something else, but at least you're giving them a head start, and you're explaining things in a non-technical way that's really helpful for people. The developer message, then, is the thing that you show to your customers in a very technical way that helps them understand why the request failed, and maybe it gives them some resolution advice: how do you fix this? You want to be helpful to them so that they know what's going on.
If all of this still isn't good enough, the beauty of the moreInfo property is that they can go to a webpage that contains a whole bunch of information. Maybe you have some sample apps that you reference from there that show them how to solve this particular problem. You can fully expand on what happened and why it's happening. You want to be as helpful as possible to your customers so that they'll be happy to use you and they'll continue to use you over time.
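Pulling the pieces above together, a Twilio-style error body might be built like this. The numeric code, messages and documentation URL are illustrative placeholders, not real Stormpath or Twilio values:

```python
def conflict_error(property_name):
    """Build a Twilio-style error body; the code and URL here are illustrative."""
    return {
        "status": 409,                 # mirrors the HTTP status line exactly
        "code": 40924,                 # service-specific, finer-grained than HTTP
        "property": property_name,     # the offending field, when known
        "message": "A directory with that name already exists.",
        "developerMessage": "Directory names must be unique per tenant. "
                            "Choose another name and retry the request.",
        "moreInfo": "https://docs.example.com/errors/40924",
    }
```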
Security. This is obviously near and dear to my heart at Stormpath. Unfortunately, I don't have a whole lot of time to go through this stuff, but just in brief summary: avoid sessions if possible. From a performance perspective, the less state you have to maintain, the better things scale. Stateless architectures and asynchronous event-driven architectures are all good for scalability from a performance perspective.
We recommend and advocate authenticating every request if possible.
At Stormpath, for example, every single request that comes into our service is authenticated, and we do that for security reasons. We don't want any interception or manipulation of our data or our customers' data. One thing that's important: authorize based on resource content, not the URL. You don't want to say, "Is the user allowed to visit the URL that matches this?" That's a fairly brittle authorization policy approach, because if you change your URLs for any reason, your policies are now broken.
A better approach is to authorize based on the content of the resource. Maybe you say, "The current user, the current requester, is allowed to see the account with ID 1234." It's a much more concrete, specific authorization check that is orthogonal to, separate from, not tightly coupled with the actual URL format. What you really care about is what resource is being interacted with and what they're doing with it, not so much the URL.
If you can use this finer grained authorization approach, it will scale better. It will be easier to maintain for your application over time.
How do you authenticate end users to your API? We really recommend using an existing protocol unless you are a security team. There's OAuth 1.0a, there's OAuth 2. If you have to use HTTP Basic authentication, only use it over SSL, even in intranet environments. It can be particularly risky. For example, certain infrastructure environments might log requests as they come in, the request data, the request headers, and if that logged message contains a Base64-encoded username and password, that's a potential attack vector that people can use to find out passwords. If you have to use Basic, that's okay. Make sure it's only over SSL in all scenarios, and don't log the data. Don't log the Authorization header in that environment.
OAuth 1.0a is really secure. It's based on a digest-based authentication algorithm that does not require the password to be sent over the wire. That's a really secure technique and we recommend it. OAuth 2 is a little weird. It's been going through a lot of problems. Recently the lead of the spec committee stepped down because they couldn't get any consensus. They were trying to solve all problems for all people instead of being narrowly focused on a particular solution. It's kind of gone its own wayward way.
From an actual security perspective, though, if you look at the crypto behind it, OAuth 1.0a is actually still more secure than OAuth 2 because of the crypto algorithms they use. OAuth 2 is basically SSL with what they call a shared secret or a shared token or provider token. As a security company, we recommend OAuth 1.0a if your data is security sensitive.
If your data is not super sensitive, maybe you're a social network like Google+ or Facebook or whatever, and a lot of that data isn't crazy secure, it's not like bank information, then OAuth 2 could probably be a decent fit. But if you have security-sensitive workflows, use OAuth 1.0a.
And only use a custom authentication scheme if you really, really, really know what you're doing. There are a lot of different vectors for attacking authentication protocols that people are unaware of.
At Stormpath we like to think we know this stuff pretty well. It's our bread and butter. We do have a custom authentication scheme that's very secure. It's very similar to OAuth 1.0a except it does a lot more in the areas of cryptography, specifically around the request body. The request body is also part of the cryptographic computation.
The other thing about custom authentication schemes is that if you create one and you're sure it's rock solid, nobody is going to know how to use it. Even then, you would only want to use it when you provide client support for it. For example, when Stormpath uses our custom scheme, all of our publicly provided open source SDKs implement that scheme and we'll use it when people communicate with the REST API. The point is that we're distributing those SDKs. We went through the effort to implement that algorithm across the various languages so our customers didn't have to worry about that complexity.
If you have to do this custom scheme, make sure you implement it for your customers otherwise they're never going to adopt it. The next best step if you don't do that is probably Oauth 1.0a.
Finally, I don't have a whole lot of time to talk about this, but use API keys instead of usernames and passwords. There are a lot of reasons dealing with entropy and variability and other things, but a really simple takeaway is that if you don't use API keys, and your customers use usernames and passwords to authenticate to your REST API, the second they change their password, any of their software integrating with your service is going to fail. Don't use username/password pairs for authenticating REST APIs. You don't want their software to break when they change their password in your system. It's a bad idea.
To round out that part, we'll talk really briefly about 401s versus 403s. 401 Unauthorized is kind of unfortunate because of the name, but it really means unauthenticated. It means the server says, "In order for you to access this resource, you need to provide valid credentials. I need you to authenticate with me. I need you to prove your identity." That's really what a 401 Unauthorized means.
403 Forbidden is the actual "unauthorized" code. The server is saying, "I get it. I know who you are; you've proven to me that you are who you say you are. But sorry, you're still not allowed to access this resource. It's protected and you don't have sufficient access rights to see it." Just make sure you understand 401 versus 403 and when it's appropriate to use each.
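The 401-versus-403 decision described above boils down to two independent checks, which can be sketched as:

```python
def access_status(authenticated, authorized):
    """401 asks for credentials; 403 says the proven identity still lacks rights."""
    if not authenticated:
        return 401  # Unauthorized (really: unauthenticated), please prove identity
    if not authorized:
        return 403  # Forbidden: identity accepted, access rights insufficient
    return 200
```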
HTTP Authentication Schemes. We probably won't cover this in detail now, but it's a simple handshake. The server issues a challenge via the WWW-Authenticate header, and the client, if it understands it, can submit what the server wants via the Authorization header. Pretty simple stuff. If you have questions about this, feel free to ask at the end of the presentation.
We talked about API keys. I'm kind of going to skip through that in the interest of time. We have some pretty good blog articles on the Stormpath website about why API keys are good, why they're beneficial. There's probably a good solid 6 reasons as to why this approach is better from a security perspective over other techniques. Feel free to check that out or ask me after the presentation. I'll be happy to fill in the gaps.
IDs. We've talked about this just barely. IDs should be opaque to your end users. They should be embedded within the URL, and the URL effectively is the new ID for modern REST JSON web services.
The client shouldn't ever have to know how to parse a URL or how to append an ID, because if you give them a fully qualified URL every time as your identifier, all they have to do is execute an HTTP request to get access to that data.
They don't need to know where it goes as a token within the string or any of that stuff. They should be opaque. Clearly they have to exist for your benefit as an engineering team on the server side. But clients should never have to know about them, they shouldn't have to parse them and they shouldn't have to worry about how to specify them.
Of course, they should be globally unique especially if you are offering a larger scale service. Sequence generators typically have to execute within a single process or you can segment it up and try to deal with segmenting IDs across a cluster. It's a lot easier if you use things like a UUID algorithm so you don't have to worry about contention on machines if you get a sufficient load.
We actually have something that we use at Stormpath, it's not URL64, we call it URL62. It's similar to Base64 encoding, but it uses 62 characters that are URL safe. You could take an ID that we generate and drop it in a URL and it will be safe at all times. It basically does the same thing as Base64, it just uses 62 characters in the codec. That's nice because you want to keep URL friendly characters.
The less customers have to worry about how to decode URLs just to access a resource, the better. If it's just a drop in kind of ID or URL it makes their lives easier.
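The general base-62 idea can be sketched in a few lines. To be clear, this is an illustration of the concept, not Stormpath's actual URL62 codec; the alphabet ordering here is our own assumption:

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def base62_encode(n):
    """Encode a non-negative integer using 62 URL-safe characters."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def base62_decode(s):
    """Invert base62_encode back to the original integer."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Because every character in the alphabet is URL-safe, the encoded ID can be dropped into a URL with no percent-encoding.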
Method overrides are kind of interesting. I'll cover this briefly. Some HTTP clients don't support DELETE and HEAD and PUT. Most of them, especially the older ones, support only POST and GET. If you want, you can provide an _method=DELETE or _method=PUT control query parameter, and then the server can look at that and say, "Okay, I know you're sending in a POST, but I'm really going to treat this as a DELETE from this point down in the request chain."
I should note that this has to be over POST. It's not okay to do this over GET. If you only support GET or POST it's not okay to do this over GET because this might not be an idempotent operation depending on what you're trying to achieve.
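The override rule, including the POST-only restriction just mentioned, can be sketched as:

```python
def effective_method(request_method, query):
    """Honor an _method override, but only over POST; GET must stay idempotent."""
    override = query.get("_method", "").upper()
    if request_method == "POST" and override in ("PUT", "DELETE"):
        return override
    return request_method
```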
Caching and concurrency control. There are ways to support things like caching and optimistic locking. For example, when you create a resource, you can return an ETag header to the client that says, "This is the unique identifier for this particular version of the resource." The client can then later send a request to the server saying, "Give me the data only if what you currently have doesn't match the thing I'm specifying." And the server can say, "No, the version I have is the same one that you have, and therefore you don't need anything. I'm going to give you back a 304 Not Modified."
The response payload is much, much simpler: it's reduced to a minimal number of bytes, as opposed to what a full HTTP response would have been. This is important for the performance and efficiency of caching servers, especially proxy caches that you might have in your infrastructure.
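The conditional-GET exchange above can be sketched roughly like this. The ETag derivation (a hash of the serialized body) and the function names are assumptions for illustration, not Stormpath's actual implementation.

```python
import hashlib
import json

def make_etag(resource: dict) -> str:
    """Derive an ETag from the resource's serialized representation."""
    body = json.dumps(resource, sort_keys=True).encode()
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def conditional_get(resource: dict, if_none_match=None):
    """Return (status, body), honoring an If-None-Match precondition."""
    etag = make_etag(resource)
    if if_none_match == etag:
        return 304, None       # Not Modified: empty body saves bandwidth
    return 200, resource       # full response; client caches the new ETag
```

The 304 path is what lets intermediary caches answer without shipping the full representation again.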
Maintenance. Say you have your REST API up and running and you've got all these best practices in place. What are some of the concerns you might have in maintaining the API? Use HTTP redirects. Every client should be expected to know how to handle a 302 Redirect. If you need to change the URLs where things reside, it's okay to move them to different locations, as long as you automatically redirect the old request URLs to wherever things are now stored.
If you find yourself refactoring or you're cleaning up your service and you feel that things should be represented in different locations, it's okay, use HTTP Redirects. Your clients can still get to their data.
Also, use an abstraction layer for your endpoints if you have to migrate functionality. Maybe the clients always interact with a small, thin layer of code that knows, "This is legacy functionality, so I'll direct it over to this controller here; that's a newer request, so this handler over there needs to take care of it." Create an abstraction layer in your REST API, and you can make those redirections in your code without ever affecting your clients. It's a really nice benefit.
Finally, and I know this might not be digestible by people who aren't developers or don't have a computer science background, but if you can, use custom media types. This is really the heart and soul of REST, and of course Hypermedia. These are best practices; use them. They exist, and that's why they were defined in the protocol. Hopefully, as people adopt REST and understand these principles more and more, these techniques will become usable by many more people, and the benefits will increase as everyone understands them.
That's all I have for you. Again, we're Stormpath; it's free for any app, so you can try it out. We'd love your feedback. We are, like probably many people here, a startup in the Valley, and we love hearing customer use cases, feedback, and advice. If you have anything for us, we're definitely open to suggestions. Give it a shot. Thanks for your time.
Q: What are your thoughts on URL templates for token substitution?
A: You mean URL templates for token substitution for your customers of your REST API? I think it can be useful depending on the web framework that they're using.
The idea, though, is that unfortunately many of those systems were forced into that design. Ember, for example, has a pretty good REST architecture, and I believe it supports token substitution for IDs and whatnot. I think they do that because so many other services have been unclear or poorly designed: they force knowledge of IDs and where they reside in the service onto you, so these frameworks have to support that notion.
As you saw in my examples, if the HREF of every resource that comes back is a fully qualified resource URL, clients never need to know about IDs, because they will always have a handle with which to execute a request.
Again, that best practice is not implemented in a large majority of services so you have a lot of frameworks that have to support template based mechanisms.
Q: How do you respond to requests when there is a large amount of shared or global states? For example: You've created an application to manage employee shifts, and would like to properly implement a REST API.
A: It might sound contrived, but most of the time, in my experience, it is possible to represent those things as nouns. For example, there could be a shift object, or a shift plan object that contains metadata or attributes about that particular plan. You could submit that to the server, and the server can inspect it and then asynchronously go off and trigger various things. But the point is that most of the time you can represent that as a noun, do a POST, and represent that information in the context of a single request.
Also equally important in that kind of paradigm is to make sure that it's coarse grained. You want to be able to provide a lot of information, because what's in a shift plan might change over a couple of months as you're developing things.
If you make it coarse grained and you create a noun that represents that information, you can start to represent behavior as nouns.
I'll give you a really good example of that. In Stormpath's API, whenever a client needs to log in, whenever one of our customers has their own end user that needs to log into their apps, clearly that's a behavior. I am authenticating with a service. You might not immediately think of a noun that represents that use case, but it was actually easy for us to do.
We called it a login attempt. It's got properties: there's a date, there's the information that was submitted. For us, that behavior is encapsulated as a login attempt resource that is posted to the application resource. That creates a collection of children of the application: all the login attempts that happen for that particular app. Again, it might feel contrived, but it's amazing that when you go through that process, you start to clarify and clean things up into a coarse-grained form that actually simplifies things over time.
Q: How do you respond to GET requests that might require large amounts of processing or have long response times?
A: Just to be clear, the question is: depending on the service, an HTTP GET can take a really long time to respond. Maybe some return in a couple of milliseconds; maybe another GET, if it were synchronous, would take 10 hours to return.
The way I would actually model that in a REST API is that I would have a request object or a submission object like a report request or report submission that is then posted to some collection endpoint that can receive that data. That will return a 200 OK indicating, "Yes, I have received the request, I'm going to go start working on it. Here is the location back via the location header of where you should check whenever that thing's going to be finished."
And as part of that response body you might say, "Hey, this information is going to take a really long time. The expected duration is 10 hours. Please go to this URL when it's finished." Then the client can poll via whatever mechanism it has to figure out when that thing's done, and check back to retrieve the information.
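Here's a rough sketch of that submit-then-poll pattern, with hypothetical endpoint names and an in-memory job store standing in for a real queue. Note I use 202 Accepted for the submission, which is the common convention for queued work (the talk mentions 200 OK; either can signal acceptance).

```python
import itertools

# In-memory job store; a real service would persist this and run workers.
_jobs = {}
_ids = itertools.count(1)

def submit_report(params):
    """POST /reports: accept the request and hand back a status URL."""
    job_id = next(_ids)
    _jobs[job_id] = {"status": "pending", "params": params}
    # The Location header tells the client where to poll for the result.
    return 202, {"Location": "/reports/%d" % job_id}

def poll_report(job_id):
    """GET /reports/{id}: report progress, or the finished result."""
    job = _jobs[job_id]
    if job["status"] != "done":
        return 200, {"status": job["status"]}
    return 200, {"status": "done", "result": job["result"]}
```

The client submits once, remembers the Location, and polls until the status flips to done.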
Q: How do you migrate APIs across HTTP methods? For example, when you need to change an API call from a GET to a POST?
A: Yes. Well, you can actually. You can do a redirect. This is actually where media types might be beneficial if you had technical clients that could understand this stuff.
As part of the media type in the response to the GET, you could actually provide additional information that tells them, "Hey, this resource is over here and I want you to execute a POST." Unfortunately because you're kind of doing a backwards compatible migration mechanism, something like that's probably not in place or you can't just tell them immediately to go check the new URL.
That being said, for me, I'm a big stickler for backwards compatibility and not breaking things that already work, so I would keep the GET working in that particular scenario, and then maybe support a query directive or HTTP header that can be inspected to override the default behavior: return or redirect to a new resource that tells them, "Hey, I need you to come pull this in 10 hours," or something like that.
I would be hesitant to break existing clients, but I would provide a mechanism that allows them via header or a query parameter that new functionality can be implemented and then you can go redirect based on that.
Q: Are there any downsides to using UNIX timestamps?
A: Any downsides to using UNIX timestamps? There are really two that I can think of. First, your customer, or whoever is consuming the information, might be on a Windows platform. Not a big deal; Windows has libraries it can call to figure that out. The other thing is that a UNIX timestamp stored as a signed 32-bit count of seconds since the epoch can only go up to the year 2038.
If you have to represent timestamps past that, and it kind of sounds weird, but that's only about 20 years away; it's not that far off. If you have data, maybe a construction planning mechanism or something for the Olympic committee, there are dates that aren't too far away that surpass that limit. ISO 8601 never has that problem; you'll never have to worry. Remember what we had in '99? There was the crunch for the Y2K rollover. You might see the same thing in 2038.
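The 2038 limit is easy to demonstrate: the largest signed 32-bit value, interpreted as seconds since the epoch, lands on January 19, 2038, and one second later no longer fits.

```python
import datetime
import struct

# The largest value a signed 32-bit integer can hold.
MAX_INT32 = 2**31 - 1

# Interpreted as seconds since the UNIX epoch, it lands in January 2038.
rollover = datetime.datetime.fromtimestamp(MAX_INT32, tz=datetime.timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later overflows a signed 32-bit field.
try:
    struct.pack("i", MAX_INT32 + 1)
except struct.error as exc:
    print("overflow:", exc)
```

An ISO 8601 string like `2038-01-19T03:14:08Z`, by contrast, has no such ceiling.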
Q: What value does adding human-readable messages bring to software that is already in production?
A: For the scenarios where the app is already built and ready to go, it might not have that much benefit. But that being said, there are always cases where bugs might come up or maybe something occurs.
Maybe there's an internal server error with a new error code that represents information that you as a developer haven't seen yet while you were developing your app, and that extra information can help a debugger or a person involved in debugging. "My app was perfectly fine for a year and a half. Now all of a sudden it started failing, why?" Granted it might be the server's fault but at least you'll have that additional information to help figure it out and maybe come up with an immediate fix or just get a fix out the door.
For us it's more of a best practice to give as much information as humanly possible. But you're right: most of the time, if an app you built is totally stable, it's never going to need that additional information. I don't know about your particular use case, but if you're a public SaaS company, the biggest hurdle to adoption is people understanding your API and being able to consume it easily.
It's much better to front load the work in providing human readable information to solve problems because that is way more important to your business model than not having that information.
It's really a holistic view of that stuff, but you're right. It doesn't really help in technical scenarios after the service is well established.
Q: If you have a resource with a very large child collection do you paginate the child collection or provide a URL to the child collection?
A: That's a fantastic question, and we actually do both of those things. For example, in our API, a good illustration is a group that references a collection resource containing its accounts. There could be thousands of accounts in that particular group. We actually support, even within the expand query parameter, the ability to specify additional offset and limit values that pertain just to the particular property being expanded.
We do support pagination for nested collections or referenced collections within a resource. That has been a real benefit to our service because customers can still page data that might be more numerous than just the initial resource.
We do that in interesting ways. If you're interacting with an instance resource, ?offset=whatever&limit=whatever does not apply to that particular resource. What we would actually do is expand=groups(offset:0,limit:25), which means that any of the attributes inside the parentheses apply to that particular attribute of the resource.
That's how we solve it. There's no best practice that I've seen about how to address that. We thought that was particularly elegant because it was easy to read and that's just we've solved that problem. And it works well, and we haven't seen any issues with it.
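To make the syntax concrete, here's a small hypothetical parser for that style of expand directive. The regex, function name, and the semicolon separator for multiple expands are my own assumptions, not Stormpath's documented syntax.

```python
import re

# One directive: an attribute name, optionally followed by (key:value,...) options.
EXPAND_RE = re.compile(r"^(\w+)(?:\(([^)]*)\))?$")

def parse_expand(value):
    """Parse directives like 'groups(offset:0,limit:25)'.

    Returns a dict mapping each expanded attribute to its option dict.
    """
    result = {}
    for part in value.split(";"):  # hypothetical separator for multiple expands
        m = EXPAND_RE.match(part.strip())
        if not m:
            raise ValueError("bad expand directive: %r" % part)
        opts = {}
        if m.group(2):
            for pair in m.group(2).split(","):
                key, val = pair.split(":")
                opts[key.strip()] = int(val)
        result[m.group(1)] = opts
    return result
```

The pagination options stay scoped to the expanded attribute rather than leaking onto the parent resource.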
Q: How would you model a password reset with a parameter for your users?
A: That's a fantastic question, and I like it because you could actually go to Stormpath's API and do that today; if you tried it out, you would see how we do it. Off the application resource there is a collection resource called Password Reset Tokens, and the customer submits a POST to that collection endpoint. Again, when you submit a POST to a collection endpoint, it typically means you want to create a child of that resource; here we're actually creating a new token resource. The request body needs to specify a username or an email address as part of the payload.
When we get a submission to that token's endpoint, we inspect that and then we actually create a resource with the token that goes back to the customer. Then that can go in the email that they send out to their customer, or we actually can send out the email on their behalf so they don't have to deal with that particular use case. It is a behavior, it's a verb, but we have "nounified" it.
Again, it might sound contrived, but oddly enough the end goal is to simplify things, and it's almost surprising how much something that simple can improve a stateful, or at least a noun-resource-based, REST API. We nounified it, used POST to return a token, and then they can do whatever they want with that token.
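A minimal sketch of the nounified flow described above: POST to a token collection to create one, then POST the token back to consume it. The endpoint shapes, function names, and in-memory store are all hypothetical illustrations, not Stormpath's code.

```python
import secrets

# In-memory token store keyed by token value; a real service would persist this.
_tokens = {}

def create_password_reset_token(email):
    """POST /applications/{id}/passwordResetTokens: the nounified 'reset password'."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = {"email": email}
    return 201, {"href": "/passwordResetTokens/%s" % token}

def consume_password_reset_token(token, new_password):
    """POST the token back to complete the reset; the token is single-use."""
    record = _tokens.pop(token, None)
    if record is None:
        return 404, {"message": "unknown or already-used token"}
    # ...update the stored password for record["email"] here...
    return 200, {"message": "password updated"}
```

Popping the record on first use is what makes the token single-use, matching the delete-after-processing behavior the talk describes.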
Q: Is it common to see RESTful resources that only accept POSTs?
A: Yes, actually for us because of security reasons, you can't do a GET on that particular resource. When you create a token, we'll give you back the response code, a 200 OK, and the actual HREF of where that resource resides. But if you executed a GET on that resource it would give you back an error stating that you are not permitted to see that or whatever because there's security information associated with that. And that's totally legit.
REST doesn't say you have to support all the methods for each resource. It just says that under certain scenarios you have to adhere to certain behavior. You don't have to support CRUD for everything.
We end up not supporting it for that, and the reason why is that you don't need to look at that resource. In the password reset scenario specifically, when the end user gets their email and they click a link and it's got the token inside of it, all our customer has to do is just relay that token to us via a POST and then we process it and then delete it on our system. All of that is automated. They never actually need to look at the data. It's just a passing directive to us and it works great.
Thanks guys! My pleasure.