These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring to understand the details of each request and response.

07 Aug 2017
I was reading about the difficulties the City of New York was having when it comes to migrating off of the Palantir platform, while also reading about the latest cybersecurity drama involving ransomware. I’m spending a lot of time studying cybersecurity lately, partly because these stories involve APIs, but mostly because cybersecurity is impacting every aspect of our lives, including our democracy, education, and healthcare. One thing I notice on the cybersecurity stage is that everything is a much more extreme, intense representation of what is going on in the mainstream tech industry.
Ransomware is software that gets installed on your desktop or servers and locks up all your data until you pay the software developer (implementor) a ransom. Ransomware is just a much faster moving version of what many of us in the software industry call vendor lock-in. This is what you are seeing with Palantir, and the City of New York. What tech companies do is get you to install their software on your desktop or servers, or convince you to upload all your data into the cloud, and use their software. This is business 101 in the tech industry. You either develop cloud-based software, something that runs on-premise, or you are a mix of both. Ideally, your customers become dependent on you, and they keep paying your monthly, quarterly, or annual subscriptions (cough cough ransom).
Here is where the crafty API part of the scheme comes in. Software providers can also make APIs that allow your desktop and server to integrate with their cloud solutions, allowing for much deeper integration of data, content, and algorithms. The presence of APIs SHOULD also mean that you can more easily get your data, content, and algorithms back, or keep them in sync the whole time, so that when you are ready to move on, you don’t have a problem getting your data and content back. The problem is that APIs CAN enable this, but in many situations providers do not actually give you complete access to your data, content, or algorithms via API, or enable the true data portability and sync features you need to continue doing business without them.
This is vendor lock-in. It is a much friendlier, slower moving version of ransomware. As a software vendor you want your software baked into a customer’s operations so they are dependent on you. How aggressively you pursue this, and how much you limit data portability and interoperability, dictates whether you are just doing business as usual, or engaging in vendor lock-in. One thing I’m hopeful for in all of this is the vendors who see transparency, observability, interoperability, and portability, not just of the technical, but also the business and politics of delivering technology, as a legitimate competitive advantage. This means they will always be able to outmaneuver, and stay ahead of, software vendors who practice vendor lock-in and ransomware, whether of the slow or fast moving variety.
I spend a lot of time thinking about API rate limits--how they can hurt API providers, or, as my friend Tyler Singletary (@harmophone) says, incentivize creativity. I think your view on rate limits will vary depending on which side of the limit you stand on, as well as your own creative potential and limitations. I agree with Tyler that they can incentivize creativity, but it doesn’t mean that all limitations imposed will ultimately be good, or that all creativity will be good.
I found myself contemplating Github’s recent introduction of temporary interaction limits, which means “maintainers can temporarily limit who can comment, create pull requests, and open issues among existing users, collaborators, and prior contributors.” While this isn’t directly about API rate limiting, it does overlap, and provides us with some thoughts we can apply to our world of API consumption, and how we sensibly moderate access to the digital resources we are making available online.
When it comes to real-time fetishism in the digital world, those with the loudest bullhorns often get heard, and real-time gets treated as an unquestioned good, while I am becoming less convinced that anything meaningful gets done in a 24-hour time frame. Despite what many want you to believe, real-time does not always mean good. Sometimes it might do you some good to chill out for 24 hours before you continue commenting, posting, or increasing your consumption of a digital resource, whether you want to admit it or not.
Our digital overlords have convinced us that more is better and real-time is always ideal. Temporary interaction limits may not be the right answer in all situations, but they do give us another example of rate limiting by a major provider that we can consider and follow when it comes to crafting limitations around our digital resources. This is what rate limitations are all about for me: thoughtful consideration of how much of a good thing you will need each second, minute, day, week, or month. It is a great way to turn a quality digital resource into something better, or possibly maintain the quality and value of a seemingly infinite resource, by imposing just a handful of limitations.
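This kind of per-window limitation can be sketched as a simple fixed-window rate limiter. This is a hypothetical illustration of the general technique, not any specific provider's implementation:

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window_seconds` window."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new window has begun: reset the counter.
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False

# Three interactions allowed per minute; the rest are rejected.
limiter = FixedWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The same shape works whether the window is a second, a day, or the 24-hour cool-off period described above; only the two constructor arguments change.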
I was having a discussion with an investor today about the potential of algorithmic-centered API marketplaces. I’m not talking about API marketplaces like Mashape, I’m more talking about ML API marketplaces like Algorithmia. This conversation spans multiple areas of my API lifecycle research, so I wanted to explore my thoughts on the subject some more.
I really do not get excited about API marketplaces when you think just about API discovery–how do I find an API? We need solutions in this area, but I feel good implementations will immediately move from useful to commodity, with companies like Amazon already pushing this towards a reality.
There are a handful of key factors for determining who ultimately wins the API Machine Learning (ML) marketplace game:
- Always Modular - Everything has to be decoupled and deliver micro value. Vendors will be tempted to build in dependency and emphasize relationships and partnerships, but the smaller and more modular will always win out.
- Easy Multi-Cloud - Whatever is available in a marketplace has to be available on all major platforms. Even if the marketplace is AWS, each unit of compute has to be transferable to the Google or Azure clouds without ANY friction.
- Enterprise Ready - The biggest failure of API marketplaces has always been being public. On-premise and private cloud API ML marketplaces will always be more successful than their public counterparts. The marketplace that caters to the enterprise will do well.
- Financial Engine - The key to markets is their financial engines. This is one area where AWS is way ahead of the game--their approach to monetizing digital bits, and their sophisticated market-creating pricing calculators for estimating and predicting costs, gives them a significant advantage. Whichever marketplace allows for innovation at the financial engine level will win.
- Definition Driven - Marketplaces of the future will have to be definition driven. Everything has to have a YAML or JSON definition, from the API interface, and schema defining inputs and outputs, to the pricing, licensing, TOS, and SLA. The technology, business, and politics of the marketplace needs to be defined in a machine-readable way that can be measured, exchanged, and syndicated as needed.
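The definition-driven point above can be sketched as a machine-readable listing. All field names here are invented for illustration, not any existing specification:

```python
# A hypothetical machine-readable marketplace listing, covering the
# technology, business, and politics of a single API. Field names,
# URLs, and values are illustrative assumptions.
listing = {
    "interface": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "schema": {"inputs": "ImageIn", "outputs": "LabelsOut"},
    "pricing": {"unit": "call", "price": 0.0001, "currency": "USD"},
    "licensing": {"code": "MIT", "data": "CC-BY-4.0"},
    "terms_of_service": "https://example.com/tos",
    "sla": {"uptime": 0.999, "support_response_hours": 24},
}

# Because everything is machine readable it can be measured, exchanged,
# and syndicated -- for example, filtering listings by unit price.
affordable = listing["pricing"]["price"] <= 0.001
print(affordable)  # True
```

The value of such a definition is that the business and legal layers become as queryable as the technical interface itself.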
Google has inroads into this realm with their GSuite and Play marketplaces, but their approach feels more fragmented than those of Azure and AWS. None of them are as far along as Algorithmia when it comes to specifically ML-focused APIs. In the coming months I will invest more time into mapping out what is available via marketplaces, trying to better understand their contents--whether application, SaaS, or data, content, and algorithmic APIs.
I feel like many marketplace conversations often get lost in the discovery layer. In my opinion, there are many other contributing factors beyond just finding things. I talked about the retail and wholesale economics of Algorithmia’s approach back in January, and I continue to think the economic engine will be one of the biggest factors in any API ML marketplace's success--how it allows marketplace vendors to understand, experiment, and scale the revenue part of things without giving up too big a slice of the pie.
Beyond revenue, modularity and portability will be equally important as the financial engine, providing vital relief valves for some of the classic silo and walled garden effects we’ve seen impact the success of previous marketplace efforts. I’ll keep studying the approach of smaller providers like Algorithmia, as well as those of the cloud giants, and see where all of this goes. It is natural to default to AWS's lead when it comes to the cloud, but I’m continually impressed with what I’m seeing out of Azure, and I feel that Google has a significant advantage when it comes to TensorFlow, as well as their overall public API experience--we will see.
I get why SaaS and API providers offer a handful of pricing plans and tiers for their platforms, but it isn't something I personally care for as an API consumer. I've studied thousands of plans and pricing pages for API providers, and have to regularly navigate 50+ plans for my own API operations, and I just prefer having access to a wide range of API resources, across many different companies, with a variety of usage limitations and pricing based upon each individual resource. I really am getting tired of having to choose between bronze, gold, or platinum, and often getting priced out completely because I can't scale to the next tier as a user.
I understand that companies like putting users into buckets, something that makes revenue predictable from month to month, or year to year, but as we consume more APIs from many different providers, it would reduce the complexity for us API consumers if you flattened the landscape. I really don't want to have to learn the difference between each of my providers' tiers. I just want access to THAT resource via an API, at a fair price--something that scales infinitely if at all possible (I want it all). Ultimately, I do not feel like API plans and tiers will scale to API economy levels. I think as API providers, we are still being pretty self-centered, thinking about pricing as we see it, and we need to open up and think about how our API consumers will view us in a growing landscape of service providers--otherwise, someone else will.
As I pick up my earlier API pricing work, I see two distinct components: 1) all the API resources and pricing available for a platform, and 2) the details of the plans and tiers into which a complex list of resources, features, and pricing fit. It would be much easier to just track resources, the features they have, and the unit price available for each API. Then we could let volume, time-based agreements, and other aspects of the API contract help us quantify the business of APIs, without limiting things to just a handful of API contract plans and tiers, expanding the ways we can do business using APIs.
As an API provider, I get that a SaaS model has worked well to quantify predictable revenue in a way that makes sense to consumers, but after a decade or more, as we move into a more serverless, microservices, devops world, it seems like we should be thinking in a more modular way when it comes to the business of our APIs. I'm sure all you bean counters can get out of your comfort zone for a bit, and change up how you quantify access to your API resources, following the lead of API pioneers like Amazon, and just provide a master list of API, CLI, and console resources available for a competitive price.
I was learning about the approach Amazon has taken with their serverless API developer portal, and highlighting their approach to API plans, and couldn't help but think there was more to it all than just rate limiting your API. Amazon's approach to API plans is in alignment with other API management providers, allowing you to deploy your APIs, meter, rate limit, and charge for access to your API--standard business of APIs stuff.
Controlling access to a variety of API resources is something that has been well-defined over the last decade by API management providers like 3Scale, and now Tyk and DreamFactory. They provide you with all the tools you need to define access to APIs, and meter access based upon a wide variety of parameters. While I haven't seen the type of growth I expected in this area, we have seen a significant amount of growth as API management providers help to standardize things--something that will expand significantly because of availability from cloud providers like AWS, Microsoft, and Google.
We have a lot of work ahead of us, standardizing how we charge for API consumption at scale. We have even more work ahead of us to realize that we can turn all of this on its head, and start paying for API consumption at scale. I do not understand how we've gotten so hung up on the click and the view, when there are so many other richer, more meaningful actions already occurring every second of each day online. We should be identifying these opportunities, then paying and incentivizing developers to consume APIs in the most valuable ways possible.
With modern approaches to API management, we already have the infrastructure in place. We need to just start thinking about our APIs differently. We also need to get better at leveraging POST, PUT, and PATCH, as well as GET, when it comes to paying for consumption. Imagine a sort of event driven API affiliate layer to the web, mobile, device, and even conversational interfaces--where developers get paid for making the most meaningful events occur. It makes the notion of paying per view or click seem really, really, shit simple.
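The event-driven affiliate idea above can be sketched with a simple payout table keyed on HTTP method and resource. All names and rates here are invented for illustration:

```python
# Hypothetical payout rates per meaningful API event -- a POST that
# creates an order is worth more to the platform than a simple GET.
PAYOUTS = {
    ("POST", "orders"): 0.50,
    ("PUT", "profiles"): 0.10,
    ("PATCH", "profiles"): 0.05,
    ("GET", "products"): 0.001,
}

def developer_earnings(events):
    """Total affiliate payout for a developer's API activity.

    Events outside the payout table earn nothing.
    """
    return sum(PAYOUTS.get((method, resource), 0.0)
               for method, resource in events)

events = [("POST", "orders"), ("GET", "products"), ("PATCH", "profiles")]
print(round(developer_earnings(events), 3))  # 0.551
```

The point is the asymmetry: writes that create real value can be priced orders of magnitude above the view or the click.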
Anyways, just a thought I needed to get out. The lack of innovation, and abundance of greed when it comes to API monetization and planning always leaves me depressed. I wish someone would move the needle forward with some sort of modular, event-driven API monetization framework--allowing some different dimensions to be added to the API economy.
I was learning about the AWS Serverless Developer Portal, and found their API plan layer to be an interesting evolution in how we define the access tiers of our APIs. There were a couple different layers of AWS's approach to deploying APIs that I found interesting, including the AWS marketplace integration, but I wanted to stop for a moment and focus in on their API plan approach.
Using the AWS API Gateway you can establish a variety of API plans, with the underlying mechanics of that plan configurable via the AWS API Gateway user interface or the AWS API Gateway API. In the documentation for the AWS Serverless Developer Portal, they include a JSON snippet of the configuration of the plan for each API being deployed.
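A usage plan in the AWS API Gateway follows a shape roughly like the following. This is a sketch from memory of the documented fields, not the exact snippet from the serverless developer portal docs, and the identifiers are placeholders:

```python
# Approximate shape of an API Gateway usage plan: throttle controls
# steady-state and burst request rates, quota caps total calls per
# period, and apiStages binds the plan to deployed APIs.
usage_plan = {
    "name": "Basic",
    "throttle": {"rateLimit": 100, "burstLimit": 200},  # requests/second
    "quota": {"limit": 5000, "period": "MONTH"},
    "apiStages": [{"apiId": "hypothetical-api-id", "stage": "prod"}],
}

# A consumer exhausting the quota is cut off until the period resets.
print(usage_plan["quota"]["limit"])  # 5000
```

Because the same configuration is reachable via the AWS API Gateway API, plans like this can be created and modified programmatically, not just through the console.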
This reminds me that I needed to take another look at my API plan research, and take the plan configuration, rate limit, and other service composition API definitions I have, and aggregate their schema into a single snapshot. It has been a while since I worked on my machine-readable API plan definition, and there are now enough API management solutions with an API layer out there, I should be able to pull a wider sampling of the schema in play. I'm not in the business of defining what the definition should be, I am only looking to aggregate what others are doing.
I am happy to see more folks sharing machine-readable OpenAPI definitions describing the surface area of their APIs. As this work continues to grow we are going to have to also start sharing machine-readable definitions of the monetization, plan, and access layers of our API operations. After I identify the schema in play for some of the major API management providers I track on, I'm going to invest more work into my standard API plan definition to make the access levels of APIs more discoverable using APIs.json.
Being able to provide different levels of access for a single API has been one of the telltale characteristics of any modern web API. Savvy API providers don't just make their valuable API resources publicly available for anyone to use; they craft a logical set of plans that are in alignment with their wider business objectives, outlining how any developer can put an API to use--this is the essential business of APIs.
Mashery was the first API management provider to standardize this approach to API access, something further evolved by 3Scale, Apigee, and others. Amazon's release of their API gateway wove API management into the fabric of what we call the cloud, and the introduction of usage plans does the same for API service composition, making the identification, metering, limiting, and monetization of resources made available via APIs a default function of operations in the cloud.
Being able to take any digital asset, whether it is data, content, or an algorithmic resource, make it available via a URL, control who has access, meter their usage, and charge different rates for this usage, is where the business of APIs rubber meets the road. API service composition lets you dial in exactly the right levels of access and usage required to fulfill a business contract, delivering precisely the service that customers want for their web, mobile, and device apps.
It's taken a decade for this key element of doing business on the web to mature beyond just a handful of vendors, then into an assortment of open source solutions, and now something that is just baked into what we know as the cloud--allowing us to plan API access consistently and universally across all the digital resources we are increasingly storing and operating in the cloud.
I have done a lot of reading in the last week, catching up on my monitoring of the API space. I have read a couple of posts about the reliability of APIs, and the overall viability of building applications, and businesses based upon them. A couple of the posts were focusing on the shuttering of ThinkUp, but a couple others were just part of the regular flow of these stories that question whether we can depend on APIs or not--nothing new, as this is a regular staple of bloggers and the wider tech blogosphere.
My official stance on this line of thinking is that I would not want to build a business or application that depends on leading API platforms like Twitter, Facebook, Instagram, and others, but I will happily build businesses, applications, and system integrations on APIs. You see, this isn't an API issue, it is a business and vendor viability issue. As with other sectors, there are badly behaved businesses, and there are well-behaved businesses--I try to choose to do business with the well-behaved ones (I can't always achieve this, but I try).
I find the startup culture's ongoing desire to point out how unreliable APIs are, while simultaneously supporting the overall business tone set by venture capital investment, often with delusionally blind levels of support, just not reconcilable. I'm not saying all VC investment is bad, so don't even take this down that road. I am saying that the quest for VC investment, the shift in priorities once VC investment is acquired, which shifts further with each additional round and the final exit, is setting a tone that is completely at odds with API reliability.
The problem really begins when APIs become the front-end for this blame. If I depend on vendors for my brick and mortar store, and the delivery trucks don't reliably bring my products, I don't talk about how you can't depend on trucks--I find new vendors. Of course, I can't find new vendors when they can't be replaced, like Twitter and Facebook, but that is a whole other conversation--though it, too, is a symptom of the tone being set by the VC currents (this is a business conversation). Blaming APIs instead of raising questions about the business ethics bar being set by venture capital shows the blinding power of greed, as the tech community refuses to blame VC $$, and shifts this to being about the viability of APIs, because I will get my VC $$ some day too bro!
I am not saying APIs are always good. I'm just saying they aren't bad. Hell, they aren't even neutral. They are simply a reflection of the business behind them, as well as a reflection of the industry they operate in. Stop blaming them for businesses not giving a shit about developers and end-users. Maybe we could start changing the tone by admitting the #1 priority is always set by VC $$, not by our API community, or even our end-users and customers, and that all this shit is out of whack.
I track on the API operations of around 2000 companies. Honestly, most of the 10K+ APIs in the ProgrammableWeb API directory are long gone--deprecated, acquired, or the lights simply shut off. There are only a couple thousand companies, institutions, organizations, and government agencies doing public APIs, with only a couple hundred doing them in an interesting way.
This should not diminish the API conversation, as public APIs are just one aspect of the API discussion, and honestly one area that not all companies and organizations will have the stomach for. However, a certain amount of available public APIs will always play an important role in setting the tone for the entire API space, providing us with (hopefully diverse) leads when it comes to crafting our own strategies for operating our businesses on the Internet. Very little of what I do as the API Evangelist involves ideas that originate from my own head--my work requires a wealth of examples which I can point to in the wild.
I am working on new ways to earn a living, while also generating research, analysis, code, and specifications that the API sector can put to use in their own operations. One way I will be doing this is by publishing guides to the technology, business, and politics of API design. This isn't API design in the sense of REST and how you craft endpoints; it is design thinking around the operations of your API. I learn a ton flipping through the API portals of the 2000+ companies I keep an eye on, and I wanted to carve off small chunks of this and share them with you.
One area of API operations that has always fascinated me, and is one of the original founding focuses of API Evangelist, is how companies craft their API plans and pricing. Whether or not an API has a plans & pricing page is a very telling signal all by itself, as APIs are often a side project without any real support and investment from the parent company or organization, and the lack of a coherent plan with simple pricing is often a sign of other deficiencies. However, when an API plan and pricing page is present, I find it to be a very telling representation of a company, organization, institution, government agency, and even individual in some cases.
I track on many data points for any single API I keep an eye on, ranging from their Twitter account and blog RSS, to the activity of their Github profiles. One of the areas I link to, when present, is the plans and pricing page. This gives me the ability to quickly check up on the pricing of APIs across various business sectors, something that drives my API plans research. With the help of my organization API and my screenshot API, I was able to easily harvest, organize, and make available the plans and pricing for 250 of what I feel are some of the more relevant APIs out there today.
To help make my research more explorable, I organized my API plan and pricing highlights into a single PDF guide, which allows you to experience the best of my API research, without all the clicking and searching I had to do. When I record details about the plans and pricing for an API, I have a rating of 1-3 for what I call API plan coupling, which is how tightly coupled their monetization strategy is to their API. Is the pricing that is present directly applied to API consumption, or is it more for the SaaS or PaaS side of things? While not all of the plans & pricing I include in this research are tightly coupled with the API, they are all from platforms where the API does play a significant role in platform operations.
I have many of the common building blocks employed for API monetization strategies, and for the actual API plan implementations, recorded in a machine-readable way as part of my research. For this guide, I wanted to step back and look at things through more of a design lens, and less from the technical, business, or even political side of things. I think the visual of each plan and pricing page says a lot, and doesn't need a bunch of analysis from me to fluff it up. Just flipping through, you get a sense of what looks good and what doesn't, a process that I hope will push forward more consistency across the API space.
There are numerous lessons present in the screenshots in this guide, around which building blocks are required to support API plans and pricing--things like breaking things up into tiers, providing contact information, offering mechanisms for requesting a quote or demo, and making it known that a trial version is available. There are also lessons in how to present large amounts of very complex information, and some lessons in how not to do it, with plenty of evidence of why simple works. Oh, and that you should have a decent graphic designer! ;-)
I dismiss claims from the vocal startup and VC elite who traffic in absolutes when it comes to API monetization and pricing--that freemium won't work, or that offering unlimited access via API is always a mistake. Every company, API, and consumer will be different. In my opinion there is a wealth of strategies out there to learn from, which you can apply in your own strategy. There are many variables, seen and unseen, that go into whether or not an API will be "successful", and I strongly reject the notion that there are absolutes--there are only agendas, usually of the behind-the-scenes kind.
I have a number of API plans and pricing pages that I did not include in the current release of this guide. I will consider evolving and shifting it regularly, like I try to do with my other guides. I know I learn a lot from having them in a single place that I can easily flip through, experiencing the design patterns present across these 250 API platforms, and hopefully you do too. I'm currently planning an interactive micro-app version of this research, as well as a coffee table edition for the API socialite who also enjoys entertaining.
You can purchase a copy of The Pricing & Plans for 250 API platforms over at Gumroad, get a copy of this design guide crafted from my API research, while also supporting what I do -- thank you!
I was pushing forward my API plan research this weekend, building on some of the tooling I developed during the last round, and the machine readable API plan format I hammered out late last year to help me define API plans. This time I'm applying it to nine of the SMS API providers I'm currently profiling, trying to get a new view of the plans of SMS APIs like Twilio and Plivo, but also working to continue polishing my 100K view on the SMS API industry.
Documenting The API Plans Of Nine SMS APIs
This last Friday I got to work profiling the pricing, plans, rate limits, and access tiers for nine of the leading SMS API providers that I keep an eye on. From their pricing pages I gathered the common metrics, limitations, geographic considerations, pricing, and other elements offered by each provider, putting them into separate buckets, and standardizing how I record these valuable aspects of each API provider's resources. This is a process where I'd say 1/3 of the time things fall right into place, 1/3 of the time it takes some translation and massaging to make them fit, and the other 1/3 of the time things just don't fit into any common way of thinking.
Identifying The Common Metrics
I found it pretty easy to identify the common metrics applied to SMS APIs, as most everything revolves around the concept of a message and a number. There are nuances to the resources being metered, like the separation between SMS and MMS, and between a local phone number, a toll-free phone number, and a short code, but for now I'm just trying to put pricing into common buckets, by loosely grouping the metrics that are applied via API plans.
List of Pricing Across SMS APIs
Along with identifying the common metrics being applied to SMS APIs, I was able to establish a list of the pricing being leveraged across these providers. I have machine-readable entries for sending and receiving SMS or MMS, and for the setup and rental fees associated with purchasing numbers and short codes. I have ways of separating out fixed costs, such as fees, from per-unit calls, as well as being able to handle ranges, and different rates and limitations being applied to different packages. It is far from perfect, but I am feeling pretty good about it as v1 of the API plan format.
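A rough sketch of how such pricing entries might be captured, separating fixed fees from per-unit costs and ranges. The field names are my own draft for illustration, not a published schema, and all numbers are invented:

```python
# Draft pricing entries for a hypothetical SMS provider. "unit" entries
# are per-call costs, "fixed" entries are recurring fees, and "range"
# entries cover pricing that varies (e.g., negotiated short codes).
plan_entries = [
    {"resource": "sms.send", "type": "unit", "price": 0.0075, "currency": "USD"},
    {"resource": "mms.send", "type": "unit", "price": 0.02, "currency": "USD"},
    {"resource": "number.local", "type": "fixed", "price": 1.00,
     "currency": "USD", "period": "MONTH"},
    {"resource": "shortcode", "type": "range", "price_min": 500.00,
     "price_max": 1000.00, "currency": "USD", "period": "MONTH"},
]

# Grouping entries by pricing type makes providers comparable by bucket.
unit_priced = [e["resource"] for e in plan_entries if e["type"] == "unit"]
print(unit_priced)  # ['sms.send', 'mms.send']
```

Once nine providers are recorded in the same buckets, comparing a per-message price across all of them becomes a one-line query rather than nine pricing pages.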
Linking Plan Metrics To Resources
One big disconnect for me currently is being able to fully understand how metrics, limits, and pricing apply to API consumption, such as the linkage between a specific API plan element and an individual API endpoint and method. Another dimension of this is that most API plan elements represent a small portion of all the API endpoints available--meaning you only pay for sending a message or procuring a short code number, and not for configuration, logging, and other more utility-focused endpoints. I have a way to link each plan element to a specific path and verb, but I won't be implementing it until I apply this to more API providers. I am not going to worry about this one right now, as I know it will happen in one of the next sprints for this work.
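The linkage could be as simple as attaching a path and verb to each plan element, and treating everything unlinked as a free utility endpoint. This is a hypothetical sketch with invented paths and prices:

```python
# Hypothetical linkage between plan elements and an API's surface area.
# Only some endpoints are metered; utility endpoints stay free.
plan_elements = [
    {"name": "send-message", "path": "/messages", "verb": "POST",
     "price": 0.0075},
    {"name": "buy-shortcode", "path": "/shortcodes", "verb": "POST",
     "price": 500.00},
]
all_endpoints = [
    ("/messages", "POST"), ("/messages/{id}", "GET"),
    ("/shortcodes", "POST"), ("/logs", "GET"), ("/config", "PUT"),
]

metered = {(e["path"], e["verb"]) for e in plan_elements}
free = [ep for ep in all_endpoints if ep not in metered]
print(len(free))  # 3
```

With that linkage in place, a consumer could compute the cost of a planned integration directly from the machine-readable plan, before writing a line of code.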
The Unclear World of Limitations
Another big gray area, and one I am not sure how I will deal with completely, is the murkiness of how rate limits are applied. Only in some cases are limits actually broken down in detail, including where the ceilings are. Most of the time, pricing is used as a governor for rate limits, meaning if you can afford it, there are no limits. For now, I will just state "unclear" if I don't know where the limits are, and most likely leave it as an individual transaction, as opposed to labeling it as "range" or "unlimited", as I do in other cases. While I think API limits will come into focus as I profile more APIs, I also know this will continue to be a murky area for some providers, either because of a lack of vision when it comes to business strategy, or in some cases because providers will be trying to obfuscate the limitations so they don't scare consumers off.
More API Plan Profiling Before I Iterate Further
There are many challenges present as I work to compare these similar API plans, but as a first crack, I'm pretty stoked with what I have been able to accomplish and make machine readable. In addition to including this data in the API listing for my API plan research, I also broke it out separately into a view that showcases the API plan details across all nine SMS API providers. I find it valuable to look at the plan elements alongside all of the API endpoints, but I also find it extremely valuable to see how the API plans size up against each other, without the technical details of the APIs distracting me--I like APIs more than I like money. :-)
Next, I'm going to move my research into some of the other more mature API stacks like email, compute, storage, and other essential building blocks for websites, mobile, and device-based apps. To establish the draft format for my API plans format, I looked at 60+ diverse APIs, but for this sprint I wanted to target a handful of APIs present within common business verticals. Before I iterate on the API plan format any further, I want to make sure it works for defining the details of individual API plans, and also does so in a way that makes it easy to compare the plans for similar APIs within a specific business vertical. This SMS research provided me with a first implementation of API plans applied across nine separate, but similar, APIs, and it is also available as a machine-readable index for the business of these APIs, within the APIs.json files for each API, as well as the APIs.json for the overall SMS API collection.
A couple of weeks ago I started playing with a machine readable way to describe the pricing, and plans available for an API. I spent a couple of days looking through over 50 APIs, and how they handled the pricing, and their API access plans, and gathered the details in a master list, which I am using for my master definition. I picked up this work, and moved it forward over the last two days, further solidifying the schema, as well as launching an API, and set of admin tools for me to use.
While my primary objective is to establish a machine readable definition that I can use to describe the plans of the APIs I provide, as well as the ones that I monitor as part of my regular work in the space--I needed an easier way to help me track the details of each API's plan. So I got to work creating a simple, yet robust admin tool that allows me to add one or many plans, for each API that I track on.
To help me drive this administrative interface I needed an API (of course), that would allow me to add, edit, and delete the details for each plan, using my API plan schema as a guide. I got to work designing, developing, and launching the first beta version of my API plans API, to help me gather and organize the details for any API I want, whether it's mine, or one of the many public APIs I track on.
Now that I have an API, and an administrative interface, I'm going to get to work adding the data I gathered from my previous research. I have almost 60 APIs to enter, then I hope to be able to step back, and see the API plan data I've gathered in a new light. Once I get to this stage, I'm looking to craft a simple embeddable page for viewing an API's plan, and create some visualizations for looking across, and comparing multiple APIs. I'm looking to apply this concept to verticals, like with business data via APIs like Crunchbase, AngelList, OpenCorporates, and others.
While my API plan schema is far from a stable version, it at least provides me with a beginning definition that I can use in my API profiling process. Here is the current version I have for the Algolia API, to demonstrate a little bit of what I am talking about.
This current version allows me to track the pages, timeframes, metrics, geo, limits, individual resources, extensions, and other elements that go into defining API plans, and then actually organizing them into plans, that pretty closely match to what I'm seeing from API providers. For each plan I define, I can add specific entries, that describe pricing structures, and other hard elements of API pricing, but then I can also track on other elements, giving me a looser way to track on aspects that impact API plans, but may not be pricing related.
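To make this a little more concrete, here is a minimal sketch of what a single plan definition along these lines might look like, expressed in Python. The field names and values are my own illustration of the elements described above (pages, timeframes, metrics, limits, entries), not the actual schema or any provider's real pricing:

```python
import json

# Hypothetical sketch of one API plan definition, loosely modeled on the
# elements described above. Field names and values are illustrative only.
plan = {
    "name": "starter",
    "pages": ["https://example.com/pricing"],  # hypothetical pricing page
    "timeframes": ["monthly"],
    "metrics": ["records", "operations"],
    "limits": [
        {"metric": "records", "max": 100000},
        {"metric": "operations", "max": 1000000},
    ],
    "entries": [
        {"metric": "records", "price": 0.40, "per": 1000, "currency": "USD"},
    ],
    "other": ["email support"],
}

print(json.dumps(plan, indent=2))
```

The split between hard "entries" (pricing structures) and looser "other" elements mirrors the distinction drawn above between pricing-related and non-pricing aspects of a plan.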
I am pretty happy with what I have so far. I hope in a couple of years this could be used as a run-time engine for API operations, in a similar way that the OpenAPI Spec and API Blueprint are being used today, but rather than describing the technical surface area, this machine readable definition format will describe the business surface area of an API.
I am continuing to push forward my API plans research, where I look closely at the common building blocks of the service composition, pricing, and plans available for some of the leading API providers out there. I have no less than ten separate stories derived from the pricing page of Algolia, the search API provider--I will be using Algolia as a reference for how to plan your API, along with elder API pioneers like Amazon and Twilio, for some time to come.
One area of Algolia's approach I think is worth noting is the enterprise level of their operations. They provide the most detail regarding what you get as part of the enterprise tier, being very public about their operations in a way you just do not see from many API providers. When it comes down to it, the Algolia enterprise search plans are all about no limitations--I think their description says it well:
Your own dedicated infrastructure. Don't like limits? Meet our dedicated clusters. Optimal for high volumes of data, they scale to thousands of queries per second. Search performance and indexing times have never been so good.
The basic building blocks of how Algolia is monetizing their search API--records and operation API calls--melt away at the enterprise level. The lower four Algolia plans meter the number of records, and the operation API calls you make, charging consumers using four separate pricing levels. If you are an enterprise customer, the need for this metering melts away, eliminating the default limitations applied to lower levels of API consumption.
I support more transparency in enterprise API plans, as well as other partner tiers of access. I do not think Algolia's approach to delivering enterprise services is unique, but their straightforward, simple, and transparent approach to doing it is. In an API driven world, the enterprise levels of access do not always have to be that age-old mating dance, involving smoke and mirrors, and pricing pulled out of a magic hat--it can just be about reducing the limitations around retail levels of API access, and getting business done.
How to monetize APIs is one of the top questions I get from companies, right after concerns around security and control. I have separated my research into two main buckets: the first is focused on the questions I should be asking around API monetization as I'm planning my strategy, with the second focused on the actual plans for the operations of leading API providers. There is a lot of overlap between the two, but I guess API monetization is more about strategy, and API plans is more about operations.
When I started my API monetization research, it was very focused on how you make money from your APIs, resulting in the poorly crafted title. I'm not making the same mistake with my API plans research, which is meant to help define a wide range of motivations for providing APIs. I think every API should have an API monetization plan, to cover the costs of acquisition, deployment, and management in a sensible way--all APIs should have a plan, but not all APIs need to have pricing.
This is why I labeled my research into API plans the way I did, instead of just focusing on API pricing. Not all APIs have a straightforward API monetization strategy that can be translated into "pricing", from the dark side of platforms that are just content farms, to the brighter side where platforms are focused on the social good. There are many motivations behind API operations, which is why I'm trying to come up with some common ways to reference these motivations in a machine readable way.
Keep an eye on my API plan research, as I'm rapidly evolving the building blocks that go into planning your API operations. I am also publishing some common, machine readable definitions from leading API players like AWS, Twilio, and more. All of this is very alpha work, so it might seem cluttered at first, but I am working to slice it up into a 101, 201, and 301 series, as well as some sections that are dedicated to learning, with others more focused on strategy and then execution.
There is a lot of work left to be done before I can compare the pricing of cloud storage APIs like Amazon and Microsoft, or messaging like Twilio and Tropo, but in the short term, the ability to find APIs that depend on donations like Court Listener, and easily identify the data and content farms like Crunchbase and AngelList, who don't have real business models that API consumers can benefit from, is important to my operations.
Ultimately there are two primary motivations here: first, getting a machine readable way to discover and compare APIs and the resources they are providing; second, providing a way for API providers to easily understand the common patterns available across the API sector, along with tooling they can use to craft the API plan that works best for them, based upon the successful plans already being put to work across the space.
I've long wanted a machine readable way to describe, discover, and compare the pricing of the leading APIs that I track on. I've slowly documented some of the common building blocks of how providers are monetizing their APIs, as well as the details of the plans that go into platform operations.
As with all of my work, I am not looking to pull a machine readable representation out of thin air. I am taking the API definitions and underlying schemas at play with the pricing and planning APIs from AWS and Twilio, and identifying other common patterns extracted from the approaches of other API providers, from a diverse cross-section of the API sector, to bring this into focus.
To help me establish a common JSON schema that could possibly describe pricing and planning behind common API platforms, I took a look at 60+ providers, to capture a v1 schema:
I took a quick pass through these API platforms looking for signs of pricing, as well as other motivations behind the operations. I am now taking a second pass, and actually crafting a JSON representation of each platform, to help me push forward the schema, and better understand where it falls short--here are the first five that I have defined so far:
Amazon EC2 API
Amazon S3 API
I have over 55 more providers to apply this schema to, before I am even willing to consider it an early start. Here they all are in a single definition. I only have the plans described for the first five, and many of them do not even have plans, with various couplings to the monetization strategy of their core business.
This is another aspect I should probably clarify. There is a value for each provider that rates the coupling of their API monetization strategy to their core business model. I am not sure exactly what this means yet, and I quickly expanded this to be a four level ranking, pushing beyond just three levels. Overall it seems to be helping me understand the motivations behind the API, in relation to the core business mission of the company or companies that make things go round.
You are welcome to look at all 60+ of the companies I'm looking at, which I've published as a GitHub Gist below. I've already added four or five new ones, and will be pushing forward with more as I have time. There is still much to flesh out, and I still can't compare apples to apples, like Amazon compute to Azure compute, or Twilio SMS to Tropo SMS, but I am able to discover APIs that don't have detailed plans, and which ones are only about content generation, or possibly selling products and devices.
I suspect, much like my other API definition work, this API planning specification will continue to evolve as I profile more companies. Hopefully eventually the spec, and the patterns I've defined from the approaches of API providers that are working, will become patterns that other providers can emulate, working to standardize what is possible when searching and ranking the pricing across hundreds or even thousands of API providers.
I am spending a significant amount of time looking through the pricing pages for leading API providers, working to get a sense for some of the common approaches to API monetization in use across the space. Along the way I am also finding some simple and unique approaches from API providers that I wanted to share as bite-size API planning stories here on the blog.
As I was working to understand the coupling between the Box SaaS business model and the one applied to their API, I noticed an interesting element that was part of their enterprise API plan: custom terms of service. At first glance it doesn't seem like much, but making elements of your TOS dynamic, allowing them to be used as a metric within your API plans, opens up a whole world of possibilities.
I have to note, this option is only available in the enterprise plan, which means only those with the most resources get this opportunity, but I still think its presence is meaningful. Right now, most terms of service and privacy policies are immovable shadows that guide how we do business and conduct our personal lives online, so the ability to think of them more dynamically, tied to specific API access plans, has huge potential. Unfortunately, in the true Silicon Valley spirit, only some of this potential will be good; much of it will be in the name of exploitation, and the shifting of how power flows.
I have terms of service listed as a potential metric in my API plans research--we'll see where this goes, as my work evolves. I have a whole list of bite-size API monetization, pricing, and planning stories queued up. I will try to space them out, alongside other stories, but you will just have to suffer a little as I spend time expanding on my API monetization, and API plans research areas.
I have some really amazing resources exposed as APIs. Everyone is doing it these days, and I have some good ones, now I just need a plan. You know, actually, I need several plans, that will help me expose these resources to the right people, while retaining as much control as I need, and also generate revenue from these resources, tailored to whoever I am offering them to.
Starting With The Essential API Building Blocks
I want to allow anyone to access my valuable APIs, however I want to control exactly how much they can access, and even limit it to a free trial, with a small number of calls upon the API. I am going to call this Plan A, also known as my freemium layer, meant to just whet the appetite of any potential API consumer. I now have a basic level of access to my API resources that anyone can sign up for 24/7, yet I get to dictate exactly how many hits on the API they can make, and who gets access.
Moving Beyond Plan A For My API Strategy
Plan A only gets me so far; I need to also be able to establish additional plans that help me cover the costs of my operations, and hopefully also generate some revenue from API access and usage. I need to be able to establish any number of plans that I will need to be successful. Each additional plan that I offer, let's call them plans B, C, and D, should have the ability to sign up 24/7 without approval, possess a trial version option if necessary, and allow for charging of setup costs to get going--if I so choose.
Defining Usage Metrics That Are Meaningful To Me
All of my plans will start with these elements, but then should allow me to set any other metric I desire. I want to track by API call, by message, according to bandwidth consumed or stored, the time period in play, the scope of compute being applied, and much, much more. Any of my plans should be able to be measured in these units, or anything else I want to define. If an API plan provides access to a blog or product catalog, the requirements might be different than when I provide access to images, videos, podcasts, or other heavy objects. I should be able to define the metrics, and set a price per metric that can be applied to each API I am making available.
Establishing Limitations For API Access
Each of my plans should have well defined units of operation (metrics), cost per unit, and volume pricing, but should also possess limitations on what can be accessed. I need to restrict some plans to a daily amount, or limit server loads by only allowing a certain amount of requests or bandwidth per second, minute, hour, day, week, or month. If I want, I should be able to leave API access open ended. How I define the access for each of my API plans should be tailored exactly for the intended audience of each plan, with an infinite number of plans possible.
Which API Methods Are Available In Each Of My Plan(s)
No plans are created equal. Which API resources, methods, and verbs are available in any given plan is, again, tailored to the intended audience of each plan. My Plan A allows limited reading from just a handful of resources for everyone, while my Plan C allows for reading and writing from a variety of API resources, designed for a specific group of partners, defined by me. My public facing plans encourage consumption of specific resources that I have crafted, while my partner plans encourage two way usage, incentivizing my partners to add, update, and curate information available via the API resources I have made available.
All The Variables I Need To Compose The Plans I Need
I can create as many APIs and individual methods as I need, package them up into plans, and measure their access by call, bandwidth, storage, time period, compute, and other critical metrics. It is within my control to set limitations and volume levels of access across these metrics, charging exactly what I need to incentivize API usage, while also covering the costs of my operations, and bringing a healthy bit of revenue in the door as well--I have a full toolbox of what I need to orchestrate my API driven business model(s).
Provide The Features My Consumers Will Need Along With Each Plan
Along with the access to API resources within each plan, I need to also bundle other features, like support, a service level agreement (SLA), and other resources that consumers will need to be successful with integrations. Again, each plan should be tailored for the intended audience, providing the API access they need, while also making sure all their adjacent needs are met along the way as well.
Providing Variable Unit(s) of Value For All API Transactions
Whether it's an API call, bandwidth transmitted, or duration of resource usage, you have measurable units of value, where a price can be set in a way that lets it be adjusted by volume, or by plan level. An API call might be one cent (for the sake of discussion), but if you access more than 10K per day, it goes down to half a cent per API call. For partner plans, each API call might be 1/10th of a cent by default, dropping to 1/100th of a cent if you access more than 10K per day. Each API has its unit of value, with the price for this unit of value variable depending on which plan you are in, and how much you consume.
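The volume pricing just described can be sketched as a simple function. The rates are the hypothetical numbers from this discussion, not any real provider's pricing, and I've modeled the discount as applying only to calls beyond the threshold (it could just as easily apply retroactively to all calls):

```python
def call_cost(calls_per_day, base_rate, volume_rate, threshold=10_000):
    """Price one day's API calls in dollars. Calls up to the threshold
    bill at base_rate; calls beyond it bill at the discounted
    volume_rate. One reading of the volume discount described above."""
    low = min(calls_per_day, threshold)
    high = max(calls_per_day - threshold, 0)
    return low * base_rate + high * volume_rate

# Public plan: one cent per call, half a cent past 10K per day.
print(call_cost(25_000, 0.01, 0.005))
# Partner plan: 1/10th of a cent, 1/100th of a cent past 10K per day.
print(call_cost(25_000, 0.001, 0.0001))
```

The same function shape works for any metered unit--swap calls for gigabytes of bandwidth or hours of compute and the tiering logic is unchanged.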
How Do We Know What Price To Set For Each Unit Of Access?
You have an API, but where do you even start in understanding how to price this resource, which you feel is really cool and valuable, but might be completely meaningless to others? You can start by looking at competing API providers, and follow any precedent that has been set in the industry to date. Beyond following the lead of others, you can break down your own story, dial in exactly what it costs to deliver an API, and set the price for the industry, leading others.
What Did It Cost To Acquire What Is Needed For An API Resource?
Every API starts somewhere. What did it take to discover the idea for an API, negotiate or license its usage, or maybe you had to purchase some content, data, or access to a programmatic resource. Before you get to work developing an API, there will be some investment to bring an API idea to production.
What Has Gone Into Developing An API?
Beyond the acquisition of an API, like API usage itself, this might be a two way street--there might not just be costs associated, but also investment from partners, investors, or others. What has gone into normalizing resources, and designing and developing the database, server, and the API itself? Consider the network too--what will it take when it comes to network bandwidth, and what are the DNS considerations to manage traffic? Thanks to cloud computing, there are many ways to meter areas like compute, storage, and bandwidth, so make sure to put these tools to work for you.
What Are The Realities Of What It Will Cost To Operate An API?
What are the costs associated with maintaining the central truth of an API, its definition? What are the hard compute, storage, and bandwidth costs associated with API operations? I'm sure you have a good handle on what these are. How much do you have invested in management, monitoring, security, and evangelism? There are a lot of costs to consider beyond the visible aspects of API operations, and also a lot of associated costs that often get left behind, like support, and the creation of additional resources.
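Pulling the acquisition, development, and operational questions above together, a rough back-of-the-envelope model might look like this. Every figure here is invented purely for illustration:

```python
# Back-of-the-envelope cost model: amortize one-time acquisition and
# development costs over a year, add monthly operating costs, and spread
# the total across expected call volume to find a break-even price per
# call. All figures are hypothetical.
acquisition = 5_000.00        # licensing data or content (one time)
development = 20_000.00       # design, database, server, API (one time)
monthly_ops = 1_500.00        # compute, storage, bandwidth, support

amortization_months = 12
expected_calls_per_month = 2_000_000

monthly_cost = (acquisition + development) / amortization_months + monthly_ops
break_even_per_call = monthly_cost / expected_calls_per_month

print(f"monthly cost: ${monthly_cost:,.2f}")
print(f"break-even price per call: ${break_even_per_call:.6f}")
```

However crude, a baseline like this gives you a floor to price above, before layering on the perceived-value and incentive considerations discussed next.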
Let's Discuss Who Will Be Accessing An API, And Our Plans For Different Levels
Now we have a better handle on what goes into acquiring, developing, and operating an API, but who will actually be accessing the API, and who do we need to offer different plans to, tailored for their relationship with us, and their unique needs? We have discussed the opportunities around free, and free trial access plans, but what about other pro bono approaches like not-for-profit, educational, or research plans, to help eliminate costs for consumers, and encourage meaningful access.
Once we have established the entry levels of access, we can begin crafting additional retail, partner, or internal levels of API access. It doesn't need to stop there; you can craft as many plans for API access as needed to meet the needs of every potential group, inside or outside of a company. API providers should work to be as transparent as possible with available plans, and what is available within each plan level, all the way up to partner tiers and potentially reseller or private label tiers of access.
Now That We Have A Better Understanding Of What Goes Into Our APIs, How Do We Set Pricing?
Even though we know what went into an API, we don't always know how API consumers will perceive the value of an API, so we must be ready to adjust pricing based upon this perception. How we limit and incentivize usage needs to reflect this value, and the relationship with each group of consumers. There should be paths that incentivize usage, encouraging the purchase of resources in bulk, by the timeframe (monthly, quarterly), or maybe at specific times that benefit the provider, or the consumer. Remember, pricing is a two-way street that can benefit the provider as well as the consumer, striking the balance that makes API platforms go round.
Let's Remember There Isn't Always Direct Value Generation From APIs
Something that often gets overlooked in API operations is the indirect value they can generate. APIs are a potential marketing vehicle when done right, pushing forward the exposure of a brand, driving web or mobile traffic, or generating valuable data and content through network and application activity. Many of these activities can be measured, just like direct API consumption, but should be treated differently than commercial consumption, potentially acting as marketing, advertising, and word of mouth around the value an API brings to the table.
Greasing The Wheels With Valued Partners
Public APIs have dominated the conversation for some time now, but web APIs bring just as much, or more, value to trusted partners. APIs make it so I can quickly share the valuable resources I possess with those I feel would benefit most. While I try to make all of my APIs publicly available, it is my partners who get preferred access, and if I do it right, they can generate value via my APIs, and benefit from what I am doing. If I do this the right way, I should be able to generate revenue from my APIs, while also sharing that exhaust with my trusted partners, and if they are the right kind of partners, they kick exhaust back my way as well.
Realizing The Value Of APIs Internally
As we've learned from Steve Yegge's rant about APIs at Amazon, and as the myth tells us, Bezos ordered everyone at Amazon to use APIs for the exchange of internal resources. Amazon understood that APIs bring agility and efficiency when done well, and can help internal groups work together, and like public API access, internal consumers can also have plans tailored specifically for their needs. This is where the true benefits of APIs are evident, and unfortunately it is the aspect of APIs that is least discussed out in the open for everyone to learn from.
Units of Value Can Go Both Ways, Depending On Who Is Accessing An API
It is very common for APIs to charge for access, restricting access by some of the common units of value listed above. It is less common for APIs to pay consumers for accessing APIs, incentivizing the publishing of content, posting of images and videos, and other value generating ways of putting APIs to work. What is the value of the first image added to an API for a business location vs. the 10th photo added? What is the value of encouraging API consumers to add their own content to a system, augmenting existing information? There are endless ways to encourage developers to contribute to a platform; the only limit is your own plan for defining the boundaries of this participation.
We Should Be Making Value Transferrable Across API Providers
All of this provides a standardized way for API providers to define the value of API resources, and incentivize usage, while also generating needed revenue. Money is spent accessing some resources, while money is generated for publishing or refining other resources. This two way value generation shouldn't be locked up within each API provider's silo; it should be transferrable between API providers. In 2015, developers are not using just one or two APIs, they are putting many different APIs to work, and while API providers need to cover costs and generate revenue, so do API consumers. It is two sides of the same coin--the API balance.
How Do We Compare Value Across API Platforms?
The first challenge we will face is transferring this value from one silo to another. Even when platforms have comparable API resources available, rarely will it be an apples to apples comparison. While this exercise gives us a better look into how resources can be defined, and have a price applied to their consumption across multiple plan levels, this is limited to a discussion around each individual platform. Before we can compare value across API platforms, we need to standardize the business model for each API, and establish a ranking system for each provider, that will act as a weight when comparing provider to provider, and across many providers within an industry.
This is just my way of exercising my views around the development of API business models. Next I will explore the concept of API rating in 2015 as I see it, and try to find the linkage to the unit price established as part of API management operations in this post. Everything I discussed above is not hypothetical. It is rooted in API infrastructure solutions provided by companies like 3Scale. To craft this story, I had my 3Scale administrative console open for my own APIs, and theoretically walked through some of the known approaches to API monetization.
Once I am done playing with my thoughts around rating APIs and the resources they provide, I think I'll revisit this post about plans, and craft a formal look at Amazon, Twilio, and other well known APIs, helping provide a ready to go API business model blueprint that others can follow. While there are endless ways to experiment and play here, I think most of the API space is just looking for ready to go business models that are proven and make sense, that they can plug into their operations.
After Combining My API Plans, Pricing, And Rating Research I See Hints Of An API Industry Economic Engine
31 Oct 2015
After writing I Have A Bunch Of API Resources, Now I Need A Plan, Or Potentially Several Plans, and How Are We Going To Create The Standard and Poor's And Moody's For The API Economy, I wanted to combine what I had learned while crafting these stories, and look at how these two areas could work together. The API plan and pricing research is derived from existing approaches to API service composition introduced by providers like 3Scale; however, the rating portion is fresh territory for me, with very few precedents to follow.
When I wade through a structured approach for API providers to craft meaningful API plans to serve up their API resources, and for developers to pay for API usage via the apps they build, I begin to see the potential of a structured approach to API plans and pricing. When you start thinking of the implications across providers, and consider the opportunities for developers to manage API consumption across API providers, and exchange the credits they purchased or generated via API usage, a potential blueprint for an economic engine for the API space begins to emerge.
I wanted to explore this concept by crafting a visualization, and ponder how common approaches to API plans and pricing could be complemented by a standardized API industry rating system.
The only thing really original in this diagram is the introduction of an API rating system, and the potential for developers to exchange credits between the API service providers they depend on. The rest reflects standard API management approaches, defined by API providers like 3Scale. If you aren't familiar with modern approaches to API service composition: API providers can have many different API resources, as well as many different plans for subscribing to these API services, which provide a wealth of dimensions for API providers to define, price, and limit how developers put API resources to use in applications.
The T Circle in the above diagram is where the current magic happens, when you put modern API management solutions to work for API operations. This allows you to mix and match access to your API resources, charging different prices, to different developer groups, introduce volume usage levels, and measure as many dimensions of consumption as you desire. Rarely do successful APIs have just one rate of access to them, this approach to API management allows providers to maximize access to resources, while also maximizing potential revenue around subscriptions, and usage consumption.
When you start putting API providers into a standard API plans and pricing framework like this, you start seeing intra-provider opportunities and industry wide benefits, as well as the potential to really make API consumers' lives much easier. If there were standardized ways to understand how API providers were pricing their resources, how they tiered access, and how they adjusted pricing between these tiers, API industry competition would heat up significantly. The problem comes in when you start allowing developers to transfer credits from provider to provider: while it may seem like you are transferring common units of value, in many cases there are big differences between each API provider and what the value of one transaction may be.
Once you introduce an API rating system to the equation, it provides one possible way of rating each provider, and setting a benchmark that can be used to exchange credits between each platform. If each provider operated in a credit format that could be cashed out using an exchange rate, or possibly transferred from one platform to another, you'd start to see some potentially interesting market effects. I think you'd quickly see some negative, as well as positive, effects, but I think some balance could be struck between some of the common API resources being served and consumed across the API space. Newer resources, with less precedent, would be very volatile for a period of time.
The R Circle in the above diagram is meant to reflect the potential of an industry wide API rating system, when you have standardized information on API pricing across API providers. This pricing would vary across the plans each provider offers, but you could still come up with a median price for individual providers, and even entire industries. When you apply the rating system you could provide a potential exchange rate, applied when moving credits bought and earned on each API platform between API providers. Developers could explore which platforms allow them to generate credits by engaging with API resources, exchange those credits to other providers, and then spend them on other services they need, in lieu of cash. This also opens up the possibility of markets for API credits, exchanged across business sectors being impacted by APIs.
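The rating-as-exchange-rate mechanics could be sketched like this, purely as a speculative illustration; the rating scale and the idea of using a simple ratio of ratings are my own assumptions, not an established mechanism:

```python
def exchange_credits(credits, from_rating, to_rating):
    """Convert credits between two API platforms, using each platform's
    rating as the exchange weight. A speculative model of the
    exchange-rate idea discussed above, not an established mechanism."""
    if to_rating <= 0:
        raise ValueError("target platform must have a positive rating")
    return credits * (from_rating / to_rating)

# 1,000 credits earned on a platform rated 80, moved to one rated 50,
# buy proportionally more of the lower-rated platform's resources.
print(exchange_credits(1_000, 80, 50))
```

In practice the exchange rate would also have to account for the median pricing of each provider, and would presumably be volatile for newer resources with little pricing precedent, as noted above.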
I'm just exploring the concepts involved with common approaches to API plans and pricing, brainstorming a potential API rating system, and using my imagination to understand the developer and industry implications of this one possible future. Only one part of this equation exists today, and it would take significant work to bring such a thing into reality, but it is fun to explore, and to consider one possible design for an economic engine that could scale and drive the API industry.
I've been reviewing the business models of many of the top API platforms over the last couple of weeks, and I'm seeing some pretty interesting approaches to API monetization. As I look through each API, I see that some platforms don't have their API monetization strategy together at all, while others follow the well-proven "cloud utility" model handed down from providers like AWS, and then I see some who are continuing to standardize how we pay for, and monetize, APIs--which makes me happy.
One interesting pricing page I reviewed over the holidays was from the file conversion API, ConvertAPI. They have a credit-based API monetization approach, allowing you to buy a certain amount of credits on a monthly basis, or make one-time purchases. Each area offers four tiers, allowing for the purchase of credits at various rates. One thing I found curious, though, is that credits purchased monthly do not roll over, while your one-time purchases do, and I will have to think about the pros and cons of this more before I comment.
ConvertAPI also has a "credits cost table", providing an overview of how credits actually translate into API calls. Many file conversions carry a one-to-one credit rate per API call, while others cost 2, 3, or even 5 credits per call. I like this approach to defining API costs, and when your API developer account includes an API for monitoring credit levels, it starts getting us closer to the API pricing standardization, and the automation, we will need to continue the growth of the API economy.
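The credits-cost-table pattern is easy to sketch in code. The conversion names and credit costs below are invented for illustration, not ConvertAPI's actual figures, but they show how a table like this turns credits into a predictable per-call price.

```python
# Sketch of a credits cost table in the style described above: each
# conversion type consumes a fixed number of credits per API call.
# Conversion names and costs are hypothetical, for illustration only.

CREDIT_COSTS = {
    "docx-to-pdf": 1,  # simple conversions cost one credit per call
    "pdf-to-docx": 3,  # heavier conversions cost more credits
    "pdf-ocr": 5,
}

def charge(balance, conversion, calls=1):
    """Deduct the credit cost of `calls` conversions from a balance."""
    cost = CREDIT_COSTS[conversion] * calls
    if cost > balance:
        raise ValueError("insufficient credits")
    return balance - cost

balance = 100
balance = charge(balance, "docx-to-pdf", calls=10)  # 100 - 10 = 90
balance = charge(balance, "pdf-ocr", calls=4)       # 90 - 20 = 70
print(balance)
```

Pair a table like this with an account-monitoring API and a consumer can project costs, and alert on low balances, entirely in software.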
I'm finding a lot of individual API monetization stories as I go through this latest round of research. I'm thinking I will have to spend a couple of weeks in February or March of 2015 to step back and look at the monetization strategies of the 700+ companies I'm tracking, and hopefully provide some better analysis of where we stand with this vital layer of the API industry. Along with the work I'm doing to encourage companies to create machine-readable API definitions for their APIs, accompanied by machine-readable licensing using API Commons, I want to help encourage the standardization of API pricing--then who knows, maybe someday API pricing can be machine readable, allowing us to make real-time decisions about the APIs we use based upon cost.
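What machine-readable pricing could enable is straightforward to imagine: if each provider published its pricing in a common format, a consumer could compare total cost for their workload in real time. The schema, provider names, and figures below are all hypothetical, just a sketch of the idea.

```python
# Sketch of real-time, cost-based API selection using machine-readable
# pricing definitions. The pricing schema and provider figures are
# hypothetical, invented to illustrate the comparison.

PRICING = [
    {"provider": "convert-a", "per_call": 0.004, "monthly_base": 0.00},
    {"provider": "convert-b", "per_call": 0.002, "monthly_base": 25.00},
]

def monthly_cost(plan, calls):
    """Total monthly cost of a plan for a given call volume."""
    return plan["monthly_base"] + plan["per_call"] * calls

def cheapest(plans, calls):
    """Pick the plan with the lowest total cost for this workload."""
    return min(plans, key=lambda p: monthly_cost(p, calls))

print(cheapest(PRICING, 5000)["provider"])   # light workload: 20.00 vs 35.00
print(cheapest(PRICING, 20000)["provider"])  # heavy workload: 80.00 vs 65.00
```

Note how the cheapest provider flips as volume grows, which is exactly the kind of decision that standardized, machine-readable pricing would let tooling make automatically.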
Salesforce has a pretty cool Code Share area within the DeveloperForce ecosystem, which allows developers to share code snippets with the rest of the community.
It's a simple way for anyone to share their techniques as code samples in a variety of languages, letting the community vet the code, fork larger projects, and collaborate to improve it for the greater good.
Acknowledging that they don't have the internal resources to fully support the process, Salesforce has halted code sample submissions, announcing plans to migrate the Code Share program to Github.
Github is already the largest code sharing platform, providing the social tools developers are used to. Salesforce's public acknowledgement that they don't have the internal resources to operate the program, their recognition of its importance, and the decision to migrate to Github are all very savvy moves by the API pioneer.
Github is one of the most important platforms you can use to support your API. Using Github to host all of your API code samples and SDKs, whether they are generated internally or by your developer community, is no longer a novelty, but an essential part of API operations.
If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one-person show, I miss quite a bit, and I depend on my network to help me know what is going on.