
API Testing News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring to understand the details of each request and response.

Spreadsheet To Github For Sample Data CI

I need data for use in human services API implementations--sample organizations, locations, and services to round off implementations, making it easier to understand what is possible with an API when you are playing with one of my demos.

There are a number of features that require there to be data in these systems, and it is always more convincing when that data has intuitive, recognizable entries, not just test names or Latin filler text. I need a variety of samples, in many different categories, with complete phone, address, and other specific data points. I also need this across many different APIs, and ideally on demand, when I set up a new demo instance of the human services API.

To accomplish this I wanted to keep things as simple as I can so that non-developer stakeholders could get involved, so I set up a Google spreadsheet with a tab for each type of test data I needed–in this case, it was organizations and locations. Then I created a Github repository, with a Github Pages front-end. After making the spreadsheet public, I pull each worksheet using JavaScript, and write to the Github repository as YAML, using the Github API.
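To give a sense of the moving parts, here is a rough sketch of this pattern (not my actual scripts): it assumes the older public JSON feed for a published Google Sheet, placeholder column names, and a placeholder repository, and writes a single YAML file to the _data folder using the Github contents API.

```javascript
const fetch = require('node-fetch');
const yaml = require('js-yaml');

// Placeholders -- swap in your published sheet ID, repository, and token.
const SHEET_FEED = 'https://spreadsheets.google.com/feeds/list/{SHEET_ID}/1/public/values?alt=json';
const REPO = 'example-org/human-services-demo';
const TOKEN = process.env.GITHUB_TOKEN;

async function publishOrganizations() {
  // Pull the worksheet rows from the public spreadsheet feed.
  const feed = await fetch(SHEET_FEED).then(r => r.json());
  const organizations = feed.feed.entry.map(row => ({
    name: row['gsx$name'].$t,
    phone: row['gsx$phone'].$t,
    address: row['gsx$address'].$t
  }));

  // Convert the rows to YAML, and base64 encode for the Github contents API.
  const content = Buffer.from(yaml.dump(organizations)).toString('base64');

  // Write the file into the _data folder of the Github Pages repository.
  // (Updating an existing file would also require sending its current sha.)
  await fetch(`https://api.github.com/repos/${REPO}/contents/_data/organizations.yaml`, {
    method: 'PUT',
    headers: { Authorization: `token ${TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: 'Publish organizations sample data from spreadsheet',
      content: content
    })
  });
}

publishOrganizations();
```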

It is kind of a poor man's way of creating test data, then publishing it to Github for use in a variety of continuous integration workflows. I can maintain a rich folder of test data sets for a variety of use cases in spreadsheets, and even invite other folks to help me create and manage the data stored in those spreadsheets. Then I can publish to a variety of Github repositories as YAML, and integrate it into any workflow, loading test data sets into new APIs, and existing APIs, as part of testing, monitoring, or even just to make an API seem convincing.

To support my work I have a spreadsheet published, and two scripts, one for pulling organizations, and the other for pulling locations--both of which publish YAML to the _data folder in the repository. I'll keep playing with ways of publishing test data like this, for use across my projects. With each addition, I will try and add a story to this research, to help others understand how it all works. I am hoping that I will eventually develop a pretty robust set of tools for working with test data in APIs, as part of a test data continuous publishing and integration cycle.


Using Google Sheet Templates For Defining API Tests

The Runscope team recently published a post on a pretty cool approach to using Google Sheets for running API tests with multiple variable sets, which I think is valuable on a couple of levels. They provide a template Google Sheet for anyone to follow, where you can plug in your variables, as well as your Runscope API Key, which allows you to define the dimensions of the tests you wish to push to Runscope via their own API.

The first thing that grabs me about this approach is how Runscope is allowing their customers to define and expand the dimensions of how they test their API using Runscope in a way that will speak to a wider audience, beyond just the usual API developer audience. Doing this in a spreadsheet allows Runscope customers to customize their API tests for exactly the scenarios they need, without Runscope having to customize and respond to each individual customer's needs--providing a nice balance.

The second thing that interests me about their approach is the usage of a Google Sheet as a template for making API calls, whether you are testing your APIs, or any other scenario an API enables. This type of templating of API calls opens up the API client to a much wider audience, making integration copy and pastable, shareable, collaborative, and something anyone can reverse engineer and learn about the surface area of an API--in this scenario, it just happens to be the surface area of Runscope's API testing API.

Runscope's approach is in alignment with my previous post about sharing data validation examples. A set of assertions could be defined within a spreadsheet, and any stakeholder could use the spreadsheet to execute them and make sure the assertions are met. This would have huge implications for the average business user, helping make sure API contracts are meeting business objectives. I'm considering using this approach to empower cities, counties, and states to test and validate human services API implementations as part of my Open Referral work.
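As a rough sketch of what I have in mind (not Runscope's actual template or API, just a hypothetical layout where column A holds a URL, column B an expected status code, and column C gets the result), a Google Apps Script behind a sheet could be as simple as:

```javascript
// Hypothetical Google Apps Script: read assertions from the active sheet,
// run them against an API, and write the results back for everyone to see.
function runAssertionsFromSheet() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Assertions');
  var rows = sheet.getDataRange().getValues();

  // Skip the header row; each remaining row is one assertion to check.
  for (var i = 1; i < rows.length; i++) {
    var url = rows[i][0];            // column A: the URL to call
    var expectedStatus = rows[i][1]; // column B: the expected status code

    var response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    var passed = response.getResponseCode() === expectedStatus;

    // Write PASS or FAIL back to column C, so any stakeholder can see it.
    sheet.getRange(i + 1, 3).setValue(passed ? 'PASS' : 'FAIL');
  }
}
```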

I told John Sheehan, the CEO of Runscope, that their approach was pretty creative, and he said that "Google sheets scripts are underrated" and that Google Sheets is the "API client for the everyperson". I agree. I'd like to see more spreadsheet templates like this used across the API life cycle when it comes to design, deployment, management, testing, monitoring, and every other area of API operations. I'd also like to see more spreadsheet templates available for making calls to other common APIs, making APIs accessible to a much wider audience, who are familiar with spreadsheets, and more likely to be closer to the actual problems that API solutions are designed to solve.


Sharing API Data Validation Examples

I was studying examples of how I can validate the data returned from a human services API demo, and develop a set of API tests, as well as identify API service providers who can implement those tests, for cities to consider as part of the API deployments that are serving up the locations and organizations where you can find critical services. I'm looking for examples of the common things like API availability and response time, but I'm also looking to get very granular and specialized to organizational, location, and service APIs.

The image I borrowed from Runscope helps visualize what I'm talking about, showing us how we can keep an eye on the basics, but also get really granular when specifying what we expect from our APIs. I have a pretty good imagination when it comes to thinking of scenarios I want to test for, but I'm also looking for any API providers who might already be sharing their tests and being more transparent when it comes to their API monitoring and testing practices. If you know of any API providers that would be willing to share the lists of what types of things they test for, I'd love to hear more.

I'm thinking a regular blog series on different examples of how people are testing APIs from a diverse range of business sectors might help stimulate people's imagination when it comes to API testing concepts. I'm thinking it is another area that we could all learn a lot from each other if there was just a little bit of sharing. I'd love it if the examples were machine readable and reusable in any API testing service, but I would settle for just a blog post, or sharing of a bulleted list of API tests via email, or another channel. ;-)


Adding Behavior-Driven Development Assertions To My API Research

I was going through Chai, a behavior- and test-driven assertion library, and spending some time learning about behavior driven development, or BDD, as it applies to APIs today. This is one of those topics I've read about and listened to talks on from people I look up to, but just haven't had the time to invest too many cycles in learning more. As I do with other interesting, and applicable areas, I'm going to add it as a research area, which will force me to bump it up in priority.

In short, BDD is how you test to make sure an API is doing what is expected of it. It is how the smart API providers are testing their APIs, during development and in production, to make sure they are delivering on their contract. Doing what I do, I started going through the leading approaches to BDD with APIs, and came up with these solutions (with a quick Mocha and Chai example after the list):

  • Chai - A BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.
  • Jasmine - A behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. 
  • Mocha - Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.
  • Nightwatch.js - Nightwatch.js is an easy to use Node.js based End-to-End (E2E) testing solution for browser based apps and websites. 
  • Fluent Assertions - Fluent Assertions is a set of .NET extension methods that allow you to more naturally specify the expected outcome of a TDD or BDD-style test.
  • Vows - Asynchronous behaviour driven development for Node.
  • Unexpected - The extensible BDD assertion toolkit
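To make this concrete, here is what a minimal BDD-style assertion looks like using Mocha and Chai together, run against a hypothetical human services endpoint (the URL and fields are just examples):

```javascript
const { expect } = require('chai');
const fetch = require('node-fetch');

describe('GET /organizations', function () {
  it('returns organizations with the fields the contract promises', async function () {
    const response = await fetch('https://api.example.com/organizations');

    // The API should be up, and responding successfully.
    expect(response.status).to.equal(200);

    const organizations = await response.json();

    // Assertions about the shape of the data being returned.
    expect(organizations).to.be.an('array').that.is.not.empty;
    expect(organizations[0]).to.have.property('name');
    expect(organizations[0]).to.have.property('phone');
  });
});
```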

If you know of any that I'm missing, please let me know. I will establish a research project, add them to it, and get to work monitoring what they are up to, and better track on the finer aspects of BDD. As I was searching on the topic I also came across these references that I think are worth noting, because they are from existing providers I'm already tracking on.

  • Runscope - Discussing BDD using Runscope API monitoring.
  • Postman - Discussing BDD using Postman API client.

I am just getting going with this area, but it is something I'm feeling goes well beyond just testing and touches on many of the business and political aspects of API operations I am most concerned with. I'm looking to provide ways to verify an API does what it is supposed to, as well as making sure an API sizes up to claims made by developers or the provider. I'm also on the hunt for any sort of definition format that can be applied across many different providers--something I could include as part of APIs.json indexes and OpenAPI Specs.

Earlier I had written on the API assertions we make, believe in, and require for our business contracts. This is an area I'm looking to expand on with this API assertion research. I am also looking to include BDD as part of my thoughts on algorithmic transparency, exploring how BDD assertions can be used to validate the algorithms that are guiding more of our personal and business worlds. It's an interesting area that I know many of my friends have been talking about for a while but is now something I want to work to help normalize for the rest of us who might not be immersed in the world of API testing.


To Incentivize API Performance, Load, And Security Testing, Providers Should Reduce The Bandwidth And Compute Costs Associated

I love that AWS is baking monitoring and testing in by default with the new Amazon API Gateway. I am also seeing new services from AWS and Google providing security and testing services for your APIs, and other infrastructure. It just makes sense for cloud platforms to incentivize the security of their platforms, but also to ensure wider success through the performance and load testing of APIs as well.

As I'm reading through recent releases, and posts, I'm thinking about the growth in monitoring, testing, and performance services targeting APIs, and the convergence with a growth in the number of approaches to API virtualization, and what containers are doing to the API space. I feel like Amazon is baking monitoring and testing into API deployment and management because it is in their best interest, but it is also an area where I think providers could go even further when it comes to investment.

What if you could establish a stage of your operations, such as QA, or maybe production testing, and the compute and bandwidth costs associated with operations in these stages were significantly discounted? Kind of like the difference in storage levels between Amazon S3 and Glacier, but designed specifically to encourage monitoring, testing, and performance on API deployments.

Maybe AWS is already doing this and I've missed it. Regardless it seems like an interesting way that any API service provider could encourage customers to deliver better quality APIs, as well as help give a boost to the overall API testing, monitoring, and performance layer of the sector. #JustAThought


The New Mind Control APIs That Salesforce Is Testing On Conference Attendees Is Available To Premier Partners

The Dreamforce conference is happening this week in San Francisco, a flagship event for the Platform as a Service (PaaS) company. Salesforce is one of the original pioneers in API technology, allowing companies to empower their sales force using the latest in technology. In 2015, Salesforce is taking this to the next level, with a handful of attendees, and partners, in attendance at the conference.

Using smart pillow technology, Salesforce will be testing out a new set of subliminal mind control APIs. All attendees of the Dreamforce conference have agreed to be part of the tests, through their acceptance of the event terms of service, but only a small group of 500 individuals will actually be targeted. Exactly which attendees are selected will be a secret, even from the 25 partners who will be involved in the test.

Through carefully placed hotel pillows, targeted attendees will receive subliminal messages, transmitted via smart pillow APIs developed by Salesforce. Messages will be crafted in association with partners, testing out concepts of directing attendees in what they will eat the next day, which sessions they are attending, where they will be going in the exhibit hall, and who they will be networking with. The objective is to better understand how open the conference attendees are to suggestion, in a conference environment.

While some partners of this mind control trial are just doing random tests to see if the technology works, others are looking to implement tasks that are in sync with their sales objectives. Ernst Stavro Blofeld, CEO of Next Generation Staffing Inc, says "the Salesforce test represents the future of industry, and the workforce--this week's test is about seeing what we can accomplish at a conference, but represents what we will be able to achieve in our workforce on a daily basis."

Salesforce reminded us that this is just a simple test, but an important one that reflects the influence the company already has over its constituents. The company enjoys one of the most loyal bases of business users out of all the leading software companies in the world, and this new approach to targeting a loyal base of users is just the beginning of a new generation of API engineered influence.



How Do We Continue Moving Green Button Data And APIs Forward?

I'm preparing for a talk at The Smart Grid Interoperability Panel Second Annual Conference, in Nashville, Tennessee, specifically participating on a panel titled "Using Power Grid Open Data Initiatives". I accepted the request to go speak as part of my wider work on the Green Button initiative out of The White House, DOE, NIST, and the GSA. I was asked to provide some thoughts on how to help move the Green Button efforts forward earlier this winter, and again in the spring, and just haven't had the bandwidth to give it any energy, so I saw this as a great opportunity to make some time as part of this panel.

In May, The White House, 18F, some Presidential Innovation Fellows, and I were asked to move the Green Button ball forward, resulting in a new website, and developer area, that is all hosted on Github. I'm bummed I wasn't able to make time to participate, but now I have been able to go through the new site, and developer area, and gather my thoughts on where we could go next with the effort.

Before we get started, let's start with the basics: what is Green Button? Nick Sinai (@NickSinai), Deputy Chief Technology Officer of the United States, puts it this way:

Green Button is a policy idea. It's the notion that customers – residential, commercial, industrial, and yes, government customers of energy – ought to get access to their own energy data, in a standard digital format. The Administration has articulated this several times, in a number of Administration energy, climate, and smart grid policy documents. The government is a big customer of energy, and we deserve our own energy usage and price data.

Green Button is also a public-private initiative, in response to a 2011 White House call to action. The White House, DOE, NIST, and GSA have been collaborating with the utility and tech industries on this growing effort for a few years now. The White House has been convening industry and celebrating its progress. NIST has been supporting industry on development of the standard, DOE has been working with utilities, and GSA has been an early adopter.

Finally, Green Button is an actual data standard, for formatting and transferring energy usage and price data. This standard can be implemented in both utility and non-utility contexts. Customers can manually download or upload a file formatted in the standard, and IT systems can automatically transfer data between them using the standard.

The White House, 18F, and fellow PIFs, have done a great job organizing everything under a single Github Pages site, at greenbuttondata.org. The site is a big improvement over what was there before, and moves us closer to solving the fragmentation in the Green Button effort, which I believe is the fundamental issue that is holding it back from being fully realized. Green Button is a big vision, and it is going to take some serious effort to bring together all of the industry and 3rd party efforts, and most importantly to get individual consumers on board with taking control over their own energy data.

To help organize my thoughts on how we can move Green Button forward, something I will be discussing on stage in Nashville, I wanted to walk through the site, and take a snapshot of where we are at, then work on a strategy for what is needed to keep the momentum we already have moving forward. Green Button is an extremely critical effort in not just empowering individuals and institutions to take control over their energy data, but also for the overall health of the energy sector, and whether the utility companies can see it or not, open data and APIs will be central to their continued success. So as I do with other areas, let's walk through where we are at currently with Green Button, to help prime the discussion about where we should go.

First Impressions
When you land on the greenbuttondata.org site, it looks like a modern, clean website effort, with a simple tagline and meaningful image to help you understand what Green Button is all about. The first description is "Helping You Find and Use Your Energy Data", and then when you scroll down you see an answer to the question What is Green Button? -- "Green Button is a secure way to get your energy usage information electronically." I like the concise messaging, but it is solely focused on the consumer, and leaves out the commercial energy users, public institutions, utilities & energy service providers, 3rd party software vendors, and energy efficiency organizations listed on the "use" page. I know this all revolves around energy data, but when you land on the site, it should speak to you, no matter who you are.

I think the "use" page does a great job in breaking down who the target audience is, but the first impression doesn't reflect this, and I don't think there is any path for these users to follow, once you do find yourself on greenbuttondata.org. As a data custodian, or 3rd party developer, I think the build or developer page will quickly speak to you, but as a consumer, you are quickly dropped from the focus of the site and will be completely lost in the rich information that is available. To on-board each user properly, we will need to begin to carve out paths for each user, one that start with a meaningful first impression off of the home page, then puts them on a path that leads them twoard action and the relevant resources they will need, depending on their role.

Learn About Green Button
I really like the learn page for Green Button. It is clean, simple, informative, and not overwhelming for me to learn about what Green Button is, from any perspective. My only suggestion for next steps is that we begin massaging some of the rich content available under "library", and link to specific learning opportunities from this page. Each summary should provide users with links to follow, taking them to the detail they will need to fully understand each aspect of Green Button as it pertains to their position. However, with that said, it is critical to keep this page a simple overview of Green Button, and make it an easy doorway to the world of energy data.

Using Green Button
As I said above, the "use" section provides a nice overview of who should be using Green Button, with sections focusing on commercial energy users, public institutions, utilities & energy service providers, 3rd party software vendors, and energy efficiency organizations, as well as residential consumers. What is needed for each of these sections is a clear call to action, taking you to another page that is a "getting started" specifically targeting your role in the Green Button movement. I don't think the lack of this is a deficiency of what is currently there, it is just the next logical step for evolving this page to better onboard users, now that these roles are better defined.

The Green Button Community
This is the first time I've seen the Green Button community reflected on a single page like this, and it is exactly what is needed to help reduce the fragmentation currently present across the community. Presenting the four groups in this way, complete with links to their sites, and relevant contact information to get involved, is very important for bringing together the community. My only critique is that the page could use a little more layout love, formatting, logos, and some polish to keep it looking as good as the rest of the site—nothing major. Eventually it would also be nice to have some sort of stream of activity across all of these efforts, aggregated here. I'm not sure how this would occur, as the groups are all using different ways to keep members informed, but something should be considered, further bringing together the community.

Build With Green Button
The developers section probably needs the most help right now, and I'm not 100% sure of how to make it more coherent. It is a little redundant and circular, and it's called "build" on the home page, and "developers" in the top navigation, something that should be consistent. The RESTful APIs section is represented twice, and I see glimpses of trying to provide separate information for data stewards, and 3rd party developers. Along with the development of specific paths for different target users, this needs to be reworked, and woven into those efforts, but I do like the focus on the open source nature of Green Button. The first time I landed on Green Button, I didn't fully grasp that it was an open source API that anyone can deploy, and that the sandbox version that is in operation is just a demo, something that I think we need to make fully clear to users of all types.

We have a big challenge ahead when it comes to helping data custodians understand what is possible, and hopefully we can help spur 3rd party developers to not just build around existing Green Button code, but also work to develop new versions in other languages, as well as specific cloud offerings on maybe Amazon or Heroku. The sky is the limit when it comes to developing, and building Green Button solutions, both server and client side, and we need to make the separation as clear as possible. It will take some serious architectural vision to help bridge what code is currently available for Green Button, and stimulate commercial energy users, public institutions, utilities & energy service providers, 3rd party software vendors, and energy efficiency organizations in understanding what is possible, and incentivize them to deliver solutions that are quickly deployable across the cloud landscape.

Green Button Library
As with the "learn" and "use" sections of the site, the current library section was a great step forward in bringing together the wealth of current resources available to support Green Button efforts. What is needed now is just the refinement of what is there, making it easier to access, and learn from, while also making sure that resources are appropriately grouped into buckets targeting each of the Green Button user groups. As I'm working through the the library of materials, I can see there are some seriously rich resource available, but there is a lot of disconnect between them because they are designed, developed and deployed to support different goals, by different authors—we need to establish some consistency between them.

Right now the library is very much a list of resources, and it would be nice to make sure it truly is a library that is organized, indexable, and easy to navigate. Having all of these resources in a single location is an excellent start, but how do we start refining them, and making them much more usable by all users? Documents should have a consistent format, videos should have a single YouTube (or other) channel, and much more. All of this would make it much more likely that these resources would be explored, and consumed by a wider audience, beyond just the alpha geek crowd.

Green Button Testing Tools
I'm happy to see the Green Button testing tool here, but ultimately it feels like one tool, that should be in a toolbox. Right now it is its own top level navigation item, and I don't think a single tool should be elevated to this level. I would change this to tools, and make the testing tool the one item in the toolbox right now, and I'm sure there are others we can quickly add to the toolbox as well. As with other areas, we should break down the toolbox by user group, making sure consumers easily find what they need, as well as utility providers, 3rd party developers and the other user groups.

Getting Started With Green Button
I was excited to see the Getting Started button, until I realized it was just an email address. ;-( That isn't getting started, it is sending an email. Getting started should be prominent, and truly provide users with easy paths to well...get started. If you are a data custodian, utility, or energy provider, you should have a simple page that explains how to get started, in a self-service way—no email needed. Separately, there should be contact information for Green Button, something that hopefully includes much more than just an email address, with no face and personality behind it.

Acknowledging Where We Are
That is a quick walkthrough of where things are at with greenbuttondata.org, and in my opinion, things have come a long way from what I was seeing spread across the Green Button landscape this last winter. Most importantly it is aggregating the community, developer, and the wealth of other resources into a library, which is a very critical step to continue moving Green Button forward. In my opinion the v1 of the Green Button tech is in place, it is just lacking all the refinement, storytelling, and relationship building that is necessary to move Green Button into the consciousness of utility companies, 3rd party developers, and the average, everyday energy consumer—so how do we do this?

Green Button Needs A Champion
First, before we get into any of the nuts and bolts of what we can do to keep greenbuttondata.org rolling forward, the project is going to need a champion. I think The White House, DOE, NIST, and GSA are doing an amazing job of making sure things move forward, but greenbuttondata.org needs someone who is super passionate about energy data, APIs, understands the energy industry, and wants to put in the hours necessary to refine the information currently available, build relationships, and generate new content that will bring in new players. This role isn't some cushy job that you will get a regular paycheck for, but for the right person, I think it could be pretty lucrative, if you get creative in piecing together sponsorship and support from industry players and organizations, and work hard on the grant writing front. Essentially you need The API Evangelist, for energy data and APIs.

Who's offering Green Button?
I think the first place to start is visible right on the home page, and looking at who is already putting Green Button data to work. I see 50+ entities who are already putting Green Button to work, so who are these people, and how can we showcase what they are up to? There are some meaningful implementations here, and I know there is some great material here to demonstrate the power, and importance of Green Button, and help spark the imagination of new visitors. Green Button has traction, the problem is not enough people know about it, or have the imagination to understand how it is being used. Let's take the time to showcase this, and create some really great content that will make the site more educational. This is something that can be led by the Green Button evangelist I talk about above, but is also something I think the community should contribute to. I'm going to carve out some time to reach out to some of the providers listed, and see if I can showcase how they are putting Green Button to work, and generate more detail, and content that can be contributed to greenbuttondata.org. Do you want to help?

Showcase Of How Green Button Is Used
Building on the work above, we need a place to showcase how Green Button is being put to use. I'm not sure this should be a top level navigation item, but if we group each entry in the showcase by the type of user, I think we can make it one stop on the path each user takes, as they learn about Green Button. As commercial energy users, public institutions, utilities & energy service providers, 3rd party software vendors, energy efficiency organizations, and individual energy users are learning about Green Button, they should also be exposed to examples of other similar individuals or companies like them that are already putting Green Button to work. This will go a long way in helping people see the potential of Green Button, and begin the journey of putting it to work across the energy industry landscape in new ways.

Blog For Bringing Green Button To Life
The greenbuttondata.org site needs a blog. This is something that will be difficult without a champion to keep alive, but a blog is going to be essential in bringing the site to life, helping share stories about the value Green Button is bringing to companies, organizations, institutions, and most importantly the average consumer. Without a blog, any developer community will not have a soul, or a personality, and it will be difficult to convince anyone that someone is home, and that they should trust and care about Green Button. I think a blog could easily be crowdsourced, allowing passionate folks like myself to post, as well as other organizations, companies, and key stakeholders to post relevant stories, that will give the site a heartbeat. A blog will be central to any of the suggestions I will have to help move things forward, and give greenbuttondata.org a personality that will go a long way in building trust amongst users, and across the industry.

Giving Some Coherence To The Developers Section
The developer section of greenbuttondata.org will be essential to scaling the effort, and right now the page is a little all over the place, and will take some significant effort to simplify, make usable, and bring the wealth of developer resources into focus. This will take some serious work, by someone who is a developer, and architect, and can actually organize everything into something that easily on-boards data custodians, and 3rd party developers with as little friction as possible. You have to walk them through the wealth of tooling that is already available, show them what is possible, and then give them the downloads they need to get things working in their world.

There is also a need for some additional tooling; some of the current solutions are very enterprise oriented, and I think with some encouragement, providers could replicate Green Button tooling in other languages like Node.js, Python, PHP, and other platforms that will encourage rapid adoption by other providers. There also needs to be some way to help people quickly bring Green Button to life using cloud platforms like AWS, Heroku, Google, Azure and other platforms that companies and individuals are already depending on.

The developer section is something that will take some deep thinking, hacking, and architectural magic from the champion, and Green Button evangelist. They will have to look at it through the eyes of each of the data custodians, and 3rd party developers who are already putting Green Button to use, and try to deliver things in a way that will speak to this, as well as potentially other new users. This type of work is not easy, and takes some serious effort, something you can't expect to happen overnight. However, if it is done right this can really help scale the number of Green Button implementations in the wild, and take things to the next level much quicker.

Turning The Library Into A Consistent Resource
I am happy to see all of the rich Green Button resources brought into a single location, but I wouldn't call it quite a library yet. It is a listing of valuable resources that aren't really organized in a way that speaks to the different Green Button users, and are not consistent in form because they come from different sources. Even with this said, there is a wealth of resources available there, and with some work you could build a really nice, interactive library that can help educate users on how to put Green Button to work. Similar to the showcase, once the library is organized, and grouped by target user, I think the library can be a stop on the path that each user takes, showing them exactly the resources they need in the library to help onboard them properly.

Establishing Paths For Green Button Users
As I discussed above, each user needs a path they can take from the "use" page to begin their journey. Right now the "use" page is a dead end, where it should actually be a call to action, providing each visitor the chance to take a path through the site that speaks to them, without forcing them to have to wade through the wealth of resources that are currently there. From the "use" page, each user can be taken to a showcase of other implementations from similar users, then depending on their role, could be walked through other sections of the site, landing in the library, presented with exactly what they need to get going. I'm not exactly sure what the user experience will be on these new paths, but I think once we profile the existing uses of Green Button, and unwind the developer and library resources, a pretty orderly route can be established for each user group. These paths will go a long way to onboard users in a fraction of the time, and maximize the potential reach, and scale of the platform—adding more implementations to the Green Button platform, and scaling the audience with each new user.

Taking Green Button From Site to Community
I will stop here. I think this is all that should be focused on for now, when it comes to moving Green Button forward. It is important to not bite off too much, and make sure we can be successful in moving things forward, and not make things more complex. The goal is to simplify what we have, now that we have everything organized into a single site, and begin the process of bringing greenbuttondata.org to life. With someone at the helm, an active blog, and a more coherent focus on each user group, I think that things will start picking up steam, and with more outreach, and involvement with existing Green Button implementations, and the existing Green Button community, we can move greenbuttondata.org from being just a site, and put it on its way to becoming a community.

It is important that Green Button evolves to become a community. It cannot remain just a government initiative. Green Button has to be a vibrant community that commercial energy users, public institutions, utilities & energy service providers, 3rd party software vendors, energy efficiency organizations, and individual energy users are all part of, otherwise it will always remain just something being pushed from the top down. Green Button has to be also owned by the individual energy users, and institutions, providing essential bottom up momentum to match the energy given from the top by the federal government partners—without this the energy industry will never buy in.

Profiling the existing Green Button implementations, and making the site speak to each of the user groups will be important for taking things to the next step. This process will help communicate to a next generation of implementations what is possible, and through regular showcasing and storytelling we can move Green Button beyond just a policy idea, and initiative from government, or just a technical data standard, and transform it into something that is a default part of the energy industry. In this new world, energy users will be used to having control over their data, and entirely new markets will be established delivering services to energy consumers, within this new space.

All the parts and pieces are there, and much like last round of work on the greenbuttondata.org site brought all these resources together, we need to figure out how to bring the energy industry together and show them the potential of Green Button data and APIs. We need to make sure energy consumers, both individual and institutional, understand the importance of having control over their energy data, and show data custodians that this is the future of doing business in the energy space. Once we can achieve this, Green Button will take on a life of its own, driven not just by the government, or even the utility providers, but by the energy of 3rd party companies who are delivering meaningful solutions for institutional, organizational, and individual energy consumers.


A Mobile Developer Toolkit With The University Of Michigan APIs

I am continuing my research into how universities are using APIs, and while I was going through the developer areas for the universities I track on, I noticed an interesting mobile developer toolkit from the University of Michigan.

When you land on the homepage of the University of Michigan developer portal, to the right you will see some valuable resources that look to help developers think through the bigger picture of designing, developing, deploying, testing, and distributing mobile applications that are built on campus resources.

The University of Michigan mobile developer toolkit is broken down into four separate groups:

  • Design
  • Get Started
  • Distribute
  • Develop & Test

I think the resources they provide represent a very long term vision around delivering API resources to developers, who will be building applications for the institution--something that all universities should look at emulating.

You want developers who are building mobile applications on top of campus API resources to be successful, so providing them with the education, training, and resources they need to deliver is critical.

I also think it is cool that at the bottom of the mobile developer toolkit, they provide two other links:

They want their app developers to socialize with other campus application developers, and be aware of opportunities to compete in hackathons and other competitions--on and off campus.

Developing mobile applications is the number one incentive for universities to deploy APIs, and jumpstart their API efforts like at BYU, UW and UC Berkeley, and it just makes sense to provide a mobile developer toolkit for developers. Education around APIs and mobile application development is critical to the success of any API initiative, but even more so, when it occurs across a large institution, by a variety of internal and external groups.

I'll add the mobile developer toolkit to my list of common building blocks for not just university APIs, but all API initiatives.


Contributing To The Testing & Monitoring Lifecycle

When it comes to testing and monitoring an API, you begin to really see how machine readable API definitions can be the truth in the contract between API provider and consumer. API definitions are being used by API testing and monitoring services like SmartBear, providing a central set of rules that can ensure your APIs deliver as promised.

Just like generating up to date documentation, you can make sure all of your APIs operate as expected, ensuring the entire surface area of your API is tested, and operating as intended. Test driven development (TDD) is becoming common practice for API development, and API definitions will play an increasing role in this side of API operations.

An API definition provides a central truth that can be used by API providers to monitor API operations, but also gives the same set of rules to external API monitoring services, as well as individual API consumers. Monitoring, and understanding an API's uptime, from multiple external sources is becoming a part of how the API economy is stabilizing itself, and API definitions provide a portable template that can be used across all API monitoring services.
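As a simple illustration of what I mean, here is a sketch (assuming a Swagger 2.0 definition at a placeholder URL) of how a machine readable definition could drive a basic availability and response time check:

```javascript
const fetch = require('node-fetch');

// Walk the paths in a Swagger 2.0 definition, and probe each simple GET
// operation, reporting status code and response time for each endpoint.
async function monitorFromDefinition(definitionUrl) {
  const definition = await fetch(definitionUrl).then(r => r.json());
  const base = 'https://' + definition.host + (definition.basePath || '');

  for (const [path, operations] of Object.entries(definition.paths)) {
    if (!operations.get) continue;          // only probing GET operations here
    if (path.indexOf('{') !== -1) continue; // skip paths that need parameters filled in

    const started = Date.now();
    const response = await fetch(base + path);
    console.log(path + ' -> ' + response.status + ' in ' + (Date.now() - started) + 'ms');
  }
}

monitorFromDefinition('https://example.com/swagger.json');
```

The same definition could just as easily be handed to an external monitoring service, which is the portability I am talking about.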

Testing and monitoring of vital resources that applications depend on is becoming the norm, with new service providers emerging to assist in this area, and large technology companies like Google making testing and monitoring default in all platform operations. Without a set of instructions that describe the API surface area, it will be cumbersome, and costly, to generate the automated testing and monitoring jobs necessary to produce a stable API economy.


If I Could Design My Perfect API Design Editor

I've been thinking a lot about API design lately, the services and tooling coming from Apiary, RAML and Swagger, and wanted to explore some thoughts around what I would consider to be killer features for the killer API design editor. Some of these thoughts are derived from the features I've seen in Apiary, the RAML editor, and most recently the Swagger Editor, but I'd like to riff on them a little bit and play with what could be the next generation of features.

While exploring my dream API design editor, I'd like to walk through each group of features, organized around my intentions and objectives for my API designs.

Getting Started
When kicking off the API design process, I want to be able to jumpstart the API design lifecycle from multiple sources. There will be many times that I want to start from a clean slate, but many times I will be working from existing patterns.

  • Blank Canvas - I want to start with a blank canvas, no patterns to follow today, I’m painting my masterpiece. 
  • Import Existing File - I have a loose API design file laying around, and I want to be able to open, import and get to work with it, in any of the formats. 
  • Fork From Gallery - I want to fork one of my existing API designs, that I have stored in my API design gallery (I will outline below). 
  • Import From API Commons - Select an existing API design pattern from API Commons and import into editor, and API design gallery.

My goals in getting started with API design will be centered around re-using the best patterns across the API space, as well as my own individual or company API design gallery. We are already mimicking much of this behavior, we just don't have a central API design editor for managing these flows.

Editing My API Design
Now we get to the meat of the post, the editor. I have several things in mind when I’m actually editing a single API definition, functions I want, actions I want to take around my API design. These are just a handful of the editor specific features I’d love to see in my perfect API design editor.

  • Multi-Lingual - I want my editor to work with API definitions in API Blueprint, RAML and Swagger. I prefer to edit my API designs in JSON, but I know many people I work with will prefer markdown or YAML, and my editor needs to support fluid editing between all popular formats. 
  • Internationalization - How will I deal with making my API resources available to developers around the world? Beyond API definition languages, how do I actually make my interfaces accessible, and understood by consumers around the globe?
  • Dictionary - I will outline my thoughts around a central dictionary below, but I want my editor to pull from a common dictionary, providing a standardized language that I work from, as well as my company when designing interfaces, data models, etc. 
  • Annotation - I want to be able to annotate various aspects of my API designs and have associated notes, conversation around these elements of my design. 
  • Highlight - Built in highlighting would be good to support annotations, but also to reference various layers of my API designs for highlighting during conversations with others, or even to allow the reverse engineering of my designs, complete with the notes and layers of the onion for others to follow. 
  • Source View - A view of my API design that allows me to see the underlying markdown, YAML, or JSON and directly edit the underlying API definition language. 
  • GUI View - A visual view of my API design, allowing for adding, editing and removing elements in an easy GUI interface, no source view necessary for designing APIs. 
  • Interactive View - A rendered visual view of my API, allowing me to play with either my live API or generated mock API, through interactive documentation within my editors. 
  • Save To Gallery - When I’m done working with my API designs, all roads lead to saving it to my gallery, once saved to my working space I can decide to take other actions. 
  • Suggestions - I want my editor to suggest the best patterns available to me from private and public sources. I shouldn't ever design my APIs in the dark.

The API design editor should work like most IDEs we see today, but keep it simple, and reflect extensibility like Github's Atom editor. My editor should give me full control over my API designs, and enable me to take action in many pre-defined or custom ways one could imagine.

Taking Action
My API designs represent the truth of my API, at any point within its lifecycle, from initial conception to deprecation. In my perfect editor I should be able to take meaningful actions around my API designs. For the purposes of this story I’m going to group actions into some meaningful buckets, that reflect the expanding areas of the API lifecycle. You will notice the four areas below, reflect the primary areas I track on via API Evangelist.

Design Actions
Early on in my API lifecycle, while I'm crafting new designs, I will need to take action around my designs. Design actions will help me iterate on designs before I reach expensive deployment and management phases.

  • Mock Interface - With each of my API designs I will need to generate mock interfaces that I can use to play with what my API will deliver. I will also need to share this URL with other stakeholders, so that they can play with, and provide feedback on my API interface. 
  • Copy / Paste - API designs will evolve and branch out into other areas. I need to be able to copy / paste or fork my API designs, and my editor, and API gallery should keep track of these iterations so I don’t have to. The API space essentially copy and pastes common patterns, we just don’t have a formal way of doing it currently. 
  • Email Share - I want to easily share my API designs via email with other key stakeholders that will be part of the API lifecycle. Ideally I wouldn’t be emailing around the designs themselves, just pointers to the designs and tools for interacting within the lifecycle. 
  • Social Share - Sometimes the API design process will occur over common social networks, and in some cases be very public. I want to be able to easily share all my API designs via my most used social networks like Github, Twitter and LinkedIn. 
  • Collaboration - API design should not be done in isolation, and should be a collaborative process with all key stakeholders. I would like to even have Etherpad style real-time interactions around the design process with other users.

API design actions are the first stop in the expanding API design lifecycle: being able to easily generate mocks, share my interfaces, and collaborate with other stakeholders. Allowing me to quickly, seamlessly take action throughout the early design cycles will save me money, time and resources early on—something that only becomes more costly and restrictive later on in the lifecycle.

Deployment Actions
The next station in the API design lifecycle is being able to deploy APIs from my designs. Each of the existing API definition formats provides API deployment solutions, and with the evolution in cloud computing, we are seeing even more complete, modular ways to take action around your API designs.

  • Server - With each of my API designs, I should be able to generate server side code in the languages that I use most. I should be able to register specific frameworks, languages, and other defining aspects of my API server code, then generate the code and make it available for download, or publish using Github and FTP. 
  • Container - Cloud computing has matured, producing a new way of deploying very modular architectural resources, giving rise to a new cloud movement, being called containers. Container virtualization will do for APIs what APIs have done for companies in the last 14 years. Containers provide a very defined, self-contained way of deploying APIs from API design blueprints, ushering in a new way of deploying API resources in coming years.

I need help to deploy my APIs, and with container solutions like Docker, I should have predefined packages I can configure with my API designs, and deploy using popular container solutions from Google, Amazon, or other cloud providers.

Management Actions
After I deploy an API I will need to use my API definitions as a guide for an increasing number of areas of my management process, not just the technical, but the business and politics of my API operations.

  • Documentation - The generation of interactive API documentation is what kicked off the popularity of API design, and the importance of API definitions. Swagger provided the Swagger UI, an interactive, hands-on way of learning about what an API offered, but this wasn't the only motivation—providing up to date documentation as well added just the incentive API providers needed to generate machine readable documentation.
  • Code - Second to API documentation, providing code samples, libraries, and SDKs is one of the best ways you can eliminate friction when onboarding new API users. API definitions provide a machine readable set of instructions, for generating the code that is necessary throughout the API management portion of the API lifecycle. 
  • Embeddable - JavaScript provides a very meaningful way to demonstrate the value of APIs, and embeddable JavaScript should always be part of the API lifecycle. Machine readable API definitions can easily generate visualizations that can be used in documentation, and other aspects of the API lifecycle.

I predict, with the increased adoption of machine readable API formats like API Blueprint, RAML and Swagger, we will see more layers of the API management process be expanded on, further automating how we manage APIs.

Discovery Actions
Having your APIs found, and being able to find the right API design for integration, are two sides of an essential coin in the API lifecycle. We are just now beginning to get a handle on what is needed when it comes to API discovery.

  • APIs.json - I should be able to organize API designs into groupings, and publish an APIs.json file for these groups. API designs should be able to be organized in multiple groups, organized by domain and sub-domain. 
  • API Commons - Thanks to Oracle, the copyright of API definitions will be part of the API lifecycle. I want the ability to manage and publish all of my designs to the API Commons, or any other commons for sharing of API designs.

The discovery of APIs has long been a problem, but is just now reaching the critical point where we have to start developing solutions for not just finding APIs, but also understanding what they offer, and the details of the interface, so we can make sense of not just the technical, but business and political decisions around API driven resources.
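For reference, an APIs.json index for one of these groupings might look roughly like this (the URLs are illustrative, and I am only showing a couple of the possible properties):

```json
{
  "name": "Human Services API Designs",
  "description": "A grouping of API designs published for discovery.",
  "url": "http://example.com/apis.json",
  "specificationVersion": "0.14",
  "apis": [
    {
      "name": "Organizations API",
      "description": "Organizations providing human services.",
      "humanURL": "http://example.com/organizations",
      "baseURL": "http://api.example.com/organizations",
      "properties": [
        { "type": "Swagger", "url": "http://example.com/organizations/swagger.json" },
        { "type": "API Commons Manifest", "url": "http://example.com/api-commons-manifest.json" }
      ]
    }
  ]
}
```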

Integration Actions
Flipping from providing APIs to consuming APIs, I envision a world where I can take actions around my API designs that focus on the availability, and integration of valuable API driven resources. As an API provider, I need as much assistance as I can get to look at my APIs from an external perspective, and being able to take action in this area will grow increasingly important.

  • Testing - Using my machine readable API definitions, I should be able to publish testing definitions, that allow the execution of common API testing patterns. I’d love to see providers like SmartBear, Runscope, APITools, and API Metrics offer services around the import of API design generated definitions. 
  • Monitoring - Just like API testing, I want to be able to generate definitions that allow for the monitoring of API endpoints. My API monitoring tooling should allow me to generate standard monitoring definitions, and import and run them in my API monitoring solution.

I'd say that API integration is the fastest growing area of the API space, second only to API design itself. Understanding how an API operates from an integrator's perspective is valuable, not just to the integrator, but also the provider. I need to be thinking about integration issues early on in the API design lifecycle to minimize costly changes downstream.

Custom Actions
I've laid out some of the essential actions I'd like to be able to take around my API definitions, throughout the API lifecycle. I expect the most extensibility from my API design editor in the future, and I should be able to extend it in any way that I need.

  • Links - I need a dead simple way to take an API design, and publish to a single URL, from within my editor. This approach provides the minimum amount of extensibility I will need in the API design lifecycle. 
  • JavaScript - I will need to run JavaScript that I write against all of my API designs, generating specific results that I will need throughout the API design process. My editor should allow me to write, store and execute JavaScript against all my API designs. 
  • Marketplace - There should be a marketplace to find other custom actions I can take against my API designs. I want a way to publish my API actions to the marketplace, as well as browse other API actions, and add them to my own library.

We've reached a point where using API definitions like API Blueprint, RAML, and Swagger is commonplace, and being able to innovate around what actions we take throughout the API design lifecycle will be critical to the space moving forward, and how companies take action around their own APIs.

API Design Gallery
In my editor, I need a central location to store and manage all of my API designs. I’m calling this a gallery, because I do not want it only to be a closed off repository of designs, I want to encourage collaboration, and even public sharing of common API design patterns. I see several key API editor features I will need in my API design gallery.

  • Search - I need to be able to search for API designs, based upon their content, as well as other meta data I assign to my designs. I should be able to easily expose my search criteria, and assist potential API consumers in finding my API designs as well. 
  • Import - I should be able to import any API design from a local file, or provide a public URL and generate a local copy of any API design. Many of my API designs will be generated from an import of an existing definition. 
  • Versioning - I want the API editor of the future to track all versioning of my API designs. Much like managing the code around my API, I need the interface definitions to be versioned, and the standard feature set for managing this process. 
  • Groups - I will be working on many API designs, with various stakeholders in the success of any API design. I need a set of features in my API design editor to help me manage multiple groups, and their access to my API designs. 
  • Domains - Much like the Internet itself, I need to organize my APIs by domain. I have numerous domains under which I manage different groups of API resources. Generally I publish all of my API portals to Github under a specific domain, or sub-domain—I would like this level of control in my API design editor. 
  • Github - Github plays a central role in my API design lifecycle. I need my API design editor to help me manage everything, via public and private Github repository. Using the Github API, my API design editor should be able to store all relevant data on Github—seamlessly. 
  • Diff - What are the differences between my API designs? I would like to understand the difference between each of my API resource types, and versions of each API designs. It might be nice if I could see the difference between my API designs, and other public APIs I might consider as competitors. 
  • Public - The majority of my API designs will be public, but this won’t be the case with every designer. API designers should have the control over whether or not their API designs are public or private.
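
To illustrate the Github piece, here is a rough sketch of writing an API design to a repository using the Github contents API. The owner, repository, path, and token are placeholders, and updating an existing file would also require passing the current file's sha, which I have left out to keep the sketch short.

```javascript
// A rough sketch of syncing an API design to Github via the contents API.
// GITHUB_TOKEN, owner, repo and path are placeholders for this example.
const token = process.env.GITHUB_TOKEN;
const owner = 'my-username';
const repo = 'api-designs';
const path = '_data/my-api.json';

async function saveDesign(definition, message) {
  const url = `https://api.github.com/repos/${owner}/${repo}/contents/${path}`;
  const body = {
    message: message,
    // the contents API expects the file body as base64
    content: Buffer.from(JSON.stringify(definition, null, 2)).toString('base64')
    // note: updating an existing file also requires the current file's sha
  };
  const response = await fetch(url, {
    method: 'PUT',
    headers: { Authorization: `token ${token}` },
    body: JSON.stringify(body)
  });
  return response.json();
}
```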

My API design gallery will be the central place I work from. Once I reach a critical mass of designs, I will have many of the patterns I need to design, deploy and manage my APIs. It will be important for me to be able to import the best patterns from public repositories like API Commons. To evolve as an API designer, I need to easily create, store, and evolve my own API designs, while also being influenced by the best patterns available in the public domain.

Embeddable Gallery
Simple visualizations can be an effective tool in helping demonstrate the value an API delivers. I want to be able to manage open, API driven visualizations, using platforms like D3.js. I need an arsenal of embeddable, API driven visualizations to help tell the story of the API resources I provide, and a gallery to manage them (a small sketch follows the list below).

  • Search - I want to be able to search the metadata around the embeddable tools I develop. I will have a wealth of graphs, charts, and more functional JavaScript widgets that I generate. 
  • Browse - Give me a way to group and organize my embeddable tools. I want to be able to organize, group and share my embeddable tools, not just for my own needs, but potentially with the public as well.
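
As a small sketch of what I mean by an API driven visualization, here is a bar chart rendered with D3.js from a hypothetical usage endpoint. The endpoint, the response shape, and the #chart element are all assumptions, and it presumes D3 v5 or later is already loaded on the page.

```javascript
// A small sketch of an API driven, embeddable visualization using D3.js.
// Assumes D3 v5+ is loaded on the page and that the hypothetical endpoint
// returns data shaped like [{ month: 'Jan', calls: 1200 }, ...].
d3.json('https://api.example.com/usage/monthly').then(function (data) {
  var width = 400, height = 150;
  var x = d3.scaleBand().domain(data.map(d => d.month)).range([0, width]).padding(0.1);
  var y = d3.scaleLinear().domain([0, d3.max(data, d => d.calls)]).range([height, 0]);

  var svg = d3.select('#chart').append('svg')
    .attr('width', width)
    .attr('height', height);

  svg.selectAll('rect').data(data).enter().append('rect')
    .attr('x', d => x(d.month))
    .attr('y', d => y(d.calls))
    .attr('width', x.bandwidth())
    .attr('height', d => height - y(d.calls));
});
```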

A picture is worth a thousand words, and being able to easily generate interactive visualizations, driven by API resources, that can be embedded anywhere is critical to my storytelling process. I will use embeddable tools to tell the story of my API, but my API consumers will also use these visualizations as part of their efforts, and hopefully develop their own as well.

API Dictionary
I need a common dictionary to work from when designing my APIs. I need to use consistent interface names, field names, parameters, headers, media types, and other definitions that will assist me in providing the best API experience possible.

  • Search - My dictionary should be available to me in any area of my API design editor, in true IDE style, while I’m designing. Search of my dictionary will be essential to my API design work, but also to the groups that I work with. 
  • Schema.org - There are plenty of existing patterns to follow when defining my APIs, and my editor should always assist me in adopting and reusing any existing pattern I determine is relevant to my API design lifecycle, like Schema.org.
  • Dublin Core - How do I define the metadata surrounding my API designs? My editor should assist me in using common metadata patterns available, like Dublin Core.
  • Media Types - The results of my API should conform to existing document representations, when possible. Being able to explore the existing media types available while designing my API would help me emulate existing patterns, rather than reinventing the wheel each time I design an API. 
  • Custom - My dictionary should be able to be driven by existing definitions, or allow me to import and define my own vocabulary based upon my operations. I want to extend my dictionary to meet the unique demands of my API design lifecycle (a small sketch of a dictionary entry follows this list).
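
To show the kind of dictionary entry I have in mind, here is a hypothetical YAML sketch that borrows from Schema.org and Dublin Core. The structure is entirely my own invention, just to illustrate a shared vocabulary an editor could search against.

```yaml
# A hypothetical dictionary entry, illustrating a shared vocabulary that
# borrows from Schema.org and Dublin Core.
field:
  name: telephone
  schema_org: https://schema.org/telephone
  description: A phone number for an organization or location
metadata:
  creator: http://purl.org/dc/elements/1.1/creator
media_types:
  - application/json
  - text/csv
```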

I want my API design process to be driven by a common dictionary that fits my unique needs, but borrows from the best patterns already available in the public space. We already emulate many of the common patterns we come across; we just don’t have a common dictionary to work from, one that my editor could use to enforce healthy design.

An Editor For Just My Own API Design Process
This story has evolved over the last two weeks, as I spent time in San Francisco discussing API design, then spent a great deal of time driving, and thinking about the API design lifecycle. This is all part of my research into the expanding world of API design, which will result in a white paper soon, and my intent is to shed some light on what might be some of the future building blocks of the API design space. My thoughts are very much based on my own selfish API design needs, but also on what I’m seeing in the growing API design space.

An Editor For A Collective API Design Process
With this story, I intend to help keep the API design process a collaborative, and when relevant a public, affair. I want to ensure we work from existing patterns that are defined in the space, and as we iterate and evolve APIs, we collectively share our best patterns. You should not just be proud of your API designs and willing to share them publicly; you should demonstrate the due diligence that went into your design, attribute the patterns that contributed to it, and share back your own interpretation, encouraging re-use and sharing further downstream.

What Features Would Be Part of Your Perfect API Design Editor?
This is my vision around the future of API design, and what I’d like to have in my editor—what is yours? What do you need as part of your API design process? Are API definitions part of the “truth” in your API lifecycle? I’d love to hear what tools and services you think should be made available, to assist us in designing our APIs.

Disclosure: I'm still editing and linking up this post. Stay tuned for updates.


What Are The Incentives For Creating Machine Readable API Definitions?

After #Gluecon in Colorado the other week, I have API design on the brain. A portion of the #APIStrat un-workshops was dedicated to API design related discussion, and API Design is also the most trafficked portion of API Evangelist this year, according to my Google Analytics.

At #Gluecon, 3Scale and API Evangelist announced our new API discovery project APIs.json, and the associated tooling, the API search engine APIs.io. For APIs.json, APIs.io, and API Commons to work, we are counting on API providers and API consumers creating machine readable API definitions.

With this in mind, I wanted to do some exploration--what would be possible incentives for creating machine readable API definitions?

  • JSON API Definition
  • Interactive Documentation
  • Server Side Code Deployment
  • Client Side Code Generation
  • Design, Mocking, and Collaboration
  • Markdown Based API Definition
  • YAML Based API Definition
  • Reusability, Interoperability and Copyright
  • Testing & Monitoring
  • Discovery
  • Search

The importance of having an API definition of available resources is increasing. It was hard to realize the value of defining APIs with the heavy, top-down defined WSDL, and even its web counterpart WADL, but with these new approaches, other incentives are emerging, incentives that live throughout the API lifecycle.

The first tangible shift in this area was when Swagger released the Swagger UI, providing interactive documentation that was generated from a Swagger API definition. Apiary quickly moved the incentives to an earlier stage in the API design lifecycle with design, mocking and collaboration opportunities.

As the API design world continues to explode, I’m seeing a number of other incentives emerge for API providers to generate machine readable API definitions. I am looking to find any incentives that I’m missing, as well as identify opportunities to encourage API designers to generate machine readable API definitions in whatever format they desire.


Beta Testing Linkrot.js On API Evangelist

I started beta testing a new JavaScript library, combined with an API, that I’m calling linkrot.js. My goal is to address link rot across my blogs. There are two main reasons links go bad on my site: either I moved the page or resource, or an external website or resource has gone away.

To help address this problem, I wrote a simple JavaScript file that lives in the footer of my blog, and when the page loads, it spiders all the links on the page, combining them into a single list and then makes a call to the linkrot.js API.

All new links will get a URL shortener applied, as well as a screenshot taken of the page. Every night a script will run to check the HTTP status of each link used in my site—verifying the page exists, and is a valid link.

Every time linkrot.js loads, it will spider the links available in the page and sync with the linkrot.js API, and the API returns the corresponding shortened URL. If a link shows a 404 status, the link will no longer point to the page; instead it will pop up the last screenshot of the page, identifying that the page no longer exists.
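
For anyone curious what the footer script roughly looks like, here is a simplified sketch. The linkrot API endpoint and the shape of its response are my own placeholders, since the actual service is still a private beta.

```javascript
// A simplified sketch of the linkrot.js footer script: gather every link on
// the page, send the list to a placeholder linkrot API, then swap in shortened
// URLs or screenshot popups based on the response.
document.addEventListener('DOMContentLoaded', function () {
  var links = Array.prototype.slice.call(document.querySelectorAll('a[href]'));
  var urls = links.map(function (a) { return a.href; });

  fetch('https://linkrot.example.com/api/check', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ page: window.location.href, urls: urls })
  })
    .then(function (response) { return response.json(); })
    .then(function (results) {
      links.forEach(function (a) {
        var result = results[a.href]; // assumed response keyed by URL
        if (!result) return;
        if (result.status === 404) {
          // dead link: show the last screenshot instead of following it
          a.addEventListener('click', function (event) {
            event.preventDefault();
            window.open(result.screenshot);
          });
        } else if (result.short_url) {
          a.href = result.short_url; // healthy link: use the shortened URL
        }
      });
    });
});
```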

Eventually I will be developing a dashboard, allowing me to manage the link rot across my websites, make suggestions on links I can fix, provide a visual screen capture of those I cannot, and add a new analytics layer by implementing shortened URLs.

Linkrot.js is just an internal tool I’m developing in private beta. Once I get it up and running, Audrey will beta test it, and we’ll see where it goes from there. Who knows!


The 15 Sessions At API Strategy And Practice in Amsterdam

I am getting psyched going through the schedule lineup of 15 sessions at API Strategy & Practice in Amsterdam. In planning the session outline, Steve, Vanessa and I listened to what the #APIStrat audience asked for after New York and San Francisco, which was more of the deep technical content, as well as a balance of the business and politics of APIs.

I think our lineup delivers on this; we've broken it up into three tracks:

API Provider

  • Design and Development
  • Service Descriptions
  • Hypermedia APIs
  • API Marketing & Developer Communities
  • Hardware and Internet of Things (IOT)

API By Industry

  • Media, Music and Audio APIs
  • Civic APIs
  • Enterprise APIs
  • APIs in Financial Services
  • Community APIs

API Consumer

  • Discovery and Trust
  • Security and Testing
  • High Scalability
  • API Based App Development
  • Business Models

This lineup of sessions represents what we are seeing across the API space, from API design coming front and center, to hypermedia moving beyond an academic discussion and actually getting traction. That is what API Strategy & Practice is about, providing a venue to have discussions about the areas that are impacting the industry.

The best thing is, this is just the session lineup, we still have workshops, keynotes, fireside chats and panels.


Common Building Blocks Of API Design

Over the last couple months I’ve been taking a deeper look at the API design space, trying to understand more about the tools and services that are emerging, and the different approaches being employed throughout the API design lifecycle.

I started first with trying to understand the evolving motivations behind why people are using API definitions, then I spoke with the creators of API Blueprint, RAML and Swagger, the three leading API design providers out there, to understand more about the vision behind their various approaches to API design.

After talking to each of the providers, I wanted to understand more about the tooling that was emerging around each of their approaches.

While each of these providers has their own approach to defining APIs and the API design lifecycle, after looking through what they offer, you start seeing patterns emerge. After reviewing what API Blueprint, RAML and Swagger bring to the table, I squinted my eyes and tried to understand what some of the common building blocks are for the API design space, resulting in what I consider 22 separate building blocks:

Definition - A central, machine readable definition of an API interface, authentication and potentially its data model, in XML, JSON or Markdown. (Examples: API Blueprint, RAML, Swagger)
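
For reference, here is roughly what such a definition looks like in Swagger's JSON format, stripped down to a single resource; the API name and path are just illustrative.

```json
{
  "swagger": "2.0",
  "info": { "title": "Example Organizations API", "version": "1.0.0" },
  "host": "api.example.com",
  "basePath": "/v1",
  "paths": {
    "/organizations": {
      "get": {
        "summary": "List organizations",
        "responses": { "200": { "description": "A list of organizations" } }
      }
    }
  }
}
```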

Parser - An API definition parser, potentially available in multiple languages, opening up the programmatic generation of other API building blocks.

Design Tools - User interface tools, allowing for the building of central API definitions, either in a code view or GUI view.

Versioning - Systems allowing for the versioning of API definitions, keeping track of all changes, and allowing for rolling back changes to previous versions.

Forkable - The ability to fork an existing API definition, and create a new branch, that can live separately from the API definition it originates from.

Sharing - Allowing for the sharing of API definitions and other API design building blocks with other users, employing common social sharing features of preferred networks.

Collaboration - Features that allow for collaboration between users, with discussion around all API design building blocks.

Mock Interfaces - Ability to deploy mock API interfaces generated from API definitions, allowing developers to play with API versions as they are designed.

Interactive Documentation / Console - Automatically generated API documentation which allows developers to make calls against APIs as they are learning about the interface, turning API education into a hands on experience.

Notebook / Directory - A local, or cloud based storage repository, providing a single place to create and manage API definitions, and execute other API design building blocks.

Testing - Manual, automated and scheduled testing of API interfaces using their API definition as a blueprint.

Debugging - Manual, automated and scheduled debugging of API interfaces, providing detailed look inside of API calls, allowing developers to understand problems with API integrations.

Traffic Inspection - Logging and analysis of API traffic from testing, debugging and all other API usage during the API design process.

Validator - Tools for validating API calls, enabling developers to determine which types of calls will be valid, using the central API definition as guide.

Server Code Generators - Tooling that generates server side implementations using API definitions in a variety of languages.

Client Side Code Generator - Tooling that generates client side API code libraries in a variety of languages.

Github Sync - The ability to store and sync API definitions with Github, providing a central public or private repository for the definition of an API resource.

Command Line - Command line tooling for programmatic execution of all API design building blocks.

Websockets - Providing tools for API communication via websockets using the central API definition as a guide.

Translator - Tools for translating between various API definitions, allowing the transformation from RAML to Swagger, and between each of the available API definitions.

Annotation - Tools and interfaces for allowing the annotation of API definitions, providing a communication platform centered around the API design process.

Syntax Highlight - Tools and interfaces for the highlighting of API definitions, providing IDE-like functionality for API designers.

As I try to do with API management and integration, I’m just trying to understand what these providers offer, and how it is helping API developers be more successful in designing quality APIs. This isn’t meant to be a perfect list, and if there are any building blocks you feel should be present, let me know.

You can follow my research in API design over at the Github repository I’m publishing everything to when it is ready. Like other areas of my research my goal is to produce a final white paper, while keeping the Github research repository a living store of API design information for the community.


API Design Tooling From API Blueprint

As part of my research into the world of API design, I’m looking at the different approaches of API Blueprint, RAML and Swagger to providing API definitions, services and tools that assist developers in better designing APIs. I have already looked at the evolving motivations behind API definitions, and some insight into the vision behind Swagger, API Blueprint and RAML; next up is taking a look at the tooling that has emerged around each approach.

I began with a look at the tooling around Swagger, and next up is to look at API Blueprint, from Apiary.io, which is centered around a markdown based API definition language:

  • API Blueprint - Apiary.io’s API definition language, designed to allow anyone, not just developers, to design APIs (a minimal example follows below)
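
To give a sense of the format, here is a stripped down API Blueprint sketch, with an illustrative resource and action:

```
FORMAT: 1A

# Example API
A minimal sketch, just to show the markdown based format.

## Message [/message]

### Retrieve a Message [GET]

+ Response 200 (text/plain)

        Hello World!
```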

To put API Blueprint to use, Apiary provides a parser:

  • Snowcrash - The API Blueprint parser built on top of the Sundown Markdown parser

When it comes to tooling around API Blueprint, it is all about the Apiary.io platform:

  • Apiary.io - Collaborative design, instant API mock, generated documentation, integrated code samples, debugging and automated testing

Apiary.io delivers the features we are seeing emerge around Swagger and RAML, and more:

  • Server Mock - Providing a mock API interface allowing you to experiment with an API interface before you write any code
  • Interactive Documentation - Auto generated API documentation that allows developers to authenticate and make live calls to an API while learning the documentation
  • GitHub Sync - Apiary uses Github to store each API Blueprint, allowing it to be stored publicly or privately on Github, with automatic updating of API docs with each Github commit
  • Command Line Tools - A separate command-line interface, available as a Ruby gem, allowing for the automation and integration of API Blueprints into your regular workflow
  • Traffic Inspector - Providing a proxy to run API calls through, allowing the breakdown of each call to APIs, helping developers understand and debug APIs much more easily
  • Discussion - Communication tools within API blueprint documentation allowing team and public developer conversations

I did find two other open tools for API Blueprint:

  • HTTP Call Validator - Gavel is a tool for deciding which HTTP API call is valid and which is not
  • API Blueprint Testing Tool - Dredd is a command-line tool for testing API documentation written in API Blueprint format against its backend implementation. 

I’d say that Apiary, with API Blueprint, was the first company dedicated specifically to API design. Swagger was born as a set of tools out of Wordnik, and was not designed to be a product, with RAML coming later. While Swagger was pushing API design into new areas beyond just interactive docs, Apiary and API Blueprint was the first API design only startup to emerge.

During 2006-2012, API management was standardized by pioneers like Mashery, 3Scale and Apigee; now API design is being defined by providers like Swagger, API Blueprint, and RAML. It shows that the API space is continuing to expand and mature, increasing the need to refine not just API design, but the overall API lifecycle.


What Are The Common Building Blocks of API Integration?

I started API Evangelist in 2010 to help business leaders better understand not just the technical, but specifically the business of APIs, helping them be successful in their own API efforts. As part of these efforts I track on what I consider the building blocks of API management. In 2014 I'm also researching what the building blocks are in other areas of the API world, including API design, deployment, discovery and integration.

After taking a quick glance at the fast growing world of API integration tools and services, I've found the following building blocks emerging:

Pain Point Monitoring

  • Documentation Monitoring - Keeping track of changes to an API's documentation, alerting you to potential changes in valuable developer API documentation for single or many APIs
  • Pricing Monitoring - Notifications when an API platform's pricing changes, which might trigger switching services or at least staying in tune with the landscape of what is being offered
  • Terms of Use Monitoring - Updates when a company changes the terms of service for a particular platform, providing historical versions for comparison

Authentication

  • OAuth Integration - Provides OAuth integration for developers, to one or many API providers, potentially offering OAuth listing for API providers
  • Provider / Key Management - Management of multiple API platform providers, providing a secure interface for managing keys and tokens for common API services

Integration Touch Points

  • API Debugging - Identifying API errors and assistance in debugging API integration touch points
  • API Explorer - Allowing the interactive exploring of API providers registered with the platform, making calls, interacting with, and capturing API responses
  • API Feature Testing - The configuring and testing of specific features and configurations, providing precise testing tools for any potential use
  • API Load Testing - Testing, with the added benefit of making sure an API will actually perform under a heavy load
  • API Monitoring - Actively monitoring registered API endpoints, allowing real-time oversight of important API integration endpoints that applications depend on (see the sketch after this list)

API Request Actions

  • API Request Automation - Introducing other types of automation for individual, captured API requests like looping, conditional responses, etc.
  • API Request Capture - Providing the ability to capture an individual API request
  • API Request Commenting - Adding notes and comments to individual API requests, allowing the cataloging of history, behavior and communication around API request actions
  • API Request Editor - Allowing the editing of individual API requests
  • API Request Notifications - Providing a messaging and notification framework around individual API request events
  • API Request Playback - Recording and playing back captured API requests so that you can inspect the results
  • API Request Retry - Enabling the ability to retry a captured API request and play it back in the current time frame
  • API Request Scheduling - Allowing the scheduling of any captured API request, by the minute, hour, day, etc.
  • API Request Sharing - Opening up the ability to share API requests and their results with other users via email, or other means

Other Areas

  • Analytics - Visual analytics providing insight into individual and bulk API requests and application usage
  • Code Libraries - Development and support of code libraries that work with single or multiple API providers
  • Command Line - Providing a command line (CL) interface for developers to interact with APIs
  • Dashboard - Web based dashboard with analytics, reports and tools that give developers quick access to the most valuable integration information
  • Gateway - Providing a software gateway for testing, monitoring and production API integration scenarios
  • Geolocation - Combining of location when testing and proxying APIs from potentially multiple locations
  • Import and Export - Allowing for importing and exporting of configurations of captured and saved API requests, allowing for data portability in testing, monitoring and integration
  • Publish - Providing tools for publishing monitoring and alert results to a public site via widget or FTP
  • LocalHost - Opening up of a local web server to a public address, allowing for webhooks and other interactions
  • Rating - Establishment of a ranking system for APIs, based upon availability, speed, etc.
  • Real-Time - Adding real-time elements to analytics, messaging and other aspects of API integration
  • Reports - Common reports on how APIs are being used across multiple applications and user profiles
  • Teams - Providing a collaborative, team environment where multiple users can test, monitor and debug APIs and application dependencies
  • Workflow - Allowing for the daisy chaining and connecting of individual API request actions into a series of workflows and jobs
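
As a rough sketch of the monitoring and retry building blocks above, here is the kind of check this tooling automates, with the endpoint, retry count, and backoff all placeholders:

```javascript
// A bare-bones sketch of API monitoring with retry: hit an endpoint, check
// the status, and retry a few times with a simple backoff before flagging
// a failure. The endpoint and thresholds are placeholders.
async function checkEndpoint(url, retries = 3) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const started = Date.now();
      const response = await fetch(url);
      const elapsed = Date.now() - started;
      if (response.ok) {
        return { url, status: response.status, elapsed, healthy: true };
      }
    } catch (error) {
      // network failure, fall through and retry
    }
    await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
  }
  return { url, healthy: false };
}

checkEndpoint('https://api.example.com/status').then(console.log);
```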

What else are you seeing? Which tools and services do you depend on when you are integrating one or many APIs into your applications? What tools and services would you like to see?

I'm looking at the world of API design right now, but once I'm done with that research, I will be diving into API integration again, trying to better understand the key players, tools, services and the building blocks they use to get things done.


IRS Modernized e-File (MeF): A Blueprint For Public & Private Sector Partnerships In A 21st Century Digital Economy (DRAFT)

Download as PDF

The Internal Revenue Service is the revenue arm of the United States federal government, responsible for collecting taxes and for the interpretation and enforcement of the Internal Revenue Code.

The first income tax was assessed in 1862 to raise funds for the American Civil War, and over the years the agency has grown and evolved into a massive federal entity that collects over $2.4 trillion each year from approximately 234 million tax returns.

While the IRS has faced many challenges in its 150 years of operations, the last 40 years have demanded some of the agency's biggest transformations at the hands of technology, more than any time since its creation.

In the 1970s, the IRS began wrestling with the challenge of modernizing itself using the latest computer technology. This eventually led to a pilot program in 1986 of a new Electronic Filing System (EFS), which aimed in part to gauge the acceptance of such a concept by tax preparers and taxpayers.

By the 1980s, tax collection had become very complex, time-consuming, costly, and riddled with errors, due to what had become a dual process of managing paper forms while also converting them into a digital form so that they could be processed by machines. The IRS desperately needed to establish a solid approach that would enable the electronic submission of tax forms.

It was a rocky start for the EFS, and Eileen McCrady, systems development branch and later marketing branch chief, remembers, “Tax preparers were not buying any of it--most people figured it was a plot to capture additional information for audits." But by 1990, IRS e-file operated nationwide, and 4.2 million returns were filed electronically. This proved that EFS offered a legitimate approach to evolving beyond a tax collection process dominated by paper forms and manual filings.

Even Federal Agencies Can't Do It Alone

Even with the success of early e-file technology, the program did not gain the momentum it needed until it had the support of two major tax preparation partners, H&R Block and Jackson Hewitt. These partnerships helped change the tone of EFS efforts, making it more acceptable and appealing to tax professionals. It was clear that e-File needed to focus on empowering a trusted network of partners to submit tax forms electronically, sharing the load of tax preparation and filing with 3rd party providers. And this included not just the filing technology, but a network of evangelists spreading the word that e-File was a trustworthy and viable way to work with the IRS.

Bringing e-File Into The Internet Age

By 2000, Congress had passed IRS RRA 98, which contained a provision setting a goal of an 80% e-file rate for all federal tax and information returns. This, in effect, forced the IRS to upgrade the e-File system for the Internet age, otherwise it would not be able to meet this mandate. A working group was formed, comprised of tax professionals and software vendors, that would work with the IRS to design, develop and implement the Modernized e-File (MeF) system, which employed the latest Internet technologies, including a new approach to web services that used XML and allowed 3rd party providers to submit tax forms in a real-time, transactional approach (this differed from the batch submissions required in a previous version of the EFS).

Moving Beyond Paper One Form At A Time

Evolving beyond 100 years of paper processes doesn't happen overnight. Even with the deployment of the latest Internet technologies, you have to incrementally bridge the legacy paper processes to a new online, digital world. After the deployment of MeF, the IRS worked year by year to add the myriad of IRS forms to the e-File web service, allowing software companies, tax preparers, and corporations to digitally submit forms into IRS systems over the Internet. Form by form, the IRS was being transformed from a physical document organization into a distributed network of partners that could submit digital forms through a secure, online web service.

Technological Building Blocks

The IRS MeF solution represents a new approach to using modern technology by the federal government in the 21st century Internet age. In the last 15 years, a new breed of Internet enabled software standards have emerged that enable the government to partner with the private sector, as well as other government agencies, in ways that were unimaginable just a decade ago.

Web Services

Websites and applications are meant for humans. Web services, also known as APIs, are meant for other computers and applications. Web services have allowed the IRS to open up the submission of forms and data into central IRS systems, while also transmitting data back to trusted partners regarding errors and the status of form submissions. Web services allow the IRS to stick with what it does best, the receiving, filing and auditing of tax filings, while trusted partners can use web services to deliver e-Filing services to customers via custom developed software applications.

Web services are designed to utilize existing Internet infrastructure used for everyday web operations as a channel for delivering trusted services to consumers around the country, via the web.

An XML Driven Communication Flow

XML is a way to describe each element of IRS forms and their supporting data. XML makes paper forms machine readable so that the IRS and 3rd party systems can communicate using a common language, allowing the IRS to share a common set of logic around each form, then use what are known as schemas to validate the XML submitted by trusted partners against a set of established business rules that enforce the IRS code. XML gives the IRS the ability to communicate with 3rd party systems using digital forms, applying business rules to reject or accept the submitted forms, which can then be stored in an official IRS repository in a way that can be viewed and audited by IRS employees (using stylesheets which make the XML easily readable by humans).

Identity and Access Management (IAM)

When you expose web services publicly over the Internet, secure authentication is essential. The IRS MeF system is a model for securing the electronic transmission of data between the government and 3rd party systems. The IRS has employed a design built around the Internet Filing Application (IFA) and Application to Application (A2A), which are features of the Web Services-Interoperability (WS-I) security standards. Security of the MeF system is overseen by the IRS MITS Cyber Security organization, which ensures all IRS systems receive, process, and store tax return data in a secure manner. MeF security involves an OMB mandated Certification and Accreditation (C&A) Process, requiring a formal review and testing of security safeguards to determine whether the system is adequately secured.

Business Building Blocks

Properly extending e-File web services to partners isn't just a matter of technology. There are numerous building blocks required that are more business than technical, ensuring a healthy ecosystem of web service partners. With a sensible strategy, web services need to be translated from tech to business, allowing partners to properly translate IRS MeF into e-filing products that will deliver the required services to consumers.

Four Separate e-Filing Options

MeF provided the IRS with a way to share the burden of filing taxes with a wide variety of trusted partners, software developers and corporations who have their own software systems. However, MeF is just one tool in a suite of e-File tools. These include Free File software that any individual can use to submit their own taxes, as well as free fillable digital forms that individuals can use if they do not wish to employ a software solution.

Even with these simple options, the greatest opportunity for individuals and companies is to use commercial tax software that walks one through what can be a complex process, or to depend on a paid tax preparer who employs their own commercial version of tax software. The programmatic web service version of e-file is just one option, but it is the heart of an entire toolkit of software that anyone can put to use.

Delivering Beyond Technology

The latest evolution of the e-file platform has technology at heart, but it delivers much more than just the transmission of digital forms from 3rd party providers, in ways that also make good business sense:

  • Faster Filing Acknowledgements - Transmissions are processed upon receipt and acknowledgements are returned in near real-time, unlike the once or twice daily system processing cycles in earlier versions
  • Integrated Payment Option - Tax-payers can e-file a balance due return and, at the same time, authorize an electronic funds withdrawal from their bank accounts, with payments being subject to limitations of the Federal Tax Deposit rules
  • Brand Trust - Allowing MeF to evolve beyond just the IRS brand, allowing new trusted commercial brands to step up and deliver value to consumers, like TurboTax and TaxAct.

Without improved filing results for providers and customers, easier payment options and an overall set of expectations and trust, MeF would not reach the levels of e-Filing rates mandated by Congress. Technology might be the underpinning of e-File, but improved service delivery is the thing that will seal the deal with both providers and consumers.

Multiple Options for Provider Involvement

Much like the multiple options available for tax filers, the IRS has established tiers of involvement for partners in the e-File ecosystem. Depending on their model and capabilities, e-File providers can step up and participate in multiple ways:

  • Electronic Return Originators (EROs) - EROs prepare returns for clients, or collect returns from taxpayers who have prepared their own, then begin the electronic transmission of returns to the IRS
  • Intermediate Service Providers - These providers process tax return data that originates from an ERO or an individual taxpayer, and forward it to a transmitter
  • Transmitters - Transmitters are authorized to send tax return data directly to the IRS, from custom software that connects directly with IRS computers
  • Online Providers - Online providers are a type of transmitter that sends returns filed from home by taxpayers using tax preparation software to file common forms
  • Software Developers - Write the e-file software programs that follow IRS specifications for e-file
  • Reporting Agents - An accounting service, franchiser, bank or other person that is authorized to e-file Form 940/941 for a taxpayer

The IRS has identified the multiple ways it needed help from an existing, evolving base of companies and organizations. The IRS has been able to design its partner framework to best serve its mission, while also delivering the best value to consumers, in a way that also recognizes the incentives needed to solicit participation from the private sector and ensure efforts are commercially viable.

Software Approval Process

The IRS requires all tax preparation software used for preparing electronic returns to pass the requirements for Modernized e-File Assurance Testing (ATS). As part of the process, software vendors notify the IRS via the e-help Desk that they plan to commence testing, then provide a list of all forms that they plan to include in their tax preparation software; the IRS does not require that vendors support all forms. MeF integrators are allowed to develop their tax preparation software based on the needs of their clients, while using pre-defined test scenarios to create test returns that are formatted in the specified XML format. Software integrators then transmit the XML formatted test tax returns to the IRS, where an e-help Desk assister checks data entry fields on the submitted return. When the IRS determines the software correctly performs all required functions, the software is approved for electronic filing. Only then are software vendors allowed to publicly market their tax preparation software as approved for electronic filing, whether for usage by corporations, tax professionals or individual users.

State Participation

Another significant part of the MeF partnership equation is providing seamless interaction with the electronic filing of both federal and state income tax returns at the same time. MeF provides the ability for partners to submit both federal and state tax returns in the same "taxpayer envelope", allowing the IRS to function as an "electronic post office" for participating state revenue services -- certainly better meeting the demands of the taxpaying citizen. The IRS model provides an important aspect of a public / private sector partnership with the inclusion of state participation. Without state level participation, any federal platform will be limited in adoption and severely fragmented in integration.

Resources

To nurture an ecosystem of partners, it takes a wealth of resources. Providing technical how-to guides, templates and other resources for MeF providers is essential to the success of the platform. Without proper support, MeF developers and companies are unable to keep up with the complexities and changes of the system. The IRS has provided the resources needed for each step of the e-Filing process, from on-boarding, to understanding the addition of the latest forms, and changes to the tax code.

Market Research Data

Transparency of the MeF platform goes beyond individual platform operations, and the IRS acknowledges this important aspect of building an ecosystem of web service partners. The IRS provides valuable e-File market research data to partners by making available e-file demographic data and related research and surveys. This important data provides valuable insight for MeF partners to use in their own decision making process, but also provides the necessary information partners need to educate their own consumers as well as the general public about the value the e-File process delivers. Market research is not just something the IRS needs for its own purposes; this research needs to be disseminated and shared downstream providing the right amount of transparency that will ensure healthy ecosystem operations.

Political Building Blocks

Beyond the technology and business of the MeF web services platform, there are plenty of political activities that make sure everything operates as intended. The politics of web service operations can be as simple as communicating properly with partners and providing transparency, all the way up to security, proper governance of web services, and enforcement of federal laws.

Status

The submission of over 230 million tax filings annually requires a significant amount of architecture and connectivity. The IRS provides real-time status of the MeF platform for the public and partners, as they work to support their own clients. Real-time status updates of system availability keeps partners and providers in tune with the availability of the overall system, allowing them to adjust availability with the reality of supporting such a large operation. Status of availability is an essential aspect of MeF operations and overall partner ecosystem harmony.

Updates

An extension of MeF platform status is the ability to keep MeF integrators up-to-date on everything to do with ongoing operations. This includes providing alerts when the platform needs to tune in platform partners to specific changes in tax law, resource additions, or other relevant operational news. The IRS also provides updates via an e-newsletter, providing a more asynchronous way for the IRS MeF platform to keep partners informed about ongoing operations.

Updates over the optimal partner channels are an essential addition to real-time status and other resources that are available to platform partners.

Roadmap

In addition to resources, status and regular updates of the overall MeF system, the IRS provides insight into where the platform is going next, keeping providers apprised of what is next for the e-File program. Establishing and maintaining the trust of MeF partners in the private sector is constant work, and requires a certain amount of transparency, allowing partners to anticipate what is next and make adjustments on their end of operations. Without insight into what is happening in the near and long term future, trust with partners will erode and overall belief in the MeF system will be disrupted, unraveling over 30 years of hard work.

Governance

The Modernized e-File (MeF) programs go through several stages of review and testing before they are used to process live returns. When new requirements and functionality are added to the system, testing is performed by IRS's software developers and by IRS's independent testing organization. These important activities ensure that the electronic return data can be received and accurately processed by MeF systems. Every time an IRS tax form is changed and affects the XML schema, the entire development and testing processes are repeated to ensure quality and proper governance.

Security

Secure transmissions by 3rd parties with the MeF platform are handled by the Internet Filing Application (IFA) and Application to Application (A2A), which are part of the IRS Modernized System Infrastructure, providing access to trusted partners through the Registered User Portal (RUP). Transmitters using IFA are required to use their designated e-Services user name and password in order to log into the RUP. Each transmitter also establishes an Electronic Transmitter Identification Number (ETIN) prior to transmitting returns. Once the transmitter successfully logs into the RUP, a Secure Socket Layer (SSL) Handshake Protocol allows the RUP and transmitter to authenticate each other, and negotiate an encryption algorithm, including cryptographic keys, before any return data is transmitted. The transmitter and the RUP negotiate a secret encryption key for encrypted communication between the transmitter and the MeF system. As part of this exchange, MeF will only accommodate one type of user credential for authentication and validation of A2A transmitters: a username and an X.509 digital security certificate. Users must have a valid X.509 digital security certificate obtained from an IRS authorized Certificate Authority (CA), such as VeriSign or IdenTrust, and then have their certificates stored in the IRS directory using an Automated Enrollment process.

The entire platform is accredited by the Executive Level Business Owner, who is responsible for the operation of the MeF system, with guidance provided by the National Institute of Standards and Technology (NIST). The IRS MITS Cyber Security organization and the business system owner are jointly responsible, and actively involved, in completing the IRS C&A Process for MeF, ensuring complete security of all transmissions with MeF over the public Internet.

A Blueprint For Public & Private Sector Partnerships In A 21st Century Digital Economy

The IRS MeF platform provides a technological blueprint that other federal agencies can look to when exposing valuable data and resources to other agencies as well as the private sector. Web services, XML, and proper authentication can open up access and interactions between trusted partners and the public in ways that were never possible prior to the Internet age.

While this web services approach is unique within the federal government, it is a common way to conduct business operations in the private sector, something widely known as Service Oriented Architecture (SOA), an approach that is central to a healthy enterprise architecture. A service oriented approach allows organizations to decouple resources and data, and open up very wide or granular levels of access to trusted partners. The SOA approach makes it possible to submit forms, data, and other digital assets to government, using XML as a way to communicate and validate information in a way that supports proper business rules, wider governance, and federal law.

SOA provides three essential ingredients for public and private sector partnership:

  • Technology - Secure usage of modern approaches to using compute, storage and Internet networking technology in a distributed manner
  • Business - Adherence to government lines of business, while also acknowledging the business needs and interest of 3rd party private sector partners
  • Politics - A flexible understanding and execution of activities involved in establishing a distributed ecosystem of partners, and maintaining an overall healthy balance of operation

The IRS MeF platform employs this balance at a scale that is currently unmatched in the federal government. MeF provides a working blueprint that can be applied across the federal government, in areas ranging from the veterans claims process to the financial regulatory process.

The United States federal government faces numerous budgetary challenges and must find new ways to share the load with other federal and state agencies as well as the private sector. A SOA approach like MeF allows the federal government to better interact with existing and future contractors, in a way that provides better governance, while also allowing for partnerships with the private sector that go beyond simply contracting. The IRS MeF platform encourages federal investment in a self-service platform that enables trusted and proven private sector partners to access IRS resources in predefined ways, all of which support the IRS mission, but provide enough incentive that 3rd party companies will invest their own money and time into building software solutions that can be fairly sold to US citizens.

When an agency builds an SOA platform, it is planting the seeds for a new type of public / private partnership whereby government and companies can work together to deliver software solutions that meet a federal agency's mission and the market needs of companies. This also delivers value and critical services to US citizens, all the while reducing the size of government operations, increasing efficiencies, and saving the government and taxpayers money.

The IRS MeF platform represents 27 years of the laying of a digital foundation, building the trust of companies and individual citizens, and properly adjusting the agency's strategy to work with private sector partners. It has done so by employing the best of breed enterprise practices from the private sector. MeF is a blueprint that cannot be ignored and deserves more study, modeling, and evangelism across the federal government. This could greatly help other agencies understand how they too can employ an SOA strategy, one that will help them better serve their constituents.

You Can View, Edit, Contribute Feedback To This Research On Github


API Testing and Monitoring Finding A Home In Your Company's Existing QA Process

I've been doing API Evangelist for three years now, a world where selling APIs to existing companies outside of Silicon Valley, and often to venture capital firms, is a serious challenge. While APIs have been around for a while in many different forms, this new, more open and collaborative approach to APIs seems very foreign, new and scary to some companies and investors, resulting in them often being very resistant to it.

As part of my storytelling process, I'm always looking for ways to dovetail API tools and services into existing business needs and operations, making them much more palatable to companies across many business sectors. One part of the API space I'm just getting a handle on is the area of API integration, which includes testing, monitoring, debugging, scheduling, authentication and other key challenges developers face when building applications that depend on APIs.

I was having a great conversation with Roger Guess of TheRightAPI the other day, which I try to do regularly. We are always brainstorming ideas on where the space is going and the best way to tell stories around API integration that will resonate with existing companies. Roger was talking about the success they are finding dovetailing their testing, monitoring and other web API integration services with a company's existing QA process, something that I can see resonating with many companies.

Hopefully your company already has a fully developed QA cycle for your development team(s), including, but not limited to, automated, unit and regression testing, which is somewhere API tests, monitoring, scheduling and other emerging API integration building blocks will fit in nicely. This new breed of API integration tools doesn't have to be some entirely new approach to development; chances are you are already using APIs in your development, and API testing and monitoring can just be added to your existing QA toolbox.

I will spend more time looking for stories that help relate some of these new approaches to your existing QA processes, hopefully finding new ways you can put tools and services like TheRightAPI to use, helping you better manage the API integration aspect of your web and mobile application development.


API Providers Guide - API Design


Prepared By Kin Lane

June 2014





Table of Contents

  • Overview of The API Design Space
  • A New Generation Of API Design
  • Developing The Language We Need To Communicate
  • Leading API Definition Formats
  • Building Blocks of API Design
  • Companies Who Provide API Design Services
  • API Design Tools
  • API Design Editors
  • API Definitions Providing A Central Truth For The API Lifecycle
  • Contributing To The Deployment Lifecycle
  • Contributing To The Management Lifecycle
  • Contributing To The Testing & Monitoring Lifecycle
  • Contributing To The Discovery Lifecycle
  • An Evolutionary Period For API Design






Freemium API Tools Can Drive Experimentation And Innovation

I’m a firm believer in the power of the freemium model when it comes to APIs. Nothing is as it seems when you are deploying, managing or consuming APIs. You have to have room to innovate and iterate, without signing contracts or paying too much, before you find exactly the right integration or deployment that works.

This freemium approach to APIs is one of the biggest reasons I’ve been supporting 3Scale since the early days of API Evangelist. 3Scale was the original API service provider to offer a truly freemium tier for anyone wanting to deploy an API, and remains passionate about this to this day.

During the API Strategy & Practice conference in NYC last week, I had the pleasure of meeting the SmartBear team, who share a similar perspective of the space, which has resulted in them launching a new suite of free tools that will help you develop, test and monitor APIs while building API driven software.

SmartBear has published four new free tools for testing and development:

SoapUI is a free and open source cross-platform Functional Testing solution. With an easy-to-use graphical interface, SoapUI allows you to easily and rapidly create and execute automated functional, regression, compliance, and load tests.

LoadUI is a free, open source, Web Services load testing solution. With a visual, drag-and-drop interface, it allows you to create, configure and redistribute your load tests interactively and in real-time. LoadUI supports all the standard protocols and technologies.

DéjàClick is an easy-to-use and powerful addition to your web browser for web macro recording that turns multi-step web interactions into one-click super-bookmarks. Markup and annotate web pages, run web performance tests, and share recordings. Leverage those scripts with AlertSite to measure, diagnose, notify and report on web performance and user experience from an end-user perspective.

Deliver fast, feel-good customer experiences from your online store. The AlertSite for Magento extension enables webstore owners to quickly see at a glance the current availability of their ecommerce site and how it is performing in the real world.

The goal of the new freeware initiative is to put robust tools for quality software development into the hands of developers and testers immediately, while allowing them to upgrade to the feature-rich paid versions when it makes practical sense. I sat down with Lorinda Brandon, Director of Strategy at SmartBear Software, while at #apistrat, where she said:

The burgeoning API industry is a reflection of a new phase of innovation and collaboration in the software industry as a whole. Businesses and organizations are now sharing their data and functionality via APIs like never before, often for free. In fact, businesses are even inviting the developer community to come together in hackathons to build apps and mashups using their APIs. While we applaud the energy and pace, we also want to keep the focus on building high quality products. In 2013, SmartBear wants to empower this collaborative global community of developers to create high-quality, low-cost (or free) offerings of their own by making even more of our tools available for free through our Freeware Initiative.

I agree 100%. Freemium can be a critical marketing vehicle in the API industry, one that will bring attention to your tools and services, while also providing room for your customers to put tools to use in a meaningful way, that will truly add value to their world--without overcommitting. Once a tool proves itself in someone’s world, they can choose to upgrade and evolve to more premium offerings.

All of these tools from SmartBear will be added to my tools section, which is meant to provide a toolbox of free and downloadable software for API owners and developers to take advantage of. SmartBear says this is just the beginning, and they will be adding more tools in the future. As they become available, I will publish and keep track of them via the API Evangelist tools section.


The API Evangelist Toolbox

I've spent a lot of time lately looking for new tools that will help you plan, develop, deploy and manage APIs. My goal is to keep refining the API Evangelist tools section to provide a complete API tool directory you can filter by language or other tags.

I've added a number of open source tools to my database lately. But I know there are many more out there. So I put out on the Twitterz that I was looking for anything that was missing.

Resulting in the following tools being suggested:

Carte - Carte is a simple Jekyll based documentation website for APIs. It is designed as a boilerplate to build your own documentation and is heavily inspired from Swagger and I/O docs. Fork it, add specifications for your APIs calls and customize the theme. Go ahead, see if we care.
Charles Proxy - Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Fiddler - Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
foauth.org: OAuth for one - OAuth is a great idea for interaction between big sites with lots of users. But, as one of those users, it’s a pretty terrible way to get at your own data. That’s where foauth.org comes in, giving you access to these services in three easy steps.
Hurl - Hurl makes HTTP requests. Enter a URL, set some headers, view the response, then share it with others. Perfect for demoing and debugging APIs.
httpbin: HTTP Request & Response Service - Testing an HTTP Library can become difficult sometimes. PostBin.org is fantastic for testing POST requests, but not much else. This exists to cover all kinds of HTTP scenarios. Additional endpoints are being considered. All endpoint responses are JSON-encoded.
InspectB.in - InspectBin is based on the idea of RequestBin (requestb.in): point your HTTP client or webhook at your InspectBin URL, and it will collect the HTTP requests and show them in a nice and friendly way, live!
I/O Docs - I/O Docs is a live interactive documentation system for RESTful web APIs. By defining APIs at the resource, method and parameter levels in a JSON schema, I/O Docs will generate a JavaScript client interface. API calls can be executed from this interface, which are then proxied through the I/O Docs server with payload data cleanly formatted (pretty-printed if JSON or XML).
localtunnel - The easiest way to share localhost web servers to the rest of the world.
Postman - REST Client - Postman helps you be more efficient while working with APIs. Postman is a scratch-your-own-itch project. The need for it arose while one of the developers was creating an API for his project. After looking around for a number of tools, nothing felt just right. The primary features added initially were a history of sent requests and collections, and a number of other features have been added since then.
RequestBin - RequestBin lets you create a URL that will collect requests made to it, then let you inspect them in a human-friendly way. Use RequestBin to see what your HTTP client is sending or to look at webhook requests.
Runscope - OAuth2 Token Generator - Tools for developers consuming APIs in their mobile and web apps.
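To make one item on the list a bit more concrete, here is a quick sketch of exercising httpbin from Python with the requests package (my choice of language and library here, not something the list prescribes): send a GET with a query parameter and a custom header, then inspect what the service echoes back.

```python
# Quick sketch: poke httpbin's /get endpoint and inspect the echoed request.
import requests

response = requests.get(
    "https://httpbin.org/get",
    params={"page": 1},
    headers={"X-Demo-Header": "api-evangelist"},
    timeout=10,
)
response.raise_for_status()

echo = response.json()
print(echo["args"])                      # {'page': '1'}
print(echo["headers"]["X-Demo-Header"])  # api-evangelist
```

The same pattern works for httpbin's POST, status code, and delay endpoints, which is what makes it handy when testing an HTTP client or monitoring setup.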

All of these tools have been added to the API Evangelist toolbox. As I continue to work with and define them, I will add more metadata to help you find the tool you're looking for.

Thanks John Sheehan (@johnsheehan),  Phil Leggetter (@leggetter) and Darrel Miller (@darrel_miller). 


The Secret to Amazon's Success: Internal APIs

Last year there was a post from a Google employee about Google+ that was accidentally shared publicly. The internal rant provides some insight into how Google approached APIs for their new Google+ platform, as well as insight into how Amazon adopted an internal service oriented architecture (SOA).

The insight about how Google approached the API for Google+ is interesting, but what is far more interesting is the insight the Google engineer who posted the rant, Steve Yegge, provides about his time working at Amazon, before he was an engineer with Google.

During his six years at Amazon he witnessed the transformation of the company from a bookseller into the almost $1B Infrastructure as a Service (IaaS) API and cloud computing leader it is today. As Yegge recalls, one day Jeff Bezos issued a mandate, sometime back around 2002 (give or take a year):

  • All teams will henceforth expose their data and functionality through service interfaces.
  • Teams must communicate with each other through these interfaces.
  • There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  • It doesn’t matter what technology they use.
  • All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

The mandate closed with:

Anyone who doesn’t do this will be fired.  Thank you; have a nice day!

Everyone got to work, and over the next couple of years Amazon transformed itself internally into a service-oriented architecture (SOA), learning a tremendous amount along the way.

Think about what Bezos was asking! Every team within Amazon had to interact using web services. If you were human resources and you needed some numbers from marketing, you had to get them using an API. He was asking every team to decouple, define what resources they had, and make them available through an API. Every team within your company essentially becomes a partner of the others.
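To picture the shift, here is a minimal, hypothetical sketch in Python of what "getting the numbers from marketing" looks like once the mandate is in place; the host, endpoint, and field names are made up for illustration, not anything Amazon actually ran.

```python
# Hypothetical sketch: consume another team's data through its service
# interface instead of reading its data store directly.
import requests

def get_campaign_signups(campaign_id: str) -> int:
    """Ask the (hypothetical) marketing service for signup numbers."""
    response = requests.get(
        f"https://marketing.internal.example.com/campaigns/{campaign_id}/signups",
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["signup_count"]

# HR, or any other team, only ever touches marketing's data this way.
signups = get_campaign_signups("spring-promo")
```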

Some of the lessons Amazon learned along the way:

  • Support - Support for your team's interface becomes critical
  • Security - Every team becomes a potential DoS attacker, requiring service levels, quotas, and throttling (a minimal throttling sketch follows this list)
  • Monitoring / QA - Monitoring and QA are interconnected; you will need smart tools that tell you not just whether something is up and running, but whether it is actually delivering the expected results
  • Discovery - Service discovery becomes important. You will need to know what APIs there are, whether they are available, and where to find them.
  • Testing - A sandbox and debugging tools are essential for all APIs
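As a rough illustration of the security lesson above, here is a minimal token-bucket throttle in Python; it is a common pattern for enforcing quotas between internal consumers, not Amazon's actual implementation.

```python
# Minimal token-bucket throttle: refill tokens over time, spend one per call.
import time

class TokenBucket:
    def __init__(self, rate_per_second: float, capacity: int):
        self.rate = rate_per_second          # tokens added per second
        self.capacity = capacity             # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would return HTTP 429 or queue the request

# One bucket per consuming team keeps any single team from flooding a service.
bucket = TokenBucket(rate_per_second=5, capacity=10)
if not bucket.allow():
    print("Throttled: try again later")
```
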
Of course, this is a very small sampling from a former employee's rant. There are dozens, maybe hundreds, of individual lessons like these that Amazon had to discover organically.

Yegge points out that “Organizing into services taught teams not to trust each other in most of the same ways they’re not supposed to trust external developers.”

This makes deploying internal APIs a great exercise for preparing your company for the coming API economy, where you will have to expose self-service, partner, and public APIs to stay competitive in your industry.

When Amazon started, it was difficult to see how the bookseller would become the e-commerce powerhouse it is today, let alone that Amazon would transform culturally into a company that thinks and operates as a service-oriented architecture, delivering Amazon Web Services, a shift that not only changed how the company operates but created an entire platform that would change how the Internet works.


How Do I Convince My Managers of the Importance of Having Internal APIs?

One of the ways I develop content for this site is by talking through my day-to-day experiences as an API Evangelist. I feel that talking through what I’m learning in real-time is the best way to make it stick, while also sharing it with the public. There is a lot of value that comes out of my daily learning, and it would be wrong not to share it.

I had someone approach me on LinkedIn today and ask:

I'm trying to convince my colleagues and managers on the importance of having internal API, not just as a prologue to public API, but also as a way to have cleaner, better-understood interfaces internally for our developers. Do you have any thoughts or tips on the subject?

Internal evangelism is a big part of my role, and I know it's something that other API advocates face as they try to sell the concept of APIs to decision makers within their companies. Testing the waters with APIs internally is the best way to get started and learn what goes into planning, deploying, and managing an API.

When trying to sell your bosses on APIs for internal use, I would start with:

  • Decoupling - Deploying your company's data and resources as individual RESTful resources decouples them, allowing them to be consumed and managed independently (see the sketch after this list). This kind of decoupling enables more agile scaling, migration, and integration into a wide range of organizations and applications across your company. Consider how APIs enabled Netflix to migrate from the data center to the cloud while growing at an unprecedented pace, and to expand globally.
  • Product Management - Allocating your company's data and resources as independent RESTful APIs will simplify product management, allowing each API to be defined, launched, deployed, and even killed off independently of the others. This approach makes very granular ownership possible, with product managers owning specific APIs or groups of API products, and product ownership scaling with new managers as needed.
  • Organization Interoperability - APIs enable much more flexible collaboration between departments that may be geographically or organizationally distant. With a growing virtual workforce, this type of interoperability will be key to companies being able to staff up and meet the demands of their growing businesses.
  • Multi-Use - APIs will enable your company's resources and data to be used across multiple implementations, from internal applications and websites to mobile and tablet devices, without building out separate systems.
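As a minimal sketch of the decoupling point above, this is roughly what exposing a single company resource as its own RESTful API can look like, here using Python and Flask; the resource and data are hypothetical placeholders, not a specific product.

```python
# Sketch: one team's resource exposed as an independent RESTful API.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real deployment this data would live in a store owned by this one team.
PRODUCTS = {
    "1": {"id": "1", "name": "Widget", "status": "active"},
}

@app.route("/products", methods=["GET"])
def list_products():
    return jsonify(list(PRODUCTS.values()))

@app.route("/products/<product_id>", methods=["GET"])
def get_product(product_id):
    product = PRODUCTS.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

if __name__ == "__main__":
    app.run(port=5000)
```

Because the resource stands alone, it can be scaled, migrated, or retired without touching any other part of the business.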

That is where I’d start when selling colleagues and managers on the importance of having internal APIs. APIs can introduce flexibility and agility not only into your IT and development operations, but also bring a RESTful business architecture into your company.

In addition to the benefits APIs will introduce internally, all of this will set the stage for improved business development with your partners, and your company will be primed if and when you are ready to open up your APIs to the public.

UPDATE:

The person who contacted me on LinkedIn got back to me with this:

Thank you. I showed your post to one of the managers here and it actually made him change his mind. The next step is to persuade them that we need to use vendors such as 3Scale, Apigee, or Mashery instead of building it all ourselves.


API Products and Services

I'm seeing a lot of chatter on the Internets lately about API development and best practices, like 10 Common Mistakes Made by API Providers at RWW and APIs: An Important Part of Product Strategy at ProgrammableWeb.

I had the pleasure of sitting down with an engineer from Mashery a couple of weeks ago and listening to their assessment of the API market. Ever since then, I've been reviewing their approach to API deployment and working to understand the playing field better.

Today I came across Sonoa and the free API tools at Apigee. Apigee offers:
  • API Testing
  • API Debugging
  • API Analytics
  • API Protection
I'm checking out Sonoa and what more they have to offer. I am also adding testing, debugging, and protection to my list of API building blocks.

If you think there is a link I should have listed here, feel free to tweet it at me or submit it as a Github issue. Even though I do this full time, I'm still a one-person show; I miss quite a bit, and I depend on my network to help me know what is going on.