The JSON API is carefully designed and well tested. Its HMAC-signature-based security scheme is a little tricky to negotiate, but, once you've figured that out, using the JSON API should be straightforward so long as you are using a programming language that makes it easy to create the JSON request and parse the JSON response.
However, if you are using Java, .NET, or Python, we do still recommend you use the client libraries we've created for those languages. They are fast, small, really easy to use, and with them you can be up and running fetching data in a couple of minutes.
If you're using another programming language and are choosing between the JSON API and the XML API, we suggest you choose based on your experience and your language's handling of JSON and XML. If you're using JavaScript, for example, then JSON will probably be much easier for you to work with. But there is no other significant difference between the JSON and XML APIs: they both let you do exactly the same things, they both use the same security scheme, they are both served by the same backend system, and you can use either or both through the same API account.
We also have some sample code that uses the JSON API from JavaScript, Node.js, Office/Excel, PHP, PowerShell, R, Ruby, and VBA.
And the JSON API test tool is useful for testing different JSON requests and seeing the JSON responses that come back.
To fetch data from the JSON API you need to: create the JSON request; sign it with your API account's security key (using HMAC); encode the request and the signature using base64url encoding; and send the encoded data as HTTP request parameters to http://apiv1.degreedays.net/json or https://apiv1.degreedays.net/json. Each of these steps is explained below:
There are currently three main types of request:

- LocationDataRequest – for fetching data from a specified Location (which can be a station ID, a longitude/latitude position, or a postal/zip code).
- LocationInfoRequest – typically used to get information about the station that would be used to generate data for a given Location, without actually getting that data back. It's basically LocationDataRequest but without any data in the response.
- RegressionRequest – used to send energy data to the API so it can test thousands of regressions and send back a regression model with the HDD and/or CDD base temperature(s) that give the best statistical fit.

All three types of request are highly configurable. The request formats of LocationDataRequest and LocationInfoRequest differ only in the name of the request (i.e. "LocationDataRequest" or "LocationInfoRequest"). The docs on the API's regression functionality explain how to configure a JSON RegressionRequest. All JSON requests are sent to the API in the same way, explained further down this page.
Below is an example JSON request (a LocationDataRequest) that shows a few of the main options. You can copy/paste it into the JSON API test tool to try it out. Below the example request are more details of the various options you can use within the JSON.
If you have a specific data-fetching pattern in mind, and you're struggling to figure out what JSON you'll need, please feel free to email us for help at info@degreedays.net.
{ "securityInfo": { // The "endpoint" is the URL you send your request to. You can send it // to http://apiv1.degreedays.net/json (to call the API over HTTP) or // https://apiv1.degreedays.net/json (to call it over HTTPS); just make // sure the endpoint you specify here is exactly the same as the URL you // actually send your request to. "endpoint": "http://apiv1.degreedays.net/json", // The "accountKey" comes with your API account. "accountKey": "test-test-test", // The "timestamp" should be the time of the request, in ISO format. // You should update this for each request you make. "timestamp": "2024-10-03T16:12:15Z", // The "random" should be a random string, generated afresh for each // request. It can be a random number or a UUID or similar. "random": "66fec25f0f9bb" }, // Your Request can be a LocationDataRequest (shown here) or a // LocationInfoRequest (explained further down this page). "request": { "type": "LocationDataRequest", // Specify the Location you want data from. You can use a // StationIdLocation, a LongLatLocation, or a PostalCodeLocation; all // are explained further down this page. "location": { "type": "PostalCodeLocation", "postalCode": "02532", "countryCode": "US" }, // Now specify what data you want. You can specify up to 120 DataSpec // items in a single request (e.g. HDD and CDD in 60 base temperatures // each). Each DataSpec can be either a DatedDataSpec or an // AverageDataSpec; both types are shown in this example request. Give // each DataSpec a unique name in the "dataSpecs" object so you can // identify the corresponding DataSet in the response. "dataSpecs": { // The following specifies HDD with a base temperature of 65 F, // broken down daily (a value for each day), and covering the last 7 // days. "dailyHDD" is the name we have chosen to give this // DataSpec; we'll get a DataSet with the same name in the response. "dailyHDD": { "type": "DatedDataSpec", // The Calculation defines the type of degree days and the base // temperature. You can specify a HeatingDegreeDaysCalculation // or a CoolingDegreeDaysCalculation; both are explained further // down this page. "calculation": { "type": "HeatingDegreeDaysCalculation", "baseTemperature": { "unit": "F", "value": 65 } }, // A DatedDataSpec needs a Breakdown, it can be a DailyBreakdown, // a WeeklyBreakdown, a MonthlyBreakdown, or a YearlyBreakdown. // All are explained further down this page. "breakdown": { "type": "DailyBreakdown", // A Breakdown has a Period to specify the dates that the // data should cover. It can be a LatestValuesPeriod or a // DayRangePeriod; both are explained further down this page. "period": { "type": "LatestValuesPeriod", "numberOfValues": 7 } } }, // The following specifies CDD with a base temperature of 21.5 C, // broken down monthly, and covering June, July, and August of 2024. "monthlyCDD": { "type": "DatedDataSpec", "calculation": { "type": "CoolingDegreeDaysCalculation", "baseTemperature": { "unit": "C", "value": 21.5 } }, "breakdown": { "type": "MonthlyBreakdown", "period": { "type": "DayRangePeriod", "dayRange": { "first": "2024-06-01", "last": "2024-08-31" } } } }, // The following specifies 5-year-average HDD with a base temperature // of 15.5 C. If you made this request today you could expect to get // an average of data from 2019, 2020, 2021, 2022, and 2023. // NB as this covers 5 years it will use more request units than the // much-shorter items defined above. 
For repeated testing you might // want to remove it or temporarily reduce its "numberOfValues" to 1 // if your API account does not have a high rate limit. "averageHDD": { "type": "AverageDataSpec", "calculation": { "type": "HeatingDegreeDaysCalculation", "baseTemperature": { "unit": "C", "value": 15.5 } }, // The Breakdown for an AverageDataSpec must be a // FullYearsAverageBreakdown. "breakdown": { "type": "FullYearsAverageBreakdown", "period": { "type": "LatestValuesPeriod", "numberOfValues": 5 } } }, // The following specifies hourly temperatures for just the most // recent full day. You can fetch hourly temperatures covering much // longer periods, but here we specify just one day to keep the // example response short. // NB fetching hourly temperatures uses extra request units and // requires at least an API Standard account. So we suggest you // remove this TimeSeriesDataSpec from your code if you don't need // it. "hourlyTemperatures": { "type": "TimeSeriesDataSpec", "calculation": { "type": "TemperatureTimeSeriesCalculation", "interval": "Hourly", "temperatureUnit": "C" }, // Hourly data is always broken down hourly, but // TimeSeriesDataSpec takes a DatedBreakdown (like // DailyBreakdown) to enable you to get hourly data that matches // your degree days exactly, and so you can easily specify // hourly data covering e.g. the last x days, weeks, or months. // It is a little confusing at first, but it is useful too! "breakdown": { "type": "DailyBreakdown", "period": { "type": "LatestValuesPeriod", "numberOfValues": 1 } // If you want figures for the current day so far, add the // following property here inside the breakdown: // "allowPartialLatest": true } } } } }
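If you are building the JSON programmatically, the "timestamp" and "random" values in the "securityInfo" are easy to regenerate for each request with standard library functions. Here's a minimal Python sketch; the helper name is ours, not part of the API, and only the JSON property names come from the docs above:

import datetime
import uuid

def build_security_info(account_key):
    """Build the "securityInfo" object for a JSON API request.

    build_security_info is our own illustrative helper; only the JSON
    property names below come from the API docs.
    """
    return {
        "endpoint": "http://apiv1.degreedays.net/json",
        "accountKey": account_key,
        # ISO-format UTC timestamp, regenerated for each request.
        "timestamp": datetime.datetime.now(datetime.timezone.utc)
            .strftime("%Y-%m-%dT%H:%M:%SZ"),
        # A fresh random string for each request (a UUID works well).
        "random": uuid.uuid4().hex,
    }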
LocationDataRequest
The above JSON is just an example; you can easily configure your own LocationDataRequest to specify exactly the data you want. The JSON API test tool can be useful here: you can copy/paste in the example JSON request above and then modify it to try out the options described below.
The Location can be a station ID, or a "geographic location" for which the API will select the best weather station to use automatically:
// StationIdLocation
"location": {
  "type": "StationIdLocation",
  // the "stationId" must match [-_0-9a-zA-Z]{1,60}
  "stationId": "EGLL"
}

// LongLatLocation
"location": {
  "type": "LongLatLocation",
  "longLat": {
    "longitude": -135.23127,
    "latitude": 43.92135
  }
}

// PostalCodeLocation
"location": {
  "type": "PostalCodeLocation",
  // the "postalCode" is the postal/zip code of the location and must match
  // [- 0-9a-zA-Z]{1,16}
  "postalCode": "WC2N 5DN",
  // the "countryCode" should be the 2-letter ISO country code of the
  // location, in upper case e.g. US for United States, GB for Great Britain
  "countryCode": "GB"
}
A DataSpec can be a DatedDataSpec (for daily/weekly/monthly/yearly degree days), an AverageDataSpec, or a TimeSeriesDataSpec (for hourly temperature data). All three are shown in the example request JSON further above.
A Calculation (which goes inside a DatedDataSpec or AverageDataSpec) can specify heating or cooling degree days:
// HeatingDegreeDaysCalculation
"calculation": {
  "type": "HeatingDegreeDaysCalculation",
  "baseTemperature": {
    "unit": "C",
    "value": 15.5
  }
}

// CoolingDegreeDaysCalculation
"calculation": {
  "type": "CoolingDegreeDaysCalculation",
  "baseTemperature": {
    "unit": "F",
    "value": 65
  }
}
All temperatures must be specified as whole numbers or with one digit after the decimal point.
A TimeSeriesCalculation (which goes inside a TimeSeriesDataSpec) can specify hourly temperature data in Celsius or Fahrenheit:
// TemperatureTimeSeriesCalculation in Celsius
"calculation": {
  "type": "TemperatureTimeSeriesCalculation",
  "interval": "Hourly",
  "temperatureUnit": "C"
}

// TemperatureTimeSeriesCalculation in Fahrenheit
"calculation": {
  "type": "TemperatureTimeSeriesCalculation",
  "interval": "Hourly",
  "temperatureUnit": "F"
}
A DatedDataSpec (and a TimeSeriesDataSpec*) can have the following kinds of breakdown:
// DailyBreakdown
"breakdown": {
  "type": "DailyBreakdown",
  "period": {
    // Period goes here
  }
}

// WeeklyBreakdown
"breakdown": {
  "type": "WeeklyBreakdown",
  "firstDayOfWeek": "Monday",
  "period": {
    // Period goes here
  }
}

// MonthlyBreakdown
"breakdown": {
  "type": "MonthlyBreakdown",
  // "startOfMonth" is optional, the default being ---01 for regular calendar
  // months starting on the first day of each month. The ---DD format is XML
  // Schema's gDay format for days that recur each month, based on ISO 8601.
  "startOfMonth": "---01",
  "period": {
    // Period goes here
  }
}

// YearlyBreakdown
"breakdown": {
  "type": "YearlyBreakdown",
  // "startOfYear" is optional, the default being --01-01 for regular calendar
  // years starting on Jan 1st each year. The --MM-DD format is XML Schema's
  // gMonthDay format for days that recur each year, based on ISO 8601.
  // The example below specifies years starting on May 21st:
  "startOfYear": "--05-21",
  "period": {
    // Period goes here
  }
}

// CustomBreakdown
"breakdown": {
  "type": "CustomBreakdown",
  "dayRanges": [
    { "first": "2023-10-16", "last": "2023-11-14" },
    // Typically one range would start the day after the last day of the
    // previous range (i.e. no gap). But you can have gaps between ranges
    // if you like (e.g. if your energy data has gaps), as in this example
    // which leaves a gap between 2023-11-15 and 2023-11-20.
    { "first": "2023-11-21", "last": "2023-12-17" }
  ]
}
*As mentioned in the example JSON above, it might seem strange that a TimeSeriesDataSpec (for hourly temperature data) would take these breakdowns in the same way that a DatedDataSpec (for degree days) does, but it works this way to give you flexibility in how you specify the time period that the hourly data should cover, and to make it easy for you to get hourly data that lines up perfectly with your degree days.
All the breakdown types above can also have an allowPartialLatest property that you can specify as true (as opposed to the default value of false) to enable you to fetch time-series data that includes figures for the current day/week/month/year so far. For example:
// A DailyBreakdown with allowPartialLatest set to true:
"breakdown": {
  "type": "DailyBreakdown",
  "period": {
    // Period goes here
  },
  "allowPartialLatest": true
}
Please note that the most recent time-series data can be a little volatile, as weather stations sometimes send multiple reports for the same time, some delayed, and some marked as corrections for reports they sent earlier. Our system generates time-series data using all the relevant reports that each weather station has sent, but the generated figures may change if delayed or corrected reports come through later. If you are storing partial-latest time-series data we suggest you overwrite it later with figures generated after the day has completed and any delayed/corrected reports have had time to filter through.
An AverageDataSpec can currently only have one type of breakdown:
// FullYearsAverageBreakdown specifies that data should be averaged from the
// full calendar years specified by the period:
"breakdown": {
  "type": "FullYearsAverageBreakdown",
  "period": {
    // Period goes here. Typically you'd use a LatestValuesPeriod to
    // specify that the average should come from the most recent x full
    // calendar years.
  }
}
A Period can be specified in two ways:
// LatestValuesPeriod to get the most recent available data:
"period": {
  "type": "LatestValuesPeriod",
  "numberOfValues": 12,
  // "minimumNumberOfValues" is optional - you can specify it if you would
  // rather have a failure than a partial set of data with less than your
  // specified minimum number of values. (Otherwise you may get back less
  // data than you asked for if there aren't enough temperature-data records
  // to generate a full set for your specified location.)
  "minimumNumberOfValues": 12
}

// DayRangePeriod to get data covering your specified dates:
"period": {
  "type": "DayRangePeriod",
  "dayRange": {
    "first": "2019-01-01",
    "last": "2023-12-31"
  },
  // "minimumDayRange" is optional - you can specify it if you would rather
  // have a failure than a partial set of data covering less than your
  // specified minimum range (like for "minimumNumberOfValues" above).
  "minimumDayRange": {
    "first": "2021-01-01",
    "last": "2023-12-31"
  }
}
LocationInfoRequest and two-stage data fetching
Except in name, LocationInfoRequest looks exactly the same as LocationDataRequest. Take the example LocationDataRequest JSON above, change "type": "LocationDataRequest" to "type": "LocationInfoRequest", and you will have a valid LocationInfoRequest.
Each API request you make uses request units that count against your hourly rate limit. A big LocationDataRequest can use a lot of request units, but a LocationInfoRequest will only ever use one. See the sign-up page for more on request units and rate limits.
If you try this out in the JSON API test tool, you will see that a LocationInfoResponse does not contain any data (it has no "dataSets" object). It is typically used for two-stage data fetching, which can be useful if you are dealing with geographic locations (postal/zip codes, or longitude/latitude positions) but storing data by station ID (returned in every successful response). For this use case, two-stage data fetching can help you save request units and improve the efficiency of your system by avoiding re-fetching data that you already have stored.
When you want to add a new location into your system (e.g. if a new user signs up with a new address), you can do the following:
1. Make a LocationInfoRequest with the geographic location and the specification of the data that you want. This will only take one request unit. You won't get any data back, but you will get the station ID that the system would use for an equivalent LocationDataRequest. If you already have data stored for that station ID, use it; if not, progress to stage 2.
2. Make a LocationDataRequest with the station ID from stage 1. This will take more request units (the more data you fetch, the more units it takes), but using the station ID will save a request unit, such that your two-stage fetch will use the same number of request units in total as you would have used if you had made a LocationDataRequest with the geographic location in the first place.

Two-stage fetching will only improve efficiency and save request units if/when you have enough geographic locations in your system that some of them end up sharing weather stations. But, if that is the case, two-stage fetching can really help your system to scale well as more and more geographic locations are added in.
RegressionRequest for advanced regression functionality
With RegressionRequest you can send energy data to the API so it can test thousands of regressions and find the HDD and/or CDD base temperatures that give the best statistical fit. Our docs on the API's regression functionality cover all the RegressionRequest configuration options and the data you can expect in the RegressionResponse. They do not, however, cover how to send a RegressionRequest to the API and get a response back. This process is the same for all request types, so we suggest you follow the instructions on this page or in one of the language-specific code samples (JavaScript, Node.js, Office/Excel, PHP, PowerShell, R, Ruby, or VBA) to get your code sending a LocationDataRequest to the API and getting a response back, then get it sending a RegressionRequest as well.
You can also test a RegressionRequest quickly with the API regression docs and the JSON API test tool.
You need to send five parameters to the endpoint URL (http://apiv1.degreedays.net/json or https://apiv1.degreedays.net/json):

- request_encoding=base64url
- signature_method=HmacSHA256 (or alternative)
- signature_encoding=base64url
- encoded_request (the JSON string, base64url encoded)
- encoded_signature (the signature, base64url encoded)

HTTP POST is ideal, but, if that's difficult for you, a GET should be OK too.
The signature and base64url encoding are both explained below:
The signature has two purposes: it proves that the request came from you (since only you hold your account's secret security key), and it ensures that the request cannot be tampered with in transit.
The signature_method will ideally be HmacSHA256. If you can't do that for whatever reason (it might not be supported by your programming language), HmacSHA1 should work instead. Of course the method you use to make the signature has to correspond with the signature_method parameter that you send with your request.
Your JSON string should be fed into the HMAC function, with the key being your security key (one of the access keys that comes with your API account). If the HMAC function requires a byte array, convert the JSON string into bytes first using UTF-8 encoding. The HMAC function should return the signature as a byte array. Encode that using base64url (as described below), and use the resulting string as the encoded_signature parameter.
base64url encoding is required to:

- encode the JSON request into a string that can safely be sent as an HTTP parameter (the encoded_request parameter); and
- encode the signature's byte array into a string that can safely be sent as an HTTP parameter (the encoded_signature parameter).

base64url encoding is like regular base 64, but with a few differences:

- + characters are - (minus) instead;
- / characters are _ (underscore) instead;
- there are no = characters as padding, and no line breaks.

Many libraries can do base64url encoding automatically, but, if not, then hopefully you can generate a regular base-64 string and convert it using the simple rules above. Note that you might find it works OK with the = characters and line breaks left in, but to ensure future compatibility it's best to remove them.
If your base-64 function takes a byte array instead of a string, to encode the request JSON you should encode the JSON string into bytes first (using UTF-8 encoding) and then feed the bytes into your base-64 function.
If, in your programming language, you're struggling to generate anything base 64, email us at info@degreedays.net as we should be able to set up an alternative like hex.
As mentioned above, an HTTP POST is the best way, but, if that's difficult in your programming language, a GET should work too.
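Putting the pieces together, here's a rough Python sketch that sends the five parameters in an HTTP POST using the third-party requests library. The sign_request and base64url_encode helpers are the illustrative ones sketched above (not part of the API), and the security key would come from your own account's access keys:

import json

import requests  # third-party; pip install requests

def send_json_request(request_json, security_key):
    """POST a JSON API request and return the parsed JSON response."""
    json_string = json.dumps(request_json)
    signature = sign_request(json_string, security_key)  # sketched above
    params = {
        "request_encoding": "base64url",
        "signature_method": "HmacSHA256",
        "signature_encoding": "base64url",
        "encoded_request": base64url_encode(json_string),   # sketched above
        "encoded_signature": base64url_encode(signature),
    }
    # The URL must match the "endpoint" specified in the request's
    # securityInfo.
    http_response = requests.post("http://apiv1.degreedays.net/json",
                                  data=params)
    return http_response.json()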
If you can handle a compressed response, you can set the Accept-Encoding header to gzip or deflate. Then the JSON response will come back in compressed format. This won't make much of a difference if you're only fetching a small amount of data with each request, but it's well worth using compression to reduce bandwidth if you're fetching daily data covering long periods or in lots of base temperatures.
The JSON API test tool can be useful for debugging your HTTP requests, as it generates and shows all the HTTP request parameters for any JSON request you give it. Usually the test tool's "Auto-prepare request" option is helpful, but you should switch it off if you are checking your encoded_request or encoded_signature against the ones generated by the test tool, as you don't want it modifying your JSON request before generating the HTTP parameters.
Below is an annotated example JSON response to the example JSON LocationDataRequest above. It shows all the types of JSON data that you should see, apart from Failure objects (covered later), but, when writing your parsing code, please do bear in mind that we may add in extra JSON data in the future.
{ // The following "metadata" is included with every response "metadata": { "rateLimit": { // "requestUnitsAvailable" shows how many request units your account // has left in the current period. "requestUnitsAvailable": 2116, // "minutesToReset" is the number of minutes until your available // request units are reset to their full allowance. "minutesToReset": 33 } }, // If you made a LocationDataRequest in your JSON request, the "response" // will be a LocationDataResponse. (Unless it's a Failure instead - see // further below for more on this.) "response": { "type": "LocationDataResponse", // There will always be a "stationId", assuming your request could be // satisfied. "stationId": "KFMH", // The "targetLongLat" will always have the same format, however you // specified the Location in your JSON request. If you specified a // StationIdLocation it will give the location of that station; if you // specified a LongLatLocation it will repeat those coordinates back to // you; if you specified a PostalCodeLocation (as in the example JSON // request) it will give you the coordinates that the API used to // represent that postal code. "targetLongLat": { "longitude": -70.59047, "latitude": 42.7455 }, // The "sources" array represents the stations that were used to // generate your data. There will always be at least one, and at // present there will only be one (we've allowed for the future // possibility of combining data from multiple stations without breaking // compatibility). "sources": [ { "station": { "id": "KFMH", "longLat": { "longitude": -70.5215, "latitude": 41.6585 }, "elevationMetres": 40, "displayName": "Otis Air National Guard Base, MA, US" }, // How far is this station from the "targetLongLat" (see above)? // (It will be 0 if you specified a StationIdLocation in your // request.) "metresFromTarget": 11242 } ], "dataSets": { // Every DataSet has a unique name in the "dataSets" object that // matches the name you gave the corresponding DataSpec in the // "dataSpecs" object of your JSON request. // The following examples show the formats you can expect for the // different types of DataSet: "dailyHDD": { "type": "DatedDataSet", "percentageEstimated": 1, "values": [ { // "d" is the first day of the period that the value "v" // covers. The "ld" (last day) property isn't included // for daily data as it's always the same as "d". // We don't like cryptic abbreviations or properties // that are only sometimes present, but big responses // can easily have hundreds of thousands of values, so // we keep them small to minimize bandwidth and memory // footprint. "d": "2024-09-26", "v": 3.8 }, { "d": "2024-09-27", "v": 2.6, // "pe" is short for "percentage estimated". If it's // missing it's the default value of 0. "pe": 0.6 }, { "d": "2024-09-28", "v": 1.4 }, { "d": "2024-09-29", "v": 0.3 }, { "d": "2024-09-30", "v": 2.9 }, { "d": "2024-10-01", "v": 3.9, "pe": 1 }, { "d": "2024-10-02", "v": 1.4 } ] }, "monthlyCDD": { "type": "DatedDataSet", "percentageEstimated": 0.6, "values": [ { "d": "2024-06-01", // "ld" is short for "last day". It's only included if // it's different to "d" (the first day) i.e. if the // data is weekly or monthly or yearly instead of daily. "ld": "2024-06-30", "v": 17.4, "pe": 2 }, { "d": "2024-07-01", "ld": "2024-07-31", "v": 73.6 }, { "d": "2024-08-01", "ld": "2024-08-31", "v": 24.1 } ] }, "averageHDD": { "type": "AverageDataSet", "percentageEstimated": 0.2, "firstYear": 2019, "lastYear": 2023, // "annual" has the average-annual total. 
"annual": { "v": 2478.3, "pe": 0.2 }, "monthly": { // "1" has the average value for January; "2" has the // average value for February; etc. "1": { "v": 523.9, "pe": 0.01 }, "2": { "v": 435, "pe": 0.3 }, "3": { "v": 363.1, "pe": 0.06 }, "4": { "v": 203.5, "pe": 0.007 }, "5": { "v": 85.9, "pe": 0.4 }, "6": { "v": 21.1, "pe": 1 }, "7": { "v": 1.9, "pe": 0.2 }, "8": { "v": 4.2, "pe": 0.01 }, "9": { "v": 35.8, "pe": 0.1 }, "10": { "v": 125.4, "pe": 0.08 }, "11": { "v": 257.3, "pe": 0.007 }, "12": { "v": 421.2, "pe": 0.03 } } }, "hourlyTemperatures": { "type": "TimeSeriesDataSet", "percentageEstimated": 0.1, "values": [ { // "dt" is short for "date-time". They are in ISO // format, so you can easily parse out the local time // (from the first 16 characters), and the time-zone // offset is there too if you want it or the time in // UTC. "dt": "2024-10-02T00:00-04:00", "v": 10 }, { "dt": "2024-10-02T01:00-04:00", "v": 9.8 }, { "dt": "2024-10-02T02:00-04:00", "v": 8.5 }, { "dt": "2024-10-02T03:00-04:00", "v": 7 }, { "dt": "2024-10-02T04:00-04:00", "v": 6.8 }, { "dt": "2024-10-02T05:00-04:00", "v": 6.2 }, { "dt": "2024-10-02T06:00-04:00", "v": 7.2 }, { "dt": "2024-10-02T07:00-04:00", "v": 9.2 }, { "dt": "2024-10-02T08:00-04:00", "v": 13.5 }, { "dt": "2024-10-02T09:00-04:00", "v": 16 }, { "dt": "2024-10-02T10:00-04:00", "v": 16.5 }, { "dt": "2024-10-02T11:00-04:00", "v": 17.8 }, { "dt": "2024-10-02T12:00-04:00", "v": 17, "pe": 1 }, { "dt": "2024-10-02T13:00-04:00", "v": 17.2 }, { "dt": "2024-10-02T14:00-04:00", "v": 17.6 }, { "dt": "2024-10-02T15:00-04:00", "v": 16, "pe": 1 }, { "dt": "2024-10-02T16:00-04:00", "v": 15.5 }, { "dt": "2024-10-02T17:00-04:00", "v": 14.9, "pe": 1 }, { "dt": "2024-10-02T18:00-04:00", "v": 12.9 }, { "dt": "2024-10-02T19:00-04:00", "v": 12.2 }, { "dt": "2024-10-02T20:00-04:00", "v": 11.9 }, { "dt": "2024-10-02T21:00-04:00", "v": 11.9 }, { "dt": "2024-10-02T22:00-04:00", "v": 11.4 }, { "dt": "2024-10-02T23:00-04:00", "v": 11 } ] } } } }
You can generate a similar response yourself by running the example JSON request further above through the JSON API test tool. But please note that the annotated response above is just an example to show the response structure; the figures you get in your live response will be different.
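As a simple starting point for your parsing code, here's a minimal Python sketch that pulls the daily HDD figures out of a response shaped like the one above. "dailyHDD" is just the name the example request gave that DataSpec, and the function name is ours:

import json

def print_daily_hdd(response_text):
    """Print the date and value of each daily HDD figure in the response.

    Assumes the request contained a DatedDataSpec named "dailyHDD", as in
    the example request further above. Real parsing code should also check
    for Failure objects (covered below).
    """
    full = json.loads(response_text)
    data_set = full["response"]["dataSets"]["dailyHDD"]
    for item in data_set["values"]:
        # "d" is the first day covered; "v" is the degree-day value.
        print(item["d"], item["v"])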
If something goes wrong, the JSON response will contain a Failure.

Every Failure has a "code" that indicates the cause of the failure.
These codes are named in a hierarchical way. For example, if a failure is caused by an invalid request, its code will begin with "InvalidRequest". The idea is that you can quickly test for broader types of failure code without having to know or itemize all the sub-types (like "InvalidRequestAccount" and "InvalidRequestSignature").
New codes may be added into the API at any time. New codes might be sub-types of existing types (like if "InvalidRequestSomeNewCode" was added as a sub-type of "InvalidRequest"), or they might be completely new (like "SomeCompletelyNewCode"). If you're writing logic that checks for different failure codes, make sure that it won't blow up if it comes across a code that it doesn't recognize.
Any request can fail completely for a variety of reasons. Here's an example of the sort of JSON response you can expect if it does:
{ // Note how the "metadata" comes through as usual even though the request // failed. This is expected, you can rely on the "metadata" being there. "metadata": { "rateLimit": { "requestUnitsAvailable": 1954, "minutesToReset": 11 } }, // Note how the "response" property is an object of type "Failure" rather // than the "LocationDataResponse" or "LocationInfoResponse" you'd usually // expect. "response": { "type": "Failure", "code": "LocationNotRecognized", "message": "Sorry, we do not recognize the location that you specified. Our postal-code database did not recognize the specified PostalCodeLocation, and was consequently unable to find its longitude/latitude position." } }
Here are some failure codes you might see:

Location failures (all codes starting with "Location"):

- LocationNotRecognized – if you request data from a location that the API does not recognize as a weather station or real-world geographic location. This type of failure will occur if you specify a StationIdLocation with an unrecognized ID, or a PostalCodeLocation with an unrecognized postal code (or with the wrong country code).
- LocationNotSupported – if you request data from a location that is recognized but not currently supported by the API. This can happen if you request data from a LongLatLocation or PostalCodeLocation for which the API is unable to find a good weather station, or if you request data from a StationIdLocation specifying a weather station that has stopped working. For more information on this failure, see here.

RateLimit failures (all codes starting with "RateLimit"):

- RateLimit – if you've hit your rate limit. You can check the metadata to see how many minutes you will have to wait until it resets. If this happens a lot, you might want to upgrade your account (just email us at info@degreedays.net if so).
- RateLimitOnLocationChanges – the rate limit on the number of times a low-end location-limited account can change the location(s) that it accesses data from. You probably won't need to special-case this – it starts with "RateLimit" and that is probably all that matters. But there's more information here if you want it.

InvalidRequest failures (all codes starting with "InvalidRequest"):

- InvalidRequestDeliveryFormat – this means there's something wrong with the way you sent the request. Typically it means you've not included all the request parameters mentioned above, or their values aren't in the expected format.
- InvalidRequestJson – this means your JSON (encoded within the encoded_request parameter) does not match the expected format.
- InvalidRequestAccount – if a request is sent with an unrecognized account key.
- InvalidRequestForAccountPlan – if you request hourly temperature data through one of the lower-level API account plans that doesn't support it.
- InvalidRequestSignature – if a request is sent with an invalid signature, either because of an error in the security key or an error in the code that uses that key to generate the signature.
- InvalidRequestTimestamp – if a request is sent with an invalid timestamp, typically caused by an out-of-sync clock on the client machine.

Service failures (all codes starting with "Service"):

- ServiceTemporarilyDown – a temporary problem preventing the API service from functioning properly (sorry!).
- ServiceUnexpectedError – an unexpected error in the API service (sorry again!).

The list above is not complete, and more failure codes may be added at any time. Instead of trying to get your system to handle them all, we suggest you just watch out for the codes you want to do something specific with. Always test for codes with e.g. code.startsWith("LocationNotRecognized") so that your system will be able to handle us adding new sub-types of the codes you're watching out for.
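For instance, here's a minimal Python sketch of that kind of prefix-based check, run on the parsed "response" object from the JSON; the function and its messages are ours, not part of the API:

def describe_failure(response_object):
    """Return a human-readable note for a full-request Failure, using
    prefix matching so new sub-codes are still handled sensibly."""
    if response_object.get("type") != "Failure":
        return None  # not a failure
    code = response_object["code"]
    if code.startswith("Location"):
        return "Problem with the specified location: " + code
    if code.startswith("RateLimit"):
        return "Rate limit hit - wait for the reset: " + code
    if code.startswith("InvalidRequest"):
        return "Problem with the request itself: " + code
    if code.startswith("Service"):
        return "Temporary problem on the API side: " + code
    return "Unrecognized failure code: " + code  # future-proofing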
DataSet failure (a partial failure)
A LocationDataRequest can succeed overall even though some or all of its DataSpec/DataSet items fail. For example, you might specify that you want data from 100 years ago from a specific PostalCodeLocation. The API might find a good weather station to match the postal code (partial success), but not one with coverage going that far back in time (partial failure).
In such instances, you can expect to receive a Failure in place of the DataSet that you would usually receive if everything had worked. Like a DataSet, the Failure will have a name that matches the name of the corresponding DataSpec from your request. You can check the "type" property to see if you have a Failure or the DatedDataSet/AverageDataSet/TimeSeriesDataSet you were hoping for.
{ "metadata": { // details omitted for clarity }, "locationDataResponse": { // the JSON that comes before "dataSets" is omitted for clarity "dataSets": { // successful DataSet items omitted for clarity "nameOfCorrespondingDataSpecFromRequest": { "type": "Failure", "code": "SourceDataCoverage", "message": "Sorry, the source does not have enough recorded temperature readings for us to be able to generate data covering enough time to satisfy your specification." } // successful DataSet items omitted for clarity } } }
LocationInfoResponse
If you submit a LocationInfoRequest you'll get a LocationInfoResponse back. LocationInfoResponse is basically just LocationDataResponse without the "dataSets" property. And of course its "type" will be "LocationInfoResponse" instead of "LocationDataResponse".

Beyond these differences, LocationInfoResponse is identical to LocationDataResponse (including the same request-failure possibilities), and you should be able to re-use most of your parsing code.
You can try this out with the JSON API test tool by copy/pasting in the example LocationDataRequest further above, changing "type": "LocationDataRequest" to "type": "LocationInfoRequest", and sending it to the API to get a LocationInfoResponse back.
This page focuses on the technical details, but it is also worth reading the higher-level integration guide for tips on the various approaches to integrating with the API. We have helped a lot of businesses integrate their software with our API, so we are very familiar with the patterns that work well for common use cases.