An introduction to the Explore API
An API, or Application Programming Interface, is a tool designed to allow different software systems to communicate with each other. If you want to use data stored somewhere online, the point of an API is to allow you to interact with that data in a way the source will understand. And if you want to share your data with others, an API allows you to define what kind of access they have.
Opendatasoft provides several different APIs to interact with the platform, but the main API used to access a given workspace's public data is our Explore API.
What the Explore API allows you to do
Opendatasoft's Explore API gives you access to public data on the Opendatasoft platform. As such, the Explore API allows you to perform three kinds of actions:
- Explore: Ask for the records and fields you would like to see. The data is returned as a JSON object.
- Export: Export an entire dataset according to specified conditions. The conditions are expressed in ODSQL, our own query language, which is very similar to SQL.
- Analyze: Combine data within a dataset and/or perform simple analyses on it.
For example, you might query a dataset that contains schools, the number of students at those schools, and the region where each school is located, and combine that information by asking for the total number of students per region.
Interacting with the API
So, how do you actually go about using the API? The answer is at once simple and complicated.
As we saw above, using the API consists of making requests and receiving responses. In API-speak, you make a "call" or "request" and receive, in the case of Opendatasoft, a JSON object in return. So when you use the API, your API call is sent to the Opendatasoft server, which answers with a JSON object.
Open a browser window, and paste this URL into it:
http://data.opendatasoft.com/api/v2/catalog/datasets
What you see is a JSON object containing a list of the workspace's datasets, with links to further JSON objects. And remember that you can replace "data.opendatasoft.com" with any Opendatasoft domain to see that domain's data. That's the API in action!
Of course, on its own, this isn't very interesting. But combined with other tools, the API becomes a powerful way of interacting with data. Read on for more details.
In practice, you'll want to use certain tools to make interacting with the API easier and more useful. For example, API calls can be made from platforms such as Postman that are built for working with APIs. If you're a developer, you can use curl or Python's Requests library, as in the sketch below.
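As an illustration, here is a minimal sketch using Python's Requests library. It assumes the public demo portal documentation-resources.opendatasoft.com and the v2.1 response shape (a JSON object with a "results" array); adapt the domain to your own workspace.

```python
import requests

# Any Opendatasoft workspace domain works here; this public demo
# portal is used as an example.
BASE = "https://documentation-resources.opendatasoft.com/api/explore/v2.1"

# Ask the catalog endpoint for the first few datasets in the workspace.
response = requests.get(f"{BASE}/catalog/datasets", params={"limit": 5})
response.raise_for_status()

# Assumes the v2.1 response shape: {"total_count": ..., "results": [...]}
for dataset in response.json()["results"]:
    print(dataset["dataset_id"])
```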
Exploring the data
As we saw above, exploring the data is one of three kinds of actions you can take with the Explore API. By exploring the data, we mean requesting records from a public dataset in order to process and use them on your end.
There are methods you can use, and limits to be aware of, when specifying what data you want returned (see the example request after this list):
- Select: You can select a specific set of fields (columns)
- Where: You can filter the data according to a condition
- Order by: You can sort the results according to a designated column
- Limit/offset: You can limit how many records are returned, or jump directly to a specific record
- Group by: You can group data according to certain field values, or functions applied to those fields
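To make this concrete, here is a hedged example of a records request combining several of these methods. The dataset id and field names below are illustrative assumptions; substitute those of a real dataset.

```python
import requests

BASE = "https://documentation-resources.opendatasoft.com/api/explore/v2.1"
DATASET = "doc-geonames-cities-5000"  # hypothetical dataset id

params = {
    "select": "name, population",    # Select: only these fields
    "where": "population > 100000",  # Where: filter on a condition
    "order_by": "population desc",   # Order by: sort on a column
    "limit": 20,                     # Limit: cap the records returned
}
r = requests.get(f"{BASE}/catalog/datasets/{DATASET}/records", params=params)
r.raise_for_status()
print(r.json()["results"])
```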
Exporting the data
API calls are limited to 100 records per call. But if you need to handle all of the data at once, you can export the dataset in its entirety.
The same methods listed under "Exploring the data" can be used to tailor your requests.
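For instance, here is a sketch of a CSV export using the same illustrative dataset id as above; unlike a records call, the exports endpoint streams the full result rather than a 100-record page.

```python
import requests

BASE = "https://documentation-resources.opendatasoft.com/api/explore/v2.1"
DATASET = "doc-geonames-cities-5000"  # hypothetical dataset id

# The same ODSQL parameters (select, where, order_by...) can narrow the export.
with requests.get(
    f"{BASE}/catalog/datasets/{DATASET}/exports/csv",
    params={"where": "population > 100000"},
    stream=True,
) as r:
    r.raise_for_status()
    with open("cities.csv", "wb") as f:
        for chunk in r.iter_content(chunk_size=65536):
            f.write(chunk)
```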
Analyzing the data
After exploring or exporting the data, you may wish to perform some basic analysis on it. This is called aggregating, and you can use different functions to combine, or aggregate, the data in productive ways:
- avg (average)
- count
- count distinct
- envelope
- bbox
- max (maximum)
- median
- min (minimum)
- percentile
- sum
These functions are applied to "groups", which can be defined with the "Group by" method described above.
This can come in handy if, for example, you want the total expenses for each month and your dataset is a list of expenses and their dates. In that case, you would group the expenses by month and sum the expenses column. Even this small amount of analysis lets you begin to understand and make real use of the raw data. And, depending on how you use the API, you can do it in a standardized, automated, or scalable way.
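As a sketch, that monthly-expenses example might look like the following; the dataset id and the "month" and "amount" fields are hypothetical.

```python
import requests

BASE = "https://documentation-resources.opendatasoft.com/api/explore/v2.1"
DATASET = "expenses"  # hypothetical dataset with "month" and "amount" fields

params = {
    "select": "sum(amount) as total_expenses",  # aggregate each group
    "group_by": "month",                        # one group per month
}
r = requests.get(f"{BASE}/catalog/datasets/{DATASET}/records", params=params)
r.raise_for_status()
print(r.json()["results"])  # e.g. [{"month": "2024-01", "total_expenses": ...}]
```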
Why upgrade to version 2?
Differences and advantages of v2:

| | V1 | V2 |
| --- | --- | --- |
| Use paradigm | 3 main endpoints for catalog and dataset: "/search" to retrieve data, "/analyse" to use aggregation functions, "/download" to export data | 2 main endpoints for catalog and dataset: "/records" to retrieve or perform analyses on a sample of the dataset (10,000 records max), and "/exports" to export the full dataset in various formats. Both endpoints use our ODSQL language, which provides, among other things, aggregation functions |
| Exports | Not all export formats are available; group_by not supported | All export formats are available; group_by supported |
| Internal and external uses | Only used by the old ODS tools | Used by the Studio and other external services (WFS, CSW, Automation, ...) |
| URL encoding | Some special characters, such as '#', must be escaped | No need to escape special characters |
Parameter mapping:
| V1 | V2 |
| --- | --- |
| q, sort | Use select, where, order_by, and group_by from our ODSQL language instead |
| dataset | The dataset is now part of the endpoint path; it is no longer a parameter |
| rows & start | Become limit & offset (same meaning) |
| refine.<facet_name>=<facet_value>, exclude.<facet_name>=<facet_value> | refine=<facet_name>:<facet_value>, exclude=<facet_name>:<facet_value> |
| lang, timezone | No changes |
Query translation:
In v1, queries go through /api/records/1.0; in v2, through /api/explore/v2.1. For example, with this portal: https://documentation-resources.opendatasoft.com
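As an illustrative sketch (hypothetical dataset id and field), a v1 request and its v2.1 counterpart might look like this:

```
# V1: filter, sort and paginate with q, sort and rows
https://documentation-resources.opendatasoft.com/api/records/1.0/search/?dataset=doc-geonames-cities-5000&q=population>100000&sort=population&rows=20

# V2.1: the same intent expressed with ODSQL parameters
https://documentation-resources.opendatasoft.com/api/explore/v2.1/catalog/datasets/doc-geonames-cities-5000/records?where=population>100000&order_by=population desc&limit=20
```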
Where to now?
Hopefully by now you better understand why an API can be useful, and have a sense of what Opendatasoft allows you to do with the Explore API.
No doubt using an API isn't for everyone. But with a little work, the API can allow you to better understand—and efficiently use—your own data or other public data.
To answer your questions and help you on your way, we invite you to dive into our documentation for the Explore API.