
So you have some data, and you need to build an application around it...

Dataflow Programming

Reusable Parts

Technology Agnostic

Responsive Web Applications

Data-rich Interactivity



The Studio is built on a dataflow paradigm. Data flows between elements that perform operations on it. The elements of an application are encapsulated as modular components, and you wire components together to configure an application.
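To illustrate the paradigm, here is a minimal sketch in Python. The names and shapes are invented for illustration, not the Studio's actual API: each component is a small unit of work with inputs and outputs, and the application is just the wiring between them.

```python
# Illustrative sketch of dataflow wiring (all names are hypothetical).

def load_rows(path):
    """Source component: emits a list of records (stubbed here)."""
    return [{"name": "ada", "score": 92}, {"name": "ben", "score": 78}]

def threshold(rows, cutoff):
    """Filter component: passes records whose score clears the cutoff."""
    return [r for r in rows if r["score"] >= cutoff]

def render(rows):
    """Sink component: formats the surviving records for display."""
    return ", ".join(r["name"] for r in rows)

# Wiring the components output-to-input configures the application:
output = render(threshold(load_rows("scores.csv"), 80))
print(output)  # -> ada
```

Swapping one component (say, a different filter) changes the application without touching the others; that is the appeal of the dataflow approach.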


A flexible but governable data model is what makes it special. Values flowing through elements can be primitive, like integers or strings, or complex, like an entity containing multiple values, lists, arrays, arrays containing tables, and so on. You enforce types and structure as much as you want.

Think JSON, but less ad hoc. You have fine-grained control for manipulating data, and you can enforce types and structure where you need them.

Think SQL, but more flexible. You have the power of arbitrarily complex queries without being constrained to a tabular structure.
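A rough sketch of that "JSON, but governable" idea in Python (the checker below is purely illustrative, not the Studio's data model): nested values stay free-form until you declare a shape, at which point types and structure are enforced.

```python
# Illustrative only: a tiny schema checker in the spirit of
# "flexible like JSON, with types enforced where you want them".

def conforms(value, schema):
    """Recursively check a nested value against a declared shape."""
    if isinstance(schema, type):       # leaf: enforce a primitive type
        return isinstance(value, schema)
    if isinstance(schema, list):       # list: each item matches the element schema
        return isinstance(value, list) and all(
            conforms(v, schema[0]) for v in value
        )
    if isinstance(schema, dict):       # entity: required keys, each with a schema
        return isinstance(value, dict) and all(
            k in value and conforms(value[k], s) for k, s in schema.items()
        )
    return False

person = {"name": str, "scores": [int]}
print(conforms({"name": "ada", "scores": [92, 78]}, person))   # True
print(conforms({"name": "ada", "scores": ["high"]}, person))   # False
```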


Components accommodate the data according to your purpose. You can select and project tabular data into a network diagram. You can sort unstructured data into a table. You do this in the dataflow, not to the data itself. You can use components off the shelf, edit them, or create your own.
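For instance, projecting tabular data into a network might look like the hypothetical component below (column names and the output shape are invented for illustration); the source table is untouched, and the network view exists only downstream in the dataflow.

```python
# Hypothetical component sketch: project a table into a network.

def table_to_network(rows, source_col, target_col):
    """Turn table rows into the nodes and edges of a network diagram."""
    nodes = sorted({r[source_col] for r in rows} | {r[target_col] for r in rows})
    edges = [(r[source_col], r[target_col]) for r in rows]
    return {"nodes": nodes, "edges": edges}

rows = [{"from": "a", "to": "b"}, {"from": "b", "to": "c"}]
net = table_to_network(rows, "from", "to")
print(net["nodes"])  # ['a', 'b', 'c']
print(net["edges"])  # [('a', 'b'), ('b', 'c')]
```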


The people with the data and the problem to solve – the application builders – are in control. They are not constrained by the requirements of a particular technology. Whether domain experts, data scientists, researchers, or developers, they have the power to work with the data intuitively. The data, the project, and the goal determine what makes sense.

Component developers can focus on their expertise. They write the algorithms or visualizations with the right tech for that function. They understand the data model their component needs. They don’t need to know what the data actually look like.
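That separation of concerns can be sketched like this (everything here is an invented example, not the Studio's API): the component author declares only the data model the component needs, and the application builder decides what real data flows in.

```python
# Sketch: the author of this component needs to know only that its
# input is "a list of numbers". It never knows whether those numbers
# came from a CSV, a database, or an API upstream in the dataflow.

def moving_average(values, window=3):
    """Algorithm component: smooth a list of numbers."""
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]

print(moving_average([1, 2, 3, 4, 5]))  # [2.0, 3.0, 4.0]
```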

The result is a data application laboratory. Code is easy to reuse. The barrier to experimentation is low. The odds of innovation are high.


You can use any JavaScript, Python, or R library, or any technology that you can connect to with JavaScript, Python, or R. For data access and processing, you can work with relational or graph databases, Hadoop and Spark, flat files, and RESTful APIs. At the algorithmic layer, you’ve got a wide array of open-source data science tools at your fingertips. The same goes for data visualization and user interfaces. For a list of popular tools, check out our documentation on Working With Your Favorite Technology.
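As one small example of the data-access layer, a source component backed by a relational database could be sketched in Python with the standard library's sqlite3 module (the component shape and table are invented; the sqlite3 calls are real):

```python
import sqlite3

def sql_source(conn, sql):
    """Hypothetical source component: emits query results as dicts."""
    conn.row_factory = sqlite3.Row
    return [dict(r) for r in conn.execute(sql)]

# In-memory demo database standing in for a real data source:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("ada", 92), ("ben", 78)])

rows = sql_source(conn, "SELECT * FROM scores WHERE score >= 80")
print(rows)  # [{'name': 'ada', 'score': 92}]
```

Downstream components receive plain records and never care that SQLite (rather than, say, a REST API) produced them.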