Creating an Ontology

Creating an ontology in DeepLynx requires knowing both how an ontology is described in DeepLynx terms and the individual API calls needed to transform that description into a stored ontology. This guide walks through the steps and terms necessary to create an ontology ready to accept data. See the API documentation that's part of this repository for more information on the individual API calls, their required properties, and their response bodies.

Manually Creating an Ontology

1. Create a Container

A Container is like a bucket - in DeepLynx it contains your ontology definitions as well as the data that you've ingested under that ontology. Containers have information about data sources, class mapping, user permissions, and more. A valid, active Container is required before a user can create an ontology, manage data sources, or ingest data.

Creating a container is easy: simply POST the required information below to {baseURL}/containers

{
    "name": "Test Container",
    "description": "Test Container Description"
}
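
If you are scripting against the API, the same request can be made from any HTTP client. Below is a minimal sketch in TypeScript using the built-in fetch API; the base URL, port, and bearer-token authentication shown are placeholder assumptions for a local install, so adjust them to match your deployment and check the API documentation for the exact response shape.

// Minimal sketch: create a Container via the DeepLynx API.
// baseUrl and apiToken are placeholders - substitute values for your deployment.
const baseUrl = "http://localhost:8090";   // assumed local instance
const apiToken = "<your bearer token>";    // obtained per the authentication documentation

async function createContainer(): Promise<void> {
  const response = await fetch(`${baseUrl}/containers`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`,
    },
    body: JSON.stringify({
      name: "Test Container",
      description: "Test Container Description",
    }),
  });

  // The created Container (including its ID) is returned in the response body;
  // see the API documentation for the exact shape.
  console.log(await response.json());
}

createContainer().catch(console.error);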

2. Create Types/Classes (Metatypes)

The next step in creating your ontology is uploading your classes or types. In DeepLynx we refer to classes/types as Metatypes and their properties as Metatype Keys.

Creating a Metatype is easy: simply POST the required information below to {baseURL}/containers/{container-id}/metatypes. (Keep in mind that you cannot create two Metatypes with the same name.)

{
    "name": "Test Class",
    "description": "Test Class Description"
}

Once you've successfully created a Metatype, you can add its properties by POSTing Metatype Keys, using a request body similar to the one below, to {baseURL}/containers/{container-id}/metatypes/{metatype-id}/keys. Please see the API documentation for more information on the request body and the optional parameters not shown here.

{
    "name": "Quantity",
    "required": true,
    "property_name": "quantity", // property_name maps to the json property name of an incoming data object
    "description": "Quantity of the item represented by this class",
    "data_type": "number",
    "defaultValue": 0
}
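
As a sketch of how these two calls chain together, the example below creates a Metatype and then attaches a key to it using the ID returned by the first call. The base URL, token, and the way the new ID is read out of the response body are assumptions; confirm the response shape against the API documentation.

// Minimal sketch: create a Metatype, then POST a Metatype Key to it.
const baseUrl = "http://localhost:8090";   // as in the Container sketch above
const apiToken = "<your bearer token>";
const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${apiToken}`,
};

async function createMetatypeWithKey(containerId: string): Promise<void> {
  // 1. Create the Metatype itself.
  const metatypeResponse = await fetch(`${baseUrl}/containers/${containerId}/metatypes`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name: "Test Class", description: "Test Class Description" }),
  });
  const metatypeBody = await metatypeResponse.json();
  // Reading the new ID this way is an assumption about the response shape -
  // check the API documentation for your version of DeepLynx.
  const metatypeId: string = metatypeBody.value?.[0]?.id;

  // 2. Attach a property (Metatype Key) to the new Metatype.
  await fetch(`${baseUrl}/containers/${containerId}/metatypes/${metatypeId}/keys`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      name: "Quantity",
      required: true,
      property_name: "quantity",
      description: "Quantity of the item represented by this class",
      data_type: "number",
    }),
  });
}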

3. Create Relationship Types (Metatype Relationships)

The next step in creating your ontology is uploading your relationship types, or Metatype Relationships.

Creating a Metatype Relationship is easy: simply POST the required information below to {baseURL}/containers/{container-id}/metatype_relationships. (Keep in mind that you cannot create two Metatype Relationships with the same name.)

{
    "name": "Test Relationship",
    "description": "Test Relationship Description"
}

Once you've successfully created a Metatype Relationship, you can add its properties by POSTing Metatype Relationship Keys, using a request body similar to the one below, to {baseURL}/containers/{container-id}/metatype_relationships/{metatype-relationship-id}/keys. Please see the API documentation for more information on the request body and the optional parameters not shown here.

{
    "name": "Quantity",
    "required": true,
    "property_name": "quantity", // property_name maps to the json property name of an incoming data object
    "description": "Quantity associated with this relationship",
    "data_type": "number",
    "defaultValue": 0
}
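
A corresponding sketch for relationships is below; it creates the Metatype Relationship and returns its ID so it can be used in the Relationship Pair step that follows. As before, the base URL, token, and response-shape handling are assumptions to verify against the API documentation.

// Minimal sketch: create a Metatype Relationship and return its ID
// for use when building a Metatype Relationship Pair.
const baseUrl = "http://localhost:8090";   // as in the sketches above
const apiToken = "<your bearer token>";

async function createRelationship(containerId: string): Promise<string> {
  const response = await fetch(
    `${baseUrl}/containers/${containerId}/metatype_relationships`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiToken}`,
      },
      body: JSON.stringify({
        name: "Test Relationship",
        description: "Test Relationship Description",
      }),
    },
  );

  const body = await response.json();
  // Response-shape assumption, as in the Metatype sketch; check the API documentation.
  return body.value?.[0]?.id;
}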

4. Associate Metatypes - Metatype Relationship Pairs

Now that you've created Metatypes and Metatype Relationships, along with their keys, you can start creating relationships between types. To do this you need three things: the origin (parent) Metatype ID, the destination (child) Metatype ID, and the ID of the Metatype Relationship that describes the type of relationship between them. With this information you can now create a Metatype Relationship Pair.

Once you have that relationship information, POST it to {baseURL}/containers/{container-id}/metatype_relationship_pairs

{
	"name": "Relationship Pair Test",
	"description": "Relationship Pair Test Description",
	"origin_metatype_id": "",
	"destination_metatype_id": "",
	"relationship_id": "",
	"relationship_type": "many:many" // valid values are many:many, one:one, one:many, many:one
}
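
Putting the previous steps together, the sketch below pairs two Metatypes through a Metatype Relationship. The three IDs are the ones returned when you created those records; the base URL and token remain placeholder assumptions.

// Minimal sketch: associate two Metatypes through a Metatype Relationship Pair.
const baseUrl = "http://localhost:8090";   // as in the sketches above
const apiToken = "<your bearer token>";

async function createRelationshipPair(
  containerId: string,
  originMetatypeId: string,
  destinationMetatypeId: string,
  relationshipId: string,
): Promise<void> {
  await fetch(`${baseUrl}/containers/${containerId}/metatype_relationship_pairs`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`,
    },
    body: JSON.stringify({
      name: "Relationship Pair Test",
      description: "Relationship Pair Test Description",
      origin_metatype_id: originMetatypeId,
      destination_metatype_id: destinationMetatypeId,
      relationship_id: relationshipId,
      relationship_type: "many:many", // or one:one, one:many, many:one
    }),
  });
}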

Using an .owl File or URL

A container may be created from an existing ontology (.owl) file. This functionality is provided through the {baseURL}/containers/import API endpoint. Either a local file or a URL pointing to a .owl file may be provided. Please note that if the URL points to a file in GitHub, you must provide the link to the raw version of the file.

The URL should be provided through a path parameter. Additionally, the name of the container to be created must be provided and a description may be provided as well. The description can be either a string of text or the XML element that should be interpreted as containing the description, such as obo:IAO_0000115.

The ontology must be in a certain format to be properly parsed and stored in DeepLynx. Please adhere to the following guidelines:

  • Each class, relationship, and data property should have at most one definition annotation. It is expected that the obo:IAO_0000115 annotation is used for class definitions. Please open a GitHub issue if this will not work for your needs.
  • Certain characters cannot be parsed correctly. A container import attempts to remove such characters, but please open a GitHub issue if you are unable to import the ontology file due to this sort of issue.
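
For illustration, a sketch of an import request from a URL is below. The parameter names (path for the .owl URL, name, and description) and their placement in the query string are assumptions drawn from the description above; confirm the exact parameter names and locations against the API documentation before use.

// Minimal sketch: import an ontology into a new Container from a .owl URL.
// Parameter names and placement are assumptions - verify against the API documentation.
const baseUrl = "http://localhost:8090";   // as in the sketches above
const apiToken = "<your bearer token>";

async function importOntology(owlUrl: string): Promise<void> {
  const params = new URLSearchParams({
    name: "Imported Container",
    description: "obo:IAO_0000115", // a plain-text description also works
    path: owlUrl,                   // must be a raw file URL if hosted on GitHub
  });

  const response = await fetch(`${baseUrl}/containers/import?${params.toString()}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiToken}` },
  });

  console.log(await response.json());
}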

Optional - Populating DeepLynx with the DIAMOND Ontology

There is an active ontology used for nuclear applications, DIAMOND, that can serve as a functional example; point the import at the raw version of the DIAMOND ontology file. Your endpoint is determined by your .env configuration. Following the configuration included in the repo, it would be: localhost:8090/containers/import

Postman is a useful tool for testing your locally hosted install.

In api_documentation there is an up-to-date Postman .json file you can use to import the collection into your local Postman. If you choose to use it, the recommendation is to create an Environment and set the baseUrl environment variable.
