
Architecture & Technology

This document briefly covers the Orvium platform from a purely technological perspective.

This is a living document that will continue to evolve during the development of the project. The following sections are informative only and may change at any moment to adapt to new requirements.

Web Application

Orvium Web App is the entry point to the Orvium platform. This component is a web application developed using the popular Angular framework. The web application connects to REST APIs to authenticate users, write and retrieve data, show notifications, and more. It also integrates with the Metamask browser extension to allow users to interact with the blockchain.
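
For illustration, here is a minimal sketch of how the web application might call the REST API from an Angular service. The endpoint path and the Deposit shape are hypothetical placeholders, not the actual Orvium API.

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Hypothetical shape of an article deposit; the real API may differ.
export interface Deposit {
  id: string;
  title: string;
  status: string;
}

@Injectable({ providedIn: 'root' })
export class DepositService {
  constructor(private http: HttpClient) {}

  // Fetch the current user's deposits from the REST API.
  getMyDeposits(): Observable<Deposit[]> {
    return this.http.get<Deposit[]>('/api/deposits');
  }
}
```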

REST API

Orvium REST API provides the data and the business logic to the Orvium Web Application. The API uses an internal database to store all the metadata concerning the research article lifecycle. This metadata is used to give users details about their articles, peer reviews, pending actions, notifications, and so on. The REST API is also responsible for storing the raw data of the articles.
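
As a sketch of the kind of metadata tracked per article, a record might look like the following. The field names are illustrative assumptions, not the actual internal schema.

```typescript
// Illustrative metadata record for a research article; the actual
// database schema is internal to the Orvium API.
interface ArticleMetadata {
  id: string;
  title: string;
  authors: string[];
  status: 'draft' | 'in-review' | 'published';
  peerReviewIds: string[];
  pendingActions: string[];
  fileHash: string; // hash of the raw article file
  createdAt: Date;
  updatedAt: Date;
}
```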

In addition, the API communicates with the Ethereum blockchain to verify important details of the research article publication lifecycle.
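
A minimal sketch of this kind of server-side verification using the ethers.js library follows. The contract address, the ABI fragment, and the isPublished function are hypothetical assumptions; the real Orvium contracts differ.

```typescript
import { ethers } from 'ethers';

// Hypothetical contract details for illustration only.
const CONTRACT_ADDRESS = '0x0000000000000000000000000000000000000000';
const ABI = ['function isPublished(bytes32 articleHash) view returns (bool)'];

async function verifyPublication(articleHash: string): Promise<boolean> {
  // Connect to an Ethereum node (the RPC URL is an assumption).
  const provider = new ethers.JsonRpcProvider('https://rpc.example.org');
  const contract = new ethers.Contract(CONTRACT_ADDRESS, ABI, provider);
  // Ask the contract whether this article hash has been published.
  return contract.isPublished(articleHash);
}
```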

Cloud based

Our backend is designed to run in the cloud and supports auto-scaling out of the box. This means the platform grows automatically with demand.
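
The specific cloud provider and configuration are not covered here; as one hypothetical sketch, an auto-scaling backend could be declared with AWS CDK in TypeScript, scaling out when CPU demand rises.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import { Construct } from 'constructs';

// Hypothetical sketch: an auto-scaling group that grows with CPU demand.
class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc');

    const asg = new autoscaling.AutoScalingGroup(this, 'ApiAsg', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
      machineImage: ec2.MachineImage.latestAmazonLinux2(),
      minCapacity: 1,
      maxCapacity: 10,
    });

    // Scale out when average CPU utilization exceeds 60%.
    asg.scaleOnCpuUtilization('CpuScaling', { targetUtilizationPercent: 60 });
  }
}
```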

It also integrates with other cloud services, such as monitoring and alerting systems, which give us great insight into the performance of the API.

Blockchain integration

Orvium is integrated with Metamask to interact with the Ethereum blockchain.
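
A minimal sketch of the browser-side handshake follows. The injected window.ethereum provider and the eth_requestAccounts method are Metamask's documented entry points; the surrounding code is illustrative.

```typescript
export {}; // make this file a module so the global augmentation is valid

// Metamask injects an EIP-1193 provider as window.ethereum.
declare global {
  interface Window {
    ethereum?: { request(args: { method: string; params?: unknown[] }): Promise<unknown> };
  }
}

async function connectWallet(): Promise<string | undefined> {
  if (!window.ethereum) {
    console.warn('Metamask is not installed');
    return undefined;
  }
  // Prompt the user to connect their accounts to the application.
  const accounts = (await window.ethereum.request({
    method: 'eth_requestAccounts',
  })) as string[];
  return accounts[0];
}
```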

Event Driven Architecture

Some tasks in the platform can take a long time to complete. In these cases, we need to process the task in the background and keep users informed of what is happening behind the scenes. For this reason, we designed our platform following an Event Driven Architecture, a powerful pattern that gives us great flexibility and functionality.
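
As a toy illustration of the pattern (not the actual Orvium event bus), a long-running task can emit an event that a notifier consumes:

```typescript
import { EventEmitter } from 'events';

// A toy in-process event bus; in production this role would be played
// by a message broker or a cloud eventing service.
const bus = new EventEmitter();

// A subscriber that notifies the user when a task finishes.
bus.on('task:completed', (taskId: string) => {
  console.log(`Notify user: task ${taskId} finished`);
});

// A long-running task runs in the background and emits an event when done.
async function processInBackground(taskId: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 1000)); // simulate work
  bus.emit('task:completed', taskId);
}

processInBackground('convert-pdf-42');
```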

Looking to the future

Linked Data and Semantic Web

The Academic Knowledge Network is one of the sectors where structured data is key to publishing content on the web. A number of initiatives, some of them at a governmental level, have promoted linked data as the standard way to publish journals. One of the most important criteria for assessing the quality of public journals is whether the data exposed in their APIs follows the standards defined by DCMI. These standards allow crawlers to index the data and establish relationships across academic data on the World Wide Web.

However, the current ontologies, agreed upon by the W3C and other entities as standards for scholarly publications, do not cover all the information we aim to provide. Part of our mission is to enrich the Academic Knowledge Network with more granularity and traceability for each piece of science shared on the Web.

To meet this challenge, we need to extend the current ontologies to add more information about our entities. Note that we are now making public not only the scientific research itself, but also peer reviews, drafts, and every version between the first submission and the final article.
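
As a sketch, a publication annotated with DCMI terms in JSON-LD might look like the following; the identifiers and values are hypothetical.

```typescript
// Hypothetical JSON-LD description of a publication using DCMI terms.
const publication = {
  '@context': { dcterms: 'http://purl.org/dc/terms/' },
  '@id': 'https://example.org/publications/42',
  '@type': 'dcterms:BibliographicResource',
  'dcterms:title': 'An Example Manuscript',
  'dcterms:creator': 'Jane Doe',
  'dcterms:issued': '2019-01-01',
  // The kind of extended relationship Orvium aims to expose, e.g.
  // linking an earlier draft or a peer review to the final article.
  'dcterms:hasVersion': 'https://example.org/publications/42/v1',
};

console.log(JSON.stringify(publication, null, 2));
```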

Decentralized Storage

Decentralized storage looks like a promising solution for the large datasets that are commonly used in scientific research. We are evaluating solutions such as IPFS for this purpose.
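
A minimal sketch of what this could look like with the ipfs-http-client library follows; the daemon address is an assumption, and this is an evaluation sketch rather than a committed design.

```typescript
import { create } from 'ipfs-http-client';

// Sketch: store a dataset on IPFS and get back its content identifier.
async function storeDataset(content: Uint8Array): Promise<string> {
  // Assumes a local IPFS daemon exposing the HTTP API on port 5001.
  const ipfs = create({ url: 'http://127.0.0.1:5001' });
  const { cid } = await ipfs.add(content);
  return cid.toString(); // content-addressed: the CID is derived from the data
}
```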

Big Data Analytics

Our goal is to provide meaningful insights to users of the platform to help with their work, such as:

  • Calculate the impact factor using public information and data
  • Identify citation and peer review rings
  • Suggest the right peer reviewers to increase the quality of the work
  • Facilitate peer review accuracy
  • Identify and suggest keywords for the manuscript
  • Find and identify indirect relations between publications
  • Automatically classify papers based on their content
  • Help to identify the journal that best fits a paper
  • Identify emerging trends and topics in specific research communities

We need to analyze large amounts of data to obtain these insights. For this reason, we will integrate multiple data sources: existing research literature, connections between researchers and publications, social media feeds, and more. This data will be analyzed using machine learning tools.
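
To make one of the listed insights concrete, keyword suggestion can be approximated with a TF-IDF score over a corpus. The following is a minimal toy sketch, not our production pipeline, which would use large datasets, proper tokenization, and machine learning models.

```typescript
// Minimal TF-IDF keyword scoring over a tiny corpus.
function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z]+/g) ?? [];
}

function topKeywords(doc: string, corpus: string[], k = 5): string[] {
  const docTokens = tokenize(doc);
  const tf = new Map<string, number>();
  for (const t of docTokens) tf.set(t, (tf.get(t) ?? 0) + 1);

  // For each term, weight its frequency in the document against how
  // many documents in the corpus contain it (inverse document frequency).
  const corpusTokens = corpus.map((d) => new Set(tokenize(d)));
  const score = (term: string): number => {
    const df = corpusTokens.filter((s) => s.has(term)).length;
    const idf = Math.log((1 + corpus.length) / (1 + df)) + 1;
    return (tf.get(term)! / docTokens.length) * idf;
  };

  return [...tf.keys()].sort((a, b) => score(b) - score(a)).slice(0, k);
}

const corpus = [
  'peer review improves manuscript quality',
  'blockchain enables transparent peer review',
];
console.log(topKeywords(corpus[1], corpus));
```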