Neurelo Build Docs

Building A Financial Terminal with Neurelo and MongoDB in Rust

Author: James Shockley (james@neurelo.com)


Last updated 10 months ago

Introduction

I’ve always been fascinated by interfaces that feel like they belong to a different, or even bygone, era and yet somehow persist.

Not necessarily legacy interfaces, but interfaces that are timeless. Interfaces which, despite an ever-evolving world of interface paradigms emerging and developing around them, remain understood by their intended users as the correct medium for communicating information and executing processes.

The most enduring example of such an interface is the Bloomberg Terminal User Interface (TUI): a software system for monitoring and analyzing markets, placing trades, and handling many other tasks required by users in the financial services sector.

Let’s build a reproduction of this classic, timeless interface using three exciting technologies - Neurelo, MongoDB, and Rust. This will be our end result:

Unlike MongoDB and Rust, Neurelo itself may benefit from an introduction, unless you’re one of the thousands of users already building applications with it.

Neurelo, in short, instantly turns your database into APIs, which you can then use to facilitate communication with your application.

What this means for us is that Neurelo will allow us to add a structured schema to the entities we need to store & query in MongoDB and deploy a backend that will resolve API requests to queries that are executed against our MongoDB collections.

This gives us the benefit of not having to maintain any database-driver connection state within our client application. Our terminal will be able to access our MongoDB cluster using the same HTTP endpoints that, say, a native mobile application, or even a web application would.

There’s a lot more that Neurelo can do, but we’ll only bite off what we need and leave the rest for later.

Step 1: Define our Data Model

Starting a new project by modeling the data is typically a great exercise, and today is no exception. Let’s create a quick sketch of the data we’ll need for our TUI.

In our application, we have Security, Trade, and Portfolio objects, each of which has its own set of properties as well as relationships to the others.

Let’s start with the Portfolio. A Portfolio is a container entity and is ultimately the origin of a transaction, since a user (not pictured) will need a Portfolio to associate a Trade with.

The Trade itself is an intermediate entity: Trades contain a reference to a Portfolio, as well as to something that is intended to be traded. That third entity is called a Security.

And, as mentioned, each of these objects contains its own properties, separate from its relationships with the other objects.

So what does our data model look like fully realized?

In case you are wondering what ERD (Entity-Relationship Diagram) utility I used to generate this: it’s actually a side feature of Neurelo, which we get just by modeling our data there!

Let’s move on to do exactly that.

Step 2: Build our Data Model in Neurelo

Next, create a new Project using either of the two symbols circled below.

The most important option to consider when completing the Create Project modal is picking the correct Database Engine for your project. Neurelo supports several databases (PostgreSQL, MySQL, and MongoDB), and most concepts are interchangeable between them, but with MongoDB we can also instantly provision an evaluation database on MongoDB Atlas without leaving Neurelo!

Now, you’ll probably notice a lot going on, and that’s perfectly fine: what you’re looking at in the center is our Quick Start Guide, which we’ll work through one step at a time.

In the Quick Start Guide, you have two tasks. I’d like you to start the second one by pressing the Build Schema button.

Now, you’re going to build a Schema. The purpose of a Schema in Neurelo is to express the entities in our ERD as a collection of Objects with Properties. More on Objects and Properties, later.

The Schema which we will create in a moment will be one component in a collection of Definitions. Think of Definitions, of which your Schema is a component, as a declarative statement of how you intend to interact with your data: both data as it resides in your database, and data as you interact with it through our various APIs. Simply put, Definitions define how you interact with your data.

You can create and edit a Schema in different ways within Neurelo: either through the visual Schema Builder, or through the JSON/YAML Editor. Today, you’ll try both!

You’ll use the JSON Editor to get started, and I’ll ask you to perform a simple, optional task using the Schema Builder.

First, switch the Schema editing mode context to the JSON Editor. You may do this in the top right corner.

Next, click Commit

The Schema diff will be visualized for you, and you’ll be asked to include a brief commit message. If this workflow feels familiar to you as a developer because you use Git, then great! You may also be interested to know that Neurelo allows you to manage your schema as code entirely within your own Git repository as well. I’ll include a link to documentation on this feature at the end!

Since this is just an example, and not a demonstration of professional git conduct, we’ll include a recent happy memory as a meaningful commit message.

Next, using the Mode selector in the top right we’ll switch back from JSON Editor mode to Schema Builder, inverting the action we took at the start.

Now, we can navigate between our Objects (Entities, in our previous ERD) and their Properties. I would like to encourage you to (optionally) use the Schema Builder view to switch between the Objects and Properties which we created and familiarize yourself with them.

Before moving onto the next step and deploying our backend, I do have two notes for those trying to recreate this schema from scratch.

  1. Enums

We have an instance of an Enum named action which is a Property of the trade Object. The action Enum can represent either a Buy or Sell action.

Enums in Neurelo are defined at a Schema scope, but instanced & created as a Property of an Object. Meaning, you can share an Enum between multiple Objects!

The way to create an Enum is described below

  2. Relationships

The entirety of our project consists of two Many-to-One relationships:

  • A relationship between many trades and one portfolio

  • A relationship between many trades and one security

I’ll focus just on describing the One-to-Many case within Neurelo, though any relationship type is supported.

This is most simply expressed in a simplified version of the JSON Schema demonstrating the relationship between Portfolios and Trades.

{
  "objects": {
    "portfolio": {
      "properties": {
        "id": {
          "type": "string",
          "identifier": true,
          "sourceName": "_id",
          "sourceType": "ObjectId",
          "default": {
            "function": "auto"
          },
          "description": "..."
        },
        "trade_ref": {
          "type": "array",
          "items": {
            "$ref": "#/objects/trade"
          }
        }
      }
    },
    "trade": {
      "properties": {
        "id": {
          "type": "string",
          "identifier": true,
          "sourceName": "_id",
          "sourceType": "ObjectId",
          "default": {
            "function": "auto"
          },
          "description": "..."
        },
        "portfolio_id": {
          "type": "string",
          "sourceType": "ObjectId",
          "description": "..."
        },
        "portfolio_ref": {
          "$ref": "#/objects/portfolio",
          "relation": {
            "attrKey": [
              "portfolio_id"
            ],
            "foreignAttrKey": [
              "id"
            ]
          }
        }
      }
    }
  }
}

It would not be inaccurate to consider the Relationship between two Objects to itself be a variant of a Property, one which is expressed on each Object involved in the relationship.

Awesome work keeping up! Modeling data gets easier with experience, but it never gets easy to do it manually. And while Neurelo won’t eliminate the inherent complexity in real-world data, we aim to make data modeling the only hard problem that you need to solve.

Check out the new "Schema Generation using AI" capability we have just released in Neurelo to make even this part easier.

Step 3: Deploy Our API Server (Backend)

Now that our Schema is created and committed, we can deploy a backend based on that data model to handle our requests for us. In order to do that, we’ll just need to click a few buttons.

Our Schema represents the shape of the data we are going to use. So, how do we realize that as a running API server connected to our database?

Logically, we’d need to do a few things:

  1. An entity to store our data, in this case a MongoDB database. Let’s call this a Data Source.

  2. Some location where hardware is deployed to run an API backend. Let’s call this a Gateway.

  3. Some container entity to tie together a Commit of our Schema, our Data Source, and a Gateway. Let’s call this an Environment.

  4. A backend API server to run queries against our Data Source. Let’s call this a Query Runner.

  5. Some API Keys to authenticate requests against that Query Runner.

Here is how I conceptualize the relationship, minus API Keys.
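The conceptual chain above can also be sketched as plain Rust types. To be clear, these are illustrative only; they are not Neurelo API types, just a way to picture the containment relationship:

```rust
// Illustrative sketch only -- NOT Neurelo API types.
pub struct DataSource { pub name: String }  // where the data lives (our MongoDB cluster)
pub struct Gateway { pub region: String }   // where the API backend is deployed
pub struct Commit { pub id: String }        // a committed version of our Definitions
pub struct ApiKey { pub token: String }     // authenticates requests to the runner

// An Environment ties a Commit, a Data Source, and a Gateway together;
// Query Runners start within it, and API Keys authorize access to them.
pub struct Environment {
    pub commit: Commit,
    pub data_source: DataSource,
    pub gateway: Gateway,
    pub api_keys: Vec<ApiKey>,
}
```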

First, we’ll create a Data Source. I’ll be using a MongoDB Atlas cluster. We’ll use the following Quick Start link in order to automatically provision the cluster in the background.

Once you name the data source & create a password, you’ll be ready to Submit and start the provisioning process.

If you’re a quick reader, you’ll notice the toast message containing the following:

Neurelo provisioned Data Source creation started. This may take up to 15 minutes. Please feel free to leave this page and come back to it later.

Let’s switch to the Data Sources tab and keep an eye on the status icon of our database in the top right of the card. As mentioned, creation can take up to 15 minutes, but is usually way faster.

Once the instance is provisioned, switch back to the Quick Start guide via “Dashboard”.

Now we’ll create a new Environment to act as a container which references the Data Source and the Schema Commit we’ve created, and designates a Gateway to deploy them onto!

As mentioned earlier, an Environment as a concept really just exists to realize Definitions (including our Schema) at a specific Commit against a Data Source on a Gateway. An Environment can be a short-lived entity, for example a “Testing” Environment in our case. Many different environments and data sources can be created within a project for specific use-cases, e.g. development for building a new feature or fixing a pesky bug, QA, staging, production, or for use in a tutorial like this one.

Next, from Quick Start, we’ll generate an API Key, which will allow us to authenticate the incoming API request and route these API requests automatically to the correct Environment.

Since we’ll want to both simulate trade volume (Write) and refresh portfolio data (Read), we’ll opt for a Read/Write token and save that to a safe, but temporary, local location.

Finally, we’ll click the Start Runners button in order to deploy our API backend.

By navigating to the Environment tab, you can use your Environment’s Query Runner status indicator as a simple heuristic: once the indicator transitions from red -> orange -> green, your API backend is ready to serve your requests!

Step 4: Write our Rust client

Let’s start by recapping what our requirements of the Rust client will be, as they relate to interacting with our data layer.

We know that in order to simulate a trading terminal within our application, we’ll need to handle the following cases:

  • Initializing empty portfolio objects

  • Initializing a starting set of security objects

  • Creating trades which reference a portfolio and a security

  • Getting the trades which belong to a given portfolio

As always, we’ll start with the data model, in api.rs

Let’s take a look at our CreateSecurity, CreateTrade, and CreatePortfolio structs

#[derive(Debug, Deserialize, Serialize, Args)]
pub struct CreateSecurity {
    #[clap(long)]
    pub ticker_symbol: String,
    #[clap(long)]
    pub company_name: String,
}

#[derive(Debug, Serialize, Deserialize, Args)]
pub struct CreateTrade {
    #[clap(long)]
    pub quantity: i32,
    #[clap(long)]
    pub price: f32,
    #[clap(long)]
    pub action: Action, 
    #[clap(long)]
    pub portfolio_id: String,
    #[clap(long)]
    pub security_id: String,
}

#[derive(Debug, Serialize, Args)]
pub struct CreatePortfolio {
    #[clap(long)]
    pub name: String,
}

The #[clap(long)] attribute indicates that we’ll use the property name as a long command-line flag when passing each property into our CLI to, for example, create an Object.
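Under the hood, clap derives the long flag name from the snake_case field name by converting it to kebab-case, which is why a field like portfolio_id is passed as --portfolio-id. A rough, std-only sketch of that mapping (not clap’s actual implementation):

```rust
// Approximates how clap's derive turns a snake_case field name
// into a long command-line flag for #[clap(long)].
fn to_long_flag(field: &str) -> String {
    format!("--{}", field.replace('_', "-"))
}
```

So `to_long_flag("ticker_symbol")` yields `--ticker-symbol`, matching the flags used in the commands throughout this section.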

You may notice that both the date and id properties are missing from our structs. Since both of these are the result of default functions at Object creation time, we don’t need them present in a cli create portfolio command.

Speaking of the CLI, though, how do we even want to interact with our objects through it? Let’s set the bar exceptionally low for ourselves: we’d just like to Create and Get. Respectively, that may look like…

cli create portfolio --name 'Roaring Feline'
{
  "id": "667061c4e08be69d6d12666e",
  "name": "Roaring Feline"
}

cli create security --ticker-symbol 'MDB' --company-name 'MongoDB Inc'
{
  "id": "667061efe08be69d6d12666f",
  "ticker_symbol": "MDB",
  "company_name": "MongoDB Inc"
}

cli create trade --quantity 10 --action "BUY" --price 500.00 --portfolio-id 667061c4e08be69d6d12666e --security-id 667061efe08be69d6d12666f
{
  "id": "66706213e08be69d6d126670",
  "quantity": 10,
  "price": 500.0,
  "date": "2024-06-17T16:19:31.789Z",
  "portfolio_id": "667061c4e08be69d6d12666e",
  "security_id": "667061efe08be69d6d12666f",
  "action": "Buy"
}

cli get portfolio --id 667061c4e08be69d6d12666e
{
  "id": "667061c4e08be69d6d12666e",
  "name": "Roaring Feline",
  "trade_ref": [
    {
      "id": "66706213e08be69d6d126670",
      "quantity": 10,
      "price": 500.0,
      "date": "2024-06-17T16:19:31.789Z",
      "portfolio_id": "667061c4e08be69d6d12666e",
      "security_id": "667061efe08be69d6d12666f",
      "action": "Buy"
    }
  ]
}

And so on and so forth. Note that when we get our portfolio contents at the end, we see our array of trade objects which are associated with the portfolio we created. Let’s demystify how this happens!

As a quick aside, the primary interface for interacting with Neurelo Query Runners in-code is the Neurelo SDK, which is generated for your Schema. SDK support is currently available for Python, Go, and TypeScript/JavaScript, with more languages coming soon, including Rust!

The SDK works alongside VS Code and other text editors to give you excellent type checking and autocompletion. I will not be using that SDK today, though; we’ll be building our client the “hard” way, which, as you’ll see, is actually still pretty easy.

So, let’s zoom in on that create trade command we issued. What happened there?

For that, let’s look in our http.rs module. In here, you’ll see how a custom Client interacts with the Neurelo API.

Creating a Client is actually quite simple! We just build a reqwest HTTP client with our API Key set as a default header, and store our Gateway URL alongside it.

#[derive(Debug)]
pub struct Client {
    url: String,
    http_client: reqwest::Client,
}

impl Client {
    pub fn new(url: String, api_key: String) -> Result<Self> {
        let api_key = HeaderValue::try_from(api_key)?;
        let default_headers = std::iter::once((API_KEY, api_key)).collect();

        let inner = ClientBuilder::new()
            .default_headers(default_headers)
            .https_only(true)
            .use_rustls_tls()
            .build()?;

        Ok(Self {
            url,
            http_client: inner,
        })
    }
}

Note - The Gateway URL for your environment can be found in the Environments View of your project. The API Key is the key you created and saved locally earlier as part of setting up your environment. If you can't find your saved API key, you can go to "API Keys" under your environment, revoke the lost key, and create a new one.

Next, let’s look at the public interface for creating not just a trade, but all of our objects. Here, too, we find the code is quite simple, all following a similar pattern aside from trade, which we’ll come to next.

pub async fn create_object(&self, object: CreateObject) -> Result<Response> {
	let object_string = object.to_string();
	match object {
		CreateObject::Portfolio(portfolio) => 
			self.create(&object_string, portfolio).await,
		CreateObject::Security(security) => 
			self.create(&object_string, security).await,
		CreateObject::Trade(trade) => 
			self.create(&object_string, trade.to_json_body()).await,
	}
}

Unlike our portfolio and security objects, our trade entity contains relationships back to our other entities, so we need to include the relationship name and our intent on how to realize that relationship.

In our case, we just need to create a trade which connects to an existing portfolio and security object. Our CreateTrade implementation can look like this

impl CreateTrade {
    pub fn to_json_body(&self) -> serde_json::Value {
        serde_json::json!({
            "quantity": self.quantity,
            "price": self.price,
            "action": self.action,
            // Name of the relationship
            "portfolio_ref": {
                // Intent of the relationship
                "connect": {
                    // id of the portfolio this trade relates to
                    "id": self.portfolio_id,
                }
            },
            // Name of the relationship
            "security_ref": {
                // Intent of the relationship
                "connect": {
                    // id of the security this trade relates to
                    "id": self.security_id,
                }
            }
        })
    }
}
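For the trade we created earlier from the CLI, this body would serialize to JSON along these lines (ids taken from the example above; key order and the exact Action representation depend on serde configuration):

```json
{
  "quantity": 10,
  "price": 500.0,
  "action": "Buy",
  "portfolio_ref": { "connect": { "id": "667061c4e08be69d6d12666e" } },
  "security_ref": { "connect": { "id": "667061efe08be69d6d12666f" } }
}
```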

And that’s it! This is all the custom logic necessary to represent create actions! The last thing I want to cover is, how exactly did we get the trades back when we performed a ‘get’ on a portfolio object earlier?

Like this:

get portfolio --id 667061c4e08be69d6d12666e
{
  "id": "667061c4e08be69d6d12666e",
  "name": "Roaring Feline",
  "trade_ref": [
    {
      "id": "66706213e08be69d6d126670",
      "quantity": 10,
      "price": 500.0,
      "date": "2024-06-17T16:19:31.789Z",
      "portfolio_id": "667061c4e08be69d6d12666e",
      "security_id": "667061efe08be69d6d12666f",
      "action": "Buy"
    }
  ]
}

The answer is actually really simple: we can manipulate our request with query parameters. For example, the following code adds a query parameter containing an arbitrary number of object ids in order to filter and select just those objects.

fn add_id_filter_param(&self, builder: RequestBuilder, ids: &Vec<String>) 
-> RequestBuilder {
	let k = String::from("filter");
	let v = json!({
		"id": {
			"in": ids
		}
	}).to_string();
	builder.query(&[(k, v)])
}
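Concretely, for two ids the serialized filter value is the compact string {"id":{"in":["a","b"]}}, which reqwest then URL-encodes into the query. A std-only sketch that produces the same string (illustrative; the real code above uses the json! macro):

```rust
// Builds the same compact JSON filter value that add_id_filter_param
// serializes with serde_json: {"id":{"in":[...]}}
fn id_filter_value(ids: &[&str]) -> String {
    let quoted: Vec<String> = ids.iter().map(|id| format!("\"{}\"", id)).collect();
    format!("{{\"id\":{{\"in\":[{}]}}}}", quoted.join(","))
}
```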

And this is also the answer to the original question of how we get the trades associated with a portfolio when ‘getting’ one or more portfolio objects.

We just include a query param which selects both the scalar properties (properties like id and name) and the related objects: the trades!

fn add_ref_param(&self, builder: RequestBuilder) -> RequestBuilder {
	let k = String::from("select");
	let v = json!({
		"$scalars": true,
		"$related": true
	}).to_string();
	builder.query(&[(k, v)])
}
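Putting the pieces together, the final request is just the Gateway URL, an object path, and these query parameters. A hypothetical sketch of the resulting URL shape (the path and parameter encoding here are illustrative, not Neurelo’s exact format; in the real client, reqwest assembles and URL-encodes this for us):

```rust
// Illustrative only: joins a base URL, an object path, and query
// parameters into the kind of URL the Query Runner receives.
fn build_query_url(base: &str, object: &str, params: &[(&str, &str)]) -> String {
    let query: Vec<String> = params.iter().map(|(k, v)| format!("{}={}", k, v)).collect();
    format!("{}/{}?{}", base.trim_end_matches('/'), object, query.join("&"))
}
```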

Next, we need to create the actual models of our Portfolio, Security, and Trade entities, not just a representation of the CLI args necessary to request one.

In model.rs we redefine each object with deserialization logic which will allow us to work with these entities in-code. For example, immediately after creating a Portfolio, we don’t necessarily have any trade objects to reference yet.

#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct Portfolio {
    pub id: String,
    pub name: String,
    #[serde(skip_serializing_if = "Option::is_none", rename = "trade_ref")]
    pub trades: Option<Vec<Trade>>
}

#[derive(Debug, Deserialize, Serialize, Args)]
pub struct Security {
    pub id: String,
    pub ticker_symbol: String,
    pub company_name: String,
}

#[derive(Debug, Deserialize, Serialize, Args, Clone)]
pub struct Trade {
    pub id: String,
    pub quantity: i32,
    pub price: f32,
    pub date: DateTime<Utc>,
    pub portfolio_id: String,
    pub security_id: String,
    pub action: Action,
}

We also define the following intermediate enums, which enumerate the ways the Data contained in valid responses can be presented.

If you’re confused by any of the concepts we’ve covered so far, don’t be concerned. None of this is necessary to know when working with the Neurelo SDKs.

#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum Response {
    Data(Data),
    Errors(Vec<Error>),
}

#[derive(Debug, Deserialize)]
#[serde(untagged)]
pub enum Data {
    One(Model),
    Many(Vec<Model>),
    Other(IgnoredAny),
}

#[derive(Debug, Deserialize, Serialize)]
#[serde(untagged)]
pub enum Model {
    Portfolio(Portfolio),
    Security(Security),
    Trade(Trade),
}

impl Model {
    pub fn inner_debug(&self) -> &dyn Debug {
        match self {
            Model::Portfolio(portfolio) => portfolio,
            Model::Security(security) => security,
            Model::Trade(trade) => trade,
        }
    }
}

#[derive(Debug, Deserialize)]
pub struct Error {
    pub error: String,
}
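Given these enums, a successful response body arrives externally tagged under a data key. For example, a single created portfolio would deserialize into Response::Data(Data::One(Model::Portfolio(..))) from a body like this (shape inferred from the Response enum above; values illustrative):

```json
{
  "data": {
    "id": "667061c4e08be69d6d12666e",
    "name": "Roaring Feline"
  }
}
```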

Step 5: Build our App & Visualize Our Data

While our CLI application is great, it doesn’t exactly scream “Trading Terminal”. For that, we’ll need a TUI (Terminal User Interface) library to create a mechanism for the following:

  1. Initialize and maintain some application state representing our Portfolio, Trade, and Security Objects

  2. Perform CRUD operations via our internal APIs defined in api.rs in order to keep our internal state in sync with the database

  3. Visualize some of our data & allow interactivity with the user

For the sake of simplicity, let’s scope each of these requirements.

  1. We’ll create a set of Portfolio and Security objects at initialization

  2. We’ll create a Trade with a random Security for each Portfolio on some interval

  3. We will allow the user to switch between Portfolio objects and display the Trade Objects belonging to the selected Portfolio

As with the previous section, we’ll start with our interface to the code, which, simply enough, will be the tui command.

... tui

What happens behind the scenes from here is simple conceptually. An application, called App, is initialized at runtime and updates, by default, every 250 milliseconds. For brevity, we’ll call each update of the App state a “tick”.

The initialization process is mostly the same for our Security & Portfolio objects. I’ll provide the example code for Security below. In both cases, we:

  1. Read in our mock data from a local json file

  2. Use our internal abstraction over the Neurelo HTTP API via client.create_object

  3. On a happy path, update our app state with instances of our internal Model structs

async fn init_securities(client: &Client) -> Result<Vec<Security>> {
    let mock_securities = get_mock_securities()?;
    let mut model_securities: Vec<Security> = Default::default();
    for mock in mock_securities {
        let co = CreateObject::Security(CreateSecurity {
            ticker_symbol: mock.ticker_symbol,
            company_name: mock.company_name,
        });
        let response = client.create_object(co).await?;
        match response {
            Response::Data(Data::One(Model::Security(security))) => {
                model_securities.push(security);
            }
            other => {
                eprintln!("got non-one response: {other:?}");
            }
        }
    }
    Ok(model_securities)
}

fn get_mock_securities() -> Result<Vec<MockSecurity>> {
    let config_path = "./sample/securities.json";
    let file = File::open(config_path)?;
    let reader = BufReader::new(file);
    let securities: Vec<MockSecurity> = serde_json::from_reader(reader)?;
    Ok(securities)
}

Each tick, our App’s on_tick() function is run, executing the following code of relevance to us.


pub async fn on_tick(&mut self) {
	self.ticks += 1;

	self.make_trades().await;
	self.refresh_portfolios().await;

	let p_idx = self.portfolios.state.selected()
		.unwrap_or(self.portfolios.state.offset());
	let trades = self.portfolios.items[p_idx].clone().trades
		.unwrap_or_default();
	self.logs.items = trades;

	
	let event = self.barchart.pop().unwrap();
	self.barchart.insert(0, event);
}
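The last two lines of on_tick rotate the barchart buffer by one: the last event is popped off the back and reinserted at the front. The same effect, in isolation:

```rust
// Move the last element to the front, mirroring the
// pop()/insert(0, ..) pair at the end of on_tick.
fn rotate_right_one<T>(items: &mut Vec<T>) {
    if let Some(last) = items.pop() {
        items.insert(0, last);
    }
}
```

For example, a buffer of [1, 2, 3] becomes [3, 1, 2] after one tick.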

And the logic for make_trades() and refresh_portfolios() should be fairly self-explanatory.

Conclusion

Neurelo is a fantastic tool for building TUIs, or any process or interface. The Rust programming language’s expressive type system is perfectly able to model, and interact with, the semantic data model defined in our Neurelo Schema.

Additionally, with the upcoming Rust SDK, you’ll have all the power of the Neurelo SDKs currently available in other languages, like Python, on top of what Neurelo already gives you with REST and GraphQL APIs. And with the upcoming self-hosted Gateway deployment option, you’ll be able to execute those requests completely within your own network.

But, for those eager to try extending this example project with functionality available today, I encourage you to try some of the side-quests below:

Videos

If you prefer a more visual learning experience, check out our videos for this tutorial

Part-1: Neurelo + MongoDB

Part-2: Rust Client

First, sign up for an account here. If you have one already, great! Just sign in.

Next, copy the starting JSON contents from here into the editor.

NOTE: Neurelo also has a CLI that can be used to manage your environments and backend deployments. After you complete the tutorial, download the Neurelo CLI from the dashboard and give these steps a shot via the CLI!

Let’s create a simple terminal interface, using the clap crate to interpret command line arguments. The full code is available in this repository, so don’t worry about copying as we go. I’ll keep the focus on the concepts that are expressed in the codebase.

Since the documentation for Ratatui, the TUI library we’ve selected, is already immaculate and an inspiration to me, providing a reworded version of their own documentation here would only be a disservice to their team. If you want to understand Ratatui before proceeding, I personally recommend their demo tutorial.

I’ll keep the remaining content of this section specific to the changes we made (those not related to async Rust) that extended their example to meet our requirement of visualizing data that is interacted with via Neurelo and stored in MongoDB.

Side-quests:

  • Create a per-minute “rollup” report to calculate the value of a portfolio using Custom Queries

  • Manage the schema from your own GitHub repository

  • Try the Neurelo SDK with Python