Building A Financial Terminal with Neurelo and MongoDB in Rust
Author: James Shockley (james@neurelo.com)
I’ve always been fascinated by interfaces that feel like they belong to a different, or bygone, era and yet somehow persist.
Not necessarily legacy interfaces, but interfaces that are timeless. Interfaces which, despite an ever-evolving world of interface paradigms emerging and developing around them, remain understood by their intended users as the correct medium for communicating information and executing processes.
The classic example of a timeless user interface is the Bloomberg Terminal User Interface (TUI) - a software system for monitoring markets, analyzing data, placing trades, and many other tasks required by users in the financial services sector.
Let’s build a reproduction of this classic interface using three exciting technologies - Neurelo, MongoDB, and Rust. This will be our end result:
Unlike MongoDB and Rust, Neurelo itself may benefit from an introduction - unless you’re one of the thousands of users already building applications with it.
Neurelo, in short, instantly turns your database into APIs, which you can then use to facilitate communication with your application.
What this means for us is that Neurelo will allow us to add a structured schema to the entities we need to store & query in MongoDB and deploy a backend that will resolve API requests to queries that are executed against our MongoDB collections.
This gives us the benefit of not having to maintain any database-driver connection state within our client application. Our terminal will be able to access our MongoDB cluster using the same HTTP endpoints that, say, a native mobile application, or even a web application would.
There’s a lot more that Neurelo can do, but we’ll only bite off what we need and leave the rest for later.
Starting a new project by modeling the data is typically a great exercise, and today is no exception. Let’s create a quick sketch of the data we’ll need for our TUI.
In our application, we have Security, Trade, and Portfolio objects, each of which has its own set of properties as well as relationships to the others.
Let’s start with the Portfolio - a Portfolio is a container entity and is ultimately the origin of a transaction, since a user (not pictured) will need a Portfolio to associate a Trade with.
The Trade itself is an intermediate entity: Trades contain a reference to a Portfolio, as well as to the thing that is intended to be traded. That third entity is called a Security.
And, as mentioned, each of these objects contains its own properties, separate from its relationships with the other objects.
So what does our data model look like fully realized?
In case you are wondering what ERD (Entity-Relationship Diagram) utility I used to generate this - it’s actually a side feature of Neurelo which we get just by modeling our data there!
Let’s move on to do exactly that.
First, sign up with an account here. If you have one already, great! Just sign in.
Second, create a new Project using either of the two symbols circled below.
The most important option to consider when completing the Create Project modal is picking the correct Database Engine for your project. Neurelo supports several backend databases - Postgres, MySQL, and MongoDB - and most concepts are interchangeable, but with MongoDB we can also instantly provision an evaluation database on MongoDB Atlas without leaving Neurelo!
You’ll probably notice a lot going on right now, and that’s perfectly fine - what you’re looking at in the center is our Quick Start Guide, which we’ll work through one step at a time.
In the Quick Start Guide, you have two tasks. I’d like you to start the second one by pressing the Build Schema button.
Now, you’re going to build a Schema. The purpose of a Schema in Neurelo is to express the entities in our ERD as a collection of Objects with Properties. More on Objects and Properties later.
The Schema which we will create in a moment will be one component in a collection of Definitions. Think of Definitions, of which your Schema is a component, as a declarative statement of how you intend to interact with your data - both data as it resides in your database, and data as you interact with it through our various APIs. Simply put, Definitions define how you interact with your data.
You can create and edit a Schema in different ways within Neurelo - either through the visual Schema Builder or the JSON/YAML Editor. Today, you’ll try both!
You’ll use the JSON Editor to get started, and I’ll ask you to perform a simple, optional task using the Schema Builder.
First, switch the Schema editing mode to the JSON Editor. You may do this in the top right corner.
Next, copy the starting JSON contents from here into the editor.
Next, click Commit.
The Schema diff will be visualized for you and you’ll be asked to include a brief commit message. If this workflow feels familiar to you as a developer because you use Git, then great! You may also be interested to know that Neurelo allows you to manage your schema as code entirely within your own git repository as well - I’ll include a link to documentation on this feature at the end!
Since this is just an example, and not a demonstration of professional git conduct, we’ll include a recent happy memory as a meaningful commit message.
Next, using the Mode selector in the top right we’ll switch back from JSON Editor mode to Schema Builder, inverting the action we took at the start.
Now, we can navigate between our Objects (Entities, in our previous ERD) and their Properties. I would like to encourage you to (optionally) use the Schema Builder view to switch between the Objects and Properties which we created and familiarize yourself with them.
Before moving on to the next step and deploying our backend, I have two notes for those trying to recreate this schema from scratch.
Enums
We have an instance of an Enum named action, which is a Property of the trade Object. The action Enum can represent either a Buy or Sell action.
Enums in Neurelo are defined at a Schema scope, but instanced & created as a Property of an Object. Meaning, you can share an Enum between multiple Objects!
The way to create an Enum is described below.
Relationships
The entirety of our project consists of two Many-to-One relationships:
A relationship between many trades and one portfolio
A relationship between many trades and one security
I’ll focus just on describing the One-to-Many case within Neurelo, though any relationship type is supported.
This is most simply expressed in a simplified version of the JSON Schema demonstrating the relationship between Portfolios and Trades.
It would not be inaccurate to consider the Relationship between two Objects to itself be a variant of a Property, one which is expressed on each Object involved in the relationship.
Awesome work keeping up! Modeling data gets easier with experience, but it never gets easy to do it manually. And while Neurelo won’t eliminate the inherent complexity in real-world data, we aim to make data modeling the only hard problem that you need to solve.
Check out the new "Schema Generation using AI" capability we have just released in Neurelo to make even this part way easier.
Now that our Schema is created and committed, we can deploy a backend based on that data model to handle our requests for us. In order to do that, we’ll just need to click a few buttons.
NOTE: Neurelo also has a CLI that can be used to manage your environments and backend deployments. After you complete the tutorial, download the Neurelo CLI from the dashboard and give these steps a shot via the CLI!
Our Schema represents the shape of the data we are going to use. So, how do we realize that as a running API server connected to our database?
Logically, we’d need to do a few things:
An entity to store our data - in this case, a MongoDB database. Let’s call this a Data Source.
Some location where hardware is deployed to run an API backend. Let’s call this a Gateway.
Some container entity to associate with a Commit of our Schema, our Data Source and a Gateway. Let’s call this an Environment.
A backend API Server to run queries against our Data Source. Let’s call this a Query Runner.
Some API Keys to authenticate requests against that Query Runner.
Here is how I conceptualize the relationship, minus API Keys.
First, we’ll create a Data Source. I’ll be using a MongoDB Atlas cluster. We’ll use the following Quick Start link in order to automatically provision the cluster in the background.
Once you name the data source & create a password, you’ll be ready to Submit and start the provisioning process.
If you’re a quick reader, you’ll notice the toast message containing the following:
Neurelo provisioned Data Source creation started. This may take up to 15 minutes. Please feel free to leave this page and come back to it later.
Let’s switch to the Data Sources tab and keep an eye on the status icon of our database in the top right of the card. As mentioned, creation can take up to 15 minutes, but is usually way faster.
Once the instance is provisioned, switch back to the Quick Start guide via “Dashboard”.
Now we’ll create a new Environment to act as a container which references the Data Source and the Schema Commit we’ve created, and designates a Gateway to deploy them onto!
As mentioned earlier, an Environment as a concept really just exists to realize Definitions (including our Schema) at a specific Commit against a Data Source on a Gateway. An Environment can be a short-lived entity, like for example, a “Testing” Environment in our case. Many different environments and data sources can be created within a project for specific use-cases e.g. development for building a new feature or fixing a pesky bug, QA, staging, production, or for use in a tutorial like this one.
Next, from Quick Start, we’ll generate an API Key, which will allow us to authenticate the incoming API request and route these API requests automatically to the correct Environment.
Since we’ll want to both simulate trade volume (Write) and refresh portfolio data (Read), we’ll opt for a Read/Write token and save that to a safe, but temporary, local location.
Finally, we’ll click the Start Runners button in order to deploy our API backend.
By navigating to the Environment tab, you can use your Environment’s Query Runner status indicator as a simple heuristic. Once the indicator transitions from red -> orange -> green, your API backend is ready to serve your requests!
Let’s start by recapping what our requirements of the Rust client will be, as they relate to interacting with our data layer.
We know that in order to simulate a trading terminal within our application, we’ll need to handle the following cases:
Initializing empty portfolio objects
Initializing a starting set of security objects
Creating trades which reference a portfolio and a security
Getting the trades which belong to a given portfolio
Let’s create a simple terminal interface, using the clap crate for interpreting command-line arguments. The full code is available in this repository, so don’t worry about copying as we go. I’ll keep the focus on the concepts that are expressed in the codebase.
As always, we’ll start with the data model in api.rs.
Let’s take a look at our CreateSecurity, CreateTrade, and CreatePortfolio structs.
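To make this concrete, here is a minimal sketch of what such argument structs might look like. The field names (aside from the Buy/Sell action from our schema) are illustrative placeholders rather than the repository’s exact properties, and the sketch assumes clap’s derive feature plus serde for serialization:

```rust
use clap::Args;
use serde::Serialize;

#[derive(Args, Serialize, Debug)]
pub struct CreatePortfolio {
    #[clap(long)]
    pub name: String,
}

#[derive(Args, Serialize, Debug)]
pub struct CreateSecurity {
    #[clap(long)]
    pub ticker: String,
    #[clap(long)]
    pub price: f64,
}

#[derive(Args, Serialize, Debug)]
pub struct CreateTrade {
    #[clap(long)]
    pub action: String,       // "Buy" or "Sell", matching the action Enum
    #[clap(long)]
    pub quantity: f64,
    #[clap(long)]
    pub portfolio_id: String, // id of an existing portfolio
    #[clap(long)]
    pub security_id: String,  // id of an existing security
}
```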
The #[clap(long)] attribute basically just indicates that we’ll use the property name as a command-line argument when passing each property into our CLI to, for example, create an Object.
You may notice that both the date and id properties are missing from our structs. Well, since both of these are the result of default functions at Object creation time, we don’t need these properties present in a cli create portfolio command.
Speaking of the cli though, how do we even want to interact with our objects through our CLI? Let’s set the bar exceptionally low for ourselves. We’d just like to Create and Get. Respectively, that may look like…
And so on and so forth. Note that when we get our portfolio contents at the end, we see our array of trade objects which are associated with the portfolio we created. Let’s demystify how this happens!
As a quick aside, the primary interface for interacting with Neurelo Query Runners in-code is via the Neurelo SDK which is generated for your Schema. SDK support is available currently for Python, Golang and TypeScript + JavaScript, with more languages coming soon, including Rust!
The SDK works alongside VS Code & other text editors to give you excellent type checking and autocompletion. I will not be using that SDK today. We’ll be building our client the “hard” way. Which, as you’ll see, is actually still pretty easy.
So, let’s zoom in on that create trade command we issued. What happened there?
For that, let’s look in our http.rs module. In here, you’ll see how a custom Client interacts with the Neurelo API.
Creating a Client is actually quite simple! We’re just ensuring that we build a Reqwest HTTP client with our Gateway URL and our API Key set as the default URL and header, respectively.
Note - The Gateway URL for your environment can be found in the Environments View of your project. The API Key is the key you created and saved locally earlier as part of setting up your environment. If you can't find your saved API key, you can go to "API Keys" under your environment, revoke the lost key, and create a new one.
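Here is a minimal sketch of that construction, assuming anyhow for error handling and an X-API-KEY header name - check your environment’s API reference for the exact header Neurelo expects:

```rust
use reqwest::header::{HeaderMap, HeaderValue};

pub struct Client {
    base_url: String,
    http: reqwest::Client,
}

impl Client {
    pub fn new(gateway_url: &str, api_key: &str) -> anyhow::Result<Self> {
        // The API key travels with every request as a default header.
        let mut headers = HeaderMap::new();
        headers.insert("X-API-KEY", HeaderValue::from_str(api_key)?);

        let http = reqwest::Client::builder()
            .default_headers(headers)
            .build()?;

        Ok(Self {
            base_url: gateway_url.trim_end_matches('/').to_string(),
            http,
        })
    }
}
```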
Next, let’s look at the public interface for creating not just a trade, but all of our objects. Here, too, we find the code is quite simple, all following a similar pattern aside from trade, which we’ll come to next.
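A sketch of what that shared pattern could look like is below. The /rest/{object} route and the un-enveloped request body are assumptions; the generated API reference for your environment lists the exact paths and payload shapes:

```rust
use serde::Serialize;

impl Client {
    // Shared create path: serialize the argument struct and POST it to the
    // object's endpoint.
    pub async fn create_object<T: Serialize>(
        &self,
        object: &str, // e.g. "portfolio" or "security"
        body: &T,
    ) -> anyhow::Result<serde_json::Value> {
        let url = format!("{}/rest/{}", self.base_url, object);
        let response = self
            .http
            .post(&url)
            .json(body)
            .send()
            .await?
            .error_for_status()?;
        Ok(response.json().await?)
    }
}
```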
Unlike our portfolio and security objects, our trade entity contains relationships back to our other entities, so we need to include the relationship name and our intent on how to realize that relationship.
In our case, we just need to create a trade which connects to an existing portfolio and security object. Our CreateTrade implementation can look like this:
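A hedged sketch of that implementation, reusing the hypothetical CreateTrade fields and create_object helper from the earlier sketches (the "connect"-by-id body shape is an assumption drawn from the description above, not the repository):

```rust
use serde_json::json;

impl Client {
    pub async fn create_trade(&self, args: &CreateTrade) -> anyhow::Result<serde_json::Value> {
        // The relationship fields carry both the relationship name and our
        // intent: here we connect the new trade to existing documents by id.
        let body = json!({
            "action": args.action,
            "quantity": args.quantity,
            "portfolio": { "connect": { "id": args.portfolio_id } },
            "security":  { "connect": { "id": args.security_id } }
        });
        self.create_object("trade", &body).await
    }
}
```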
And that’s it! This is all the custom logic necessary to represent create actions! The last thing I want to cover is: how exactly did we get the trades back when we performed a ‘get’ on a portfolio object earlier?
Like this…
The answer is actually really simple, we can manipulate our request with query parameters. For example, the following code adds a query parameter containing an arbitrary number of object id’s in order to filter and select for just those objects.
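Here is an illustrative sketch of that idea. The "filter" parameter name and the { "id": { "in": [...] } } shape are assumptions, so consult the generated API reference for the operators your endpoint actually accepts:

```rust
impl Client {
    // Filter a read down to a specific set of documents by id.
    pub async fn get_portfolios_by_ids(&self, ids: &[String]) -> anyhow::Result<serde_json::Value> {
        let url = format!("{}/rest/portfolio", self.base_url);
        let filter = serde_json::json!({ "id": { "in": ids } }).to_string();

        let response = self
            .http
            .get(&url)
            .query(&[("filter", filter)])
            .send()
            .await?
            .error_for_status()?;
        Ok(response.json().await?)
    }
}
```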
And this is also the answer to the original question: how do we get the trades associated with a portfolio when ‘getting’ one or more portfolio objects?
We just include a query param which selects both the scalar properties - properties like id and name - and the related objects: the trades!
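A sketch of that request, under the same caveats as above (the "select" parameter name and shape stand in for the repository’s exact code):

```rust
impl Client {
    // Ask for scalar properties plus the related trades in one request.
    pub async fn get_portfolios_with_trades(&self) -> anyhow::Result<serde_json::Value> {
        let url = format!("{}/rest/portfolio", self.base_url);
        let select = serde_json::json!({
            "id": true,
            "name": true,
            "trades": true // include the related objects, not just scalars
        })
        .to_string();

        let response = self
            .http
            .get(&url)
            .query(&[("select", select)])
            .send()
            .await?
            .error_for_status()?;
        Ok(response.json().await?)
    }
}
```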
Next, we need to create the actual models of our Portfolio, Security, and Trade entities, not just a representation of the CLI args necessary to request one.
In model.rs we redefine each object with deserialization logic which will allow us to work with these entities in-code. For example, immediately after creating a Portfolio, we don’t necessarily have any trade objects to reference yet.
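A minimal sketch of what those models might look like - field names are illustrative, and Option<Vec<Trade>> is what lets a freshly created Portfolio deserialize without any trades attached:

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug, Clone)]
pub struct Security {
    pub id: String,
    pub ticker: String,
    pub price: f64,
}

#[derive(Deserialize, Debug, Clone)]
pub struct Trade {
    pub id: String,
    pub action: String,
    pub quantity: f64,
}

#[derive(Deserialize, Debug, Clone)]
pub struct Portfolio {
    pub id: String,
    pub name: String,
    // Trades may be absent right after creation, or present when we select
    // the relationship - Option keeps both response shapes deserializable.
    #[serde(default)]
    pub trades: Option<Vec<Trade>>,
}
```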
We also define the following intermediate enums, which enumerate the variations of how the data contained in valid responses could be presented.
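An illustrative pattern for those enums (the "data" envelope and the one-or-many shapes are assumptions about the response, not the repository’s exact types):

```rust
use serde::Deserialize;

// A response's data may hold a single object or an array of them, so an
// untagged enum lets serde match whichever shape arrives.
#[derive(Deserialize, Debug)]
#[serde(untagged)]
pub enum PortfolioData {
    One(Portfolio),
    Many(Vec<Portfolio>),
}

#[derive(Deserialize, Debug)]
pub struct PortfolioResponse {
    pub data: PortfolioData,
}
```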
If you’re confused by any of the concepts we’ve covered so far, don’t be concerned. None of this is necessary to know when working with the Neurelo SDKs.
While our CLI application is great, it doesn’t exactly scream “Trading Terminal”. For that, we’ll need a TUI (Terminal User Interface) library to create a mechanism for the following:
Initialize and maintain some application state representing our Portfolio, Trade, and Security Objects
Perform CRUD operations via our internal APIs defined in api.rs in order to keep our internal state in sync with the database
Visualize some of our data & allow interactivity with the user
For the sake of simplicity, let’s scope each of these requirements.
We’ll create a set of Portfolio and Security objects at initialization
We’ll create a Trade with a random Security for each Portfolio on some interval
We will allow the user to switch between Portfolio objects and display the Trade Objects belonging to the selected Portfolio
Since the documentation for Ratatui, the TUI library we’ve selected, is already immaculate and an inspiration to me, providing a reworded version of their own documentation here would only be a disservice to their team. If you want to understand Ratatui before proceeding, I personally recommend their JSON Editor tutorial.
I’ll keep the remaining content of this section specific to the changes we made (apart from those related to async Rust) that extend their demo example to meet our requirement of visualizing data that is interacted with via Neurelo and stored in MongoDB.
As with the previous section, we’ll start first with our interface for the code, which will be the command tui, simply enough.
What happens behind the scenes from here is simple conceptually. An application, called App, is initialized at runtime and updates, by default, every 250 milliseconds. For brevity, we’ll call each update of the App state a “tick”.
The initialization process is mostly the same for our Security & Portfolio objects. I’ll provide the example code for Security below. In both cases, we:
Read in our mock data from a local JSON file
Use our internal abstraction over the Neurelo HTTP API via client.create_object
On a happy path, update our app state with instances of our internal Model structs
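A sketch of that Security initialization path, assuming a hypothetical App struct holding a Vec of the Security models sketched earlier, plus the client sketches above; the file path and the "data" envelope on the response are assumptions:

```rust
async fn init_securities(app: &mut App, client: &Client) -> anyhow::Result<()> {
    // 1. Read the mock data from a local JSON file (the path is illustrative).
    let raw = std::fs::read_to_string("data/securities.json")?;
    let seeds: Vec<serde_json::Value> = serde_json::from_str(&raw)?;

    for seed in &seeds {
        // 2. Create each object through our internal HTTP abstraction.
        let created = client.create_object("security", seed).await?;
        // 3. On the happy path, keep the deserialized model in app state.
        let security: Security = serde_json::from_value(created["data"].clone())?;
        app.securities.push(security);
    }
    Ok(())
}
```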
Each tick, our App’s on_tick() function is run, executing the following code of relevance for us.
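The method names below come straight from the description here; the signatures and bodies are a simplified sketch rather than the repository’s exact code:

```rust
impl App {
    // Runs every tick (~250 ms): place simulated trades, then re-fetch
    // portfolio data so the UI reflects what is actually stored in MongoDB.
    pub async fn on_tick(&mut self) -> anyhow::Result<()> {
        self.make_trades().await?;        // create a trade per portfolio
        self.refresh_portfolios().await?; // pull portfolios plus their trades
        Ok(())
    }
}
```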
And the logic for make_trades() and refresh_portfolios() should be fairly self-explanatory.
Neurelo is a fantastic tool for building TUIs - or any process or interface - with. The Rust programming language’s expressive type system is perfectly able to model, and interact with, the semantic data model defined in our Neurelo Schema.
Additionally, with the upcoming Rust SDK, you’ll have all the power of the Neurelo SDKs currently available in other languages, like Python, on top of what Neurelo already gives you with REST and GraphQL APIs. And with the upcoming self-hosted Gateway deployment option, you’ll be able to execute those requests completely within your own network.
But for those eager to try extending this example project with functionality available today, I encourage you to try some of the side-quests below:
Create a per-minute “rollup” report to calculate the value of a portfolio using Custom Queries
If you prefer a more visual learning experience, check out our videos for this tutorial