Bringing Neurelo’s Data APIs to Life Instantly with MySQL

Introduction

We have published several tutorials on how you can get up and running with Neurelo, from building a Bloomberg-esque financial terminal to engineering a real-time chat application, and we encourage you to explore them.

In this tutorial, we will be doing things a bit differently. Rather than creating a tutorial with a predetermined application in mind, we will see how Neurelo can get you up and running in a matter of minutes, for any application! In this example, we will be leveraging MySQL as the database for the application.

To put this idea into action with the level of randomness it demands, we will write a script to arbitrarily select an idea for a data model, create a schema for it, migrate it to our database, and build a fully functioning application layer on top of it.

One of Neurelo's real strengths is simplifying the developer complexity that lies in the layers between the application and the database, such as managing schemas, using the appropriate database programming interface, and writing correct queries. Other critical layers include query controls, health checks and database audits, planning APIs with good, up-to-date documentation, setting up observability metrics, and finally, enforcing security, including a flexible access control mechanism. We will see shortly just how Neurelo helps simplify all of this and more.

In Search of a Data Model

Let us start by searching for the topic of our data model. To do this, we will write a small script to scrape the hundreds of data model titles listed on the "Database Answers" website.

Note - this section is optional. You can skip directly to the next section if you want to get straight into building the app.

We will be using Python’s requests and BeautifulSoup libraries for this.

import random
import requests
from bs4 import BeautifulSoup

url = "https://web.archive.org/web/20160308080311/http://www.databaseanswers.org/data_models/"
response = requests.get(url)

# Decode leniently as UTF-8 before parsing the archived page
html_content = response.content.decode("utf-8", errors="ignore")
soup = BeautifulSoup(html_content, "html.parser")
ideas = []

# Each data model title is a link inside a list item
for li in soup.find_all("li"):
    a_tag = li.find("a")
    if a_tag and "href" in a_tag.attrs:
        ideas.extend(line.strip() for line in li.get_text().split("\n") if line.strip())

print(random.choice(ideas))

Running this script, the random data model topic we got was "Student Questionnaires". So let's build it! We will construct an application to store and manage student questionnaires.

Building a new application (for student questionnaires)

To get started, let us navigate to Neurelo's dashboard and create a new project for this data model. A project is the primary "working" entity in Neurelo, where everything begins.

Create a new project

To create a new project, click on the 'New' button in the dashboard. A 'New Project' modal will open up, where we will fill in our basic project information, including selecting our database engine – we are going to choose MySQL here.

On clicking “Create”, you will see Neurelo’s quick start dashboard, which should provide you with a step-by-step onboarding process.

Build a Schema

We first start by building a new schema. To do so, click on the “Build Schema” button in the dashboard.

Neurelo provides many ways to get up and running when building a schema from scratch. The highlight is the Schema AI Assistant, which we will be using for this tutorial. If you already have a well-defined perspective on what your data model should look like, you can start with an empty schema canvas and leverage our Visual Schema Builder to construct your data model definitions.

Clicking on the “Schema AI Assist” button should make the Neurelo Schema AI playground pop up.

Let us start with a simple prompt such as “Schema to manage Student Questionnaires”.

Et voilà! A basic questionnaire schema just got created with four objects - “students”, “questionnaires”, “questions”, and “responses”.

  • “students”: student_id (integer), name (string), and email (string)

  • “questionnaires”: questionnaire_id (integer), student_id (integer), date (datetime)

  • “questions”: question_id (integer), text (string), and type (enum)

  • “responses”: response_id (integer), questionnaire_id (integer), question_id (integer), and answer

  • Enum QuestionType supports the options: TrueFalse, MultipleChoice, ShortAnswer, LongAnswer

The best part about this is that relationships were automatically defined by the Schema AI. For instance,

  • “students” have a one-to-many relationship with “questionnaires”

    • One student can submit multiple questionnaires

  • “questionnaires” have a one-to-many relationship with “responses”

    • One questionnaire can get multiple responses

  • Lastly, “questions” have a one-to-many relationship with “responses”

    • One question can have multiple responses

Note - this AI-generated schema may differ for you, even with the same prompt.

If you prefer to iterate upon the AI-generated schema, simply click on the “AI Assist” button on the top-right. The Schema AI playground should again pop up.

Describe your modifications for them to be put into effect. For example, we can iterate by saying - “Make students info more comprehensive”. And just like that, the “students” object became more detailed, with additional properties such as first_name, last_name, address, phone_number, and date_of_birth.

Now that we have a new schema, let's visualize it. To do so, you can use Neurelo’s ERD functionality, right from your schema view.

If you would like to explore the related objects, simply click on the object whose relationships you want to explore, and you should get a focused view with only those objects.

Now that we are satisfied with our schema, let us commit it using Neurelo’s Git Schema functionality. To do so, click on the commit button at the top right, enter your commit message, and hit “Commit”.

Migrations and Version Control

As you can observe, the commit view not only validates the schema but also automatically generates migrations corresponding to your schema. It presents a Diff view so that you can perform a final review of your changes before you commit them. The Diff view is shown in Neurelo’s JSON schema format, which we call the Neurelo Schema Language (NSL). For a comprehensive overview of NSL, check out our Neurelo Schema Language (NSL) reference page.

Here is a snippet of the migrations auto-generated by Neurelo:

To further view/edit them and keep track of all your migrations, you can go to Neurelo’s migration view under Definitions as follows:

Now that we have our schema and its corresponding migrations, let us connect a new MySQL data source with Neurelo and initialize it with these new migrations. This will help us start the provisioning of Data APIs.

Connect to an empty MySQL database

We will be using an empty managed MySQL instance from Aiven in this example; however, the steps remain the same for any other managed database service, such as the MySQL database services on Oracle Cloud, Azure Database for MySQL, or Amazon RDS.

To connect to a data source, go back to the dashboard and click on the “Connect Data Source” button. A “New Data Source” view will appear. Enter your data source connection information, which includes your DB hostname, port, DB name, username, and password. Select the gateway with the region nearest to your data source, and if your database is behind a firewall, make sure to add the provided Neurelo IPs to your allow-list to enable us to connect to this database.

Once done, click on “Test Connection” to verify it all. If the connection test completes successfully, click on “Submit” to establish the connection.

Now that we have provisioned a new database and have created a schema, let's explore how we can connect this schema to our empty MySQL database and use Neurelo to automatically build API endpoints for us. To do this, we need to create a Neurelo Environment, followed by starting the runners.

Create an Environment

Think of environments in Neurelo as runtime workspaces that allow you to run your Neurelo APIs using a specific version (commit) of your schema definition against a specific data source (in this case, our "Evaluation DB" one).

Neurelo environments are designed to naturally align with typical Software Development Life Cycles (SDLC) as applications are developed, tested, deployed, and operated. Simply put, they help manage different stages of your project, whether it's development, testing, or production. In our scenario, we will create a testing environment.

To do so, navigate to the "Create Environment" button in the quickstart guide. Next, fill in the environment details, which include the commit of the Neurelo definitions you’d like your environment to run against, and your data source/region preferences. For this tutorial, we will select the latest commit and use the same region (aws-us-east-2) as our data source. We have also enabled observability, which will allow us to monitor and analyze the environment's API performance.

Upon creating an environment, click on the environment name (for instance, “Testing”) in the quickstart dashboard and you’ll be greeted with the following environment view:

There is a lot going on here, so let's break it down. Neurelo auto-generates both REST and GraphQL APIs for your schema. The “APIs” tab shows the REST and GraphQL APIs that have been generated and are automatically kept in sync with your schema. It shows two things: comprehensive documentation of your auto-generated API endpoints and an API reference for each endpoint that Neurelo has created for your schema.

Here is an example of an API reference created for our “students” object:

The APIs tab is also a hub where you can test and explore your REST and GraphQL APIs using the API Playground, which we will thoroughly explore shortly. Furthermore, Neurelo also creates both OpenAPI and GraphQL specifications along with a Postman Collection using your schema, which you can download by clicking on the "Specs" tab.

Apply Migrations

Let us now circle back to our student questionnaire application. Now that we have our schema, data source, and environment all set up, let's migrate our schema to the data source. To do so, head over to the Migrations tab and click on it.

You will be presented with the Migrations view. Unlike the one in the “schema” tab, this is a read-only viewer. Review the migrations, and when ready, click on “Apply 1 Migration” to apply the pending migrations. Once done, you should see a prompt indicating that the migrations were applied successfully and that your data source is up to date with your current schema.

Now that the migrations are complete, let's get some test data into our database so that we can play around with Neurelo's auto-generated APIs and eventually build our application around it.

Generate Sample (Mock/Test) Data

Neurelo provides a simple, one-click solution for this called "Data Generator". You can find this in the top right corner of the environment's view. Upon clicking it, you will be presented with an option to choose the size of your test data. For our purposes, we will select “Medium” to get around 100 records.

Click on “Start” and that should be it! You can easily visualize the new data in your database using Neurelo’s "Data Viewer". But before we do that, we need to start the runners.

Start Runners

A runner (API server) in Neurelo is the main component that executes and manages all API calls. In Neurelo's Cloud Data API Platform, a runner is a part of every environment inside a project. Runners need to be started for the application to be able to handle API requests and process them against the configured data source within an environment. The data viewer also requires these runners to be started.

To do so, click on the "Start Runners" button on the top right of your environment’s view.

Data Viewer

Next, navigate to the Data Viewer tab. You will be prompted to create a new API key. Make sure to copy and save this key somewhere safe, as we will be using it quite a bit in the following sections.

Once you have created your API key, you will be able to visualize your data, in the Data Viewer tab:

Notice how realistic the data is! This is because Neurelo uses AI to intelligently generate mock data that aligns with your schema and its attribute contexts. In fact, even the relationship identifiers are correctly mapped between your collections. For example, notice how the student_id values in “students” are correctly referenced by student_id in “questionnaires”, across two different tables in the screenshot below!

This is quite helpful when working on APIs that require relationship querying/filtering.

API Playground - Try Neurelo Data APIs

With our mock data in place, it is finally time to experiment with Neurelo's API endpoints. Navigate back to the APIs tab and click on the API playground icon, which should open Neurelo's API playground.

REST APIs

Neurelo supports APIs in both REST and GraphQL formats. We will be focusing on GraphQL APIs in this tutorial. However, before we do that, let us give the REST APIs a quick spin by trying the "Find many students" API. For this, simply go to the "Headers" tab in the API playground and input the API key from before (if you don't have it, just create a new one). When done, click on "Send", and we should see all the student records in our database.

To fetch only the first two records, you can add a value of 2 to the "take" parameter in the "Parameters" tab.
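Outside the playground, the same call can be sketched in Python with the requests library. This is a hedged sketch: the host below is a placeholder, and the /rest/students path and X-API-KEY header name are assumptions based on the playground's generated examples, so copy the exact values from your own environment.

```python
import json
import os

import requests

# Placeholder host -- use your environment's actual API URL from the dashboard.
BASE_URL = "https://us-east-2.aws.neurelo.com"
API_KEY = os.environ.get("NEURELO_API_KEY", "")

# "take" limits the result set to the first two student records.
params = {"take": 2}
headers = {"X-API-KEY": API_KEY}  # header name assumed; check the playground's Headers tab

if API_KEY:  # only hit the API when a key is configured
    response = requests.get(f"{BASE_URL}/rest/students", headers=headers, params=params)
    print(json.dumps(response.json(), indent=2))
```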

You can do much more complicated querying with these APIs. For instance, if we would like to fetch all the student names whose email ends with the domain ".org", we can do so as follows:
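In code, that filter can be expressed as a JSON parameter on the same endpoint. Again a hedged sketch: the filter/select parameter names and the endsWith operator follow the playground's conventions but should be confirmed against your environment's generated API reference.

```python
import json
import os

import requests

BASE_URL = "https://us-east-2.aws.neurelo.com"  # placeholder -- use your environment's API URL
API_KEY = os.environ.get("NEURELO_API_KEY", "")

# Match students whose email ends with ".org" and return only their names.
filter_spec = {"email": {"endsWith": ".org"}}
select_spec = {"name": True}
params = {"filter": json.dumps(filter_spec), "select": json.dumps(select_spec)}

if API_KEY:
    response = requests.get(
        f"{BASE_URL}/rest/students",
        headers={"X-API-KEY": API_KEY},  # header name assumed
        params=params,
    )
    print(response.json())
```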

You can even use Neurelo's APIs to perform complex create, update, and delete queries, and to further see them in action, we encourage you to check out our Neurelo API Reference (REST) guide.

GraphQL APIs

Let us now explore the GraphQL APIs. To switch the API playground’s view from REST to GraphQL, simply turn on the GraphQL toggle on the top right corner.

Let us now perform the same "Find many students" API call but with GraphQL. This can be structured as follows:
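For reference, here is what such a query might look like when sent from Python instead of the playground. The /graphql endpoint path and X-API-KEY header name are assumptions; the findManystudents operation name matches the one discussed below.

```python
import os

import requests

# GraphQL document for the "Find many students" call.
query = """
query {
  findManystudents {
    student_id
    name
    email
  }
}
"""

API_KEY = os.environ.get("NEURELO_API_KEY", "")
if API_KEY:  # only hit the API when a key is configured
    response = requests.post(
        "https://us-east-2.aws.neurelo.com/graphql",  # placeholder host; path assumed
        headers={"X-API-KEY": API_KEY},  # header name assumed
        json={"query": query},
    )
    print(response.json())
```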

To order the results in descending order by student id, simply change the findManystudents to findManystudents(orderBy: {student_id: desc}). Similarly, we can work with skip, take, cursor, where, and distinct parameters. For example, say we want to sort the students by their last names in ascending order, and we would like to filter the results such that only phone numbers starting with “5” are fetched. This can be done with the following query:
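Combining those parameters, a sketch of that query is shown below; the last_name and phone_number field names come from the AI-extended schema, so yours may differ.

```python
# Sort students by last name ascending and keep only phone numbers starting with "5".
query = """
query {
  findManystudents(
    orderBy: { last_name: asc }
    where: { phone_number: { startsWith: "5" } }
  ) {
    student_id
    last_name
    phone_number
  }
}
"""
print(query)
```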

Similarly, a create operation to “create one student” can be performed as follows:
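As a sketch, such a mutation might look like the following; the createOnestudents operation name mirrors the findManystudents naming convention above but is an assumption, as are the sample field values.

```python
# Create a single student record; field names follow the initial schema.
mutation = """
mutation {
  createOnestudents(
    data: {
      name: "Ada Lovelace"
      email: "ada@example.org"
    }
  ) {
    student_id
  }
}
"""
print(mutation)
```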

You can even use Neurelo's APIs to perform complex create, update, and delete queries. One of the advantages of using Neurelo’s GraphQL APIs is the ease of constructing complex nested queries. For instance, say we would like to find out all the responses that were part of a `TrueFalse` question. This can be easily done as follows:
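Here is a hedged sketch of that nested query; the findManyresponses operation and the questions relation field follow the schema's naming, but verify both in your environment's GraphQL reference.

```python
# Fetch responses together with their parent question, filtered on the
# question's enum type.
query = """
query {
  findManyresponses(
    where: { questions: { type: { equals: TrueFalse } } }
  ) {
    response_id
    answer
    questions {
      text
      type
    }
  }
}
"""
print(query)
```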

A thorough documentation on all of this is provided in our “Neurelo API Reference (GraphQL)" guide.

Another useful aspect of the environment is the ability to monitor API performance using the "Observability" tab. Neurelo also provides latency charts for all your API endpoints, along with fine-grained usage metrics (you can access these by clicking on a specific API call in the "Slowest Operations" section).

Yet another feature of the API playground is its ability to automatically generate language-specific example code. Presently, the playground supports TypeScript/JavaScript (TS/JS), Python, and Go code snippets. It also generates cURL commands. To find this example code, navigate to the “Code Examples” option on the top right of the API playground.

Here is an instance of the auto-generated cURL and Python example code for finding all the student ids:

Here’s yet another example, this time with TypeScript/JavaScript:

Tip: If you prefer using Postman, you can also download Neurelo’s generated Postman collection for your schema and import it into your Postman client. To get it, download the collection from the "APIs" section in your environment under "Specs". A complete guide is available in the How to download and use the Postman Collection for your Project section.

Custom APIs for Complex Queries

Let us say that you would like to define a rather complex query for some operations on your MySQL database. To facilitate this in a simple way, Neurelo provides Custom Queries. These essentially allow you to create Custom REST or GraphQL API endpoints.

Let us see this in action. Say we would like to find all the students who have a .com email address and whose phone numbers end with a “9”, and fetch information about all their long-form questionnaire answers. To create a custom query for this, navigate back to Definitions and go to the “Custom Queries” tab.

Click on the “New” button and enter your new custom endpoint name. We will be building a REST API, but the same flow applies to creating a custom GraphQL API. We will call it “longAnswersFetch”. Hit “Submit” and you should be presented with the Custom Query view for your new endpoint.

You can choose to use the “Query” area to write your own SQL command for this, or simply build it using the Custom Query AI Assistant. We will be using the latter for this tutorial.

Click on the AI Assist button, which should open up the Custom Query AI Playground. Next, in the prompt space, we will enter our prompt: “find all the students who have a .com email address, whose phone numbers end with a “9” and fetch information about all their long form questionnaire answers.” and hit enter.

Et voilà! Our Custom Query AI generated the SQL query for this intent. Click on the Test button to start testing this SQL in the Test view on the right. You will need to input your API key for this. The best part is that you can iterate right here using AI (or edit things manually) until you are satisfied with its construction. When done, simply press the “Use This” button to transfer the SQL over to the query view.
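The SQL the AI generates will vary from run to run, but a query matching this intent might look like the sketch below; the table and column names are taken from the schema built earlier, while the exact join structure the AI chooses is an assumption.

```sql
SELECT s.student_id,
       s.first_name,
       s.last_name,
       r.answer
FROM students AS s
JOIN questionnaires AS qn ON qn.student_id = s.student_id
JOIN responses AS r ON r.questionnaire_id = qn.questionnaire_id
JOIN questions AS q ON q.question_id = r.question_id
WHERE s.email LIKE '%.com'
  AND s.phone_number LIKE '%9'
  AND q.type = 'LongAnswer';
```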

Make sure to commit your changes once done! Neurelo provides an option to directly deploy this new commit to the environment (change the environment’s commit to point at this new commit). This can be done by checking the box titled “Deploy to environment <environment_name> after commit” in the commit popup as follows:

The best part is that the environment's view is synced with this commit within a few seconds, so when we head over to the “Testing” environment, we will find that this new Custom Query is included in the documentation and can be accessed from the API Playground. Moreover, all the OpenAPI, GraphQL, and Postman specs have been regenerated with these changes.

And that’s it! If you would like to get more information on custom queries, be sure to check out our documentation on Custom APIs for Complex Queries.

Conclusion

As you just experienced, Neurelo lets you build your applications (working with a MySQL database) very quickly and easily, all the way from building a schema for your intent to getting migrations, mock data, and instant APIs with an API server ready for consumption.

Feel free to try other examples, and check out the many other features of Neurelo which make it much easier to build your applications with databases.
