Building Micro with Micro

By Oleksandr (Sasha) Antoshchenko

4/26/2024

Case Study

TLDR: This case study shows how Micro isn't just our product; it's also our tool of choice for building new functionality within itself. Despite some current limitations in the Downstream Calls feature, we have already used Micro to develop and deploy critical components of our own infrastructure. What was estimated to take days was accomplished in just 30 minutes, exemplifying Micro's efficiency and potential. Check out our video for a walkthrough of this self-referential development journey.

Introduction

In this case study, we dive into the most fascinating ability of Micro: building itself. We believe in demonstrating Micro's capabilities by using it to speed up our own development processes. Although, as of writing this article, Micro still lacks some functionality in the Downstream Calls area needed to reach its full potential, we have already used it to build functionality that is actively used today!

While developing the Downstream Calls feature, we needed a way to let users test their code efficiently and securely. To do that, we had to take the code the user provides, execute it with some input, and compare the expected results with the actual ones. The obvious solution was to isolate the code execution to mitigate security risks: it would be unwise to let any user execute arbitrary code within, say, Micro's backend.

Our initial thought process involved building and deploying a new application, setting up a git repository, testing infrastructure, and everything else that typically accompanies a new microservice. We estimated this effort at 3-5 days minimum. Then the 💡 moment struck: why not use Micro for this task? In the end, it took us just around 30 minutes from start to production!

The video below explores the creation of this application and its integration within Micro. Grab some popcorn and watch how we brought this concept to life!

👇

The Problem

Our objectives were:

  1. Execute arbitrary JavaScript safely,
  2. Compare execution results with expected outcomes,
  3. Return the actual results and verify their correctness,
  4. Handle exceptions during execution,
  5. Manage invalid input from users.

Implementing this on our own backend posed significant security risks.

This diagram illustrates the integration and interaction between different components of Micro during the testing and validation processes:

flowchart TD
    subgraph Micro
        ui[Micro UI]
        backend[Micro's backend]
    end
    subgraph microApps[Micro Applications]
        codeValidator[Micro Code validator]
    end
    ui --Run single test--> codeValidator
    ui --Run all tests\nfor Downstream--> codeValidator
    ui --Run all tests\nfor All Downstream--> codeValidator
    ui --Validate--> backend
    backend --Execute validation call--> codeValidator

The Solution

Using Micro provided a streamlined solution:

  1. Deploying a new API took less than a minute, a process that typically takes hours or days.
  2. The need to manually implement complex logic was eliminated; we simply defined the Use-Cases.
  3. Integrating a new API into Micro involved no additional effort compared to traditional methods.

Schema

The API schema is structured as follows:

Request

{
  "javascriptCode": "", 
  "functionName": "",
  "input": {}, 
  "expected": {}
}
  • javascriptCode is more or less self-explanatory: it's the code that the user wants to execute.
  • functionName is the name of the function to execute. The code may contain more than one function, but only one can be executed.
  • input is the input that the function will receive. We decided to always pass the input as an object, even if it's a single value.
  • expected is the expected result of the function execution. The same logic as with input applies here.
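
For illustration, the request shape maps naturally onto a small TypeScript type plus a basic shape check. This is a minimal sketch: the field names follow the schema above, but ValidationRequest and isValidRequest are our own names, not Micro's actual code.

// Illustrative type for the request payload described above.
interface ValidationRequest {
  javascriptCode: string;
  functionName: string;
  input: Record<string, unknown>;
  expected: Record<string, unknown>;
}

// Minimal shape check for an incoming request body; a production
// service would more likely rely on a schema validator.
function isValidRequest(body: unknown): body is ValidationRequest {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.javascriptCode === "string" &&
    typeof b.functionName === "string" &&
    typeof b.input === "object" && b.input !== null &&
    typeof b.expected === "object" && b.expected !== null
  );
}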

Response Success

If the execution was successful, the response would look like this:

{
    "pass": true,
    "actual": {},
    "exception": ""
  }
  • pass is a boolean value that indicates whether the test passed. If actual equals expected from the request, pass is true.
  • actual is the result of the function execution.
  • exception is the exception that was thrown during the execution, if any.
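
Here is one way pass could be computed, sketched under the assumption of a plain structural comparison; resultsMatch is an illustrative helper, not Micro's implementation.

// A crude way to compute `pass`: compare the actual result with the
// expected object structurally. JSON round-tripping works for plain
// data but is sensitive to key order; a real implementation would
// likely use a proper deep-equality helper.
function resultsMatch(actual: unknown, expected: unknown): boolean {
  return JSON.stringify(actual) === JSON.stringify(expected);
}

// resultsMatch({ a: 1 }, { a: 1 }) -> true
// resultsMatch({ b: 1 }, { a: 1 }) -> false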

Response Error

You should almost always include an error type in your API responses. In our case, an error is returned if the request is invalid or if the code execution fails for some unexpected reason.

{
    "error": ""
}
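
The two response shapes form a simple union, so a caller can tell them apart by checking for the error field. A hedged TypeScript sketch, with type names that are ours rather than part of Micro's published schema:

// Discriminating between the success and error responses on the client side.
type SuccessResponse = { pass: boolean; actual: unknown; exception: string };
type ErrorResponse = { error: string };
type ValidatorResponse = SuccessResponse | ErrorResponse;

function isError(res: ValidatorResponse): res is ErrorResponse {
  return "error" in res;
}

// Usage:
// if (isError(res)) console.warn(res.error);
// else console.log(res.pass ? "Test passed" : "Test failed", res.actual);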

Use-Cases

The application is built around nine Use-Cases in total. We won't go into the details of all of them; if you are interested, please watch the video above. Let's look at one of them, and you can extrapolate the rest from it.

  • Name: Test fail
  • Description: Basic case when expected does not equal actual
  • Request:
{
  "input": {
    "a": 1
  },
  "javascriptCode": "function f1(input) { return {b: 1}}",
  "functionName": "f1",
  "expected": {
    "a": 1
  }
}
  • Expected Response:
{
  "pass": false,
  "actual": {
    "b": 1
  }
}

As you can see, the expected result is {a: 1}, but the actual result is {b: 1}, so the test should fail.

This is essentially what you would have to test if you built this application yourself; with Micro, this is all you have to do to get it up and running.
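
To make that concrete, here is a minimal sketch of how a request like the "Test fail" example could be evaluated end to end, assuming a Node.js runtime and its built-in vm module. runTest and its details are our illustration, not Micro's actual implementation, and vm on its own is not a sufficient security boundary for truly untrusted code.

import * as vm from "node:vm";

// Evaluate the user code in a fresh context, call the named function
// with the input, and compare the result to `expected`.
function runTest(req: {
  javascriptCode: string;
  functionName: string;
  input: Record<string, unknown>;
  expected: Record<string, unknown>;
}): { pass: boolean; actual: unknown; exception: string } {
  try {
    const sandbox: Record<string, unknown> = {};
    vm.createContext(sandbox);
    // Top-level declarations in the user code become properties of the sandbox.
    vm.runInContext(req.javascriptCode, sandbox, { timeout: 1000 });
    const fn = sandbox[req.functionName];
    if (typeof fn !== "function") {
      throw new Error(`Function "${req.functionName}" not found`);
    }
    const actual = fn(req.input);
    const pass = JSON.stringify(actual) === JSON.stringify(req.expected);
    return { pass, actual, exception: "" };
  } catch (e) {
    return { pass: false, actual: null, exception: String(e) };
  }
}

// The "Test fail" use case above:
console.log(runTest({
  javascriptCode: "function f1(input) { return {b: 1}}",
  functionName: "f1",
  input: { a: 1 },
  expected: { a: 1 },
}));
// -> { pass: false, actual: { b: 1 }, exception: '' }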

Conclusion

Watching Micro build its own components not only met but exceeded our expectations. The speed and simplicity with which we developed and deployed this application were unprecedented.

Micro is reshaping software development, blurring the lines between development and deployment, and leveraging AI to streamline processes. We invite you to join us on this innovative journey as we continue to explore and expand the possibilities of Micro.

Stay connected!

Please consider signing up for the waitlist to be one of the first to try Micro.

If you are interested in working with us, investing in Svtoo, or have any other questions, please do not hesitate to Contact Us.