Peeking into HTTPS Traffic with a Proxy


This article is about configuring a web application running as a Docker container, Appsmith in this case, to work correctly behind a firewall that does SSL decryption. Instead of a firewall, we’ll use a proxy, which, for the purposes of the problem statement, amounts to the same thing.


Since the proxy needs to support HTTPS decryption, we’ll use mitmproxy, but Charles or any other proxy that supports this would also work just fine.

Setting up mitmproxy

Install with:

brew install mitmproxy

Now launch it using:

mitmweb --listen-port 9020 --web-port 9021

Let it run in a separate Terminal window in the background. This will also open the proxy’s web UI at http://localhost:9021. To get a console UI instead, use mitmproxy instead of mitmweb in the above command.

Let’s run some requests through this proxy to check that it’s working. Start with:

curl http://httpbun.com/get

This should print a valid JSON as the response, with some details about the request itself. Let’s repeat this with the proxy.

curl --proxy localhost:9020 http://httpbun.com/get

You should again see the same response here, but this time, a new entry should appear in the mitmweb UI. Here, you can inspect the request and be able to see the path, headers and response of the request.

So we’ve confirmed that our proxy works. Let’s add HTTPS to the mix.

Again, the same thing, but with HTTPS and without a proxy:

curl https://httpbun.com/get

You should see the same response as before, but no entry in the proxy UI. That’s to be expected, since we didn’t pass --proxy here. Let’s try that now.

curl --proxy localhost:9020 https://httpbun.com/get

This one should also succeed (unless you installed mitmproxy via a different method). Let’s see why.

The way an SSL proxy works is by establishing two SSL connections: one with the client (a browser, or curl), initiated by the client, and another with the server (the httpbun.com server in this case). Everything between the client and the proxy is encrypted using mitmproxy’s certificate, and everything between the proxy and the server is encrypted with the server’s certificate.
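The two-connection idea can be modelled in miniature. This is a toy sketch, not real TLS: the encrypt/decrypt helpers and names are made up purely to illustrate why the proxy can read the traffic in the middle.

```typescript
// Toy model of an SSL-decrypting proxy (illustrative only, not real TLS):
// the client encrypts for the proxy's certificate, the proxy decrypts the
// traffic (and can inspect it), then re-encrypts it for the server leg.
type Key = string;

const encrypt = (msg: string, key: Key): string => `enc[${key}]:${msg}`;
const decrypt = (blob: string, key: Key): string => blob.replace(`enc[${key}]:`, "");

function proxyForward(
  clientBlob: string,
  proxyKey: Key,
  serverKey: Key
): { inspected: string; toServer: string } {
  const plaintext = decrypt(clientBlob, proxyKey); // the proxy sees the request
  return { inspected: plaintext, toServer: encrypt(plaintext, serverKey) };
}
```

The client only ever talks to the proxy, which is why the client must trust the proxy’s certificate for the scheme to work at all.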

When installing mitmproxy via brew, the root certificate was automatically installed on your system, and so curl won’t complain about the certificate being unverified.

To illustrate this, we can run the same thing in a container, and we should see the error right away:

docker run --rm alpine/curl --proxy host.docker.internal:9020 https://httpbun.com/get

Here, you should see a certificate validation error. This is because mitmproxy’s root certificate isn’t installed inside the container’s environment, so the curl invocation inside it can’t verify mitmproxy’s certificate.

To confirm that this is indeed because of mitmproxy, run the same docker run command without the --proxy host.docker.internal:9020 argument, and you won’t see this error, despite using https.

Now we’ve reproduced the situation where a process (a web server in our case), inside a Docker container, is trying to run behind an SSL-decrypting firewall (or, an SSL-decrypting proxy in our case here). Let’s see what we can do to get this to work.

Setting up Appsmith

For our adventure here, we’ll use the Docker image of Appsmith, located at https://hub.docker.com/repository/docker/appsmith/appsmith-ce.

Let’s start a temporary Appsmith container with:

docker run --rm -d --name ace -p 9022:80 appsmith/appsmith-ce

Once this is ready, you should be able to access your Appsmith instance at http://localhost:9022.

Let’s try to run some curl requests inside this container, and get them to go through our mitmweb proxy.

docker exec ace curl --proxy host.docker.internal:9020 http://httpbun.com/get

This should work fine, and the request should show up in the proxy UI with full details as well. Now let’s do the same thing with https.

docker exec ace curl --proxy host.docker.internal:9020 https://httpbun.com/get

This fails with the same certificate validation error. To fix it, let’s copy the root certificate into the container. For mitmproxy, the root cert is generated at first start, and is located at ~/.mitmproxy/mitmproxy-ca-cert.pem, going by the docs at https://docs.mitmproxy.org/stable/concepts-certificates/#the-mitmproxy-certificate-authority.

docker cp ~/.mitmproxy/mitmproxy-ca-cert.pem ace:/

With this command, we copy mitmproxy’s root certificate into the container’s root folder. Let’s run the same curl command now, providing it this root cert:

docker exec ace curl --proxy host.docker.internal:9020 --cacert /mitmproxy-ca-cert.pem https://httpbun.com/get

Now we’ll see the correct response, as well as full details of this request in the proxy UI.

Setting proxy on the whole container

We’re now at the point where it’s possible for requests inside the container to be run via the proxy, without any cert validation errors.

But this currently needs to be deliberate. As in the example above, the curl command needs the cert to be specified explicitly. Instead, we’d like even ordinary curl commands to always go through the proxy, since that’s how a firewall would work, and ultimately, that’s what we are trying to reproduce here.

Let’s stop the ace container and start it again with proxy configuration set.

docker stop ace
docker run --rm -d --name ace -p 9022:80 \
    -e HTTP_PROXY=http://host.docker.internal:9020 \
    -e HTTPS_PROXY=http://host.docker.internal:9020 \
    -e http_proxy=http://host.docker.internal:9020 \
    -e https_proxy=http://host.docker.internal:9020 \
    appsmith/appsmith-ce

Yep, that’s right. We need to set both http_proxy and HTTP_PROXY for all applications inside the container to take it seriously. 🤦

Let’s run a normal curl request on this container to see if the proxy settings are applied:

docker exec ace curl http://httpbun.com/get

If the proxy configuration is working, then you should see this request appear in the proxy UI. Also for https URLs:

docker exec ace curl https://httpbun.com/get

This, as expected, fails with a cert validation error: it’s using the proxy, but the proxy’s certificate can’t be verified. We could provide mitmproxy’s root cert with the --cacert argument, but we want the fix to apply to all requests in the container, without such explicit configuration, so we won’t do that.

Instead, we want to install the root certificate of mitmproxy to the truststore, so that it’s available to all processes in the container for validating SSL certificates.

How this is done depends on the operating system, but since our container is Ubuntu-based, all we need to do is:

  • Copy the certificate file to /usr/local/share/ca-certificates.
  • If the cert has the .pem extension, rename it to use the .crt extension. This is because Ubuntu’s update-ca-certificates command only picks files with a .crt extension.
  • Run update-ca-certificates.

Let’s copy the root cert into the container, and install it by running the above commands inside the container:

docker cp ~/.mitmproxy/mitmproxy-ca-cert.pem ace:/usr/local/share/ca-certificates/mitmproxy-ca-cert.crt
docker exec ace update-ca-certificates

The output should say that one certificate has been added to the truststore.

Let’s run the same https request again:

docker exec ace curl https://httpbun.com/get

This should now print the correct response, as well as show up on the proxy UI with full details for inspection. 🎉


This work culminated in PR #14207. This PR contains a few QoL improvements over the solution above.

  1. We install ca-certificates-java, so that when we run update-ca-certificates, they are also installed into the JVM truststore. This is important since, one, Java maintains its own truststore (like Firefox), and two, Appsmith’s server runs on the JVM so we need this there as well.
  2. We provide support for a ca-certs folder in the volume, where users can drop any root cert files which will be auto-added on container startup.
  3. We run update-ca-certificates --fresh instead of just update-ca-certificates, so that any cert file removed from the ca-certs folder, also gets removed from the truststores.
  4. We mirror the values of the proxy env variables, so that setting just one of http_proxy and HTTP_PROXY is enough. The same is done for https_proxy and HTTPS_PROXY.
  5. We print a friendly warning when there are .pem files in the ca-certs folder, since, most likely, the user forgot to rename them to use the .crt extension.
  6. The JVM needs the -Djava.net.useSystemProxies=true flag to use the system-configured proxy. Additionally, we set the individual proxy configuration as system properties, so we can apply it when executing requests via Apache’s web client libraries, since that library doesn’t respect the system proxy configuration, although the rest of the JVM does. Go figure.
  7. We set a NO_PROXY env variable for hosts that should not go through the proxy, like localhost.
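The env-variable mirroring in point 4 can be sketched like this. This is a hypothetical helper, not Appsmith’s actual startup code; the function name is made up for illustration.

```typescript
// Hypothetical sketch of mirroring proxy env vars: if only one casing of a
// proxy variable is set, copy its value to the other, so applications that
// honour either casing see the same configuration.
function mirrorProxyVars(
  env: Record<string, string | undefined>
): Record<string, string | undefined> {
  const out = { ...env };
  for (const lower of ["http_proxy", "https_proxy", "no_proxy"]) {
    const upper = lower.toUpperCase();
    if (out[lower] && !out[upper]) out[upper] = out[lower]; // lower -> UPPER
    if (out[upper] && !out[lower]) out[lower] = out[upper]; // UPPER -> lower
  }
  return out;
}
```

With something like this running at container startup, setting only HTTPS_PROXY would also populate https_proxy before the server processes start.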

Of course, considering our premise, which is to be able to use Appsmith behind an SSL-decrypting proxy, all a user needs to do is place the firewall’s root certificate in the ca-certs folder and restart the Appsmith container.

Bonus: Using Charles

Notes on using Charles instead of mitmproxy.

Install with:

brew install charles

Open Charles

Go to Proxy -> SSL Proxying Settings and, under “SSL Proxying”, add the domains you want SSL decryption for. Let’s add an entry under “Include”, with the host set to httpbun.com and the port set to 443 (the default port for HTTPS).

Check with an http curl request: the response should come back correctly, and the request should show up in Charles with full information.

Check with an https curl request: you’ll get an error response back, and the request will show up in Charles with incomplete information and a red error icon.

To get the Charles’ root certificate, go to Help -> SSL Proxying -> Save Charles Root Certificate.... Provide a location to save this cert, like your home folder.

The other steps should be the same as explained above for mitmproxy.

New Intelligent Drag and Drop Experience for Appsmith’s User Interface Builder
Vishnupriya Bhandaram

Developers can quickly build any custom business software with pre-built UI widgets that connect to any data source on Appsmith. It’s a reliable and fast method to develop internal tools quickly. We created Appsmith to help developers save valuable time building complex applications for internal uses within their organizations. For this to work for everyone, we believe that the core parts of Appsmith need to run smoothly and should be continuously improved.  Appsmith’s UI is built using React, Redux, Web Workers, Immer, among other things. 

One of the key issues users faced with Appsmith was that widgets dragged onto the canvas would only drop in if there was enough space in the drop area. This was not a pleasant experience; it involved dragging the widget onto some other free area on the canvas, re-designing the desired drop area, and then dragging the widget back in. We realized that this was a significant flaw slowing down the UI building process, so we immediately fixed it.


In this blog, we’ve interviewed Appsmith engineers Ashok M and Rahul Ramesha to learn more about the process and challenges involved in solving this problem. 

What was the issue with resizing and dragging widgets? 

Simply put, the problem was that whenever we dragged a new widget, or any existing widget, into a position where it would collide with another widget, that movement was often restricted. We did not have an option to auto-resize the widget dragged into a particular space. There was also no way for the existing widgets on the page to automatically move around to make space for a new widget. For anyone trying to create an application, this can be a frustrating experience, because when users design things, they don’t necessarily do it in order. There are many instances where they might remember to add something later. We wanted the experience of making an app on Appsmith to be smooth and delightful.

Take a look at the screenshots below to see the previous experience:  

When you try to resize a widget and there’s another widget already in the path, you cannot complete the resize without explicitly moving the existing widgets out of the way.

In this image above, you can see that the ‘Container’ widget cannot be resized into the size shown in the image below without moving the ‘Checkbox’ widget.

These problems often arise when there is a shortage of real estate on the canvas: when the widget falls short of fitting by a small margin, or when the movement of other widgets is restricted within a particular space. For example, when placing a button between two existing buttons, the dragged widget may be one column larger than the space available on the canvas. Generally speaking, users don’t know in advance how much the existing widgets need to be resized, or how previously placed widgets need to be moved, to make space for a new widget.
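That “one column short” situation is easy to state as a check. The numbers below mimic a column grid but are made up for the example; this is not Appsmith’s real grid arithmetic.

```typescript
// Illustrative only: does a widget of `widgetCols` columns fit into the free
// gap between two existing widgets on a column grid?
function fitsInGap(gapStartCol: number, gapEndCol: number, widgetCols: number): boolean {
  return gapEndCol - gapStartCol >= widgetCols;
}

// e.g. a 17-column button dragged into a 16-column gap is one column short,
// which previously forced the user to rearrange widgets by hand.
```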

What was user feedback around this issue? 

Users often asked us to allow dropping widgets on top of each other (which some UI building products and most canvas building products provide) to deal with UI block collision checks. These checks ensure that no two widgets overlap and that no widget is fully or partially outside the main canvas. Going in this direction would have meant building and dealing with layers.

For some context, layers here are Z-index layers, which could have allowed dropping widgets one on top of the other by assigning a higher Z-index value. An example that comes to mind is Adobe Photoshop; Tooljet, Miro, and Figma also allow layers in a way. For Appsmith, this kind of solution isn’t ideal, because one can easily forget that there are widgets behind a widget in a lower Z-index layer, and adding more layers would mean more time for the DOM to render and paint.

After a few internal discussions around this, we found that this would not be a scalable solution, and it would also make resizing, selecting, focusing widgets very difficult. We also want to develop the experience of building UI on Appsmith to be more intelligent. 

Can you elaborate on this vision of enhancing UI building experience and the solution you created? 

When we brainstormed this issue, we knew that the solution had to be scalable. It also had to be intelligent enough to auto-adjust to the screen resolutions of different devices. We developed Reflow as a solution to this problem. Reflow is the process of deciding, in real time, which widgets to move and resize to make space for the widget being dragged or resized. Widget resizing allows a dragged widget to resize another widget to make space; this only happens when that widget is cramped against a boundary of the canvas.

How did you go about developing the solution? What were some other approaches you had considered, and what were their limitations? 

Conceptualizing and building this feature took less time than expected. However, we spent time thinking about the right solution. We did this by trying out POCs of different solutions. We built three POCs to realize that reflowing while dragging would be an essential part of our solution. We then also had to consider the two behaviors of Reflow: Natural and Relative.


Natural:

  • While resizing a static widget, when it collides with a widget in a particular direction, the widgets reflow with cascading collisions, without maintaining any relative spacing.
  • While dragging a static widget, the widgets reflow similarly, with cascading collisions. Even here, the dragging widget can be made to fit into any space.

Relative:

  • While resizing a static widget, when it collides with a widget in a particular direction, all the widgets in the path of the collision are moved while maintaining their relative spacing, up to the edge of the canvas. At the edge of the canvas, further resizing reduces the relative spacing.
  • While dragging a static widget, when it collides with a widget in a particular direction, all the widgets move as per the reflow algorithm, similar to the resizing reflow. The direction of collision is critical while reflowing during a drag. The static widget can push other widgets aside, which helps it fit into any space on the canvas.

We developed two more POCs to get feedback on which reflow was more user-friendly and likable. We understood that ‘Natural’ was more predictable, but both behaviors had their own merits. Finally, we built “Drag and Drop Experience” to resize widgets at the corners to allow space for the dragging widget, which seemed essential. 

Can you explain the new algorithm behind the experience?

At its core, the algorithm’s behavior is to push all the widgets that the dragged widget collides with. Let us explain what happens under the hood in more detail. Consider the widget dragged on the canvas to be a ‘static’ widget. When this widget is dragged onto the canvas, we compare its coordinates with those of all other widgets on the canvas to check for overlapping collisions. The overlapping widgets are then put through the same process recursively. This creates a tree structure of widgets, in which a parent node has its overlapping widgets as children, and those in turn become parents for their own overlapping widgets. With the help of this tree structure, along with the direction and displacement of the static widget and the canvas boundaries, the X and Y movement values of each widget are calculated. When moved along the X and Y axes from their original positions, these widgets create the illusion of being pushed aside by the dragged widget.
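The push-and-cascade idea can be sketched in miniature. This is a deliberately simplified, single-direction model with made-up types, not Appsmith’s actual implementation, which also tracks collision directions, relative spacing, and canvas bounds:

```typescript
// Simplified sketch: a dragged ("static") widget pushes every widget it
// overlaps, and each pushed widget may in turn push others (the cascading
// collision described above). Only rightward pushes, for brevity.
interface Rect { id: string; x: number; y: number; w: number; h: number; }

const overlaps = (a: Rect, b: Rect): boolean =>
  a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;

function reflowRight(mover: Rect, widgets: Rect[]): Rect[] {
  const result = widgets.map((w) => ({ ...w })); // don't mutate the input
  const queue: Rect[] = [mover];                 // widgets whose pushes are pending
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const w of result) {
      if (w.id !== current.id && overlaps(current, w)) {
        w.x = current.x + current.w; // slide just past the pushing widget
        queue.push(w);               // it may now collide with others: cascade
      }
    }
  }
  return result;
}
```

Here a dragged widget pushes its overlapping neighbour to the right, and that neighbour cascades into the next one; the real algorithm generalises this over directions and the widget tree.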

Here’s a link to the code where this algorithm is implemented

This is the core logic of our algorithm, but there’s a lot more to this. For example, we are tweaking the direction of movement in corner cases, keeping track of multiple directions of widgets, smooth canvas exits, and entries, among a few more.

Can this algorithm be applied in other scenarios or projects?

We will extend this to the cut/copy/paste feature, where you can paste a widget anywhere on the canvas and the rest of the widgets will move away to make space for the pasted widget. We will also include it in the dynamic height project, where widgets like Table, List, etc., can grow in height and push other widgets down. Another extension of this algorithm would be to push widgets around based on device resolutions, i.e., to make widget positions responsive.

Can you talk about the performance of your fix? What happens when there are hundreds of widgets on the canvas?

We tested it with 100 widgets, and there was no problem with performance, though performance is expected to degrade as the number of widgets grows. We also tested on our high-performance laptops with the CPU slowed down 6x using Chrome’s CPU throttling; there were minor lags, but nothing that made it unusable.

What is the roadmap of this particular feature? Are there any further enhancements and improvements that you’re planning to make? 

We think that this is just the beginning! We’ve got some significant enhancements planned. 

  • Multiple widget reflow (Major Enhancement):

Reflow widgets even when multiple widgets are moved together.

  • Locked widgets (Major Enhancement):

Container jumps (moving a widget from the main canvas into a container, or vice versa) can be tricky and irritate some users, because people might not want widgets moved from their carefully designed positions. So we will allow locking a widget, preventing it from being resized or moved from its position.

  • Dynamic resize limit (Minor Enhancement): 

There is a resize limit for our widgets: 4 rows x 2 columns, the same for all widgets; we can’t go below these dimensions. That doesn’t make sense for widgets like a divider, or sometimes a button or checkbox, so we might try to compute the minimum dimensions in real time based on the widget being affected.

What was the most challenging part of building this feature? 

Building this feature was quite challenging, because there aren’t many readily available examples on the internet, and building it also meant enabling others to understand what was in our minds. We wrote close to 8,000 lines of code, but we’ve pushed only 4,500 lines into the repo, because we had to build two behaviors to help internal stakeholders understand the solution better.

We learned that there is no right way to build new experiences. Different solutions helped solve issues in different scenarios. We could always come up with a scenario that would cause the existing solution to fail. In the end, we had to choose a solution that catered to most of the scenarios and not all.

In the beginning, we built a solution to do one thing: to push colliding widgets in a direction and then add code to tackle one problem at a time. As the solution started to feel more and more refined, other problems surfaced. While trying to tackle a complex problem, identifying the core logic of the solution and adding to it one step at a time is critical in solving it.

Ashok M is a Frontend Engineer at Appsmith. 

Rahul Ramesha is a Frontend Engineer at Appsmith. 

We hope that you enjoyed reading this blog.

Appsmith is built in public, and we love sharing our knowledge with everyone. If you have a special request to know more about the behind-the-scenes of specific features, write to me at vishnupriya@appsmith.com.

How 40 Lines of Code Improved Our React App’s Performance by 70%
Vishnupriya Bhandaram

On Appsmith, developers can quickly build any custom business software with pre-built UI widgets that connect to any data source. These widgets can be controlled with JavaScript. We created this framework to help developers save on valuable time building complex applications for internal uses within their organizations. For this to work for everyone, we believe that the core parts of Appsmith need to run smoothly and should be continuously improved.  Appsmith’s UI is built using React, Redux, Web Workers, Immer among other things. 

In this blog, Sathish Gandham, a frontend engineer focusing on UI performance at Appsmith, will talk about how we improved the editing experience on Appsmith.

What is the Editing Experience on Appsmith? 

The editing experience on Appsmith involves writing bits of code to customize the functionality of the widgets and writing special commands and actions into the application. It is a crucial function in the Appsmith framework. 

Lag and Delay 

Building an application on Appsmith involves dragging and dropping widgets onto the canvas and writing custom code on the editor; however, we noticed that while editing text/code in the canvas, the editor would often freeze, resulting in a less than optimal user experience. When building an application, there should be no delay or lag. This was a huge issue that needed our immediate attention from an engineering perspective. For a natural typing experience, users want the keypress latency to be under 100ms, though 50ms would be ideal. 

To solve this problem, we needed to understand what happens when a user types. For this, we used: 

React profiler: This measures how often components in the application render and the “cost” of rendering. The profiler helps in identifying parts of an application that are slow. In our case, this allowed us to understand what components were rendered as we typed. 

Chrome Performance tools: This helped us quantify the problem, measure our progress, find the code taking longer to execute, and find unnecessary repaints. 

Please note that the React profiler is part of the React Developer tools browser add-on which is available for Chrome and Firefox.

From the React profiler, we see three pairs of long commits; each pair corresponds to a property pane render and a UI widget render. Ideally, the property pane should render much faster than the canvas, since nothing in the property pane changes except the input we are editing. And the canvas should only re-render the widgets currently affected, not the rest. We realized that this was not the case and needed to be fixed.

We profiled the property pane issue in isolation to identify what it takes to render it. For this, we used the performance tab in Chrome DevTools to see what happens when the property pane opens. This gives us some helpful information. 

  • ComponentDidMount of the code editor is taking a lot of time
  • Styles are also taking a long time to render

If you look at the property pane commit in the screenshot above, you will notice that evaluatedValuePopup also takes significant time.

Here’s how we listed the tasks that lay ahead of us: 

  1. Identify why all the widgets render when they don’t have to, and fix it
  2. Optimize the code editor [Not apparent from the React profiles]
  3. Identify why all the controls in the property pane are rendering and fix it
  4. Optimize the global styles
  5. Optimize the evaluatedvalue pop-up

In this blog, I will talk about how we went about the first task. Before I get to that, here are a few tips for profiling: 

  • Try to split your problem into smaller pieces and profile them. With this, you won’t crowd your profile, and you can find the performance issues with ease. 

Example 1: To improve the editing experience, we just profiled a single keypress. 

Example 2: To profile a drag and drop action, we can split that into drag start, move, and drop.

  • Leave the application idle for 5 seconds after starting the profile and before stopping it. It will make it very easy to identify the work that has been done. [See A & C From profile above]
  • To measure the overall performance improvements, instead of measuring each optimization individually, it’s better to focus on the overall scripting and rendering time taken during an action. You can get this number from the Chrome performance tab. [B & D from the profile above]
  • In the React profiler, don’t just focus on the large commits. Go through each commit at least once, and see what’s getting rendered in that commit. The chances are that one or two small components are accounting for all those renders.

Here’s a short guide on reading the React profile: 

  • A: List of commits during our profile
  • B: The commit we are looking at
  • C: Details about the selected component (WidgetsEditor). Our widgets editor rendered three times during the profile, at 6.1s, 8.6s, and 14.1s. 102ms, 328ms, and 83.1ms are the durations each commit took; this is not the total time the selected component took to render.
  • D: Zoomed in view on the component we selected and its children.

Here are the notes on the profile based on which we worked on improving the editing experience. You can download the attached profile and import it in your React profiler to follow along or just refer to the image above.

Please note that the React profiler is available only when you open a React app in dev mode in Chrome/Firefox. If you don’t have a local React development setup, you can use the standalone React Developer Tools to read the profile.

Here are instructions on how to install it and start it: 


# Yarn
yarn global add react-devtools

# Npm
npm install -g react-devtools



Follow this link to read the detailed notes from the profile we did to improve the editing experience on Appsmith. 

I’ve put some notes here for your reference: 

1. Evaluated value opening. Not related to editing.

2. Widgets editor, not sure why.

3. Editor focused. We should be able to avoid the rest of the property pane from rendering.

4. Small changes to property pane, its header, lightning menu, and action creator. Nothing changes for them, so they should not be re-rendering. Memoization can help here.

5. Same as above. 

6. We get the evaluated value back. The entire widgets editor is re-rendered (deduced from one of the two updates to the table); we can optimise this. If each widget subscribed to its data independently, we should be able to avoid the unnecessary renders by:

- doing a deep check at the widget level

- updating the store with only the values that changed.

7. PropertyPane is rendered with the updated value. EvaluatedValue taking most of the time.

8. From 8 to 17, these are commits like 4 & 5 above. 

9. 18 & 19 are widgets editor and property pane. I don’t see why these are required. I will look into it. 

Widgets Render When Not Needed

One of the most loved features of Appsmith is reactive updates: you can see a widget change and show data as soon as something changes. With traditional programming, you would have to reload the page to see the update in the widget. This is achieved by updating the data tree whenever you change something on the canvas, and using the updated data tree to re-render the app. Due to the amount of data we have and the number of calculations we need to do, this took a long time and blocked the main thread.

To solve this problem, we moved the evaluations to a web worker freeing the main thread. A brilliant move to solve the problem at hand, but this created a new issue. The problem here was due to object reference changing. Since the data tree is coming from the worker, we would always get a new reference for every item in the tree even though only some of them changed. This reference change was making all the widgets re-render unnecessarily.

A few approaches we tried to solve this problem were:

  1. Get what keys changed from the worker (worker has this information) and update only those values in the reducer. This did not work because the list of keys was not complete. 
  2. Compute the diffs between the current data tree and the one received from the worker, and update only what changed. Though this prevented the renders, it did not improve the overall scripting time we measured earlier. The reason: computing the diffs itself took a lot of time, and it would happen twice for each change.

Web Worker to the Rescue 

We moved the task of computing the diffs to the worker and used the deep-diff library to compute the diffs and let immer take care of immutability.

This helped us in two ways:

  1. It offloaded the expensive task of computing the diffs from the main thread.
  2. Reduced the size of the data we transfer between worker and the main thread (this was never a bottleneck for us, though).

This change alone brought down the keypress latency by half.

Instead of replacing the entire data tree from the worker, we get only the changes (updates) and apply them to the current state. applyChange is a utility method from deep-diff, and Immer takes care of the immutability.
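The diff-and-apply idea can be illustrated with toy stand-ins. The plain TypeScript below stands in for the deep-diff and Immer libraries; the function names and the flat tree shape are made up for the example.

```typescript
// Toy stand-ins for the approach described above: compute only the changed
// top-level keys of a tree, then apply just those changes to the current state.
type Tree = { [key: string]: unknown };

function computeDiffs(oldTree: Tree, newTree: Tree): Array<[string, unknown]> {
  const changes: Array<[string, unknown]> = [];
  for (const key of Object.keys(newTree)) {
    // crude value comparison, standing in for a real deep diff
    if (JSON.stringify(oldTree[key]) !== JSON.stringify(newTree[key])) {
      changes.push([key, newTree[key]]);
    }
  }
  return changes;
}

function applyDiffs(state: Tree, changes: Array<[string, unknown]>): Tree {
  // only touched keys get new references; unchanged entries keep their
  // old references, so shallow-comparing components skip re-rendering
  const next = { ...state };
  for (const [key, value] of changes) next[key] = value;
  return next;
}
```

Because only changed keys get new references, React’s shallow comparisons skip the widgets whose data didn’t change, which is exactly the render savings described above.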

If there’s anything to be said about performance improvement, it’s this: don’t take performance for granted, and profile your code on a regular basis. Even a few lines of change or configuration can lead to a great performance gain.

I hope you found this blog helpful. If you’d like to get in touch with Sathish, ping him on Discord or visit his website.

It Took Us Less Than 1 Hour to Build a PgAdmin Clone with Low Code: Here’s How We Did It?
Vihar Kurama

PostgresDB is an open-source database that uses and extends the SQL language. It’s used widely across many organizations to build a variety of apps. Developers tend to love and prefer PostgresDB for its simplicity and extensibility. Postgres Admin (PgAdmin) has a neat interface for accessing any Postgres instance from any device with a web browser. It allows administrators to carry out tasks like managing user accounts, databases, clusters, and more.

However, PgAdmin has a few biting downsides; here are a few:

  • Installation can be difficult
  • Troubleshooting and debugging can be complicated, especially if you're new to Postgres
  • It takes time to load and is prone to freezing, especially when establishing a new database connection
  • It is slow to respond to queries
  • Cumbersome interface when dealing with multiple databases

I think these are problems that could be dealt with if there were just a better user experience. So I set out to build a new version of PgAdmin! And I did it in under an hour. In this blog, I will tell you how.

But, I have to say, cloning the PgAdmin app is not an easy task; there are multiple challenges here, as we have to implement several functionalities. With a fully functional PgAdmin app, we should be able to:

  • Establish connection on any remote cloud-based Postgres instances
  • Manage databases, roles and users
  • Create, alter, and drop tables on the connected databases
  • Provide UI for data export/import from CSV files, schema checker, etc.
  • Write queries on an editor to run SQL statements and see the results of querying against your database

Doing all this in under an hour is impossible if we have to code everything from scratch, so we will use a low-code platform like Appsmith. Using Appsmith to build internal applications is super fun; we can connect to any data source, write queries, and build UI 10x faster than usual. It is entirely web-based; depending on your preference, you can either self-host it on any server or simply use the community cloud edition.

How did we build this?

It took us less than an hour to build this, as Appsmith comes with a Postgres data source integration. With this, we can run any PG query and use it anywhere across the application using dynamic JS bindings. We will talk more about the features of the application in the following sections; before that, here is a sneak peek of how the application looks:


The application has two pages, one for managing tables (Table Manager) and another for testing and running queries (Query Executor). Now, let's dive into each of these pages and talk a bit more about their functionality.

Table Manager

On the Table Manager, you can find all the tables from the connected PG data source. It offers all the functionality that PgAdmin does; you can create, read, update, and delete these tables from this admin panel. Performing these CRUD operations on tables is pretty straightforward in Appsmith. Additionally, you can filter the tables based on the schema and update them whenever needed.

When you click any table name (select a row) from the table widget, you’ll see a new container on the right, where we display the three essentials:

  1. General Table Details: Here, we can update the information of the table and set the owner.
  2. Column Details: Configure all the settings for columns inside the selected table.
  3. Constraints: Add any relations between the columns inside the table.

Using these three features, we can manage the column properties and add different relations between them. We’ll talk a bit more about this in the next sections.

Query Executor

This is the second feature of our Appsmith PGAdmin. Here, we can execute any queries.


Our auto-complete and slash-commands features make it much easier to test queries in real time. To build this, we used the Rich Text Editor widget on Appsmith; whenever we execute the query, we display the response in the table widget. This can be customized within minutes for different use cases.

Now, Let’s Use the Appsmith PgAdmin

In this section, we’ll talk about the different functionalities provided by the Appsmith PgAdmin with a few examples. We’ll also deep dive into some of the queries that were used on Appsmith to implement the functionality.

To see this in action, you will need to switch data sources on Appsmith. You can connect your PG database by clicking on the + icon next to the Datasources section. Every query shows the data source it's connected to, and you can switch data sources by selecting one in the data source dropdown.

Managing Tables: When working with databases, we often need to configure names and schemas. Doing this is super easy on Appsmith! As soon as you open the PgAdmin clone, you will see the list of all the tables in the table widget. To see a table's configuration, simply select its row; a new container appears on the right, where you can configure all the table's details.


As we can see on the right, when a table is selected, all its details appear under the General tab. We can also delete a particular table by clicking the delete button inside the table. This is done by adding a custom column to the table widget and setting the column type to a button. If you prefer modals, you can configure the table to show a modal where the necessary details can be updated.

Configuring Columns in Table: To manage columns inside the table, select the table and navigate to the Columns tab inside the container on the right.


Here, we see a list widget with all the details of the selected table. We can edit the data types and column names using the interface we've created on the list widget. In case you want to delete a particular column, use the delete button on the list item. Additionally, you can toggle the not-null option to set whether individual attributes accept null values.


In the list above, we only see the data types of the table columns; to add constraints such as foreign keys, navigate to the Constraints tab and add or configure the attributes.

With this, our custom PgAdmin is ready for use. We use this application for internal processes like managing our Postgres instances. We definitely find it faster than the native PgAdmin app, and it’s definitely nicer to look at!

If you want to build any admin panel, you can do it with Appsmith. We’ve got all the popular database integrations, and this list is growing! Do join our growing community on Discord for more tips, and tricks on how to make the most out of Appsmith.

Engineering Diaries: How We Created a Google Sheets Integration for our Product

Vishnupriya Bhandaram
engineering diaries

Everything starts with a spreadsheet. Well, almost everything. Spreadsheets are the backbone of all business operations, whether budgeting, people management, expense management, organizing lists, etc. Spreadsheets often become the first choice for fledgling businesses, mainly owing to their versatility and flexibility. Little goes a long way with spreadsheets.

For a startup, smart utilization of available funds is critical, and Google Sheets often comes in handy to plan projects, analyze risks, report metrics, generate quotes, and predict financial outcomes. Start-ups even use spreadsheets to keep track of client lists, investor lists, and more. So what's the problem? Things can get messy once you work on a spreadsheet collaboratively or establish strict flows around maintaining it as a database: data can get corrupted, nobody knows which version is the latest, too many people with access make too many changes, and there is no admin control.

This is where the power of low-code can be melded with the power of spreadsheets. Turning an Excel sheet into a web application is a great way to contain errors due to poor data management and allow for granular admin and user access control, and these applications scale along with your business.

Today, it’s pretty easy to make an application from Google Sheets in record time. There are many low code and no-code tools out there that can help you do this, including Appsmith. In this blog, we will talk a bit about the Google Sheets integration on Appsmith, how we built the integration and all the things you can do with it.

Behind the Scenes

One of our colleagues, Nidhi Nair, worked on making this integration a possibility. Nidhi is a platform engineer at Appsmith and she joined Appsmith a little less than a year ago, and she enjoys the creative liberty to explore her ideas at Appsmith.

“It was possible to use Google Sheets on Appsmith even without our integration. Users could do this using the REST API plugin. However, it’s not the most convenient, and I found it to be unwieldy and something that every user couldn’t configure intuitively or easily,” says Nidhi.

The Google Sheets integration was created to simplify the interaction for end users. “We identified a set of actions that users would want to use Google Sheets for and optimized the way they interact with data in their sheets,” adds Nidhi. This meant not having to deal with cells and columns, but just arrays. “We defined the scope of the integration to be similar to that of a database. A single sheet was understood to be a table that we wanted to manipulate with the integration. We identified the relevant actions for this,” she says. Implementing this integration meant doing considerable research around how users interact with sheets. For us, reducing the friction for users was a key priority. “We introduced something called RowObjects in the integrations that makes sure that users don’t necessarily have to tinker with the data themselves,” says Nidhi.

Key Challenges

“We wanted to be able to support the DB integration style interaction and also allow users that want to work on it as an excel sheet to be able to continue to do so. This was a relatively easy solution because of how flexible our logic for these integrations is,” says Nidhi, adding that a user could say ‘get me rows 1-10’ and for the next page, ‘get me rows 11-20.’ But they can also do something like: ‘Get me cells D3:J8, and on the next page, get me D11:J16’ (or whatever other logic they would like to use). While this may sound trivial, having the liberty to navigate across the sheet at will means that they can organize their data separately from how it is consumed in Appsmith.

The biggest challenge in creating this integration was using Appsmith's credentials as the provider for all instances, to make things easy. With this, users don't have to set up a configuration on Google; Appsmith has already done that for them. Setting up the configuration on Google comes with painful scrutiny, and it's not for everyone, especially people who don't deal with tech. “Appsmith’s one-click approval makes it easier,” says Nidhi.

However, this has a downside; Google Sheets on self-hosted instances cannot be used unless they connect to Appsmith’s cloud API.

The engineering team is also working on storing authentication on a per-user basis and hopes to ship it shortly. This will allow users to access the parts of the sheets they have access to, and limit access to those they don't.

To read more about the roadmap for features, follow this link.

How to Use Google Sheets Integration on Appsmith

With Appsmith's inbuilt Google Sheet Integration Plugin, you can use any Google Sheet as a data source or a backend to build robust applications.

Set-up Google Sheets Plugin

  1. Create a new account on Appsmith (it’s free!); if you already have one, log in to your Appsmith account.
  2. Create a new application by clicking on the Create New button under the Appsmith dashboard.
  3. We’ll now see a new Appsmith app with an empty canvas and a sidebar with Widgets, APIs, and DB Queries.
  4. Click on the + icon next to the APIs section and choose the Google Sheets option. Next, click on the New Datasource button, set the scope to Read and Write, and click Authorise.
  5. This will ask us to log in with our Google account; choose the account whose Google Sheets we want to access and log in. After successful authorization, we will be redirected back to our Appsmith account.
  6. Now, you’ll find your Google Sheets Datasource under your APIs, and you can create the necessary queries by choosing this data source.

Excellent; now that you’ve completed the set-up, follow the instructions in our docs and get started on your app!

Learn How To Make An App With Google Sheets

Are you interested in learning more about our engineering processes? Follow us on Twitter and Linkedin to stay tuned!

Write to me, vishnupriya@appsmith.com, and I’d love to get to know what you’re building with Appsmith!



Akshay Rangasai

Product-led growth as a concept is making the rounds. Investors cannot get enough of it, Twitter is filled with threads on the topic, and many dedicated channels and podcasts are talking about this “new” growth paradigm. However, product-led growth is not new and has existed for ages; the internet and digital products (tech companies, primarily SaaS and consumer internet) have simply brought it into focus. Growth in organizations has always been tied to marketing and sales, but product-led growth forces us to change that perspective and think about organizations differently.

In this post, we will explore the history of growth for companies, the rise of product-led growth, and why the key to driving it is to think about engineering and product teams as revenue generators rather than just cost resources that need to be allocated optimally.

How the internet changed growth for companies

Historically, the three key pillars of growth have been price, product, and distribution. Price and product dictate demand, while distribution ensures products are available to satisfy demand. The new world of digital products changes this only a little, but with massive consequences. Nevertheless, to understand why product should be the key driver of growth, we need to dive into the history of product development and how businesses monetized products.

Before the advent of computers, most real-world products were physical objects that needed to be manufactured, priced, and distributed. Product development took ages, manufacturing took as much time, and, as if this were not enough, you needed to find distributors for your products. Computers made each step easier and faster and created a whole new category of digital products.

Software was quicker to make and did not have the manufacturing overheads, but distribution was still a problem. The other big problem in both models was the product itself: you did not know if the product was good or not until you bought it. The information needed to make sound product decisions was itself constrained by the distribution of publications (like Gartner's); if you subscribed, you got it, and if not, you didn't. This meant that product development was a cost, while sales and marketing were revenue-generating functions. Distribution was key. Most companies treated engineering and product as allocated resources, and sales took a lot of the value generated from selling the product. This was just how the old world worked.

Software also hacked the working-capital cycles that plague most manufacturing companies. Not only did a manufacturer need to set up infrastructure and pay wages; it also had to foot the bill for each item being made, recouping the cost only when the product sold. As such a company scaled, so did its requirement for capital: scaling rapidly was not just a function of people but also of how much money you had, or how many loans you could get, to foot this bill. Software hacked this cycle; it was infinitely distributable with no additional cost of production. If you had enough distribution, you could scale easily. The collection cycles also made software companies cash-rich, so they did not need as much money as other product companies to grow.

Then came the internet. While you can claim that the internet made distribution cheaper, it isn't really clear that it did. It made distribution faster, but discovery (of products and customers) was still an unsolved problem; Gartner subscriptions came via email and mail. Search engines made distribution cheap, and discovery, one of the hardest problems to solve, got solved because of the speed and reach the internet offered. AWS made software development cheaper: you no longer needed a boatload of capital to start a software company. Gartner is being replaced by Capterra and G2 for exactly this reason: discovery is cheaper, as is distribution.

All the constraints that once made sales the top dog in a company have been eroding over the years. These advances have made it a level playing field (for the most part). Information availability (G2, Capterra) and near-zero cost of distribution and setup mean that the only way a company can differentiate itself significantly from the competition and win is through product.

The new role of engineering in product-led companies

Cloud computing has made CI/CD possible, and this rapid iteration creates value immediately instead of over long product cycles. Every new feature released is monetizable almost instantaneously: your engineering and product organization contributes directly to revenue in a measurable way. This is not just account expansion or user-acquisition-focused growth; retention is also significantly affected by your product's quality, and retention is the first step to growth.

Engineering has always been an expensive talent in organizations. Whether it was for silicon chips or software, it was a scarce skill always in demand. The old paradigm of product development classified these skills as costs, making sense in that era. However, for product companies selling software and other digital goods, engineering should be seen as a lever for revenue generation.

Newer organizations, especially in software, still treat engineering as a cost, allocating resources based on the cost saved by deploying this skill across the organization. Thus we see hundreds of companies building their own recurring-subscription infrastructure when multiple SaaS companies already exist to solve this problem. Non-customer-facing development improves efficiencies within the organization, but allocating the same resources to customer-facing development could have a significantly greater RoI, with internal tooling handled by bought software, or with as little time as possible spent building internal (non-customer-facing) tools.

What we think this means is that org and incentive structures need to change. It is not just a sales and marketing team generating revenue but also teams of product managers and engineers responsible for revenue growth. This may sound too harsh given how teams are structured now, but with better focus and newer org structures, this will only seem natural.

Our experiences and conclusion

We think it is time for companies to rethink their build-vs-buy decisions and consider product and engineering as revenue generators. Opportunity costs, in whatever form, must also be taken into account when making these decisions. In the new age, the product will determine the winner, and it is essential to align your teams on this mission.

Building Appsmith, we have seen disproportionate returns from keeping engineering's primary focus on customer-facing features and requests. We have seen impressive growth over the last few months from pure product improvements and additions (support included, of course, handled by our engineers!). Maybe we were lucky: we are building an internal app builder, so our team was forced to eat our own dog food. But this approach has worked well for us, and we think other organizations should consider it more seriously.

If you have any questions, suggestions, or feedback, please feel free to share them at akshay@appsmith.com

Cover Image Credits: Photo by Isaac Smith on Unsplash

How We Made Connecting Data to Widgets Much Easier

Vishnupriya Bhandaram

What do we want our users to do on Appsmith? The obvious answer is that we want them to build fantastic internal apps for their organizations super fast. But what we're doing solves a more significant and perhaps intangible problem: avoiding repetitive and tedious grunt work. Developers don't want to do this, and we want to enable them to get to solutions faster. For this, Appsmith needs to be smarter and better too. The first step is to have a smooth onboarding experience — it's good for our users and great for us!

However, we noticed that users couldn't easily find critical elements essential to understanding how Appsmith works.


Our data connectors were hidden, and there was no obvious way of accessing them. There were a few more glaring pain points; for example, we felt that our widgets with pre-filled data discouraged users from playing around with the platform. And even when users landed on the queries section, the flow was confusing, and switching between data queries and widgets was non-intuitive. The product was not guiding users in the intended direction; to address all this confusion and more, we changed the navigation experience.

Simply put, our overarching goal with this update was to get people to connect their data to the UI.

Widget ➡️ Datasources ➡️ Querying Data ➡️ Connecting Widget

With this flow in mind, we made several changes to the navigation experience.

Users can now:

  • Connect data sources faster
  • Find the right widget for available data
  • See relationships between UI and Data clearly

In this blog, we will talk about Appsmith's new navigation experience and how our design and platform pods went about the production process. Hopefully, you'll also get an inside look into the collaborative engineering and design practices here at Appsmith!

Merging APIs and DB Queries under Datasources

When a user is in an exploratory stage, what do they do first: build UI or connect data sources? For us, it was a bit of a chicken-and-egg situation. We wanted to limit the fields so as not to overwhelm users with too many options, and to break the flow into smaller, digestible steps.

Having APIs as a separate entity was counterintuitive. Merging them under Datasources makes discovery easier and helps the user understand that all data comes from a data source, whether it's an API or a database query.

Along with this, we've also added subtle nudges and prompts such as this:


This reinforces the importance of connecting data sources to get apps working. ‍

The Right Widget for Your Data ‍

If you're confused about finding the best way to represent available data (a chart or a table?), the Appsmith platform can now predict the best widget based on your data.


This feature helps users narrow down options based on the type of data they have, speeding up the process of building an app. Earlier, users had to add the data source, write the query, go back to the canvas, scout for the right widget, (deep breath) drop it, and then bind it. With the new and improved flow, users add data sources, write queries, and select recommended widgets; Appsmith does everything else!

Clarity on Entity Relationships‍

Once you're inside a widget, it's essential to see how your data is linked. We realized that with multiple data sources and queries in an application, it can sometimes be hard to navigate between them, so we've added a way to see entity relationships more clearly. With the new navigation experience, all the defined queries are listed on the widgets under incoming entities, and you can bind them directly without writing JS bindings. When the widget performs any action, for example when a button is clicked or a row is selected, the queries it triggers are shown under the outgoing entities section.


Make things more obvious

If there's one thing we've come to believe after spending almost two months on this update, it's that keeping things simple tends to go a long way. This reflects even in the copy:

Bind Data — Connect Data

APIs and DB Queries — Datasources

We introduced slash commands as a quicker, simpler way to connect widgets to a data source. Now you can trigger commands by typing "/" and use them anywhere you're writing Javascript (within moustache bindings) on Appsmith.


Slash commands also give developers a place to start. They are the way to initiate writing custom code, and our auto-complete feature speeds up the process!

Our efforts with this new navigation experience were to make things simple and enable our users to find and understand features easily. As an open-source organization, we will always be a work in progress powered by our incredible community. We will strive to keep improving Appsmith!

If this new navigation experience helped you, do let us know! As always, we would love to learn more about you and what you're building with Appsmith; please feel free to email me (vishnupriya@appsmith.com).


Evaluating JS in the Browser for a Low Code Product

Hetu Nandu

Appsmith is an open-source low code platform for developers to build internal apps and workflows.

In Appsmith, our developer users define business logic by writing Javascript code between {{ }} dynamic bindings almost anywhere in the app. They can use this while creating SQL queries, APIs, or triggering actions. This functionality lets you control how your app behaves with the least amount of configuration. Under the hood, the platform evaluates all this code in an optimized manner to make sure the app remains performant yet responsive.

Let us take an example of binding a query response to a table widget.

It all starts with the binding brackets {{ }}. When the platform sees these brackets with some code inside, in a widget or action configuration, it flags the field as a dynamic field so that our evaluator can pick it up later. In our example, let us bind usersQuery to usersTable.


Since we have added this binding in the tableData field, we flag the field and store it in our widget config:

// usersTable config
"usersTable": {
    "tableData": "{{ usersQuery.data.map(row => ({
        name: row.name,
        email: row.email
    })) }}",
    "dynamicBindingPathList": [
        { "key": "tableData" }
    ]
}
In the background, our evaluation listener always keeps a lookout for events that need an evaluation. Our example is a scenario that definitely needs one, so it kicks off the evaluator.

We pass our current list of app data, constructed into what we call a DataTree, to the evaluator thread and patiently wait to hear back from it ⏱

// DataTree
{
    "usersQuery": {
        "config": {...},
        "data": [...]
    },
    "usersTable": {
        "tableData": "{{ usersQuery.data.map(row => ({
            name: row.name,
            email: row.email
        })) }}",
        "dynamicBindingPathList": [{ "key": "tableData" }]
    }
}

For performance reasons, we run our evaluation process in a separate background thread with the help of web workers. This ensures that evaluation cycles running longer than 16ms do not hang up the main thread giving the app bandwidth to always respond to user events.

Inside the thread, the event listener gets a wake-up call and gets to work.

  • Get differences: First, it calculates the differences in the DataTree since the last run. This ensures we only process changes, not the whole tree. In our example, usersTable.tableData has changed and usersTable.dynamicBindingPathList has a new entry. It takes each difference, filters out unimportant changes, and processes the rest.
  • Get evaluation order with dependency map: The evaluator also maintains a DependencyMap between entity properties. It notices when any bindings have changed and recreates the sort order accordingly. For our example, it infers that usersTable.tableData now depends on usersQuery.data. This means the query response must always be evaluated before the table data, and whenever the query response changes, the table data must be re-evaluated as well.
  // DependencyMap
  {
      "usersTable.tableData": ["usersQuery.data"]
  }

  // Evaluation order
  ["usersQuery.data", "usersTable.tableData"]
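The evaluation order can be derived from the dependency map with a topological sort: visit each property's dependencies before the property itself. A small sketch of the idea (illustrative only; cycle detection and the filtering the real evaluator does are omitted):

```javascript
// Derive an evaluation order from a dependency map, where each key maps a
// property to the properties it depends on. Dependencies come out first.
function evaluationOrder(dependencyMap) {
  const order = [];
  const visited = new Set();

  function visit(node) {
    if (visited.has(node)) return;
    visited.add(node);
    // Visit dependencies first so they land earlier in the order.
    for (const dep of dependencyMap[node] || []) visit(dep);
    order.push(node);
  }

  Object.keys(dependencyMap).forEach(visit);
  return order;
}

const order = evaluationOrder({ "usersTable.tableData": ["usersQuery.data"] });
// order: ["usersQuery.data", "usersTable.tableData"]
```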
  • Evaluate: After creating an optimized evaluation order, we evaluate and update the tree in that order. Evaluation happens via a closed eval function with the whole DataTree acting as its global scope, which is why we can directly reference any object in the DataTree in our code.
  // Evaluator
  const code = `
      usersQuery.data.map(row => ({
          name: row.name,
          email: row.email
      }))
  `;

  const scriptToEvaluate = `
      function closedFunction () {
          const result = ${code};
          return result;
      }
      closedFunction();
  `;

  const result = eval(scriptToEvaluate);
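A runnable sketch of the same idea: building a function whose parameters are the DataTree's entity names gives the binding code direct access to objects like usersQuery, as if they were globals. (This is an illustration of the scoping trick only, not Appsmith's actual evaluator; it assumes entity names are valid identifiers, and the real implementation does much more, including sandboxing.)

```javascript
// Evaluate a binding expression with the DataTree entities in scope
// (a sketch of the closed-scope idea, not the actual implementation).
function evaluateBinding(code, dataTree) {
  const names = Object.keys(dataTree);            // e.g. ["usersQuery"]
  const values = names.map((name) => dataTree[name]);
  // The binding can reference any entity by name, as if it were global.
  const fn = new Function(...names, `return (${code});`);
  return fn(...values);
}

const dataTree = {
  usersQuery: { data: [{ name: "Ada", email: "ada@example.com" }] },
};

const result = evaluateBinding("usersQuery.data.map(row => row.name)", dataTree);
// result: ["Ada"]
```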
  • Validate and parse: We always make sure the values returned after evaluation are of the data type the widget expects. This ensures the widget always gets predictable data, even if your code returns errors. It also guarantees that any function later in the evaluation order that refers to this field gets a reasonable data type to work with.
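That validation step can be sketched for the table widget as follows (the function name and the fallback default are assumptions for illustration, not Appsmith's actual validators):

```javascript
// Ensure the evaluated value matches the type the widget expects; fall back
// to a safe default so downstream consumers always see a predictable shape.
function validateTableData(value) {
  if (Array.isArray(value)) {
    return { isValid: true, parsed: value };
  }
  return { isValid: false, parsed: [] };
}

const ok = validateTableData([{ name: "Ada" }]); // valid: passed through as-is
const bad = validateTableData("not a table");    // invalid: safe default []
```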

And that completes it. At the end of this, we will have a fully evaluated DataTree that we can then send back to the main thread and start listening for any new event to do this whole process again.

// Evaluated DataTree
{
    "usersQuery": {
        "data": [...]
    },
    "usersTable": {
        "tableData": [...]
    }
}

Our main thread gets an event saying the evaluation is complete, along with the new evaluated DataTree, which it stores in the app's redux state. From here, the widgets pick up their data and render it.


Summarizing our philosophy

  • Pull vs Push: While building a low-code app builder for varied developers, we thought hard about how written code works with the rest of the platform. We wanted configuration to be easy to start with, yet powerful when it needed to be. For this reason, we went with a pull-based architecture rather than a push-based one. This means that in most places, you won't have to think about how data gets to a field: you write code that pulls everything it needs from the global DataTree, and it is set on the field where you write it. The moment the underlying data changes, the change propagates to all the fields dependent on it, and you as a developer do not have to orchestrate UI changes.
  • One-way data flow: Since we are built on top of React.js and Redux, we strongly embrace the one-way data flow model. This means you cannot set a table's data directly from some other part of the app. If you need to update the table, you trigger the query to run, which then causes the table to re-render with the new data. This keeps the code you write easy to reason about and bugs easy to find. It also encapsulates each widget's and action's logic in itself, for a good separation of concerns.

5 Tips for Beginners to Learn Better and Stay Motivated

Arpit Mohan

Recently I met my college friend Aditya Rao. I've known him for more than a decade and always thought of him as a business & marketing leader. I was surprised to hear that he's been learning programming for more than 2 years now. I got curious to know about his experience of learning to code and we ended up chatting for about 2 hours about his journey as a 30-year-old beginner developer.

Here are a few tips he shared that helped him learn better and stay motivated through the course. I think other beginners will find these tips useful too, so I am sharing them here.

These are Aditya’s words, slightly edited by me for better readability.

1. Get rid of self-imposed starting barriers I took two computer programming courses during my undergraduate studies and I failed both. For a really long time, I thought that programming is some black box that's too complex & hard. This skewed view created a starting barrier for me. I know many people who are completely new to programming feel the same way.

After 3 years, I can say that programming is anything but a black box. It is just beautiful. Something as simple as CSS is truly magical. Everyone has different mental barriers and most of those are self-created. Don’t get intimidated by these self-imposed views.

2. Be clear about your end goal When I was just starting out, an engineer friend told me, "When programmers can’t understand an error message in their code, they go and search the internet to figure out what that error is. There are probably other engineers out there who have faced and solved the same problem before. So, they take that solution and try it out. Of course, they have their fundamentals in place but everyone is still learning on the go.”

Learning on the go made sense to me. Moreover, it was quite liberating to hear this. I took this approach to learning & told myself - ‘I am not here to learn coding, I am here to solve a problem’. This approach of focusing on solving the problem at hand has empowered me to learn faster & better.

3. “You can get anywhere if you simply go one step at a time.” Pick up a small problem. Take out one weekend and just start solving it. The key is to get a small win and then to keep stacking up these small successes. The best thing about code is that it is repeatable. If you have a small piece of code that solves a small problem, you can always extend that to solve a larger problem later on.

Engineers take a big problem and break it down into smaller & simpler steps quite beautifully. If the small solutions for each step work, they can be put together to achieve a larger goal. This is the most valuable life skill I have learned while learning to code.

4. Ask for help & unblock yourself ASAP Coding isn't hard by itself, unless you are building truly breakthrough technology or the world's next best search engine. But there are hard parts, such as setting up AWS and other infrastructure as needed. You will need a lot of help with these things. Always be ready to seek help.

When I asked my first question on StackOverflow, I got 5 downvotes on it. I was genuinely trying to understand something and people were telling me that I was not asking the question in the right way. It was demotivating for me as a beginner. Even if such things happen a couple of times, don't let random people deter you from asking for help from experienced engineers. The Internet, my engineering friends, and colleagues have helped me learn the most.

5. Build something useful to stay motivated I am a big proponent of the no-code movement. Technology should be like a bunch of Lego blocks anyone can play around with. Kids don't think about how a Lego block gets made, what material is used or what its tensile strength is. They just use it to build something they want. I am sure there are people out there who care about the perfect piece of code. I have no benchmark for what is good code or bad code. The only benchmark I have is to build something that people find useful. I feel successful when I build something and people value it.

Check out Aditya’s latest side project, TimeSpent.

What a CMU Professor Thinks About Failure and Future of Work

Arpit Mohan

Chinmay Kulkarni grew up in Bengaluru, India. He doesn’t recollect when he used a computer for the first time but vividly remembers that, "the first time we were taught anything to do with computers at school was with LOGO, which was sort of this drawing thing."

Playing around with the ‘turtle cursor’ of LOGO was the humble beginning of Chinmay’s experiments with computers. He took on a Computer Science major during his undergraduate at BITS, Pilani and even went on to pursue a Ph.D. in Computer Science from Stanford University. One of his friends from undergraduate days describes him as "the guy with a knack to simplify & explain things. He was just really good at breaking down things to their core fundamentals."

Today, Chinmay is an Assistant Professor of Human Computer Interaction at Carnegie Mellon University. He directs the Expertise@Scale Lab there that is trying to answer a prevailing question concerning the future of work in the age of automation.

"If we have better & better technology and greater & greater automation, what should people learn and what should people work on."

As this research matures, its findings will suggest new skills people should learn and technologies we should develop, so that meaningful & interesting work & learning opportunities can exist for millions of people in a future that isn't only remote but also intertwined with lifelong learning. In this future, where learning opportunities need to be available at massive scale, Chinmay stresses the usefulness of conversations among peers: the non-experts, the people who are themselves learning or working in the same space.

In practice, research from his group has resulted in computational systems that structure peer learning at a massive scale. This includes creating the first MOOC-scale peer assessment platform and building PeerStudio, a comparative peer-review system. These systems and the associated pedagogy have been used by 100,000+ learners in MOOCs & thousands of students in university classrooms and have been adopted by companies such as Coursera and edX, in classes across disciplines including computer science, psychology, and design.

Heather McGowan, Future of Work Strategist, succinctly describes a future in which not only physical work but also cognitive work will be automated:

"We need to stop learning ‘a set of skills’ in order to work. Instead, we need to learn to learn and adapt."

Learning to learn means becoming good at being a beginner, and not only embracing failure but seeking it out in order to improve. Chinmay embraces this ideology through constant parallel experimentation.

He says, "If I have an idea, I try three or four different ideas in a similar space. Some of them are bound to fail but, in contrast, I can see some of them succeed. You start out thinking all of them will succeed. In some way, it is useful information: the things you thought would succeed but didn’t give you a nice baseline to compare against the things that did succeed."

When he writes article drafts, he usually writes three different outlines, sends them all to people and asks them which one they like more. He says, "I know that two-thirds of my work is going to be thrown away, so I don’t spend too much time doing it. But on the other hand, once I have done this I can very quickly find things that don’t work and discard them." This way of learning seems quite logical to him, but he has also noticed that people don’t do parallel experimentation very much.

It is not surprising that humans stay away from trying out different ideas in parallel. Embracing this mindset of experimentation is innately bundled with accepting multiple failures at the same time.

Chinmay admits that it is a lot easier for a researcher to fail than it is for people whose jobs are to not fail at something. He says, "As a researcher, you have it a little easier. You are expected to fail. And also just because something you do fails people don’t think of you as a failure. Even the things you try that don't work have some merit."

He suggests that a simple change of perspective in how we look at what we do can enable us to embrace failure much better. He says, "I think you can think about things that you do as a series of experiments rather than a series of missions that you are trying to complete. Experiments always have some chance of failure. So, just by thinking of things as experiments you give yourself a chance to say, 'okay, maybe this is not going to work and that’s fine'. If you think about it like a mission, then you invest too much of your self-worth in succeeding."

Some of the most successful companies and professionals experiment and fail all the time. In one of Jeff Bezos’ letters to Amazon shareholders, he expounds on this:

"One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins. To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there. Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten. We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold. Big winners pay for so many experiments."

Chinmay recognizes that it is harder to fail for people whose jobs require them to succeed and that everyone has a different way of looking at and dealing with failure. He ponders, "The real question isn’t ‘are you okay with things failing’ but what you do when they fail. You can either have somebody learn from your mistakes or you can learn from them yourself. If you are really smart you can learn from other people’s mistakes too."

Is it Worth Joining an Early-stage Startup?

Arpit Mohan

This post is an attempt to put some method to the madness behind deciding whether joining an early-stage startup is worth it.

"The short answer is that it depends."

For the long answer — I have a framework to share that has helped me decide this multiple times in the past. Over the last 10 years, I have co-founded two startups, Gharpay & Bicycle AI, and worked at a few early-stage startups such as Exotel and Cure.Fit. About a month ago, I started building my third startup, Appsmith.

The challenge of working on new problems for new markets is a heady combination for me. But each time I started up or joined an early-stage startup, I found myself asking, “Is it worth it?” Though my answer has always been a resounding “yes”, my reasons behind the “yes” were different each time. I found that clarity on my reason behind the decision was critical.

We all have different life contexts, priorities, personalities and dreams. These unique factors influence our reasoning for choosing one career path over another. You shouldn’t let anyone tell you to ignore this stuff and just jump in.

So, here is my framework. It is quite simple and focuses on a few fundamentals.

Step 1: Ask three questions about yourself and answer honestly.

Step 2: Explore three reasons why it may be a bad idea to join an early-stage startup.

Step 3: Explore three reasons why it may be a fantastic idea to join an early-stage startup.

Step 1: The Three Critical Questions

1. What kind of life do you want — right now and in the long term?

The answer depends heavily on your personal and professional goals. What are your priorities? Are you willing to invest a considerable amount of mental energy & time in your professional life right now? How does working at an early-stage startup fit within those priorities? Does it help or hamper your progress towards your long-term goals?

Personally, if I have a lot going on on the personal front, I will want to prioritize that and hold out on making major commitments on my professional front. Try getting some clarity on what your priority is right now & what you want from life in the long term. Clarity of thought acts as a great north star for all the decisions that you will end up taking.

2. What kind of a person are you?

Early-stage startups are not better or worse than late-stage startups or even large corporations. They are just different. And from what I have noticed, most people who fit in and perform well at early stages have a few overarching traits.

They are generalists. They look at their work as their craft and take great pride in it. They love challenges and are self-motivated to figure things out. They are quite comfortable saying ‘I don’t know’. They ask for help without hesitation. They also index heavily on finding a solution instead of dwelling on the problem. Last but not least, they are resilient.

Do parts of it seem like you? Look within and answer candidly.

3. What do you expect to gain from it?

Begin with the (expected) end in mind. What kind of professional growth do you need? What are your “must meet” and “good to meet” expectations from yourself at this point? Define these clearly. Take a step back and ask yourself — why are you even thinking of joining an early-stage startup in the first place?

If you don’t know what output you want, you can’t really decide what input to give and how to program the system, can you? State clearly what you want to gain from the experience.

Step 2: Why joining an early-stage startup may be a bad idea

1. Ambiguity gets a seat on the table

"No company (of any scale) has everything figured out."

The earlier a company is in its lifecycle, the more unanswered questions there are. The work environment at early-stage startups can be a little (or very) chaotic because of this inevitable ambiguity.

You will not face much ambiguity on a daily basis at a late-stage startup or bigger company because they have already gone through their ambiguous phase. Whatever ambiguities are left to be figured out exist at the management level while you are shielded from it.

Everyone at an early-stage startup must embrace ambiguity. If you are not okay working with some level of uncertainty & ambiguity, early-stage startups may not be a good choice for you.

2. Get it right and get it fast, please.

"You can have something good or you can have something right now but you can’t have something good right now."

Early-stage startups don’t subscribe to this thought process. You’ve got to deliver on everything super fast. You’ve got to think fast, plan fast, build fast, ship fast and iterate fast. The company’s survival & success depend on how quickly it can execute & iterate on multiple things simultaneously. Some people choose to accomplish this by working longer hours, while others choose to do it by creating leverage (a topic for another day).

Timelines are mostly tight. You’ve got to deliver things right and you’ve got to deliver them fast. If you don’t subscribe to this work-style or if you feel this may stress you out, early-stage startups may not be a good professional choice.

3. ROI is subject to market risk and yes, it takes a long time to get any returns

The answer to ‘will your investment reap financial returns’ is always a probabilistic one. Same is the case with startups. Reaping any sizeable financial returns on your equity or ESOP (Employee Stock Option Pool) takes quite a bit of time. Standard equity or ESOP vesting periods span over four years.

Yes, there is a potential of high returns (more on this later) but you must also consider any financial trade-offs you are making. And don’t ignore the time frame of any expected ROI. Most early-stage VCs invest with a 10-year horizon.

"A garden takes time to cultivate before you see the flowers bloom. Early-stage startups are definitely not get-rich-quick schemes. "

If you need to make a lot of money quickly, early-stage startups are not your best bet.

Step 3: Why joining an early-stage startup may be a fantastic idea

1. A free ‘personal growth 101’ class

There is a lot to be done and everyone has limited bandwidth. You are mostly on your own. You will need to figure out how to do things yourself. How do you make technical choices? How do you sell the product to a potential customer? How do you pitch the team and its culture to a potential candidate? How do you say no? How do you prioritize for maximum output & outcomes?

There are no manuals to refer to or company best practices to follow at early-stage startups. You’ve got to write these yourself. The learning curve is really steep when you get down to laying the foundation. Once you do these things from scratch, you’ll realize that you can figure out most things in life. You don’t have to rely on external folks/factors to accelerate learning. This builds a lot of self-confidence.

"Startups test your character and your core beliefs. "

How do you react when you are under pressure? What choices do you make when nobody is watching? How do you handle rejection from investors and customers? Are you able to take critical feedback and improve? You will uncover a lot about yourself while navigating such choices.

2. Diverse exposure

Are you an engineer? Great! You also have to pitch and sell the product to early customers. Are you a sales ninja? Cool! Please pitch in for writing the social media posts too. You do marketing? Awesome! Can you get on some customer feedback calls as well?

Sure, late-stage startups and large enterprises allow you to develop vertical depth of subject matter. But early-stage startups equip you with practical knowledge of how different verticals work and how they interact with each other to form a well-functioning organisation.

Early-stage startups push you to get out of your comfort zone regularly by exploring things out of your domain.

This experience will equip you with a lot more tools in your professional kitty. This diverse exposure is really useful in the job market of today and of the future.

3. Low risk, high reward investment (equity/ESOP)

I understand the value of equity wealth. In fact, one of the main reasons that I quit my day job to start up was to generate equity wealth.

Typically, early team members at startups get 0.2%–2% equity (depending on your experience, contribution, stage of the company, etc.). Early equity is given for risk rather than contribution. That’s why a founder’s equity is much higher than that of early team members.

Let’s run the numbers and see how early equity pans out in three scenarios. The assumption here is that you work at a fictitious startup that’s paying you a salary of 100K USD per year along with a total of 100K USD stock options (vested over 4 years).

Scenario 1:


Assuming the startup’s valuation grows 2x each year for all four years of your vesting period, your equity will be worth 1.6 million USD (100K USD × 2⁴). In contrast, you’d only make 400K USD as salary over the 4 years.

Scenario 2:

Assume the valuation grows 2x each year for the first two years, after which it grows 1.5x each year. In this case, your equity will be worth 900K USD (100K USD × 2² × 1.5²).


Scenario 3:

Let’s take an even more conservative approach and assume that the valuation only increases 1.5x each year for the entire 4-year duration of your vesting period. Even in this case, your equity will be worth 506,250 USD (100K USD × 1.5⁴).


While the gap between equity wealth and salary wealth has narrowed down significantly from scenario 1 to scenario 3, equity wealth is still higher than the overall salary for a startup walking a conservative growth path. Of course, the equity wealth can reduce to zero too if the startup shuts down or the valuations take a downward spiral.
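The three scenarios above all follow a single formula: multiply the value of the options grant by each year’s valuation-growth multiplier. A quick back-of-the-envelope sketch (it deliberately ignores dilution, taxes and strike price, all of which would lower the real payout):

```python
# Back-of-the-envelope value of a stock-option grant under a sequence of
# yearly valuation-growth multipliers (ignores dilution, taxes, strike price).
def equity_value(initial_grant, yearly_growth):
    value = initial_grant
    for multiplier in yearly_growth:
        value *= multiplier
    return value

grant = 100_000  # USD in options, vested over 4 years

scenario_1 = equity_value(grant, [2, 2, 2, 2])          # 2x every year
scenario_2 = equity_value(grant, [2, 2, 1.5, 1.5])      # 2x, then 1.5x
scenario_3 = equity_value(grant, [1.5, 1.5, 1.5, 1.5])  # 1.5x every year

print(scenario_1)  # 1600000
print(scenario_2)  # 900000.0
print(scenario_3)  # 506250.0
```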

Since most startups pay competitive salaries, ESOPs give you an extreme financial upside while limiting the downside. This ensures you can pay your mortgage, send your kids to school and also save for the future.

In light of the context above, you should have more clarity on whether the decision to join an early-stage startup is worth it for you.

I hope this helps you in reaching your decision. Happy to answer any other questions that you may have around early-stage startups. You can reach me at arpit@appsmith.com

P.S — In case you decide that you want to join an early-stage startup, we are hiring for engineering roles. Hit me up and let’s chat.