Building a Resilient Editor like Medium: Incremental Saves, Locking, and Failure Handling



This content originally appeared on Level Up Coding - Medium and was authored by Vlad Ogir

Designing the technical architecture behind reliable autosave and conflict resolution in modern web editors

Whenever an author makes a change to an article in Medium, this change is stored on the backend. If you open browser dev tools, you can see different requests flying back and forth, trying to store the latest state so that no data is lost.

But how do systems like this work? What challenges might you experience whilst building them?

In this article, I will design an editor similar to the one Medium has. I will cover incremental saves, a locking mechanism and failure states. I will look at this problem from both the backend and frontend perspectives, but I will skip some granular details that are not strictly tied to the overall mechanism, such as the WYSIWYG editor itself.

Sounds interesting? Let's buckle in!

Context

Let's imagine we are working on blogging software and are tasked with creating an editor like the one in Medium!

Requirements are:

  • We must have an autosave functionality
  • An article can be edited by one person at a time

We work in an agile environment and have no time to waste! So, we spin up our bread-and-butter setup and go off to deliver value.

We’ve added an endpoint to store all changes. This endpoint is called every minute to persist the current state. For the database, we created a table with id, title and content as columns.

Below is the endpoint definition:

PATCH /api/post/:postId/content

Payload:
{
  "content": "Lorem Ipsum"
}

Returns 201
---
GET /api/post/:postId

Response:
{
  "title": "Hello World!",
  "content": "Lorem Ipsum"
}
Overview of the system

Problems

This initial MVP delivers value but has some problems:

  • Every time we store data, we overwrite the whole content. One bad write, for example one that loses a race with another save request, can wipe out the entire article.
  • Content can get quite large for some articles. Extra bytes are being transferred when there is no need for them.
  • We don't have things in place to handle failures. What if the user’s internet goes down?

With the problems outlined, let’s dive in and address them.

Incremental saves

In this section, we will dive into addressing the problem of storing the whole content. We will break the content up and store individual chunks, enabling us to store only the chunks that have changed.

Chunking Content

To address an issue with a large content size, we can split the content up into sections.

Let’s look at Medium’s editor:

  • We have multiple sections.
  • Each section can be a paragraph, image, code block, etc.
  • Styles can be applied to each section. For example, we can turn text into a heading.

To allow for this split, we will adjust the save endpoint definition:

PATCH /api/post/:postId/content

Payload:
[
  { "content": "First Paragraph" },
  { "content": "Second Paragraph" }
]

Returns 201

With the new payload, we are sending individual sections across. But, it is now the frontend’s responsibility to split the data up.

As part of this split, we also need to update the database schema. We will extract content from the posts table into its own table called post_content, where we will store individual chunks.

Updated tables within the database

Identifying individual chunks

We are still sending the whole content on each endpoint call. To make sending a chunk optional, we need a way of identifying individual chunks.

To solve this, we can include an index for every content chunk. Just like with arrays, the index will represent where in the array this content is. This way, on the frontend, we can start sending only sections that have changed.

The new endpoint definition:

PATCH /api/post/:postId/content

Payload:
[
  {
    "index": 2,
    "content": "Second Paragraph"
  }
]

Returns 201

With this approach, we are adding further responsibility to the frontend to keep track of chunks that have changed.
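That frontend responsibility can be sketched as a small diff helper. This is an illustrative sketch, not Medium's implementation; the Chunk shape and function name are assumptions:

```typescript
// Hypothetical shape of a content chunk held by the editor.
interface Chunk {
  index: number;
  content: string;
}

// Compare the last-saved chunks with the current editor state and return
// only the chunks whose content changed. Chunks missing from the saved
// state (newly added) also count as changed.
function changedChunks(saved: Chunk[], current: Chunk[]): Chunk[] {
  const savedByIndex = new Map<number, string>();
  for (const c of saved) savedByIndex.set(c.index, c.content);
  return current.filter((c) => savedByIndex.get(c.index) !== c.content);
}
```

The save timer would then send only `changedChunks(lastSaved, editorState)` instead of the whole article.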

How would adding a new section work?

We’ve covered updates (a chunk has to exist before it can be updated), but how would adding a new section work?

We have several scenarios that we’d need to cover:

  1. Adding a section when we have no data. The frontend has to figure out what index to insert the data at.
  2. Adding a section to the very end of the list. The frontend has to know what the “last index” should be.
  3. Adding a section to a taken index. The backend should return an error.
  4. Adding a section to an index that is greater than the last possible index. The backend has to keep track of the index and return an error.
Visualisation of possible scenarios

In all scenarios, we are creating a dependency on knowing the next index. It would be nicer to let the backend control indexes and remove that complexity from the frontend, or at least make it non-blocking.

But before we touch that, one other scenario comes to mind: what if we want to insert a new paragraph between two existing paragraphs? If we apply how arrays work, we would need to bump up by 1 the indexes of all paragraphs at and after the insertion point. Only then can we insert the new paragraph.

Inserting a paragraph between two existing paragraphs

This can potentially be a heavy operation to do since it involves many database updates.

Two other approaches that can address this issue are:

  • order column
  • next_pointer column

We will cover each one next.

The order column approach gives each paragraph an integer, float or some other value that represents its position in the list. It’s easier to do, but it comes with its own complexities. (An order column can sound very much like an index, but indexes are generally expected to be integers.)

From personal experience, it’s easy to get hung up on keeping everything perfect: starting from order 1, having no gaps in the sequence, and so on. When changing the order between two paragraphs (say, paragraphs 2 and 7), the temptation is to keep a perfect order by rebalancing all the entries. But in truth, ordering is there to indicate position; it doesn’t have to be perfect (e.g., consecutive integers), as long as the numbers maintain the correct relative order (A.order < B.order if A comes before B).

So, what would happen if we move paragraph 2 after paragraph 7? Paragraph 2’s order should now be greater than paragraph 7’s and less than the next paragraph’s. That’s two selects and a single database update!

To retrieve data, you just need an ORDER BY on the order column and you get all the content in the right position. (Note that order is a reserved word in SQL, so the column needs quoting, or a name like position.)
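The move described above boils down to a midpoint calculation. Here is a minimal sketch; the function name is illustrative:

```typescript
// Illustrative sketch (not the article's backend code): compute a new
// "order" value for a paragraph moved between two neighbours, without
// rebalancing the rest of the list.
function orderBetween(prev: number, next: number): number {
  return (prev + next) / 2;
}

// Moving paragraph 2 after paragraph 7: read paragraph 7's order and the
// order of the paragraph that follows it, then write a single midpoint.
const movedOrder = orderBetween(7.0, 8.0); // midway between the neighbours
```

One caveat worth knowing: repeated midpoint inserts into the same gap eventually exhaust float precision, at which point a one-off rebalance of the affected rows is needed.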

An alternative approach is similar to the linked list data structure. We would give each paragraph a pointer to point to the next paragraph.

So, let’s apply it to our example. We will insert a new paragraph, point it at Paragraph 1’s current next pointer, and then update Paragraph 1 to point to the new paragraph. That’s one select, one insert and one update.

Visual of how a new paragraph can be inserted

This approach removes the need for numeric indexes; if anything, numeric indexes would make it more confusing. Instead, a short (6–8 character) hash key is a great alternative. From the frontend’s perspective, it just needs to state after which paragraph a new paragraph should go.

However, in some databases, integers may offer better query performance. There is also complexity in retrieving all paragraphs in the correct order. But some databases (MySQL, Postgres) have recursive queries (CTEs) that make easy work of it.
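To make the retrieval problem concrete, here is an in-memory sketch of walking a next_pointer chain; in SQL this would typically be a recursive CTE. The Row shape and field names are assumptions, not the article's schema:

```typescript
// Hypothetical row shape for the pointer-based approach.
interface Row {
  id: string;
  next: string | null; // null marks the final paragraph
  content: string;
}

// Reconstruct reading order by following pointers from the head paragraph.
function inOrder(rows: Row[], headId: string): Row[] {
  const byId = new Map<string, Row>();
  for (const r of rows) byId.set(r.id, r);
  const result: Row[] = [];
  let cursor: string | null = headId;
  while (cursor !== null) {
    const row = byId.get(cursor);
    if (!row) throw new Error(`broken chain at ${cursor}`);
    result.push(row);
    cursor = row.next;
  }
  return result;
}
```

Note that a broken pointer makes the rest of the document unreachable, which is exactly the "careful handling of pointers" cost mentioned above.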

Both approaches have their trade-offs, but in both cases, it will be up to the frontend to state where the paragraph should be positioned. This is why it is important to have an intuitive API that will allow for easier integration between the frontend and backend.

In summary, the order column approach offers simpler reads (a single ORDER BY) and potentially fewer updates for moves and inserts, while the next_pointer approach simplifies inserts and deletes but complicates reads and requires careful handling of pointers.

Going forward, I will focus on the order approach, but the same logic will apply to the next_pointer approach.

To add support for “insert” operations, we need to update our API. To achieve this, we can add support for both the order and the operation type with each chunk:

PATCH /api/post/:postId/content

Payload:
[
  {
    "id": 2,
    "content": "Second Paragraph",
    "order": 3.0,
    "operation": 2
  }
]

Returns 201

The order is self-explanatory: a float that we use to set the position. The operation can be 1 for insert and 2 for update. Also, to remove confusion between index and order, I have renamed index to id.
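For clarity, the payload entries can be modelled with illustrative TypeScript types and a small validator; the numeric codes mirror the API above, but the type names and validation rules are assumptions:

```typescript
// Operation codes used by the save payload.
enum Operation {
  Insert = 1,
  Update = 2,
  Delete = 3,
}

// One entry in the PATCH /api/post/:postId/content payload.
interface ChunkChange {
  id: number;
  operation: Operation;
  order?: number;   // position; required for inserts and moves
  content?: string; // omitted for deletes
}

// Deletes only need an id; inserts and updates must carry content and order.
function isValidChange(c: ChunkChange): boolean {
  if (c.operation === Operation.Delete) return true;
  return typeof c.content === "string" && typeof c.order === "number";
}
```

A backend handler would reject the whole batch (or the offending entries) when `isValidChange` fails.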

Where does this leave delete?

Delete will operate similarly to update: a paragraph has to exist for it to be deleted. A great thing about the order approach is that we don’t need to do any rebalancing.

We will need to update our API to add support for the delete operation, represented by the integer 3.

PATCH /api/post/:postId/content

Payload:
[
  {
    "id": 2,
    "operation": 3
  }
]

Returns 201

Failure states

What if the user loses an internet connection? What if the backend or the database goes down? What if we have an error on the backend? What if a user closes the browser?

From the backend’s perspective, because the database is its single dependency, requests either succeed or fail. From the frontend’s perspective, the backend is its dependency: if the backend server is down, it’s up to the frontend to handle it gracefully.

As a result, the frontend needs to keep track of changes made before storing them. If a failure happens, changes will be kept on the frontend until the backend recovers.

We can also add retries and backoffs on the frontend. But we need to take other users into consideration: others might be in the same position, and collectively the frontends may end up DDoSing the backend servers.
Frontend has to capture all the changes before persisting them on the backend

In the worst case, if the backend is down for too long, we can set a threshold after which we disable the editor and stop capturing changes. We would let the user know that the system is down and that changes cannot be saved: copy your content manually or risk losing it!
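A minimal sketch of such a backoff schedule, assuming a hypothetical helper that the save loop would consult between retries:

```typescript
// Sketch of a capped exponential backoff schedule for retrying failed
// saves. Function and parameter names are illustrative.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60000): number {
  // Double the wait on every attempt, but never exceed the cap.
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

Adding random jitter on top of this schedule (e.g. multiplying the delay by a random factor) spreads retries across users, so a recovering backend is not hit by everyone at once.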

Tracking Frontend Changes Before Saving

Let’s walk through a scenario to illustrate how the frontend can track changes in a local store:

Initial State

Assume the editor loads with the following two paragraphs fetched from the API:

[
  { chunk_id: "iasd1", order: 1.0, content: "Foobar" },
  { chunk_id: "iasd2", order: 2.0, content: "lorem ipsum" }
]

At this point, the local store of pending changes is empty.

First Set of Changes

The user performs several actions:

  • Inserts a new paragraph: frontend generates a unique ID (iasd3), determines its initial order (e.g., 3.0, following iasd2), and adds it to the local store marked as an 'insert' (operation: 1) with empty content.
  • Updates content of iasd2: The user changes the content to “hello world”. The frontend marks iasd2 in the local store as an 'update' (operation: 2), storing the new content.
  • Updates content of iasd1 and moves it: The user changes the content to “hello world” and moves it to be between iasd2 (order 2.0) and the new iasd3 (order 3.0). The frontend calculates the new order using a method like midpoint calculation ((2.0 + 3.0) / 2 = 2.5). It then marks iasd1 in the local store as an 'update' (operation: 2) storing both the new content and the new order 2.5.

Conceptual State of Local Change Store:

[
  { chunk_id: "iasd3", operation: 1, order: 3.0, content: "" },
  { chunk_id: "iasd1", operation: 2, order: 2.5, content: "hello world" },
  { chunk_id: "iasd2", operation: 2, order: 2.0, content: "hello world" }
]

This payload will later be persisted on the backend.

Performing Further Changes

Before the next save occurs, the user continues editing:

  • Updates the new paragraph iasd3: Text (“We updated text…”) is added. The frontend will override content for iasd3 in the local store. It will retain operation: 1 to signify it still needs creation on the backend.
  • Deletes Paragraph 1: iasd1 is deleted. The frontend marks iasd1 in the store for deletion (operation: 3).

Conceptual State of Local Change Store:

[
  { chunk_id: "iasd3", operation: 1, order: 3.0, content: "We updated texts..." },
  { chunk_id: "iasd1", operation: 3, order: 2.5 }, // Delete takes precedence
  { chunk_id: "iasd2", operation: 2, order: 2.0, content: "hello world" }
]

Last Change Before Save

  • Moves Paragraph iasd3: The user moves the paragraph to a position before iasd2 (order 2.0). The frontend calculates the new order. Using the midpoint method between the conceptual start (order 0) and iasd2 (order 2.0) yields (0 + 2.0) / 2 = 1.0. The order for iasd3 is updated in the local store.

Final Conceptual State of Local Change Store (Ready for Save):

[
  { chunk_id: "iasd3", operation: 1, order: 1.0, content: "We updated texts..." },
  { chunk_id: "iasd1", operation: 3, order: 2.5 },
  { chunk_id: "iasd2", operation: 2, order: 2.0, content: "hello world" }
]

Generating the API Payload

When the autosave timer triggers, the frontend takes this final accumulated state and sends the payload for the PATCH /api/post/:postId/content request. This payload contains the necessary information for the backend to apply the net effect of all the user's changes.
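The merge rules from this walkthrough can be sketched as a small reducer over the local store. Shapes and names are illustrative assumptions; the rules follow the walkthrough: edits to a not-yet-created chunk keep operation 1, and a delete replaces any pending change for that chunk:

```typescript
type Op = 1 | 2 | 3; // insert, update, delete

// One pending change in the local store, keyed by chunk id.
interface PendingChange {
  chunkId: string;
  operation: Op;
  order?: number;
  content?: string;
}

// Fold a new edit into the store so only the net effect is sent on save.
function mergeChange(store: Map<string, PendingChange>, change: PendingChange): void {
  const existing = store.get(change.chunkId);
  if (!existing) {
    store.set(change.chunkId, change);
    return;
  }
  if (change.operation === 3) {
    // Delete takes precedence over any pending insert or update.
    store.set(change.chunkId, { chunkId: change.chunkId, operation: 3, order: existing.order });
    return;
  }
  // Merge new content/order into the pending change, keeping operation 1
  // if the chunk still needs to be created on the backend.
  store.set(change.chunkId, {
    chunkId: change.chunkId,
    operation: existing.operation === 1 ? 1 : change.operation,
    order: change.order ?? existing.order,
    content: change.content ?? existing.content,
  });
}
```

On save, `[...store.values()]` becomes the PATCH payload; the store is cleared only once the backend confirms the write.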

Multiple tabs

The last requirement to address is locking the editor to a single user: only the author can edit an article. But nothing is stopping an author from opening an article in multiple tabs or on multiple devices.

Locking

The first thing that comes to mind is locking the current editing session. The lock would have a TTL and would be refreshed each time we persist changes. Should the editor be opened somewhere else, we would notify the user in the other tabs that the editor is now locked.

This adds overhead, since we need to update endpoints to support locking. But also, what if a user refreshes the current page? We need to store some type of token on the frontend that will allow us to bypass the lock.

Once the lock key is acquired, we can use it to persist data
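To make the TTL behaviour concrete, here is a minimal in-memory sketch; a real lock would live in a backend store (a locks table or a cache entry), and all names and shapes here are illustrative:

```typescript
// Hypothetical per-post lock record.
interface Lock {
  token: string;
  expiresAt: number; // epoch ms
}

// Try to acquire (or refresh) the lock for a post. Passing the current
// time in explicitly keeps the logic deterministic and easy to test.
function acquire(
  locks: Map<string, Lock>,
  postId: string,
  token: string,
  nowMs: number,
  ttlMs: number,
): boolean {
  const held = locks.get(postId);
  if (held && held.expiresAt > nowMs && held.token !== token) {
    return false; // another session holds an unexpired lock
  }
  locks.set(postId, { token, expiresAt: nowMs + ttlMs });
  return true;
}
```

The same token stored on the frontend survives a page refresh, which is exactly what lets the original author bypass their own lock.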

Optimistic Locking

Since we are already storing the lock key on the frontend, it makes sense to consider optimistic locking. Much like ETags, each lock key can represent the current state. When making an update, we use that key to check whether the state on the backend is still the same.

Let's look at how this would work:

  1. A hash key for the current state is generated on the backend.
  2. New changes are sent to the endpoint /api/post/:postId/content with the hash key included.
  3. If the hash key doesn’t match the latest hash, the request is rejected.
  4. Otherwise, the backend returns a new hash key for the new state.
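The compare-and-swap at the heart of these steps can be sketched in a few lines. The digest here is a trivial stand-in (a real system would hash the stored state properly, e.g. with SHA-256), and all names are illustrative:

```typescript
// Stand-in digest of the stored state: length plus char-code sum.
// Only for illustration; use a real hash in practice.
function stateHash(content: string): string {
  let sum = 0;
  for (let i = 0; i < content.length; i++) sum += content.charCodeAt(i);
  return `${content.length}-${sum}`;
}

interface Post {
  content: string;
  hash: string;
}

// Apply the change only if the client saw the latest state; otherwise
// reject, signalling a concurrent edit from another tab.
function tryCommit(
  post: Post,
  clientHash: string,
  newContent: string,
): { ok: true; hash: string } | { ok: false } {
  if (clientHash !== post.hash) return { ok: false };
  post.content = newContent;
  post.hash = stateHash(newContent);
  return { ok: true, hash: post.hash };
}
```

The returned hash is what the winning tab sends with its next save, keeping the chain of state checks unbroken.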

This is very similar to the previous locking approach, but this time around, users can have as many tabs open as they want; only one of them will be able to submit changes, and the other submissions will be rejected.

If the incoming hash key doesn’t match the backend’s current state, the request is rejected, indicating another change occurred concurrently. This effectively ensures only one set of changes based on a specific state can be applied, gracefully handling concurrent edits from multiple tabs.

Summary

There is definitely more to an editor than meets the eye. Various scenarios have to be addressed for an editor to work smoothly.

In the end, we have designed an editor that:

  • Saves chunks of data effectively
  • Locks editor to a single session
  • Has support to handle some failures

All this with a few database tables, a couple of endpoints and some logic on the frontend.

In practice, we may want to have a few extra endpoints to make the experience smoother. For example, we can ping the backend in the background to check if the content has changed. So there are a few extra things that could have been covered.

There is also a question about the scalability of this product. However, we are on the right track. We can easily add necessary indexes and partitions that would improve the operation of this product for both reads and writes.

Lastly, to expand this editor further, you may want to dive into how collaborative editing works, looking at concepts such as CRDT or taking a deep dive into techniques such as Event Sourcing to implement a robust undo/redo and version history.

On this note, I hope you enjoyed reading this article!

