Warehouses today process a huge volume of orders every day. Order picking often accounts for around 60% of the cost borne by a warehouse, and that share grows quickly when picking is done inefficiently. Conversely, efficient picking reduces picking time, and given how large a cost component picking is, even modest gains translate into large improvements in overall warehouse efficiency.
This is where order batching comes in. The idea is simple: if incoming orders are grouped into batches so that the total picking time across all orders is as small as possible, the warehouse workflow becomes more efficient. At its core, this is a trip-distance minimization problem, and more formally a variant of the Travelling Salesman Problem (TSP), the details of which are beyond the scope of this article. An optimal solution exists in principle, but the problem is NP-hard, so exact approaches are off the table at any realistic scale.
Fortunately, many heuristic solutions perform well in practice. Despite the relative ease of constructing such a solution, many companies shy away from it and rely on informal knowledge instead. Part of the reason is the difficulty of building a proper pipeline around the solution; a more pressing concern is often that not all packages can be treated the same. At the scale of large customer-facing e-commerce companies, orders are relatively uniform. For B2B-focused companies, however, order treatment often depends on the customer. This makes uniform batching difficult without adding so many constraints that the model becomes brittle.
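To make the heuristic idea concrete, here is a minimal sketch of one common family of batching heuristics: greedy seed-and-grow. Everything in it is an illustrative assumption, not the article's actual model: each order is represented as a set of bin locations, picking cost is approximated by the number of distinct bins a batch touches, and a batch holds at most `capacity` orders.

```python
def batch_orders(orders, capacity):
    """Greedily group orders so each batch touches few extra bins.

    orders: list of sets of bin locations (one set per order).
    capacity: maximum number of orders per batch.
    """
    remaining = list(orders)
    batches = []
    while remaining:
        # Seed a new batch with the largest remaining order.
        seed = max(remaining, key=len)
        remaining.remove(seed)
        batch, bins = [seed], set(seed)
        while remaining and len(batch) < capacity:
            # Add the order that introduces the fewest new bins.
            best = min(remaining, key=lambda o: len(set(o) - bins))
            remaining.remove(best)
            batch.append(best)
            bins |= set(best)
        batches.append(batch)
    return batches

orders = [{"A1", "A2"}, {"A2", "B1"}, {"C5"}, {"A1", "C5"}]
batches = batch_orders(orders, capacity=2)
```

Real implementations use proper route-distance estimates rather than a bin count, but the structure, seed a batch and greedily grow it, is the same.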
The Hopstack method inverts the problem by exploiting past data. Instead of adding customer-specific constraints to the model, we use historical sales data to build a rule engine that pre-batches orders requiring similar treatment. These proto-batches are then passed to our model, which produces the optimized batches handed to the picker. Beyond avoiding brittleness-inducing constraints, this significantly improves performance: a rule engine splits large order lists far more efficiently than an optimization model can, and the resulting small proto-batches are much easier for the model to process.
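The pre-batching step can be sketched as a simple grouping pass. The rules and order fields below (`fragile`, `customer_tier`, `cold_chain`) are hypothetical examples, not Hopstack's actual rule set; the point is only that rule evaluation is a cheap dictionary grouping, which is why it scales so much better than pushing the same requirements into the optimizer as constraints.

```python
def handling_key(order):
    """Map an order to a key describing how it must be handled.

    The rules here are illustrative placeholders.
    """
    if order.get("fragile"):
        return "fragile"
    if order.get("customer_tier") == "priority":
        return "priority"
    if order.get("cold_chain"):
        return "cold_chain"
    return "standard"

def pre_batch(orders):
    """Split the order list into proto-batches by handling key."""
    groups = {}
    for order in orders:
        groups.setdefault(handling_key(order), []).append(order)
    return groups

orders = [
    {"id": 1, "fragile": True},
    {"id": 2, "customer_tier": "priority"},
    {"id": 3},
    {"id": 4, "fragile": True},
]
proto_batches = pre_batch(orders)
```

Each proto-batch would then be handed to the optimization model independently, so no cross-treatment constraints ever enter the model.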
No amount of computing efficiency matters unless it translates into operational efficiency. We tested our approach with one of our customers and found that it cut traversal distance by close to half across multiple tests, measured by counting the number of bins a picker had to traverse for a given picking list, with and without the model. Beyond the distance savings, such a system reduces the complexity pickers face: new hires no longer need to spend their first days learning picking routes from more experienced peers.
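For intuition on why batching shows up so clearly in a bin-traversal metric, here is a rough sketch of that kind of measurement under heavy simplifying assumptions (none of which reflect the customer's actual layout): bins are numbered along a single serpentine route, the picker starts at bin 0, and a trip costs the walk out to the farthest requested bin and back.

```python
def bins_traversed(pick_list):
    """Count bins passed on a there-and-back walk along the route."""
    if not pick_list:
        return 0
    # Walk out to the farthest bin, then return to the start.
    return 2 * max(pick_list)

# Four orders served as four separate trips...
unbatched = [bins_traversed(order) for order in ([3, 7], [5], [2, 7], [4])]
# ...versus one combined trip for the same picks.
batched = bins_traversed([3, 7, 5, 2, 4])
```

Even this toy model shows the effect: overlapping routes are walked once instead of once per order, which is where the roughly 50% reduction we observed comes from.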