Thanks so much for the help so far – the advice to design our own split using the replication todo list works well.
We have three companies, and for one of those companies there is one very large table, ~13M records – table 113, Posted Sales Invoice Line. As per my other question, the primary key here is not integer based. The destination is Azure Cosmos DB (NoSQL API).
I have been able to sync some of this table's data by filtering on CreatedAt and splitting by year, and have replicated approx 2.6M records this way.
On the next replication for this table, I get an error:
WS Call failed (403) {"code":"Forbidden","message":"Message: {\"Errors\":[\"Partition key reached maximum size of 20 GB. Learn more:….}
We have not configured a custom partition key in Cosmos DB; we are using the Company field.
Is there a suggested/better way to handle replication of large tables? I saw "Partition Key Format" was available as an option in Field Actions, but I'm unsure whether this is the answer or how to use it.
We have four ‘problem’ tables, a mix of standard and custom tables, that contain between 12M and 15M records each, across multiple companies.
thanks
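For context on the error: in Azure Cosmos DB for NoSQL, every distinct partition key value forms one logical partition, and a logical partition is capped at 20 GB. With Company as the only partition key, all of the replicated invoice lines for a single company pile into the same logical partition. The sketch below only illustrates that situation – the account URL, database/container names, and the document shape are hypothetical, not Cloud Replicator's actual output.

```python
# Sketch only: why a Company-only partition key hits the 20 GB logical-partition cap.
# Names and document shape are hypothetical, not Cloud Replicator's actual output.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="bc-replica")

# Partition key path /Company: every document with the same Company value
# shares one logical partition, which Cosmos DB caps at 20 GB.
container = database.create_container_if_not_exists(
    id="PostedSalesInvoiceLine",
    partition_key=PartitionKey(path="/Company"),
)

# All ~13M lines for this company land in a single logical partition, so writes
# eventually fail with 403 "Partition key reached maximum size of 20 GB".
container.upsert_item({
    "id": "INV-001_10000",           # hypothetical id derived from the BC primary key
    "Company": "CRONUS",             # partition key value -> one logical partition
    "DocumentNo": "INV-001",
    "LineNo": 10000,
    "CreatedAt": "2021-03-15T00:00:00Z",
})
```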
The Partition Key Format field is not for this.
But you can use the "Additional Partition Key Field" setting (on the table mapping page; it might need to be added via Personalization) to overcome this. Use a field that has an even distribution across the data, like an account no. field.
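To illustrate what the additional field achieves: combining Company with an evenly distributed field (such as an account no.) spreads the documents across many smaller logical partitions instead of one per company, so no single partition approaches the 20 GB cap. The sketch below shows the general synthetic partition key pattern with hypothetical names; how Cloud Replicator actually formats the combined value may differ.

```python
# Sketch of the synthetic partition key pattern; field names and value format
# are assumptions for illustration, not Cloud Replicator's internal behaviour.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="bc-replica")

# The partition key path now points at a combined value instead of Company alone.
container = database.create_container_if_not_exists(
    id="PostedSalesInvoiceLine",
    partition_key=PartitionKey(path="/partitionKey"),
)

def make_partition_key(company: str, account_no: str) -> str:
    # Company plus an evenly distributed field (e.g. a customer/account no.)
    # yields many logical partitions, each well under the 20 GB cap.
    return f"{company}|{account_no}"

container.upsert_item({
    "id": "INV-001_10000",
    "partitionKey": make_partition_key("CRONUS", "C00045"),
    "Company": "CRONUS",
    "SellToCustomerNo": "C00045",
    "DocumentNo": "INV-001",
    "LineNo": 10000,
})
```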
In the case where the original partition key limit was reached, what is the best way to drop and recreate the table in the destination? Is there a way to do this from Cloud Replicator, or should we do it manually in Cosmos DB? thanks
Unless this is a table where you need to support deletions, it doesn’t matter… Just leave it.
thanks!
Perfect, thanks! running again 🙂