
Thanks so much for the help so far; the advice to design our own split using the replication todo list is working well.

We have three companies, and for one of those companies there is one very large table, ~13M records: table 113, Posted Sales Invoice Line. As per my other question, the primary key here is not integer-based. The destination is Azure Cosmos DB (NoSQL API).

I have been able to sync some of this table's data by filtering on CreatedAt and splitting by year, and have replicated approximately 2.6M records this way.
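For reference, this is roughly how we generate the splits (a minimal Python sketch for illustration only; replicate_range is a hypothetical stand-in, not the tool's actual API):

```python
from datetime import datetime, timezone

def replicate_range(table: str, start: datetime, end: datetime) -> None:
    """Stand-in for the real replication call; just logs the planned split."""
    print(f"replicate {table} where {start:%Y-%m-%d} <= CreatedAt < {end:%Y-%m-%d}")

# One todo item per calendar year keeps each batch a manageable size.
for year in range(2015, 2025):
    start = datetime(year, 1, 1, tzinfo=timezone.utc)
    end = datetime(year + 1, 1, 1, tzinfo=timezone.utc)
    replicate_range("Posted Sales Invoice Line", start, end)
```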

On the next replication run for this table, I get an error:

WS Call failed (403) {"code":"Forbidden","message":"Message: {\"Errors\":[\"Partition key reached maximum size of 20 GB. Learn more:….}

We have not configured a custom partition key in Cosmos DB; we are using the Company.
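From the Cosmos DB documentation I understand the 20 GB limit applies per logical partition, so with Company as the key each company's data is capped at 20 GB. Something like a combined company-plus-year key is what I imagine we need; here is a minimal sketch using the Python azure-cosmos SDK (container name, key path, and document shape are my assumptions, not how the replication tool actually works):

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="bc-replication")

# Partitioning on a combined value instead of Company alone spreads each
# company's rows across many logical partitions (one per company-year),
# so no single logical partition grows toward the 20 GB ceiling.
container = database.create_container_if_not_exists(
    id="PostedSalesInvoiceLine",
    partition_key=PartitionKey(path="/partitionKey"),
)

def upsert_line(line: dict) -> None:
    """Write one replicated record with a synthetic partition key."""
    year = line["CreatedAt"][:4]  # e.g. "2021" from an ISO timestamp
    line["partitionKey"] = f"{line['Company']}-{year}"
    container.upsert_item(line)
```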

Is there a suggested or better way to handle replication of large tables? I saw that "Partition Key Format" is available as an option in Field Actions, but I'm unsure whether this is the answer, or how to use it.

We have four "problem" tables, a mix of standard and custom tables, each containing between 12M and 15M records across multiple companies.

Thanks
